Sample records for NASA Columbia supercomputer

  1. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  2. High Resolution Aerospace Applications using the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha

    2005-01-01

    This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.
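
    The two programming models benchmarked above combine naturally on a machine like Columbia: MPI carries communication between nodes over NUMAlink or InfiniBand, while OpenMP threads share work within each 512-processor node. Below is a minimal C sketch of that hybrid pattern; it is illustrative only, not code from the solvers described, and the array, loop body, and reduction are placeholders standing in for a smoothing step and a residual norm.

      /* Minimal hybrid MPI+OpenMP sketch: one MPI rank per node,
         OpenMP threads within the node. Illustrative placeholder,
         not the solvers described in the record. */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      #define N 1000000

      int main(int argc, char **argv) {
          int provided, rank, nranks;
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nranks);

          /* Each rank owns one grid partition; threads split the loop. */
          static double u[N];
          double local_sum = 0.0, global_sum = 0.0;
          #pragma omp parallel for reduction(+:local_sum)
          for (int i = 0; i < N; i++) {
              u[i] = (double)(i + rank) / N;  /* stand-in for a smoothing step */
              local_sum += u[i] * u[i];
          }

          /* Inter-node exchange goes through MPI over the interconnect. */
          MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                     0, MPI_COMM_WORLD);
          if (rank == 0)
              printf("residual-norm proxy %g on %d ranks x %d threads\n",
                     global_sum, nranks, omp_get_max_threads());
          MPI_Finalize();
          return 0;
      }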

  3. NASA's supercomputing experience

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ron

    1990-01-01

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamical Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

  4. Hurricane Intensity Forecasts with a Global Mesoscale Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Atlas, Robert

    2006-01-01

    It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers (e.g., the NASA Columbia supercomputer) have changed this. In 2004, the finite-volume General Circulation Model at 1/4 degree resolution, double the resolution used by most operational NWP centers at that time, was implemented and run, producing promising landfall predictions for major hurricanes (e.g., Charley, Frances, Ivan, and Jeanne). In 2005, we successfully implemented the 1/8 degree version and demonstrated its performance on intensity forecasts with Hurricane Katrina (2005). It is found that the 1/8 degree model is capable of simulating the radius of maximum wind and near-eye wind structure, thereby producing promising intensity forecasts. In this study, we further evaluate the model's performance on intensity forecasts of hurricanes Ivan, Jeanne, and Karl in 2004. Suggestions for further model development are made at the end.
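
    A back-of-the-envelope note on what each resolution doubling costs (standard scaling reasoning, not stated in the record): halving the grid spacing doubles both horizontal dimensions and, through the stability (CFL) condition, roughly halves the usable time step, so the work grows by about a factor of eight per doubling:

      \mathrm{cost} \propto n_x \, n_y \, n_t \;\longrightarrow\; (2 n_x)(2 n_y)(2 n_t) = 8 \, n_x \, n_y \, n_t

    This is why the 1/8 degree experiments only became practical on a machine of Columbia's scale.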

  5. OpenMP Performance on the Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, delivering 61 TFLOPs as of 10/20/04. Columbia was conceived, designed, built, and deployed in just 120 days: a 20-node system built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2,048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.
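
    The headline figures are mutually consistent. As a rough check (the per-processor assumptions below, 1.5 GHz Itanium 2 parts retiring 4 floating-point operations per cycle, are plausible for this system but are not stated in the record):

      20 \times 512 = 10{,}240 \ \text{processors}

      10{,}240 \times 1.5\ \text{GHz} \times 4\ \tfrac{\text{flops}}{\text{cycle}} \approx 61.4\ \text{TFLOP/s}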

  6. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  7. NASA Advanced Supercomputing Facility Expansion

    NASA Technical Reports Server (NTRS)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  8. Next Generation Security for the 10,240 Processor Columbia System

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas; Kolano, Paul; Shaw, Derek; Keller, Chris; Tweton, Dave; Welch, Todd; Liu, Wen (Betty)

    2005-01-01

    This presentation includes a discussion of the Columbia 10,240-processor system located at the NASA Advanced Supercomputing (NAS) division at the NASA Ames Research Center, which supports each of NASA's four missions: science, exploration systems, aeronautics, and space operations. It comprises 20 Silicon Graphics nodes, each consisting of 512 Itanium 2 processors. A 64-processor Columbia front-end system supports users as they prepare their jobs and then submit them to the PBS system. Columbia nodes and front-end systems use the Linux OS. Prior to SC04, the Columbia system was used to attain a processing speed of 51.87 TeraFlops, which made it number two on the Top500 list of the world's supercomputers and the world's fastest "operational" supercomputer, since it was fully engaged in supporting NASA users.
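
    A quick arithmetic note using only figures quoted in these records: taking the 61 TFLOPs cited elsewhere as Columbia's approximate peak, the 51.87 TeraFlops measurement corresponds to a sustained-to-peak ratio of roughly

      \frac{51.87\ \text{TFLOP/s}}{61\ \text{TFLOP/s}} \approx 0.85

    i.e., about 85% efficiency on the benchmark run.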

  9. Hurricane Forecasts with a Global Mesoscale-resolving Model on the NASA Columbia Supercomputer: Preliminary Simulations of Hurricane Katrina (2005)

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Reale, O.; Chern, J.-D.; Li, S.-J.; Lee, T.; Chang, J.; Henze, C.; Yeh, K.-S.

    2006-01-01

    It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. To overcome this limitation, the mesoscale-resolving finite-volume GCM (fvGCM) has been experimentally deployed on the NASA Columbia supercomputer, and its performance is evaluated in this study with Hurricane Katrina as an example. In late August 2005, Katrina underwent two stages of rapid intensification and became the sixth most intense hurricane in the Atlantic. Six 5-day simulations of Katrina at both 0.25 deg and 0.125 deg show comparable track forecasts, but the 0.125 deg runs provide much better intensity forecasts, producing center pressure with errors of only +/- 12 hPa. The 0.125 deg run also simulates better near-eye wind distributions and a more realistic average intensification rate. As convection parameterization (CP) is one of the major limitations in a GCM, it is notable that the 0.125 deg run with CP disabled produces very encouraging results.

  10. The 0.125 degree finite-volume General Circulation Model on the NASA Columbia Supercomputer: Preliminary Simulations of Mesoscale Vortices

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Chern, J.-D.; Reale, O.; Lin, S.-J.; Lee, T.; Chang, J.

    2005-01-01

    The NASA Columbia supercomputer was ranked second on the TOP500 list in November 2004. Such a quantum jump in computing power provides unprecedented opportunities to conduct ultra-high resolution simulations with the finite-volume General Circulation Model (fvGCM). During 2004, the model was run experimentally in real time at 0.25 degree resolution, producing remarkable hurricane forecasts [Atlas et al., 2005]. In 2005, the horizontal resolution was further doubled, which makes the fvGCM comparable to the first mesoscale-resolving General Circulation Model at the Earth Simulator Center [Ohfuchi et al., 2004]. Nine 5-day 0.125 degree simulations of three hurricanes in 2004 are presented first for model validation. Then it is shown how the model can simulate the formation of the Catalina eddies and Hawaiian lee vortices, which are generated by the interaction of the synoptic-scale flow with surface forcing, and have never before been reproduced in a GCM.

  11. Simulations of Hurricane Katrina (2005) with the 0.125 degree finite-volume General Circulation Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, B.-W.; Atlas, R.; Reale, O.; Lin, S.-J.; Chern, J.-D.; Chang, J.; Henze, C.

    2006-01-01

    Hurricane Katrina was the sixth most intense hurricane in the Atlantic. Katrina's forecast poses major challenges, the most important of which is its rapid intensification. Hurricane intensity forecasting with General Circulation Models (GCMs) is difficult because of their coarse resolution. In this article, six 5-day simulations with the ultra-high resolution finite-volume GCM are conducted on the NASA Columbia supercomputer to show the effects of increased resolution on the intensity predictions of Katrina. It is found that the 0.125 degree runs give tracks comparable to the 0.25 degree runs but provide better intensity forecasts, bringing the center pressure much closer to observations, with differences of only plus or minus 12 hPa. In the runs initialized at 1200 UTC 25 AUG, the 0.125 degree run simulates a more realistic intensification rate and better near-eye wind distributions. Moreover, the first global 0.125 degree simulation without convection parameterization (CP) produces even better intensity evolution and near-eye winds than the control run with CP.

  12. NASA Advanced Supercomputing (NAS) User Services Group

    NASA Technical Reports Server (NTRS)

    Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.

  13. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
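
    The quoted throughput figures imply the rough scale of the computation. As a consistency check (the ~200,000 total for Kepler target stars is an assumption, not a figure from the record):

      0.16 \times 200{,}000 \ \text{stars} \times 2000 \ \tfrac{\text{injections}}{\text{star}} = 6.4 \times 10^{7} \ \text{injections}

      \frac{6.4 \times 10^{7} \ \text{injections}}{16 \ \text{injections/core-hour}} = 4 \times 10^{6} \ \text{core-hours}
      \qquad
      \frac{4 \times 10^{6} \ \text{core-hours}}{200 \ \text{hours}} = 20{,}000 \ \text{cores}

    i.e., on the order of twenty thousand Pleiades cores running concurrently for the “shallow” experiment.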

  14. Building Columbia from the SysAdmin View

    NASA Technical Reports Server (NTRS)

    Chan, David

    2005-01-01

    Project Columbia was built at NASA Ames Research Center in partnership with SGI and Intel. Columbia consists of 20 512-processor Altix machines with 440 TB of storage, and achieved 51.87 TeraFlops to rank second fastest on the Top500 list at SuperComputing 2004. Columbia was delivered, installed, and put into production in 3 months. On average, a new Columbia node was brought into production in less than a week. Columbia's configuration, installation, and future plans will be discussed.

  15. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Tu, Eugene L.; Van Dalsem, William R.

    2006-01-01

    Two years ago, NASA was on the verge of dramatically increasing its HEC capability and capacity. With the 10,240-processor supercomputer, Columbia, now in production for 18 months, HEC has an even greater impact within the Agency and at partner institutions. Advanced science and engineering simulations in space exploration, shuttle operations, Earth sciences, and fundamental aeronautics research are occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. This talk describes how the integrated production environment fostered at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center is accelerating scientific discovery, achieving parametric analyses of multiple scenarios, and enhancing safety for NASA missions. We focus on Columbia's impact on two key engineering and science disciplines: aerospace and climate. We also discuss future mission challenges and plans for NASA's next-generation HEC environment.

  16. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak

    2006-01-01

    High-End Computing (HEC) has always played a major role in meeting the modeling and simulation needs of various NASA missions. With NASA's newest 62 teraflops Columbia supercomputer, HEC is having an even greater impact within the Agency and beyond. Significant cutting-edge science and engineering simulations in the areas of space exploration, Shuttle operations, Earth sciences, and aeronautics research are already occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. The talk will describe how the integrated supercomputing production environment is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions.

  17. NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations

    NASA Image and Video Library

    2016-11-23

    The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers high-fidelity numerical modeling of complex systems and processes. By using detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.

  18. NASA's Space Shuttle Columbia: Synopsis of the Report of the Columbia Accident Investigation Board

    NASA Technical Reports Server (NTRS)

    Smith, Marcia S.

    2003-01-01

    NASA's space shuttle Columbia broke apart on February 1, 2003 as it returned to Earth from a 16-day science mission. All seven astronauts aboard were killed. NASA created the Columbia Accident Investigation Board (CAIB), chaired by Adm. (Ret.) Harold Gehman, to investigate the accident. The Board released its report (available at [http://www.caib.us]) on August 26, 2003, concluding that the tragedy was caused by technical and organizational failures. The CAIB report included 29 recommendations, 15 of which the Board specified must be completed before the shuttle returns to flight status. This report provides a brief synopsis of the Board's conclusions, recommendations, and observations. Further information on Columbia and issues for Congress are available in CRS Report RS21408. This report will not be updated.

  19. NASA Post-Columbia Safety & Mission Assurance, Review and Assessment Initiatives

    NASA Astrophysics Data System (ADS)

    Newman, J. Steven; Wander, Stephen M.; Vecellio, Don; Miller, Andrew J.

    2005-12-01

    On February 1, 2003, NASA again experienced a tragic accident as the Space Shuttle Columbia broke apart upon reentry, resulting in the loss of seven astronauts. Several of the findings and observations of the Columbia Accident Investigation Board addressed the need to strengthen the safety and mission assurance function at NASA. This paper highlights key steps undertaken by the NASA Office of Safety and Mission Assurance (OSMA) to establish a stronger and more robust safety and mission assurance function for NASA programs, projects, facilities and operations. This paper provides an overview of the interlocking OSMA Review and Assessment Division (RAD) institutional and programmatic processes designed to 1) educate, inform, and prepare for audits, 2) verify requirements flow-down, 3) verify process capability, 4) verify compliance with requirements, 5) support risk management decision making, 6) facilitate secure web-based collaboration, and 7) foster continual improvement and the use of lessons learned.

  20. Distributed user services for supercomputers

    NASA Technical Reports Server (NTRS)

    Sowizral, Henry A.

    1989-01-01

    User-service operations at supercomputer facilities are examined. The question is whether a single, possibly distributed, user-services organization could be shared by NASA's supercomputer sites in support of a diverse, geographically dispersed, user community. A possible structure for such an organization is identified as well as some of the technologies needed in operating such an organization.

  1. Supercomputer networking for space science applications

    NASA Technical Reports Server (NTRS)

    Edelson, B. I.

    1992-01-01

    The initial design of a supercomputer network topology including the design of the communications nodes along with the communications interface hardware and software is covered. Several space science applications that are proposed experiments by GSFC and JPL for a supercomputer network using the NASA ACTS satellite are also reported.

  2. The Columbia Accident Investigation and The NASA Glenn Ballistic Impact Laboratory Contributions Supporting NASA's Return to Flight

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.

    2007-01-01

    On February 1, 2003, the Space Shuttle Columbia broke apart during reentry, resulting in loss of the vehicle and its seven crewmembers. For the next several months, an extensive investigation of the accident ensued involving a nationwide team of experts from NASA, industry, and academia, spanning dozens of technical disciplines. The Columbia Accident Investigation Board (CAIB), a group of experts assembled to conduct an investigation independent of NASA, concluded in August 2003 that the most likely cause of the loss of Columbia and its crew was a breach in the left wing leading edge Reinforced Carbon-Carbon (RCC) thermal protection system, initiated by the impact of thermal insulating foam that had separated from the orbiter's external fuel tank 81 seconds into the mission's launch. During reentry, this breach allowed superheated air to penetrate behind the leading edge and erode the aluminum structure of the left wing, which ultimately led to the breakup of the orbiter. The findings of the CAIB were supported by ballistic impact tests, which simulated the physics of External Tank foam impact on the RCC wing leading edge material. These tests ranged from fundamental material characterization tests to full-scale Orbiter wing leading edge tests. Following the accident investigation, NASA spent the next 18 months focused on returning the shuttle safely to flight. In order to fully evaluate all potential impact threats from the many debris sources on the Space Shuttle during ascent, NASA instituted a significant impact testing program. The results from these tests led to the validation of high-fidelity computer models, capable of predicting actual or potential Shuttle impact events, which were used in the certification of STS-114, NASA's Return to Flight mission, as safe to fly. This presentation will provide a look into the inner workings of the Space Shuttle and a behind-the-scenes perspective on the impact analysis and testing done for the Columbia Accident Investigation and NASA's Return to Flight effort.

  3. Discover Supercomputer 5

    NASA Image and Video Library

    2017-12-08

    Two rows of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS) contain more than 4,000 computer processors. Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  4. Discover Supercomputer 3

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  5. Discover Supercomputer 2

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  6. Discover Supercomputer 4

    NASA Image and Video Library

    2017-12-08

    This close-up view highlights one row—approximately 2,000 computer processors—of the “Discover” supercomputer at the NASA Center for Climate Simulation (NCCS). Discover has a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  7. Discover Supercomputer 1

    NASA Image and Video Library

    2017-12-08

    The heart of the NASA Center for Climate Simulation (NCCS) is the “Discover” supercomputer. In 2009, NCCS added more than 8,000 computer processors to Discover, for a total of nearly 15,000 processors. Credit: NASA/Pat Izzo To learn more about NCCS go to: www.nasa.gov/topics/earth/features/climate-sim-center.html NASA Goddard Space Flight Center is home to the nation's largest organization of combined scientists, engineers and technologists that build spacecraft, instruments and new technology to study the Earth, the sun, our solar system, and the universe.

  8. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    NASA Astrophysics Data System (ADS)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also non-typical users who may want to use the models such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, from which the models are implemented, can be restrictive with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  9. A History of High-Performance Computing

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.

  10. Columbia Quilt

    NASA Image and Video Library

    2018-02-22

    A certificate is on display that confirms the transfer of a giant hand-made quilt in honor of space shuttle Columbia and her crew from the Office of Procurement to the Columbia Preservation Room inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida. The quilt was made by Katherine Walsh, a lifelong NASA and space program fan originally from Kentucky. The quilt will be displayed with its certificate in the preservation room as part of NASA's Apollo, Challenger, Columbia Lessons Learned Program.

  11. Renewed Commitment to Excellence: An Assessment of the NASA Agency-Wide Applicability of the Columbia Accident Investigation Board Report

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Space Shuttle fleet has been grounded since the Columbia accident. As a result, 'Return to Flight' has become not just a phrase but a program and the goal of virtually everyone associated with NASA. Even those who are not affiliated with the Shuttle Program are looking forward to the safe and successful completion of the next Shuttle mission. In this recovery process, NASA will be guided by the Report of the Columbia Accident Investigation Board (CAIB). The CAIB was an investigating body, convened by NASA Administrator O'Keefe the day of the Columbia accident, according to procedures established after the loss of the Space Shuttle Challenger.

  12. Columbia Quilt

    NASA Image and Video Library

    2018-02-22

    A certificate and quilt square are on display that confirms the transfer of a giant hand-made quilt in honor of space shuttle Columbia and her crew from the Office of Procurement to the Columbia Preservation Room inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida. The quilt was made by Katherine Walsh, a lifelong NASA and space program fan originally from Kentucky. The quilt will be displayed in the preservation room with its certificate as part of NASA's Apollo, Challenger, Columbia Lessons Learned Program.

  13. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
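
    The recommended 64-bit layout is exactly what became standard IEEE 754 double precision. A small C sketch (illustrative, assuming the host stores doubles in IEEE 754 format) unpacking the 1-bit sign, 11-bit exponent, and 52-bit mantissa described above:

      /* Unpack an IEEE 754 double into sign (1 bit), exponent (11 bits),
         and mantissa (52 bits) -- the layout recommended in the paper.
         Assumes the host represents double in IEEE 754 format. */
      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>

      int main(void) {
          double x = -6.25;
          uint64_t bits;
          memcpy(&bits, &x, sizeof bits);   /* reinterpret the 64 bits safely */

          uint64_t sign     = bits >> 63;                  /* 1 bit */
          uint64_t exponent = (bits >> 52) & 0x7FF;        /* 11 bits, bias 1023 */
          uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;   /* 52 bits */

          printf("x = %g\n", x);
          printf("sign = %llu, exponent = %llu (unbiased %lld), "
                 "mantissa = 0x%013llx\n",
                 (unsigned long long)sign,
                 (unsigned long long)exponent,
                 (long long)exponent - 1023,
                 (unsigned long long)mantissa);
          return 0;
      }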

  14. Color graphics, interactive processing, and the supercomputer

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, Rudeen

    1987-01-01

    The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.

  15. Scaling of data communications for an advanced supercomputer network

    NASA Technical Reports Server (NTRS)

    Levin, E.; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of NASA's Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations and by remote communication to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. The implications of a projected 20-fold increase in processing power on the data communications requirements are described.

  16. Computer Simulation Performed for Columbia Project Cooling System

    NASA Technical Reports Server (NTRS)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,240 Intel Itanium 2 processors) system. The simulation assessed the performance of the cooling system, identified deficiencies, and recommended modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses in any modern computer room.

  17. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational codes was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software. This opportunity arises primarily from architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for its conversion to the Cray X-MP vector supercomputer is also described.

  18. Techniques and Tools of NASA's Space Shuttle Columbia Accident Investigation

    NASA Technical Reports Server (NTRS)

    McDanels, Steve J.

    2005-01-01

    The Space Shuttle Columbia accident investigation was a fusion of many disciplines into a single effort. From the recovery and reconstruction of the debris, Figure 1, to the analysis, both destructive and nondestructive, of chemical and metallurgical samples, Figure 2, a multitude of analytical techniques and tools were employed. Destructive and non-destructive testing were utilized in tandem to determine if a breach in the left wing of the Orbiter had occurred, and if so, the path of the resultant high temperature plasma flow. Nondestructive analysis included topometric scanning, laser mapping, and real-time radiography. These techniques were useful in constructing a three-dimensional virtual representation of the reconstruction project, specifically the left wing leading edge reinforced carbon/carbon heat protectant panels. Similarly, they were beneficial in determining where sampling should be performed on the debris. Analytic testing included such techniques as Energy Dispersive Electron Microprobe Analysis (EMPA), Electron Spectroscopy for Chemical Analysis (ESCA), and X-ray dot mapping; these techniques related the characteristics of intermetallics deposited on the leading edge of the left wing adjacent to the location of a suspected plasma breach during reentry. The methods and results of the various analyses, along with their implications for the accident, are discussed, along with the findings and recommendations of the Columbia Accident Investigation Board. Likewise, NASA's Return to Flight efforts are highlighted.

  19. Collaborative Supercomputing for Global Change Science

    NASA Astrophysics Data System (ADS)

    Nemani, R.; Votava, P.; Michaelis, A.; Melton, F.; Milesi, C.

    2011-03-01

    There is increasing pressure on the science community not only to understand how recent and projected changes in climate will affect Earth's global environment and the natural resources on which society depends but also to design solutions to mitigate or cope with the likely impacts. Responding to this multidimensional challenge requires new tools and research frameworks that assist scientists in collaborating to rapidly investigate complex interdisciplinary science questions of critical societal importance. One such collaborative research framework, within the NASA Earth sciences program, is the NASA Earth Exchange (NEX). NEX combines state-of-the-art supercomputing, Earth system modeling, remote sensing data from NASA and other agencies, and a scientific social networking platform to deliver a complete work environment. In this platform, users can explore and analyze large Earth science data sets, run modeling codes, collaborate on new or existing projects, and share results within or among communities (see Figure S1 in the online supplement to this Eos issue (http://www.agu.org/eos_elec)).

  20. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  1. Supercomputer applications in molecular modeling.

    PubMed

    Gund, T M

    1988-01-01

    An overview of the functions performed by molecular modeling is given. Molecular modeling techniques benefiting from supercomputing are described, namely conformational search, deriving bioactive conformations, pharmacophoric pattern searching, receptor mapping, and electrostatic properties. The use of supercomputers for problems that are computationally intensive, such as protein structure prediction, protein dynamics and reactivity, protein conformations, and energetics of binding, is also examined. The current status of supercomputing and supercomputer resources is discussed.

  2. NASA Tech Briefs, November/December 1986, Special Edition

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Topics: Computing: The View from NASA Headquarters; Earth Resources Laboratory Applications Software: Versatile Tool for Data Analysis; The Hypercube: Cost-Effective Supercomputing; Artificial Intelligence: Rendezvous with NASA; NASA's Ada Connection; COSMIC: NASA's Software Treasurehouse; Golden Oldies: Tried and True NASA Software; Computer Technical Briefs; NASA TU Services; Digital Fly-by-Wire.

  3. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and the massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  4. NASA Exhibits

    NASA Technical Reports Server (NTRS)

    Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick

    2001-01-01

    A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) DeBakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.

  5. Improvements in the Scalability of the NASA Goddard Multiscale Modeling Framework for Hurricane Climate Studies

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Chern, Jiun-Dar

    2007-01-01

    Improving our understanding of hurricane inter-annual variability and the impact of climate change (e.g., doubling CO2 and/or global warming) on hurricanes brings both scientific and computational challenges to researchers. As hurricane dynamics involves multiscale interactions among synoptic-scale flows, mesoscale vortices, and small-scale cloud motions, an ideal numerical model suitable for hurricane studies should demonstrate its capabilities in simulating these interactions. The newly-developed multiscale modeling framework (MMF, Tao et al., 2007) and the substantial computing power provided by the NASA Columbia supercomputer show promise for pursuing the related studies, as the MMF inherits the advantages of two NASA state-of-the-art modeling components: the GEOS4/fvGCM and 2D GCEs. This article focuses on the computational issues and proposes a revised methodology to improve the MMF's performance and scalability. It is shown that this prototype implementation enables 12-fold performance improvements with 364 CPUs, thereby making it more feasible to study hurricane climate.

  6. Columbia Crew Survival Investigation Report

    NASA Technical Reports Server (NTRS)

    2009-01-01

    NASA commissioned the Columbia Accident Investigation Board (CAIB) to conduct a thorough review of both the technical and the organizational causes of the loss of the Space Shuttle Columbia and her crew on February 1, 2003. The accident investigation that followed determined that a large piece of insulating foam from Columbia's external tank (ET) had come off during ascent and struck the leading edge of the left wing, causing critical damage. The damage was undetected during the mission. The CAIB's findings and recommendations were published in 2003 and are available on the web at http://caib.nasa.gov/. NASA responded to the CAIB findings and recommendations with the Space Shuttle Return to Flight Implementation Plan. Significant enhancements were made to NASA's organizational structure, technical rigor, and understanding of the flight environment. The ET was redesigned to reduce foam shedding and eliminate critical debris. In 2005, NASA succeeded in returning the space shuttle to flight. In 2010, the space shuttle will complete its mission of assembling the International Space Station and will be retired to make way for the next generation of human space flight vehicles: the Constellation Program. The Space Shuttle Program recognized the importance of capturing the lessons learned from the loss of Columbia and her crew to benefit future human exploration, particularly future vehicle design. The program commissioned the Spacecraft Crew Survival Integrated Investigation Team (SCSIIT). The SCSIIT was asked to perform a comprehensive analysis of the accident, focusing on factors and events affecting crew survival, and to develop recommendations for improving crew survival for all future human space flight vehicles. To do this, the SCSIIT investigated all elements of crew survival, including the design features, equipment, training, and procedures intended to protect the crew. This report documents the SCSIIT findings, conclusions, and recommendations.

  7. Computer Electromagnetics and Supercomputer Architecture

    NASA Technical Reports Server (NTRS)

    Cwik, Tom

    1993-01-01

    The dramatic increase in performance over the last decade for microprocessor computations is compared with that for supercomputer computations. This performance, the projected performance, and a number of other issues such as cost and the inherent physical limitations in current supercomputer technology have naturally led to parallel supercomputers and ensembles of interconnected microprocessors.

  8. Particle simulation on heterogeneous distributed supercomputers

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Dagum, Leonardo

    1993-01-01

    We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.

  9. Supercomputing in the Age of Discovering Superearths, Earths and Exoplanet Systems

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.

    2015-01-01

    NASA's Kepler Mission was launched in March 2009 as NASA's first mission capable of finding Earth-size planets orbiting in the habitable zone of Sun-like stars, that range of distances for which liquid water would pool on the surface of a rocky planet. Kepler has discovered over 1000 planets and over 4600 candidates, many of them as small as the Earth. Today, Kepler's amazing success seems to be a fait accompli to those unfamiliar with her history. But twenty years ago, there were no planets known outside our solar system, and few people believed it was possible to detect tiny Earth-size planets orbiting other stars. Motivating NASA to select Kepler for launch required a confluence of the right detector technology, advances in signal processing and algorithms, and the power of supercomputing.

  10. Data-intensive computing on numerically-insensitive supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, James P; Fasel, Patricia K; Habib, Salman

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  11. TOP500 Supercomputers for June 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position. MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  12. The Columbia Debris Loan Program; Examples of Microscopic Analysis

    NASA Technical Reports Server (NTRS)

    Russell, Rick; Thurston, Scott; Smith, Stephen; Marder, Arnold; Steckel, Gary

    2006-01-01

    Following the tragic loss of the Space Shuttle Columbia, NASA formed the Columbia Recovery Office (CRO). The CRO was initially formed at the Johnson Space Center after the conclusion of recovery operations on May 1, 2003, then transferred to the Kennedy Space Center on October 6, 2003, and renamed the Columbia Recovery Office and Preservation. An integral part of the preservation project was the development of a process to loan Columbia debris to qualified researchers and technical educators. The purposes of this program include aiding in the advancement of advanced spacecraft design and flight safety development, the advancement of the study of hypersonic re-entry to enhance ground safety, to train and instruct accident investigators, and to establish an enduring legacy for Space Shuttle Columbia and her crew. Along with a summary of the debris loan process, examples of microscopic analysis of Columbia debris items will be presented. The first example will be from the reconstruction following the STS-107 accident and how the Materials and Processes team used microscopic analysis to confirm the accident scenario. Additionally, three examples of microstructural results from the debris loan process from NASA internal, academia, and private industry will be presented.

  13. Optimization of Supercomputer Use on EADS II System

    NASA Technical Reports Server (NTRS)

    Ahmed, Ardsher

    1998-01-01

    The main objective of this research was to optimize supercomputer use to achieve better throughput and utilization of supercomputers and to help facilitate the movement of non-supercomputing (inappropriate for supercomputer) codes to mid-range systems for better use of Government resources at Marshall Space Flight Center (MSFC). This work involved the survey of architectures available on EADS II and monitoring customer (user) applications running on a CRAY T90 system.

  14. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.
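
    A sustained-performance figure of the kind the NAS Parallel Benchmarks report reduces, at its core, to counted floating-point operations divided by wall-clock time. A minimal C sketch of such a measurement (illustrative only, not one of the actual NPB kernels):

      /* Time a DAXPY-like loop and report a sustained MFLOP/s rate.
         Illustrative sketch, not an actual NAS Parallel Benchmark. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <omp.h>

      int main(void) {
          const int n = 1 << 24;                /* 16M elements */
          const int reps = 10;
          double *x = malloc(n * sizeof *x);
          double *y = malloc(n * sizeof *y);
          if (!x || !y) return 1;
          for (int i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

          double t0 = omp_get_wtime();
          for (int r = 0; r < reps; r++) {
              #pragma omp parallel for
              for (int i = 0; i < n; i++)
                  y[i] = 2.5 * x[i] + y[i];     /* 2 flops per element */
          }
          double t1 = omp_get_wtime();

          double flops = 2.0 * n * reps;        /* total operations counted */
          printf("sustained rate: %.1f MFLOP/s (y[0] = %g)\n",
                 flops / (t1 - t0) / 1e6, y[0]);
          free(x); free(y);
          return 0;
      }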

  15. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
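
    The closing numbers pin down the performance-power ratio directly:

      \frac{14\ \text{GFLOP/s}}{185\ \text{W}} \approx 75.7\ \text{MFLOP/s per watt}

    Reading "over 300% better" as roughly four times the reference, the SMP platform it was compared against would have delivered on the order of 19 MFLOP/s per watt; that last step is an inference from the quoted phrasing, not a figure from the abstract.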

  16. Columbia Accident Investigation Board Report. Volume 1

    NASA Technical Reports Server (NTRS)

    Gehman, Harold W., Jr.; Barry, John L.; Deal, Duane W.; Hallock, James N.; Hess, Kenneth W.; Hubbard, G. Scott; Logsdon, John M.; Osheroff, Douglas D.; Ride, Sally K.; Tetrault, Roger E.

    2003-01-01

    The Columbia Accident Investigation Board's independent investigation into the tragic February 1, 2003, loss of the Space Shuttle Columbia and its seven-member crew lasted nearly seven months and involved 13 Board members, approximately 120 Board investigators, and thousands of NASA and support personnel. Because the events that initiated the accident were not apparent for some time, the investigation's depth and breadth were unprecedented in NASA history. Further, the Board determined early in the investigation that it intended to put this accident into context. We considered it unlikely that the accident was a random event; rather, it was likely related in some degree to NASA's budgets, history, and program culture, as well as to the politics, compromises, and changing priorities of the democratic process. We are convinced that the management practices overseeing the Space Shuttle Program were as much a cause of the accident as the foam that struck the left wing. The Board was also influenced by discussions with members of Congress, who suggested that this nation needed a broad examination of NASA's Human Space Flight Program, rather than just an investigation into what physical fault caused Columbia to break up during re-entry. Findings and recommendations are in the relevant chapters and all recommendations are compiled in Chapter 11. Volume I is organized into four parts: The Accident; Why the Accident Occurred; A Look Ahead; and various appendices. To put this accident in context, Parts One and Two begin with histories, after which the accident is described and then analyzed, leading to findings and recommendations. Part Three contains the Board's views on what is needed to improve the safety of our voyage into space. Part Four is reference material. In addition to this first volume, there will be subsequent volumes that contain technical reports generated by the Columbia Accident Investigation Board and NASA, as well as volumes containing reference material.

  17. NASA Center for Climate Simulation (NCCS) Presentation

    NASA Technical Reports Server (NTRS)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  18. TOP500 Supercomputers for November 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  19. Supercomputing in Aerospace

    NASA Technical Reports Server (NTRS)

    Kutler, Paul; Yee, Helen

    1987-01-01

    Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.

  20. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  1. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  2. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  3. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  4. 48 CFR 225.7012 - Restriction on supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Restriction on supercomputers. 225.7012 Section 225.7012 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS... supercomputers. ...

  5. KENNEDY SPACE CENTER, FLA. - Scott Thurston, NASA vehicle flow manager, speaks to members of the Columbia Reconstruction Team during transfer of debris from the Columbia Debris Hangar to its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Scott Thurston, NASA vehicle flow manager, speaks to members of the Columbia Reconstruction Team during transfer of debris from the Columbia Debris Hangar to its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  6. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
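    The write-behind idea lends itself to a small illustration. The following Python sketch simulates a buffer absorbing a bursty output stream while draining to disk at a fixed rate; all sizes and rates are invented for illustration and are not taken from the traces described above.

    ```python
    # Minimal sketch of write-behind buffering for a bursty I/O stream.
    def simulate_write_behind(bursts, buffer_capacity, drain_per_tick):
        """Each burst is the bytes written in one tick; the buffer absorbs
        the burst immediately (so the CPU never stalls) and drains to disk
        at a fixed rate. Returns per-tick occupancy, or None on overflow."""
        occupancy, trace = 0, []
        for burst in bursts:
            occupancy += burst
            if occupancy > buffer_capacity:
                return None            # burst too large: CPU would stall
            occupancy = max(0, occupancy - drain_per_tick)
            trace.append(occupancy)
        return trace

    # A bursty pattern: long quiet stretches punctuated by large writes.
    bursts = [0, 0, 0, 800, 0, 0, 0, 0, 900, 0, 0, 0]
    print(simulate_write_behind(bursts, buffer_capacity=1000, drain_per_tick=250))
    ```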

  7. Improved Access to Supercomputers Boosts Chemical Applications.

    ERIC Educational Resources Information Center

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling is reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  8. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy

    2014-02-06

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  9. Edison - A New Cray Supercomputer Advances Discovery at NERSC

    ScienceCinema

    Dosanjh, Sudip; Parkinson, Dula; Yelick, Kathy; Trebotich, David; Broughton, Jeff; Antypas, Katie; Lukic, Zarija; Borrill, Julian; Draney, Brent; Chen, Jackie

    2018-01-16

    When a supercomputing center installs a new system, users are invited to make heavy use of the computer as part of the rigorous testing. In this video, find out what top scientists have discovered using Edison, a Cray XC30 supercomputer, and how NERSC's newest supercomputer will accelerate their future research.

  10. Desktop supercomputer: what can it do?

    NASA Astrophysics Data System (ADS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  11. Energy Efficient Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antypas, Katie

    2014-10-17

    Katie Antypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  12. Energy Efficient Supercomputing

    ScienceCinema

    Antypas, Katie

    2018-05-07

    Katie Antypas, Head of NERSC's Services Department, discusses the Lab's research into developing increasingly powerful and energy-efficient supercomputers at our '8 Big Ideas' Science at the Theater event on October 8th, 2014, in Oakland, California.

  13. Columbia Accident Investigation Report

    NASA Image and Video Library

    2003-11-06

    Richard Alonzo, in the Mail Room at KSC, prepares stacks of the Columbia Accident Investigation Report, which are being distributed to all employees. The delivery is a prelude to NASA Safety and Mission Success Week Nov. 17-21, during which all employees are being encouraged to consider ways they can support and enhance recommendations for improvement stated in the report.

  14. Columbia Accident Investigation Report

    NASA Image and Video Library

    2003-11-06

    Bill White, in the Mail Room at KSC, stacks copies of the Columbia Accident Investigation Report, which are being distributed to all employees. The delivery is a prelude to NASA Safety and Mission Success Week Nov. 17-21, during which all employees are being encouraged to consider ways they can support and enhance recommendations for improvement stated in the report.

  15. The role of graphics super-workstations in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Levin, E.

    1989-01-01

    A new class of very powerful workstations has recently become available which integrate near-supercomputer computational performance with very powerful and high-quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing of the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).

  16. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and on line experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high performance computing. The Alliance, led by the National Center for Supercomputing Applications (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputer Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.

  17. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2018-02-13

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  18. Mira: Argonne's 10-petaflops supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, Michael; Coghlan, Susan; Isaacs, Eric

    2013-07-03

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.

  19. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  20. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  1. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  2. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  3. 48 CFR 252.225-7011 - Restriction on acquisition of supercomputers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of supercomputers. 252.225-7011 Section 252.225-7011 Federal Acquisition Regulations System DEFENSE... CLAUSES Text of Provisions And Clauses 252.225-7011 Restriction on acquisition of supercomputers. As prescribed in 225.7012-3, use the following clause: Restriction on Acquisition of Supercomputers (JUN 2005...

  4. STS-78 Columbia, Orbiter Vehicle (OV) 102, LMS-1 crew insignia

    NASA Image and Video Library

    1996-03-20

    STS078-S-001 (March 1996) --- The STS-78 mission links the past with the present through a crew patch influenced by Pacific Northwest Native American art. Central to the design is the space shuttle Columbia, whose shape evokes the image of the eagle, an icon of power and prestige and the national symbol of the United States. The eagle's feathers, representing both peace and friendship, symbolize the spirit of international unity on STS-78. An orbit surrounding the mission number recalls the traditional NASA emblem. The Life Sciences and Microgravity Spacelab (LMS) is housed in Columbia's payload bay and is depicted in a manner reminiscent of totem art. The pulsating sun, a symbol of life, displays three crystals representing STS-78's three high-temperature microgravity materials processing facilities. The constellation Delphinus recalls the dolphin, friend of sea explorers. Each star represents one member of STS-78's international crew, including the alternate payload specialists Pedro Duque and Luca Urbani. The colored thrust rings at the base of Columbia signify the five continents of Earth united in global cooperation for the advancement of all humankind. The NASA insignia design for space shuttle flights is reserved for use by the astronauts and for other official use as the NASA Administrator may authorize. Public availability has been approved only in the forms of illustrations by the various news media. When and if there is any change in this policy, which is not anticipated, the change will be publicly announced. Photo credit: NASA

  5. Computational Nanotechnology at NASA Ames Research Center, 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. (7) Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996, held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.

  6. NASA Vision

    NASA Technical Reports Server (NTRS)

    Fenton, Mary (Editor); Wood, Jennifer (Editor)

    2003-01-01

    This newsletter contains several articles, primarily on International Space Station (ISS) crewmembers and their activities, as well as the activities of NASA administrators. Other subjects covered in the articles include the investigation of the Space Shuttle Columbia accident, activities at NASA centers, Mars exploration, and a collision avoidance test on an unmanned aerial vehicle (UAV). The ISS articles cover landing in a Soyuz capsule, photography from the ISS, and the Expedition Seven crew.

  7. Towards Efficient Supercomputing: Searching for the Right Efficiency Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, Chung-Hsing; Kuehn, Jeffery A; Poole, Stephen W

    2012-01-01

    The efficiency of supercomputing has traditionally been measured in execution time. In the early 2000s, the concept of total cost of ownership was re-introduced, with efficiency measures extended to include aspects such as energy and space. Yet the supercomputing community has never agreed upon a metric that can cover these aspects altogether and also provide a fair basis for comparison. This paper examines the metrics that have been proposed in the past decade, and proposes a vector-valued metric for efficient supercomputing. Using this metric, the paper presents a study of where the supercomputing industry has been and how it stands today with respect to efficient supercomputing.
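    As an illustration of what a vector-valued efficiency metric might look like, the following Python sketch reports performance jointly with energy and space efficiency rather than collapsing them into one number; the systems and figures are invented, and the paper's actual metric may differ.

    ```python
    # Hypothetical systems; a real study would use measured figures.
    systems = {
        "system_a": {"tflops": 500.0, "megawatts": 2.0, "square_meters": 400.0},
        "system_b": {"tflops": 800.0, "megawatts": 5.0, "square_meters": 900.0},
    }

    def efficiency_vector(s):
        gflops = s["tflops"] * 1000.0
        watts = s["megawatts"] * 1e6
        return (s["tflops"],                        # performance (Tflop/s)
                gflops / watts,                     # energy term (Gflop/s per W)
                s["tflops"] / s["square_meters"])   # space term (Tflop/s per m^2)

    for name, s in systems.items():
        perf, energy, space = efficiency_vector(s)
        print(f"{name}: {perf:.0f} Tflop/s, {energy:.2f} Gflop/s/W, "
              f"{space:.2f} Tflop/s/m^2")
    ```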

  8. Performance of the Widely-Used CFD Code OVERFLOW on the Pleiades Supercomputer

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2017-01-01

    Computational performance studies were made for NASA's widely used Computational Fluid Dynamics code OVERFLOW on the Pleiades Supercomputer. Two test cases were considered: a full launch vehicle with a grid of 286 million points and a full rotorcraft model with a grid of 614 million points. Computations using up to 8000 cores were run on Sandy Bridge and Ivy Bridge nodes. Performance was monitored using times reported in the day files from the Portable Batch System utility. Results for two grid topologies are presented and compared in detail. Observations and suggestions for future work are made.

  9. TOP500 Supercomputers for June 2003

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  10. View of the Columbia's RMS arm and end effector grasping IECM

    NASA Image and Video Library

    1982-06-27

    STS004-37-670 (27 June-4 July 1982) --- The North Atlantic Ocean southeast of the Bahamas serves as backdrop for this 70mm scene of the Columbia's remote manipulator system (RMS) arm and hand-like device (called an end effector) grasping a multi-instrument monitor for detecting contaminants. The experiment is called the induced environment contaminant monitor (IECM). The small box contains 11 instruments for checking the contaminants in and around the orbiter's cargo bay which might adversely affect delicate experiments carried onboard. Astronauts Thomas K. Mattingly II and Henry W. Hartsfield Jr. manned the Columbia for seven days and one hour. The Columbia's vertical tail and orbital maneuvering system (OMS) pods are at left foreground. Photo credit: NASA

  11. STS-65 Columbia, Orbiter Vehicle (OV) 102, crew insignia

    NASA Image and Video Library

    1994-03-01

    STS065-S-001 (March 1994) --- Designed by the crew members, the STS-65 insignia features the International Microgravity Laboratory (IML-2) mission and its Spacehab module which will fly aboard the space shuttle Columbia. IML-2 is reflected in the emblem by two gold stars shooting toward the heavens behind the IML lettering. The space shuttle Columbia is depicted orbiting the logo and reaching into space, with Spacehab on an international quest for a better understanding of the effects of spaceflight on materials processing and life sciences. The NASA insignia design for space shuttle flights is reserved for use by the astronauts and for other official use as the NASA Administrator may authorize. Public availability has been approved only in the forms of illustrations by the various news media. When and if there is any change in this policy, which is not anticipated, the change will be publicly announced. Photo credit: NASA

  12. An Implementation Plan for NFS at NASA's NAS Facility

    NASA Technical Reports Server (NTRS)

    Lam, Terance L.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This document discusses how NASA's NAS can benefit from the Sun Microsystems' Network File System (NFS). A case study is presented to demonstrate the effects of NFS on the NAS supercomputing environment. Potential problems are addressed and an implementation strategy is proposed.

  13. NSF Commits to Supercomputers.

    ERIC Educational Resources Information Center

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  14. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.
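    A toy sketch of the general shape of such a help system follows (keyword rules mapping routine questions to canned answers); the rules and replies are invented and do not reflect the actual Ames knowledge base.

    ```python
    import re

    # Invented keyword rules; a real expert system would be far richer.
    RULES = [
        ({"submit", "job"}, "Use the batch queue; see the qsub man page."),
        ({"disk", "quota"}, "Check usage with 'quota -v'; request increases via the help desk."),
        ({"password"}, "Password resets are handled by operations staff."),
    ]

    def answer(question):
        words = set(re.findall(r"[a-z]+", question.lower()))
        for keywords, reply in RULES:
            if keywords <= words:      # every keyword appears in the question
                return reply
        return "Routing your question to a human consultant."

    print(answer("How do I submit a batch job?"))
    ```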

  15. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    NASA Astrophysics Data System (ADS)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to the significant idle time of computational resources, and, in turn, to the decrease in speed of scientific research. This paper presents three approaches to study the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which allows to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in the supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are being detected. For each approach, the results obtained in practice in the Supercomputer Center of Moscow State University are demonstrated.
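    The third approach can be illustrated with a minimal sketch: flag jobs whose utilization deviates strongly from the job flow's typical behavior. The values and threshold below are hypothetical; a production system would draw them from the monitoring database.

    ```python
    import statistics

    # Invented per-job average CPU utilization from monitoring data.
    jobs = {
        "job101": 0.82, "job102": 0.78, "job103": 0.85,
        "job104": 0.80, "job105": 0.07,   # suspiciously idle
    }

    mean = statistics.mean(jobs.values())
    stdev = statistics.stdev(jobs.values())

    for name, util in jobs.items():
        z = (util - mean) / stdev
        if abs(z) > 1.5:   # threshold is a tunable assumption
            print(f"{name}: utilization {util:.2f} (z = {z:+.1f}) -- abnormal")
    ```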

  16. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
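    The statistical flavor of the methodology can be sketched briefly: estimate the distribution of delivered bandwidth from many samples rather than a single aggregate, so stragglers become visible. The numbers below are synthetic and purely illustrative.

    ```python
    import random
    random.seed(0)

    # Most samples land near the nominal rate; a tail of stragglers does not.
    samples = [random.gauss(900, 60) for _ in range(950)] + \
              [random.gauss(250, 80) for _ in range(50)]   # MB/s per target
    samples.sort()

    def pct(p):
        return samples[int(p / 100 * (len(samples) - 1))]

    print(f"median {pct(50):.0f} MB/s, p10 {pct(10):.0f} MB/s, p1 {pct(1):.0f} MB/s")
    # For striped (coupled) output, the slowest target gates the whole write:
    print(f"min    {samples[0]:.0f} MB/s  <- effective rate of a coupled stripe")
    ```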

  17. KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Scott Thurston, NASA vehicle flow manager, addresses the media about efforts to pack the debris stored in the Columbia Debris Hangar. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. An area of the Vehicle Assembly Building is being prepared to store the debris permanently.

    NASA Image and Video Library

    2003-09-11

    KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Scott Thurston, NASA vehicle flow manager, addresses the media about efforts to pack the debris stored in the Columbia Debris Hangar. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. An area of the Vehicle Assembly Building is being prepared to store the debris permanently.

  18. Most Social Scientists Shun Free Use of Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  19. STS-93 orbiter Columbia streaks across Houston sky

    NASA Image and Video Library

    1999-07-27

    S99-08357 (27 July 1999) --- The fly-over of Space Shuttle Columbia's STS-93 re-entry is seen above the Johnson Space Center's Rocket Park. The Saturn V is below the streak that was left by Columbia re-entering the atmosphere. The image was captured with a Hasselblad 503cx medium format camera with a 30mm Hasselblad lens using an 8-second exposure and an aperture setting of f/8. The film was Kodak PMZ 1000 color negative film. The photographer was Mark Sowa of the NASA Johnson Space Center's photography group.

  20. Intelligent supercomputers: the Japanese computer sputnik

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, G.

    1983-11-01

    Japan's government-supported fifth-generation computer project has had a pronounced effect on the American computer and information systems industry. The US firms are intensifying their research on and production of intelligent supercomputers, a combination of computer architecture and artificial intelligence software programs. While the present generation of computers is built for the processing of numbers, the new supercomputers will be designed specifically for the solution of symbolic problems and the use of artificial intelligence software. This article discusses new and exciting developments that will increase computer capabilities in the 1990s. 4 references.

  1. HEC Applications on Columbia Project

    NASA Technical Reports Server (NTRS)

    Taft, Jim

    2004-01-01

    NASA's Columbia system consists of a cluster of twenty 512-processor SGI Altix systems. Each of these systems is 3 TFLOP/s in peak performance, approximately the same as the entire compute capability at NAS just one year ago. Each 512p system is a single-system-image machine with one Linux OS, one high-performance file system, and one globally shared memory. The NAS Terascale Applications Group (TAG) is chartered to assist in scaling NASA's mission-critical codes to at least 512p in order to significantly improve emergency response during flight operations, as well as to provide significant improvements in the codes and in the rate of scientific discovery across the scientific disciplines within NASA's missions. Recent accomplishments are 4x improvements to codes in the ocean modeling community, 10x performance improvements in a number of computational fluid dynamics codes used in aero-vehicle design, and 5x improvements in a number of space science codes dealing in extreme physics. The TAG group will continue its scaling work to 2048p and beyond (10,240 CPUs) as the Columbia system becomes fully operational and the upgrades to the SGI NUMAlink memory fabric are in place. The NUMAlink upgrades dramatically improve system scalability for a single application. These upgrades will allow a number of codes to execute faster and at higher fidelity than ever before on any other system, thus increasing the rate of scientific discovery even further.

  2. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.
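    The bookkeeping that such a tool automates can be sketched in a few lines of Python; the parameter names and job template below are hypothetical, not ILab's actual interface.

    ```python
    import itertools

    # Hypothetical parameters for a small CFD parameter study.
    mach  = [0.7, 0.8, 0.9]
    alpha = [0.0, 2.0, 4.0]        # angle of attack, degrees
    grid  = ["coarse", "fine"]

    # Enumerate the full parameter space and emit one job spec per point;
    # ILab additionally chains successive stages and targets grid resources.
    for i, (m, a, g) in enumerate(itertools.product(mach, alpha, grid)):
        print(f"job{i:03d}: solver --mach {m} --alpha {a} --grid {g}")
    ```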

  3. Green Supercomputing at Argonne

    ScienceCinema

    Pete Beckman

    2017-12-09

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently.

  4. Supercomputer Provides Molecular Insight into Cellulose (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2011-02-01

    Groundbreaking research at the National Renewable Energy Laboratory (NREL) has used supercomputing simulations to calculate the work that enzymes must do to deconstruct cellulose, which is a fundamental step in biomass conversion technologies for biofuels production. NREL used the new high-performance supercomputer Red Mesa to conduct several million central processing unit (CPU) hours of simulation.

  5. Integration of Panda Workload Management System with supercomputers

    NASA Astrophysics Data System (ADS)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.
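    A minimal sketch of the MPI-wrapper idea, using mpi4py: each rank launches one single-threaded payload, so one batch allocation hosts many independent jobs. The payload command and file naming are placeholders, not PanDA's actual pilot code.

    ```python
    import subprocess
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # One input file per rank; names and executable are hypothetical.
    result = subprocess.run(
        ["./simulate_event", f"input_{rank:04d}.dat"],
        capture_output=True, text=True)

    # Gather exit codes on rank 0 for simple bookkeeping.
    codes = comm.gather(result.returncode, root=0)
    if rank == 0:
        print(f"{codes.count(0)}/{len(codes)} payloads succeeded")
    ```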

  6. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Maeno, T

    Abstract: The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.

  7. Use of high performance networks and supercomputers for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  8. Ice Storm Supercomputer

    ScienceCinema

    None

    2018-05-01

    A new Idaho National Laboratory supercomputer is helping scientists create more realistic simulations of nuclear fuel. Dubbed "Ice Storm" this 2048-processor machine allows researchers to model and predict the complex physics behind nuclear reactor behavior. And with a new visualization lab, the team can see the results of its simulations on the big screen. For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.

  9. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high temperature reactive gasdynamic computations.

  10. Automatic discovery of the communication network topology for building a supercomputer model

    NASA Astrophysics Data System (ADS)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
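    The resulting model can be pictured as a small graph-construction exercise. The sketch below (Python with networkx) builds a graph from a pre-collected link list standing in for what a discovery pass over switch neighbor tables would produce; it is an illustration, not the Octotron implementation.

    ```python
    import networkx as nx

    # Hypothetical discovered links between switches and compute nodes.
    links = [
        ("switch1", "node001"), ("switch1", "node002"),
        ("switch2", "node003"), ("switch1", "switch2"),
    ]

    g = nx.Graph()
    for a, b in links:
        g.add_node(a, kind="switch" if a.startswith("switch") else "node")
        g.add_node(b, kind="switch" if b.startswith("switch") else "node")
        g.add_edge(a, b)

    # A model like this supports automated checks, e.g. that every compute
    # node is reachable from a given switch:
    print(nx.has_path(g, "switch1", "node003"))   # True
    ```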

  11. Japanese supercomputer technology.

    PubMed

    Buzbee, B L; Ewald, R H; Worlton, W J

    1982-12-17

    Under the auspices of the Ministry for International Trade and Industry the Japanese have launched a National Superspeed Computer Project intended to produce high-performance computers for scientific computation and a Fifth-Generation Computer Project intended to incorporate and exploit concepts of artificial intelligence. If these projects are successful, which appears likely, advanced economic and military research in the United States may become dependent on access to supercomputers of foreign manufacture.

  12. Kriging for Spatial-Temporal Data on the Bridges Supercomputer

    NASA Astrophysics Data System (ADS)

    Hodgess, E. M.

    2017-12-01

    Currently, kriging of spatial-temporal data is slow and limited to relatively small vector sizes. We have developed a method on the Bridges supercomputer, at the Pittsburgh Supercomputing Center, which uses a combination of the tools R, Fortran, the Message Passing Interface (MPI), OpenACC, and special R packages for big data. This combination of tools now permits us to complete tasks which previously could not be completed, or took literally hours to complete. We ran simulation studies from a laptop against the supercomputer. We also look at "real world" data sets, such as the Irish wind data, and some weather data. We compare the timings. We note that the timings are surprisingly good.
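    For readers unfamiliar with kriging, a minimal ordinary-kriging sketch in plain numpy follows; the covariance model and its parameters are assumed rather than fitted, and the work described above relies on R packages with Fortran, MPI, and OpenACC for scale.

    ```python
    import numpy as np

    def cov(h, sill=1.0, rng=2.0):
        """Gaussian covariance model; sill and range are assumed, not fitted."""
        return sill * np.exp(-(h / rng) ** 2)

    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # sample locations
    vals = np.array([1.0, 2.0, 1.5])                        # observed values
    target = np.array([0.5, 0.5])

    # Ordinary kriging solves an augmented linear system whose last row
    # enforces that the weights sum to one (unbiasedness).
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = cov(np.linalg.norm(pts - target, axis=1))

    w = np.linalg.solve(A, b)[:n]   # kriging weights (last entry of the
                                    # solution is the Lagrange multiplier)
    print(float(w @ vals))          # predicted value at the target point
    ```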

  13. NASA Standard for Models and Simulations (M and S): Development Process and Rationale

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Blattnig, Steve R.; Green, Lawrence L.; Hemsch, Michael J.; Luckring, James M.; Morison, Joseph H.; Tripathi, Ram K.

    2009-01-01

    After the Columbia Accident Investigation Board (CAIB) report, the NASA Administrator at that time chartered an executive team (known as the Diaz Team) to identify the CAIB report elements with Agency-wide applicability, and to develop corrective measures to address each element. This report documents the chronological development and release of an Agency-wide Standard for Models and Simulations (M&S) (NASA Standard 7009) in response to Action #4 from the report, "A Renewed Commitment to Excellence: An Assessment of the NASA Agency-wide Applicability of the Columbia Accident Investigation Board Report, January 30, 2004".

  14. Introduction of the Space Shuttle Columbia Accident, Investigation Details, Findings and Crew Survival Investigation Report

    NASA Technical Reports Server (NTRS)

    Chandler, Michael

    2010-01-01

    As the Space Shuttle Program comes to an end, it is important that the lessons learned from the Columbia accident be captured and understood by those who will be developing future aerospace programs and supporting current programs. Aeromedical lessons learned from the accident were presented at AsMA in 2005. This Panel will update that information, close out the lessons learned, provide additional information on the accident and provide suggestions for the future. To set the stage, an overview of the accident is required. The Space Shuttle Columbia was returning to Earth with a crew of seven astronauts on 1 February 2003. It disintegrated along a track extending from California to Louisiana, and observers along part of the track filmed the breakup of Columbia. Debris was recovered from Littlefield, Texas, to Fort Polk, Louisiana, along a 567 statute mile track; the largest ever recorded debris field. The Columbia Accident Investigation Board (CAIB) concluded its investigation in August 2003, and released their findings in a report published in February 2004. NASA recognized the importance of capturing the lessons learned from the loss of Columbia and her crew and the Space Shuttle Program managers commissioned the Spacecraft Crew Survival Integrated Investigation Team (SCSIIT) to accomplish this. Their task was to perform a comprehensive analysis of the accident, focusing on factors and events affecting crew survival, and to develop recommendations for improving crew survival, including the design features, equipment, training and procedures intended to protect the crew. NASA released the Columbia Crew Survival Investigation Report in December 2008. Key personnel have been assembled to give you an overview of the Space Shuttle Columbia accident, the medical response, the medico-legal issues, the SCSIIT findings and recommendations and future NASA flight surgeon spacecraft accident response training. Educational Objectives: Set the stage for the Panel to address the

  15. Columbia's payload bay with Earth in the background

    NASA Image and Video Library

    2009-06-24

    STS003-17-806 (22-30 March 1982) --- A 70mm out-the-window view showing Israel, the Dead Sea, Sea of Galilee, Jordan River, Sinai, Jordan, the Red Sea and Egypt (in background). Rested Remote Manipulator System (RMS) arm and part of the aft section of space shuttle Columbia in foreground. Photo credit: NASA

  16. National Test Facility civilian agency use of supercomputers not feasible

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-01

    Based on interviews with civilian agencies cited in the House report (DOE, DoEd, HHS, FEMA, NOAA), none would be able to make effective use of NTF's excess supercomputing capabilities. These agencies stated they could not use the resources primarily because (1) NTF's supercomputers are older machines whose performance and costs cannot match those of more advanced computers available from other sources and (2) some agencies have not yet developed applications requiring supercomputer capabilities or do not have funding to support such activities. In addition, future support for the hardware and software at NTF is uncertain, making any investment by an outside user risky.

  17. View of the Columbia's remote manipulator system (RMS)

    NASA Image and Video Library

    1981-11-13

    STS002-13-226 (13 Nov. 1981) --- Backdropped against Earth's horizon and the darkness of space, the space shuttle Columbia's remote manipulator system (RMS) gets its first workout in zero-gravity during the STS-2 mission. A television camera is mounted near the elbow and another is partially visible near the wrist of the RMS. Photo credit: NASA

  18. Predicting Hurricanes with Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-01-01

    Hurricane Emily, formed in the Atlantic Ocean on July 10, 2005, was the strongest hurricane ever to form before August. By checking computer models against the actual path of the storm, researchers can improve hurricane prediction. In 2010, NOAA researchers were awarded 25 million processor-hours on Argonne's BlueGene/P supercomputer for the project. Read more at http://go.usa.gov/OLh

  19. Supercomputers Of The Future

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  20. Liftoff of STS-62 Space Shuttle Columbia

    NASA Image and Video Library

    1994-03-04

    STS062-S-053 (4 March 1994) --- Carrying a crew of five veteran NASA astronauts and the United States Microgravity Payload (USMP), the Space Shuttle Columbia heads toward its sixteenth mission in Earth-orbit. Launch occurred at 8:53 a.m. (EST), March 4, 1994. Onboard were astronauts John H. Casper, Andrew M. Allen, Marsha S. Ivins, Charles D. (Sam) Gemar and Pierre J. Thuot.

  1. View of the Columbia's open payload bay and the Canadian RMS

    NASA Image and Video Library

    1981-11-13

    STS002-12-833 (13 Nov. 1981) --- Clouds over Earth and black sky form the background for this unique photograph from the space shuttle Columbia in Earth orbit. The photograph was shot through the aft flight deck windows viewing the cargo bay. Part of the scientific payload of the Office of Space and Terrestrial Applications (OSTA-1) is visible in the open cargo bay. The astronauts inside Columbia's cabin were remotely operating the Canadian-built remote manipulator system (RMS). Note television cameras on its elbow and wrist pieces. Photo credit: NASA

  2. Supercomputing Drives Innovation - Continuum Magazine | NREL

    Science.gov Websites

    Website snippet (fragmentary): "...years, NREL scientists have used supercomputers to simulate 3D models of the primary enzymes..." / "...discuss a 3D model of wind plant aerodynamics, showing low velocity wakes and impact on..."

  3. Prospects for Boiling of Subcooled Dielectric Liquids for Supercomputer Cooling

    NASA Astrophysics Data System (ADS)

    Zeigarnik, Yu. A.; Vasil'ev, N. V.; Druzhinin, E. A.; Kalmykov, I. V.; Kosoi, A. S.; Khodakov, K. A.

    2018-02-01

    It is shown experimentally that forced-convection boiling of the dielectric coolant Novec 649, subcooled relative to its saturation temperature, makes it possible to remove heat fluxes of up to 100 W/cm2 from the chip interfaces of modern supercomputers. This creates prerequisites for applying dielectric liquids in the cooling systems of modern supercomputers with increased operating-reliability requirements.
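
    For a sense of scale, a hedged worked example (the die area is an illustrative assumption, not a figure from the abstract): removing a heat flux of 100 W/cm2 from a 4 cm2 processor die corresponds to

      q = 100 W/cm2 x 4 cm2 = 400 W dissipated per chip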

  4. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
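
    As a loose illustration of the loop restructuring the abstract describes, the sketch below vectorizes one backward (Bellman) step of value iteration in Python/NumPy. The state space, cost array, and deterministic transition table are invented placeholders; the paper's stochastic, multibody formulation is far richer.

      # Hedged sketch: one vectorized Bellman step for value iteration.
      # All names/shapes are illustrative assumptions, and the stochastic
      # transition expectation is simplified to a deterministic lookup.
      import numpy as np

      n_states, n_controls = 512, 64
      rng = np.random.default_rng(0)
      cost = rng.random((n_states, n_controls))                       # running cost c(x, u)
      next_state = rng.integers(0, n_states, (n_states, n_controls))  # transition table
      V = np.zeros(n_states)                                          # value function

      def bellman_step(V, cost, next_state, discount=0.99):
          # Evaluate c(x, u) + discount * V(x') for every (state, control)
          # pair at once; the inner control loop becomes one array reduction,
          # mirroring the loop collapsing/restructuring ideas in the abstract.
          Q = cost + discount * V[next_state]
          return Q.min(axis=1)

      for _ in range(100):          # iterate toward the fixed point
          V = bellman_step(V, cost, next_state)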

  5. A Summary of the Space Shuttle Columbia Tragedy and the Use of LS Dyna in the Accident Investigation and Return to Flight Efforts

    NASA Technical Reports Server (NTRS)

    Melis, Matthew; Carney, Kelly; Gabrys, Jonathan; Fasanella, Edwin L.; Lyle, Karen H.

    2004-01-01

    On February 1, 2003, the Space Shuttle Columbia broke apart during reentry, resulting in the loss of the seven crew members and the vehicle. For the next several months an extensive investigation of the accident ensued, involving a nationwide team of experts from NASA, industry, and academia, spanning dozens of technical disciplines. The Columbia Accident Investigation Board (CAIB), a group of experts assembled to conduct an investigation independent of NASA, concluded in August 2003 that the cause of the loss of Columbia and its crew was a breach in the left wing leading edge Reinforced Carbon-Carbon (RCC) thermal protection system, initiated by the impact of thermal insulating foam that had separated from the orbiter's external fuel tank 81 seconds into the mission's launch. During reentry, this breach allowed superheated air to penetrate behind the leading edge and erode the aluminum structure of the left wing, which ultimately led to the breakup of the orbiter. In order to gain a better understanding of the foam impact on the orbiter's RCC wing leading edge, a multi-center team of NASA and Boeing impact experts was formed to characterize the foam and RCC materials for impact analysis using LS-DYNA. LS-DYNA predictions were validated with sub-component and full-scale tests. LS-DYNA proved to be a valuable asset in supporting both the Columbia accident investigation and NASA's return-to-flight efforts. This paper summarizes the Columbia accident and the nearly seven-month-long investigation that followed. The use of LS-DYNA in this effort is highlighted. Contributions to the investigation and return-to-flight efforts of the multi-center team, consisting of members from NASA Glenn, NASA Langley, and Boeing Philadelphia, are introduced and covered in detail in papers to follow in these proceedings.

  6. KENNEDY SPACE CENTER, FLA. - The Stafford-Covey Return to Flight Task Group (SCTG) visits the Columbia Debris Hangar. Chairing the task group are Richard O. Covey (third from right), former Space Shuttle commander, and Thomas P. Stafford (fourth from right), Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

    NASA Image and Video Library

    2003-08-05

    KENNEDY SPACE CENTER, FLA. - The Stafford-Covey Return to Flight Task Group (SCTG) visits the Columbia Debris Hangar. Chairing the task group are Richard O. Covey (third from right), former Space Shuttle commander, and Thomas P. Stafford (fourth from right), Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

  7. Introducing Mira, Argonne's Next-Generation Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-03-19

    Mira, the new petascale IBM Blue Gene/Q system installed at the ALCF, will usher in a new era of scientific supercomputing. An engineering marvel, the 10-petaflops machine is capable of carrying out 10 quadrillion calculations per second.

  8. STS-68 on Runway with 747 SCA/Columbia Ferry Flyby

    NASA Image and Video Library

    1994-10-11

    The space shuttle Endeavour receives a high-flying salute from its sister shuttle, Columbia, atop NASA's Shuttle Carrier Aircraft, shortly after Endeavour's landing 11 October 1994, at Edwards, California, to complete mission STS-68. Columbia was being ferried from the Kennedy Space Center, Florida, to Air Force Plant 42, Palmdale, California, where it will undergo six months of inspections, modifications, and systems upgrades. The STS-68 11-day mission was devoted to radar imaging of Earth's geological features with the Space Radar Laboratory. The orbiter is surrounded by equipment and personnel that make up the ground support convoy that services the space vehicles as soon as they land.

  9. STS-68 on Runway with 747 SCA - Columbia Ferry Flyby

    NASA Image and Video Library

    1994-10-11

    The space shuttle Endeavour receives a high-flying salute from its sister shuttle, Columbia, atop NASA's Shuttle Carrier Aircraft, shortly after Endeavour's landing 11 October 1994, at Edwards, California, to complete mission STS-68. Columbia was being ferried from the Kennedy Space Center, Florida, to Air Force Plant 42, Palmdale, California, where it will undergo six months of inspections, modifications, and systems upgrades. The STS-68 11-day mission was devoted to radar imaging of Earth's geological features with the Space Radar Laboratory. The orbiter is surrounded by equipment and personnel that make up the ground support convoy that services the space vehicles as soon as they land.

  10. Supercomputing Sheds Light on the Dark Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Heitmann, Katrin

    2012-11-15

    At Argonne National Laboratory, scientists are using supercomputers to shed light on one of the great mysteries in science today, the Dark Universe. With Mira, a petascale supercomputer at the Argonne Leadership Computing Facility, a team led by physicists Salman Habib and Katrin Heitmann will run the largest, most complex simulation of the universe ever attempted. By contrasting the results from Mira with state-of-the-art telescope surveys, the scientists hope to gain new insights into the distribution of matter in the universe, advancing future investigations of dark energy and dark matter into a new realm. The team's research was named a finalist for the 2012 Gordon Bell Prize, an award recognizing outstanding achievement in high-performance computing.

  11. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and invoked from a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
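
    The sub-jobs idea (many small independent runs packed into one large allocation) can be sketched generically. The Python fragment below uses a local process pool as a stand-in for the Swift/Cobalt sub-block machinery; the function and parameters are invented, and this is not the paper's actual API.

      # Generic many-task sketch: run an ensemble of small jobs inside one
      # "allocation" (here, a process pool). Stand-in for Swift/Cobalt sub-jobs.
      from concurrent.futures import ProcessPoolExecutor

      def small_job(param):
          # placeholder for one ordinary application run with one parameter set
          return sum(i * param for i in range(100_000))

      if __name__ == "__main__":
          params = range(256)                               # ensemble of 256 small jobs
          with ProcessPoolExecutor(max_workers=8) as pool:  # one resource block
              results = list(pool.map(small_job, params))
          print(len(results), "tasks completed")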

  12. Roadrunner Supercomputer Breaks the Petaflop Barrier

    ScienceCinema

    Los Alamos National Lab - Brian Albright, Charlie McMillan, Lin Yin

    2017-12-09

    At 3:30 a.m. on May 26, 2008, Memorial Day, the "Roadrunner" supercomputer exceeded a sustained speed of 1 petaflop/s, or 1 million billion calculations per second. The sustained performance makes Roadrunner more than twice as fast as the current number 1

  13. Supercomputer Issues from a University Perspective.

    ERIC Educational Resources Information Center

    Beering, Steven C.

    1984-01-01

    Discusses issues related to the access of and training of university researchers in using supercomputers, considering National Science Foundation's (NSF) role in this area, microcomputers on campuses, and the limited use of existing telecommunication networks. Includes examples of potential scientific projects (by subject area) utilizing…

  14. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M

    2009-01-01

    This work presents a detailed implementation of a double precision, non-preconditioned Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
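
    For reference, a minimal non-preconditioned Conjugate Gradient in Python/NumPy, timed by wall clock as in the paper's comparisons. This is a sketch on a random symmetric positive definite test matrix, not the tuned Cell, FPGA, or Opteron implementations the study measured.

      # Minimal non-preconditioned CG solver with wall-clock timing (sketch).
      import time
      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x                    # residual
          p = r.copy()                     # search direction
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      n = 500
      M = np.random.default_rng(1).random((n, n))
      A = M @ M.T + n * np.eye(n)          # SPD test matrix
      b = np.ones(n)
      t0 = time.perf_counter()
      x = conjugate_gradient(A, b)
      print(f"{time.perf_counter() - t0:.3f} s, residual {np.linalg.norm(b - A @ x):.2e}")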

  15. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubois, David H; Dubois, Andrew J; Boorman, Thomas M

    2009-03-10

    This work presents a detailed implementation of a double precision, non-preconditioned Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.

  16. Finite element methods on supercomputers - The scatter-problem

    NASA Technical Reports Server (NTRS)

    Loehner, R.; Morgan, K.

    1985-01-01

    Certain problems arise in connection with the use of supercomputers for the implementation of finite-element methods. These problems are related to the desirability of utilizing the power of the supercomputer as fully as possible for the rapid execution of the required computations, taking into account the gain in speed possible with the aid of pipelining operations. For the finite-element method, the time-consuming operations may be divided into three categories. The first two present no problems, while the third type of operation can be a reason for the inefficient performance of finite-element programs. Two possibilities for overcoming certain difficulties are proposed, giving attention to a scatter-process.

  17. A mass storage system for supercomputers based on Unix

    NASA Technical Reports Server (NTRS)

    Richards, J.; Kummell, T.; Zarlengo, D. G.

    1988-01-01

    The authors present the design, implementation, and utilization of a large mass storage subsystem (MSS) for the Numerical Aerodynamic Simulation (NAS) program. The MSS supports a large networked, multivendor Unix-based supercomputing facility. The MSS at Ames Research Center provides all processors on the NAS processing network, from workstations to supercomputers, the ability to store large amounts of data in a highly accessible, long-term repository. The MSS uses Unix System V and is capable of storing hundreds of thousands of files ranging from a few bytes to 2 GB in size.

  18. Columbia, OV-102, forward middeck locker experiments and meal tray assemblies

    NASA Image and Video Library

    1982-07-04

    STS004-28-330 (27 June-4 July 1982) --- Thanks to a variety of juices and other food items, this array in the middeck area probably represents the most colorful area onboard the Earth-orbiting space shuttle Columbia. Most of the meal items have been carefully fastened to food trays and locker doors (or both). What has not been attached by conventional methods has been safely 'tucked' under something heavy (note jacket shoved into space occupied by one of Columbia's experiments). The Monodisperse Latex Reactor (MLR), making its second flight on Columbia, is designed to test the feasibility of making large-size, monodisperse (same size), polystyrene latex micro-spheres using the products of the STS-3 mission as seed particles. The latex spheres are used in calibration of scientific and industrial equipment and have potential medical and research applications. This frame was exposed with a 35mm camera. Onboard the space vehicle for seven days were astronauts Thomas K. Mattingly II and Henry W. Hartsfield Jr. Photo credit: NASA

  19. KENNEDY SPACE CENTER, FLA. - Posing with the plaque dedicated to Columbia Jan. 29, 2004, are (left to right) United Space Alliance project leader for Columbia reconstruction Jim Comer, Shuttle Launch Director Mike Leinbach, astronauts Douglas Hurley and Pam Melroy, Center Director Jim Kennedy and NASA Vehicle Manager Scott Thurston. The dedication of the plaque was made in front of the 40-member preservation team in the “Columbia room,” a permanent repository in the Vehicle Assembly Building of the debris collected in the aftermath of the tragic accident Feb. 1, 2003, that claimed the orbiter and lives of the seven-member crew.

    NASA Image and Video Library

    2004-01-29

    KENNEDY SPACE CENTER, FLA. - Posing with the plaque dedicated to Columbia Jan. 29, 2004, are (left to right) United Space Alliance project leader for Columbia reconstruction Jim Comer, Shuttle Launch Director Mike Leinbach, astronauts Douglas Hurley and Pam Melroy, Center Director Jim Kennedy and NASA Vehicle Manager Scott Thurston. The dedication of the plaque was made in front of the 40-member preservation team in the “Columbia room,” a permanent repository in the Vehicle Assembly Building of the debris collected in the aftermath of the tragic accident Feb. 1, 2003, that claimed the orbiter and lives of the seven-member crew.

  20. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype for such a library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that
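
    The computational core of ambient noise interferometry is a cross-correlation between station pairs. Below is a hedged sketch in Python/SciPy with synthetic traces; real workflows add pre-processing (whitening, temporal normalization) and stacking over many days, and the paper targets accelerated versions of this kernel.

      # FFT-based cross-correlation of two synthetic noise traces (sketch).
      import numpy as np
      from scipy.signal import correlate, correlation_lags

      rng = np.random.default_rng(2)
      station_a = rng.standard_normal(86_400)          # synthetic noise trace
      station_b = np.roll(station_a, 120)              # same signal, delayed 120 samples
      station_b += 0.5 * rng.standard_normal(86_400)   # plus incoherent noise

      cc = correlate(station_a, station_b, mode="full", method="fft")
      lags = correlation_lags(len(station_a), len(station_b), mode="full")
      print("peak lag:", lags[np.argmax(cc)])          # -120: station_a leads by 120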

  1. Multiple DNA and protein sequence alignment on a workstation and a supercomputer.

    PubMed

    Tajima, K

    1988-11-01

    This paper describes a multiple alignment method using a workstation and a supercomputer. The method is based on aligning a set of already-aligned sequences with a new sequence, applied recursively. The alignment executes in reasonable computation time on platforms ranging from a workstation to a supercomputer, both in terms of alignment quality and of computational speed gained through parallel processing. The application of the algorithm is illustrated by several examples of multiple alignment of 12 amino acid and DNA sequences of HIV (human immunodeficiency virus) env genes. Colour graphic programs on a workstation and parallel processing on a supercomputer are discussed.
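
    The core step (aligning a new sequence against an existing alignment) can be illustrated with a toy dynamic program. The sketch below reduces the existing alignment to a consensus string and aligns the new sequence to it with Needleman-Wunsch; scores and sequences are invented, and the paper's recursive multi-sequence procedure is far more elaborate.

      # Toy Needleman-Wunsch alignment of a new sequence to a consensus (sketch).
      import numpy as np

      def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-2):
          m, n = len(s), len(t)
          F = np.zeros((m + 1, n + 1))
          F[:, 0] = gap * np.arange(m + 1)      # gap penalties on the borders
          F[0, :] = gap * np.arange(n + 1)
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  sub = match if s[i-1] == t[j-1] else mismatch
                  F[i, j] = max(F[i-1, j-1] + sub, F[i-1, j] + gap, F[i, j-1] + gap)
          out_s, out_t, i, j = [], [], m, n     # traceback
          while i > 0 or j > 0:
              sub = match if i and j and s[i-1] == t[j-1] else mismatch
              if i and j and F[i, j] == F[i-1, j-1] + sub:
                  out_s.append(s[i-1]); out_t.append(t[j-1]); i -= 1; j -= 1
              elif i and F[i, j] == F[i-1, j] + gap:
                  out_s.append(s[i-1]); out_t.append("-"); i -= 1
              else:
                  out_s.append("-"); out_t.append(t[j-1]); j -= 1
          return "".join(reversed(out_s)), "".join(reversed(out_t))

      consensus = "ACGTGCA"        # consensus of an existing alignment (toy)
      new_seq = "ACTGCA"
      print(*needleman_wunsch(consensus, new_seq), sep="\n")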

  2. Library Services in a Supercomputer Center.

    ERIC Educational Resources Information Center

    Layman, Mary

    1991-01-01

    Describes library services that are offered at the San Diego Supercomputer Center (SDSC), which is located at the University of California at San Diego. Topics discussed include the user population; online searching; microcomputer use; electronic networks; current awareness programs; library catalogs; and the slide collection. A sidebar outlines…

  3. Extracting the Textual and Temporal Structure of Supercomputing Logs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, S; Singh, I; Chandra, A

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
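
    The syntactic-template idea can be sketched compactly: mask the variable fields in each message so that messages with the same shape collapse into one group. The fragment below is a simplified stand-in for the paper's online clustering, with invented log lines.

      # Group log messages by syntactic template (hedged, simplified sketch).
      import re
      from collections import Counter

      def template(message):
          masked = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)   # hex IDs
          return re.sub(r"\d+", "<NUM>", masked)                 # decimal fields

      log = [
          "node 17 temperature 81C exceeds threshold 75C",
          "node 233 temperature 92C exceeds threshold 75C",
          "ECC error at address 0x7f3a on node 17",
          "ECC error at address 0x11b0 on node 94",
      ]
      for tmpl, count in Counter(template(line) for line in log).items():
          print(count, tmpl)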

  4. Green Supercomputing at Argonne

    ScienceCinema

    Beckman, Pete

    2018-02-07

    Pete Beckman, head of Argonne's Leadership Computing Facility (ALCF) talks about Argonne National Laboratory's green supercomputing—everything from designing algorithms to use fewer kilowatts per operation to using cold Chicago winter air to cool the machine more efficiently. Argonne was recognized for green computing in the 2009 HPCwire Readers Choice Awards. More at http://www.anl.gov/Media_Center/News/2009/news091117.html Read more about the Argonne Leadership Computing Facility at http://www.alcf.anl.gov/

  5. KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, members of the Stafford-Covey Return to Flight Task Group (SCTG) look at tiles recovered. Chairing the task group are Richard O. Covey, former Space Shuttle commander, and Thomas P. Stafford (center), Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

    NASA Image and Video Library

    2003-08-05

    KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, members of the Stafford-Covey Return to Flight Task Group (SCTG) look at tiles recovered. Chairing the task group are Richard O. Covey, former Space Shuttle commander, and Thomas P. Stafford (center), Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

  6. KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, members of the Stafford-Covey Return to Flight Task Group (SCTG) inspect some of the debris. Chairing the task group are Richard O. Covey, former Space Shuttle commander, and Thomas P. Stafford (fourth from left), Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

    NASA Image and Video Library

    2003-08-05

    KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, members of the Stafford-Covey Return to Flight Task Group (SCTG) inspect some of the debris. Chairing the task group are Richard O. Covey, former Space Shuttle commander, and Thomas P. Stafford (fourth from left), Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

  7. NASA Administrator Dan Goldin talks with STS-78 crew

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA Administrator Dan Goldin (left) chats with STS-78 Mission Commander Terence 'Tom' Henricks (center) and KSC Director Jay Honeycutt underneath the orbiter Columbia. Columbia and her seven-member crew touched down on Runway 33 of KSC's Shuttle Landing Facility at 8:36 a.m. EDT, July 7, bringing to a close the longest Shuttle flight to date. STS-78, which also was the 78th Shuttle flight, lasted 16 days, 21 minutes and 47 seconds.

  8. Video of Tissue Grown in Space in NASA Bioreactor

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Principal investigator Leland Chung grew prostate cancer and bone stromal cells aboard the Space Shuttle Columbia during the STS-107 mission. Although the experiment samples were lost along with the ill-fated spacecraft and crew, he did obtain downlinked video of the experiment that indicates the enormous potential of growing tissues in microgravity. Cells grown aboard Columbia had grown far larger tissue aggregates at day 5 than did the cells grown in a NASA bioreactor on the ground.

  9. Space Shuttle orbiter Columbia touches down at Edwards Air Force Base

    NASA Image and Video Library

    1981-04-14

    S81-30744 (14 April 1981) --- The rear wheels of the space shuttle orbiter Columbia are about to touch down on Rogers Lake (a dry bed) at Edwards Air Force Base in southern California to successfully complete a stay in space of more than two days. Astronauts John W. Young, STS-1 commander, and Robert L. Crippen, pilot, are aboard the vehicle. The mission marked the first NASA flight to end with a wheeled landing and represents the beginning of a new age of spaceflight that will employ the same hardware repeatedly. Photo credit: NASA

  10. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2013-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  11. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2009-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  12. A fault tolerant spacecraft supercomputer to enable a new class of scientific discovery

    NASA Technical Reports Server (NTRS)

    Katz, D. S.; McVittie, T. I.; Silliman, A. G., Jr.

    2000-01-01

    The goal of the Remote Exploration and Experimentation (REE) Project is to move supercomputing into space in a cost-effective manner and to allow the use of inexpensive, state-of-the-art, commercial-off-the-shelf components and subsystems in these space-based supercomputers.

  13. A Look at the Impact of High-End Computing Technologies on NASA Missions

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart

    2012-01-01

    From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to the design of safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.

  14. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe our experiences exploiting threading in a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  15. STS-32 COLUMBIA - ORBITER VEHICLE (OV)-102 - OFFICIAL CREW PORTRAIT

    NASA Image and Video Library

    1989-10-27

    S89-48342 (October 1989) --- These five astronauts have been assigned to serve as crewmembers for NASA's STS-32 mission aboard the Space Shuttle Columbia in December of this year. In front are Astronauts Daniel C. Brandenstein (left), commander, and James D. Wetherbee, pilot. In back are Astronauts (l-r) Marsha S. Ivins, G. David Low and Bonnie J. Dunbar, all mission specialists.

  16. NASA Scientific Balloon in Antarctica

    NASA Image and Video Library

    2017-12-08

    NASA image captured December 25, 2011. A NASA scientific balloon awaits launch in McMurdo, Antarctica. The balloon, carrying Indiana University's Cosmic Ray Electron Synchrotron Telescope (CREST), was launched on December 25. After a circumnavigational flight around the South Pole, the payload landed on January 5. The CREST payload is one of two scheduled as part of this season's annual NASA Antarctic Balloon Campaign, which is conducted in cooperation with the National Science Foundation's Office of Polar Programs. The campaign's second payload is the University of Arizona's Stratospheric Terahertz Observatory (STO). You can follow the flights at the Columbia Scientific Balloon Facility's web site at www.csbf.nasa.gov/antarctica/ice.htm Credit: NASA

  17. An Application-Based Performance Characterization of the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Djomehri, Jahed M.; Hood, Robert; Jin, Hoaqiang; Kiris, Cetin; Saini, Subhash

    2005-01-01

    Columbia is a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each, and currently ranked as the second-fastest computer in the world. In this paper, we present the performance characteristics of Columbia obtained on up to four computing nodes interconnected via the InfiniBand and/or NUMAlink4 communication fabrics. We evaluate floating-point performance, memory bandwidth, message passing communication speeds, and compilers using a subset of the HPC Challenge benchmarks, and some of the NAS Parallel Benchmarks including the multi-zone versions. We present detailed performance results for three scientific applications of interest to NASA, one from molecular dynamics, and two from computational fluid dynamics. Our results show that both the NUMAlink4 and the InfiniBand hold promise for application scaling to a large number of processors.

  18. View of the Columbia's remote manipulator system

    NASA Image and Video Library

    1982-03-30

    STS003-09-444 (22-30 March 1982) --- The darkness of space provides the backdrop for this scene of the plasma diagnostics package (PDP) experiment in the grasp of the end effector or 'hand' of the remote manipulator system (RMS) arm, and other components of the Office of Space Sciences (OSS-1) package in the aft section of Columbia's cargo hold. The PDP is a compact, comprehensive assembly of electromagnetic and particle sensors that will be used to study the interaction of the orbiter with its surrounding environment; to test the capabilities of the shuttle's remote manipulator system; and to carry out experiments in conjunction with the fast pulse electron generator of the vehicle charging and potential experiment, another experiment on the OSS-1 payload pallet. This photograph was exposed by the astronaut crew of STS-3 with a 70mm handheld camera aimed through the flight deck's aft window. Photo credit: NASA

  19. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably enabling adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
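
    The five-dimensional torus can be illustrated with a few lines of address arithmetic: each node has ten neighbors, one step in either direction along each dimension, with coordinates wrapping modulo the torus extents. Dimension sizes below are invented, not the patent's.

      # Nearest neighbours on a 5-D torus (illustrative sketch).
      DIMS = (4, 4, 4, 4, 2)        # invented extents: 512 nodes

      def torus_neighbors(coord):
          neighbors = []
          for d in range(5):                         # one +/- step per dimension
              for step in (-1, 1):
                  nb = list(coord)
                  nb[d] = (nb[d] + step) % DIMS[d]   # modulo wrap = torus edges
                  neighbors.append(tuple(nb))
          return neighbors                           # 10 neighbours per node

      print(torus_neighbors((0, 0, 0, 0, 0)))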

  20. Earth observations taken from shuttle orbiter Columbia

    NASA Image and Video Library

    1995-10-24

    STS073-725-031 (24 October 1995) --- The contrasting colors of fall in New England are captured on this northward-looking photo of Martha's Vineyard, Nantucket Island, and the famous hook-shaped Cape Cod. Light-colored patches of urbanization are scattered throughout the scene, the most evident being the greater Boston area along the shores of Massachusetts Bay. The cape is composed of rock debris that, according to NASA scientists studying Columbia's photo collection, was deposited along the end of a glacier some 20,000 years ago.

  1. 2018 NASA Day of Remembrance

    NASA Image and Video Library

    2018-01-25

    Inside the Center for Space Education at the Kennedy Space Center Visitor Complex, spaceport employees and guests join others throughout NASA for the annual Day of Remembrance ceremony, honoring the contributions of astronauts who have perished in the conquest of space. Following the ceremony, guests walk to the Space Mirror Memorial. The names of the fallen astronauts from Apollo 1, Challenger and Columbia, as well as the astronauts who perished in training and commercial airplane accidents, are emblazoned on the monument.

  2. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity, with separate partitions built around the Haswell and Knights Landing CPU architectures, for capability computing, and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL, and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and offers notes on our experience meeting the Green500's reporting requirements.

  3. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity, with separate partitions built around the Haswell and Knights Landing CPU architectures, for capability computing, and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL, and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and offers notes on our experience meeting the Green500's reporting requirements.

  4. QCD on the BlueGene/L Supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  5. Development of seismic tomography software for hybrid supercomputers

    NASA Astrophysics Data System (ADS)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique for computing the velocity model of a geologic structure from the first-arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for seismic tomography applications, with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. In order to solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
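
    The linearized inversion step described above amounts to a regularized least-squares solve. A hedged sketch follows, with random placeholders standing in for a real tomographic matrix and travel-time residuals, and simple Tikhonov damping as the (assumed) regularization.

      # Regularized linear tomographic update via LSQR (illustrative sketch).
      import numpy as np
      from scipy.sparse import vstack, identity, random as sparse_random
      from scipy.sparse.linalg import lsqr

      n_rays, n_cells = 2000, 500
      G = sparse_random(n_rays, n_cells, density=0.02, random_state=3)  # ray matrix
      t_res = np.random.default_rng(3).standard_normal(n_rays)          # residuals

      lam = 0.1                                    # damping weight (assumed)
      A = vstack([G, lam * identity(n_cells)])     # augmented operator [G; lam*I]
      b = np.concatenate([t_res, np.zeros(n_cells)])
      dm = lsqr(A, b)[0]                           # model (slowness) adjustment
      print("update norm:", np.linalg.norm(dm))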

  6. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations is getting higher and higher because of the tremendous advancement of supercomputers. A more advanced technology is Grid computing, which integrates distributed computational resources to provide scalable computing power. In simulation research, it is effective for researchers to design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results using familiar methods. A supercomputer, however, is usually far from the analysis and visualization environment. In general, a researcher analyzes and visualizes on a locally managed workstation (WS), because installing and operating software on the WS is easy, so data must be copied from the supercomputer to the WS manually. In practice, the time needed for this transfer over a long-delay network hampers high-accuracy simulation work. In terms of usefulness, it is important to integrate a supercomputer and an analysis and visualization environment seamlessly, using methods familiar to the researcher. NICT has been developing a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs for data analysis and visualization. They are connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). The huge data sets output from the supercomputer are transferred to the virtual storage through JGN2plus. A researcher can thus concentrate on the research, working in a familiar way, without regard to the distance between the supercomputer and the analysis and visualization environment. At present, a total of 16 disk servers are set up at NICT headquarters (at Koganei, Tokyo), the JGN2plus NOC (at Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected on

  7. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted into a research computer lab, with added furniture and an air conditioning unit, to provide an appropriate working environment for researchers and the purchased equipment. All of the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  8. A Long History of Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  9. Introducing Argonne’s Theta Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Theta, the Argonne Leadership Computing Facility’s (ALCF) new Intel-Cray supercomputer, is officially open to the research community. Theta’s massively parallel, many-core architecture puts the ALCF on the path to Aurora, the facility’s future Intel-Cray system. Capable of nearly 10 quadrillion calculations per second, Theta enables researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.

  10. NSF Establishes First Four National Supercomputer Centers.

    ERIC Educational Resources Information Center

    Lepkowski, Wil

    1985-01-01

    The National Science Foundation (NSF) has awarded support for supercomputer centers at Cornell University, Princeton University, University of California (San Diego), and University of Illinois. These centers are to be the nucleus of a national academic network for use by scientists and engineers throughout the United States. (DH)

  11. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty of accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and in-lining capabilities. The code runs at only about 5 percent of the Cray's peak speed because it uses the vector and parallel processing capabilities of the Cray ineffectively. We expect that restructuring the code could make it execute an additional six to ten times faster.
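
    The flavor of the restructuring described above can be shown with a small stand-in kernel: the same update written as an element-by-element loop and as a single vectorized array expression. The growth function and data are invented, not the grassland model's.

      # Scalar loop vs. vectorized array expression (illustrative timing sketch).
      import time
      import numpy as np

      biomass = np.random.default_rng(4).random(1_000_000)
      rate, capacity = 0.05, 1.0

      t0 = time.perf_counter()
      out_loop = np.empty_like(biomass)
      for i in range(biomass.size):        # scalar, loop-bound version
          out_loop[i] = biomass[i] + rate * biomass[i] * (1 - biomass[i] / capacity)
      t_loop = time.perf_counter() - t0

      t0 = time.perf_counter()
      out_vec = biomass + rate * biomass * (1 - biomass / capacity)  # vectorized
      t_vec = time.perf_counter() - t0

      assert np.allclose(out_loop, out_vec)
      print(f"loop {t_loop:.2f} s vs vectorized {t_vec:.4f} s")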

  12. KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Shuttle Launch Director Mike Leinbach answers questions from the Stafford-Covey Return to Flight Task Group (SCTG). Chairing the task group are Richard O. Covey (fifth from left), former Space Shuttle commander, and Thomas P. Stafford, Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

    NASA Image and Video Library

    2003-08-05

    KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Shuttle Launch Director Mike Leinbach answers questions from the Stafford-Covey Return to Flight Task Group (SCTG). Chairing the task group are Richard O. Covey (fifth from left), former Space Shuttle commander, and Thomas P. Stafford, Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

  13. KENNEDY SPACE CENTER, FLA. - The media listen to Scott Thurston, NASA vehicle flow manager, talk about efforts to pack the debris stored in the Columbia Debris Hangar. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. An area of the Vehicle Assembly Building is being prepared to store the debris permanently.

    NASA Image and Video Library

    2003-09-11

    KENNEDY SPACE CENTER, FLA. - The media listen to Scott Thurston, NASA vehicle flow manager, talk about efforts to pack the debris stored in the Columbia Debris Hangar. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. An area of the Vehicle Assembly Building is being prepared to store the debris permanently.

  14. NASA Ames Research Center Overview

    NASA Technical Reports Server (NTRS)

    Boyd, Jack

    2006-01-01

    A general overview of the NASA Ames Research Center is presented. The topics include: 1) First Century of Flight, 1903-2003; 2) NACA Research Centers; 3) 65 Years of Innovation; 4) Ames Projects; 5) NASA Ames Research Center Today; 6) Astrobiology; 7) SOFIA; 8) To Explore the Universe and Search for Life: Kepler: The Search for Habitable Planets; 9) Crew Exploration Vehicle/Crew Launch Vehicle; 10) Lunar Crater Observation and Sensing Satellite (LCROSS); 11) Thermal Protection Materials and Arc-Jet Facility; 12) Information Science & Technology; 13) Project Columbia Integration and Installation; 14) Air Traffic Management/Air Traffic Control; and 15) New Models-UARC.

  15. Aeromedical Lessons Learned from the Space Shuttle Columbia Accident Investigation

    NASA Technical Reports Server (NTRS)

    Chandler, Mike

    2011-01-01

    This slide presentation provides an update on the Columbia accident response presented in 2005 with additional information that was not available at that time. It will provide information on the following topics: (1) medical response and Search and Rescue, (2) medico-legal issues associated with the accident, (3) the Spacecraft Crew Survival Integrated Investigation Team Report published in 2008, and (4) future NASA flight surgeon spacecraft accident response training.

  16. Success Legacy of the Space Shuttle Program: Changes in Shuttle Post Challenger and Columbia

    NASA Technical Reports Server (NTRS)

    Jarrell, George

    2010-01-01

    This slide presentation reviews the legacy of successes in the Space Shuttle Program, particularly the changes in the culture of NASA's organization after the Challenger and Columbia accidents and some of the changes made to the shuttles as a result of those accidents.

  17. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for visualizing computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three applied visualization techniques (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation, combined with specially developed display and animation software, provides a good tool for analyzing flow field solutions obtained from supercomputers.

  18. Spatiotemporal modeling of node temperatures in supercomputers

    DOE PAGES

    Storlie, Curtis Byron; Reich, Brian James; Rust, William Newton; ...

    2016-06-10

    Los Alamos National Laboratory (LANL) is home to many large supercomputing clusters. These clusters require an enormous amount of power (~500-2000 kW each), and most of this energy is converted into heat. Thus, cooling the components of the supercomputer becomes a critical and expensive endeavor. Recently a project was initiated to investigate the effect that changes to the cooling system in a machine room had on three large machines that were housed there. Coupled with this goal was the aim to develop a general good practice for characterizing the effect of cooling changes and monitoring machine node temperatures in this and other machine rooms. This paper focuses on the statistical approach used to quantify the effect that several cooling changes to the room had on the temperatures of the individual nodes of the computers. The largest cluster in the room has 1,600 nodes that run a variety of jobs during general use. Since extreme temperatures are important, a Normal distribution plus a generalized Pareto distribution for the upper tail is used to model the marginal distribution, along with a Gaussian process copula to account for spatio-temporal dependence. A Gaussian Markov random field (GMRF) model is used to model the spatial effects on the node temperatures as the cooling changes take place. This model is then used to assess the condition of the node temperatures after each change to the room. The analysis approach was used to uncover the cause of a problematic episode of overheating nodes on one of the supercomputing clusters. Lastly, this same approach can easily be applied to monitor and investigate cooling systems at other data centers as well.
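
    The marginal model described above (a Normal bulk with a generalized Pareto upper tail) is straightforward to sketch on synthetic data; the paper's Gaussian process copula and GMRF spatial layers are omitted here, and the threshold choice is an assumption.

      # Normal bulk + generalized Pareto tail for node temperatures (sketch).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      temps = rng.normal(55, 5, 20_000)       # synthetic node temperatures (C)

      u = np.quantile(temps, 0.95)            # tail threshold (assumed quantile)
      shape, _, scale = stats.genpareto.fit(temps[temps > u] - u, floc=0)

      # P(T > 70 C) = P(exceed threshold) * GPD survival beyond the threshold
      p_tail = (temps > u).mean() * stats.genpareto.sf(70 - u, shape, scale=scale)
      print(f"threshold {u:.1f} C, GPD shape {shape:.3f}, P(T>70C) ~ {p_tail:.2e}")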

  19. 2018 NASA Day of Remembrance

    NASA Image and Video Library

    2018-01-25

    Guests place flowers near the Space Mirror Memorial at the Kennedy Space Center Visitor Complex. The names of fallen astronauts from Apollo 1, Challenger and Columbia, as well as the astronauts who perished in training and commercial airplane accidents are emblazoned on the monument. During the annual Day of Remembrance, spaceport employees and guests join others throughout NASA honoring the contributions of astronauts who have perished in the conquest of space.

  20. 2018 NASA Day of Remembrance

    NASA Image and Video Library

    2018-01-25

    Flowers are placed near the Space Mirror Memorial at the Kennedy Space Center Visitor Complex. The names of the fallen astronauts from Apollo 1, Challenger and Columbia, as well as those of the astronauts who perished in training and in commercial airplane accidents, are emblazoned on the monument. During the annual Day of Remembrance, spaceport employees and guests join others throughout NASA in honoring the contributions of astronauts who have perished in the conquest of space.

  1. A Long History of Supercomputing

    ScienceCinema

    Grider, Gary

    2018-06-13

    As part of its national security science mission, Los Alamos National Laboratory and HPC have a long, entwined history dating back to the earliest days of computing. From bringing the first problem to the nation’s first computer to building the first machine to break the petaflop barrier, Los Alamos holds many “firsts” in HPC breakthroughs. Today, supercomputers are integral to stockpile stewardship and the Laboratory continues to work with vendors in developing the future of HPC.

  2. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo-Rudy cardiac cell mathematical model is implemented on the Cray T3D parallel supercomputer. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
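
    The splitting algorithm mentioned above advances the stiff local reaction kinetics and the diffusive coupling in separate sub-steps. The sketch below illustrates the scheme on a 1-D cable, with the much simpler FitzHugh-Nagumo kinetics standing in for the large Luo-Rudy ionic model; all parameter values are illustrative, and the fixed stability-limited step stands in for the paper's variable time step.

        # Operator-splitting sketch on a 1-D cable (Python), with
        # FitzHugh-Nagumo kinetics as a stand-in for Luo-Rudy.
        import numpy as np

        nx, dx, D = 400, 0.025, 0.1        # cells, spacing, diffusivity
        dt = 0.4 * dx * dx / D             # explicit-diffusion stability limit
        v, w = np.zeros(nx), np.zeros(nx)
        v[:20] = 1.0                       # stimulate the left end

        def react(v, w, a=0.1, eps=0.01, b=0.5):
            """One explicit Euler step of the local (reaction) kinetics."""
            dv = v * (v - a) * (1.0 - v) - w
            dw = eps * (v - b * w)
            return v + dt * dv, w + dt * dw

        for _ in range(5000):
            v, w = react(v, w)                               # reaction sub-step
            lap = np.zeros(nx)
            lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
            v += dt * D * lap                                # diffusion sub-step

        print("cells currently depolarized:", int((v > 0.5).sum()))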

  3. A Layered Solution for Supercomputing Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new campaign-storage tier, based on inexpensive, failure-prone disk drives, that sits between high-performance disk and tape archives.

  4. Advances and issues from the simulation of planetary magnetospheres with recent supercomputer systems

    NASA Astrophysics Data System (ADS)

    Fukazawa, K.; Walker, R. J.; Kimura, T.; Tsuchiya, F.; Murakami, G.; Kita, H.; Tao, C.; Murata, K. T.

    2016-12-01

    Planetary magnetospheres are very large, while phenomena within them occur on meso- and micro-scales; these scales range from tens of planetary radii down to kilometers. To understand the dynamics of these multi-scale systems, numerical simulations have been performed on supercomputer systems. We have long studied the magnetospheres of Earth, Jupiter, and Saturn using 3-dimensional magnetohydrodynamic (MHD) simulations; however, we have not yet captured phenomena near the limits of the MHD approximation, in particular the meso-scale phenomena that MHD can in principle address. Recently we performed our MHD simulation of Earth's magnetosphere on the K computer, the first 10-PFLOPS supercomputer, and obtained multi-scale flow vorticity for both northward and southward IMF. Furthermore, we have access to supercomputer systems with Xeon, SPARC64, and vector-type CPUs, and can compare simulation results between the different systems. Finally, we have compared the results of our parameter survey of the magnetosphere with observations from the HISAKI spacecraft. We have encountered a number of difficulties in effectively using the latest supercomputer systems. First, the size of simulation output has grown greatly; a simulation group now produces over 1 PB of output, and storing and analyzing this much data is difficult. The traditional way to analyze simulation results is to move them to the investigator's home computer, which takes over three months on an end-to-end 10 Gbps network; in reality, problems at some nodes, such as firewalls, can increase the transfer time to over one year. Another issue is post-processing: it is hard to handle even a few TB of simulation output given the memory limitations of a post-processing computer. To overcome these issues, we have developed and introduced parallel network storage, a highly efficient network protocol, and CUI-based visualization tools. In this study, we
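
    A back-of-envelope check of the transfer-time claims above: 1 PB over a 10 Gbps link takes about nine days at full line rate, so the quoted three months implies a sustained throughput near 10% of nominal, and node-level problems such as firewalls can push it far lower. The utilization values below are illustrative assumptions.

        # Transfer times for 1 PB of simulation output over a nominal
        # 10 Gbps end-to-end path at assumed sustained utilizations.
        BITS = 1e15 * 8                    # 1 petabyte in bits
        LINE_RATE = 10e9                   # 10 Gbps nominal

        for util in (1.00, 0.10, 0.03):
            days = BITS / (LINE_RATE * util) / 86_400
            print(f"utilization {util:4.0%}: {days:6.1f} days")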

  5. Accessing Wind Tunnels From NASA's Information Power Grid

    NASA Technical Reports Server (NTRS)

    Becker, Jeff; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The NASA Ames wind tunnel customers are among the first users of the Information Power Grid (IPG) storage system at the NASA Advanced Supercomputing Division. We wanted to store their data on the IPG so that it could be accessed remotely in a secure but timely fashion. In addition, incorporation into the IPG allows future use of grid computational resources, e.g., for post-processing of data or for side-by-side CFD validation. In this paper, we describe the integration of grid data access mechanisms with the existing DARWIN web-based system that is used to access wind tunnel test data. We also show that the combined system has reasonable performance: wind tunnel data may be retrieved at 50 Mbit/s over a 100BASE-T network connected to the IPG storage server.

  6. Japanese project aims at supercomputer that executes 10 GFLOPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burskey, D.

    1984-05-03

    Dubbed supercom by its multicompany design team, the decade-long project's goal is an engineering supercomputer that can execute 10 billion floating-point operations/s, about 20 times faster than today's supercomputers. The project, guided by Japan's Ministry of International Trade and Industry (MITI) and the Agency of Industrial Science and Technology, encompasses three parallel research programs, all aimed at some angle of the supercomputer. One program should lead to superfast logic and memory circuits, another to a system architecture that will afford the best performance, and the last to the software that will ultimately control the computer. The work on logic and memory chips is based on GaAs circuits, Josephson junction devices, and high-electron-mobility transistor structures. The architecture will involve parallel processing.

  7. Supercomputer analysis of sedimentary basins.

    PubMed

    Bethke, C M; Altaner, S P; Harrison, W J; Upson, C

    1988-01-15

    Geological processes of fluid transport and chemical reaction in sedimentary basins have formed many of the earth's energy and mineral resources. These processes can be analyzed on natural time and distance scales with the use of supercomputers. Numerical experiments are presented that give insights to the factors controlling subsurface pressures, temperatures, and reactions; the origin of ores; and the distribution and quality of hydrocarbon reservoirs. The results show that numerical analysis combined with stratigraphic, sea level, and plate tectonic histories provides a powerful tool for studying the evolution of sedimentary basins over geologic time.

  8. Probing the cosmic causes of errors in supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Cosmic rays from outer space are causing errors in supercomputers. The neutrons that pass through the CPU may be causing binary data to flip, leading to incorrect calculations. Los Alamos National Laboratory has developed detectors to determine how much data is being corrupted by these cosmic particles.

  9. NASA Medical Response to Human Spacecraft Accidents

    NASA Technical Reports Server (NTRS)

    Patlach, Robert

    2010-01-01

    Manned space flight is risky business. Accidents have occurred and may occur in the future. NASA's manned space flight programs, with all their successes, have had three fatal accidents, one at the launch pad and two in flight. The Apollo fire and the Challenger and Columbia accidents resulted in a loss of seventeen crewmembers. Russia's manned space flight programs have had three fatal accidents, one ground-based and two in flight. These accidents resulted in the loss of five crewmembers. Additionally, manned spacecraft have encountered numerous close calls with potential for disaster. The NASA Johnson Space Center Flight Safety Office has documented more than 70 spacecraft incidents, many of which could have become serious accidents. At the Johnson Space Center (JSC), medical contingency personnel are assigned to a Mishap Investigation Team. The team deploys to the accident site to gather and preserve evidence for the Accident Investigation Board. The JSC Medical Operations Branch has developed a flight surgeon accident response training class to capture the lessons learned from the Columbia accident. This presentation will address the NASA Mishap Investigation Team's medical objectives, planned response, and potential issues that could arise subsequent to a manned spacecraft accident. Educational Objectives are to understand the medical objectives and issues confronting the Mishap Investigation Team medical personnel subsequent to a human space flight accident.

  10. True 3-D View of 'Columbia Hills' from an Angle

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This mosaic of images from NASA's Mars Exploration Rover Spirit shows a panorama of the 'Columbia Hills' without any adjustment for rover tilt. When viewed through 3-D glasses, depth is much more dramatic and easier to see, compared with a tilt-adjusted version. This is because stereo views are created by producing two images, one corresponding to the view from the panoramic camera's left-eye camera, the other corresponding to the view from the panoramic camera's right-eye camera. The brain processes the visual input more accurately when the two images do not have any vertical offset. In this view, the vertical alignment is nearly perfect, but the horizon appears to curve because of the rover's tilt (because the rover was parked on a steep slope, it was tilted approximately 22 degrees to the west-northwest). Spirit took the images for this 360-degree panorama while en route to higher ground in the 'Columbia Hills.'

    The highest point visible in the hills is 'Husband Hill,' named for space shuttle Columbia Commander Rick Husband. To the right are the rover's tracks through the soil, where it stopped to perform maintenance on its right front wheel in July. In the distance, below the hills, is the floor of Gusev Crater, where Spirit landed Jan. 3, 2004, before traveling more than 3 kilometers (1.8 miles) to reach this point. This vista comprises 188 images taken by Spirit's panoramic camera from its 213th day, or sol, on Mars to its 223rd sol (Aug. 9 to 19, 2004). Team members at NASA's Jet Propulsion Laboratory and Cornell University spent several weeks processing images and producing geometric maps to stitch all the images together in this mosaic. The 360-degree view is presented in a cylindrical-perspective map projection with geometric seam correction.

  11. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, K; Jha, S; Klimentov, A

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This

  12. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  13. Access to Supercomputers. Higher Education Panel Report 69.

    ERIC Educational Resources Information Center

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  14. Space Shuttle orbiter Columbia on the ground at Edwards Air Force Base

    NASA Image and Video Library

    1981-04-14

    S81-30749 (14 April 1981) --- This high angle view shows the scene at Edwards Air Force Base in southern California soon after the successful landing of the space shuttle orbiter Columbia to end STS-1. Service vehicles approach the spacecraft to perform evaluations for safety, egress preparedness, etc. Astronauts John W. Young, commander, and Robert L. Crippen, pilot, are still inside the spacecraft. Photo credit: NASA

  15. KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Shuttle Launch Director Mike Leinbach talks to members of the Stafford-Covey Return to Flight Task Group (SCTG) about reconstruction efforts.

    NASA Image and Video Library

    2003-08-05

    KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Shuttle Launch Director Mike Leinbach (left) talks to members of the Stafford-Covey Return to Flight Task Group (SCTG) about reconstruction efforts. Chairing the task group are Richard O. Covey (second from right), former Space Shuttle commander, and Thomas P. Stafford, Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

  16. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2018-06-13

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new campaign-storage tier, based on inexpensive, failure-prone disk drives, that sits between high-performance disk and tape archives.

  17. NASA's space shuttle Atlantis and its 747 carrier taxied on the Edwards Air Force Base flightline as

    NASA Technical Reports Server (NTRS)

    2001-01-01

    NASA's space shuttle Atlantis and its 747 carrier taxied on the Edwards Air Force Base flightline as the unusual combination left for Kennedy Space Center, Florida, on March 1, 2001. Atlantis and the shuttle Columbia were both airborne on the same day as they migrated from California to Florida. Columbia underwent refurbishing at nearby Palmdale, California.

  18. Petascale Computing: Impact on Future NASA Missions

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2006-01-01

    This slide presentation reviews NASA's use of a new supercomputer, called Columbia, capable of operating at 62 teraflops. At the time, it was the fourth-fastest computer in the world. The computer serves all mission directorates. The applications it would serve include aerospace analysis and design, propulsion subsystem analysis, climate modeling, hurricane prediction, and astrophysics and cosmology.

  19. Loss of Signal, Aeromedical Lessons Learned for the STS-107 Columbia Space Shuttle Mishap

    NASA Technical Reports Server (NTRS)

    Patlach, Robert; Stepaniak, Philip C.; Lane, Helen W.

    2014-01-01

    Loss of Signal, a NASA publication to be available in May 2014, presents the aeromedical lessons learned from the Columbia accident that will enhance crew safety and survival on human space flight missions. These lessons were presented to limited audiences at three separate Aerospace Medical Association (AsMA) conferences: in 2004 in Anchorage, Alaska, on the causes of the accident; in 2005 in Kansas City, Missouri, on the response, recovery, and identification aspects of the investigation; and in 2011, again in Anchorage, Alaska, on future implications for human space flight. As we embark on the development of new spacefaring vehicles through both government and commercial efforts, the NASA Johnson Space Center Human Health and Performance Directorate is continuing to make this information available to a wider audience engaged in the design and development of future space vehicles. Loss of Signal summarizes and consolidates the aeromedical impacts of the Columbia mishap process: the response, recovery, identification, investigative studies, medical and legal forensic analysis, and future preparation that are needed to respond to spacecraft mishaps. The goals of this book are to provide an account of the aeromedical aspects of the Columbia accident and the investigation that followed, and to encourage aerospace medical specialists to continue to capture information, learn from it, and improve procedures and spacecraft designs for the safety of future crews.

  20. Supercomputer algorithms for efficient linear octree encoding of three-dimensional brain images.

    PubMed

    Berger, S B; Reis, D J

    1995-02-01

    We designed and implemented algorithms for three-dimensional (3-D) reconstruction of brain images from serial sections using two important supercomputer architectures, vector and parallel. These architectures were represented by the Cray YMP and Connection Machine CM-2, respectively. The programs operated on linear octree representations of the brain data sets, and achieved 500-800 times acceleration when compared with a conventional laboratory workstation. As the need for higher resolution data sets increases, supercomputer algorithms may offer a means of performing 3-D reconstruction well above current experimental limits.
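
    A minimal sketch of the linear octree idea follows: a binary 3-D volume is reduced to a sorted list of (Morton code, level) leaf octants, merging any octant that is uniformly filled. This serial version is illustrative only; the algorithms in the paper vectorize and parallelize the process for the Cray YMP and CM-2.

        # Linear octree encoding of a binary 3-D volume (Python sketch).
        import numpy as np

        def morton3(x, y, z, bits):
            """Interleave the bits of (x, y, z) into one Morton code."""
            code = 0
            for b in range(bits):
                code |= ((x >> b & 1) << (3*b)) | ((y >> b & 1) << (3*b + 1)) \
                      | ((z >> b & 1) << (3*b + 2))
            return code

        def encode(vol, x=0, y=0, z=0, size=None, out=None):
            """Recursively emit (morton, size) leaves for filled octants."""
            if size is None:
                size, out = vol.shape[0], []
            block = vol[x:x+size, y:y+size, z:z+size]
            if not block.any():
                return out
            if block.all() or size == 1:
                out.append((morton3(x, y, z, int(np.log2(vol.shape[0]))), size))
                return out
            h = size // 2
            for dx in (0, h):
                for dy in (0, h):
                    for dz in (0, h):
                        encode(vol, x+dx, y+dy, z+dz, h, out)
            return out

        vol = np.zeros((8, 8, 8), dtype=bool)
        vol[:4, :4, :4] = True               # one uniformly filled octant
        vol[5, 5, 5] = True                  # one isolated voxel
        print(sorted(encode(vol)))           # two leaves instead of 65 voxels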

  1. New NASA 3D Animation Shows Seven Days of Simulated Earth Weather

    NASA Image and Video Library

    2014-08-11

    This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5). This particular run, called Nature Run 2, was run on a supercomputer, spanned 2 years of simulation time at 30-minute intervals, and produced petabytes of output. The visualization spans a little more than 7 days of simulation time, which is 354 time steps. The time period was chosen because a simulated category-4 typhoon developed off the coast of China. The 7-day period is repeated several times during the course of the visualization. Credit: NASA's Scientific Visualization Studio. Read more or download here: svs.gsfc.nasa.gov/goto?4180

  2. Multi-Tasking: First Shuttle Mission Since Columbia Combines Test Flight, Catch-Up ISS Supply and Maintenance

    NASA Technical Reports Server (NTRS)

    Morring, Frank, Jr.

    2005-01-01

    NASA's space shuttle fleet is nearing its return to flight with a complex mission on board Discovery that will combine tests of new hardware and procedures adopted in the wake of Columbia's loss with urgent repairs and resupply for the International Space Station. A seven-member astronaut crew has trained throughout most of the two-year hiatus in shuttle operations for the 13-day mission, shooting for a three-week launch window that opens May 15. The window, and much else about the STS-114 mission, is constrained by NASA's need to ensure it has fixed the ascent/debris problem that doomed Columbia and its crew as they attempted to reenter the atmosphere on Feb. 1, 2003. The window was selected so Discovery's ascent can be photographed in daylight with 107 different ground- and aircraft-based cameras to monitor the redesigned external tank for debris shedding. Fixed cameras and the shuttle crew will also photograph the tank in space after it has been jettisoned.

  3. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  4. Portland, Mount Hood, & Columbia River Gorge, Oregon, Perspective View

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Portland, the largest city in Oregon, is located on the Columbia River at the northern end of the Willamette Valley. On clear days, Mount Hood highlights the Cascade Mountains backdrop to the east. The Columbia is the largest river in the American Northwest and is navigable up to and well beyond Portland. It is also the only river to fully cross the Cascade Range, and has carved the Columbia River Gorge, which is seen in the left-central part of this view. A series of dams along the river, at topographically favorable sites, provide substantial hydroelectric power to the region.

    This perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM), a Landsat satellite image, and a false sky. Topographic expression is vertically exaggerated two times.

    Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data substantially help in analyzing Landsat images by revealing the third dimension of Earth's surface, topographic height. The Landsat archive is managed by the U.S. Geological Survey's EROS Data Center (USGS EDC).

    Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet

  5. Designing a connectionist network supercomputer.

    PubMed

    Asanović, K; Beck, J; Feldman, J; Morgan, N; Wawrzynek, J

    1993-12-01

    This paper describes an effort at UC Berkeley and the International Computer Science Institute to develop a supercomputer for artificial neural network applications. Our perspective has been strongly influenced by earlier experiences with the construction and use of a simpler machine. In particular, we have observed Amdahl's Law in action in our designs and those of others. These observations inspire attention to many factors beyond fast multiply-accumulate arithmetic. We describe a number of these factors along with rough expressions for their influence, and then give the application targets, machine goals, and system architecture for the machine we are currently designing.
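
    As a minimal illustration of the Amdahl's Law observation above: if multiply-accumulate arithmetic accounts for a fraction p of runtime and only that part is accelerated by a factor s, the overall speedup is 1/((1-p) + p/s) and saturates at 1/(1-p), which is why factors beyond the arithmetic matter. The fraction p = 0.90 below is an illustrative assumption.

        # Amdahl's Law: speedup caps at 1/(1-p) no matter how large s gets.
        def amdahl(p, s):
            return 1.0 / ((1.0 - p) + p / s)

        p = 0.90                           # assumed arithmetic fraction of runtime
        for s in (10, 100, 1000):
            print(f"s={s:4d}: speedup {amdahl(p, s):5.2f}x (cap {1.0 / (1.0 - p):.0f}x)")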

  6. Building black holes: supercomputer cinema.

    PubMed

    Shapiro, S L; Teukolsky, S A

    1988-07-22

    A new computer code can solve Einstein's equations of general relativity for the dynamical evolution of a relativistic star cluster. The cluster may contain a large number of stars that move in a strong gravitational field at speeds approaching the speed of light. Unstable star clusters undergo catastrophic collapse to black holes. The collapse of an unstable cluster to a supermassive black hole at the center of a galaxy may explain the origin of quasars and active galactic nuclei. By means of a supercomputer simulation and color graphics, the whole process can be viewed in real time on a movie screen.

  7. 2018 NASA Day of Remembrance

    NASA Image and Video Library

    2018-01-25

    Following this year's Day of Remembrance ceremony at the Kennedy Space Center Visitor Complex, guests pick up flowers to place at the Space Mirror Memorial. The names of the fallen astronauts from Apollo 1, Challenger and Columbia, as well as those of the astronauts who perished in training and in commercial airplane accidents, are emblazoned on the monument. Each year spaceport employees and guests join others throughout NASA in honoring the contributions of astronauts who have perished in the conquest of space.

  8. Columbia Accident Investigation Board Report. Volume Two

    NASA Technical Reports Server (NTRS)

    Barry, J. R.; Jenkins, D. R.; White, D. J.; Goodman, P. A.; Reingold, L. A.

    2003-01-01

    Volume II of the Report contains appendices that were cited in Volume I. The Columbia Accident Investigation Board produced many of these appendices as working papers during the investigation into the February 1, 2003 destruction of the Space Shuttle Columbia. Other appendices were produced by other organizations (mainly NASA) in support of the Board investigation. In the case of documents that have been published by others, they are included here in the interest of establishing a complete record, but often at less than full page size. Contents include: CAIB Technical Documents Cited in the Report: Reader's Guide to Volume II; Appendix D.a Supplement to the Report; Appendix D.b Corrections to Volume I of the Report; Appendix D.1 STS-107 Training Investigation; Appendix D.2 Payload Operations Checklist 3; Appendix D.3 Fault Tree Closure Summary; Appendix D.4 Fault Tree Elements - Not Closed; Appendix D.5 Space Weather Conditions; Appendix D.6 Payload and Payload Integration; Appendix D.7 Working Scenario; Appendix D.8 Debris Transport Analysis; Appendix D.9 Data Review and Timeline Reconstruction Report; Appendix D.10 Debris Recovery; Appendix D.11 STS-107 Columbia Reconstruction Report; Appendix D.12 Impact Modeling; Appendix D.13 STS-107 In-Flight Options Assessment; Appendix D.14 Orbiter Major Modification (OMM) Review; Appendix D.15 Maintenance, Material, and Management Inputs; Appendix D.16 Public Safety Analysis; Appendix D.17 MER Manager's Tiger Team Checklist; Appendix D.18 Past Reports Review; Appendix D.19 Qualification and Interpretation of Sensor Data from STS-107; Appendix D.20 Bolt Catcher Debris Analysis.

  9. Simulating the Dynamics of Earth's Core: Using NCCS Supercomputers Speeds Calculations

    NASA Technical Reports Server (NTRS)

    2002-01-01

    If one wanted to study Earth's core directly, one would have to drill through about 1,800 miles of solid rock to reach the liquid core, keeping the tunnel from collapsing under pressures that are more than 1 million atmospheres, and then sink an instrument package to the bottom that could operate at 8,000 degrees F with 10,000 tons of force crushing every square inch of its surface. Even then, several of these tunnels would probably be needed to obtain enough data. Faced with difficult or impossible tasks such as these, scientists use other available sources of information, such as seismology, mineralogy, geomagnetism, geodesy, and, above all, physical principles, to derive a model of the core and study it by running computer simulations. One NASA researcher is doing just that on NCCS computers. Physicist and applied mathematician Weijia Kuang, of the Space Geodesy Branch, and his collaborators at Goddard have what he calls the "second-ever" working, usable, self-consistent, fully dynamic, three-dimensional geodynamic model (see "The Geodynamic Theory"). Kuang runs his model simulations on the supercomputers at the NCCS. He and Jeremy Bloxham, of Harvard University, developed the original version, written in Fortran 77, in 1996.

  10. Low gravity environment on-board Columbia during STS-40

    NASA Technical Reports Server (NTRS)

    Rogers, M. J. B.; Baugher, C. R.; Blanchard, R. C.; Delombard, R.; During, W. W.; Matthiesen, D. H.; Neupert, W.; Roussel, P.

    1993-01-01

    The first NASA Spacelab Life Sciences mission (SLS-1) flew 5 June to 14 June 1991 on the orbiter Columbia (STS-40). The purpose of the mission was to investigate the human body's adaptation to the low gravity conditions of space flight and the body's readjustment after the mission to the 1 g environment of Earth. In addition to the life sciences experiments manifested for the Spacelab module, a variety of experiments in other scientific disciplines flew in the Spacelab and in Get Away Special (GAS) canisters on the GAS Bridge Assembly. Several principal investigators designed and flew specialized accelerometer systems to characterize the low gravity environment, in order to better assess the results of their experiments. This was also the first flight of the NASA Microgravity Science and Applications Division (MSAD) sponsored Space Acceleration Measurement System (SAMS) and the first flight of the NASA Orbiter Experiments Office (OEX) sponsored Orbital Acceleration Research Experiment (OARE) accelerometer. We present a brief introduction to seven STS-40 accelerometer systems and discuss and compare the resulting data.

  11. Materials Analysis: A Key to Unlocking the Mystery of the Columbia Tragedy

    NASA Technical Reports Server (NTRS)

    Mayeaux, Brian M.; Collins, Thomas E.; Piascik, Robert S.; Russel, Richard W.; Jerman, Gregory A.; Shah, Sandeep R.; McDanels, Steven J.

    2004-01-01

    Materials analyses of key forensic evidence helped unlock the mystery of the loss of space shuttle Columbia, which disintegrated February 1, 2003, while returning from a 16-day research mission. Following an intensive four-month recovery effort by federal, state, and local emergency management and law officials, Columbia debris was collected, catalogued, and reassembled at the Kennedy Space Center. Engineers and scientists from the Materials and Processes (M&P) team formed by NASA supported Columbia reconstruction efforts, provided factual data through analysis, and conducted experiments to validate the root cause of the accident. Fracture surfaces and thermal effects of selected airframe debris were assessed, and process flows for both nondestructive and destructive sampling and evaluation of debris were developed. The team also assessed left-hand (LH) airframe components that were believed to be associated with a structural breach of Columbia. Analytical data collected by the M&P team showed that a significant thermal event occurred at the left wing leading edge in the proximity of LH reinforced carbon-carbon (RCC) panels 8 and 9. The analysis also showed exposure to temperatures in excess of 1,649 degrees C, which would severely degrade the support structure, tiles, and RCC panel materials. The integrated failure analysis of wing leading edge debris and deposits strongly supported the hypothesis that a breach occurred at LH RCC panel 8.

  12. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and the weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
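
    As a concrete sketch of the provisioning step described above, the snippet below boots a transient VM through the libvirt Python bindings against a node-local KVM/QEMU hypervisor. The domain XML (name, memory, disk image path) is hypothetical and deliberately minimal; among other things, it omits the Ethernet-over-Aries network device the paper relies on.

        # Provision one VM on a compute node via libvirt (Python sketch).
        # The domain definition below is a hypothetical minimal example.
        import libvirt

        DOMAIN_XML = """
        <domain type='kvm'>
          <name>vcluster-node0</name>
          <memory unit='GiB'>8</memory>
          <vcpu>8</vcpu>
          <os><type arch='x86_64'>hvm</type></os>
          <devices>
            <disk type='file' device='disk'>
              <source file='/scratch/images/vcluster-node0.qcow2'/>
              <target dev='vda' bus='virtio'/>
            </disk>
          </devices>
        </domain>
        """

        conn = libvirt.open("qemu:///system")     # hypervisor on this compute node
        dom = conn.createXML(DOMAIN_XML, 0)       # boot a transient (non-persistent) VM
        print(dom.name(), "running:", dom.isActive() == 1)
        conn.close()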

  13. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing given its promise. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks, as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  14. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing them when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  15. Columbia Accident Investigation Board. Volume One

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Columbia Accident Investigation Board's independent investigation into the February 1, 2003, loss of the Space Shuttle Columbia and its seven-member crew lasted nearly seven months. A staff of more than 120, along with some 400 NASA engineers, supported the Board's 13 members. Investigators examined more than 30,000 documents, conducted more than 200 formal interviews, heard testimony from dozens of expert witnesses, and reviewed more than 3,000 inputs from the general public. In addition, more than 25,000 searchers combed vast stretches of the Western United States to retrieve the spacecraft's debris. In the process, Columbia's tragedy was compounded when two debris searchers with the U.S. Forest Service perished in a helicopter accident. This report concludes with recommendations, some of which are specifically identified and prefaced as 'before return to flight.' These recommendations are largely related to the physical cause of the accident, and include preventing the loss of foam, improved imaging of the Space Shuttle stack from liftoff through separation of the External Tank, and on-orbit inspection and repair of the Thermal Protection System. The remaining recommendations, for the most part, stem from the Board's findings on organizational cause factors. While they are not 'before return to flight' recommendations, they can be viewed as 'continuing to fly' recommendations, as they capture the Board's thinking on what changes are necessary to operate the Shuttle and future spacecraft safely in the mid- to long-term. These recommendations reflect both the Board's strong support for return to flight at the earliest date consistent with the overriding objective of safety, and the Board's conviction that operation of the Space Shuttle, and all human space-flight, is a developmental activity with high inherent risks.

  16. STS-32 Columbia, OV-102, liftoff from KSC LC Pad 39A is reflected in waterway

    NASA Image and Video Library

    1990-01-09

    STS032-S-069 (9 Jan. 1990) --- The space shuttle Columbia, with a five member crew aboard, lifts off for the ninth time as STS-32 begins a 10-day mission in Earth orbit. Leaving from Launch Pad 39A at 7:34:59:98 a.m. EST, in this horizontal (cropped 70mm) frame, Columbia is seen reflected in nearby marsh waters some 24 hours after dubious weather at the return-to-launch site (RTLS) had cancelled a scheduled launch. Onboard the spacecraft were astronauts Daniel C. Brandenstein, James D. Wetherbee, Bonnie J. Dunbar, G. David Low and Marsha S. Ivins. Photo credit: NASA

  17. NASA's Aqua Satellite Sees Partial Solar Eclipse Effect in Western Canada

    NASA Image and Video Library

    2017-12-08

    This image shows how a partial solar eclipse darkened clouds over the Yukon and British Columbia in western Canada. It was taken on Oct. 23 at 21:20 UTC (5:20 p.m. EDT) by the Moderate Resolution Imaging Spectroradiometer instrument that flies aboard NASA's Aqua satellite. Credit: NASA Goddard MODIS Rapid Response Team

  18. Mass Storage System Upgrades at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen; Macie, Medora; Saletta, Marty

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) provides supercomputing and mass storage services to over 1200 Earth and space scientists. During the past two years, the mass storage system at the NCCS underwent a great many changes, both major and minor. Tape drives, silo control software, and the mass storage software itself were upgraded, and the mass storage platform was upgraded twice. Some of these upgrades were aimed at achieving year-2000 compliance, while others were simply upgrades to newer and better technologies. In this paper we describe these upgrades.

  19. NASA Headquarters Space Operations Center: Providing Situational Awareness for Spaceflight Contingency Response

    NASA Technical Reports Server (NTRS)

    Maxwell, Theresa G.; Bihner, William J.

    2010-01-01

    This paper discusses the NASA Headquarters mishap response process for the Space Shuttle and International Space Station programs, and how the process has evolved based on lessons learned from the Space Shuttle Challenger and Columbia accidents. It also describes the NASA Headquarters Space Operations Center (SOC) and its special role in facilitating senior management's overall situational awareness of critical spaceflight operations, before, during, and after a mishap, to ensure a timely and effective contingency response.

  20. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCF's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the
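
    The light-weight MPI wrapper mentioned above lets many independent single-threaded payloads run under one batch allocation, one per core. A minimal sketch of that pattern with mpi4py follows; the payload command line and file names are hypothetical.

        # One MPI rank per core, each running one single-threaded payload,
        # so serial jobs fill a supercomputer node under a single batch job.
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # One independent work unit (e.g., a Monte-Carlo seed) per rank;
        # "./simulate_events" is a hypothetical payload executable.
        result = subprocess.run(
            ["./simulate_events", "--seed", str(rank), "--out", f"events.{rank}.root"],
            capture_output=True,
        )

        statuses = comm.gather(result.returncode, root=0)  # pilot collects exit codes
        if rank == 0:
            print(f"{statuses.count(0)}/{size} payloads succeeded")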

  1. The Sky's the Limit When Super Students Meet Supercomputers.

    ERIC Educational Resources Information Center

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  2. Spirit's Neighborhood in 'Columbia Hills,' in Stereo

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Two Earth years ago, NASA's Mars Exploration Rover Spirit touched down in Gusev Crater. The rover marked its first Mars-year (687 Earth days) anniversary in November 2005. On Nov. 2, 2005, shortly before Spirit's Martian anniversary, the Mars Orbiter Camera on NASA's Mars Global Surveyor acquired an image covering approximately 3 kilometers by 3 kilometers (1.9 miles by 1.9 miles) centered on the rover's location in the 'Columbia Hills.'

    The tinted portion of this image gives a stereo, three-dimensional view when observed through 3-D glasses with a red left eye and blue right eye. The tallest peak is 'Husband Hill,' which was climbed by Spirit during much of 2005. The region south (toward the bottom) of these images shows the area where the rover is currently headed. The large dark patch and other similar dark patches in these images are accumulations of windblown sand and granules. North is up; illumination is from the left. The location is near 14.8 degrees south latitude, 184.6 degrees west longitude.

  3. NIEHS/EPA CEHCs: Columbia Center for Children’s Environmental Health - Columbia University

    EPA Pesticide Factsheets

    The Columbia Center for Children’s Environmental Health (CCCEH) at Columbia University studies the long-term health effects of urban pollutants on children raised in minority neighborhoods in inner-city communities.

  4. Supercomputer use in orthopaedic biomechanics research: focus on functional adaptation of bone.

    PubMed

    Hart, R T; Thongpreda, N; Van Buskirk, W C

    1988-01-01

    The authors describe two biomechanical analyses carried out using numerical methods. One is an analysis of the stress and strain in a human mandible, and the other involves modeling the adaptive response of a sheep bone to mechanical loading. The computing environment required for the two types of analyses is discussed. It is shown that a simple stress analysis of a geometrically complex mandible can be accomplished using a minicomputer. However, more sophisticated analyses of the same model with dynamic loading or nonlinear materials would require supercomputer capabilities. A supercomputer is also required for modeling the adaptive response of living bone, even when simple geometric and material models are used.
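
    To make the computational cost concrete, the sketch below integrates a simple phenomenological adaptation rule of the kind such models use (not necessarily the authors' exact formulation): bone density evolves to drive a strain-energy stimulus toward a set point, so a fresh stress analysis is needed at every time step. All values are illustrative.

        # Phenomenological bone-adaptation rule (Python sketch): density rho
        # evolves until the strain stimulus S reaches the set point S0.
        B, S0, dt = 1.0, 0.004, 0.5          # rate constant, set point, step (arbitrary units)
        rho, sigma = 0.8, 5.0                # starting density, applied stress

        for step in range(200):
            E = 3790.0 * rho ** 3            # Carter-Hayes-style modulus-density law
            S = sigma ** 2 / (2.0 * E * rho) # strain-energy stimulus per unit mass
            rho += dt * B * (S - S0)         # adapt density toward the set point

        print(f"equilibrium density ~ {rho:.3f}")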

  5. KENNEDY SPACE CENTER, FLA. - The Stafford-Covey Return to Flight Task Group (SCTG) inspects debris in the Columbia Debris Hangar.

    NASA Image and Video Library

    2003-08-05

    KENNEDY SPACE CENTER, FLA. - The Stafford-Covey Return to Flight Task Group (SCTG) inspects debris in the Columbia Debris Hangar. At right is the model of the left wing that has been used during recovery operations. Chairing the task group are Richard O. Covey, former Space Shuttle commander, and Thomas P. Stafford (third from right, foreground), Apollo commander. Chartered by NASA Administrator Sean O’Keefe, the task group will perform an independent assessment of NASA’s implementation of the final recommendations by the Columbia Accident Investigation Board.

  6. Spirit's Express Route to 'Columbia Hills'

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This map illustrates the Mars Exploration Rover Spirit's position as of sol 112 (April 26, 2004), near the crater called 'Missoula.' Like a train on a tight schedule, Spirit will make regular stops along the way to its ultimate destination, the 'Columbia Hills.' At each stop, or 'station,' the rover will briefly analyze the area's rocks and soils. Each tick mark on the rover's route represents one sol's worth of travel, or about 60 to 70 meters (200 to 230 feet). Rover planners estimate that Spirit will reach the hills around mid-June. Presently, the rover is stopped at a site called 'Plains Station.'

    The color thermal data show how well different surface features hold onto heat. Red indicates warmth; blue indicates coolness. Areas with higher temperatures are more likely to be rocky, as rocks absorb heat. Lower temperatures denote small particles and fewer rocks. During its traverse, Spirit will document the causes of these temperature variations.

    The map comprises data from the camera on NASA's Mars Global Surveyor orbiter and the thermal emission imaging system on NASA's Mars Odyssey orbiter.

  7. Spirit's Express Route to 'Columbia Hills'

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This map illustrates the Mars Exploration Rover Spirit's position as of sol 112 (April 26, 2004), near the crater called 'Missoula.' Like a train on a tight schedule, Spirit will make regular stops along the way to its ultimate destination, the 'Columbia Hills.' At each stop, or 'station,' the rover will briefly analyze the area's rocks and soils. Each tick mark on the rover's route represents one sol's worth of travel, or about 60 to 70 meters (200 to 230 feet). Rover planners estimate that Spirit will reach the hills around mid-June. Presently, the rover is stopped at a site called 'Plains Station.'

    The color thermal data show how well different surface features hold onto heat. Red indicates a higher thermal inertia associated with rocky terrain (cooler in the day, warmer at night); blue indicates a lower thermal inertia associated with smaller particles and fewer rocks (warmer at night, cooler in the day). During its traverse, Spirit will document the causes of these thermal variations.

    The map comprises data from the camera on NASA's Mars Global Surveyor orbiter and the thermal emission imaging system on NASA's Mars Odyssey orbiter.

  8. Loss of Signal, Aeromedical Lessons Learned from the STS-107 Columbia Space Shuttle Mishap

    NASA Technical Reports Server (NTRS)

    Stepaniak, Phillip C.; Patlach, Robert

    2014-01-01

    Loss of Signal, a NASA publication to be available in May 2014, presents the aeromedical lessons learned from the Columbia accident that will enhance crew safety and survival on human space flight missions. These lessons were presented to limited audiences at three separate Aerospace Medical Association (AsMA) conferences: in 2004 in Anchorage, Alaska, on the causes of the accident; in 2005 in Kansas City, Missouri, on the response, recovery, and identification aspects of the investigation; and in 2011, again in Anchorage, Alaska, on future implications for human space flight. As we embark on the development of new spacefaring vehicles through both government and commercial efforts, the NASA Johnson Space Center Human Health and Performance Directorate is continuing to make this information available to a wider audience engaged in the design and development of future space vehicles. Loss of Signal summarizes and consolidates the aeromedical impacts of the Columbia mishap process: the response, recovery, identification, investigative studies, medical and legal forensic analysis, and future preparation that are needed to respond to spacecraft mishaps. The goal of this book is to provide an account of the aeromedical aspects of the Columbia accident and the investigation that followed, and to encourage aerospace medical specialists to continue to capture information, learn from it, and improve procedures and spacecraft designs for the safety of future crews. This poster presents an outline of Loss of Signal contents and highlights from each of five sections: the mission and mishap, the response, the investigation, the analysis, and the future.

  9. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-14

    In the RLV Hangar, a Columbia Reconstruction Project Team member examines pieces of debris from the Space Shuttle Columbia. The debris has begun arriving at KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  10. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-15

    Columbia Reconstruction Project Team members move debris from the Space Shuttle Columbia into a designated sector of the RLV Hangar. The debris is being shipped to KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  11. NOAA announces significant investment in next generation of supercomputers

    Science.gov Websites

    Today, NOAA announced the next phase in the agency's efforts to increase supercomputing capacity, an upgrade that in turn will lead to more timely, accurate, and reliable weather forecasts.

  12. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-15

    Columbia Reconstruction Project Team members study diagrams to aid in the placement of debris from the Space Shuttle Columbia in the RLV Hangar. The debris is being shipped to KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  13. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-15

    Columbia Reconstruction Project Team members move a piece of debris from the Space Shuttle Columbia into a specified sector of the RLV Hangar. The debris is being shipped to KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  14. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-15

    A Columbia Reconstruction Project Team member uses a laptop computer to catalog debris from the Space Shuttle Columbia in the RLV Hangar. The debris is being shipped to KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  15. NASA Managers Set July 20 As Launch Date for Chandra Telescope

    NASA Astrophysics Data System (ADS)

    1999-07-01

    NASA managers set Tuesday, July 20, 1999, as the official launch date for NASA's second Space Shuttle mission of the year, a flight that will mark the first female Shuttle commander and the launch of the Chandra X-Ray Observatory. Columbia is scheduled to lift off from Launch Pad 39-B at the Kennedy Space Center on July 20 at the opening of a 46-minute launch window at 12:36 a.m. EDT. Columbia's planned five-day mission is scheduled to end with a night landing at the Kennedy Space Center just after 11:30 p.m. EDT on July 24. Following its deployment from the Shuttle, Chandra will join the Hubble Space Telescope and the Compton Gamma Ray Observatory as the next in NASA's series of "Great Observatories." Chandra will spend at least five years in a highly elliptical orbit which will carry it one-third of the way to the Moon to observe invisible and often violent realms of the cosmos containing some of the most intriguing mysteries in astronomy, ranging from comets in our solar system to quasars at the edge of the universe. Columbia's 26th flight is led by Air Force Col. Eileen Collins, who will command a Space Shuttle mission following two previous flights as a pilot. The STS-93 pilot is Navy Captain Jeff Ashby, who will be making his first flight into space. The three mission specialists for the flight are: Air Force Lt. Col. Catherine "Cady" Coleman, who will be making her second flight into space; Steven A. Hawley, Ph.D., making his fifth flight; and French Air Force Col. Michel Tognini of the French Space Agency (CNES), who is making his first Space Shuttle flight and second trip into space after spending two weeks on the Mir Space Station as a visiting cosmonaut in 1992. NASA press releases and other information are available automatically by sending an Internet electronic mail message to domo@hq.nasa.gov. In the body of the message (not the subject line) users should type the words "subscribe press-release" (no quotes). The system will reply with a confirmation via E-mail of

  16. C3: A Collaborative Web Framework for NASA Earth Exchange

    NASA Astrophysics Data System (ADS)

    Foughty, E.; Fattarsi, C.; Hardoyo, C.; Kluck, D.; Wang, L.; Matthews, B.; Das, K.; Srivastava, A.; Votava, P.; Nemani, R. R.

    2010-12-01

    The NASA Earth Exchange (NEX) is a new collaboration platform for the Earth science community that provides a mechanism for scientific collaboration and knowledge sharing. NEX combines NASA advanced supercomputing resources, Earth system modeling, workflow management, NASA remote sensing data archives, and a collaborative communication platform to deliver a complete work environment in which users can explore and analyze large datasets, run modeling codes, collaborate on new or existing projects, and quickly share results among the Earth science communities. NEX is designed primarily for use by the NASA Earth science community to address scientific grand challenges. The NEX web portal component provides an on-line collaborative environment for sharing of Earth science models, data, analysis tools, and scientific results by researchers. In addition, the NEX portal also serves as a knowledge network that allows researchers to connect and collaborate based on the research they are involved in, specific geographic area of interest, field of study, etc. Features of the NEX web portal include: member profiles, resource sharing (data sets, algorithms, models, publications), communication tools (commenting, messaging, social tagging), project tools (wikis, blogs) and more. The NEX web portal is built on the proven technologies and policies of DASHlink.arc.nasa.gov (one of NASA's first science social media websites). The core component of the web portal is the C3 framework, which was built using Django and is being deployed as a common framework for a number of collaborative sites throughout NASA.

  17. Sequence search on a supercomputer.

    PubMed

    Gotoh, O; Tagashira, Y

    1986-01-10

    A set of programs was developed for searching nucleic acid and protein sequence databases for sequences similar to a given sequence. The programs, written in FORTRAN 77, were optimized for vector processing on a Hitachi S810-20 supercomputer. A search of a 500-residue protein sequence against the entire PIR database Ver. 1.0 (1) (0.5M residues) is carried out in a CPU time of 45 sec. About 4 min is required for an exhaustive search of a 1500-base nucleotide sequence against all mammalian sequences (1.2M bases) in GenBank Ver. 29.0. The CPU time is reduced to about a quarter with a faster version.
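
    For orientation, the record above describes a vectorized database similarity search; below is a minimal Python sketch of the local-alignment scoring that such searches compute. The function name, scoring values, and toy database are illustrative, not taken from the paper, and the original FORTRAN 77 code vectorized the inner loop rather than writing it this way.

      def smith_waterman_score(query, target, match=2, mismatch=-1, gap=-2):
          """Best local-alignment score between query and target (linear gaps)."""
          prev = [0] * (len(target) + 1)
          best = 0
          for q in query:
              curr = [0]
              for j, t in enumerate(target, start=1):
                  diag = prev[j - 1] + (match if q == t else mismatch)
                  score = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
                  curr.append(score)
                  best = max(best, score)
              prev = curr
          return best

      # Rank database entries by similarity to a query sequence.
      database = {"seqA": "ACGTGCA", "seqB": "TTTTTTT"}
      hits = sorted(database, key=lambda k: smith_waterman_score("ACGT", database[k]), reverse=True)
      print(hits)  # ['seqA', 'seqB']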

  18. NASA Standard for Models and Simulations: Credibility Assessment Scale

    NASA Technical Reports Server (NTRS)

    Babula, Maria; Bertch, William J.; Green, Lawrence L.; Hale, Joseph P.; Moser, Gary E.; Steele, Martin J.; Sylvester, Andre; Woods, Jody

    2008-01-01

    As one of its many responses to the 2003 Space Shuttle Columbia accident, NASA decided to develop a formal standard for models and simulations (M and S). Work commenced in May 2005. An interim version was issued in late 2006. This interim version underwent considerable revision following an extensive Agency-wide review in 2007, along with some additional revisions as a result of the review by the NASA Engineering Management Board (EMB) in the first half of 2008. Issuance of the revised, permanent version, hereafter referred to as the M and S Standard or just the Standard, occurred in July 2008.

  19. Review of NASA's (National Aeronautics and Space Administration) Numerical Aerodynamic Simulation Program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    NASA has planned a supercomputer for computational fluid dynamics research since the mid-1970s. With the approval of the Numerical Aerodynamic Simulation Program as a FY 1984 new start, Congress requested an assessment of the program's objectives, projected short- and long-term uses, program design, computer architecture, user needs, and handling of proprietary and classified information. Specifically requested was an examination of the merits of proceeding with multiple high-speed processor (HSP) systems contrasted with a single high-speed processor system. The panel found NASA's objectives and projected uses sound, and the projected distribution of users as realistic as possible at this stage. The multiple-HSP approach, whereby new, more powerful state-of-the-art HSPs would be integrated into a flexible network, was judged to present major advantages over any single-HSP system.

  20. Deploying the ODISEES Ontology-guided Search in the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Gleason, J. L.; Cotnoir, M.; Spaulding, R.; Deardorff, G.

    2016-12-01

    Robust, semantically rich metadata can support data discovery and access, and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Despite this, for users not already familiar with the data in a given archive, most metadata is insufficient to help them find appropriate data for their projects. With this in mind, the Ontology-driven Interactive Search Environment (ODISEES) Data Discovery Portal was developed to enable users to find and download data variables that satisfy precise, parameter-level criteria, even when they know little or nothing about the naming conventions employed by data providers, or where suitable data might be archived. ODISEES relies on an Earth science ontology and metadata repository that provide an ontological framework for describing NASA data holdings with enough detail and fidelity to enable researchers to find, compare and evaluate individual data variables. Users can search for data by indicating the specific parameters desired, and comparing the results in a table that lets them quickly determine which data is most suitable. ODISEES and OLYMPUS, a tool for generating the semantically enhanced metadata used by ODISEES, are being developed in collaboration with the NASA Earth Exchange (NEX) project at the NASA Ames Research Center to prototype a robust data discovery and access service that could be made available to NEX users. NEX is a collaborative platform that provides researchers with access to TB to PB-scale datasets and analysis tools to operate on those data. By integrating ODISEES into the NEX Web Portal we hope to enable NEX users to locate datasets relevant to their research and download them directly into the NAS environment, where they can run applications using those datasets on the NAS supercomputers. This poster will describe the prototype integration of ODISEES into the NEX portal development environment, the mechanism implemented to use NASA APIs to retrieve

  1. Application of technology developed for flight simulation at NASA. Langley Research Center

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1991-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations including mathematical model computation and data input/output to the simulators must be deterministic and be completed in as short a time as possible. Personnel at NASA's Langley Research Center are currently developing the use of supercomputers for simulation mathematical model computation for real-time simulation. This, coupled with the use of an open systems software architecture, will advance the state-of-the-art in real-time flight simulation.

  2. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing, including a Torus, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. A DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
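
    A minimal sketch of the wrap-around neighbor addressing implied by a 3-D torus network of the kind claimed here; the dimensions are illustrative, and a real machine computes routes in hardware.

      def torus_neighbors(x, y, z, dims=(8, 8, 16)):
          """Six nearest-neighbor node coordinates on a wrapped 3-D torus."""
          nx, ny, nz = dims
          return [((x + 1) % nx, y, z), ((x - 1) % nx, y, z),
                  (x, (y + 1) % ny, z), (x, (y - 1) % ny, z),
                  (x, y, (z + 1) % nz), (x, y, (z - 1) % nz)]

      # A corner node wraps around to the opposite faces:
      print(torus_neighbors(0, 0, 0))  # [(1, 0, 0), (7, 0, 0), (0, 1, 0), ...]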

  3. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on board UAS as a method to ease operations by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power, and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on board includes benchmarking for hardware selection, the software architecture, and communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. Results are obtained for payload image processing algorithms that determine in real time which data snapshot to gather and transfer to the ground according to the needs of the mission, the processing time, and the power consumed.
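
    A minimal sketch, assuming a hot-spot (fire) detection payload: the frame is split into tiles, each worker processes one tile, and only the per-tile verdict is downlinked instead of the raw frame. The threshold, frame size, and worker count are illustrative stand-ins for the paper's TILE-Gx36 partitioning.

      from multiprocessing import Pool

      def detect_in_tile(tile):
          """True if any pixel in the tile exceeds a (hypothetical) hot threshold."""
          return any(px > 200 for row in tile for px in row)

      def split_tiles(frame, n):
          step = max(1, len(frame) // n)
          return [frame[i:i + step] for i in range(0, len(frame), step)]

      if __name__ == "__main__":
          frame = [[0] * 640 for _ in range(480)]
          frame[100][320] = 255                 # simulated hot spot
          with Pool(4) as pool:                 # e.g. 36 workers on a TILE-Gx36-class device
              flags = pool.map(detect_in_tile, split_tiles(frame, 4))
          # Downlink only the verdict (and tile index), not the raw frame.
          print("alert tiles:", [i for i, f in enumerate(flags) if f])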

  4. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.
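
    A minimal sketch of how a PBS-based pipeline can fan one batch job out per input scan; the directives, resource requests, and script names here are illustrative assumptions, not the actual STAMPS interface.

      import textwrap
      from pathlib import Path

      PBS_TEMPLATE = textwrap.dedent("""\
          #!/bin/bash
          #PBS -N stamp_{subject}
          #PBS -l nodes=1:ppn=4,walltime=02:00:00
          cd $PBS_O_WORKDIR
          ./run_stamp_pipeline --subject {subject}
          """)

      def write_jobs(subjects, outdir="jobs"):
          """Emit one PBS script per subject scan; submit each with qsub."""
          Path(outdir).mkdir(exist_ok=True)
          for s in subjects:
              Path(outdir, f"{s}.pbs").write_text(PBS_TEMPLATE.format(subject=s))

      write_jobs(["sub01", "sub02"])  # then: qsub jobs/sub01.pbs, etc.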

  5. 78 FR 37222 - Columbia Organic Chemical Company Site, Columbia, Richland County, South Carolina; Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-20

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9826-7; CERCLA-04-2013-3761] Columbia Organic Chemical... Agency has entered into a settlement with Stephen Reichlyn concerning the Columbia Organic Chemical... comments by site name Columbia Organic Chemical Company by one of the following methods: www.epa.gov...

  6. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    NASA Astrophysics Data System (ADS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-09-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch-mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch-oriented HPC environment. This paper reports on progress toward a proof-of-concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof-of-concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof-of-concept system is done using the lattice QCD MILC code. These types of user-reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.

  7. Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide

    DOE PAGES

    Tang, William; Wang, Bei; Ethier, Stephane; ...

    2016-11-01

    The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from the MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
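
    A minimal sketch of a data-movement performance model in the spirit the abstract describes: a kernel's time is taken as the slower of its memory traffic and its arithmetic. All numbers are illustrative, not GTC-P measurements.

      def predicted_time(bytes_moved, flops, bandwidth_gbs, peak_gflops):
          """Estimate kernel time as the max of memory-bound and compute-bound times."""
          t_mem = bytes_moved / (bandwidth_gbs * 1e9)
          t_flop = flops / (peak_gflops * 1e9)
          return max(t_mem, t_flop)

      # A charge-deposition-like PIC kernel: heavy scattered memory traffic, few flops,
      # so the memory term dominates and data movement alone predicts the runtime.
      print(predicted_time(bytes_moved=8e9, flops=2e9, bandwidth_gbs=100, peak_gflops=1000))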

  8. Optical clock distribution in supercomputers using polyimide-based waveguides

    NASA Astrophysics Data System (ADS)

    Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.

    1999-04-01

    Guided-wave optics is a promising way to deliver high-speed clock signals in a supercomputer with minimized clock skew. Si-CMOS-compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020 for the high-speed massive clock signal distribution for a Cray T-90 supercomputer board has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. Surface-normal couplers take the optical clock signals into and out of the H-tree polyimide waveguides, which facilitates the integration of photodetectors to convert optical signals to electrical signals. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profile from the 45-degree surface-normal coupler was calculated using the Fresnel approximation; the theoretical result is in good agreement with the experimental result. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.
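
    A minimal sketch of the loss budgeting behind such a 1-to-48 fanout; all per-component figures are illustrative, and conventions differ on whether the unavoidable 1/N power division is counted in a quoted insertion-loss figure, so it is reported separately here.

      import math

      def loss_budget(fanout, split_excess_db, prop_db_per_cm, path_cm, coupler_db):
          """Excess loss along one H-tree path, plus the ideal 1/N division in dB."""
          stages = math.ceil(math.log2(fanout))   # 48 outputs -> 6 binary splitter stages
          excess = stages * split_excess_db + prop_db_per_cm * path_cm + coupler_db
          return excess, 10 * math.log10(fanout)

      excess_db, ideal_db = loss_budget(48, split_excess_db=0.3, prop_db_per_cm=0.2,
                                        path_cm=20, coupler_db=0.5)
      print(excess_db, ideal_db)  # 6.3 dB excess, ~16.8 dB ideal power division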

  9. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  10. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella, and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in half the time and with a quarter of the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
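
    A minimal sketch of the sorted k-mer list structure mentioned above, used here for binary-search seed lookup; k and the toy sequence are illustrative.

      import bisect

      def sorted_kmers(seq, k=4):
          """(k-mer, offset) pairs sorted by k-mer, for binary-search seeding."""
          return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

      def find_seed(kmers, query_kmer):
          """All offsets of query_kmer, found by binary search on the sorted list."""
          i = bisect.bisect_left(kmers, (query_kmer, -1))
          hits = []
          while i < len(kmers) and kmers[i][0] == query_kmer:
              hits.append(kmers[i][1])
              i += 1
          return hits

      kmers = sorted_kmers("ACGTACGTAC")
      print(find_seed(kmers, "ACGT"))  # [0, 4]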

  11. Knowledge Acquisition and Management for the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Nemani, R. R.

    2013-12-01

    NASA Earth Exchange (NEX) is a data, computing, and knowledge collaboratory that houses NASA satellite, climate, and ancillary data, where a focused community can come together to share modeling and analysis codes, scientific results, knowledge, and expertise on a centralized platform with access to large supercomputing resources. As more and more projects are executed on NEX, we are increasingly focusing on capturing the knowledge of NEX users and providing mechanisms for sharing it with the community in order to facilitate reuse and accelerate research. There are many possible knowledge contributions to NEX: a wiki entry on the NEX portal contributed by a developer, information extracted from a publication in an automated way, or a workflow captured during code execution on the supercomputing platform. The goal of the NEX knowledge platform is to capture and organize this information and make it easily accessible to the NEX community and beyond. The knowledge acquisition process consists of three main facets: data and metadata, workflows and processes, and web-based information. Once the knowledge is acquired, it is processed in a number of ways, ranging from custom metadata parsers to entity extraction using natural language processing techniques. The processed information is linked with existing taxonomies and aligned with an internal ontology (which heavily reuses a number of external ontologies). This forms a knowledge graph that can then be used to improve users' search query results as well as provide additional analytics capabilities to the NEX system. Such a knowledge graph will be an important building block in creating a dynamic knowledge base for the NEX community, where knowledge is both generated and easily shared.
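
    A minimal sketch, assuming a triple-store-style representation: extracted entities are stored as (subject, predicate, object) triples that search and analytics can traverse. The entities and predicates shown are hypothetical.

      from collections import defaultdict

      graph = defaultdict(set)  # (subject, predicate) -> set of objects

      def add_triple(s, p, o):
          graph[(s, p)].add(o)

      # Triples like these might come from wiki pages, publications, or captured workflows.
      add_triple("MOD17-workflow", "uses_dataset", "MODIS-FPAR")
      add_triple("MOD17-workflow", "authored_by", "user42")
      add_triple("MODIS-FPAR", "is_a", "vegetation product")

      def related(entity, predicate):
          return graph.get((entity, predicate), set())

      print(related("MOD17-workflow", "uses_dataset"))  # {'MODIS-FPAR'}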

  12. KENNEDY SPACE CENTER, FLA. - In the Vehicle Assembly Building, Shuttle Launch Director Mike Leinbach, Center Director Jim Kennedy and NASA Vehicle Manager Scott Thurston unveil a plaque honoring “Columbia, the crew of STS-107, and their loved ones.” The site is the “Columbia room,” a permanent repository of the debris collected in the aftermath of the tragic accident Feb. 1, 2003, that claimed the orbiter and lives of the seven-member crew. The dedication of the plaque was made in front of the 40-member preservation team.

    NASA Image and Video Library

    2004-01-29

    KENNEDY SPACE CENTER, FLA. - In the Vehicle Assembly Building, Shuttle Launch Director Mike Leinbach, Center Director Jim Kennedy and NASA Vehicle Manager Scott Thurston unveil a plaque honoring “Columbia, the crew of STS-107, and their loved ones.” The site is the “Columbia room,” a permanent repository of the debris collected in the aftermath of the tragic accident Feb. 1, 2003, that claimed the orbiter and lives of the seven-member crew. The dedication of the plaque was made in front of the 40-member preservation team.

  13. KENNEDY SPACE CENTER, FLA. -- NASA Associate Administrator for Space Flight William F. Readdy addresses the family members of the STS-107 astronauts, other dignitaries, members of the university community and the public gathered for the dedication ceremony of the Columbia Village at the Florida Institute of Technology in Melbourne, Fla. Each of the seven new residence halls in the complex is named for one of the STS-107 astronauts who perished during the Columbia accident -- Rick Husband, Willie McCool, Laurel Clark, Michael Anderson, David Brown, Kalpana Chawla, and Ilan Ramon.

    NASA Image and Video Library

    2003-10-28

    KENNEDY SPACE CENTER, FLA. -- NASA Associate Administrator for Space Flight William F. Readdy addresses the family members of the STS-107 astronauts, other dignitaries, members of the university community and the public gathered for the dedication ceremony of the Columbia Village at the Florida Institute of Technology in Melbourne, Fla. Each of the seven new residence halls in the complex is named for one of the STS-107 astronauts who perished during the Columbia accident -- Rick Husband, Willie McCool, Laurel Clark, Michael Anderson, David Brown, Kalpana Chawla, and Ilan Ramon.

  14. NASA Handbook for Models and Simulations: An Implementation Guide for NASA-STD-7009

    NASA Technical Reports Server (NTRS)

    Steele, Martin J.

    2013-01-01

    The purpose of this Handbook is to provide technical information, clarification, examples, processes, and techniques to help institute good modeling and simulation practices in the National Aeronautics and Space Administration (NASA). As a companion guide to NASA-STD-7009, Standard for Models and Simulations, this Handbook provides a broader scope of information than may be included in a Standard and promotes good practices in the production, use, and consumption of NASA modeling and simulation products. NASA-STD-7009 specifies what a modeling and simulation activity shall or should do (in the requirements) but does not prescribe how the requirements are to be met, which varies with the specific engineering discipline, or who is responsible for complying with the requirements, which depends on the size and type of project. A guidance document, which is not constrained by the requirements of a Standard, is better suited to address these additional aspects and provide necessary clarification. This Handbook stems from the Space Shuttle Columbia Accident Investigation (2003), which called for Agency-wide improvements in the "development, documentation, and operation of models and simulations" and subsequently elicited additional guidance from the NASA Office of the Chief Engineer to include "a standard method to assess the credibility of the models and simulations." General methods applicable across the broad spectrum of model and simulation (M&S) disciplines were sought to help guide the modeling and simulation processes within NASA and to provide for consistent reporting of M&S activities and analysis results. From this, the standardized process for the M&S activity was developed. The major contents of this Handbook are the implementation details of the general M&S requirements of NASA-STD-7009, including explanations, examples, and suggestions for improving the credibility assessment of an M&S-based analysis.

  15. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
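
    For orientation, a minimal sketch of the direct-sum gravitational kernel at the heart of any N-body method; production codes such as HACC replace this O(N^2) loop with split long-range (grid) and short-range (particle) solvers to reach trillions of particles. Units and the softening value are illustrative.

      import math

      def accelerations(pos, mass, eps=1e-3):
          """Softened direct-sum gravity (G = 1): acceleration on each particle."""
          n = len(pos)
          acc = [[0.0, 0.0, 0.0] for _ in range(n)]
          for i in range(n):
              for j in range(n):
                  if i == j:
                      continue
                  d = [pos[j][k] - pos[i][k] for k in range(3)]
                  r2 = sum(c * c for c in d) + eps * eps
                  f = mass[j] / (r2 * math.sqrt(r2))
                  for k in range(3):
                      acc[i][k] += f * d[k]
          return acc

      print(accelerations([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [1.0, 1.0]))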

  16. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  17. Accident Case Study of Organizational Silence Communication Breakdown: Shuttle Columbia, Mission STS-107

    NASA Technical Reports Server (NTRS)

    Rocha, Rodney

    2011-01-01

    This report has been developed by the National Aeronautics and Space Administration (NASA) ESMD Risk and Knowledge Management team. This document provides a point-in-time, cumulative, summary of key lessons learned derived from the official Columbia Accident Investigation Board (CAIB). Lessons learned invariably address challenges and risks and the way in which these areas have been addressed. Accordingly the risk management thread is woven throughout the document. This report is accompanied by a video that will be sent at request

  18. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    NASA emphasizes crew safety and system reliability but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  19. Supercomputers Join the Fight against Cancer – U.S. Department of Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The Department of Energy has some of the best supercomputers in the world. Now, they’re joining the fight against cancer. Learn about our new partnership with the National Cancer Institute and GlaxoSmithKline Pharmaceuticals.

  20. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling unfeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
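
    A minimal sketch of the surrogate-agent idea: the expensive simulator is sampled once (the supercomputer phase), then a cheap learned stand-in answers calibration queries. The simulator and nearest-neighbor learner here are deliberately trivial stand-ins for EnergyPlus and the Autotune agents.

      import random

      def simulate(params):                      # stand-in for one EnergyPlus run
          return 3.0 * params[0] + 0.5 * params[1]

      # 1) Expensive phase: sample the parametric space (millions of runs in the paper).
      samples = [(random.random(), random.random()) for _ in range(1000)]
      table = [(p, simulate(p)) for p in samples]

      # 2) Cheap phase: a nearest-neighbor surrogate answers instead of re-simulating.
      def surrogate(params):
          nearest = min(table, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], params)))
          return nearest[1]

      # 3) Calibration: pick the sampled parameters whose prediction matches measured data.
      measured = 2.0
      best = min(samples, key=lambda p: abs(surrogate(p) - measured))
      print(best)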

  1. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization; (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization); and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy; yet utilization was affected little. In particular, these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
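
    A minimal sketch contrasting the naive FIFO first-fit policy with backfilling on a fixed CPU pool; a real backfill scheduler also guarantees that a jumped-ahead job will not delay the reserved start time of the waiting head job, which this sketch omits. Job sizes are illustrative.

      def fifo_first_fit(queue, free_cpus):
          """Start jobs strictly in order; stop at the first that doesn't fit."""
          started = []
          for job_id, cpus, _runtime in queue:
              if cpus > free_cpus:
                  break                    # head-of-line blocking leaves CPUs idle
              free_cpus -= cpus
              started.append(job_id)
          return started

      def backfill(queue, free_cpus):
          """Let smaller jobs jump ahead while the head job waits for CPUs."""
          started = []
          for job_id, cpus, _runtime in queue:
              if cpus <= free_cpus:
                  free_cpus -= cpus
                  started.append(job_id)
          return started

      queue = [("big", 96, 10), ("small1", 16, 2), ("small2", 8, 1)]
      print(fifo_first_fit(queue, 64))  # [] -- all 64 CPUs sit idle
      print(backfill(queue, 64))        # ['small1', 'small2']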

  2. The Columbia River Research Laboratory

    USGS Publications Warehouse

    Maule, Alec

    2005-01-01

    The U.S. Geological Survey's Columbia River Research Laboratory (CRRL) was established in 1978 at Cook, Washington, in the Columbia River Gorge east of Portland, Oregon. The CRRL, as part of the Western Fisheries Research Center, conducts research on fishery issues in the Columbia River Basin. Our mission is to: 'Serve the public by providing scientific information to support the stewardship of our Nation's fish and aquatic resources...by conducting objective, relevant research'.

  3. NASA high performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1993-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long-term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects as well as summaries of individual research and development programs within each project.

  4. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  5. NASA Engineering Excellence: A Case Study on Strengthening an Engineering Organization

    NASA Technical Reports Server (NTRS)

    Shivers, C. Herbert; Wessel, Vernon W.

    2006-01-01

    NASA implemented a system of technical authority following the Columbia Accident Investigation Board (CAIB) report calling for independent technical authority to be exercised on Space Shuttle Program activities via a virtual organization of personnel exercising specific technical authority responsibilities. After the current NASA Administrator reported for duty, and following the first of two planned "Shuttle Return to Flight" missions, the NASA Chief Engineer and the Administrator redirected the Independent Technical Authority to a program of Technical Excellence and Technical Authority exercised within the existing engineering organizations. This paper discusses the original implementation of technical authority and the transition to the new implementation of technical excellence, including specific measures aimed at improving the safety of future Shuttle and space exploration flights.

  6. STS-40 Columbia, Orbiter Vehicle (OV) 102, crew insignia

    NASA Image and Video Library

    1990-05-01

    STS40-S-001 (May 1990) --- The STS-40 patch makes a contemporary statement focusing on human beings living and working in space. Against a background of the universe, seven silver stars, interspersed about the orbital path of the space shuttle Columbia, represent the seven crew members. The orbiter's flight path forms a double helix, designed to represent the DNA molecule common to all living creatures. In the words of a crew spokesman, "...(the helix) affirms the ceaseless expansion of human life and American involvement in space while simultaneously emphasizing the medical and biological studies to which this flight is dedicated." Above Columbia, the phrase "Spacelab Life Sciences 1" defines both the shuttle mission and its payload. Leonardo da Vinci's Vitruvian Man, silhouetted against the blue darkness of the heavens, is in the upper center portion of the patch. With one foot on Earth and arms extended to touch the shuttle's orbit, he serves, the crew feels, as a powerful embodiment of the extension of human inquiry from the boundaries of Earth to the limitless laboratory of space. Sturdily poised amid the stars, he links scientists on Earth to the scientists in space, asserting the harmony of efforts which produce meaningful scientific spaceflight missions. A brilliant red and yellow Earth limb (center) links Earth to space as it radiates from a Native American symbol for the sun. At the frontier of space, the crew states, the traditional symbol for the sun vividly links America's past to America's future. Beneath the orbiting space shuttle, the darkness of night rests peacefully over the United States. Drawn by artist Sean Collins, the STS-40 space shuttle patch was designed by the crew members for the flight. The NASA insignia design for space shuttle flights is reserved for use by the astronauts and for other official use as the NASA Administrator may authorize. Public availability has been approved only in the forms of illustrations by the various news media

  7. NASA's Geospatial Interoperability Office (GIO) Program

    NASA Technical Reports Server (NTRS)

    Weir, Patricia

    2004-01-01

    NASA produces vast amounts of information about the Earth from satellites, supercomputer models, and other sources. These data are most useful when made easily accessible to NASA researchers and scientists, to NASA's partner Federal agencies, and to society as a whole. A NASA goal is to apply its data for knowledge gain, decision support, and understanding of Earth and other planetary systems. The NASA Earth Science Enterprise (ESE) Geospatial Interoperability Office (GIO) Program leads the development, promotion, and implementation of information technology standards that accelerate and expand the delivery of NASA's Earth system science research through integrated systems solutions. Our overarching goal is to make it easy for decision-makers, scientists, and citizens to use NASA's science information. NASA's Federal partners currently participate with NASA and one another in the development and implementation of geospatial standards to ensure the most efficient and effective access to one another's data. Through the GIO, NASA participates with its Federal partners in implementing interoperability standards in support of E-Gov and the associated President's Management Agenda initiatives by collaborating on standards development. Through partnerships with government, private industry, education, and communities, the GIO works toward enhancing the ESE Applications Division in the area of National Applications and decision support systems. The GIO provides geospatial standards leadership within NASA, represents NASA on the Federal Geographic Data Committee (FGDC) Coordination Working Group, chairs the FGDC's Geospatial Applications and Interoperability Working Group (GAI), and supports development and implementation efforts such as the Earth Science Gateway (ESG), the Space Time Tool Kit, and the Web Map Services (WMS) Global Mosaic. The GIO supports NASA in the collection and dissemination of geospatial interoperability standards needs and progress throughout the agency including

  8. Mössbauer spectroscopy on Mars: goethite in the Columbia Hills at Gusev crater

    NASA Astrophysics Data System (ADS)

    Klingelhöfer, G.; Degrave, E.; Morris, R. V.; van Alboom, A.; de Resende, V. G.; de Souza, P. A.; Rodionov, D.; Schröder, C.; Ming, D. W.; Yen, A.

    2005-11-01

    In January 2004 the U.S. space agency NASA landed two rovers on the surface of Mars, both carrying the Mainz Mössbauer spectrometer MIMOS II. The instrument on the Mars Exploration Rover (MER) Spirit analyzed soils and rocks on the plains and in the Columbia Hills of the Gusev crater landing site on Mars. The surface material in the plains has an olivine basaltic signature [1, 5], suggesting physical rather than chemical weathering processes in the plains. The Mössbauer signature of the Columbia Hills surface material is very different, ranging from nearly unaltered material to highly altered material. Some of the rocks, in particular a rock named Clovis, contain a significant amount of the Fe oxyhydroxide goethite, α-FeOOH, which is mineralogical evidence for aqueous processes because it is formed only under aqueous conditions. In this paper we describe the analysis of these data using hyperfine field distributions (HFD) and discuss the results in comparison to terrestrial analogues.

  9. LLMapReduce: Multi-Lingual Map-Reduce for Supercomputing Environments

    DTIC Science & Technology

    2015-11-20

    The map-reduce parallel programming model has its origins in the 1990s. Popularized by Google [36] and Apache Hadoop [37], map-reduce has become a staple technology of the ever-growing big data community. LLMapReduce brings map-reduce to big data users running on a supercomputer and dramatically simplifies map-reduce programming by providing a simple parallel programming model.
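
    A minimal word-count sketch of the map-reduce model itself; LLMapReduce's actual interface is command-line driven, so this illustrates only the paradigm, not the tool.

      from collections import Counter
      from functools import reduce

      def map_phase(chunk):
          return Counter(chunk.split())      # one mapper per input chunk

      def reduce_phase(a, b):
          return a + b                       # merge partial counts

      chunks = ["to be or not", "to be"]
      counts = reduce(reduce_phase, map(map_phase, chunks))
      print(counts.most_common(2))           # [('to', 2), ('be', 2)]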

  10. The BlueGene/L supercomputer

    NASA Astrophysics Data System (ADS)

    Bhanot, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2003-05-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network, and a Gigabit Ethernet for I/O. 65,536 such nodes are connected into a 3-d torus with a geometry of 32×32×64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.
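
    The quoted system totals follow from the per-node figures in the abstract; a quick arithmetic check (which gives roughly 367 Tflops against the quoted peak of 360) is sketched below.

      nodes = 32 * 32 * 64                         # 65,536 nodes in the 3-d torus
      peak_tflops = nodes * 2 * 2.8 / 1000         # two 2.8-Gflops FPUs per node
      memory_tb = nodes * 256 / (1024 * 1024)      # 256 MB of external memory per node
      print(nodes, round(peak_tflops), memory_tb)  # 65536, 367, 16.0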

  11. NSF Says It Will Support Supercomputer Centers in California and Illinois.

    ERIC Educational Resources Information Center

    Strosnider, Kim; Young, Jeffrey R.

    1997-01-01

    The National Science Foundation will increase support for supercomputer centers at the University of California, San Diego and the University of Illinois, Urbana-Champaign, while leaving unclear the status of the program at Cornell University (New York) and a cooperative Carnegie-Mellon University (Pennsylvania) and University of Pittsburgh…

  12. STS-55 Columbia, OV-102, crew members board STA NASA 948 at Ellington Field

    NASA Image and Video Library

    1993-03-17

    S93-30754 (September 1992) --- Astronaut Catherine G. Coleman, who had recently begun a year's training and evaluation program at the Johnson Space Center (JSC), sits in the rear station of a T-38 jet trainer. She was about to take a familiarization flight in the jet. Coleman was later named mission specialist for NASA's STS-73/United States Microgravity Laboratory (USML-2) mission.

  13. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-03-31

    A member of the Columbia Reconstruction Project Team points to a search grid indicating locations where debris has been found. Approximately 4,500 ground searchers have covered approximately 56 percent of the planned 555,000-acre search area. About 28 percent of the Shuttle Columbia, by weight, has been delivered to the RLV Hangar to date.

  14. NASA Contingency Shuttle Crew Support (CSCS) Medical Operations

    NASA Technical Reports Server (NTRS)

    Adams, Adrien

    2010-01-01

    The genesis of the space shuttle began in the 1930s, when Eugene Sanger came up with the idea of a recyclable rocket plane that could carry a crew of people. The very first Shuttle to enter space was the Shuttle "Columbia", which launched on April 12, 1981. Not only was "Columbia" the first Shuttle to be launched, but it was also the first to utilize solid fuel rockets for U.S. manned flight. The primary objectives given to "Columbia" were to check out the overall Shuttle system, accomplish a safe ascent into orbit, and return to Earth for a safe landing. Subsequent to its first flight, Columbia flew 27 more missions, but on February 1, 2003, after a highly successful 16-day mission, the STS-107 mission ended in tragedy. With all Shuttle flight successes come failures such as the fatal in-flight accident of STS-107. As a result of the STS-107 accident, and other close calls, the NASA Space Shuttle Program developed contingency procedures for a rescue mission by another Shuttle if an on-orbit repair was not possible. A rescue mission would be considered for a situation where a Shuttle and the crew were not in immediate danger but were unable to return to Earth or land safely. For Shuttle missions to the International Space Station (ISS), plans were developed so the Shuttle crew would remain on board the ISS for an extended period of time until rescued by a "rescue" Shuttle. The damaged Shuttle would subsequently be de-orbited unmanned. During the period when the ISS crew and Shuttle crew were on board simultaneously, multiple issues would need to be worked, including, but not limited to: crew diet, exercise, psychological support, workload, and ground contingency support.

  15. Emplacement of Columbia River flood basalt

    NASA Astrophysics Data System (ADS)

    Reidel, Stephen P.

    1998-11-01

    Evidence is examined for the emplacement of the Umatilla, Wilbur Creek, and Asotin Members of the Columbia River Basalt Group. These flows erupted in the eastern part of the Columbia Plateau during the waning phases of volcanism. The Umatilla Member consists of two flows in the Lewiston basin area and southwestern Columbia Plateau. These flows mixed to form one flow in the central Columbia Plateau. The composition of the younger flow is preserved in the center, and the composition of the older flow is at the top and bottom. There is a complete gradation between the two. Flows of the Wilbur Creek and Asotin Members erupted individually in the eastern Columbia Plateau and also mixed together in the central Columbia Plateau. Comparison of the emplacement patterns to intraflow structures and textures of the flows suggests that very little time elapsed between eruptions. In addition, the amount of crust that formed on the earlier flows prior to mixing also suggests rapid emplacement. Calculations of volumetric flow rates through constrictions in channels suggest emplacement times of weeks to months under fast laminar flow for all three members. A new model for the emplacement of Columbia River Basalt Group flows is proposed that suggests rapid eruption and emplacement for the main part of the flow and slower emplacement along the margins as the flow margin expands.

  16. Supercomputer Simulations Help Develop New Approach to Fight Antibiotic Resistance

    ScienceCinema

    Zgurskaya, Helen; Smith, Jeremy

    2018-06-13

    ORNL leveraged powerful supercomputing to support research led by University of Oklahoma scientists to identify chemicals that seek out and disrupt bacterial proteins called efflux pumps, known to be a major cause of antibiotic resistance. By running simulations on Titan, the team selected molecules most likely to target and potentially disable the assembly of efflux pumps found in E. coli bacteria cells.

  17. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  18. NASA-STD-7009 Guidance Document for Human Health and Performance Models and Simulations

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Mulugeta, Lealem; Nelson, Emily S.; Myers, Jerry G.

    2014-01-01

    Rigorous verification, validation, and credibility (VVC) processes are imperative to ensure that models and simulations (M&S) are sufficiently reliable to address issues within their intended scope. The NASA standard for M&S, NASA-STD-7009 (7009) [1], was an outcome of the Columbia Accident Investigation Board (CAIB), intended to ensure M&S are developed, applied, and interpreted appropriately for making decisions that may impact crew or mission safety. Because the 7009 focus is engineering systems, a NASA-STD-7009 Guidance Document is being developed to augment the 7009 and provide information, tools, and techniques applicable to the probabilistic and deterministic biological M&S more prevalent in human health and performance (HHP) and space biomedical research and operations.

  19. NASA/NOAA Earth Science Electronic Theater 1999. Earth Science Observations, Analysis and Visualization: Roots in the 60s: Vision for the Next Millennium

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz

    1999-01-01

    The Etheater presents visualizations which span the period from the original Suomi/Hasler animations of the first ATS-1 GEO weather satellite images in 1966 to the latest 1999 NASA Earth Science Vision for the next 25 years. Hot off the SGI-Onyx Graphics-Supercomputer are NASA's visualizations of Hurricanes Mitch, Georges, Fran and Linda. These storms have been recently featured on the covers of National Geographic, Time, Newsweek and Popular Science. Highlights will be shown from the NASA hurricane visualization resource video tape in standard and HDTV that has been used repeatedly this season on National and International network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that appeared in the November BAMS.

  20. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes solutions of these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming so. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.

  1. The Reconstruction and Failure Analysis of the Space Shuttle Columbia

    NASA Technical Reports Server (NTRS)

    Russell, Richard; Mayeaux, Brian; McDanels, Steven; Piascik, Robert; Shah, Sandeep; Jerman, Greg; Collins, Thomas; Woodworth, Warren

    2009-01-01

    Several days following the Columbia accident a team formed and began planning for the reconstruction of Columbia. A hangar at the Kennedy Space Center was selected for this effort due to its size, available technical workforce and materials science laboratories, and access to the vehicle ground processing infrastructure. The Reconstruction team established processes for receiving, handling, decontaminating, tracking, identifying, cleaning, and assessing the debris. Initially, a 2-dimensional reconstruction of the Orbiter outer mold line was developed. As the investigation progressed, fixtures were developed that allowed a 3-dimensional reconstruction of the forward portions of the left wing's leading edge. To support the reconstructions and forensic analyses a Materials and Processes (M&P) team was formed. This M&P team established processes for recording factual observations, debris cleaning, and engineering analysis. Fracture surfaces and thermal effects of selected airframe debris were assessed, and process flows for both nondestructive and destructive sampling and evaluation of debris were developed. The team also assessed left hand airframe components that were believed to be associated with a structural breach of Columbia. A major portion of this analysis was evaluation of the metallic deposits that were prevalent on left wing leading edge components. Extensive evaluation of the visual, metallurgical, and chemical nature of the deposits led to conclusions consistent with the visual assessments and interpretations of the NASA lead teams and the findings of the Columbia Accident Investigation Board. Analytical data collected by the M&P team showed that a significant thermal event occurred at the left wing leading edge in the proximity of LH RCC Panels 8-9, and the deposits were correlated with overheating of the wing leading edge components in these areas. The analysis of deposits also showed exposure to temperatures in excess of 1649 C.

  2. Multi-petascale highly efficient parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. The node design integrates a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel-processing message passing.
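
    As a concrete illustration of the interconnect topology described above, the sketch below computes nearest neighbors on a five-dimensional torus; the torus extents are hypothetical, not the machine's actual dimensions.

    ```python
    # Illustrative sketch of nearest-neighbor addressing on a 5-D torus.
    # The dimensions chosen here are assumptions for demonstration only.
    dims = (4, 4, 4, 4, 2)  # assumed torus extents

    def torus_neighbors(node, dims):
        """Return the 2*len(dims) neighbors of `node`, wrapping at the edges."""
        neighbors = []
        for axis, extent in enumerate(dims):
            for step in (-1, 1):
                coords = list(node)
                coords[axis] = (coords[axis] + step) % extent  # periodic wrap
                neighbors.append(tuple(coords))
        return neighbors

    print(torus_neighbors((0, 0, 0, 0, 0), dims))  # 10 neighbors in 5 dimensions
    ```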

  3. STS-93 / Columbia Flight Crew Photo Op & QA at Pad for TCDT

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The primary objective of the STS-93 mission was to deploy the Advanced X-ray Astrophysical Facility, which had been renamed the Chandra X-ray Observatory in honor of the late Indian-American Nobel Laureate Subrahmanyan Chandrasekhar. The mission was launched at 12:31 a.m. EDT on July 23, 1999 onboard the space shuttle Columbia. The mission was led by Commander Eileen Collins. The crew was Pilot Jeff Ashby and Mission Specialists Cady Coleman, Steve Hawley and Michel Tognini from the Centre National d'Etudes Spatiales (CNES). This videotape shows a pre-flight press conference. Prior to the astronauts' arrival at the bunker area in front of the launch pad, the narrator discusses some of the training that the astronauts are scheduled to have prior to the launch, particularly the emergency egress procedures. Commander Collins introduces the crew and fields questions from the assembled press. Many questions are asked about the experiences of Commander Collins and Mission Specialist Coleman as women in NASA. The press conference takes place outside in front of the Shuttle Columbia on the launch pad.

  4. The impact of the U.S. supercomputing initiative will be global

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Dona

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). This bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  5. NASA Dryden Flight Research Center: We Fly What Others Only Imagine

    NASA Technical Reports Server (NTRS)

    Ennix-Sandhu, Kimberly

    2006-01-01

    A powerpoint presentation of NASA Dryden's historical and future flight programs is shown. The contents include: 1) Getting To Know NASA; 2) Our Namesake; 3) To Fly What Others Only Imagine; 4) Dryden's Mission: Advancing Technology and Science Through Flight; 5) X-1 The First of the Rocket-Powered Research Aircraft; 6) X-1 Landing; 7) Lunar Landing Research Vehicle (LLRV) Liftoff and Landing; 8) Linear Aerospike SR-71 Experiment (LASRE) Ground Test; 9) M2-F1 (The Flying Bathtub); 10) M2-F2 Drop Test; 11) Enterprise Space Shuttle Prototype; 12) Space Shuttle Columbia STS-1; 13) STS-114 Landing-August 2005; 14) Crew Exploration Vehicle (CEV); 15) What You Can Do To Succeed!; and 16) NASA Dryden Flight Research Center: This is What We Do!

  6. Supercomputing 2002: NAS Demo Abstracts

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor)

    2002-01-01

    The hyperwall is a new concept in visual supercomputing, conceived and developed by the NAS Exploratory Computing Group. The hyperwall will allow simultaneous and coordinated visualization and interaction of an array of processes, such as the computations of a parameter study or the parallel evolutions of a genetic algorithm population. Making over 65 million pixels available to the user, the hyperwall will enable and elicit qualitatively new ways of leveraging computers to accomplish science. It is currently still unclear whether we will be able to transport the hyperwall to SC02. The crucial display frame still has not been completed by the metal fabrication shop, although they promised an August delivery. Also, we are still working the fragile node issue, which may require transplantation of the compute nodes from the present 2U cases into 3U cases. This modification will increase the present 3-rack configuration to 5 racks.

  7. Developments in the simulation of compressible inviscid and viscous flow on supercomputers

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Buning, P. G.

    1985-01-01

    In anticipation of future supercomputers, finite difference codes are rapidly being extended to simulate three-dimensional compressible flow about complex configurations. Some of these developments are reviewed. The importance of computational flow visualization and diagnostic methods to three-dimensional flow simulation is also briefly discussed.

  8. STS-65 Earth observation of Lake Chad, Africa, taken aboard Columbia, OV-102

    NASA Technical Reports Server (NTRS)

    1994-01-01

    STS-65 Earth observation taken aboard Columbia, Orbiter Vehicle (OV) 102, shows Lake Chad, Africa. This is another long-term ecological monitoring site for NASA scientists. Lake Chad was first photographed from space in 1965. A 25-year length-of-record data set exists for this environmentally important area. A number of these scenes have been digitized, rectified, and classified, and the results show that the lake area has been shrinking and that only 15% to 20% of the surface water is visible on space images. NASA's objective in monitoring this lake is to document the intra- and interannual areal changes of the largest standing water body in the Sahelian biome of North Africa. These areal changes are an indicator of the presence or absence of drought across the arguably overpopulated and overgrazed nations of the Sahel, which may have exceeded their biological carrying capacity.

  9. Storage and network bandwidth requirements through the year 2000 for the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen

    1996-01-01

    The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.

  10. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We describe the Cray shared-memory vector architecture and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
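
    The vectorization gain described above can be illustrated with a NumPy analogy (the original work was Cray Fortran): replacing an element-by-element loop with a whole-array operation. The growth update below is a toy stand-in for the grassland model.

    ```python
    # Toy illustration of loop vectorization, not the actual grassland code:
    # the scalar version updates one element per iteration; the vector version
    # expresses the same update as a single whole-array operation.
    import numpy as np

    biomass = np.random.rand(1_000_000)
    rate = 0.01

    def scalar_update(b):
        out = np.empty_like(b)
        for i in range(b.size):          # one element per iteration
            out[i] = b[i] + rate * b[i] * (1.0 - b[i])
        return out

    def vector_update(b):
        return b + rate * b * (1.0 - b)  # whole array in one vector operation

    # both forms compute the same result
    assert np.allclose(scalar_update(biomass[:1000]), vector_update(biomass[:1000]))
    ```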

  11. National Directory of NASA Space Grant Contacts

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Congress enacted the National Space Grant College and Fellowship Program (also known as Space Grant). NASA's Space Grant Program funds education, research, and public service programs in all 50 States, the District of Columbia, and the Commonwealth of Puerto Rico through 52 university-based Space Grant consortia. These consortia form a network of colleges and universities, industry partners, State and local Government agencies, other Federal agencies, museum and science centers, and nonprofit organizations, all with interests in aerospace education, research, and training. Space Grant programs emphasize the diversity of human resources, the participation of students in research, and the communication of the benefits of science and technology to the general public. Each year approximately one-third of the NASA Space Grant funds support scholarships and fellowships for United States students at the undergraduate and graduate levels. Typically, at least 20 percent of these awards go to students from underrepresented groups, and at least 40 percent go to women. Most Space Grant student awards include a mentored research experience with university faculty or NASA scientists or engineers. Space Grant consortia also fund curriculum enhancement and faculty development programs. Consortia members administer precollege and public service education programs in their States. The 52 consortia typically leverage NASA funds with matching contributions from State, local, and other university sources, which more than double the NASA funding. For more information, consult the Space Grant Web site at http://education.nasa.gov/spacegrant/

  12. STS-87 crew and VIPs inspect the orbiter Columbia after landing

    NASA Technical Reports Server (NTRS)

    1997-01-01

    STS-87 crew members regard the tiles underneath the orbiter Columbia shortly after its return to Runway 33 at Kennedy Space Center's Shuttle Landing Facility. Pointing to the tiles is the president of the National Space Development Agency (NASDA) of Japan, Isao Uchida, who is standing next to NASA Administrator Daniel Goldin. STS-87 Commander Kevin Kregel, at right, looks on as Pilot Steve Lindsey follows behind him to continue inspecting the orbiter. STS-87 concluded its mission with a main gear touchdown at 7:20:04 a.m. EST Dec. 5, drawing the 15-day, 16-hour and 34-minute-long mission of 6.5 million miles to a close. Also onboard the orbiter were Mission Specialists Winston Scott; Kalpana Chawla, Ph.D.; and Takao Doi, Ph.D., of NASDA; along with Payload Specialist Leonid Kadenyuk of the National Space Agency of Ukraine. During the 88th Space Shuttle mission, the crew performed experiments on the United States Microgravity Payload-4 and pollinated plants as part of the Collaborative Ukrainian Experiment. This was the 12th landing for Columbia at KSC and the 41st KSC landing in the history of the Space Shuttle program.

  13. Spirit Mini-TES Observations: From Bonneville Crater to the Columbia Hills.

    NASA Astrophysics Data System (ADS)

    Blaney, D. L.; Athena Science

    2004-11-01

    During the Mars Exploration Rover Extended Mission the Spirit rover traveled from the rim of the crater informally known as "Bonneville Crater" into the hills informally known as the "Columbia Hills" in Gusev Crater. During this >3 km drive, Mini-TES (Miniature Thermal Emission Spectrometer) collected systematic observations to characterize spectral diversity, as well as targeted observations of rocks, soils, rover tracks, and trenches. Surface temperatures steadily decreased during the drive and arrival into the Columbia Hills with the approach of winter. Mini-TES covers the 5-29 micron spectral region with a 20 mrad aperture that is co-registered with the panoramic and navigation cameras. As at the landing site (Christensen et al., Science, 2004), many dark rocks in the plains between "Bonneville Crater" and the "Columbia Hills" show long wavelength (15-25 μm) absorptions due to olivine, consistent with the detection of olivine-bearing basalt at this site from orbital TES infrared spectroscopy. Rocks with the spectral signature of olivine are rarer in the Columbia Hills. Measurements of outcrops of presumably intact bedrock lack any olivine signature and are consistent with other results indicating that these rocks are highly altered. Rock coatings and fine dust on rocks are common. Soils have thin dust coatings, and disturbed soil (e.g., rover tracks and trenches) is consistent with basalt. Mini-TES observations were coordinated with Panoramic Camera (Pancam) observations to allow us to search for correlations of visible spectral properties with the infrared. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA.

  14. The Latest Developments in NASA's Long Duration Balloon Systems

    NASA Astrophysics Data System (ADS)

    Stilwell, Bryan D.

    The Columbia Scientific Balloon Facility, located in Palestine, Texas, offers the scientific community a high-altitude balloon-based communications platform. Scientific payload mass can exceed 2722 kg, with balloon float altitudes averaging 40 km and flight durations of up to 100 days. Many developments in electrical systems have occurred over the more than 25 years of long duration flights. This paper will discuss the latest developments in electronic systems related to long duration flights. Over the years, long duration flights have increased to durations exceeding 56 days. In order to support these longer flights, the systems have had to increase in complexity and reliability. Several different systems that have been upgraded and/or enhanced will be discussed.

  15. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
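
    The abstract does not spell out the load-balancing strategy, but a simple scheme in the same spirit is longest-task-first greedy assignment, sketched below with a hypothetical list of document sizes; this is an illustration, not the actual paraBTM algorithm.

    ```python
    # Greedy longest-processing-time-first load balancing: sort documents by
    # size and always hand the next one to the least-loaded worker.
    # Document sizes and worker count are hypothetical.
    import heapq

    def balance(doc_sizes, n_workers):
        """Assign document indices to workers, balancing total size."""
        heap = [(0, w) for w in range(n_workers)]      # (current load, worker id)
        heapq.heapify(heap)
        assignment = {w: [] for w in range(n_workers)}
        for doc, size in sorted(enumerate(doc_sizes), key=lambda x: -x[1]):
            load, w = heapq.heappop(heap)
            assignment[w].append(doc)
            heapq.heappush(heap, (load + size, w))
        return assignment

    print(balance([40, 10, 30, 25, 5, 90], n_workers=2))
    ```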

  16. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Technical Reports Server (NTRS)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  17. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    NASA Astrophysics Data System (ADS)

    Landgrebe, Anton J.

    1987-03-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  18. Parallel-vector solution of large-scale structural analysis problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1989-01-01

    A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
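
    For context, below is a plain serial Choleski factorization (A = L L^T) in Python; the paper's contribution lies in mapping loops of this kind onto parallel processors and vector units, which this reference sketch does not attempt.

    ```python
    # Serial reference Choleski factorization A = L L^T. The column update
    # below the diagonal is the kind of inner step that vectorizes well.
    import numpy as np

    def cholesky(A):
        n = A.shape[0]
        L = np.zeros_like(A)
        for j in range(n):
            # diagonal term: subtract the already-computed row of L
            L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
            # column update below the diagonal (vectorizable inner step)
            L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
        return L

    A = np.array([[4., 2.], [2., 3.]])
    assert np.allclose(cholesky(A) @ cholesky(A).T, A)
    ```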

  19. Regional Sediment Budget of the Columbia River Littoral Cell, USA

    USGS Publications Warehouse

    Buijsman, Maarten C.; Sherwood, C.R.; Gibbs, A.E.; Gelfenbaum, G.; Kaminsky, G.M.; Ruggiero, P.; Franklin, J.

    2002-01-01

    Summary -- In this Open-File Report we present calculations of changes in bathymetric and topographic volumes for the Grays Harbor, Willapa Bay, and Columbia River entrances and the adjacent coasts of North Beach, Grayland Plains, Long Beach, and Clatsop Plains for four intervals: pre-jetty - 1920s (Interval 1), 1920s - 1950s (Interval 2), 1950s - 1990s (Interval 3), and 1920s - 1990s (Interval 4). This analysis is part of the Southwest Washington Coastal Erosion Study (SWCES), the goals of which are to understand and predict the morphologic behavior of the Columbia River littoral cell on a management scale of tens of kilometers and decades. We obtain topographic Light Detection and Ranging (LIDAR) data from a joint project by the U.S. Geological Survey (USGS), National Oceanic and Atmospheric Administration (NOAA), National Aeronautics and Space Administration (NASA), and the Washington State Department of Ecology (DOE), and bathymetric data from the U.S. Coast and Geodetic Survey (USC&GS), U.S. Army Corps of Engineers (USACE), USGS, and the DOE. Shoreline data are digitized from T-Sheets and aerial photographs from the USC&GS and National Ocean Service (NOS). Instead of uncritically adjusting each survey to NAVD88, a common vertical land-based datum, we adjust some surveys to produce optimal results according to the following criteria. First, we minimize offsets in overlapping surveys within the same era, and second, we minimize bathymetric changes (relative to the 1990s) in deep water, where we assume minimal change has taken place. We grid bathymetric and topographic datasets using kriging and triangulation algorithms, calculate bathymetric-change surfaces for each interval, and calculate volume changes within polygons that are overlaid on the bathymetric-change surfaces. We find similar morphologic changes near the entrances to Grays Harbor and the Columbia River following jetty construction between 1898 and 1916 at the Grays Harbor entrance and between 1885 and
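
    A minimal sketch of the volume-change step described above, assuming two co-registered synthetic grids and a rectangular stand-in for one analysis polygon; the grid spacing and data are placeholders, not the report's datasets.

    ```python
    # Difference two gridded bathymetric surfaces and integrate over cell area
    # inside a mask (standing in for an analysis polygon). Data are synthetic.
    import numpy as np

    cell_area_m2 = 50.0 * 50.0                    # assumed grid spacing
    z_1920s = np.random.rand(200, 200) * -10.0    # placeholder depth grids (m)
    z_1990s = z_1920s + np.random.randn(200, 200) * 0.1

    polygon_mask = np.zeros_like(z_1920s, dtype=bool)
    polygon_mask[50:150, 50:150] = True           # stand-in for one polygon

    dz = z_1990s - z_1920s                        # bathymetric-change surface
    volume_change_m3 = dz[polygon_mask].sum() * cell_area_m2
    print(f"net volume change in polygon: {volume_change_m3:.3e} m^3")
    ```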

  20. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-04-15

    Members of the Columbia Reconstruction Project team gather for a group photo around an enlarged replica of the STS-107 crew emblem just delivered to the RLV Hangar. The emblem will be installed on an outside wall of the hangar. Inside the hangar, the team is identifying pieces of Columbia debris as they arrive at Kennedy Space Center and placing them on a grid approximating the shape of the orbiter.

  1. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.
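
    As a baseline for the methods surveyed, here is a standard unpreconditioned conjugate gradient iteration; the survey's concern is how preconditioned variants of this loop map onto vector and parallel hardware, which this serial sketch does not address.

    ```python
    # Unpreconditioned conjugate gradient for a symmetric positive-definite
    # system A x = b; serial reference version.
    import numpy as np

    def cg(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x                 # residual
        p = r.copy()                  # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    A = np.array([[4., 1.], [1., 3.]])
    b = np.array([1., 2.])
    assert np.allclose(A @ cg(A, b), b)
    ```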

  2. Purpose, Principles, and Challenges of the NASA Engineering and Safety Center

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    2016-01-01

    NASA formed the NASA Engineering and Safety Center in 2003 following the Space Shuttle Columbia accident. It is an Agency-level, program-independent engineering resource supporting NASA's missions, programs, and projects. It functions to identify, resolve, and communicate engineering issues, risks, and, particularly, alternative technical opinions to NASA senior management. The goal is to help ensure fully informed, risk-based programmatic and operational decision-making processes. To date, the NASA Engineering and Safety Center (NESC) has conducted or is actively working on over 600 technical studies and projects, spread across all NASA Mission Directorates, and for various other U.S. Government and non-governmental agencies and organizations. Since inception, NESC human spaceflight activities in particular have transitioned from Shuttle Return-to-Flight and completion of the International Space Station (ISS) to ISS operations and Orion Multi-Purpose Crew Vehicle (MPCV), Space Launch System (SLS), and Commercial Crew Program (CCP) vehicle design, integration, test, and certification. This transition has changed the character of NESC studies. For these development programs, the NESC must operate in a broader, system-level design and certification context, as compared to the reactive, time-critical, hardware-specific nature of flight operations support.

  3. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  4. A Perspective on Computational Aerothermodynamics at NASA

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2007-01-01

    The evolving role of computational aerothermodynamics (CA) within NASA over the past 20 years is reviewed. The presentation highlights contributions to understanding the Space Shuttle pitching moment anomaly observed in the first shuttle flight, prediction of a static instability for Mars Pathfinder, and the use of CA for damage assessment in post-Columbia mission support. In the view forward, several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented to illustrate capabilities and limitations. Opportunities to advance the state-of-art in algorithms, grid generation and adaptation, and code validation are identified.

  5. NASA's space shuttle Atlantis and its 747 carrier taxied on the Edwards Air Force Base flightline as the unusual combination left for Kennedy Space Center, Florida, on March 1, 2001

    NASA Image and Video Library

    2001-03-01

    NASA's space shuttle Atlantis and its 747 carrier taxied on the Edwards Air Force Base flightline as the unusual combination left for Kennedy Space Center, Florida, on March 1, 2001. Atlantis and the shuttle Columbia were both airborne on the same day as they migrated from California to Florida. Columbia underwent refurbishing at nearby Palmdale, California.

  6. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
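
    A loose Python analogue of the multitasking strategy (the paper's actual mechanism is the C-Fortran-Unix interface on a Cray): split the grid among processes, apply the same kernel to each slice, and reassemble. The smoothing kernel is a toy stand-in for a flow-solver update.

    ```python
    # Split a grid across processes, apply one kernel per slice, reassemble.
    # The kernel is a toy; the process count is a hypothetical choice.
    import numpy as np
    from multiprocessing import Pool

    def relax_slice(block):
        # toy smoothing update standing in for the flow-solver kernel
        return 0.5 * (block + np.roll(block, 1, axis=1))

    if __name__ == "__main__":
        grid = np.random.rand(512, 512)
        chunks = np.array_split(grid, 4, axis=0)   # one chunk per processor
        with Pool(processes=4) as pool:
            result = np.vstack(pool.map(relax_slice, chunks))
        print(result.shape)
    ```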

  7. The Q continuum simulation: Harnessing the power of GPU accelerated supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Frontiere, Nicholas; Sewell, Chris

    2015-08-01

    Modeling large-scale sky survey observations is a key driver for the continuing development of high-resolution, large-volume, cosmological simulations. We report the first results from the "Q Continuum" cosmological N-body simulation run carried out on the GPU-accelerated supercomputer Titan. The simulation encompasses a volume of (1300 Mpc)^3 and evolves more than half a trillion particles, leading to a particle mass resolution of m_p ≈ 1.5 x 10^8 M_sun. At this mass resolution, the Q Continuum run is currently the largest cosmology simulation available. It enables the construction of detailed synthetic sky catalogs, encompassing different modeling methodologies, including semi-analytic modeling and sub-halo abundance matching in a large, cosmological volume. Here we describe the simulation and outputs in detail and present first results for a range of cosmological statistics, such as mass power spectra, halo mass functions, and halo mass-concentration relations for different epochs. We also provide details on challenges connected to running a simulation on almost 90% of Titan, one of the fastest supercomputers in the world, including our usage of Titan's GPU accelerators.
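
    The quoted particle mass can be sanity-checked from the volume and particle count via m_p = Omega_m * rho_crit * V / N; the cosmological parameters below are assumed round values, not the run's exact ones.

    ```python
    # Back-of-the-envelope check of the particle mass resolution.
    # Omega_m and h are assumed round numbers, not the simulation's exact values.
    OMEGA_M, h = 0.27, 0.7                    # assumed cosmology
    RHO_CRIT = 2.775e11 * h**2                # critical density, M_sun / Mpc^3
    volume_mpc3 = 1300.0 ** 3
    n_particles = 0.55e12                     # "more than half a trillion"

    m_p = OMEGA_M * RHO_CRIT * volume_mpc3 / n_particles
    print(f"m_p ~ {m_p:.2e} M_sun")           # lands near the quoted 1.5e8 M_sun
    ```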

  8. Development of NASA's Models and Simulations Standard

    NASA Technical Reports Server (NTRS)

    Bertch, William J.; Zang, Thomas A.; Steele, Martin J.

    2008-01-01

    The Space Shuttle Columbia Accident Investigation initiated several NASA-wide actions. One of these actions was to develop a standard for the development, documentation, and operation of models and simulations. Over the course of two-and-a-half years, a team of NASA engineers representing nine of the ten NASA Centers developed a Models and Simulations Standard to address this action. The standard consists of two parts. The first is the traditional requirements section addressing programmatics, development, documentation, verification, validation, and the reporting of results from both the M&S analysis and the examination of compliance with this standard. The second part is a scale for evaluating the credibility of model and simulation results using levels of merit associated with 8 key factors. This paper provides an historical account of the challenges faced by, and the processes used in, this committee-based development effort. This account provides insights into how other agencies might approach similar developments. Furthermore, we discuss some specific applications of models and simulations used to assess the impact of this standard on future model and simulation activities.

  9. Columbia River Component Data Evaluation Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C.S. Cearlock

    2006-08-02

    The purpose of the Columbia River Component Data Compilation and Evaluation task was to compile, review, and evaluate existing information for constituents that may have been released to the Columbia River due to Hanford Site operations. Through this effort an extensive compilation of information pertaining to Hanford Site-related contaminants released to the Columbia River has been completed for almost 965 km of the river.

  10. Experiences From NASA/Langley's DMSS Project

    NASA Technical Reports Server (NTRS)

    1996-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at the NASA Langley Research Center (LaRC) has placed such a system into production use. This paper will present the experiences, both good and bad, we have had with this system since putting it into production usage. The system is comprised of: 1) National Storage Laboratory (NSL)/UniTree 2.1, 2) IBM 9570 HIPPI attached disk arrays (both RAID 3 and RAID 5), 3) IBM RS6000 server, 4) HIPPI/IPI3 third party transfers between the disk array systems and the supercomputer clients, a CRAY Y-MP and a CRAY 2, 5) a "warm spare" file server, 6) transition software to convert from CRAY's Data Migration Facility (DMF) based system to DMSS, 7) an NSC PS32 HIPPI switch, and 8) a STK 4490 robotic library accessed from the IBM RS6000 block mux interface. This paper will cover: the performance of the DMSS in the areas of file transfer rates, migration and recall, and file manipulation (listing, deleting, etc.); the appropriateness of a workstation class of file server for NSL/UniTree with LaRC's present storage requirements in mind; the role of the third party transfers between the supercomputers and the DMSS disk array systems; a detailed comparison (both in performance and functionality) between the DMF and DMSS systems; LaRC's enhancements to the NSL/UniTree system administration environment; the mechanism for DMSS to provide file server redundancy; the statistics on the availability of DMSS; and the design of, and experiences with, the locally developed transparent transition software which allowed us to make over 1.5 million DMF files available to NSL/UniTree with minimal system outage.

  11. Integration of PanDA workload management system with Titan supercomputer at OLCF

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA a new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
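
    A minimal mpi4py sketch of the wrapper idea, running one independent single-threaded task per rank and gathering results; the payload function is a placeholder, not PanDA pilot code.

    ```python
    # One batch job, many ranks: each rank runs an independent single-threaded
    # payload on its own core, and rank 0 collects the outputs.
    from mpi4py import MPI

    def run_payload(task_id):
        # stand-in for invoking one single-threaded simulation task
        return sum(i * i for i in range(100_000 + task_id))

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    result = run_payload(rank)                 # one independent task per rank
    results = comm.gather(result, root=0)      # collect outputs on rank 0
    if rank == 0:
        print(f"{size} tasks completed")
    ```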

  12. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  13. The ChemViz Project: Using a Supercomputer To Illustrate Abstract Concepts in Chemistry.

    ERIC Educational Resources Information Center

    Beckwith, E. Kenneth; Nelson, Christopher

    1998-01-01

    Describes the Chemistry Visualization (ChemViz) Project, a Web venture maintained by the University of Illinois National Center for Supercomputing Applications (NCSA) that enables high school students to use computational chemistry as a technique for understanding abstract concepts. Discusses the evolution of computational chemistry and provides a…

  14. STS-52 Columbia, OV-102, soars into the sky after liftoff from KSC LC Pad 39B

    NASA Image and Video Library

    1992-10-22

    STS052-S-053 (22 Oct. 1992) --- This low-angle 35mm image shows the space shuttle Columbia on its way toward a ten-day Earth-orbital mission with a crew of five NASA astronauts and a Canadian payload specialist. Liftoff occurred at 1:09:39 p.m. (EDT), Oct. 22, from Kennedy Space Center's (KSC) Launch Pad 39B. Crew members onboard are astronauts James D. Wetherbee, Michael A. Baker, Tamara E. Jernigan, Charles L. (Lacy) Veach and William M. Shepherd, along with payload specialist Steven G. MacLean. Payloads onboard include the Laser Geodynamic Satellite II (LAGEOS II), which will be deployed early in the mission, a series of Canadian experiments, and the United States Microgravity Payload-1 (USMP-1). Photo credit: NASA

  15. Will Your Next Supercomputer Come from Costco?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool's joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronics vendor. Yes, it's true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  16. Aboard the mid-deck of the Earth-orbiting Space Shuttle Columbia, astronaut Charles J. Brady,

    NASA Technical Reports Server (NTRS)

    1996-01-01

    STS-78 ONBOARD VIEW --- Aboard the mid-deck of the Earth-orbiting Space Shuttle Columbia, astronaut Charles J. Brady, mission specialist and a licensed amateur radio operator or ham, talks to students on Earth. Some of the crew members devoted some of their off-duty time to continue a long-standing Shuttle tradition of communicating with students and other hams between their shifts of assigned duty. Brady joined four other NASA astronauts and two international payload specialists for almost 17 days of research in support of the Life and Microgravity Spacelab (LMS-1) mission.

  17. The Pawsey Supercomputer geothermal cooling project

    NASA Astrophysics Data System (ADS)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air
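
    Rough arithmetic for the heat available to the absorption chillers follows Q = m_dot * c_p * dT; the flow rate and temperature drop below are assumed placeholders, not project design figures.

    ```python
    # Thermal power delivered by a geothermal water stream: Q = m_dot * c_p * dT.
    # Flow rate and temperatures are illustrative assumptions only.
    m_dot_kg_s = 50.0          # assumed aquifer water flow rate, kg/s
    c_p = 4184.0               # specific heat of water, J/(kg K)
    t_in, t_out = 90.0, 60.0   # assumed supply and reinjection-side temperatures, C

    q_watts = m_dot_kg_s * c_p * (t_in - t_out)
    print(f"heat delivered to absorption chillers: {q_watts / 1e6:.1f} MW")
    ```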

  18. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    ERIC Educational Resources Information Center

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  19. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolutions. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available to scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolutions within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles so that large numbers of processors can be used effectively by general-purpose reservoir simulators. We have implemented massively-parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurement confirmed that both simulators exhibit excellent

  20. City of Columbia, Columbia, SC

    EPA Pesticide Factsheets

    Located in the heart of South Carolina, Columbia (population 124,818) first experienced industrial growth along the Congaree, Saluda, and Broad Rivers. Plantations, cotton mills, trains, and other industries lined the riverbanks. The City claimed numerous vacant, dilapidated structures in the neighborhoods of the Congaree region. They included industrial, railroad, and petroleum properties. Uncertainties related to contamination inhibited redevelopment efforts in the region. Brownfield assessments helped the city to resolve some of the uncertainties, and increased the marketability of the sites to prospective purchasers and developers.

  1. NASA High Performance Computing and Communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1994-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects, as well as summaries of early accomplishments and the significance, status, and plans for individual research and development programs within each project. Areas of emphasis include benchmarking, testbeds, software and simulation methods.

  2. STS-65 Columbia, OV-102, with drag chute deployed lands at KSC SLF

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Space Shuttle Columbia, Orbiter Vehicle (OV) 102, its drag chute fully deployed, completes a record duration mission as it lands on Runway 33 at the Kennedy Space Center (KSC) Shuttle Landing Facility (SLF). A helicopter flying overhead observes as OV-102's nose landing gear (NLG) and main landing gear (MLG) roll along the runway. Landing occurred at 6:38 am (Eastern Daylight Time (EDT)). STS-65 mission duration was 14 days 17 hours and 56 minutes. Onboard were six NASA astronauts and a Japanese payload specialist who conducted experiments in support of the International Microgravity Laboratory 2 (IML-2) during the mission.

  3. Solving global shallow water equations on heterogeneous supercomputers

    PubMed Central

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for the model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Arrays) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement on performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation on the programming paradigm of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture about both the potential performance benefits and the programming efforts involved. PMID:28282428
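
    A small sketch of the generalized partition idea: assign each device a slab of latitude rows proportional to its measured throughput, so a fast accelerator gets a wider slab than a CPU. The relative speeds below are hypothetical, not the paper's measurements.

    ```python
    # Divide the rows of a domain among devices in proportion to relative speed.
    # Device speeds are illustrative assumptions.
    def partition_rows(n_rows, device_speeds):
        """Return per-device row counts proportional to relative speed."""
        total = sum(device_speeds)
        counts = [int(n_rows * s / total) for s in device_speeds]
        counts[-1] += n_rows - sum(counts)   # absorb rounding remainder
        return counts

    # e.g. one CPU node (speed 1) and two accelerators ~8x faster each
    print(partition_rows(1024, [1.0, 8.0, 8.0]))  # -> [60, 481, 483]
    ```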

  4. Opportunities for leveraging OS virtualization in high-end supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke

    2010-11-01

    This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.

  5. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  6. "Night" scene of the STS-5 Columbia in orbit over the earth

    NASA Image and Video Library

    1982-11-17

    S82-39796 (11-16 Nov. 1982) --- A "night" scene of the STS-5 space shuttle Columbia in orbit over Earth's glowing horizon was captured by an astronaut crew member aiming a 70mm handheld camera through the aft windows of the flight deck. The aft section of the cargo bay contains two closed protective shields for satellites which were deployed on the flight. The nearest "cradle" or shield houses the Satellite Business System's (SBS-3) spacecraft and is visible in this frame, while the Telesat Canada ANIK C-3 shield is out of view. The vertical stabilizer, illuminated by the sun, is flanked by two orbital maneuvering system (OMS) pods. Photo credit: NASA

  7. Supercomputer modeling of hydrogen combustion in rocket engines

    NASA Astrophysics Data System (ADS)

    Betelin, V. B.; Nikitin, V. F.; Altukhov, D. I.; Dushin, V. R.; Koo, Jaye

    2013-08-01

    Hydrogen, being an ecologically clean fuel, is now very attractive to rocket engine designers. However, peculiarities of hydrogen combustion kinetics, such as the presence of zones of inverse dependence of reaction rate on pressure, prevent hydrogen engines from being used in all stages without support from other engine types, which often reduces the ecological gains of using hydrogen to zero. Computer-aided design of new, effective, and clean hydrogen engines needs mathematical tools for supercomputer modeling of hydrogen-oxygen component mixing and combustion in rocket engines. The paper presents the results of the development, verification, and validation of a mathematical model that makes it possible to simulate unsteady processes of ignition and combustion in rocket engines.

  8. The PMS project: Poor man's supercomputer

    NASA Astrophysics Data System (ADS)

    Csikor, F.; Fodor, Z.; Hegedüs, P.; Horváth, V. K.; Katz, S. D.; Piróth, A.

    2001-02-01

    We briefly describe the Poor Man's Supercomputer (PMS) project carried out at Eötvös University, Budapest. The goal was to construct a cost-effective, scalable, fast parallel computer to perform numerical calculations of physical problems that can be implemented on a lattice with nearest-neighbour interactions. To this end we developed the PMS architecture using PC components and designed special, low-cost communication hardware and the driver software for Linux OS. Our first implementation of PMS includes 32 nodes (PMS1). The performance of PMS1 was tested by Lattice Gauge Theory simulations. Using pure SU(3) gauge theory or the bosonic part of the minimal supersymmetric extension of the standard model (MSSM) on PMS1, we obtained price-to-sustained-performance ratios of $3/Mflops and $0.60/Mflops for double and single precision operations, respectively. The design of the special hardware and the communication driver are freely available upon request for non-profit organizations.
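
    The class of problems PMS targets, lattices with nearest-neighbour interactions, rests on the ghost-cell (halo) update pattern. Below is a minimal single-process sketch of that pattern, offered purely as illustration: on the real machine the copies would be messages over the custom communication hardware, and none of these names come from the PMS code.

    ```python
    import numpy as np

    # Single-process sketch of a periodic halo (ghost-cell) update, the
    # communication step underlying nearest-neighbour lattice codes.

    def halo_exchange_periodic(u: np.ndarray) -> np.ndarray:
        """Pad a 2-D lattice with one layer of periodic ghost cells."""
        g = np.empty((u.shape[0] + 2, u.shape[1] + 2), dtype=u.dtype)
        g[1:-1, 1:-1] = u
        g[0, 1:-1], g[-1, 1:-1] = u[-1, :], u[0, :]   # wrap rows
        g[1:-1, 0], g[1:-1, -1] = u[:, -1], u[:, 0]   # wrap columns
        g[0, 0] = u[-1, -1]; g[-1, -1] = u[0, 0]      # corners
        g[0, -1] = u[-1, 0]; g[-1, 0] = u[0, -1]
        return g

    u = np.arange(16.0).reshape(4, 4)
    g = halo_exchange_periodic(u)
    # Nearest-neighbour stencil using only interior indexing of the pad:
    laplacian = g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:] - 4 * u
    ```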

  9. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    PubMed

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
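
    Orion's real interface is not shown in the abstract, so the following is only a hypothetical sketch of the "allocate-when-needed" pattern it describes: acquire nodes for the lifetime of one job, then release them immediately. The generic Slurm commands (salloc/srun/scancel) and spark-submit are stand-ins; Tianhe-2 uses its own resource-manager variant.

    ```python
    import re
    import subprocess
    from contextlib import contextmanager

    # Hypothetical allocate-when-needed wrapper: nodes are held only while
    # the big data job runs, avoiding idle occupation of resources.

    @contextmanager
    def allocated_nodes(n_nodes: int):
        """Allocate n_nodes, yield the job id, always release on exit."""
        out = subprocess.run(["salloc", "-N", str(n_nodes), "--no-shell"],
                             capture_output=True, text=True, check=True)
        # Slurm prints "salloc: Granted job allocation <id>" on success.
        job_id = re.search(r"Granted job allocation (\d+)", out.stderr).group(1)
        try:
            yield job_id
        finally:
            subprocess.run(["scancel", job_id], check=True)

    # Launch a Spark application inside the temporary allocation.
    with allocated_nodes(16) as job:
        subprocess.run(["srun", "--jobid", job, "spark-submit", "app.jar"],
                       check=True)
    ```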

  10. Progress and supercomputing in computational fluid dynamics; Proceedings of U.S.-Israel Workshop, Jerusalem, Israel, December 1984

    NASA Technical Reports Server (NTRS)

    Murman, E. M. (Editor); Abarbanel, S. S. (Editor)

    1985-01-01

    Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.

  11. Shuttle 'Challenger' aerodynamic performance from flight data - Comparisons with predicted values and 'Columbia' experience

    NASA Technical Reports Server (NTRS)

    Findlay, J. T.; Kelly, G. M.; Mcconnell, J. G.; Compton, H. R.

    1984-01-01

    Longitudinal aerodynamic performance comparisons between flight-extracted and predicted values are presented for the first eight NASA Space Shuttle entry missions. Challenger results are correlated with the ensemble five-flight Columbia experience and indicate effects due to differing angle-of-attack and body-flap deflection profiles. An appendix shows the results of each flight using both the LaRC LAIRS and NOAA atmospheres. Discussions review apparent density anomalies observed in the flight data, with particular emphasis on the suggested shears and turbulence encountered during STS-2 and STS-4. Atmospheres derived from Shuttle data are presented which show structure different from that remotely sensed and imply regions of unstable air masses as a plausible explanation. Though additional aerodynamic investigations are warranted, an added benefit of Shuttle flight data for atmospheric research is discussed, in particular as applicable to future NASA space vehicles such as AOTVs and tethered satellites.

  12. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    NASA Astrophysics Data System (ADS)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution, in which temporal transformations of optical field waveforms are strongly coupled to intricate beam dynamics and ultrafast field-induced ionization processes. At laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of the medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
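
    The abstract does not print the model, so the following is only a representative form, standard in filamentation studies: a generalized nonlinear Schrödinger equation for the field envelope E (diffraction, dispersion, Kerr self-focusing, plasma absorption/defocusing, multiphoton losses) coupled to a plasma-density rate equation. The authors' exact terms may differ.

    ```latex
    % Representative (3+1)-D filamentation model (an assumption, not the
    % authors' printed equations): envelope evolution along z coupled to
    % field-induced ionization of the medium.
    \begin{align}
    \frac{\partial E}{\partial z} &=
        \frac{i}{2k_0}\nabla_\perp^2 E
      - \frac{i k''}{2}\,\frac{\partial^2 E}{\partial \tau^2}
      + \frac{i\omega_0 n_2}{c}\,|E|^2 E
      - \frac{\sigma}{2}\left(1 + i\omega_0\tau_c\right)\rho E
      - \frac{\beta_K}{2}\,|E|^{2K-2}E, \\
    \frac{\partial \rho}{\partial t} &=
        W\!\left(|E|^2\right)\left(\rho_{\mathrm{nt}} - \rho\right)
      + \frac{\sigma}{U_i}\,\rho\,|E|^2 .
    \end{align}
    ```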

  13. The Age of the Supercomputer Gives Way to the Age of the Super Infrastructure.

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    1997-01-01

    In October 1997, the National Science Foundation will discontinue financial support for two university-based supercomputer facilities to concentrate resources on partnerships led by facilities at the University of California, San Diego and the University of Illinois, Urbana-Champaign. The reconfigured program will develop more user-friendly and…

  14. STS-65 Pilot Halsell cleans window on the aft flight deck of Columbia, OV-102

    NASA Technical Reports Server (NTRS)

    1994-01-01

    On the aft flight deck of Columbia, Orbiter Vehicle (OV) 102, STS-65 Pilot James D. Halsell, Jr. cleans off overhead window W8. Mission Specialist (MS) Carl E. Walz looks on (at the photo's edge). A plastic toy dinosaur, velcroed in front of W9, also appears to be watching the housekeeping activity. A variety of onboard equipment, including procedural checklists, a spotmeter, a handheld microphone, and charts, is seen in the view. The two shared over fourteen days in Earth orbit with four other NASA astronauts and a Japanese payload specialist in support of the second International Microgravity Laboratory (IML-2) mission.

  15. Changing course. Columbia the buyer becomes Columbia the builder as the company seeks to overcome market impediments.

    PubMed

    Japsen, B; Snow, C

    1997-04-14

    In an attempt to overcome market roadblocks, Columbia/HCA Healthcare Corp is revising its strategy from buying existing hospitals to constructing new ones. In this issue we take a look at the investor-owned giant's changing tactics as well as its sometimes rocky relations with the media. We also examine Columbia's performance in its former headquarters city, Louisville, Ky.

  16. NASA Administrator Goldin talks with STS-93 Commander Collins at the SLF

    NASA Technical Reports Server (NTRS)

    1999-01-01

    At the Shuttle Landing Facility, NASA Administrator Daniel Goldin (foreground) talks with STS-93 Commander Eileen Collins beside the Space Shuttle orbiter Columbia following the successful completion of her mission. Marshall Space Flight Center Director Arthur G. Stephenson (far left) looks on. Landing occurred on runway 33 with main gear touchdown at 11:20:35 p.m. EDT on July 27. The mission's primary objective was to deploy the Chandra X-ray Observatory, which will allow scientists from around the world to study some of the most distant, powerful and dynamic objects in the universe. This was the 95th flight in the Space Shuttle program and the 26th for Columbia. The landing was the 19th consecutive Shuttle landing in Florida and the 12th night landing in Shuttle program history. On this mission, Collins became the first woman to serve as a Shuttle commander.

  17. NASA Accident Precursor Analysis Handbook, Version 1.0

    NASA Technical Reports Server (NTRS)

    Groen, Frank; Everett, Chris; Hall, Anthony; Insley, Scott

    2011-01-01

    Catastrophic accidents are usually preceded by precursory events that, although observable, are not recognized as harbingers of a tragedy until after the fact. In the nuclear industry, the Three Mile Island accident was preceded by at least two events portending the potential for severe consequences from an underappreciated causal mechanism. Anomalies whose failure mechanisms were integral to the losses of Space Transportation Systems (STS) Challenger and Columbia had been occurring within the STS fleet prior to those accidents. Both the Rogers Commission Report and the Columbia Accident Investigation Board report found that processes in place at the time did not respond to the prior anomalies in a way that shed light on their true risk implications. This includes the concern that, in the words of the NASA Aerospace Safety Advisory Panel (ASAP), "no process addresses the need to update a hazard analysis when anomalies occur." At a broader level, the ASAP noted in 2007 that NASA "could better gauge the likelihood of losses by developing leading indicators, rather than continue to depend on lagging indicators." These observations suggest a need to revalidate prior assumptions and conclusions of existing safety (and reliability) analyses, as well as to consider the potential for previously unrecognized accident scenarios, when unexpected or otherwise undesired behaviors of the system are observed. This need is also discussed in NASA's system safety handbook, which advocates a view of safety assurance as driving a program to take the steps necessary to establish and maintain a valid and credible argument for the safety of its missions. It is the premise of this handbook that making cases for safety more experience-based allows NASA to be better informed about the safety performance of its systems, and will ultimately help it to manage safety in a more effective manner. The APA process described in this handbook provides a systematic means of analyzing candidate accident precursors.

  18. View of the Columbia's open payload bay

    NASA Image and Video Library

    1981-11-13

    STS002-13-208 (12-14 Nov. 1981) --- This clear view of the aft section of the Earth-orbiting space shuttle Columbia's cargo bay and some of its cargo was photographed through the flight deck's aft windows. Visible in the center of the photo are the twin orbital maneuvering system (OMS) pods. The vertical stabilizer, or tail, splits the top part of the image in half. The Induced Environment Contamination Monitor (IECM) experiment is located in the back center of the cargo bay, near the top, with a grapple fixture attached to its side. Various components of the Office of Space and Terrestrial Applications (OSTA-1) payload are seen near the aft section of the cargo bay, such as the Feature Identification and Location Experiment (FILE) (the long cone-shaped object at the right back), the Shuttle Multispectral Infrared Radiometer (SMIRR) (on the pallet base), and the SIR-A recorder in the right foreground. In the left foreground the Shuttle Imaging Radar-A (SIR-A) antenna can be seen. Photo credit: NASA

  19. Optimal wavelength-space crossbar switches for supercomputer optical interconnects.

    PubMed

    Roudas, Ioannis; Hemenway, B Roe; Grzybowski, Richard R; Karinou, Fotini

    2012-08-27

    We propose a most economical design of the Optical Shared MemOry Supercomputer Interconnect System (OSMOSIS) all-optical, wavelength-space crossbar switch fabric. It is shown, by analysis and simulation, that the total number of on-off gates required for the proposed N × N switch fabric can scale asymptotically as N ln N if the number of input/output ports N can be factored into a product of small primes. This is of the same order of magnitude as Shannon's lower bound for switch complexity, according to which the minimum number of two-state switches required for the construction of an N × N permutation switch is log2(N!).
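
    A one-line check of why N ln N matches the order of Shannon's bound: applying Stirling's formula to log2(N!) gives

    ```latex
    % Stirling's approximation applied to Shannon's switch-complexity bound:
    \[
      \log_2(N!) \;=\; N\log_2 N \;-\; N\log_2 e \;+\; O(\log N)
      \;=\; \Theta(N \log N),
    \]
    ```

    so a fabric whose gate count scales as N ln N is within a constant factor of the minimum possible for an N × N permutation switch.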

  20. Image Analysis via Fuzzy-Reasoning Approach: Prototype Applications at NASA

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steven J.

    2004-01-01

    A set of imaging techniques based on a Fuzzy Reasoning (FR) approach was built for NASA at Kennedy Space Center (KSC) to perform complex real-time visual safety prototype tasks, such as detection and tracking of moving Foreign Object Debris (FOD) during Space Shuttle liftoff and visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad. The system has also shown promise in enhancing X-ray images used to screen hard-covered items, leading to better visualization. The system's capability was used as well during the imaging analysis of the Space Shuttle Columbia accident. These FR-based imaging techniques include novel proprietary adaptive image segmentation, image edge extraction, and image enhancement. A Probabilistic Neural Network (PNN) scheme available from the NeuroShell(TM) Classifier and optimized via a Genetic Algorithm (GA) was also used along with this set of novel imaging techniques to add powerful learning and image classification capabilities. Prototype applications built using these techniques have received NASA Space Awards, including a Board Action Award, and are currently being filed for patents by NASA; they are being offered for commercialization through the Research Triangle Institute (RTI), an internationally recognized corporation in scientific research and technology development. Companies from different fields, including security, medical, text digitization, and aerospace, are currently in the process of licensing these technologies from NASA.

  1. Using Perilog to Explore "Decision Making at NASA"

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W.

    2005-01-01

    Perilog, a context intensive text mining system, is used as a discovery tool to explore topics and concerns in "Decision Making at NASA," chapter 6 of the Columbia Accident Investigation Board (CAIB) Report, Volume I. Two examples illustrate how Perilog can be used to discover highly significant safety-related information in the text without prior knowledge of the contents of the document. A third example illustrates how "if-then" statements found by Perilog can be used in logical analysis of decision making. In addition, in order to serve as a guide for future work, the technical details of preparing a PDF document for input to Perilog are included in an appendix.

  2. The Evolving Landscape of the Columbia River Gorge: Lewis and Clark and Cataclysms on the Columbia

    USGS Publications Warehouse

    O'Connor, James E.

    2004-01-01

    Travelers retracing Lewis and Clark's journey to the Pacific over the past two hundred years have witnessed tremendous change to the Columbia River Gorge and its primary feature, the Columbia River. Dams, reservoirs, timber harvest, altered fisheries, transportation infrastructure, and growth and shrinkage of communities have transformed the river and valley. This radically different geography of human use and habitation is commonly contrasted with the sometimes romantic view of a prior time provided both by early nineteenth-century chroniclers and present-day critics of the modern condition: an ecotopia of plentiful and perpetual resources sustaining a stable culture from time immemorial. Reality is more complicated. Certainly the human-caused changes to the Columbia River and the gorge since Lewis and Clark have been profound; but the geologic history of immense floods, landslides, and volcanic eruptions that occurred before their journey had equally, if not more, acute effects on landscapes and societies of the gorge. In many ways, the Lewis and Clark Expedition can be viewed as a hinge point for the Columbia River, the changes engineered to the river and its valley in the two hundred years since their visit mirrored by tremendous changes geologically engendered in the thousands of years before.

  4. Overview of NASA MSFC IEC Federated Engineering Collaboration Capability

    NASA Technical Reports Server (NTRS)

    Moushon, Brian; McDuffee, Patrick

    2005-01-01

    The MSFC IEC federated engineering framework is currently developing a single collaborative engineering framework across independent NASA centers. The federated approach allows NASA centers to maintain diversity and uniqueness, while providing interoperability. These systems are integrated together in a federated framework without compromising individual center capabilities. MSFC IEC's Federation Framework will have a direct effect on how engineering data is managed across the Agency. The approach is a direct response to the Columbia Accident Investigation Board (CAIB) finding F7.4-11, which states that the Space Shuttle Program has a wealth of data tucked away in multiple databases without a convenient way to integrate and use the data for management, engineering, or safety decisions. IEC's federated capability is further supported by OneNASA recommendation 6, which identifies the need to enhance cross-Agency collaboration by putting in place common engineering and collaborative tools and databases, processes, and knowledge-sharing structures. MSFC's IEC Federated Framework is loosely connected to other engineering applications that can provide users with the integration needed to achieve an Agency view of the entire product definition and development process, while allowing work to be distributed across NASA Centers and contractors. The IEC DDMS federation framework eliminates the need to develop a single, enterprise-wide data model, where the goal of having a common data model shared between NASA centers and contractors is very difficult to achieve.

  5. British Columbia

    ERIC Educational Resources Information Center

    Walton, Gerald

    2006-01-01

    The province of British Columbia has a dubious history where support for lesbian, gay, bisexual, and transgendered (LGBT) issues in education is concerned. Most notable is the Surrey School Board's decision in 1997 to ban three picture books for children that depict families with two moms or two dads. The North Vancouver School Board has also…

  6. NASA Institute for Advanced Concepts

    NASA Technical Reports Server (NTRS)

    Cassanova, Robert A.

    1999-01-01

    The purpose of the NASA Institute for Advanced Concepts (NIAC) is to provide an independent, open forum for the external analysis and definition of space and aeronautics advanced concepts to complement the advanced concepts activities conducted within the NASA Enterprises. The NIAC will issue Calls for Proposals during each year of operation and will select revolutionary advanced concepts for grant or contract awards through a peer review process. Final selection of awards will be with the concurrence of NASA's Chief Technologist. The operation of the NIAC is reviewed biannually by the NIAC Science, Exploration and Technology Council (NSETC), whose members are drawn from the senior levels of industry and universities. The process of defining the technical scope of the initial Call for Proposals began with the NIAC "Grand Challenges" workshop conducted on May 21-22, 1998 in Columbia, Maryland. The "Grand Challenges" resulting from this workshop became the essence of the technical scope for the first Phase I Call for Proposals, which was released on June 19, 1998 with a due date of July 31, 1998. The first Phase I Call for Proposals attracted 119 proposals. After a thorough peer review, prioritization by NIAC, and technical concurrence by NASA, sixteen subgrants were awarded. The second Phase I Call for Proposals was released on November 23, 1998 with a due date of January 31, 1999. Sixty-three (63) proposals were received in response to this Call. On December 2-3, 1998, the NSETC met to review the progress and future plans of the NIAC. The next NSETC meeting is scheduled for August 5-6, 1999. The first Phase II Call for Proposals was released to the current Phase I grantees on February 3, 1999 with a due date of May 31, 1999. Plans for the second year of the contract include a continuation of the sequence of Phase I and Phase II Calls for Proposals and hosting the first NIAC Annual Meeting and USRA/NIAC Technical Symposium at NASA HQ.

  7. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists of finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods for solving these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos's method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY-2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
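
    As a minimal sketch of the projection idea described above: the Lanczos method touches the sparse matrix only through matrix-vector products and builds a small tridiagonal matrix whose eigenvalues (Ritz values) approximate the extreme eigenvalues of the original operator. This bare version omits the reorthogonalization a production code would need; the test matrix is invented for illustration.

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom

    def lanczos(matvec, n, m, rng):
        """m Lanczos steps on a symmetric operator of size n
        (no reorthogonalization; a sketch, not production code)."""
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        q = rng.standard_normal(n); q /= np.linalg.norm(q)
        q_prev = np.zeros(n)
        for j in range(m):
            w = matvec(q)                         # only access to the matrix
            alpha[j] = q @ w
            w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0)
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                q_prev, q = q, w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return np.linalg.eigvalsh(T)              # Ritz values

    rng = np.random.default_rng(0)
    S = sprandom(500, 500, density=0.01, random_state=0)
    A = S + S.T                                   # symmetrize the test matrix
    ritz = lanczos(lambda v: A @ v, 500, 40, rng)
    print(ritz[[0, -1]])  # estimates of the smallest/largest eigenvalues
    ```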

  8. The CHPRC Columbia River Protection Project Quality Assurance Project Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fix, N. J.

    Pacific Northwest National Laboratory researchers are working on the CHPRC Columbia River Protection Project (hereafter referred to as the Columbia River Project). This is a follow-on project, funded by CH2M Hill Plateau Remediation Company, LLC (CHPRC), to the Fluor Hanford, Inc. Columbia River Protection Project. The work scope consists of a number of CHPRC-funded, related projects that are managed under a master project (project number 55109). All contract releases associated with the Fluor Hanford Columbia River Project (Fluor Hanford, Inc. Contract 27647) and the CHPRC Columbia River Project (Contract 36402) will be collected under this master project. Each project within the master project is authorized by a CHPRC contract release that contains the project-specific statement of work. This Quality Assurance Project Plan provides the quality assurance requirements and processes that will be followed by the Columbia River Project staff.

  9. Historic Columbia River Highway oral history : final report.

    DOT National Transportation Integrated Search

    2009-08-01

    The Historic Columbia River Highway: Oral History Project complements a larger effort in Oregon to reconnect abandoned sections of the Historic Columbia River Highway. The goals of the larger reconnection project, Milepost 2016 Reconnection Projec...

  10. 40 CFR 81.108 - Columbia Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 81.108 Columbia Intrastate Air Quality Control Region. The Columbia Intrastate Air Quality Control Region (South Carolina) consists of the territorial area encompassed by the...

  11. 40 CFR 81.108 - Columbia Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 81.108 Columbia Intrastate Air Quality Control Region. The Columbia Intrastate Air Quality Control Region (South Carolina) consists of the territorial area encompassed by the...

  12. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection, such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphics Processing Units). We have isolated a standalone configuration of SAM that is targeted for integration into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access, and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned, and optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, each with 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy for achieving good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  13. Vectorized program architectures for supercomputer-aided circuit design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzoli, V.; Ferlito, M.; Neri, A.

    1986-01-01

    Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size, such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the "semantic" vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.

  14. Characterization of Space Shuttle External Tank Thermal Protection System (TPS) Materials in Support of the Columbia Accident Investigation

    NASA Technical Reports Server (NTRS)

    Wingard, Charles D.

    2004-01-01

    NASA suffered the loss of the seven-member crew of the Space Shuttle Columbia on February 1, 2003 when the vehicle broke apart upon re-entry to the Earth's atmosphere. The final report of the Columbia Accident Investigation Board (CAIB) determined that the accident was caused by a launch ascent incident: a suitcase-sized chunk of insulating foam on the Shuttle's External Tank (ET) broke off and, moving at almost 500 mph, struck an area of the leading edge of the Shuttle's left wing. As a result, one or more of the protective Reinforced Carbon-Carbon (RCC) panels on the wing leading edge were damaged. Upon re-entry, superheated air approaching 3,000 F breached the wing damage and caused the vehicle breakup and loss of crew. The large chunk of insulating foam that broke off during the Columbia launch was determined to come from the so-called bipod ramp area where the Shuttle's orbiter (containing the crew) is attached to the ET. Underneath the foam in the bipod ramp area is a layer of TPS that is a cork-filled silicone rubber composite. In March 2003, the NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama received cured samples of the foam and composite for testing from the Michoud Assembly Facility (MAF) in New Orleans, Louisiana. The MAF is where the Shuttle's ET is manufactured. The foam and composite TPS materials for the ET have been well characterized for mechanical property data at the super-cold temperatures of the liquid oxygen and hydrogen fuels used in the ET. However, modulus data on these materials are not as well characterized. The TA Instruments 2980 Dynamic Mechanical Analyzer (DMA) was used to determine the modulus of the two TPS materials over a range of -145 to 95 C in the dual cantilever bending mode. Multi-strain, fixed-frequency DMA tests were followed by multi-frequency, fixed-strain tests to determine the approximate bounds of linear viscoelastic behavior for the two materials. Additional information is included in the original extended abstract.

  15. STS-52 Columbia, Orbiter Vehicle (OV) 102, crew insignia

    NASA Technical Reports Server (NTRS)

    1992-01-01

    STS-52 Columbia, Orbiter Vehicle (OV) 102, crew insignia (logo). The official insignia of the NASA STS-52 mission features a large gold star to symbolize the crew's mission on the frontiers of space. A gold star is often used to symbolize the frontier period of the American West. The red star in the shape of the Greek letter lambda represents both the laser measurements to be taken from the Laser Geodynamic Satellite (LAGEOS II) and the Lambda Point Experiment, which is part of the United States Microgravity Payload (USMP-1). The LAGEOS II is a joint Italian-U.S. satellite project intended to further our understanding of global plate tectonics. The USMP-1 is a microgravity facility which has French and U.S. experiments designed to test the theory of cooperative phase transitions and to study the solid-liquid interface of a metallic alloy in the low-gravity environment. The remote manipulator system (RMS) arm and maple leaf are emblematic of the Canadian payload specialist.

  16. STS-55 Columbia, OV-102, crew poses for onboard portrait in SL-D2 module

    NASA Image and Video Library

    1993-05-06

    STS055-203-009 (26 April-6 May 1993) --- The seven crew members who spent 10 days aboard the space shuttle Columbia pose for the traditional in-flight portrait in the Spacelab D-2 Science Module. Front, left to right, are Terence T. (Tom) Henricks, Steven R. Nagel, Ulrich Walter and Charles J. Precourt. In the rear are (left to right) Bernard A. Harris Jr., Hans Schlegel and Jerry L. Ross. Nagel served as mission commander; Henricks was the pilot and Ross, the payload commander. Harris and Precourt were mission specialists and Schlegel and Walter were payload specialists representing the German Aerospace Research Establishment (DLR). Photo credit: NASA

  17. Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers

    PubMed Central

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-01-01

    We have tested the scalability of three supercomputers, the Tianhe-2, Stampede and CS-Storm, with multiscale fluid-platelet simulations, in which a highly resolved and efficient numerical model for the nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S, a 680,718-particle single platelet; Exp-M, a 2,722,872-particle 4-platelet system; and Exp-L, a 10,891,488-particle 16-platelet system. Our implementation of the multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, and 35.5 μs/day for Exp-S and 9.09, 6.25, and 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day on Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms make it feasible to perform complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers running advanced computational algorithms that offer an optimal trade-off for enhanced computational performance demonstrates that such simulations are feasible with currently available HPC resources. PMID:27570250
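
    The MTS speedups credited above follow the general multiple time-stepping idea: integrate cheap, rapidly varying forces every inner step and expensive, slowly varying forces only every outer step. The abstract does not give the authors' force split, so the following is a generic r-RESPA-style sketch with placeholder forces, not the platelet model itself.

    ```python
    # Generic multiple time-stepping (r-RESPA style) integrator sketch:
    # slow forces kick the velocity at the outer step boundaries, fast
    # forces are integrated with velocity Verlet on the inner sub-steps.

    def mts_step(x, v, dt, k, fast_force, slow_force, mass=1.0):
        """One outer step of length dt, with k inner sub-steps of dt/k."""
        v += 0.5 * (dt / mass) * slow_force(x)    # slow half-kick
        h = dt / k
        for _ in range(k):                        # inner velocity-Verlet loop
            v += 0.5 * (h / mass) * fast_force(x)
            x += h * v
            v += 0.5 * (h / mass) * fast_force(x)
        v += 0.5 * (dt / mass) * slow_force(x)    # slow half-kick
        return x, v

    # Example: stiff spring (fast) plus a weak constant pull (slow).
    x, v = 1.0, 0.0
    for _ in range(1000):
        x, v = mts_step(x, v, dt=0.01, k=10,
                        fast_force=lambda x: -100.0 * x,
                        slow_force=lambda x: 0.5)
    ```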

  18. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    PubMed

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

    An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multiple dimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows, distributed to cover the space of order parameters, with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of a calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
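
    A minimal sketch of the alternating-axis exchange pattern described above: umbrella windows tile a 2-D order-parameter space; even sweeps attempt neighbour swaps along one axis, odd sweeps along the other, each accepted by a Metropolis test on the umbrella-biased energies. The Window layout and bias_energy function are invented placeholders, not the authors' MPI implementation.

    ```python
    import math
    import random
    from dataclasses import dataclass

    @dataclass
    class Window:
        params: tuple   # umbrella centers, e.g. (distance0, rmsd0)
        state: object   # configuration currently sampled in this window

    def attempt_exchanges(windows, sweep, beta, bias_energy):
        """One exchange sweep over disjoint neighbour pairs; axis alternates."""
        nx, ny = len(windows), len(windows[0])
        axis = sweep % 2
        limit = (nx if axis == 0 else ny) - 1
        for first in range(0, limit, 2):          # disjoint pairs only
            for other in range(ny if axis == 0 else nx):
                i, j = (first, other) if axis == 0 else (other, first)
                ni, nj = (i + 1, j) if axis == 0 else (i, j + 1)
                a, b = windows[i][j], windows[ni][nj]
                # Metropolis criterion on the swapped umbrella biases
                delta = beta * (bias_energy(a.params, b.state)
                                + bias_energy(b.params, a.state)
                                - bias_energy(a.params, a.state)
                                - bias_energy(b.params, b.state))
                if delta <= 0 or random.random() < math.exp(-delta):
                    a.state, b.state = b.state, a.state
    ```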

  19. STS-65 Columbia, OV-102, rises above KSC LC Pad 39A during liftoff

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Columbia, Orbiter Vehicle (OV) 102, rises above Kennedy Space Center (KSC) Launch Complex (LC) Pad 39A after liftoff at 12:43 pm Eastern Daylight Time (EDT). An exhaust cloud covers the launch pad area and the glow of the space shuttle main engine (SSME) and solid rocket booster (SRB) firings is reflected in a nearby marsh as OV-102 atop its external tank (ET) heads toward Earth orbit. A small flock of birds is visible at the right. Once in Earth's orbit, STS-65's six NASA astronauts and a Japanese Payload Specialist aboard OV-102 will begin two weeks of experimentation in support of the second International Microgravity Laboratory (IML-2) mission.

  20. STS-65 Columbia, OV-102, rises above KSC LC Pad 39A during liftoff

    NASA Image and Video Library

    1994-07-08

    Columbia, Orbiter Vehicle (OV) 102, rises above Kennedy Space Center (KSC) Launch Complex (LC) Pad 39A after liftoff at 12:43 pm Eastern Daylight Time (EDT). An exhaust cloud covers the launch pad area and the glow of the space shuttle main engine (SSME) and solid rocket booster (SRB) firings is reflected in a nearby marsh as OV-102 atop its external tank (ET) heads toward Earth orbit. A small flock of birds is visible at the right. Once in Earth's orbit, STS-65's six NASA astronauts and a Japanese Payload Specialist aboard OV-102 will begin two weeks of experimentation in support of the second International Microgravity Laboratory (IML-2) mission.

  1. Supercomputer analysis of purine and pyrimidine metabolism leading to DNA synthesis.

    PubMed

    Heinmets, F

    1989-06-01

    A model system is established to analyze purine and pyrimidine metabolism leading to DNA synthesis. The principal aim is to explore the flow and regulation of terminal deoxynucleoside triphosphates (dNTPs) under various input and parametric conditions. A series of flow equations is established and subsequently converted to differential equations. These are programmed (in Fortran) and analyzed on a Cray X-MP/48 supercomputer. The pool concentrations are presented as a function of time under conditions in which various pertinent parameters of the system are modified. The system is formulated as 100 differential equations.
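
    To illustrate the flow-equations-to-ODEs approach on a toy scale: the 3-pool chain below (precursor, dNTP pool with feedback inhibition, DNA) is an invented stand-in for the 100-equation model, with made-up rate constants, integrated here in Python rather than Fortran.

    ```python
    from scipy.integrate import solve_ivp

    # Toy pool model: precursor -> dNTP -> DNA, with the dNTP pool
    # feedback-inhibiting its own synthesis. All rates are invented.

    def pools(t, y, k_in=1.0, k_syn=0.8, k_dna=0.5, Ki=2.0):
        precursor, dntp, dna = y
        syn = k_syn * precursor / (1.0 + dntp / Ki)  # inhibited synthesis
        incorporation = k_dna * dntp                 # dNTP consumed by DNA
        return [k_in - syn, syn - incorporation, incorporation]

    sol = solve_ivp(pools, (0.0, 50.0), [0.0, 0.0, 0.0], dense_output=True)
    print(sol.y[:, -1])  # pool concentrations at t = 50
    ```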

  2. STS-65 Columbia, OV-102, lifts off from KSC LC Pad 39A

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Columbia, Orbiter Vehicle (OV) 102, begins its roll maneuver after clearing the fixed service structure (FSS) tower as it rises above Kennedy Space Center (KSC) Launch Complex (LC) Pad 39A. In the foreground of this horizontal scene is Florida brush and a waterway. Beyond the brush, the shuttle's exhaust cloud envelops the immediate launch pad area. Launch occurred at 12:43 pm Eastern Daylight Time (EDT). The glow of the space shuttle main engine (SSME) and solid rocket booster (SRB) firings is reflected in the nearby waterway. Once in Earth orbit, STS-65's six NASA astronauts and a Japanese Payload Specialist aboard OV-102 will begin two weeks of experimentation in support of the second International Microgravity Laboratory (IML-2).

  3. STS-109/Columbia/HST Pre-Launch Activities/Launch On Orbit-Landing-Crew Egress

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The STS-109 Space Shuttle Mission begins with an introduction of the seven crew members: Commander Scott D. Altman, Pilot Duane G. Carey, Payload Commander John M. Grunsfeld, and Mission Specialists Nancy J. Currie, James H. Newman, Richard M. Linnehan, and Michael J. Massimino. Spacewalking NASA astronauts revive the Hubble Space Telescope's (HST) sightless infrared eyes, outfitting the observatory with an experimental refrigerator designed to resuscitate a comatose camera. In this video presentation, John Grunsfeld and Rick Linnehan bolt the new cryogenic cooler inside HST, hang a huge radiator outside the observatory, and replace the telescope's power switching station. The video also shows the shuttle robot arm operator, Nancy Currie, releasing the 13-ton HST. Finally, the landing of the Space Shuttle Columbia is presented.

  4. STS-65 Columbia, OV-102, lifts off from KSC LC Pad 39A

    NASA Image and Video Library

    1994-07-08

    Columbia, Orbiter Vehicle (OV) 102, begins its roll maneuver after clearing the fixed service structure (FSS) tower as it rises above Kennedy Space Center (KSC) Launch Complex (LC) Pad 39A. In the foreground of this horizontal scene is Florida brush and a waterway. Beyond the brush, the shuttle's exhaust cloud envelops the immediate launch pad area. Launch occurred at 12:43 pm Eastern Daylight Time (EDT). The glow of the space shuttle main engine (SSME) and solid rocket booster (SRB) firings is reflected in the nearby waterway. Once in Earth orbit, STS-65's six NASA astronauts and a Japanese Payload Specialist aboard OV-102 will begin two weeks of experimentation in support of the second International Microgravity Laboratory (IML-2).

  5. STS-28 Columbia, OV-102, crewmembers leave KSC O&C Bldg en route to LC Pad 39

    NASA Image and Video Library

    1989-08-08

    STS028-S-001 (8 Aug 1989) --- The five astronaut crewmembers for STS-28 leave the operations and checkout building to board a transfer van en route to Launch Complex 39 for a date with Columbia. Front to back are Brewster H. Shaw Jr., Richard N. Richards, David C. Leestma, James C. Adamson and Mark N. Brown. At the rear of the line are Astronaut Michael L. Coats, acting chief of the astronaut office; and Donald R. Puddy, director of flight crew operations at JSC. Coats later flew a NASA Shuttle training aircraft for pre-launch and launch monitoring activities.

  6. 4. South Elevation Columbia Island Abutment Four; South Elevation ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. South Elevation - Columbia Island Abutment Four; South Elevation - Washington Abutment One - Arlington Memorial Bridge, Spanning Potomac River between Lincoln Memorial & Arlington National Cemetery, Washington, District of Columbia, DC

  7. Status of the interior Columbia Basin: summary of scientific findings.

    Treesearch

    Forest Service. U.S. Department of Agriculture

    1996-01-01

    The Status of the Interior Columbia Basin is a summary of the scientific findings from the Interior Columbia Basin Ecosystem Management Project. The Interior Columbia Basin includes some 145 million acres within the northwestern United States. Over 75 million acres of this area are managed by the USDA Forest Service or the USDI Bureau of Land Management. A framework...

  8. Strategies for Information Retrieval and Virtual Teaming to Mitigate Risk on NASA's Missions

    NASA Technical Reports Server (NTRS)

    Topousis, Daria; Williams, Gregory; Murphy, Keri

    2007-01-01

    Following the loss of NASA's Space Shuttle Columbia in 2003, it was determined that problems in the agency's organization created an environment that led to the accident. One component of the proposed solution resulted in the formation of the NASA Engineering Network (NEN), a suite of information retrieval and knowledge sharing tools. This paper describes the implementation of this set of search, portal, content management, and semantic technologies, including a unique meta search capability for data from distributed engineering resources. NEN's communities of practice are formed along engineering disciplines where users leverage their knowledge and best practices to collaborate and take informal learning back to their personal jobs and embed it into the procedures of the agency. These results offer insight into using traditional engineering disciplines for virtual teaming and problem solving.

  9. KENNEDY SPACE CENTER, FLA. - Dr. Dennis Morrison, NASA Johnson Space Center, processes one of the experiments carried on mission STS-107. Several experiments were found during the search for Columbia debris. Included in the Commercial ITA Biomedical Experiments payload on mission STS-107 are urokinase cancer research, microencapsulation of drugs, the Growth of Bacterial Biofilm on Surfaces during Spaceflight (GOBBSS), and tin crystal formation.

    NASA Image and Video Library

    2003-05-07

    KENNEDY SPACE CENTER, FLA. - Dr. Dennis Morrison, NASA Johnson Space Center, processes one of the experiments carried on mission STS-107. Several experiments were found during the search for Columbia debris. Included in the Commercial ITA Biomedical Experiments payload on mission STS-107 are urokinase cancer research, microencapsulation of drugs, the Growth of Bacterial Biofilm on Surfaces during Spaceflight (GOBBSS), and tin crystal formation.

  10. KENNEDY SPACE CENTER, FLA. - Dr. Dennis Morrison, NASA Johnson Space Center, works with one of the experiments carried on mission STS-107. Several experiments were found during the search for Columbia debris. Included in the Commercial ITA Biomedical Experiments payload on mission STS-107 are urokinase cancer research, microencapsulation of drugs, the Growth of Bacterial Biofilm on Surfaces during Spaceflight (GOBBSS), and tin crystal formation.

    NASA Image and Video Library

    2003-05-07

    KENNEDY SPACE CENTER, FLA. - Dr. Dennis Morrison, NASA Johnson Space Center, works with one of the experiments carried on mission STS-107. Several experiments were found during the search for Columbia debris. Included in the Commercial ITA Biomedical Experiments payload on mission STS-107 are urokinase cancer research, microencapsulation of drugs, the Growth of Bacterial Biofilm on Surfaces during Spaceflight (GOBBSS), and tin crystal formation.

  11. Columbia River Component Data Gap Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L. C. Hulstrom

    2007-10-23

    This Data Gap Analysis report documents the results of a study conducted by Washington Closure Hanford (WCH) to compile and review the currently available surface water and sediment data for the Columbia River near and downstream of the Hanford Site. The study was conducted to assess the adequacy of the existing surface water and sediment data set from the Columbia River, with specific reference to the use of the data in future site characterization and screening-level risk assessments.

  12. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application to aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady-state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0 mixed-compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray X-MP.

  13. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high-frequency data and analytics are described, and directions for future development are discussed. Currently, the key mechanism for preventing catastrophic market action is the "circuit breaker." We believe a more graduated approach, similar to the "yellow light" approach used in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
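
    For readers unfamiliar with the VPIN indicator named above, here is a simplified sketch following the standard construction (equal-volume buckets; VPIN is the mean absolute buy/sell imbalance over the last n buckets). Real implementations classify buy/sell volume by bulk classification from price changes; the per-trade sign used here is a simplifying assumption, and this is not the paper's code.

    ```python
    from collections import deque

    def vpin(trades, bucket_volume, n_buckets=50):
        """trades: iterable of (volume, sign) with sign +1 buy / -1 sell."""
        buckets = deque(maxlen=n_buckets)
        filled, imbalance = 0.0, 0.0
        for volume, sign in trades:
            while volume > 0:
                take = min(volume, bucket_volume - filled)
                filled += take
                imbalance += sign * take
                volume -= take
                if filled >= bucket_volume:       # bucket complete
                    buckets.append(abs(imbalance))
                    filled, imbalance = 0.0, 0.0
        if not buckets:
            return None
        return sum(buckets) / (len(buckets) * bucket_volume)

    # Perfectly balanced order flow yields VPIN = 0.0:
    print(vpin([(10, +1), (10, -1)] * 50, bucket_volume=100))
    ```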

  14. Open NASA Earth Exchange (OpenNEX): A Public-Private Partnership for Climate Change Research

    NASA Astrophysics Data System (ADS)

    Nemani, R. R.; Lee, T. J.; Michaelis, A.; Ganguly, S.; Votava, P.

    2014-12-01

    NASA Earth Exchange (NEX) is a data, computing and knowledge collaborative that houses satellite, climate and ancillary data, where a community of researchers can come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform with access to large supercomputing resources. As part of broadening the community beyond NASA-funded researchers, NASA, through an agreement with Amazon Inc., made available to the public a large collection of climate and Earth science satellite data. The data, available through the Open NASA Earth Exchange (OpenNEX) platform hosted by the Amazon Web Services (AWS) public cloud, consist of large amounts of global land surface imaging, vegetation conditions, climate observations and climate projections. In addition to the data, users of the OpenNEX platform can also watch lectures from leading experts and learn basic access and use of the available data sets. In order to advance White House initiatives such as Open Data, Big Data and Climate Data and the Climate Action Plan, NASA over the past six months conducted the OpenNEX Challenge. The two-part challenge was designed to engage the public in creating innovative ways to use NASA data and address climate change impacts on economic growth, health and livelihood. Our intention was that the challenges would allow citizen scientists to realize the value of NASA data assets and offer NASA new ideas on how to share and use that data. The first "ideation" challenge, which closed on July 31st, attracted over 450 participants consisting of climate scientists, hobbyists, citizen scientists, IT experts and app developers. Winning ideas from the first challenge will be incorporated into the second "builder" challenge, currently targeted to launch in mid-August and close by mid-November. The winner(s) will be formally announced at AGU in December of 2014. We will share our experiences and lessons learned over the past year from OpenNEX, a public-private partnership for climate change research.

  15. Collective Bargaining Agreement between Board of Trustees of Lower Columbia College District 13 and Lower Columbia Faculty Association, 1987-1990.

    ERIC Educational Resources Information Center

    Lower Columbia Coll., Longview, WA.

    This contractual agreement between the Board of Trustees of Lower Columbia College (LCC) District 13 and the Lower Columbia College Faculty Association outlines the terms of employment for all academic employees of the district. The 13 articles in the agreement set forth provisions related to: (1) recognition of the association as exclusive…

  16. SemanticOrganizer: A Customizable Semantic Repository for Distributed NASA Project Teams

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Berrios, Daniel C.; Carvalho, Robert E.; Hall, David R.; Rich, Stephen J.; Sturken, Ian B.; Swanson, Keith J.; Wolfe, Shawn R.

    2004-01-01

    SemanticOrganizer is a collaborative knowledge management system designed to support distributed NASA projects, including diverse teams of scientists, engineers, and accident investigators. The system provides a customizable, semantically structured information repository that stores work products relevant to multiple projects of differing types. SemanticOrganizer is one of the earliest and largest semantic web applications deployed at NASA to date, and has been used in diverse contexts ranging from the investigation of Space Shuttle Columbia's accident to the search for life on other planets. Although the underlying repository employs a single unified ontology, access control and ontology customization mechanisms make the repository contents appear different for each project team. This paper describes SemanticOrganizer, its customization facilities, and a sampling of its applications. The paper also summarizes some key lessons learned from building and fielding a successful semantic web application across a wide-ranging set of domains with diverse users.

  17. President Ronald Reagan speaks to a crowd of more than 45,000 people at NASA's Dryden Flight Research Center following the landing of STS-4 on July 4, 1982

    NASA Image and Video Library

    1982-07-04

    President Ronald Reagan speaks to a crowd of more than 45,000 people at NASA's Dryden Flight Research Center following the landing of STS-4 on July 4, 1982. To the right of the President are Mrs. Reagan and NASA Administrator James M. Beggs. To the left are STS-4 Columbia astronauts Thomas K. Mattingly and Henry W. Hartsfield, Jr. Prototype Space Shuttle Enterprise is in the background.

  18. NASA/NOAA/AMS Earth Science Electronic Theatre

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz; Pierce, Hal; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s see them contrasted with the latest International global satellite weather movies including killer hurricanes & tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat 7, & new Terra which will be visualized with state-of-the art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science and on National & International Network TV. New Digital Earth visualization tools allow us to roam & zoom through massive global images including a Landsat tour of the US, with drill-downs into major cities using 1 m resolution spy-satellite technology from the Space Imaging IKONOS satellite, Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fisherman. See the how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by a SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and Terabyte disk using two projectors across the super sized Universe Theater panoramic screen.

  19. New Columbia Admission Act

    THOMAS, 112th Congress

    Rep. Norton, Eleanor Holmes [D-DC-At Large

    2011-01-12

    House - 02/08/2011 Referred to the Subcommittee on Health Care, District of Columbia, Census and the National Archives. Status: Introduced.

  20. The Columbia River--on the Leading Edge

    NASA Astrophysics Data System (ADS)

    O'Connor, J. E.

    2005-05-01

    On the leading edge of the North American plate, the Columbia River is the largest of the world's 40 or so rivers with drainage areas greater than 500,000 square kilometers to drain toward a convergent plate boundary. This unique setting results in a unique continental river basin, marked by episodic and cataclysmic geologic disturbance but also famously fecund, with perhaps 10 to 16 million salmon historically spawning in its waters each year. Now transformed by dams, transportation infrastructure, dikes and diversions, the Columbia River presents an expensive conundrum for management of its many values. Inclusion of river ecology and geomorphology in discussions of river management is generally limited to observations of the last 200 years--a time period of little natural disturbance and low sediment transport. However, consideration of longer timescales provides additional perspective on historical ecologic and geomorphic conditions. Only 230 km from its mouth, the Columbia River bisects the volcanic arc of the Cascade Range, forming the Columbia River Gorge. Cenozoic lava flows have blocked the river, forcing diversions and new canyon cutting. Holocene eruptions of Mount Mazama (Crater Lake), Mount Hood, Mount St. Helens, and Mount Rainier have shed immense quantities of sediment into the lower Columbia River, forming a large percentage of the Holocene sediment transported through the lower river. Quaternary landslides, perhaps triggered by great earthquakes, have descended from the 1000-m-high gorge walls, also blocking and diverting the river, one as recently as 550 years ago. These geologic disturbances, mostly outside the realm of historical observation and operating at timescales of 100s to 1000s of years in the gorge and elsewhere, have clearly affected basin geomorphology, riverine ecology, and past and present cultural utilization of river resources. The historic productivity of the river, however, hints at extraordinary resilience (and perhaps

  1. AIC Computations Using Navier-Stokes Equations on Single Image Supercomputers For Design Optimization

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru

    2004-01-01

    A procedure to accurately generate aerodynamic influence coefficients (AIC) using a Navier-Stokes solver, including grid deformation, is presented. Preliminary results show good comparisons between experimental and computed flutter boundaries for a rectangular wing. A full wing-body configuration of an orbital space plane is selected for demonstration on a large number of processors. In the final paper, the AIC of the full wing-body configuration will be computed, and the scalability of the procedure on a supercomputer will be demonstrated.

  2. Achieving supercomputer performance for neural net simulation with an array of digital signal processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, U.A.; Baumle, B.; Kohler, P.

    1992-10-01

    Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.

  3. LightForce Photon-Pressure Collision Avoidance: Updated Efficiency Analysis Utilizing a Highly Parallel Simulation Approach

    DTIC Science & Technology

    2014-09-01

    ...simulation time frame from 30 days to one year. This was enabled by porting the simulation to the Pleiades supercomputer at NASA Ames Research Center... including the motivation for changes to our past approach. We then present the software implementation on the NASA Ames Pleiades supercomputer... significantly updated since last year's paper [25]. The main incentive for that was the shift to a highly parallel approach in order to utilize the Pleiades...

  4. STS-65 Columbia, OV-102, lifts off from KSC Launch Complex (LC) Pad 39A

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Columbia, Orbiter Vehicle (OV) 102, atop its external tank (ET) rises above the Kennedy Space Center (KSC) Launch Complex (LC) Pad 39A after liftoff at 12:43 pm Eastern Daylight Time (EDT). OV-102 starboard side and one of the two solid rocket boosters (SRBs) are visible in this launch view. The retracted rotating service structure (RSS) is nearly covered in the shuttle's exhaust at the left as OV-102 clears the fixed service structure (FSS) tower. The space shuttle main engines produce a diamond shock effect. Once in orbit, STS-65's six NASA astronauts and a Japanese Payload Specialist will begin two weeks of experimentation in support of the second International Microgravity Laboratory (IML-2) mission.

  5. STS-65 Columbia, OV-102, clears launch tower after liftoff from KSC LC 39A

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Columbia, Orbiter Vehicle (OV) 102, heads skyward after clearing the fixed service structure (FSS) tower at Kennedy Space Center (KSC) Launch Complex (LC) Pad 39A. Florida plant life appears in the foreground. The exhaust cloud produced by OV-102's solid rocket boosters (SRBs) covers the launch pad area with the exception of the sound suppression water system tower. OV-102's starboard side and the right SRB are visible from this angle. Launch occurred at 12:43 pm Eastern Daylight Time (EDT). Once in Earth orbit, STS-65's six NASA astronauts and a Japanese Payload Specialist aboard OV-102 will begin two weeks of experimentation in support of the second International Microgravity Laboratory (IML-2).

  6. STS-65 Columbia, OV-102, clears launch tower after liftoff from KSC LC 39A

    NASA Image and Video Library

    1994-07-08

    Columbia, Orbiter Vehicle (OV) 102, heads skyward after clearing the fixed service structure (FSS) tower at Kennedy Space Center (KSC) Launch Complex (LC) Pad 39A. Florida plant life appears in the foreground. The exhaust cloud produced by OV-102's solid rocket boosters (SRBs) covers the launch pad area with the exception of the sound suppression water system tower. OV-102's starboard side and the right SRB are visible from this angle. Launch occurred at 12:43 pm Eastern Daylight Time (EDT). Once in Earth orbit, STS-65's six NASA astronauts and a Japanese Payload Specialist aboard OV-102 will begin two weeks of experimentation in support of the second International Microgravity Laboratory (IML-2).

  7. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems, making the process impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be completed in a reasonable amount of time using a Cray supercomputer.

  8. I/O Performance Characterization of Lustre and NASA Applications on Pleiades

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Rappleye, Jason; Chang, Johnny; Barker, David Peter; Biswas, Rupak; Mehrotra, Piyush

    2012-01-01

    In this paper we study the performance of the Lustre file system using five scientific and engineering applications representative of the NASA workload on large-scale supercomputing systems such as NASA's Pleiades. In order to facilitate the collection of Lustre performance metrics, we have developed a software tool that exports a wide variety of client- and server-side metrics using SGI's Performance Co-Pilot (PCP), and generates a human-readable report on key metrics at the end of a batch job. These performance metrics are (a) amount of data read and written, (b) number of files opened and closed, and (c) remote procedure call (RPC) size distribution (4 KB to 1024 KB, in powers of 2) for I/O operations. The RPC size distribution measures the efficiency of the Lustre client and can pinpoint problems such as small write sizes, disk fragmentation, etc. These extracted statistics are useful in determining the I/O pattern of an application and can assist in identifying possible improvements for users' applications. Information on the number of file operations enables a scientist to optimize the I/O performance of their applications. The amount of I/O data helps users choose the optimal stripe size and stripe count to enhance I/O performance. In this paper, we demonstrate the usefulness of this tool on Pleiades for five production-quality NASA scientific and engineering applications. We compare the latency of read and write operations under Lustre to that with NFS by tracing system calls and signals. We also investigate the read and write policies and study the effect of page cache size on I/O operations. We examine the performance impact of Lustre stripe size and stripe count, along with a performance evaluation of file-per-process and single-shared-file access by all processes, for the NASA workload using the parameterized IOR benchmark.
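
    The RPC size-distribution metric described above amounts to binning request sizes into power-of-two buckets. A minimal sketch of that binning, assuming a list of raw request sizes in bytes (the actual tool extracts these via Performance Co-Pilot, which is not shown):

        from collections import Counter

        BUCKETS_KB = [4 << i for i in range(9)]   # 4, 8, 16, ..., 1024 KB

        def rpc_size_histogram(request_sizes_bytes):
            """Bin I/O request sizes into the 4 KB..1024 KB power-of-two
            buckets; oversized requests land in the largest bucket."""
            hist = Counter()
            for size in request_sizes_bytes:
                kb = size / 1024
                bucket = next((b for b in BUCKETS_KB if kb <= b), BUCKETS_KB[-1])
                hist[bucket] += 1
            return {f"<={b}KB": hist.get(b, 0) for b in BUCKETS_KB}

        # A pile of requests in the smallest bucket would flag the small-write
        # inefficiency the paper mentions.
        print(rpc_size_histogram([2048, 4096, 8192, 65536, 1048576]))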

  9. Columbia basin project, Washington: Adams, Douglas, Franklin, Grant, Lincoln, and Walla Walla Counties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-01-01

    The Columbia Basin Project is a multipurpose development utilizing a portion of the resources of the Columbia River in the central part of the State of Washington. The key structure, Grand Coulee Dam, is on the main stem of the Columbia River about 90 miles west of Spokane, Wash. The extensive irrigation works extend southward on the Columbia Plateau 125 miles to the vicinity of Pasco, Wash., where the Snake and Columbia Rivers join.

  10. Binary Black Hole Mergers, Gravitational Waves, and LISA

    NASA Astrophysics Data System (ADS)

    Centrella, Joan; Baker, J.; Boggs, W.; Kelly, B.; McWilliams, S.; van Meter, J.

    2007-12-01

    The final merger of comparable mass binary black holes is expected to be the strongest source of gravitational waves for LISA. Since these mergers take place in regions of extreme gravity, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. We will present the results of new simulations of black hole mergers with unequal masses and spins, focusing on the gravitational waves emitted and the accompanying astrophysical "kicks." The magnitude of these kicks has bearing on the production and growth of supermassive black holes during the epoch of structure formation, and on the retention of black holes in stellar clusters. This work was supported by NASA grant 06-BEFS06-19, and the simulations were carried out using Project Columbia at the NASA Advanced Supercomputing Division (Ames Research Center) and at the NASA Center for Computational Sciences (Goddard Space Flight Center).

  11. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  12. Supercomputer requirements for selected disciplines important to aerospace

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1989-01-01

    Speed and memory requirements placed on supercomputers by five different disciplines important to aerospace are discussed and compared with the capabilities of various existing computers and those projected to be available before the end of this century. The disciplines chosen for consideration are turbulence physics, aerodynamics, aerothermodynamics, chemistry, and human vision modeling. Example results for problems illustrative of those currently being solved in each of the disciplines are presented and discussed. Limitations imposed on physical modeling and geometrical complexity by the need to obtain solutions in practical amounts of time are identified. Computational challenges for the future, for which either some or all of the current limitations are removed, are described. Meeting some of the challenges will require computer speeds in excess of 1 exaflop/s (10^18 flop/s) and memories in excess of 1 petaword (10^15 words).

  13. Earth Institute at Columbia University ADVANCE Program: Addressing Needs for Women in Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Bell, R. E.; Cane, M.; Mutter, J.; Miller, R.; Pfirman, S.; Laird, J.

    2004-12-01

    The Earth Institute has received a major NSF ADVANCE grant targeted at increasing the participation and advancement of women scientists and engineers in the Academy through institutional transformation. The Earth Institute at Columbia University includes 9 research institutes including Lamont-Doherty Earth Observatory, Center for Environmental Research and Conservation (CERC), Center for International Earth Science Information Network (CIESIN), International Research Institute (IRI) for Climate Prediction, Earth Engineering Center, NASA-Goddard Institute for Space Studies, Center for Risks and Hazards, Center for Globalization and Sustainable Development, and Center for Global Health and Economic Development and six academic departments including Ecology, Evolution and Environmental Biology (E3B, School of Arts and Sciences), Earth and Environmental Engineering (DEEE, School of Engineering and Applied Sciences), Department of Environmental Health (School of Public Health), Department of Earth and Environmental Sciences (DEES, School of Arts and Sciences), Department of International and Public Affairs (School of International and Policy Affairs), and Barnard College Department of Environmental Science. The Earth Institute at Columbia University's ADVANCE program is based both on a study of the status of women at Columbia and research on the progression of women in science elsewhere. The five major targets of the Columbia ADVANCE program are to (1) change the demographics of the faculty through intelligent hiring practices, (2) provide support to women scientists through difficult life transitions including elder care and adoption or birth of a child, (3) enhance mentoring and networking opportunities, (4) implement transparent promotion procedures and policies, and (5) conduct an institutional self study. The Earth Institute ADVANCE program is unique in that it addresses issues that tend to manifest themselves in the earth and environmental fields, such as extended

  14. STS-65 Commander Cabana with SAREX-II on Columbia's, OV-102's, flight deck

    NASA Image and Video Library

    1994-07-23

    STS065-44-014 (8-23 July 1994) --- Astronaut Robert D. Cabana, mission commander, is seen on the Space Shuttle Columbia's flight deck with the Shuttle Amateur Radio Experiment (SAREX). SAREX was established by NASA, the American Radio Relay League/Amateur Radio Satellite Corporation, and the Johnson Space Center (JSC) Amateur Radio Club to encourage public participation in the space program through a project to demonstrate the effectiveness of conducting short-wave radio transmissions between the Shuttle and ground-based radio operators at low-cost ground stations using amateur and digital techniques. As on several previous missions, SAREX was used on this flight as an educational opportunity for students around the world to learn about space firsthand by speaking directly to astronauts aboard the Shuttle.

  15. STS-107 Columbia's engine no. 2 removal for inspection

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. -- In the Orbiter Processing Facility, Columbia's engine no. 2 is about to be removed. After small cracks were discovered on the LH2 Main Propulsion System (MPS) flow liners in two other orbiters, program managers decided to move forward with inspections on Columbia before clearing it for flight on STS-107. The heat shields were removed, and after removal of the three main engines, inspections of the flow liners will follow. The July 19 launch of Columbia on STS-107 has been delayed a few weeks.

  16. STS-35 crew and NASA management inspect OV-102 after landing at EAFB, Calif

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-35 NASA JSC Flight Crew Operations Directorate (FCOD) Director Donald R. Puddy (center) joins the STS-35 crewmembers in a post landing walk-around inspection of Columbia, Orbiter Vehicle (OV) 102, at Edwards Air Force Base (EAFB), California. Crewmembers, wearing launch and entry suits (LESs), include (left to right) Commander Vance D. Brand, Mission Specialist (MS) John M. Lounge, Payload Specialist Ronald A. Parise, Pilot Guy S. Gardner, and MS Jeffrey A. Hoffman. NASA Associate Administrator for Space Flight Dr. William B. Lenoir is at far left in the background. OV-102 landed on concrete runway 22 at EAFB at 9:54:09 pm (Pacific Standard Time (PST)). OV-102's nose cone and nose landing gear (NLG) door are visible at the left corner of the frame.

  17. City of Columbia, Missouri - Clean Water Act Public Notice

    EPA Pesticide Factsheets

    The EPA is providing notice of a proposed Administrative Penalty Assessment against the City of Columbia, MO, regarding alleged violations at the City's Landfill and Yard Waste Compost Facility, located at 5700 Peabody Road, Columbia, Boone County, MO, 652

  18. Fourth Master Agreement between the University of the District of Columbia and University of the District of Columbia Faculty Association/NEA.

    ERIC Educational Resources Information Center

    District of Columbia Univ., Washington, DC.

    The collective bargaining agreement between the University of the District of Columbia and the University of the District of Columbia Faculty Association, an affiliate of the National Education Association, for the period October 1, 1988 to September 30, 1993 is presented. The agreement's 33 articles cover the following: purpose and intent, scope…

  19. Columbia Bay, Alaska: an 'upside down' estuary

    USGS Publications Warehouse

    Walters, R.A.; Josberger, E.G.; Driedger, C.L.

    1988-01-01

    Circulation and water properties within Columbia Bay, Alaska, are dominated by the effects of Columbia Glacier at the head of the Bay. The basin between the glacier terminus and the terminal moraine (sill depth of about 22 m) responds as an 'upside down' estuary, with the subglacial discharge of freshwater entering at the bottom of the basin. The intense vertical mixing caused by the buoyant plume of freshwater creates a homogeneous water mass that exchanges with the far-field water through either a two- or a three-layer flow. In general, the glacier acts as a large heat sink and creates a water mass which is cooler than that in fjords without tidewater glaciers. The predicted retreat of Columbia Glacier would create a 40 km long fjord that has characteristics in common with other fjords in Prince William Sound. © 1988.

  20. An analysis of file migration in a UNIX supercomputing environment

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1992-01-01

    The supercomputer center at the National Center for Atmospheric Research (NCAR) migrates large numbers of files to and from its mass storage system (MSS) because there is insufficient space to store them on the Cray supercomputer's local disks. This paper presents an analysis of file migration data collected over two years. The analysis shows that requests to the MSS are periodic, with one-day and one-week periods. Read requests to the MSS account for the majority of the periodicity, as write requests are relatively constant over the course of a week. Additionally, reads show far greater fluctuation than writes over a day and a week, since reads are driven by human users while writes are machine-driven.
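
    The daily and weekly periodicity reported above is the kind of structure a simple spectral analysis exposes. A sketch on synthetic hourly request counts (the study itself used two years of NCAR traces):

        import numpy as np

        hours = np.arange(24 * 7 * 8)                  # eight weeks, hourly bins
        daily = 100 + 40 * np.sin(2 * np.pi * hours / 24)
        weekly = 20 * np.sin(2 * np.pi * hours / (24 * 7))
        reads = daily + weekly + np.random.poisson(10, hours.size)

        # Peaks in the spectrum of the demeaned series mark the periods.
        spectrum = np.abs(np.fft.rfft(reads - reads.mean()))
        freqs = np.fft.rfftfreq(hours.size, d=1.0)     # cycles per hour
        top = freqs[np.argsort(spectrum)[-2:]]
        print(sorted(round(1 / f) for f in top))       # [24, 168] hours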

  1. STS-35 crew & NASA management inspect OV-102 after landing at EAFB, Calif

    NASA Image and Video Library

    1990-12-10

    STS035-S-091 (10 Dec 1990) --- Donald R. Puddy (center), Director of Flight Crew Operations at the Johnson Space Center (JSC), joins the STS-35 crewmembers in a post-landing walk-around inspection of the Columbia at Edwards Air Force Base. Crewmembers pictured are, left to right, Vance D. Brand, John M. (Mike) Lounge, Ronald A. Parise, Guy S. Gardner and Jeffrey A. Hoffman. Obscured or out of frame are Samuel T. Durrance and Robert A. R. Parker. Dr. William B. Lenoir, NASA Associate Administrator for Space Flight, is at far left background.

  2. NAS Technical Summaries, March 1993 - February 1994

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1993-94 operational year concluded with 448 high-speed processor projects and 95 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  3. NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993

    NASA Technical Reports Server (NTRS)

    1994-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  4. Columbia River System Analysis Model - Phase 1

    DTIC Science & Technology

    1991-10-01

    US Army Corps of Engineers, Hydrologic Engineering Center. Columbia River System Analysis Model - Phase I. ...Reach reservoirs due to the impact of Wenatchee River flows and additional inflow downstream of Rocky Reach. An inflow link terminates at...

  5. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  6. 2. Historic American Buildings Survey District of Columbia Fire Department ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. Historic American Buildings Survey District of Columbia Fire Department Photo FRONT ELEVATION, 1961 - Engine Company Number Seventeen, Firehouse, 1227 Monroe Street Northeast, Washington, District of Columbia, DC

  7. STS-35 MS Hoffman is greeted by JSC manager Puddy and NASA administrator Lenoir

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA Associate Administrator for Space Flight Dr. William B. Lenoir (second left) shakes hands with Mission Specialist (MS) Jeffrey A. Hoffman soon after the seven crewmembers egressed Columbia, Orbiter Vehicle (OV) 102, at Edwards Air Force Base (EAFB), California. Also pictured are JSC Flight Crew Operations Directorate (FCOD) Director Donald R. Puddy (left) and Commander Vance D. Brand. OV-102 landed on EAFB concrete runway 22 at 9:54:09 pm (Pacific Standard Time) ending its nine-day STS-35 Astronomy Laboratory 1 (ASTRO-1) mission.

  8. Columbia Smelting & Refining Works Red Hook, Brooklyn, New York

    EPA Pesticide Factsheets

    The site is the former location of a secondary lead smelter called Columbia Smelting and Refining Works (Columbia), and the extent of lead-contaminated soil from the smelter, in the mixed-use neighborhood of Red Hook in Brooklyn, New York. The footprint of

  9. Refining the Ares V Design to Carry Out NASA's Exploration Initiative

    NASA Technical Reports Server (NTRS)

    Creech, Steve

    2008-01-01

    NASA's Ares V cargo launch vehicle is part of an overall architecture for U.S. space exploration that will span decades. The Ares V, together with the Ares I crew launch vehicle, Orion crew exploration vehicle, and Altair lunar lander, will carry out the national policy goals of retiring the Space Shuttle, completing the International Space Station program, and expanding exploration of the Moon as a step toward eventual human exploration of Mars. The Ares fleet (Figure 1) is the product of the Exploration Systems Architecture Study which, in the wake of the Columbia accident, recommended separating crew from cargo transportation. Both vehicles are undergoing rigorous systems design to maximize safety, reliability, and operability. They take advantage of the best technical and operational lessons learned from the Apollo, Space Shuttle, and more recent programs. NASA also seeks to maximize commonality between the crew and cargo vehicles in an effort to simplify and reduce operational costs for sustainable, long-term exploration.

  10. 1. Historic American Buildings Survey District of Columbia Fire Department ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Historic American Buildings Survey District of Columbia Fire Department Photo FRONT ELEVATION, PRIOR TO 1960 - Engine Company Number Seventeen, Firehouse, 1227 Monroe Street Northeast, Washington, District of Columbia, DC

  11. Trends in selected water-quality characteristics, Flathead River at Flathead, British Columbia, and at Columbia Falls, Montana, water years, 1975-86

    USGS Publications Warehouse

    Cary, L.E.

    1989-01-01

    Data for selected water quality variables were evaluated for trends at two sampling stations--Flathead River at Flathead, British Columbia (Flathead station) and Flathead River at Columbia Falls, Montana (Columbia Falls station). The results were compared between stations. The analyses included data from water years 1975-86 at the Flathead station and water years 1979-86 at the Columbia Falls station. The seasonal Kendall test was applied to adjusted concentrations for variables related to discharge and to unadjusted concentrations for the remaining variables. Slope estimates were made for variables with significant trends unless data were reported as less than the detection limit. At the Flathead station, concentrations of dissolved solids, calcium, magnesium, sodium, dissolved nitrite plus nitrate nitrogen, ammonia nitrogen (total and dissolved), total organic nitrogen, and total phosphorus increased during the study period. Concentrations of total nitrite plus nitrate nitrogen and dissolved iron decreased during the same period. At the Columbia Falls station, concentrations increased for calcium and magnesium and decreased for sulfate and dissolved phosphorus. No trends were detected for 10 other variables tested at each station. Data for the Flathead station were reanalyzed for water years 1979-86. Trends in the data increased for magnesium and dissolved nitrite plus nitrate nitrogen and decreased for dissolved iron. Magnesium was the only variable that displayed a trend (increasing) at both stations. The increasing trends that were detected probably will not adversely affect the water quality of the Flathead River in the near future. (USGS)

  12. Are the Columbia River Basalts, Columbia Plateau, Idaho, Oregon, and Washington, USA, a viable geothermal target? A preliminary analysis

    USGS Publications Warehouse

    Burns, Erick R.; Williams, Colin F.; Tolan, Terry; Kaven, Joern Ole

    2016-01-01

    The successful development of a geothermal electric power generation facility relies on (1) the identification of sufficiently high temperatures at an economically viable depth and (2) the existence of or potential to create and maintain a permeable zone (permeability >10^-14 m^2) of sufficient size to allow efficient long-term extraction of heat from the reservoir host rock. If both occur at depth under the Columbia Plateau, development of geothermal resources there has the potential to expand both the magnitude and spatial extent of geothermal energy production. However, a number of scientific and technical issues must be resolved in order to evaluate the likelihood that the Columbia River Basalts, or deeper geologic units under the Columbia Plateau, are viable geothermal targets. Recent research has demonstrated that heat flow beneath the Columbia Plateau Regional Aquifer System may be higher than previously measured in relatively shallow (<600 m depth) wells, indicating that sufficient temperatures for electricity generation occur at depths <5 km. The remaining consideration is evaluating the likelihood that naturally high permeability exists, or that it is possible to replicate the high average permeability (approximately 10^-14 to 10^-12 m^2) characteristic of natural hydrothermal reservoirs. From a hydraulic perspective, Columbia River Basalts are typically divided into dense, impermeable flow interiors and interflow zones comprising the top of one flow, the bottom of the overlying flow, and any sedimentary interbed. Interflow zones are highly variable in texture but, at depths <600 m, some of them form highly permeable regional aquifers with connectivity over many tens of kilometers. Below depths of ~600 m, permeability reduction occurs in many interflow zones, caused by the formation of low-temperature hydrothermal alteration minerals (corresponding to temperatures above ~35 °C). However, some high permeability (>10^-14 m^2) interflows are documented at depths up

  13. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
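
    To make the problem concrete, here is a deliberately simple greedy heuristic that packs heavily communicating MPI ranks onto the same node, given a rank-to-rank communication matrix. The paper's spectral-bisection and neighbor-join-tree methods are more sophisticated; this sketch only shows the shape of the optimization.

        import numpy as np

        def greedy_mapping(comm, ranks_per_node):
            """Group MPI ranks into nodes, greedily keeping heavy
            communicators together; returns a list of rank groups."""
            unplaced = set(range(comm.shape[0]))
            nodes = []
            while unplaced:
                seed = max(unplaced, key=lambda r: comm[r].sum())
                group = [seed]
                unplaced.remove(seed)
                while len(group) < ranks_per_node and unplaced:
                    # Add the rank that exchanges the most data with the group.
                    nxt = max(unplaced, key=lambda r: comm[r, group].sum())
                    group.append(nxt)
                    unplaced.remove(nxt)
                nodes.append(group)
            return nodes

        # Eight ranks forming two tightly coupled cliques, four ranks per node:
        comm = np.kron(np.eye(2), np.ones((4, 4))) - np.eye(8)
        print(greedy_mapping(comm, 4))   # each clique lands on one node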

  14. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers--about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  15. UNIX security in a supercomputing environment

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1989-01-01

    The author critiques some security mechanisms in most versions of the Unix operating system and suggests more effective tools that either have working prototypes or have been implemented, for example in secure Unix systems. Although no computer (not even a secure one) is impenetrable, breaking into systems with these alternate mechanisms will cost more, require more skill, and be more easily detected than penetrations of systems without these mechanisms. The mechanisms described fall into four classes (with considerable overlap). User authentication at the local host affirms the identity of the person using the computer. The principle of least privilege dictates that properly authenticated users should have rights precisely sufficient to perform their tasks, and system administration functions should be compartmentalized; to this end, access control lists or capabilities should either replace or augment the default Unix protection system, and mandatory access controls implementing multilevel security models and integrity mechanisms should be available. Since most users access supercomputing environments using networks, the third class of mechanisms augments authentication (where feasible). As no security is perfect, the fourth class of mechanism logs events that may indicate possible security violations; this will allow the reconstruction of a successful penetration (if discovered), or possibly the detection of an attempted penetration.

  16. Retired NASA research pilot and former astronaut Gordon Fullerton was greeted by scores of NASA Dryden staff who bid him farewell after his final NASA flight.

    NASA Image and Video Library

    2007-12-21

    Long-time NASA Dryden research pilot and former astronaut C. Gordon Fullerton capped an almost 50-year flying career, including more than 38 years with NASA, with a final flight in a NASA F/A-18 on Dec. 21, 2007. Fullerton and Dryden research pilot Jim Smolka flew a 90-minute pilot proficiency formation aerobatics flight with another Dryden F/A-18 and a Dryden T-38 before concluding with two low-level formation flyovers of Dryden before landing. Fullerton was honored with a water-cannon spray arch provided by two fire trucks from the Edwards Air Force Base fire department as he taxied the F/A-18 up to the Dryden ramp, and was then greeted by his wife Marie and several hundred Dryden staff after his final flight. Fullerton began his flying career with the U.S. Air Force in 1958 after earning bachelor's and master's degrees in mechanical engineering from the California Institute of Technology. Initially trained as a fighter pilot, he later transitioned to multi-engine bombers and became a bomber operations test pilot after attending the Air Force Aerospace Research Pilot School at Edwards Air Force Base, Calif. He then was assigned to the flight crew for the planned Air Force Manned Orbital Laboratory in 1966. Upon cancellation of that program, the Air Force assigned Fullerton to NASA's astronaut corps in 1969. He served on the support crews for the Apollo 14, 15, 16 and 17 lunar missions, and was later assigned to one of the two flight crews that piloted the space shuttle prototype Enterprise during the Approach and Landing Test program at Dryden. He then logged some 382 hours in space when he flew on two early space shuttle missions, STS-3 on Columbia in 1982 and STS-51F on Challenger in 1985. He joined the flight crew branch at NASA Dryden after leaving the astronaut corps in 1986. During his 21 years at Dryden, Fullerton was project pilot on a number of high-profile research efforts, including the Propulsion Controlled Aircraft, the high-speed landing tests of

  17. Managing the Columbia Basin for Sustainable Economy, Society, Environment

    EPA Science Inventory

    The Columbia River Basin (CRB) is a vast region of the Pacific Northwest covering parts of the United States, Canada and Tribal lands. As the Columbia River winds its way from Canada into the US, the river passes through numerous multi-purpose reservoirs and hydroelectric genera...

  18. Earth observations taken from shuttle orbiter Columbia

    NASA Image and Video Library

    1995-10-26

    STS073-708-089 (26 October 1995) --- As evidenced by this 70mm photograph from the Earth-orbiting Space Shuttle Columbia, international borders have become easier to see from space in recent decades. This, according to NASA scientists studying the STS-73 photo collection, is particularly true in arid and semi-arid environments. The scientists go on to cite this example of the razor-sharp vegetation boundary between southern Israel and Gaza and the Sinai. The nomadic grazing practices to the south (the lighter areas of the Sinai and Gaza, top left) have removed most of the vegetation from the desert surface. On the north side of the border, Israel uses advanced irrigation techniques, mainly "trickle irrigation," by which small amounts of water are delivered directly to plant roots. These water-saving techniques have allowed precious supplies from the Jordan River to be used on farms throughout the country. Numerous fields of dark green can be seen in this detailed view. Scientists say this redistribution of the Jordan River waters has increased the Israeli vegetation cover to densities that approach those that may have been common throughout the Mid-East in wetter early Biblical times. A small portion of the Mediterranean Sea appears top right.

  19. 77 FR 74587 - Safety Zone; Grain-Shipment Vessels, Columbia and Willamette Rivers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-17

    ... 1625-AA00 Safety Zone; Grain-Shipment Vessels, Columbia and Willamette Rivers AGENCY: Coast Guard, DHS... inbound and outbound grain-shipment vessels involved in commerce with the Columbia Grain facility on the Willamette River in Portland, OR, and the United Grain Corporation facility on the Columbia River in...

  20. Supercomputing resources empowering superstack with interactive and integrated systems

    NASA Astrophysics Data System (ADS)

    Rückemann, Claus-Peter

    2012-09-01

    This paper presents results from the development and implementation of Superstack algorithms for dynamic use with integrated systems and supercomputing resources. Processing of geophysical data, termed geoprocessing, is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments in the processing of seismic data on mainframes, which in recent years have led to high-end scientific computing applications. Several stacking algorithms are known, but for seismic data with a low signal-to-noise ratio, iterative algorithms like the Superstack can support analysis and interpretation. The new Superstack algorithms are in use with wave theory and optical phenomena on highly performant computing resources, for huge data sets as well as for sophisticated application scenarios in geosciences and archaeology.
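
    The record does not spell out the Superstack algorithms themselves, so the following is only a loose illustration of iterative stacking: traces are re-weighted on each pass by their similarity to the current stack, so coherent signal is reinforced and noisy traces are suppressed.

        import numpy as np

        def iterative_stack(traces, n_iter=5):
            """Iteratively re-weighted stack of equal-length traces."""
            stack = traces.mean(axis=0)
            for _ in range(n_iter):
                # Correlation of each trace with the current stack -> weight.
                w = np.array([np.corrcoef(t, stack)[0, 1] for t in traces])
                w = np.clip(w, 0, None)        # drop anti-correlated traces
                stack = (w[:, None] * traces).sum(axis=0) / w.sum()
            return stack

        # A weak wavelet buried in noise across 50 traces:
        t = np.linspace(0, 1, 200)
        signal = np.exp(-((t - 0.5) ** 2) / 0.001)
        traces = signal + 2.0 * np.random.randn(50, t.size)
        print(np.corrcoef(iterative_stack(traces), signal)[0, 1])  # near 1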

  1. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews the use of "soft computing," which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. The NASA applications reviewed are Real-Time (RT) Anomaly Detection, RT Moving Debris Detection, and the Columbia investigation. The RT anomaly detection example reviews the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial use: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.
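
    The internals of FRAT are not given in this record, but fuzzy adaptive thresholding in the classic style of Huang and Wang gives the flavor: choose the threshold that minimizes total image fuzziness, where each pixel's membership reflects its closeness to the mean of its assigned class.

        import numpy as np

        def fuzzy_threshold(img):
            """Threshold minimizing Shannon fuzziness (Huang & Wang, 1995)."""
            x = img.ravel().astype(float)
            C = x.max() - x.min()
            best_t, best_fuzz = None, np.inf
            for t in np.unique(x)[:-1]:
                lo, hi = x[x <= t], x[x > t]
                mu = np.where(x <= t,
                              1 / (1 + np.abs(x - lo.mean()) / C),
                              1 / (1 + np.abs(x - hi.mean()) / C))
                mu = np.clip(mu, 1e-9, 1 - 1e-9)
                fuzz = -np.sum(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
                if fuzz < best_fuzz:
                    best_t, best_fuzz = t, fuzz
            return best_t

        # Bimodal test image: the threshold lands between the two modes.
        img = np.concatenate([np.random.normal(50, 5, 500),
                              np.random.normal(200, 5, 500)])
        print(fuzzy_threshold(img))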

  2. NASA high performance computing, communications, image processing, and data visualization-potential applications to medicine.

    PubMed

    Kukkonen, C A

    1995-06-01

    High-speed information processing technologies being developed and applied by the Jet Propulsion Laboratory for NASA and Department of Defense mission needs have potential dual uses in telemedicine and other medical applications. Fiber optic ground networks connected with microwave satellite links allow NASA to communicate with its astronauts in Earth orbit or on the moon, and with its deep space probes billions of miles away. These networks monitor the health of astronauts and/or robotic spacecraft. Similar communications technology will also allow patients to communicate with doctors anywhere on Earth. NASA space missions have science as a major objective. Science sensors have become so sophisticated that they can take more data than our scientists can analyze by hand. High performance computers--workstations, supercomputers, and massively parallel computers--are being used to transform this data into knowledge. This is done using image processing, data visualization, and other techniques to present the data--ones and zeros--in forms that a human analyst can readily relate to and understand. Medical sensors have seen a similar explosion in data output--witness CT scans, MRI, and ultrasound. This data must be presented in visual form, and computers will allow routine combination of many two-dimensional MRI images into three-dimensional reconstructions of organs that can then be fully examined by physicians. Emerging technologies such as neural networks that are being "trained" to detect craters on planets or incoming missiles amongst decoys can be used to identify microcalcifications in mammograms.

  3. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, specifically, for large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
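
    At its core, MSTC is k-means over multivariate records. A serial NumPy sketch of the two k-means steps follows; the paper's contribution is distributing the assignment step across MPI ranks and offloading distance computations to GPUs via CUDA/OpenACC, which is not shown here.

        import numpy as np

        def kmeans(X, k, n_iter=20, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            for _ in range(n_iter):
                # Assignment step: nearest center for every record. This is
                # the distance-heavy part that MSTC spreads over GPUs.
                d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
                labels = d.argmin(axis=1)
                # Update step: each center becomes the mean of its members.
                for j in range(k):
                    if (labels == j).any():
                        centers[j] = X[labels == j].mean(axis=0)
            return labels, centers

        # 10,000 synthetic records with 8 variables, grouped into 12 classes:
        X = np.random.rand(10000, 8)
        labels, centers = kmeans(X, 12)
        print(np.bincount(labels))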

  4. The Interior Columbia Basin Ecosystem Management Project: scientific assessment.

    Treesearch

    1999-01-01

    This CD-ROM contains digital versions (PDF) of the major scientific documents prepared for the Interior Columbia Basin Ecosystem Management Project (ICBEMP). "A Framework for Ecosystem Management in the Interior Columbia Basin and Portions of the Klamath and Great Basins" describes a general planning model for ecosystem management. The "Highlighted...

  5. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
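
    The computational kernel described above is pairwise cross-correlation of continuous noise records. A single-pair, FFT-based sketch follows; the production service runs this massively in parallel on GPUs, and the sampling rate and names here are assumptions.

        import numpy as np

        def noise_cross_correlation(a, b, max_lag):
            """Normalized cross-correlogram of two demeaned noise records,
            returned for lags -max_lag..+max_lag (zero-padded FFT)."""
            n = len(a)
            fa, fb = np.fft.rfft(a, 2 * n), np.fft.rfft(b, 2 * n)
            cc = np.fft.irfft(fa * np.conj(fb), 2 * n)
            cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
            return cc / (np.std(a) * np.std(b) * n)

        fs = 20.0                                 # Hz, assumed sampling rate
        noise = np.random.randn(int(3600 * fs))   # one hour at station A
        delayed = np.roll(noise, 40)              # station B: 2 s travel time
        cc = noise_cross_correlation(noise, delayed, max_lag=int(5 * fs))
        lag = (np.argmax(cc) - int(5 * fs)) / fs
        print(lag)   # recovers the 2 s delay (sign follows the lag convention)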

  6. Taxonomic status of the Columbia duskysnail (Truncatelloidea, Amnicolidae, Colligyrus).

    PubMed

    Liu, Hsiu-Ping; Hershler, Robert; Rossel, Christopher S

    2015-01-01

    Undescribed freshwater snails (Amnicolidae: Colligyrus) from the Mount Hood region (northwestern United States), identified as a new species (commonly known as the Columbia duskysnail) in grey literature, have been provided federal protection under the "survey and manage" provisions of the Northwest Forest Plan and have been placed on conservation watch lists. However, there are no published studies of the identity of these snails aside from a molecular phylogenetic analysis which delineated a close relationship between the single sampled population and Colligyrus greggi, which is distributed more than 750 km to the east of the Mount Hood area. Here we examine the taxonomic status of the Columbia duskysnail based on additional molecular sampling of mitochondrial DNA sequences (COI) and morphological evidence. We found that the Columbia duskysnail is not a monophyletic group and forms a strongly supported clade with Colligyrus greggi. The COI divergence between these broadly disjunct groups (2.1%) was somewhat larger than that within Colligyrus greggi (1.0%) but considerably less than that among the three currently recognized species of Colligyrus (8.7-12.1%). Additionally, we found that the Columbia duskysnail and Colligyrus greggi cannot be consistently differentiated by previously reported diagnostic characters (size and shape of shell spire, pigmentation of body and penis) and are closely similar in other aspects of morphology. Based on these results we conclude that the Columbia duskysnail is conspecific with Colligyrus greggi.
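
    The divergence percentages quoted above are pairwise sequence distances; the exact distance model used in the paper is not stated in this abstract, but for aligned COI sequences the uncorrected p-distance is simply the share of differing sites, as in this small sketch:

        def p_distance(seq_a, seq_b):
            """Uncorrected pairwise divergence (%) of two aligned sequences,
            skipping alignment gaps."""
            pairs = [(a, b) for a, b in zip(seq_a, seq_b)
                     if a != '-' and b != '-']
            return 100 * sum(a != b for a, b in pairs) / len(pairs)

        print(p_distance("ACGTTGCA", "ACGATGCA"))   # 12.5 (% divergence)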

  7. A high performance linear equation solver on the VPP500 parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi

    1994-12-31

    This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500: (1) scalability to an arbitrary number of processors, up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and data transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors in the LINPACK Highly Parallel Computing benchmark.
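
    As an illustration of the blocked LU approach named above, here is a minimal serial NumPy sketch of right-looking blocked LU factorization (no pivoting; the block size nb is an assumption). The trailing-matrix update in step 4 is the large matrix multiply that machines like the VPP500 vectorize and overlap with data transfer; this is not the Fujitsu solver itself.

```python
import numpy as np
from scipy.linalg import solve_triangular

def blocked_lu(A, nb=64):
    """Right-looking blocked LU factorization, no pivoting.

    Returns a copy of A overwritten with L (unit lower triangle, below the
    diagonal) and U (upper triangle), so that L @ U reconstructs A.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # 1. Unblocked LU of the diagonal block.
        for j in range(k, e):
            A[j + 1:e, j] /= A[j, j]
            A[j + 1:e, j + 1:e] -= np.outer(A[j + 1:e, j], A[j, j + 1:e])
        if e < n:
            # 2. U12 = L11^{-1} A12 (unit lower triangular solve).
            A[k:e, e:] = solve_triangular(A[k:e, k:e], A[k:e, e:],
                                          lower=True, unit_diagonal=True)
            # 3. L21 = A21 U11^{-1} (solve U11^T X^T = A21^T).
            A[e:, k:e] = solve_triangular(A[k:e, k:e].T, A[e:, k:e].T,
                                          lower=True).T
            # 4. Trailing update: the dominant, highly parallel matrix multiply.
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    return A

# Quick check on a diagonally dominant matrix (safe without pivoting):
M = np.random.rand(200, 200) + 200 * np.eye(200)
F = blocked_lu(M, nb=32)
L, U = np.tril(F, -1) + np.eye(200), np.triu(F)
assert np.allclose(L @ U, M)
```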

  8. Seasonal variability of surface velocities and ice discharge of Columbia Glacier, Alaska using high-resolution TanDEM-X satellite time series and NASA IceBridge data

    NASA Astrophysics Data System (ADS)

    Vijay, Saurabh; Braun, Matthias

    2014-05-01

    developed basal drainage system speeds are at their minimum. We also analyze the variation in conjunction with the prevailing meteorological conditions as well as changes in calving front position in order to exclude other potential influencing factors. In a second step, we also exploit TanDEM-X data to generate various digital elevation models (DEMs) at different time steps. The multi-temporal DEMs are used to estimate the difference in surface elevation and respective ice thickness changes. All TanDEM-X DEMs are tied to a SPOT reference DEM. Errors are estimated over ice-free moraines and rocky areas. The quality of the TanDEM-X DEMs over snow- and ice-covered areas is further assessed by comparison to laser scanning data from NASA IceBridge campaigns. The temporally closest TanDEM-X DEMs were compared to the IceBridge tracks from winter and summer surveys in order to judge errors resulting from penetration of the X-band radar signal into snow, ice and firn. The average differences between laser scanning and TanDEM-X in August 2011 and March 2012 are 8.48 m and 14.35 m, respectively. Retreat rates of the glacier front are derived manually by digitizing the terminus position. By combining the data sets of ice velocity, ice thickness and the retreat rates at different time steps, we estimate the seasonal variability of the ice discharge of Columbia Glacier.
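
    The final step, combining velocity and thickness into discharge, amounts to a flux-gate integration. The sketch below uses entirely hypothetical numbers and names; it only illustrates how per-bin velocity, thickness, and width combine into an ice discharge estimate.

```python
import numpy as np

RHO_ICE = 917.0  # kg m^-3

def flux_gate_discharge(velocity, thickness, bin_width):
    """Ice discharge through a flux gate, in Gt/yr.

    velocity:  surface speed per gate bin, m/yr (proxy for depth-averaged flow)
    thickness: ice thickness per gate bin, m (e.g., DEM surface minus bed)
    bin_width: width of each bin along the gate, m
    """
    flux_m3 = np.sum(velocity * thickness * bin_width)  # m^3 of ice per year
    return flux_m3 * RHO_ICE / 1e12                     # 1 Gt = 1e12 kg

# Hypothetical three-bin gate:
print(flux_gate_discharge(np.array([900.0, 1500.0, 800.0]),   # m/yr
                          np.array([350.0, 500.0, 300.0]),    # m
                          np.array([400.0, 400.0, 400.0])))   # m
```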

  9. A comparison of low-gravity measurements on-board Columbia during STS-40

    NASA Technical Reports Server (NTRS)

    Rogers, M. J. B.; Baugher, C. R.; Blanchard, R. C.; Delombard, R.; Durgin, W. W.; Matthiesen, D. H.; Neupert, W.; Roussel, P.

    1993-01-01

    The first NASA Spacelab Life Sciences mission (SLS-1) flew 5 June to 14 June 1991 on the orbiter Columbia (STS-40). The purpose of the mission was to investigate the human body's adaptation to the low-gravity conditions of space flight and the body's readjustment after the mission to the 1 g environment of earth. In addition to the life sciences experiments manifested for the Spacelab module, a variety of experiments in other scientific disciplines flew in the Spacelab and in Get Away Special (GAS) Canisters on the GAS Bridge Assembly. Several principal investigators designed and flew specialized accelerometer systems to better assess the results of their experiments by means of a low-gravity environment characterization. This was also the first flight of the NASA Microgravity Science and Applications Division (MSAD) sponsored Space Acceleration Measurement System (SAMS) and the first flight of the NASA Orbiter Experiments Office (OEX) sponsored Orbital Acceleration Research Experiment accelerometer (OARE). We present a brief introduction to seven STS-40 accelerometer systems and discuss and compare the resulting data. During crew sleep periods, acceleration magnitudes in the 10^-6 to 10^-5 g range were recorded in the Spacelab module and on the GAS Bridge Assembly. Magnitudes increased to the 10^-4 g level during periods of nominal crew activity. Vernier thruster firings caused acceleration shifts on the order of 10^-4 g and primary thruster firings caused accelerations as great as 10^-2 g. Frequency domain analysis revealed typical excitation of Orbiter and Spacelab structural modes at 3.5, 4.7, 5.2, 6.2, 7, and 17 Hz.
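
    A frequency-domain analysis of this kind is commonly done with Welch's averaged-periodogram method. The following sketch, using a synthetic signal with illustrative amplitudes (not the STS-40 data), shows how structural-mode peaks near 3.5 and 17 Hz would be picked out of an accelerometer record.

```python
import numpy as np
from scipy import signal

# Hypothetical accelerometer record: 100 Hz sampling, two structural modes
# buried in broadband noise (all values illustrative only).
fs = 100.0
t = np.arange(0, 600, 1 / fs)
accel = (1e-4 * np.sin(2 * np.pi * 3.5 * t)
         + 5e-5 * np.sin(2 * np.pi * 17.0 * t)
         + 1e-4 * np.random.randn(t.size))

# Welch's method averages periodograms of overlapping segments,
# trading frequency resolution for variance reduction.
freqs, psd = signal.welch(accel, fs=fs, nperseg=4096)
for f0 in (3.5, 17.0):
    k = np.argmin(np.abs(freqs - f0))
    print(f"PSD near {f0} Hz: {psd[k]:.3e} g^2/Hz")
```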

  10. A comparison of low-gravity measurements on-board Columbia during STS-40

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Baugher, C. R.; Blanchard, R. C.; Delombard, R.; Durgin, W. W.; Matthiesen, D. H.; Neupert, W.; Roussel, P.

    1993-01-01

    The first NASA Spacelab Life Sciences mission (SLS-1) flew 5 Jun. to 14 Jun. 1991 on the orbiter Columbia (STS-40). The purpose of the mission was to investigate the human body's adaptation to the low-gravity conditions of space flight and the body's readjustment after the mission to the 1 g environment of earth. In addition to the life sciences experiments manifested for the Spacelab module, a variety of experiments in other scientific disciplines flew in the Spacelab and in Get Away Special (GAS) Canisters on the GAS Bridge Assembly. Several principal investigators designed and flew specialized accelerometer systems to better assess the results of their experiments by means of a low-gravity environment characterization. This was also the first flight of the NASA Microgravity Science and Applications Division (MSAD) sponsored Space Acceleration Measurement System (SAMS) and the first flight of the NASA Orbiter Experiments Office (OEX) sponsored Orbital Acceleration Research Experiment accelerometer (OARE). A brief introduction to seven STS-40 accelerometer systems is presented, and the resulting data are discussed and compared. During crew sleep periods, acceleration magnitudes in the 10^-6 to 10^-5 g range were recorded in the Spacelab module and on the GAS Bridge Assembly. Magnitudes increased to the 10^-4 g level during periods of nominal crew activity. Vernier thruster firings caused acceleration shifts on the order of 10^-4 g and primary thruster firings caused accelerations as great as 10^-2 g. Frequency domain analysis revealed typical excitation of Orbiter and Spacelab structural modes at 3.5, 4.7, 5.2, 6.2, 7, and 17 Hz.

  11. Spaceship Columbia's first flight

    NASA Technical Reports Server (NTRS)

    Young, J. W.; Crippen, R. L.

    1981-01-01

    This is a review of the initial flight of the spaceship Columbia - the first of four test missions of the nation's space transportation system. Engineering test pilot/astronaut activity associated with operation, control, and monitoring of the spaceship are discussed. Demonstrated flying qualities and performance of the Space Shuttle are covered.

  12. 75 FR 81464 - Safety Zone; Columbia River, The Dalles Lock and Dam

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ...-AA00 Safety Zone; Columbia River, The Dalles Lock and Dam AGENCY: Coast Guard, DHS. ACTION: Temporary... Columbia River in the vicinity of The Dalles Lock and Dam while the Army Corps of Engineers completes...; Columbia River, The Dalles Lock and Dam (a) Location. The following is a safety zone: All waters of the...

  13. Columbia Debris

    NASA Image and Video Library

    2003-05-06

    George D'Heilly and John Cassanto, scientists with Instrumentation Technology Associates, Inc., display for the media part of the apparatus recovered during the search for Columbia debris. It was part of the Commercial ITA Biomedical Experiments payload on mission STS-107 that included the Growth of Bacterial Biofilm on Surfaces during Spaceflight (GOBBSS) experiment and crystals grown for cancer research. The GOBBSS experiment was sponsored by the Planetary Society, with joint participation of an Israeli and a Palestinian student, and developed by the Israeli Aerospace Medical Institute and JSC Astrobiology Center.

  14. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    NASA Astrophysics Data System (ADS)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at departmental, university, national and international level. Signed: Ted Hecht, University of Michigan. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame); Baha Balantekin (University of Wisconsin); Bruce Barrett (University of Arizona); Umit Catalyurek (Ohio State University); David Dean (Oak Ridge National Laboratory); Jutta Escher, Chair (Lawrence Livermore National Laboratory); Jorge Hirsch (UNAM, Mexico); David Rowe (University of Toronto); Brad Sherrill (Michigan State University); Joel Tohline (Louisiana State University); Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University); Mark Caprio (University of Notre Dame); Tomas Dytrych (Louisiana State University); Ana Georgieva (INRNE, Bulgaria); Kristina Launey, Co-chair (Louisiana State University); Gabriella Popa (Ohio University Zanesville); James Vary, Co-chair (Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University); Charlie Rasco (Louisiana State University); Karen Richard, Coordinator (Louisiana State University).

  15. Mid Columbia sturgeon incubation and rearing study

    USGS Publications Warehouse

    Parsley, Michael J.; Kofoot, Eric; Blubaugh, J

    2011-01-01

    This report describes the results from the second year of a three-year investigation of the effects of different thermal regimes on incubation and rearing of early life stages of white sturgeon Acipenser transmontanus. The Columbia River has been significantly altered by the construction of dams, resulting in annual flows and water temperatures that differ from historical levels. White sturgeon are known to spawn in two very distinct sections of the Columbia River in British Columbia, Canada, both located immediately downstream of hydropower facilities. The thermal regimes differ substantially between these two areas. The general approach of this study was to incubate and rear white sturgeon early life stages under two thermal regimes: one mimicking the current, cool water regime of the Columbia River downstream from Revelstoke Dam, and one mimicking a warmer regime similar to conditions found on the Columbia River at the international border. Second-year results suggest that thermal regimes during incubation influence rate of egg development and size at hatch. Eggs incubated under the warm thermal regime hatched sooner than those incubated under the cool thermal regime. Mean length of free embryos at hatch was significantly different between thermal regimes, with free embryos from the warm thermal regime being longer at hatch. However, free embryos from the cool thermal regime had a significantly higher mean weight at hatch. This is in contrast with results obtained during 2009. The rearing trials revealed that growth of fish reared in the cool thermal regime was substantially less than growth of fish reared in the warm thermal regime. The magnitude of mortality was greatest in the warm thermal regime prior to initiation of exogenous feeding, but chronic low levels of mortality in the cool thermal regime were higher throughout the period. The starvation trials showed that the fish in the warm thermal regime exhausted their yolk reserves faster

  16. Development of the general interpolants method for the CYBER 200 series of supercomputers

    NASA Technical Reports Server (NTRS)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  17. Massively parallel algorithm and implementation of RI-MP2 energy calculation for peta-scale many-core supercomputers.

    PubMed

    Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito

    2016-11-15

    A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Some improvements from the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been performed: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
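
    A dual-level hierarchical MPI parallelization can be expressed by splitting the world communicator into groups, with a second communicator spanning peers across groups. The sketch below is a generic mpi4py illustration of that idea, not the authors' implementation; group_size, the block count, and the placeholder partial energy are assumptions, and the world size is assumed to be a multiple of group_size.

```python
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

group_size = 4                  # assumed ranks per group; size % group_size == 0
group_id = rank // group_size
n_groups = size // group_size

# Lower level: ranks within one group share the work of one integral batch.
intra = world.Split(color=group_id, key=rank)
# Upper level: rank r of every group, used for cross-group exchanges.
inter = world.Split(color=intra.Get_rank(), key=rank)

# Hypothetical static distribution of integral blocks over groups.
my_blocks = [b for b in range(96) if b % n_groups == group_id]

# ... intra-group computation of partial energies would go here;
# a placeholder value stands in for the computed partial sum:
e_partial = 0.001 * len(my_blocks)
e_total = world.allreduce(e_partial, op=MPI.SUM)
if rank == 0:
    print("E_MP2 (placeholder):", e_total)
```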

  18. Integration of Titan supercomputer at OLCF with ATLAS Production System

    NASA Astrophysics Data System (ADS)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It enables standard ATLAS production jobs to run on otherwise unused Titan resources (backfill). The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, and future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
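
    The lightweight MPI wrapper idea, one rank per node with each rank running an independent single-node payload, can be sketched in a few lines of mpi4py. The launcher command, script name, and argument below are hypothetical; the point is only that a single N-node backfill allocation executes N serial workloads in parallel.

```python
# Hypothetical wrapper: launch with one MPI rank per allocated node, e.g.
#   aprun -n <n_nodes> python wrapper.py
from mpi4py import MPI
import subprocess

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank runs one independent single-node payload, so one N-node backfill
# allocation executes N serial workloads in parallel. Script name and flag
# are illustrative assumptions.
result = subprocess.run(["./run_payload.sh", f"--job-index={rank}"])

# Rank 0 collects exit codes so failed payloads can be rescheduled.
codes = comm.gather(result.returncode, root=0)
if rank == 0:
    failed = [i for i, c in enumerate(codes) if c != 0]
    print(f"{len(codes) - len(failed)} succeeded; to retry: {failed}")
```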

  19. 33 CFR 110.228 - Columbia River, Oregon and Washington.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Columbia River, Oregon and... SECURITY ANCHORAGES ANCHORAGE REGULATIONS Anchorage Grounds § 110.228 Columbia River, Oregon and Washington... Astoria, Oregon, at latitude 46°12′00.79″ N, longitude 123°49′55.40″ W; thence continuing easterly to...

  20. 33 CFR 110.228 - Columbia River, Oregon and Washington.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Columbia River, Oregon and... SECURITY ANCHORAGES ANCHORAGE REGULATIONS Anchorage Grounds § 110.228 Columbia River, Oregon and Washington... Astoria, Oregon, at latitude 46°12′00.79″ N, longitude 123°49′55.40″ W; thence continuing easterly to...

  1. 33 CFR 110.228 - Columbia River, Oregon and Washington.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Columbia River, Oregon and... SECURITY ANCHORAGES ANCHORAGE REGULATIONS Anchorage Grounds § 110.228 Columbia River, Oregon and Washington... Astoria, Oregon, at latitude 46°12′00.79″ N, longitude 123°49′55.40″ W; thence continuing easterly to...

  2. 33 CFR 110.228 - Columbia River, Oregon and Washington.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Columbia River, Oregon and... SECURITY ANCHORAGES ANCHORAGE REGULATIONS Anchorage Grounds § 110.228 Columbia River, Oregon and Washington... Astoria, Oregon, at latitude 46°12′00.79″ N, longitude 123°49′55.40″ W; thence continuing easterly to...

  3. 33 CFR 110.228 - Columbia River, Oregon and Washington.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Columbia River, Oregon and... SECURITY ANCHORAGES ANCHORAGE REGULATIONS Anchorage Grounds § 110.228 Columbia River, Oregon and Washington... Astoria, Oregon, at latitude 46°12′00.79″ N, longitude 123°49′55.40″ W; thence continuing easterly to...

  4. British Columbia log export policy: historical review and analysis.

    Treesearch

    Craig W. Shinn

    1993-01-01

    Log exports have been restricted in British Columbia for over 100 years. The intent of the restriction is to use the timber in British Columbia to encourage development of forest industry, employment, and well-being in the Province. Logs have been exempted from the within-Province manufacturing rule at various times, in varying amounts, for different reasons, and by...

  5. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    NASA Astrophysics Data System (ADS)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
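
    For readers unfamiliar with the solver being optimized: conjugate gradient iterates matrix-vector products until the residual norm converges. The generic, real-valued NumPy sketch below illustrates only the algorithm's structure; MILC's staggered CG operates on complex lattice fermion fields with a stencil-based operator, which is where the QPhiX and QUDA optimizations apply.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=1000):
    """Generic CG for a symmetric positive-definite operator.

    apply_A: function computing the matrix-vector product A @ x; in lattice
    QCD codes the analogous (sparse, stencil-like) fermion-matrix application
    dominates run time and is the main target of vectorization.
    """
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example with a small SPD matrix:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = conjugate_gradient(lambda v: A @ v, np.array([1.0, 2.0]))
```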

  6. Rocks of the Columbia Hills

    USGS Publications Warehouse

    Squyres, S. W.; Arvidson, R. E.; Blaney, D.L.; Clark, B. C.; Crumpler, L.; Farrand, W. H.; Gorevan, S.; Herkenhoff, K. E.; Hurowitz, J.; Kusack, A.; McSween, H.Y.; Ming, D. W.; Morris, R.V.; Ruff, S.W.; Wang, A.; Yen, A.

    2006-01-01

    The Mars Exploration Rover Spirit has identified five distinct rock types in the Columbia Hills of Gusev crater. Clovis Class rock is a poorly sorted clastic rock that has undergone substantial aqueous alteration. We interpret it to be aqueously altered ejecta deposits formed by impacts into basaltic materials. Wishstone Class rock is also a poorly sorted clastic rock that has a distinctive chemical composition that is high in Ti and P and low in Cr. Wishstone Class rock may be pyroclastic or impact in origin. Peace Class rock is a sedimentary material composed of ultramafic sand grains cemented by significant quantities of Mg- and Ca-sulfates. Peace Class rock may have formed when water briefly saturated the ultramafic sands and evaporated to allow precipitation of the sulfates. Watchtower Class rocks are similar chemically to Wishstone Class rocks and have undergone widely varying degrees of near-isochemical aqueous alteration. They may also be ejecta deposits, formed by impacts into Wishstone-rich materials and altered by small amounts of water. Backstay Class rocks are basalt/trachybasalt lavas that were emplaced in the Columbia Hills after the other rock classes were, either as impact ejecta or by localized volcanic activity. The geologic record preserved in the rocks of the Columbia Hills reveals a period very early in Martian history in which volcanic materials were widespread, impact was a dominant process, and water was commonly present. Copyright 2006 by the American Geophysical Union.

  7. A premerger profile of Columbia and HCA hospitals.

    PubMed

    McCue, M J

    1996-01-01

    This article profiles the premerger marketing, management, and mission characteristics of the combined Columbia and Hospital Corporation of America (HCA) entity relative to local market hospitals. The findings show that the Columbia/HCA hospitals had fewer Medicaid patients, lower proportion of outpatient revenues, higher operating cash flow per bed, lower occupancy rates, lower salary expense per discharge, higher debt to total assets, fewer beds, and a higher case-mix index relative to local competitors.

  8. Mercury concentrations in Pacific lamprey (Entosphenus tridentatus) and sediments in the Columbia River basin: Mercury in Columbia River Pacific lamprey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linley, Timothy; Krogstad, Eirik; Mueller, Robert

    2016-06-21

    We investigated mercury accumulation in Pacific lamprey and sediments in the Columbia River basin. Mercury concentrations in larval lamprey differed significantly among sample locations (P < 0.001) and were correlated with concentrations in sediments (r² = 0.83), whereas adult concentrations were highly variable (range 0.1–9.5 µg/g) and unrelated to holding time after collection. The results suggest that Pacific lamprey in the Columbia River basin may be exposed to mercury levels that have adverse ecological effects.

  9. STS-87 Payload Specialist Leonid Kadenyuk chats with NASA Administrator Daniel Goldin shortly after

    NASA Technical Reports Server (NTRS)

    1997-01-01

    STS-87 Payload Specialist Leonid Kadenyuk of the National Space Agency of Ukraine (NSAU), at left, chats with NASA Administrator Daniel Goldin shortly after the landing of Columbia at Kennedy Space Center. Looking on is back-up Payload Specialist Yaroslav Pustovyi, also of NSAU. STS-87 concluded its mission with a main gear touchdown at 7:20:04 a.m. EST Dec. 5, at KSC's Shuttle Landing Facility Runway 33, drawing the 15-day, 16-hour and 34-minute-long mission of 6.5 million miles to a close. Also onboard the orbiter were Commander Kevin Kregel; Pilot Steven Lindsey; and Mission Specialists Winston Scott, Kalpana Chawla, Ph.D., and Takao Doi, Ph.D., of the National Space Development Agency of Japan. During the 88th Space Shuttle mission, the crew performed experiments on the United States Microgravity Payload-4 and pollinated plants as part of the Collaborative Ukrainian Experiment. This was the 12th landing for Columbia at KSC and the 41st KSC landing in the history of the Space Shuttle program.

  10. Wind energy on the horizon in British Columbia. A review and evaluation of the British Columbia wind energy planning framework

    NASA Astrophysics Data System (ADS)

    Day, Jason

    This study examines the wind energy planning frameworks of ten North American jurisdictions, drawing lessons that British Columbia could use to build on its current model, which has been criticized for its limited scope and restriction of local government powers. This study contributes to similar studies conducted by Kimrey (2006), Longston (2006), and Eriksen (2009). It concludes that including wind resource zones delineated through strategic environmental assessment, programme assessment, and research-oriented studies could improve the current British Columbia planning framework. The framework should also strengthen its bat impact assessment practices and incorporate habitat compensation. This research also builds upon Rosenberg's (2008) wind energy planning framework typologies. I conclude that the typology utilized in Texas should be employed in British Columbia in order to facilitate the use of wind power. The only adaptation needed is the establishment of a cross-jurisdictional review committee for project assessment to address concerns about local involvement and site-specific environmental and social concerns.

  11. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE PAGES

    Wang, Bei; Ethier, Stephane; Tang, William; ...

    2017-06-29

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization, have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
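
    To ground the PIC terminology, the following is a minimal serial 1D electrostatic PIC cycle in normalized units (all parameters illustrative). GTC-P itself is a 5D gyrokinetic code with multi-level domain and particle decomposition; this sketch only shows the basic deposit / field-solve / gather / push loop that such codes parallelize.

```python
import numpy as np

ng, L, npart, dt = 64, 2 * np.pi, 10000, 0.1
dx = L / ng
x = np.random.uniform(0, L, npart)          # particle positions
v = np.random.randn(npart)                  # particle velocities
q_over_m, weight = -1.0, L / npart          # per-particle charge weight

for step in range(100):
    # 1. Deposit charge to the grid with linear (cloud-in-cell) weighting.
    g = x / dx
    il = np.floor(g).astype(int) % ng
    ir = (il + 1) % ng
    wr = g - np.floor(g)
    rho = (np.bincount(il, (1 - wr) * weight, minlength=ng)
           + np.bincount(ir, wr * weight, minlength=ng)) / dx
    rho -= rho.mean()                       # neutralizing background
    # 2. Field solve: d^2 phi / dx^2 = -rho via FFT.
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                              # avoid divide-by-zero; mean rho is 0
    phi = np.fft.ifft(np.fft.fft(rho) / k**2).real
    E = -np.gradient(phi, dx)
    # 3. Gather the field to particles and push (leapfrog).
    Ep = (1 - wr) * E[il] + wr * E[ir]
    v += q_over_m * Ep * dt
    x = (x + v * dt) % L
```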

  12. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization, have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.

  13. Anomaly Analysis: NASA's Engineering and Safety Center Checks Recurring Shuttle Glitches

    NASA Technical Reports Server (NTRS)

    Morring, Frank, Jr.

    2004-01-01

    The NASA Engineering and Safety Center (NESC), set up in the wake of the Columbia accident to backstop engineers in the space shuttle program, is reviewing hundreds of recurring anomalies that the program had determined don't affect flight safety to see if in fact they might. The NESC is expanding its support to other programs across the agency as well. The effort, which will later extend to the International Space Station (ISS), is a principal part of the attempt to overcome the normalization of deviance--a situation in which organizations proceeded as if nothing was wrong in the face of evidence that something was wrong--cited by sociologist Diane Vaughan as contributing to both space shuttle disasters.

  14. 78 FR 3893 - Columbia Gas Transmission, LLC; Notice of Request Under Blanket Authorization

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-17

    ... any natural gas service; however, Columbia would terminate service to one free gas customer pursuant to the terms of the lease agreement between the customer and Columbia. Columbia estimates that it... contact FERC Online Support at FERCOnlineSupport@ferc.gov or call toll-free at (866) 206-3676, or, for...

  15. Union-Active School Librarians and School Library Advocacy: A Modified Case Study of the British Columbia Teacher-Librarians' Association and the British Columbia Teachers' Federation

    ERIC Educational Resources Information Center

    Ewbank, Ann Dutton

    2015-01-01

    This modified case study examines how the members of the British Columbia Teacher-Librarians' Association (BCTLA), a Provincial Specialist Association (PSA) of the British Columbia Teachers' Federation (BCTF), work together to advocate for strong school library programs headed by a credentialed school librarian. Since 2002, despite nullification…

  16. Return to the river: strategies for salmon restoration in the Columbia River Basin.

    Treesearch

    Richard N. Williams; Jack A. Standford; James A. Lichatowich; William J. Liss; Charles C. Coutant; Willis E. McConnaha; Richard R. Whitney; Phillip R. Mundy; Peter A. Bisson; Madison S. Powell

    2006-01-01

    The Columbia River today is a great "organic machine" (White 1995) that dominates the economy of the Pacific Northwest. Even though natural attributes remain—for example, salmon production in Washington State's Hanford Reach, the only unimpounded reach of the mainstem Columbia River—the Columbia and Snake River mainstems are dominated...

  17. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  18. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  19. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  20. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  1. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  2. Melvin Burke, Ike Gillam, Fitz Fulton, and Deke Slayton give the Space Shuttle Columbia a humorous sendoff before its ferry flight back to KSC in Florida

    NASA Image and Video Library

    1981-04-28

    After completing its first orbital mission with a landing at Edwards Air Force Base on April 14, 1981, Space Shuttle Columbia received a humorous sendoff before its ferry flight atop a modified 747 back to the Kennedy Space Center in Florida. Holding the sign are, left to right: Melvin Burke, DFRC Orbital Flight Test (OFT) Program Manager; Isaac 'Ike' Gillam, DFRC Center Director; Fitzhugh 'Fitz' L. Fulton Jr., NASA DFRC 747 SCA Pilot; and Donald K. 'Deke' Slayton, JSC OFT Project Manager.

  3. BCASP and the Evolution of School Psychology in British Columbia

    ERIC Educational Resources Information Center

    Agar, Douglas J.

    2016-01-01

    Since 1992, the British Columbia Association of School Psychologists (BCASP) has been the professional body for school psychologists in British Columbia. In the intervening 24 years, BCASP has been very successful in performing the dual roles of a certifying body and a professional development organization for school psychologists in British…

  4. Columbia University to Open Network of International Collaborative-Research Centers

    ERIC Educational Resources Information Center

    Labi, Aisha

    2009-01-01

    In what university officials say represents a new approach to the internationalization of higher education, Columbia University is building a network of six to eight research institutes in capitals around the world. The Columbia Global Centers, as they are called, are designed for faculty members and students from various disciplines to…

  5. 33 CFR 117.869 - Columbia River.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Oregon § 117.869 Columbia River. (a) The draws of the... the Burlington Northern Santa Fe railroad bridge, mile 201.2, between Celilo, Oregon, and Wishram...

  6. 33 CFR 117.869 - Columbia River.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Oregon § 117.869 Columbia River. (a) The draws of the... the Burlington Northern Santa Fe railroad bridge, mile 201.2, between Celilo, Oregon, and Wishram...

  7. 33 CFR 117.869 - Columbia River.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Oregon § 117.869 Columbia River. (a) The draws of the... the Burlington Northern Santa Fe railroad bridge, mile 201.2, between Celilo, Oregon, and Wishram...

  8. 33 CFR 117.869 - Columbia River.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Oregon § 117.869 Columbia River. (a) The draws of the... the Burlington Northern Santa Fe railroad bridge, mile 201.2, between Celilo, Oregon, and Wishram...

  9. 33 CFR 117.869 - Columbia River.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Oregon § 117.869 Columbia River. (a) The draws of the... the Burlington Northern Santa Fe railroad bridge, mile 201.2, between Celilo, Oregon, and Wishram...

  10. Supercomputer description of human lung morphology for imaging analysis.

    PubMed

    Martonen, T B; Hwang, D; Guan, X; Fleming, J S

    1998-04-01

    A supercomputer code that describes the three-dimensional branching structure of the human lung has been developed. The algorithm was written for the Cray C94. In our simulations, the human lung was divided into a matrix containing discrete volumes (voxels) so as to be compatible with analyses of SPECT images. The matrix has 3840 voxels. The matrix can be segmented into transverse, sagittal and coronal layers analogous to human subject examinations. The compositions of individual voxels were identified by the type and respective number of airways present. The code provides a mapping of the spatial positions of the almost 17 million airways in human lungs and unambiguously assigns each airway to a voxel. Thus, the clinician and research scientist in the medical arena have a powerful new tool to be used in imaging analyses. The code was designed to be integrated into diverse applications, including the interpretation of SPECT images, the design of inhalation exposure experiments and the targeted delivery of inhaled pharmacologic drugs.
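
    The voxel-assignment step can be illustrated with a short sketch: given airway coordinates and the lung's bounding box, each airway is binned into exactly one voxel of a discrete matrix. The function name and the 16 x 16 x 15 grid shape below are assumptions chosen so the voxel count matches the 3840-voxel matrix described above; this is not the Cray code.

```python
import numpy as np

def assign_airways_to_voxels(coords, bounds, shape=(16, 16, 15)):
    """Map airway centroid coordinates into a discrete voxel matrix.

    coords: (n_airways, 3) array of x, y, z positions, mm.
    bounds: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) enclosing the lung.
    shape:  voxel grid dimensions (hypothetical; 16 * 16 * 15 = 3840).
    Returns an integer array counting airways per voxel.
    """
    counts = np.zeros(shape, dtype=np.int64)
    idx = []
    for axis in range(3):
        lo, hi = bounds[axis]
        i = ((coords[:, axis] - lo) / (hi - lo) * shape[axis]).astype(int)
        idx.append(np.clip(i, 0, shape[axis] - 1))  # unambiguous assignment
    np.add.at(counts, tuple(idx), 1)
    return counts
```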

  11. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    NASA Astrophysics Data System (ADS)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  12. 76 FR 6525 - Airworthiness Directives; Cessna Aircraft Company (Type Certificate Previously Held by Columbia...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-07

    ... Airworthiness Directives; Cessna Aircraft Company (Type Certificate Previously Held by Columbia Aircraft... following new AD: 2011-03-04 Cessna Aircraft Company (Type Certificate Previously Held by Columbia Aircraft... the following Cessna Aircraft Company (type certificate previously held by Columbia Aircraft...

  13. NASA Dryden research pilot Gordon Fullerton flies his final mission in NASA F/A-18B #852 in formation with NASA F/A-18A #850 on Dec. 21, 2007.

    NASA Image and Video Library

    2007-12-21

    Long-time NASA Dryden research pilot and former astronaut C. Gordon Fullerton capped an almost 50-year flying career, including more than 38 years with NASA, with a final flight in a NASA F/A-18 on Dec. 21, 2007. Fullerton and Dryden research pilot Jim Smolka flew a 90-minute pilot proficiency formation aerobatics flight with another Dryden F/A-18 and a Dryden T-38 before concluding with two low-level formation flyovers of Dryden before landing. Fullerton was honored with a water-cannon spray arch provided by two fire trucks from the Edwards Air Force Base fire department as he taxied the F/A-18 up to the Dryden ramp, and was then greeted by his wife Marie and several hundred Dryden staff after his final flight. Fullerton began his flying career with the U.S. Air Force in 1958 after earning bachelor's and master's degrees in mechanical engineering from the California Institute of Technology. Initially trained as a fighter pilot, he later transitioned to multi-engine bombers and became a bomber operations test pilot after attending the Air Force Aerospace Research Pilot School at Edwards Air Force Base, Calif. He then was assigned to the flight crew for the planned Air Force Manned Orbital Laboratory in 1966. Upon cancellation of that program, the Air Force assigned Fullerton to NASA's astronaut corps in 1969. He served on the support crews for the Apollo 14, 15, 16 and 17 lunar missions, and was later assigned to one of the two flight crews that piloted the space shuttle prototype Enterprise during the Approach and Landing Test program at Dryden. He then logged some 382 hours in space when he flew on two early space shuttle missions, STS-3 on Columbia in 1982 and STS-51F on Challenger in 1985. He joined the flight crew branch at NASA Dryden after leaving the astronaut corps in 1986. During his 21 years at Dryden, Fullerton was project pilot on a number of high-profile research efforts, including the Propulsion Controlled Aircraft, the high-speed landing tests of sp

  14. Impact on the Columbia River of an outburst of Spirit Lake

    USGS Publications Warehouse

    Sikonia, W.G.

    1985-01-01

    A one-dimensional sediment-transport computer model was used to study the effects of an outburst of Spirit Lake on the Columbia River. According to the model, flood sediment discharge to the Columbia from the Cowlitz would form a blockage to a height of 44 feet above the current streambed of the Columbia River, corresponding to a new streambed elevation of -3 feet, that would impound the waters of the Columbia River. For an average flow of 233,000 cubic feet per second in that river, water surface elevations would continue to increase for 16 days after the blockage had been formed. The river elevation at the Trojan nuclear power plant, 5 miles upstream of the Cowlitz River, would rise to 32 feet, compared to a critical elevation of 45 feet, above which the plant would be flooded. For comparison, the Columbia River at average flow without the blockage has an elevation at this location of 6 feet. Correspondingly high water surface elevations would occur along the river to Bonneville Dam, with that at Portland, Oregon, for example, rising also to 32 feet, compared to 10 feet without the blockage. (USGS)

  15. ARC-2009-ACD09-0208-029

    NASA Image and Video Library

    2009-09-15

    Obama Administration launches Cloud Computing Initiative at Ames Research Center. Vivek Kundra, White House Chief Federal Information Officer (right), and Lori Garver, NASA Deputy Administrator (left), get a tour and demonstration of NASA's supercomputing center hyperwall.

  16. 77 FR 53141 - Drawbridge Operation Regulation; Columbia River, Vancouver, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    ... lift-spans. This deviation allows height-restricted lifts which will reduce the vertical clearance... which cross the Columbia River at mile 106.5 only be required to lift to a reduced height of 130 feet above Columbia River Datum for a 30 day period. The height restricted lifts are necessary to facilitate...

  17. Closely Watched Tenure Case at Columbia University Is Still Unsettled

    ERIC Educational Resources Information Center

    Wilson, Robin; Byrne, Richard

    2008-01-01

    This article reports on an unsettled tenure case at Columbia University. The high-profile and controversial tenure bid of Joseph A. Massad, a Palestinian-American professor of Arab politics, was turned down by Columbia University's provost, Alan Brinkley. Mr. Massad's case follows closely on two other high-profile tenure bids affected by the…

  18. Reconstructed Paleo-topography of the Columbia Hills, Mars

    NASA Astrophysics Data System (ADS)

    Cole, S. B.; Watters, W. A.; Aron, F.; Squyres, S. W.

    2013-12-01

    From June 2004 through March 2010, the Mars Exploration Rover Spirit conducted a detailed campaign examining the Columbia Hills of Gusev Crater. In addition to mineralogical and chemical investigations, Spirit's stereo panoramic (Pancam) and navigation (Navcam) cameras obtained over 7,000 images of geologic targets along the West Spur of the Columbia Hills and Husband Hill, the highest peak. We have analyzed the entirety of this dataset, which includes stereo coverage of several outcrop exposures with apparent bedding. We have measured the bedding plane orientations of hundreds of fine-scale (~1-100 cm) features on all of the potentially in-place outcrops using Digital Terrain Models (DTMs) derived from the rover's Pancam stereo image data, and mapped these orientations on a regional HiRISE image and DTM. Assuming that the bedding material was deposited conformably on the topography at the time of emplacement, we reconstruct the paleo-topography of the Columbia Hills. Our reconstructed paleo-topography is similar to the modern shape of Husband Hill, but with steeper slopes, consistent with a substantial amount of erosion since deposition. The Columbia Hills are an irregular, nearly-triangular edifice of uncertain origin, situated near the center of the 160-km-diameter crater and hypothesized to be either the remnant of a central peak structure or overlapping crater rims. They span ~6.6 km in the northerly direction by ~3.6 km in the easterly direction, and rise 90 m above the basaltic plains that fill the floor of Gusev Crater and embay the Hills. The topography is as irregular as the perimeter, and is cut by numerous valleys of varying lengths, widths, and directional trends. Along the traverse, Spirit examined several rock classes as defined by elemental abundances from the Alpha Particle X-ray Spectrometer (APXS) and identified remotely by the Miniature Thermal Emission Spectrometer (Mini-TES). Unlike the Gusev Plains, the rocks of the Columbia Hills show
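
    Measuring a bedding plane orientation from DTM points is, at its core, a total-least-squares plane fit. The sketch below (illustrative, not the authors' pipeline) fits a plane to a point cloud via SVD and converts the normal into dip and dip azimuth.

```python
import numpy as np

def bedding_orientation(points):
    """Best-fit plane through 3D outcrop points; returns (dip, dip_azimuth).

    points: (n, 3) array of east, north, up coordinates from a stereo DTM.
    The plane normal is the singular vector with the smallest singular
    value of the centered point cloud (total least squares).
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    if n[2] < 0:                          # orient the normal upward
        n = -n
    dip = np.degrees(np.arccos(n[2]))     # angle of the plane from horizontal
    dip_azimuth = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # downdip direction
    return dip, dip_azimuth
```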

  19. Hyperspectral analysis of columbia spotted frog habitat

    USGS Publications Warehouse

    Shive, J.P.; Pilliod, D.S.; Peterson, C.R.

    2010-01-01

    Wildlife managers increasingly are using remotely sensed imagery to improve habitat delineations and sampling strategies. Advances in remote sensing technology, such as hyperspectral imagery, provide more information than previously was available with multispectral sensors. We evaluated accuracy of high-resolution hyperspectral image classifications to identify wetlands and wetland habitat features important for Columbia spotted frogs (Rana luteiventris) and compared the results to multispectral image classification and United States Geological Survey topographic maps. The study area spanned 3 lake basins in the Salmon River Mountains, Idaho, USA. Hyperspectral data were collected with an airborne sensor on 30 June 2002 and on 8 July 2006. A 12-year comprehensive ground survey of the study area for Columbia spotted frog reproduction served as validation for image classifications. Hyperspectral image classification accuracy of wetlands was high, with a producer's accuracy of 96% (44 wetlands) correctly classified with the 2002 data and 89% (41 wetlands) correctly classified with the 2006 data. We applied habitat-based rules to delineate breeding habitat from other wetlands, and successfully predicted 74% (14 wetlands) of known breeding wetlands for the Columbia spotted frog. Emergent sedge microhabitat classification showed promise for directly predicting Columbia spotted frog egg mass locations within a wetland by correctly identifying 72% (23 of 32) of known locations. Our study indicates hyperspectral imagery can be an effective tool for mapping spotted frog breeding habitat in the selected mountain basins. We conclude that this technique has potential for improving site selection for inventory and monitoring programs conducted across similar wetland habitat and can be a useful tool for delineating wildlife habitats. © 2010 The Wildlife Society.
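
    For readers outside remote sensing, producer's accuracy is the fraction of ground-truth sites of a class that the classifier labels correctly (the complement of omission error). A minimal sketch follows; the confusion-matrix counts are hypothetical, chosen so the wetland row reproduces a value near the 96% reported above.

```python
import numpy as np

def producers_accuracy(confusion, k):
    """Producer's accuracy for class k: correctly classified reference sites
    divided by all reference sites of that class. Rows = reference (ground
    truth), columns = classifier output."""
    return confusion[k, k] / confusion[k, :].sum()

# Hypothetical counts (wetland row: 44 of 46 reference wetlands found):
cm = np.array([[44,  2],    # reference wetlands
               [ 5, 60]])   # reference non-wetlands
print(f"wetland producer's accuracy: {producers_accuracy(cm, 0):.0%}")
```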

  20. The Rocks of the Columbia Hills

    NASA Technical Reports Server (NTRS)

    Squyres, Steven W.; Arvidson, Raymond E.; Blaney, Diana L.; Clark, Benton C.; Crumpler, Larry; Farrand, William H.; Gorevan, Stephen; Herkenhoff, Kenneth; Hurowitz, Joel; Kusack, Alastair; et al.

    2006-01-01

    The Mars Exploration Rover Spirit has identified five distinct rock types in the Columbia Hills of Gusev crater. Clovis Class rock is a poorly-sorted clastic rock that has undergone substantial aqueous alteration. We interpret it to be aqueously-altered ejecta deposits formed by impacts into basaltic materials. Wishstone Class rock is also a poorly-sorted clastic rock that has a distinctive chemical composition that is high in Ti and P and low in Cr. Wishstone Class rock may be pyroclastic in origin. Peace Class rock is a sedimentary material composed of ultramafic sand grains cemented by significant quantities of Mg- and Ca-sulfates. Peace Class rock may have formed when water briefly saturated the ultramafic sands, and evaporated to allow precipitation of the sulfates. Watchtower Class rocks are similar chemically to Wishstone Class rocks, and have undergone widely varying degrees of near-isochemical aqueous alteration. They may also be ejecta deposits, formed by impacts into Wishstone-rich materials and altered by small amounts of water. Backstay Class rocks are basalt/trachybasalt lavas that were emplaced in the Columbia Hills after the other rock classes were, either as impact ejecta or by localized volcanic activity. The geologic record preserved in the rocks of the Columbia Hills reveals a period very early in martian history in which volcanic materials were widespread, impact was a dominant process, and water was commonly present.

  1. Columbia Terminal Railroad (COLT) feasibility analysis

    DOT National Transportation Integrated Search

    2009-06-01

    The Missouri Department of Transportation partnered with public agencies and a private company to determine expansion feasibility of intermodal freight movement through the Columbia Terminal Railroad (COLT) in Central Missouri. Businesses and shipper...

  2. The Columbia University Management Program.

    ERIC Educational Resources Information Center

    Yavarkovsky, Jerome; Haas, Warren J.

    In 1971, a management consulting firm undertook a case study of the Columbia University libraries to improve library performance by reviewing and strengthening the organization and recasting staff composition and deployment patterns. To implement the study's recommendations, an administrative structure was proposed which would emphasize functional…

  3. 75 FR 41762 - Safety Zone; Annual Kennewick, WA, Columbia Unlimited Hydroplane Races, Kennewick, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-19

    ...-AA00 Safety Zone; Annual Kennewick, WA, Columbia Unlimited Hydroplane Races, Kennewick, WA AGENCY..., Columbia Unlimited Hydroplane Races'' also known as the Tri-City Water Follies Hydroplane Races. The safety... Association hosts annual hydroplane races on the Columbia River in Kennewick, Washington. The Association is...

  4. 11 CFR 108.8 - Exemption for the District of Columbia.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 11 Federal Elections 1 2010-01-01 2010-01-01 false Exemption for the District of Columbia. 108.8 Section 108.8 Federal Elections FEDERAL ELECTION COMMISSION GENERAL FILING COPIES OF REPORTS AND STATEMENTS WITH STATE OFFICERS (2 U.S.C. 439) § 108.8 Exemption for the District of Columbia. Any copy of a...

  5. American shad in the Columbia River

    USGS Publications Warehouse

    Petersen, J.H.; Hinrichsen, R.A.; Gadomski, D.M.; Feil, D.H.; Rondorf, D.W.

    2003-01-01

    American shad Alosa sapidissima from the Hudson River, New York, were introduced into the Sacramento River, California, in 1871 and were first observed in the Columbia River in 1876. American shad returns to the Columbia River increased greatly between 1960 and 1990, and recently 2-4 million adults have been counted per year at Bonneville Dam, Oregon and Washington State (river kilometer 235). The total return of American shad is likely much higher than this dam count. Returning adults migrate as far as 600 km up the Columbia and Snake rivers, passing as many as eight large hydroelectric dams. Spawning occurs primarily in the lower river and in several large reservoirs. A small sample found returning adults were 2-6 years old and about one-third of adults were repeat spawners. Larval American shad are abundant in plankton and in the nearshore zone. Juvenile American shad occur throughout the water column during night, but school near the bottom or inshore during day. Juveniles consume a variety of zooplankton, but cyclopoid copepods were 86% of the diet by mass. Juveniles emigrate from the river from August through December. Annual exploitation of American shad by commercial and recreational fisheries combined is near 9% of the total count at Bonneville Dam. The success of American shad in the Columbia River is likely related to successful passage at dams, good spawning and rearing habitats, and low exploitation. The role of American shad within the aquatic community is poorly understood. We speculate that juveniles could alter the zooplankton community and may supplement the diet of resident predators. Data, however, are lacking or sparse in some areas, and more information is needed on the role of larval and juvenile American shad in the food web, factors limiting adult returns, ocean distribution of adults, and interactions between American shad and endangered or threatened salmonids throughout the river. © 2003 by the American Fisheries Society.

  6. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale

    2010-12-01

    Supercomputing Centers (SC's) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys does not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons), and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  7. NASA Supercomputer Improves Prospects for Ocean Climate Research

    NASA Technical Reports Server (NTRS)

    Menemenlis, D.; Hill, C.; Adcroft, A.; Campin, J.-M.; Cheng, B.; Ciotti, B.; Fukumori, I.; Heimbach, P.; Henze, C.; Kohl, A.; et al.

    2005-01-01

    Estimates of ocean circulation constrained by in situ and remotely sensed observations have become routinely available during the past five years, and they are being applied to myriad scientific and operational problems [Stammer et al., 2002]. Under the Global Ocean Data Assimilation Experiment (GODAE), several regional and global estimates have evolved for applications in climate research, seasonal forecasting, naval operations, marine safety, fisheries, the offshore oil industry, coastal management, and other areas. This article reports on recent progress by one effort, the consortium for Estimating the Circulation and Climate of the Ocean (ECCO), toward a next-generation synthesis of ocean and sea-ice data that is global, that covers the full ocean depth, and that permits eddies.
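
    ECCO's production syntheses rely on adjoint-based optimization machinery far beyond what fits here, but the core idea of an observation-constrained estimate can be shown with a one-variable least-squares toy. Everything in the sketch below (values, error assumptions) is hypothetical and is not ECCO code.

        import numpy as np

        # Toy scalar illustration of an observation-constrained estimate:
        # minimize J(x) = (x - x_b)^2 / s_b^2 + sum_i (x - y_i)^2 / s_o^2,
        # the weighted misfit to a background value and to observations.
        x_b, s_b = 10.0, 2.0                     # background estimate and its error (assumed)
        obs      = np.array([12.1, 11.4, 12.8])  # hypothetical observations
        s_o      = 1.0                           # observation error (assumed)

        # Setting dJ/dx = 0 yields the precision-weighted mean:
        w_b, w_o = 1.0 / s_b**2, 1.0 / s_o**2
        x_a = (w_b * x_b + w_o * obs.sum()) / (w_b + w_o * obs.size)

        print(f"background {x_b:.2f} -> analysis {x_a:.2f}")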

  8. 1993-1994 Final technical report for establishing the SECME Model in the District of Columbia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vickers, R.G.

    1995-12-31

    This is the final report for a program to establish the SECME Model in the District of Columbia. The program developed a partnership between the District of Columbia Public Schools, the University of the District of Columbia, the Department of Energy, and SECME. This partnership has demonstrated positive achievement in mathematics and science learning among students in the District of Columbia.

  9. Alternative Fuels Data Center: District of Columbia's Government Fleet Uses a Wide Variety of Alternative Fuels

    Science.gov Websites


  10. KENNEDY SPACE CENTER, FLA. - Analysis of STS-107 experiments recovered during the search for Columbia debris

    NASA Image and Video Library

    2003-05-07

    KENNEDY SPACE CENTER, FLA. - From left, Valerie Cassanto, Instrumentation Technology Associates, Inc., and Dr. Dennis Morrison, NASA Johnson Space Center, analyze one of the experiments carried on mission STS-107. Several experiments were found during the search for Columbia debris. Included in the Commercial ITA Biomedical Experiments payload on mission STS-107 are urokinase cancer research, microencapsulation of drugs, the Growth of Bacterial Biofilm on Surfaces during Spaceflight (GOBBSS), and tin crystal formation.

  11. Scientific Visualization in High Speed Network Environments

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kutler, Paul (Technical Monitor)

    1997-01-01

    In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks in providing an important link to support efficient supercomputing is identified. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.

  12. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems such as earthquake fault zones, volcanoes, and geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims to build a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide interested researchers worldwide with regular access to the noise source maps. It includes the following sub-systems: (1) data acquisition, responsible for collecting raw seismic records from the European seismic networks on a periodic basis; (2) a high-performance noise source mapping application, responsible for generating source maps by cross-correlation of seismic records; (3) back-end infrastructure for the coordination of the various tasks and computations; (4) a front-end Web interface providing the service to end-users; and (5) a data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular the selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, and the design of a general service-oriented architecture for coordinating the various sub-systems.
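
    The measurement at the heart of the mapping, a logarithmic amplitude ratio between the causal and acausal branches of a noise correlation, can be sketched compactly. The synthetic records, lag window, and sampling rate below are assumptions for illustration, not the project's production code.

        import numpy as np

        # Cross-correlate two synthetic noise records, then compare the energy
        # of the positive-lag (causal) and negative-lag (acausal) branches.
        # Asymmetry in this log ratio indicates an uneven distribution of
        # noise sources along the inter-station path.
        rng   = np.random.default_rng(0)
        rec_a = rng.standard_normal(3600)                     # 1 h at 1 Hz (assumed)
        rec_b = np.roll(rec_a, 50) + 0.5 * rng.standard_normal(3600)

        corr = np.correlate(rec_a, rec_b, mode="full")
        lags = np.arange(-rec_a.size + 1, rec_a.size)

        window  = (np.abs(lags) > 10) & (np.abs(lags) < 200)  # assumed signal window
        causal  = np.sum(corr[(lags > 0) & window] ** 2)
        acausal = np.sum(corr[(lags < 0) & window] ** 2)

        print(f"log amplitude ratio: {np.log(causal / acausal):.3f}")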

  13. Application of Electron Microscopy Techniques to the Investigation of Space Shuttle Columbia Accident

    NASA Technical Reports Server (NTRS)

    Shah, Sandeep

    2005-01-01

    This viewgraph presentation gives an overview of the investigation into the breakup of the Space Shuttle Columbia, and addresses the importance of a failure analysis strategy for the investigation of the Columbia accident. The main focus of the presentation is on the usefulness of electron microscopy for analyzing slag deposits from the tiles and reinforced carbon-carbon (RCC) wing panels of the Columbia orbiter.

  14. ARC-2009-ACD09-0208-023

    NASA Image and Video Library

    2009-09-15

    The Obama Administration launches its Cloud Computing Initiative at Ames Research Center. Vivek Kundra, White House Federal Chief Information Officer (right), and Lori Garver, NASA Deputy Administrator (left), receive a tour and demonstration of NASA's supercomputing center hyperwall from Chris Kemp.

  15. The Space Shuttle Columbia Preservation Project - The Debris Loan Process

    NASA Technical Reports Server (NTRS)

    Thurston, Scott; Comer, Jim; Marder, Arnold; Deacon, Ryan

    2005-01-01

    The purpose of this project is to provide a process for loan of Columbia debris to qualified researchers and technical educators to: (1) aid in advanced spacecraft design and flight safety development; (2) advance the study of hypersonic re-entry to enhance ground safety; (3) train and instruct accident investigators; and (4) establish an enduring legacy for Space Shuttle Columbia and her crew.

  16. Synthesis, hydrolysis rates, supercomputer modeling, and antibacterial activity of bicyclic tetrahydropyridazinones.

    PubMed

    Jungheim, L N; Boyd, D B; Indelicato, J M; Pasini, C E; Preston, D A; Alborn, W E

    1991-05-01

    Bicyclic tetrahydropyridazinones, such as 13, where the X substituents are strongly electron-withdrawing groups, were synthesized to investigate their antibacterial activity. These delta-lactams are homologues of the bicyclic pyrazolidinones 15, which were the first compounds not containing a beta-lactam that were reported to bind to penicillin-binding proteins (PBPs). The delta-lactam compounds exhibit poor antibacterial activity despite having reactivity comparable to that of the gamma-lactams. Molecular modeling based on semiempirical molecular orbital calculations on a Cray X-MP supercomputer predicted that the inactivity arises from steric bulk hindering high-affinity binding of the compounds to PBPs, as well as from the high conformational flexibility of the tetrahydropyridazinone ring, which hampers effective alignment of the molecule in the active site. Subsequent PBP binding experiments confirmed that this class of compounds does not bind to PBPs.
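
    Hydrolysis-rate comparisons of this kind typically come down to fitting a pseudo-first-order rate constant to concentration decay. A minimal sketch with synthetic data follows; the times, concentrations, and resulting constant are hypothetical, not values from the paper.

        import numpy as np

        # Pseudo-first-order hydrolysis: C(t) = C0 * exp(-k t), so ln C is
        # linear in t and the slope of a least-squares fit gives -k.
        t_hours = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # assumed sampling times
        conc    = np.array([1.00, 0.78, 0.61, 0.37, 0.14])  # assumed measurements

        slope, intercept = np.polyfit(t_hours, np.log(conc), 1)
        k = -slope                                           # rate constant (1/h)

        print(f"k = {k:.3f} 1/h, half-life = {np.log(2) / k:.2f} h")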

  17. KSC-08pd0112

    NASA Image and Video Library

    2008-02-01

    KENNEDY SPACE CENTER, FLA. -- (From left) NASA Associate Administrator for Space Operations William Gerstenmaier; Evelyn Husband-Thompson, widow of Colonel Rick Husband, who died in the space shuttle Columbia accident; and NASA Administrator Michael Griffin pause in front of the flowers left in remembrance of the fallen heroes. Kennedy marked the NASA Day of Remembrance with special ceremonies. This year the crew of Columbia was remembered in a special way on the day that marked the fifth anniversary of the Columbia accident. Photo credit: NASA/Kim Shiflett

  18. Bedrock geology of the northern Columbia Plateau and adjacent areas

    NASA Technical Reports Server (NTRS)

    Swanson, D. A.; Wright, T. L.

    1978-01-01

    The Columbia Plateau is surrounded by a complex assemblage of highly deformed Precambrian to lower Tertiary continental and oceanic rocks that reflects numerous episodes of continental accretion. The plateau itself comprises the Columbia River Basalt Group, formed between about 16.5 million years B.P. and 6 million years B.P. Eruptions were infrequent between about 14 and 6 million years B.P., allowing time for erosion and deformation between successive outpourings. The present-day courses of much of the Snake River, and parts of the Columbia River, across the plateau date from this time. Basalt produced during this waning activity is chemically and isotopically more heterogeneous than the older flows, reflecting the prolonged period of volcanism.

  19. 33 CFR 100.1303 - Annual Kennewick, Washington, Columbia Unlimited Hydroplane Races.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., Columbia Unlimited Hydroplane Races. 100.1303 Section 100.1303 Navigation and Navigable Waters COAST GUARD... Annual Kennewick, Washington, Columbia Unlimited Hydroplane Races. (a) This regulation is effective each year on the last Tuesday through Sunday in July from 8:30 a.m. local time until the last race is...

  20. 33 CFR 100.1303 - Annual Kennewick, Washington, Columbia Unlimited Hydroplane Races.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., Columbia Unlimited Hydroplane Races. 100.1303 Section 100.1303 Navigation and Navigable Waters COAST GUARD... Annual Kennewick, Washington, Columbia Unlimited Hydroplane Races. (a) This regulation is effective each year on the last Tuesday through Sunday in July from 8:30 a.m. local time until the last race is...