Sample records for NASA high-end computing

  1. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak

    2006-01-01

    High-End Computing (HEC) has always played a major role in meeting the modeling and simulation needs of various NASA missions. With NASA's newest 62 teraflops Columbia supercomputer, HEC is having an even greater impact within the Agency and beyond. Significant cutting-edge science and engineering simulations in the areas of space exploration, Shuttle operations, Earth sciences, and aeronautics research are already occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. The talk will describe how the integrated supercomputing production environment is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions.

  2. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  3. High End Computer Network Testbedding at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Gary, James Patrick

    1998-01-01

    The Earth & Space Data Computing (ESDC) Division at the Goddard Space Flight Center is involved in developing and demonstrating various high-end computer networking capabilities. The ESDC operates several high-end supercomputers, which are used: (1) to run computer simulations of the climate system; (2) to support the Earth and Space Sciences (ESS) project; and (3) to support the Grand Challenge (GC) Science effort, which is aimed at understanding turbulent convection and dynamos in stars. GC research occurs at many sites throughout the country and is enabled, in part, by multiple high-performance network interconnections. The application drivers for high-end computer networking use distributed supercomputing to support virtual reality applications such as TerraVision (a three-dimensional browser of remotely accessed data) and Cave Automatic Virtual Environments (CAVEs). Workstations can access and display data from multiple CAVEs with video servers, which allows for group/project collaborations using a combination of video, data, voice, and shared whiteboarding. The ESDC is also developing and demonstrating a high degree of interoperability between satellite- and terrestrial-based networks. To this end, the ESDC is conducting research and evaluations of new computer networking protocols and related technologies that improve the interoperability of satellite and terrestrial networks. The ESDC is also involved in the Security Proof of Concept Keystone (SPOCK) program sponsored by the National Security Agency (NSA). The SPOCK activity provides a forum for government users and security technology providers to share information on security requirements, emerging technologies, and new product developments. Also, the ESDC is involved in the Trans-Pacific Digital Library Experiment, which aims to demonstrate and evaluate the use of high-performance satellite communications and advanced data communications protocols to enable interactive digital library data

  4. Global Weather Prediction and High-End Computing at NASA

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Atlas, Robert; Yeh, Kao-San

    2003-01-01

    We demonstrate current capabilities of the NASA finite-volume General Circulation Model in high-resolution global weather prediction, and discuss its development path in the foreseeable future. This model can be regarded as a prototype of a future NASA Earth modeling system intended to unify development activities cutting across various disciplines within the NASA Earth Science Enterprise.

  5. Welcome to the NASA High Performance Computing and Communications Computational Aerosciences (CAS) Workshop 2000

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine H. (Editor)

    2000-01-01

    The purpose of the CAS workshop is to bring together NASA's scientists and engineers and their counterparts in industry, other government agencies, and academia working in the Computational Aerosciences and related fields. This workshop is part of the technology transfer plan of the NASA High Performance Computing and Communications (HPCC) Program. Specific objectives of the CAS workshop are to: (1) communicate the goals and objectives of HPCC and CAS, (2) promote and disseminate CAS technology within the appropriate technical communities, including NASA, industry, academia, and other government labs, (3) help promote synergy among CAS and other HPCC scientists, and (4) permit feedback from peer researchers on issues facing High Performance Computing in general and the CAS project in particular. This year we had a number of exciting presentations in the traditional aeronautics, aerospace sciences, and high-end computing areas and in the less familiar (to many of us affiliated with CAS) earth science, space science, and revolutionary computing areas. Presentations of more than 40 high quality papers were organized into ten sessions and presented over the three-day workshop. The proceedings are organized here for easy access: by author, title and topic.

  6. High-End Computing Challenges in Aerospace Design and Engineering

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ronald

    2004-01-01

    High-End Computing (HEC) has had a significant impact on aerospace design and engineering and is poised to have an even greater one in the future. In this paper we describe four aerospace design and engineering challenges: Digital Flight, Launch Simulation, Rocket Fuel System, and Digital Astronaut. The paper discusses the modeling capabilities needed for each challenge and presents projections of future near- and far-term HEC computing requirements. NASA's HEC Project Columbia is described, and programming strategies necessary to achieve high sustained performance are presented.

  7. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  8. High-End Scientific Computing

    EPA Pesticide Factsheets

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  9. A Look at the Impact of High-End Computing Technologies on NASA Missions

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Dunbar, Jill; Hardman, John; Bailey, F. Ron; Wheeler, Lorien; Rogers, Stuart

    2012-01-01

    From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency's science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to the design of safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission's discovery of new planets now capturing the world's imagination.

  10. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions that have occurred in the Universe since the Big Bang and are a mystery to the scientific community. Swift, a NASA mission that includes international participation, was designed and built in preparation for a 2003 launch to help to determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range between a few milliseconds up to approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It then must process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built, and tested by NASA Goddard Space Flight Center (GSFC) in order to meet these challenging requirements. The IPE is a small-size, low-power, high-performance computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE. The paper concludes with the IPE system performance that was measured during end-to-end system testing.

  11. High-End Computing for Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin

    2001-01-01

    The objective of the First MIT Conference on Computational Fluid and Solid Mechanics (June 12-14, 2001) is to bring together industry and academia (and government) to nurture the next generation in computational mechanics. The objective of the current talk, 'High-End Computing for Incompressible Flows', is to discuss some of the current issues in large scale computing for mission-oriented tasks.

  12. Role of High-End Computing in Meeting NASA's Science and Engineering Challenges

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Tu, Eugene L.; Van Dalsem, William R.

    2006-01-01

    Two years ago, NASA was on the verge of dramatically increasing its HEC capability and capacity. With the 10,240-processor supercomputer, Columbia, now in production for 18 months, HEC has an even greater impact within the Agency and extending to partner institutions. Advanced science and engineering simulations in space exploration, shuttle operations, Earth sciences, and fundamental aeronautics research are occurring on Columbia, demonstrating its ability to accelerate NASA's exploration vision. This talk describes how the integrated production environment fostered at the NASA Advanced Supercomputing (NAS) facility at Ames Research Center is accelerating scientific discovery, achieving parametric analyses of multiple scenarios, and enhancing safety for NASA missions. We focus on Columbia's impact on two key engineering and science disciplines: Aerospace and Climate. We also discuss future mission challenges and plans for NASA's next-generation HEC environment.

  13. NASA Advanced Supercomputing Facility Expansion

    NASA Technical Reports Server (NTRS)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  14. SANs and Large Scale Data Migration at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2004-01-01

    Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.

  15. NASA's computer science research program

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1983-01-01

    Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.

  16. Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Doyle, Richard; Bergman, Larry; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael

    2013-01-01

    Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and mission. Onboard computing can be aptly viewed as a "technology multiplier" in that advances provide direct dramatic improvements in flight functions and capabilities across the NASA mission classes, and enable new flight capabilities and mission scenarios, increasing science and exploration return. Space-qualified computing technology, however, has not advanced significantly in well over ten years and the current state of the practice fails to meet the near- to mid-term needs of NASA missions. Recognizing this gap, the NASA Game Changing Development Program (GCDP), under the auspices of the NASA Space Technology Mission Directorate, commissioned a study on space-based computing needs, looking out 15-20 years. The study resulted in a recommendation to pursue high-performance spaceflight computing (HPSC) for next-generation missions, and a decision to partner with the Air Force Research Lab (AFRL) in this development.

  17. NASA high performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1993-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects as well as summaries of individual research and development programs within each project.

  18. NASA High Performance Computing and Communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1994-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects, as well as summaries of early accomplishments and the significance, status, and plans for individual research and development programs within each project. Areas of emphasis include benchmarking, testbeds, software and simulation methods.

  19. The NASA computer science research program plan

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A taxonomy of computer science is included, and the state of the art in each of the major computer science categories is summarized. A functional breakdown of NASA programs under Aeronautics R and D, Space R and T, and institutional support is also included. These areas were assessed against the computer science categories. Concurrent processing, highly reliable computing, and information management are identified as key research areas.

  20. Evaluation of NASA's end-to-end data systems using DSDS+

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Davenport, William; Message, Philip

    1994-01-01

    The Data Systems Dynamic Simulator (DSDS+) is a software tool being developed by the authors to evaluate candidate architectures for NASA's end-to-end data systems. Via modeling and simulation, we are able to quickly predict the performance characteristics of each architecture, to evaluate 'what-if' scenarios, and to perform sensitivity analyses. As such, we are using modeling and simulation to help NASA select the optimal system configuration, and to quantify the performance characteristics of this system prior to its delivery. This paper is divided into the following six sections: (1) The role of modeling and simulation in the systems engineering process. In this section, we briefly describe the different types of results obtained by modeling each phase of the systems engineering life cycle, from concept definition through operations and maintenance; (2) Recent applications of DSDS+. In this section, we describe ongoing applications of DSDS+ in support of the Earth Observing System (EOS), and we present some of the simulation results generated for candidate system designs. So far, we have modeled individual EOS subsystems (e.g., the Solid State Recorders used onboard the spacecraft), and we have also developed an integrated model of the EOS end-to-end data processing and data communications systems (from the payloads onboard to the principal investigator facilities on the ground); (3) Overview of DSDS+. In this section we define what a discrete-event model is and how it works. The discussion is presented relative to the DSDS+ simulation tool that we have developed, including its run-time optimization algorithms that enable DSDS+ to execute substantially faster than comparable discrete-event simulation tools; (4) Summary. In this section, we summarize our findings and 'lessons learned' during the development and application of DSDS+ to model NASA's data systems; (5) Further Information; and (6) Acknowledgements.
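The discrete-event modeling idea described in the DSDS+ abstract above can be illustrated with a minimal sketch. This is not the DSDS+ tool or its API (the event names and the toy recorder/downlink model are hypothetical); it only shows the core mechanism: a time-ordered event queue whose handlers schedule follow-on events.

```python
import heapq

def simulate(initial_events, handlers, t_end):
    """Minimal discrete-event loop: repeatedly pop the earliest event,
    let its handler schedule follow-on events, stop past t_end."""
    queue = list(initial_events)   # (time, kind, payload) tuples
    heapq.heapify(queue)
    log = []
    while queue:
        t, kind, payload = heapq.heappop(queue)
        if t > t_end:
            break
        log.append((t, kind))
        for new_event in handlers[kind](t, payload):
            heapq.heappush(queue, new_event)
    return log

# Toy model (hypothetical): an onboard recorder fills every 10 s,
# and each fill triggers a downlink 3 s later.
handlers = {
    "fill":     lambda t, p: [(t + 3, "downlink", p), (t + 10, "fill", p)],
    "downlink": lambda t, p: [],
}
log = simulate([(0, "fill", None)], handlers, t_end=25)
# log holds (time, event) pairs in time order, e.g. (0, 'fill'), (3, 'downlink'), ...
```

Because the loop only does work when an event occurs, simulated time jumps directly between events instead of ticking through idle intervals, which is what lets discrete-event simulators of large data systems run quickly.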

  1. NASA high performance computing, communications, image processing, and data visualization-potential applications to medicine.

    PubMed

    Kukkonen, C A

    1995-06-01

    High-speed information processing technologies being developed and applied by the Jet Propulsion Laboratory for NASA and Department of Defense mission needs have potential dual uses in telemedicine and other medical applications. Fiber-optic ground networks connected with microwave satellite links allow NASA to communicate with its astronauts in Earth orbit or on the Moon, and with its deep space probes billions of miles away. These networks monitor the health of astronauts and of robotic spacecraft. Similar communications technology will also allow patients to communicate with doctors anywhere on Earth. NASA space missions have science as a major objective. Science sensors have become so sophisticated that they can take more data than our scientists can analyze by hand. High-performance computers--workstations, supercomputers, and massively parallel computers--are being used to transform this data into knowledge. This is done using image processing, data visualization, and other techniques to present the data--ones and zeros--in forms that a human analyst can readily relate to and understand. Medical sensors have seen a similar explosion in data output--witness CT scans, MRI, and ultrasound. This data must be presented in visual form, and computers will allow routine combination of many two-dimensional MRI images into three-dimensional reconstructions of organs that can then be fully examined by physicians. Emerging technologies such as neural networks that are being "trained" to detect craters on planets or incoming missiles among decoys can be used to identify microcalcifications in mammograms.

  2. Information adaptive system of NEEDS. [of NASA End to End Data System

    NASA Technical Reports Server (NTRS)

    Howle, W. M., Jr.; Kelly, W. L.

    1979-01-01

    The NASA End-to-End Data System (NEEDS) program was initiated by NASA to improve significantly the state of the art in acquisition, processing, and distribution of space-acquired data for the mid-1980s and beyond. The information adaptive system (IAS) is a program element under NEEDS Phase II which addresses sensor-specific processing on board the spacecraft. The IAS program is a logical first step toward smart sensors, and IAS developments - particularly the system components and key technology improvements - are applicable to future smart sensor efforts. The paper describes the design goals and functional elements of the IAS. In addition, the schedule for IAS development and demonstration is discussed.

  3. Going End to End to Deliver High-Speed Data

    NASA Technical Reports Server (NTRS)

    2005-01-01

    By the end of the 1990s, the optical fiber "backbone" of the telecommunication and data-communication networks had evolved from megabits-per-second transmission rates to gigabits-per-second transmission rates. Despite this boom in bandwidth, however, users at the end nodes were still not being reached on a consistent basis. (An end node is any device that does not behave like a router or a managed hub or switch. Examples of end node objects are computers, printers, serial interface processor phones, and unmanaged hubs and switches.) The primary reason that prevents bandwidth from reaching the end nodes is the complex local network topology that exists between the optical backbone and the end nodes. This complex network topology consists of several layers of routing and switch equipment which introduce potential congestion points and network latency. By breaking down the complex network topology, a true optical connection can be achieved. Access Optical Networks, Inc., is making this connection a reality with guidance from NASA's nondestructive evaluation experts.

  4. Educational NASA Computational and Scientific Studies (enCOMPASS)

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2013-01-01

    Educational NASA Computational and Scientific Studies (enCOMPASS) is an educational project of NASA Goddard Space Flight Center aimed at bridging the gap between the computational objectives and needs of NASA's scientific research, missions, and projects, and academia's latest advances in applied mathematics and computer science. enCOMPASS achieves this goal via bidirectional collaboration and communication between NASA and academia. Using developed NASA Computational Case Studies in university computer science/engineering and applied mathematics classes is a way of addressing NASA's goals of contributing to the Science, Technology, Engineering, and Math (STEM) National Objective. The enCOMPASS Web site at http://encompass.gsfc.nasa.gov provides additional information. There are currently nine enCOMPASS case studies developed in areas of Earth sciences, planetary sciences, and astrophysics. Some of these case studies have been published in AIP and IEEE's Computing in Science and Engineering magazines. A few university professors have used enCOMPASS case studies in their computational classes and contributed their findings to NASA scientists. In these case studies, after introducing the science area, the specific problem, and related NASA missions, students are first asked to solve a known problem using NASA data and past approaches used and often published in a scientific/research paper. Then, after learning about the NASA application and related computational tools and approaches for solving the proposed problem, students are given a harder problem as a challenge for them to research and develop solutions for. This project provides a model for NASA scientists and engineers on one side, and university students, faculty, and researchers in computer science and applied mathematics on the other side, to learn from each other's areas of work, computational needs and solutions, and the latest advances in research and development. This innovation takes NASA science and

  5. Computer aided indexing at NASA

    NASA Technical Reports Server (NTRS)

    Buchan, Ronald L.

    1987-01-01

    The application of computer technology to the construction of the NASA Thesaurus and in NASA Lexical Dictionary development is discussed in a brief overview. Consideration is given to the printed and online versions of the Thesaurus, retrospective indexing, the NASA RECON frequency command, demand indexing, lists of terms by category, and the STAR and IAA annual subject indexes. The evolution of computer methods in the Lexical Dictionary program is traced, from DOD and DOE subject switching to LCSH machine-aided indexing and current techniques for handling natural language (e.g., the elimination of verbs to facilitate breakdown of sentences into words and phrases).

  6. NASA's Heliophysics Theory Program - Accomplishments in Life Cycle Ending 2011

    NASA Technical Reports Server (NTRS)

    Grebowsky, J.

    2011-01-01

    NASA's Heliophysics Theory Program (HTP) is now into a new triennial cycle of funded research, with new research awards beginning in 2011. The theory program was established by the (former) Solar Terrestrial Division in 1980 to redress a weakness of support in the theory area. It has been a successful, evolving scientific program with long-term funding of relatively large "critical mass groups" pursuing theory and modeling on a scale larger than that available within the limits of traditional NASA Supporting Research and Technology (SR&T) awards. The results of the last 3 year funding cycle, just ended, contributed to ever more cutting edge theoretical understanding of all parts of the Sun-Earth Connection chain. Advances ranged from the core of the Sun out into the corona, through the solar wind into the Earth's magnetosphere and down to the ionosphere and lower atmosphere, also contributing to understanding the environments of other solar system bodies. The HTP contributions were not isolated findings but continued to contribute to the planning and implementation of NASA spacecraft missions and to the development of the predictive computer models that have become the workhorses for analyzing satellite and ground-based measurements.

  7. Computational Nanoelectronics and Nanotechnology at NASA ARC

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Kutler, Paul (Technical Monitor)

    1998-01-01

    Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate of a billion times present speeds with the expenditure of only one Watt of electrical power. NASA has long-term needs where ultra-small semiconductor devices are needed for critical applications: high performance, low power, compact computers for intelligent autonomous vehicles and Petaflop computing technology are some key examples. To advance the design, development, and production of future generation micro- and nano-devices, IT Modeling and Simulation Group has been started at NASA Ames with a goal to develop an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. Overview of nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and the research objectives of the IT Modeling and Simulation Group including the applications of nanoelectronic based devices relevant to NASA missions.

  9. End-to-End Information System design at the NASA Jet Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Hooke, A. J.

    1978-01-01

    Recognizing a pressing need of the 1980s to optimize the two-way flow of information between a ground-based user and a remote space-based sensor, an end-to-end approach to the design of information systems has been adopted at the Jet Propulsion Laboratory. The objectives of this effort are to ensure that all flight projects adequately cope with information flow problems at an early stage of system design, and that cost-effective, multi-mission capabilities are developed when capital investments are made in supporting elements. The paper reviews the End-to-End Information System (EEIS) activity at the Laboratory, and notes the ties to the NASA End-to-End Data System program.

  10. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high-performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a grid will allow engineers and scientists to use the tools of supercomputers, databases, and on-line experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several recent events are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI) program, which is built around the idea of distributed high-performance computing. The Alliance, led by the National Center for Supercomputing Applications (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputer Center, have been instrumental in drawing together the "Grid community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high-performance computing research programs to concentrate on distributed high-performance computing, and has banded together with the PACI centers to address the research agenda in common.

  11. An Application-Based Performance Evaluation of NASA's Nebula Cloud Computing Platform

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high-performance computing (HPC) community has shown tremendous interest in exploring cloud computing because of its high potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks, as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  12. Computational Fluid Dynamics Program at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1989-01-01

    The Computational Fluid Dynamics (CFD) Program at NASA Ames Research Center is reviewed and discussed. The technical elements of the CFD Program are listed and briefly discussed. These elements include algorithm research, research and pilot code development, scientific visualization, advanced surface representation, volume grid generation, and numerical optimization. Next, the discipline of CFD is briefly discussed and related to other areas of research at NASA Ames including experimental fluid dynamics, computer science research, computational chemistry, and numerical aerodynamic simulation. These areas combine with CFD to form a larger area of research, which might collectively be called computational technology. The ultimate goal of computational technology research at NASA Ames is to increase the physical understanding of the world in which we live, solve problems of national importance, and increase the technical capabilities of the aerospace community. Next, the major programs at NASA Ames that either use CFD technology or perform research in CFD are listed and discussed. Briefly, this list includes turbulent/transition physics and modeling, high-speed real gas flows, interdisciplinary research, turbomachinery demonstration computations, complete aircraft aerodynamics, rotorcraft applications, powered lift flows, high alpha flows, multiple body aerodynamics, and incompressible flow applications. Some of the individual problems actively being worked in each of these areas are listed to help define the breadth or extent of CFD involvement in each of these major programs. State-of-the-art examples of various CFD applications are presented to highlight most of these areas. The main emphasis of this portion of the presentation is on examples which will not otherwise be treated at this conference by the individual presentations. Finally, a list of principal current limitations and expected future directions is given.

  13. Hot Chips and Hot Interconnects for High End Computing Systems

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. the Cray proprietary processor used in the Cray X1; 2. the IBM Power 3 and Power 4 used in IBM SP 3 and SP 4 systems; 3. the Intel Itanium and Xeon, used in SGI Altix systems and clusters respectively; 4. the IBM System-on-a-Chip used in the IBM BlueGene/L; 5. the HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. the SPARC64 V processor, used in the Fujitsu PRIMEPOWER HPC2500; 7. an NEC proprietary processor used in the NEC SX-6/7; 8. the Power 4+ processor used in the Hitachi SR11000; and 9. the NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by the interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high-performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  14. Development of a High Resolution Weather Forecast Model for Mesoamerica Using the NASA Nebula Cloud Computing Environment

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Case, Jonathan L.; Venner, Jason; Moreno-Madrinan, Max. J.; Delgado, Francisco

    2012-01-01

    Over the past two years, scientists in the Earth Science Office at NASA's Marshall Space Flight Center (MSFC) have explored opportunities to apply cloud computing concepts to support near real-time weather forecast modeling via the Weather Research and Forecasting (WRF) model. Collaborators at NASA's Short-term Prediction Research and Transition (SPoRT) Center and the SERVIR project at Marshall Space Flight Center have established a framework that provides high resolution, daily weather forecasts over Mesoamerica through use of the NASA Nebula Cloud Computing Platform at Ames Research Center. Supported by experts at Ames, staff at SPoRT and SERVIR have established daily forecasts complete with web graphics and a user interface that allows SERVIR partners access to high resolution depictions of weather in the next 48 hours, useful for monitoring and mitigating meteorological hazards such as thunderstorms, heavy precipitation, and tropical weather that can lead to other disasters such as flooding and landslides. This presentation will describe the framework for establishing and providing WRF forecasts, example applications of output provided via the SERVIR web portal, and early results of forecast model verification against available surface- and satellite-based observations.

  15. Development of a High Resolution Weather Forecast Model for Mesoamerica Using the NASA Nebula Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Molthan, A.; Case, J.; Venner, J.; Moreno-Madriñán, M. J.; Delgado, F.

    2012-12-01

    Over the past two years, scientists in the Earth Science Office at NASA's Marshall Space Flight Center (MSFC) have explored opportunities to apply cloud computing concepts to support near real-time weather forecast modeling via the Weather Research and Forecasting (WRF) model. Collaborators at NASA's Short-term Prediction Research and Transition (SPoRT) Center and the SERVIR project at Marshall Space Flight Center have established a framework that provides high resolution, daily weather forecasts over Mesoamerica through use of the NASA Nebula Cloud Computing Platform at Ames Research Center. Supported by experts at Ames, staff at SPoRT and SERVIR have established daily forecasts complete with web graphics and a user interface that allows SERVIR partners access to high resolution depictions of weather in the next 48 hours, useful for monitoring and mitigating meteorological hazards such as thunderstorms, heavy precipitation, and tropical weather that can lead to other disasters such as flooding and landslides. This presentation will describe the framework for establishing and providing WRF forecasts, example applications of output provided via the SERVIR web portal, and early results of forecast model verification against available surface- and satellite-based observations.

  16. Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing

    NASA Technical Reports Server (NTRS)

    Doyle, Richard; Bergman, Larry; Some, Raphael; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael

    2013-01-01

    Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, the end-to-end system, and the mission. It can aptly be viewed as a "technology multiplier": advances in onboard computing provide dramatic improvements in flight functions and capabilities across the NASA mission classes, and will enable new flight capabilities and mission scenarios, increasing science and exploration return per mission dollar.

  17. Applied Computational Fluid Dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  18. NASA Computational Mobility

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This blue-sky study was conducted to assess the feasibility and scope of the notion of Computational Mobility for potential NASA applications such as control of multiple robotic platforms. The study was started on July 1st, 2003 and concluded on September 30th, 2004. During the course of that period, four meetings were held for the participants to meet and discuss the concept, its viability, and potential applications. The study involved, at various stages, the following personnel: James Allen (IHMC), Alberto Canas (IHMC), Daniel Cooke (Texas Tech), Kenneth Ford (IHMC - PI), Patrick Hayes (IHMC), Butler Hine (NASA), Robert Morris (NASA), Liam Pedersen (NASA), Jerry Pratt (IHMC), Raul Saavedra (IHMC), Niranjan Suri (IHMC), and Milind Tambe (USC). A white paper describing the notion of a Process Integrated Mechanism (PIM) was generated as a result of this study; the white paper is attached to this report, along with a number of presentations generated during the four meetings. Finally, an execution platform and a simulation environment were developed, which are available upon request from Niranjan Suri (nsuri@ihmc.us).

  19. Computational Nanotechnology at NASA Ames Research Center, 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Langhoff, Steve; Pohorille, Andrew; Levit, Creon; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Some forms of nanotechnology appear to have enormous potential to improve aerospace and computer systems; computational nanotechnology, the design and simulation of programmable molecular machines, is crucial to progress. NASA Ames Research Center has begun a computational nanotechnology program including in-house work, external research grants, and grants of supercomputer time. Four goals have been established: (1) Simulate a hypothetical programmable molecular machine replicating itself and building other products. (2) Develop molecular manufacturing CAD (computer-aided design) software and use it to design molecular manufacturing systems and products of aerospace interest, including computer components. (3) Characterize nanotechnologically accessible materials of aerospace interest. Such materials may have excellent strength and thermal properties. (4) Collaborate with experimentalists. Current in-house activities include: (1) Development of NanoDesign, software to design and simulate a nanotechnology based on functionalized fullerenes. Early work focuses on gears. (2) A design for high-density atomically precise memory. (3) Design of nanotechnology systems based on biology. (4) Characterization of diamondoid mechanosynthetic pathways. (5) Studies of the Laplacian of the electronic charge density to understand molecular structure and reactivity. (6) Studies of entropic effects during self-assembly. (7) Characterization of properties of matter for clusters up to sizes exhibiting bulk properties. In addition, the NAS (NASA Advanced Supercomputing) supercomputer division sponsored a workshop on computational molecular nanotechnology on March 4-5, 1996, held at NASA Ames Research Center. Finally, collaborations with Bill Goddard at Caltech, Ralph Merkle at Xerox PARC, Don Brenner at NCSU (North Carolina State University), Tom McKendree at Hughes, and Todd Wipke at UCSC are underway.

  20. Challenges of Future High-End Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David; Kutler, Paul (Technical Monitor)

    1998-01-01

    The next major milestone in high performance computing is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, as well as comparably high I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have studied these issues is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. One important question is whether applications can be structured to perform efficiently on such systems, which are expected to incorporate many thousands of processors and deeply hierarchical memory systems. To answer these questions, advanced performance modeling techniques, including simulation of future architectures and applications, may be required. It may also be necessary to formulate "latency tolerant algorithms" and other completely new algorithmic approaches for certain applications. This talk will give an overview of these challenges.
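    As a back-of-the-envelope illustration of the scale involved, the sketch below divides a sustained petaflop target across a machine with many thousands of processors; the processor count is a hypothetical round number chosen for illustration, not a figure from the talk.

    ```python
    # Hypothetical illustration: sustained per-processor rate implied by a 1 Pflop/s target.
    target_flops = 1e15   # 1 Pflop/s = 10^15 floating-point operations per second
    processors = 10_000   # hypothetical count ("many thousands of processors")

    per_processor = target_flops / processors
    print(f"{per_processor:.0e} flop/s sustained per processor")  # prints "1e+11 flop/s sustained per processor"
    ```

    Even with ten thousand processors, each one must sustain 100 Gflop/s, which is why latency tolerance and memory hierarchy become central concerns.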

  1. NASA CST aids U.S. industry. [computational structures technology]

    NASA Technical Reports Server (NTRS)

    Housner, Jerry M.; Pinson, Larry D.

    1993-01-01

    The effect of NASA's Computational Structures Technology (CST) research on aerospace vehicle design and operation is discussed. The application of this research to a proposed version of a high-speed civil transport, to composite structures in aerospace, to the study of crack growth, and to resolving field problems is addressed.

  2. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    NASA Technical Reports Server (NTRS)

    Pham, Long; Chen, Aijun; Kempler, Steven; Lynnes, Christopher; Theobald, Michael; Asghar, Esfandiari; Campino, Jane; Vollmer, Bruce

    2011-01-01

    Cloud computing has been implemented in several commercial arenas. The NASA Nebula Cloud Computing platform is an Infrastructure as a Service (IaaS) built in 2008 at NASA Ames Research Center and in 2010 at GSFC. Nebula is an open-source cloud platform intended to: a) make NASA realize significant cost savings through efficient resource utilization, reduced energy consumption, and reduced labor costs; b) provide an easier way for NASA scientists and researchers to efficiently explore and share large and complex data sets; and c) allow customers to provision, manage, and decommission computing capabilities on an as-needed basis.

  3. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  4. Petascale Computing: Impact on Future NASA Missions

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2006-01-01

    This slide presentation reviews NASA's use of a new supercomputer, called Columbia, capable of operating at 62 teraflops and the fourth-fastest computer in the world. The computer will serve all mission directorates. The applications it would serve are: aerospace analysis and design, propulsion subsystem analysis, climate modeling, hurricane prediction, and astrophysics and cosmology.

  5. NASA Applications for Computational Electromagnetic Analysis

    NASA Technical Reports Server (NTRS)

    Lewis, Catherine C.; Trout, Dawn H.; Krome, Mark E.; Perry, Thomas A.

    2011-01-01

    Computational electromagnetic software is used by NASA to analyze the compatibility of systems too large or too complex for testing. Recent advances in software packages and computer capabilities have made it possible to determine the effects of a transmitter inside a launch vehicle fairing, better analyze environmental threats, and perform on-orbit replacements with assured electromagnetic compatibility.

  6. Computational needs survey of NASA automation and robotics missions. Volume 2: Appendixes

    NASA Technical Reports Server (NTRS)

    Davis, Gloria J.

    1991-01-01

    NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the contributing factors is that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. Here, the NASA, industry, and academic communities are provided with a preliminary set of advanced mission computational processing requirements for automation and robotics (A and R) systems. The results were obtained in an assessment of the computational needs of current projects throughout NASA. A high percentage of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implemented capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, the system performance levels necessary to support them, and the degree to which they are met within typical programmatic constraints. The appendixes are provided here.

  7. NASA's 3D Flight Computer for Space Applications

    NASA Technical Reports Server (NTRS)

    Alkalai, Leon

    2000-01-01

    The New Millennium Program (NMP) Integrated Product Development Team (IPDT) for Microelectronics Systems was planning to validate a newly developed 3D flight computer system on its first deep-space flight, DS1, launched in October 1998. This computer, developed in the 1995-97 time frame, contains many new computer technologies never previously used in deep-space systems. They include: an advanced 3D packaging architecture for future low-mass and low-volume avionics systems; high-density 3D packaged chip-stacks for both volatile and non-volatile mass memory (400 Mbytes of local DRAM memory and 128 Mbytes of Flash memory); a high-bandwidth Peripheral Component Interconnect (PCI) local bus with a bridge to VME; a high-bandwidth (20 Mbps) fiber-optic serial bus; and other attributes, such as standard support for Design for Testability (DFT). Even though this computer system was not completed in time for delivery to the DS1 project, it was an important development along a technology roadmap toward highly integrated and highly miniaturized avionics systems for deep-space applications. This technology development is now being continued by NASA's Deep Space System Development Program (also known as X2000) and within JPL's Center for Integrated Space Microsystems (CISM).

  8. The NASA High Speed ASE Project: Computational Analyses of a Low-Boom Supersonic Configuration

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; DeLaGarza, Antonio; Zink, Scott; Bounajem, Elias G.; Johnson, Christopher; Buonanno, Michael; Sanetrik, Mark D.; Yoo, Seung Y.; Kopasakis, George; Christhilf, David M.

    2014-01-01

    A summary of NASA's High Speed Aeroservoelasticity (ASE) project is provided with a focus on a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The summary includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, structured and unstructured CFD grids, and discussion of the FEM development including sizing and structural constraints applied to the N+2 configuration. Linear results obtained to date include linear mode shapes and linear flutter boundaries. In addition to the tasks associated with the N+2 configuration, a summary of the work involving the development of AeroPropulsoServoElasticity (APSE) models is also discussed.

  9. NASA Center for Computational Sciences: History and Resources

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  10. A History of High-Performance Computing

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.
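    The "50,000 times faster" figure follows directly from the two performance numbers stated in the record; a quick arithmetic check, using only values from the passage:

    ```python
    # Arithmetic check of the "50,000 times faster" claim, using figures from the text.
    columbia = 51.9e12   # Columbia benchmark rating: 51.9 teraflop/s
    ames_1984 = 1e9      # ~1 gigaflop/s, Ames' most powerful system 20 years earlier

    speedup = columbia / ames_1984
    print(round(speedup))  # prints 51900, consistent with the rounded "50,000 times" figure
    ```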

  11. Computational Simulations of the NASA Langley HyMETS Arc-Jet Facility

    NASA Technical Reports Server (NTRS)

    Brune, A. J.; Bruce, W. E., III; Glass, D. E.; Splinter, S. C.

    2017-01-01

    The Hypersonic Materials Environmental Test System (HyMETS) arc-jet facility located at the NASA Langley Research Center in Hampton, Virginia, is primarily used for the research, development, and evaluation of high-temperature thermal protection systems for hypersonic vehicles and reentry systems. In order to improve testing capabilities and knowledge of the test article environment, an effort is underway to computationally simulate the flow-field using computational fluid dynamics (CFD). A detailed three-dimensional model of the arc-jet nozzle and free-jet portion of the flow-field has been developed and compared to calibration probe Pitot pressure and stagnation-point heat flux for three test conditions at low, medium, and high enthalpy. The CFD model takes into account uniform pressure and non-uniform enthalpy profiles at the nozzle inlet as well as catalytic recombination efficiency effects at the probe surface. Comparing the CFD results and test data indicates an effectively fully-catalytic copper surface on the heat flux probe of about 10% efficiency and a 2-3 kPa pressure drop from the arc heater bore, where the pressure is measured, to the plenum section, prior to the nozzle. With these assumptions, the CFD results are well within the uncertainty of the stagnation pressure and heat flux measurements. The conditions at the nozzle exit were also compared with radial and axial velocimetry. This simulation capability will be used to evaluate various three-dimensional models that are tested in the HyMETS facility. An end-to-end aerothermal and thermal simulation of HyMETS test articles will follow this work to provide a better understanding of the test environment, test results, and to aid in test planning. Additional flow-field diagnostic measurements will also be considered to improve the modeling capability.

  12. End-to-End Information System design at the NASA Jet Propulsion Laboratory. [data transmission between user and space-based sensor

    NASA Technical Reports Server (NTRS)

    Hooke, A. J.

    1978-01-01

    In recognition of a pressing need of the 1980s to optimize the two-way flow of information between a ground-based user and a remote space-based sensor, an end-to-end approach to the design of information systems has been adopted at JPL. This paper reviews End-to-End Information System (EEIS) activity at JPL, with attention given to the scope of the EEIS transfer function, and functional and physical elements of the EEIS. The relationship between the EEIS and the NASA End-to-End Data System program is discussed.

  13. Guidelines for development of NASA (National Aeronautics and Space Administration) computer security training programs

    NASA Technical Reports Server (NTRS)

    Tompkins, F. G.

    1983-01-01

    The report presents guidance for the NASA Computer Security Program Manager and the NASA Center Computer Security Officials as they develop training requirements and implement computer security training programs. NASA audiences are categorized based on the computer security knowledge required to accomplish identified job functions. Training requirements, in terms of training subject areas, are presented for both computer security program management personnel and computer resource providers and users. Sources of computer security training are identified.

  14. Computer Supported Indexing: A History and Evaluation of NASA's MAI System

    NASA Technical Reports Server (NTRS)

    Silvester, June P.

    1997-01-01

    Computer-supported or machine-aided indexing (MAI) can be categorized in multiple ways. The system used by the National Aeronautics and Space Administration's (NASA's) Center for AeroSpace Information (CASI) is described as semantic and computational. It is based on the co-occurrence of domain-specific terminology in parts of a sentence, and on the probability that an indexer will assign a particular index term when a given word or phrase is encountered in text. The NASA CASI system is run on demand by the indexer and responds in 3 to 9 seconds with a list of suggested, authorized terms. The system was originally based on a syntactic system used in the late 1970s by the Defense Technical Information Center (DTIC). The NASA mainframe-supported system consists of three components: two programs and a knowledge base (KB). The evolution of the system is described, and flow charts illustrate the MAI procedures. Tests used to evaluate NASA's MAI system were limited to those that would not slow production. A very early test indicated that MAI saved about 3 minutes and provided several additional terms for each document indexed. It also was determined that time and other resources spent in careful construction of the KB pay off with high-quality output and indexer acceptance of MAI results.
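    The probability-weighted term-suggestion idea described above can be sketched in a few lines. This is an illustrative toy, not the actual CASI system: the knowledge-base phrases, authorized terms, and probabilities below are all invented for the example.

```python
# Minimal sketch of machine-aided indexing (MAI): map phrases found in text
# to authorized thesaurus terms, weighted by the probability that a human
# indexer would assign each term. Phrases and weights are illustrative only.

# Knowledge base: phrase -> list of (authorized term, assignment probability)
KB = {
    "wind tunnel": [("WIND TUNNEL TESTS", 0.9), ("AERODYNAMICS", 0.4)],
    "crack growth": [("CRACK PROPAGATION", 0.95), ("FATIGUE (MATERIALS)", 0.6)],
    "heat flux": [("HEAT TRANSFER", 0.8)],
}

def suggest_terms(text, threshold=0.5):
    """Return authorized terms whose combined evidence passes the threshold."""
    scores = {}
    lowered = text.lower()
    for phrase, candidates in KB.items():
        if phrase in lowered:
            for term, prob in candidates:
                # Combine evidence from multiple phrases: P = 1 - prod(1 - p_i)
                scores[term] = 1.0 - (1.0 - scores.get(term, 0.0)) * (1.0 - prob)
    return sorted(t for t, s in scores.items() if s >= threshold)

print(suggest_terms("Crack growth observed during wind tunnel heat flux tests"))
```

    A production system such as CASI's also restricts matching to parts of a sentence (co-occurrence within syntactic units), which this sketch omits for brevity.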

  15. NASA Scientists Push the Limits of Computer Technology

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Dr. Donald Frazier, a NASA researcher, uses a blue laser shining through a quartz window into a special mix of chemicals to generate a polymer film on the inside quartz surface. As the chemicals respond to the laser light, they adhere to the glass surface, forming optical films. Dr. Frazier and Dr. Mark S. Paley developed the process in the Space Sciences Laboratory at NASA's Marshall Space Flight Center in Huntsville, AL. Working aboard the Space Shuttle, a science team led by Dr. Frazier formed thin films potentially useful in optical computers with fewer impurities than those formed on Earth. Patterns of these films can be traced onto the quartz surface. In the optical computers of the future, these films could replace electronic circuits and wires, making the systems more efficient and cost-effective, as well as lighter and more compact. Photo credit: NASA/Marshall Space Flight Center.

  16. NASA Scientists Push the Limits of Computer Technology

    NASA Technical Reports Server (NTRS)

    1998-01-01

    NASA researcher Dr. Donald Frazier uses a blue laser shining through a quartz window into a special mix of chemicals to generate a polymer film on the inside quartz surface. As the chemicals respond to the laser light, they adhere to the glass surface, forming optical films. Dr. Frazier and Dr. Mark S. Paley developed the process in the Space Sciences Laboratory at NASA's Marshall Space Flight Center in Huntsville, AL. Working aboard the Space Shuttle, a science team led by Dr. Frazier formed thin films potentially useful in optical computers with fewer impurities than those formed on Earth. Patterns of these films can be traced onto the quartz surface. In the optical computers of the future, these films could replace electronic circuits and wires, making the systems more efficient and cost-effective, as well as lighter and more compact. Photo credit: NASA/Marshall Space Flight Center.

  17. NASA Scientists Push the Limits of Computer Technology

    NASA Technical Reports Server (NTRS)

    1999-01-01

    NASA researcher Dr. Donald Frazier uses a blue laser shining through a quartz window into a special mix of chemicals to generate a polymer film on the inside quartz surface. As the chemicals respond to the laser light, they adhere to the glass surface, forming optical films. Dr. Frazier and Dr. Mark S. Paley developed the process in the Space Sciences Laboratory at NASA's Marshall Space Flight Center in Huntsville, AL. Working aboard the Space Shuttle, a science team led by Dr. Frazier formed thin films potentially useful in optical computers with fewer impurities than those formed on Earth. Patterns of these films can be traced onto the quartz surface. In the optical computers of the future, these films could replace electronic circuits and wires, making the systems more efficient and cost-effective, as well as lighter and more compact. Photo credit: NASA/Marshall Space Flight Center.

  18. Construction of the NASA Thesaurus: Computer Processing Support. Final Report.

    ERIC Educational Resources Information Center

    Hammond, William

    Details are given on the necessary computer processing services required to produce a NASA thesaurus. These services included (1) keypunching the terminology to specifications from approximately 19,000 Term Review Forms furnished by NASA; (2) modifying a set of programs to satisfy NASA specifications, principally to accommodate 42 character terms…

  19. First NASA/Industry High-Speed Research Configuration Aerodynamics Workshop

    NASA Technical Reports Server (NTRS)

    Wood, Richard M. (Editor)

    1999-01-01

    This publication is a compilation of documents presented at the First NASA/Industry High Speed Research Configuration Aerodynamics Workshop held on February 27-29, 1996 at NASA Langley Research Center. The purpose of the workshop was to bring together the broad spectrum of aerodynamicists, engineers, and scientists working within the Configuration Aerodynamics element of the HSR Program to collectively evaluate the technology status and to define the needs within Computational Fluid Dynamics (CFD) Analysis Methodology, Aerodynamic Shape Design, Propulsion/Airframe Integration (PAI), Aerodynamic Performance, and Stability and Control (S&C) to support the development of an economically viable High Speed Civil Transport (HSCT) aircraft. To meet these objectives, papers were presented by representatives from NASA Langley, Ames, and Lewis Research Centers; Boeing, McDonnell Douglas, Northrop-Grumman, Lockheed-Martin, Vigyan, Analytical Services, Dynacs, and RIACS.

  20. Computational needs survey of NASA automation and robotics missions. Volume 1: Survey and results

    NASA Technical Reports Server (NTRS)

    Davis, Gloria J.

    1991-01-01

    NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. A preliminary set of advanced mission computational processing requirements of automation and robotics (A&R) systems is provided for use by NASA, industry, and academic communities. These results were obtained in an assessment of the computational needs of current projects throughout NASA. A high percentage of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implementation capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, the system performance levels necessary to support them, and the degree to which they are met within typical programmatic constraints. Volume one includes the survey and results. Volume two contains the appendixes.

  1. Organizational Strategies for End-User Computing Support.

    ERIC Educational Resources Information Center

    Blackmun, Robert R.; And Others

    1988-01-01

    Effective support for end users of computers has been an important issue in higher education from the first applications of general purpose mainframe computers through minicomputers, microcomputers, and supercomputers. The development of end user support is reviewed and organizational models are examined. (Author/MLW)

  2. Proposed Use of the NASA Ames Nebula Cloud Computing Platform for Numerical Weather Prediction and the Distribution of High Resolution Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi

    2010-01-01

    The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.

  3. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  4. NASA Computational Fluid Dynamics Conference. Volume 1: Sessions 1-6

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This volume compiles presentations given at the NASA Computational Fluid Dynamics (CFD) Conference held at the NASA Ames Research Center, Moffett Field, California, March 7-9, 1989. Topics covered include research facility overviews of CFD research and applications, validation programs, direct simulation of compressible turbulence, turbulence modeling, advances in Runge-Kutta schemes for solving 3-D Navier-Stokes equations, grid generation and inviscid flow computation around aircraft geometries, numerical simulation of rotorcraft, and viscous drag prediction for rotor blades.

  5. Network Computer Technology. Phase I: Viability and Promise within NASA's Desktop Computing Environment

    NASA Technical Reports Server (NTRS)

    Paluzzi, Peter; Miller, Rosalind; Kurihara, West; Eskey, Megan

    1998-01-01

    Over the past several months, major industry vendors have made a business case for the network computer as a win-win solution toward lowering total cost of ownership. This report provides results from Phase I of the Ames Research Center network computer evaluation project. It identifies factors to be considered for determining cost of ownership; further, it examines where, when, and how network computer technology might fit in NASA's desktop computing architecture.

  6. An introduction to NASA's advanced computing program: Integrated computing systems in advanced multichip modules

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Alkalai, Leon

    1996-01-01

    Recent changes within NASA's space exploration program favor the design, implementation, and operation of low cost, lightweight, small and micro spacecraft with multiple launches per year. In order to meet the future needs of these missions with regard to the use of spacecraft microelectronics, NASA's advanced flight computing (AFC) program is currently considering industrial cooperation and advanced packaging architectures. In relation to this, the AFC program is reviewed, considering the design and implementation of NASA's AFC multichip module.

  7. NASA/FLAGRO - FATIGUE CRACK GROWTH COMPUTER PROGRAM

    NASA Technical Reports Server (NTRS)

    Forman, R. G.

    1994-01-01

    Structural flaws and cracks may grow under fatigue inducing loads and, upon reaching a critical size, cause structural failure to occur. The growth of these flaws and cracks may occur at load levels well below the ultimate load bearing capability of the structure. The Fatigue Crack Growth Computer Program, NASA/FLAGRO, was developed as an aid in predicting the growth of pre-existing flaws and cracks in structural components of space systems. The earlier version of the program, FLAGRO4, was the primary analysis tool used by Rockwell International and the Shuttle subcontractors for fracture control analysis on the Space Shuttle. NASA/FLAGRO is an enhanced version of the program and incorporates state-of-the-art improvements in both fracture mechanics and computer technology. NASA/FLAGRO provides the fracture mechanics analyst with a computerized method of evaluating the "safe crack growth life" capabilities of structural components. NASA/FLAGRO could also be used to evaluate the damage tolerance aspects of a given structural design. The propagation of an existing crack is governed by the stress field in the vicinity of the crack tip. The stress intensity factor is defined in terms of the relationship between the stress field magnitude and the crack size. The propagation of the crack becomes catastrophic when the local stress intensity factor reaches the fracture toughness of the material. NASA/FLAGRO predicts crack growth using a two-dimensional model which predicts growth independently in two directions based on the calculation of stress intensity factors. The analyst can choose to use either a crack growth rate equation or a nonlinear interpolation routine based on tabular data. The growth rate equation is a modified Forman equation which can be converted to a Paris or Walker equation by substituting different values into the exponent. This equation provides accuracy and versatility and can be fit to data using standard least squares methods. Stress
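    The crack-growth prediction described above can be illustrated with the classic Forman equation, which the abstract notes reduces to a Paris- or Walker-type form under substitution. The coefficients, material properties, and loading below are illustrative values for a sketch, not NASA/FLAGRO data, and the cycle-by-cycle integration is a deliberately simple stand-in for the program's actual numerics.

```python
import math

def forman_rate(delta_K, R, C, n, Kc):
    """Classic Forman crack growth rate: da/dN = C*dK^n / ((1-R)*Kc - dK).
    delta_K: stress intensity factor range (MPa*sqrt(m)); R: stress ratio;
    C, n: empirical fit constants; Kc: fracture toughness."""
    return C * delta_K**n / ((1.0 - R) * Kc - delta_K)

def paris_rate(delta_K, C, n):
    """Paris law da/dN = C*dK^n, recovered from Forman when the denominator
    (1-R)*Kc - dK is nearly constant and folded into the coefficient."""
    return C * delta_K**n

def grow_crack(a0, a_crit, stress_range, C, n, Kc, R=0.0, max_cycles=10**7):
    """Integrate crack length cycle by cycle for a center crack in a wide
    plate, using dK = stress_range * sqrt(pi * a), until a reaches a_crit."""
    a, cycles = a0, 0
    while a < a_crit and cycles < max_cycles:
        dK = stress_range * math.sqrt(math.pi * a)
        a += forman_rate(dK, R, C, n, Kc)
        cycles += 1
    return cycles, a
```

    For example, `grow_crack(1e-3, 2e-3, 100.0, 1e-9, 3.0, 60.0)` estimates the number of load cycles for a 1 mm crack to double under a 100 MPa stress range with these made-up constants.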

  8. NASA Computational Case Study: The Flight of Friendship 7

    NASA Technical Reports Server (NTRS)

    Simpson, David G.

    2012-01-01

    In this case study, we learn how to compute the position of an Earth-orbiting spacecraft as a function of time. As an exercise, we compute the position of John Glenn's Mercury spacecraft Friendship 7 as it orbited the Earth during the third flight of NASA's Mercury program.
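    The kind of computation the case study walks through can be sketched for the simplest case, a circular orbit: Kepler's third law gives the period, and the position in the orbital plane follows from uniform angular motion. This is a simplified illustration (Friendship 7's actual orbit was slightly elliptical), and the altitude used is only representative of the mission.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def orbital_period(altitude_m):
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(r^3 / mu)."""
    r = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(r**3 / MU_EARTH)

def circular_orbit_position(altitude_m, t_s):
    """Position (x, y) in the orbital plane at time t, measured from the
    Earth's center, starting on the +x axis at t = 0."""
    r = R_EARTH + altitude_m
    omega = math.sqrt(MU_EARTH / r**3)  # mean motion, rad/s
    return r * math.cos(omega * t_s), r * math.sin(omega * t_s)

# Friendship 7 flew at roughly 160-260 km altitude; a representative 200 km
# circular orbit gives a period of about 88 minutes.
print(orbital_period(200e3) / 60.0)
```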

  9. NASA Environmentally Responsible Aviation High Overall Pressure Ratio Compressor Research Pre-Test CFD

    NASA Technical Reports Server (NTRS)

    Celestina, Mark L.; Fabian, John C.; Kulkarni, Sameer

    2012-01-01

    This paper describes a collaborative and cost-shared approach to reducing fuel burn under the NASA Environmentally Responsible Aviation project. NASA and General Electric (GE) Aviation are working together as an integrated team to obtain compressor aerodynamic data that is mutually beneficial to both NASA and GE Aviation. The objective of the High OPR Compressor Task is to test a single stage, then two stages, of an advanced GE core compressor using state-of-the-art research instrumentation to investigate the loss mechanisms and interaction effects of embedded transonic highly-loaded compressor stages. This paper presents preliminary results from NASA's in-house multistage computational code, APNASA, in preparation for this advanced transonic compressor rig test.

  10. Managing End User Computing in the Federal Government.

    ERIC Educational Resources Information Center

    General Services Administration, Washington, DC.

    This report presents an initial approach developed by the General Services Administration for the management of end user computing in federal government agencies. Defined as technology used directly by individuals in need of information products, end user computing represents a new field encompassing such technologies as word processing, personal…

  11. Ending Year in Space: NASA Goddard Network Maintains Communications from Space to Ground

    NASA Image and Video Library

    2016-03-01

    NASA's Goddard Space Flight Center in Greenbelt, Maryland, will monitor the landing of NASA Astronaut Scott Kelly and Russian Cosmonaut Mikhail Kornienko from their #YearInSpace Mission. Goddard's Networks Integration Center, pictured above, leads all coordination for space-to-ground communications support for the International Space Station and provides contingency support for the Soyuz TMA-18M 44S spacecraft, ensuring complete communications coverage through NASA's Space Network. The Soyuz 44S spacecraft will undock at 8:02 p.m. EST this evening from the International Space Station. It will land approximately three and a half hours later, at 11:25 p.m. EST in Kazakhstan. Both Kelly and Kornienko have spent 340 days aboard the International Space Station, preparing humanity for long duration missions and exploration into deep space. Read more: www.nasa.gov/feature/goddard/2016/ending-year-in-space-na... Credit: NASA/Goddard/Rebecca Roth NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram

  12. Ending Year in Space: NASA Goddard Network Maintains Communications from Space to Ground

    NASA Image and Video Library

    2017-12-08

    NASA's Goddard Space Flight Center in Greenbelt, Maryland, will monitor the landing of NASA Astronaut Scott Kelly and Russian Cosmonaut Mikhail Kornienko from their #YearInSpace Mission. Goddard's Networks Integration Center, pictured above, leads all coordination for space-to-ground communications support for the International Space Station and provides contingency support for the Soyuz TMA-18M 44S spacecraft, ensuring complete communications coverage through NASA's Space Network. The Soyuz 44S spacecraft will undock at 8:02 p.m. EST this evening from the International Space Station. It will land approximately three and a half hours later, at 11:25 p.m. EST in Kazakhstan. Both Kelly and Kornienko have spent 340 days aboard the International Space Station, preparing humanity for long duration missions and exploration into deep space. Read more: www.nasa.gov/feature/goddard/2016/ending-year-in-space-na... Credit: NASA/Goddard/Rebecca Roth NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram

  13. An Offload NIC for NASA, NLR, and Grid Computing

    NASA Technical Reports Server (NTRS)

    Awrach, James

    2013-01-01

    , and to add several more capabilities while reducing space consumption and cost. Provisions were designed for interoperability with systems used in the NASA HEC (High-End Computing) program. The new acceleration engine consists of state-of-the-art FPGA (field-programmable gate array) core IP, C, and Verilog code; a novel communication protocol; and extensions to the Globus structure. The engine provides the functions of network acceleration, encryption, compression, packet ordering, and security, added to Globus grid or cloud data transfers. This system is scalable in nX10-Gbps increments through 100-Gbps full duplex. It can be interfaced to industry-standard system-side or network-side devices or core IP in increments of 10 GigE, scaling to provide IEEE 40/100 GigE compliance.

  14. First NASA/Industry High-Speed Research Configuration Aerodynamics Workshop. Pt. 2

    NASA Technical Reports Server (NTRS)

    Wood, Richard M. (Editor)

    1999-01-01

    This publication is a compilation of documents presented at the First NASA/Industry High Speed Research Configuration Aerodynamics Workshop held on February 27-29, 1996 at NASA Langley Research Center. The purpose of the workshop was to bring together the broad spectrum of aerodynamicists, engineers, and scientists working within the Configuration Aerodynamics element of the HSR Program to collectively evaluate the technology status and to define the needs within Computational Fluid Dynamics (CFD) Analysis Methodology, Aerodynamic Shape Design, Propulsion/Airframe Integration (PAI), Aerodynamic Performance, and Stability and Control (S&C) to support the development of an economically viable High Speed Civil Transport (HSCT) aircraft. To meet these objectives, papers were presented by representatives from NASA Langley, Ames, and Lewis Research Centers; Boeing, McDonnell Douglas, Northrop-Grumman, Lockheed-Martin, Vigyan, Analytical Services, Dynacs, and RIACS.

  15. First NASA/Industry High-Speed Research Configuration Aerodynamics Workshop. Part 1

    NASA Technical Reports Server (NTRS)

    Wood, Richard M. (Editor)

    1999-01-01

    This publication is a compilation of documents presented at the First NASA/Industry High Speed Research Configuration Aerodynamics Workshop held on February 27-29, 1996 at NASA Langley Research Center. The purpose of the workshop was to bring together the broad spectrum of aerodynamicists, engineers, and scientists working within the Configuration Aerodynamics element of the HSR Program to collectively evaluate the technology status and to define the needs within Computational Fluid Dynamics (CFD) Analysis Methodology, Aerodynamic Shape Design, Propulsion/Airframe Integration (PAI), Aerodynamic Performance, and Stability and Control (S&C) to support the development of an economically viable High Speed Civil Transport (HSCT) aircraft. To meet these objectives, papers were presented by representatives from NASA Langley, Ames, and Lewis Research Centers; Boeing, McDonnell Douglas, Northrop-Grumman, Lockheed-Martin, Vigyan, Analytical Services, Dynacs, and RIACS.

  16. Does Cloud Computing in the Atmospheric Sciences Make Sense? A case study of hybrid cloud computing at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Chee, T.; Minnis, P.; Spangenberg, D.; Ayers, J. K.; Palikonda, R.; Vakhnin, A.; Dubois, R.; Murphy, P. R.

    2014-12-01

    The processing, storage and dissemination of satellite cloud and radiation products produced at NASA Langley Research Center are key activities for the Climate Science Branch. A constellation of systems operates in sync to accomplish these goals. Because of the complexity involved with operating such intricate systems, there are both high failure rates and high costs for hardware and system maintenance. Cloud computing has the potential to ameliorate cost and complexity issues. Over time, the cloud computing model has evolved and hybrid systems comprising off-site as well as on-site resources are now common. Towards our mission of providing the highest quality research products to the widest audience, we have explored the use of the Amazon Web Services (AWS) Cloud and Storage and present a case study of our results and efforts. This project builds upon NASA Langley Cloud and Radiation Group's experience with operating large and complex computing infrastructures in a reliable and cost effective manner to explore novel ways to leverage cloud computing resources in the atmospheric science environment. Our case study presents the project requirements and then examines the fit of AWS with the LaRC computing model. We also discuss the evaluation metrics, feasibility, and outcomes and close the case study with the lessons we learned that would apply to others interested in exploring the implementation of the AWS system in their own atmospheric science computing environments.

  17. Computational mechanics and physics at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr.

    1987-01-01

    An overview is given of computational mechanics and physics at NASA Langley Research Center. Computational analysis is a major component and tool in many of Langley's diverse research disciplines, as well as in the interdisciplinary research. Examples are given for algorithm development and advanced applications in aerodynamics, transition to turbulence and turbulence simulation, hypersonics, structures, and interdisciplinary optimization.

  18. User-oriented end-to-end transport protocols for the real-time distribution of telemetry data from NASA spacecraft

    NASA Technical Reports Server (NTRS)

    Hooke, A. J.

    1979-01-01

    A set of standard telemetry protocols for downlink data flow facilitating the end-to-end transport of instrument data from the spacecraft to the user in real time is proposed. The direct switching of data by autonomous message 'packets' that are assembled by the source instrument on the spacecraft is discussed. The data system consists thus of a format on a message rather than word basis, and such packet telemetry would include standardized protocol headers. Standards are being developed within the NASA End-to-End Data System (NEEDS) program for the source packet and transport frame protocols. The source packet protocol contains identification of both the sequence number of the packet as it is generated by the source and the total length of the packet, while the transport frame protocol includes a sequence count defining the serial number of the frame as it is generated by the spacecraft data system, and a field specifying any 'options' selected in the format of the frame itself.
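    The message-packet idea above, a header carrying the source's sequence number and the total packet length ahead of the instrument data, can be sketched with a fixed-layout binary header. The field names and sizes below are hypothetical, chosen for the example; they are not the actual NEEDS source packet or transport frame formats.

```python
import struct

# Hypothetical fixed-length source packet header: a source/application id,
# the sequence count assigned by the source instrument, and the total
# packet length. Big-endian, as is conventional for wire formats.
HEADER = struct.Struct(">HHI")  # source id (u16), seq count (u16), length (u32)

def make_packet(source_id, seq, payload):
    """Assemble a packet: header followed by the instrument data bytes."""
    return HEADER.pack(source_id, seq, HEADER.size + len(payload)) + payload

def parse_packet(packet):
    """Split a packet back into (source_id, seq, payload), checking that the
    length field matches the actual packet size."""
    source_id, seq, length = HEADER.unpack_from(packet)
    assert length == len(packet), "length field does not match packet size"
    return source_id, seq, packet[HEADER.size:]
```

    Because each packet is self-describing (identity, order, and length travel with the data), a ground system can route packets to users directly, without first reassembling a fixed word-oriented telemetry format.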

  19. Exploiting NASA's Cumulus Earth Science Cloud Archive with Services and Computation

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Jazayeri, A.; Schuler, I.; Plofchan, P.; Baynes, K.; Ramachandran, R.

    2017-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30 PB of critical Earth Science data and, with upcoming missions, is expected to balloon to between 200 PB and 300 PB over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access - enabling complex visualizations, long time-series analysis, and cross dataset research without needing to copy and manage massive amounts of data locally. NASA has started prototyping with commercial cloud providers to make this data available in elastic cloud compute environments, allowing application developers direct access to the massive EOSDIS holdings. In this talk we'll explain the principles behind the archive architecture and share our experience of dealing with large amounts of data with serverless architectures including AWS Lambda, the Elastic Container Service (ECS) for long running jobs, and why we dropped thousands of lines of code for AWS Step Functions. We'll discuss best practices and patterns for accessing and using data available in a shared object store (S3) and leveraging events and message passing for sophisticated and highly scalable processing and analysis workflows. Finally we'll share capabilities NASA and cloud services are making available on the archives to enable massively scalable analysis and computation in a variety of formats and tools.

  20. Computer-Design Drawing for NASA 2020 Mars Rover

    NASA Image and Video Library

    2016-07-15

    NASA's 2020 Mars rover mission will go to a region of Mars thought to have offered favorable conditions long ago for microbial life, and the rover will search for signs of past life there. It will also collect and cache samples for potential return to Earth, for many types of laboratory analysis. As a pioneering step toward how humans on Mars will use the Red Planet's natural resources, the rover will extract oxygen from the Martian atmosphere. This 2016 image comes from computer-assisted-design work on the 2020 rover. The design leverages many successful features of NASA's Curiosity rover, which landed on Mars in 2012, but it adds new science instruments and a sampling system to carry out the new goals for the mission. http://photojournal.jpl.nasa.gov/catalog/PIA20759

  1. Projected Applications of a "Climate in a Box" Computing System at the NASA Short-Term Prediction Research and Transition (SPoRT) Center

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Molthan, Andrew L.; Zavodsky, Bradley; Case, Jonathan L.; LaFontaine, Frank J.

    2010-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to "Climate in a Box" systems, with hardware configurations capable of producing high-resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the "Climate in a Box" system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the "Climate in a Box" system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed within the NASA SPoRT Center.
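    The NDVI composites described in the abstract reduce to a simple band ratio; a minimal sketch of the index itself (the compositing and LIS integration are not shown, and the function name is illustrative):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and
    red surface reflectances (each typically in the range 0-1)."""
    return (nir - red) / (nir + red)

# Dense vegetation reflects strongly in the NIR and absorbs red light,
# so NDVI approaches 1; bare soil is near 0 and water is often negative.
```

    A composite over a period is typically formed by taking, per pixel, the maximum NDVI value across cloud-screened observations.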

  2. Computer science: Key to a space program renaissance. The 1981 NASA/ASEE summer study on the use of computer science and technology in NASA. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Freitas, R. A., Jr. (Editor); Carlson, P. A. (Editor)

    1983-01-01

    Adoption of an aggressive computer science research and technology program within NASA will: (1) enable new mission capabilities such as autonomous spacecraft, reliability and self-repair, and low-bandwidth intelligent Earth sensing; (2) lower manpower requirements, especially in the areas of Space Shuttle operations, by making fuller use of control center automation, technical support, and internal utilization of state-of-the-art computer techniques; (3) reduce project costs via improved software verification, software engineering, enhanced scientist/engineer productivity, and increased managerial effectiveness; and (4) significantly improve internal operations within NASA with electronic mail, managerial computer aids, an automated bureaucracy and uniform program operating plans.

  3. A Perspective on Computational Aerothermodynamics at NASA

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2007-01-01

    The evolving role of computational aerothermodynamics (CA) within NASA over the past 20 years is reviewed. The presentation highlights contributions to understanding the Space Shuttle pitching moment anomaly observed in the first shuttle flight, prediction of a static instability for Mars Pathfinder, and the use of CA for damage assessment in post-Columbia mission support. In the view forward, several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented to illustrate capabilities and limitations. Opportunities to advance the state-of-art in algorithms, grid generation and adaptation, and code validation are identified.

  4. An Improved Version of the NASA-Lockheed Multielement Airfoil Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Brune, G. W.; Manke, J. W.

    1978-01-01

    An improved version of the NASA-Lockheed computer program for the analysis of multielement airfoils is described. The predictions of the program are evaluated by comparison with recent experimental high lift data including lift, pitching moment, profile drag, and detailed distributions of surface pressures and boundary layer parameters. The results of the evaluation show that the contract objectives of improving program reliability and accuracy have been met.

  5. Current state and future direction of computer systems at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high-performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  6. Bayesian Research at the NASA Ames Research Center,Computational Sciences Division

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.

    2003-01-01

    NASA Ames Research Center is one of NASA's oldest centers, having started out as part of the National Advisory Committee for Aeronautics (NACA). The site, about 40 miles south of San Francisco, still houses many wind tunnels and other aviation-related departments. In recent years, with the growing realization that space exploration is heavily dependent on computing and data analysis, its focus has turned more towards Information Technology. The Computational Sciences Division has expanded rapidly as a result. In this article, I will give a brief overview of some of the past and present projects with a Bayesian content. Much more than is described here goes on within the Division. The web pages at http://ic.arc.nasa.gov give more information on these and the other Division projects.

  7. The NASA Lewis Research Center High Temperature Fatigue and Structures Laboratory

    NASA Technical Reports Server (NTRS)

    Mcgaw, M. A.; Bartolotta, P. A.

    1987-01-01

    The physical organization of the NASA Lewis Research Center High Temperature Fatigue and Structures Laboratory is described. Particular attention is given to uniaxial test systems, high cycle/low cycle testing systems, axial torsional test systems, computer system capabilities, and a laboratory addition. The proposed addition will double the floor area of the present laboratory and will be equipped with its own control room.

  8. Projected Applications of a ``Climate in a Box'' Computing System at the NASA Short-term Prediction Research and Transition (SPoRT) Center

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Molthan, A.; Zavodsky, B.; Case, J.; Lafontaine, F.

    2010-12-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to “Climate in a Box” systems, with hardware configurations capable of producing high-resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the “Climate in a Box” system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA’s Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the “Climate in a Box” system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed within the NASA SPoRT Center.

  9. Experimental Evaluation and Workload Characterization for High-Performance Computer Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.

    1995-01-01

    This research is conducted in the context of the Joint NSF/NASA Initiative on Evaluation (JNNIE). JNNIE is an inter-agency research program that goes beyond typical benchmarking to provide in-depth evaluations and understanding of the factors that limit the scalability of high-performance computing systems. Many NSF and NASA centers have participated in the effort. Our research effort was an integral part of implementing JNNIE in the NASA ESS grand challenge applications context. Our research work under this program was composed of three distinct but related activities: the evaluation of NASA ESS high-performance computing testbeds using the wavelet decomposition application; evaluation of NASA ESS testbeds using astrophysical simulation applications; and developing an experimental model for workload characterization for understanding workload requirements. In this report, we provide a summary of findings that covers all three parts, a list of the publications that resulted from this effort, and three appendices with the details of each of the studies using a key publication developed under the respective work.

  10. NASA End-to-End Data System /NEEDS/ information adaptive system - Performing image processing onboard the spacecraft

    NASA Technical Reports Server (NTRS)

    Kelly, W. L.; Howle, W. M.; Meredith, B. D.

    1980-01-01

    The Information Adaptive System (IAS) is an element of the NASA End-to-End Data System (NEEDS) Phase II and is focused toward onboard image processing. Since the IAS is a data preprocessing system which is closely coupled to the sensor system, it serves as a first step in providing a 'smart' imaging sensor. Some of the functions planned for the IAS include sensor response nonuniformity correction, geometric correction, data set selection, data formatting, packetization, and adaptive system control. The inclusion of these sensor data preprocessing functions onboard the spacecraft will significantly improve the extraction of information from the sensor data in a timely and cost-effective manner and provide the opportunity to design sensor systems which can be reconfigured in near real time for optimum performance. The purpose of this paper is to present the preliminary design of the IAS and the plans for its development.

  11. NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Lee-Rausch, E. M.

    2012-01-01

    Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

  12. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  13. NASA GRC Stirling Technology Development Overview

    NASA Technical Reports Server (NTRS)

    Thieme, Lanny G.; Schreiber, Jeffrey G.

    2003-01-01

    The Department of Energy, Lockheed Martin (LM), Stirling Technology Company, and NASA Glenn Research Center (GRC) are developing a high-efficiency Stirling Radioisotope Generator (SRG) for potential NASA Space Science missions. The SRG is being developed for multimission use, including providing spacecraft onboard electric power for NASA deep space missions and power for unmanned Mars rovers. NASA GRC is conducting an in-house supporting technology project to assist in developing the Stirling convertor for space qualification and mission implementation. Preparations are underway for a thermal/vacuum system demonstration and unattended operation during endurance testing of the 55-We Technology Demonstration Convertors. Heater head life assessment efforts continue, including verification of the heater head brazing and heat treatment schedules and evaluation of any potential regenerator oxidation. Long-term magnet aging tests are continuing to characterize any possible aging in the strength or demagnetization resistance of the permanent magnets used in the linear alternator. Testing of the magnet/lamination epoxy bond for performance and lifetime characteristics is now underway. These efforts are expected to provide key inputs as the system integrator, LM, begins system development of the SRG. GRC is also developing advanced technology for Stirling convertors. Cleveland State University (CSU) is progressing toward a multi-dimensional Stirling computational fluid dynamics code, capable of modeling complete convertors. Validation efforts at both CSU and the University of Minnesota are complementing the code development. New efforts have been started this year on a lightweight convertor, advanced controllers, high-temperature materials, and an end-to-end system dynamics model. Performance and mass improvement goals have been established for second- and third-generation Stirling radioisotope power systems.

  14. The end-to-end simulator for the E-ELT HIRES high resolution spectrograph

    NASA Astrophysics Data System (ADS)

    Genoni, M.; Landoni, M.; Riva, M.; Pariani, G.; Mason, E.; Di Marcantonio, P.; Disseau, K.; Di Varano, I.; Gonzalez, O.; Huke, P.; Korhonen, H.; Li Causi, Gianluca

    2017-06-01

    We present the design, architecture and results of the End-to-End simulator model of the high resolution spectrograph HIRES for the European Extremely Large Telescope (E-ELT). This system can be used as a tool to characterize the spectrograph by both engineers and scientists. The model simulates the behavior of photons from the scientific object (modeled with the main science drivers in mind) to the detector, takes calibration light sources into account, and allows evaluation of the different parameters of the spectrograph design. In this paper, we detail the architecture of the simulator and the computational model, which are strongly characterized by the modularity and flexibility that will be crucial in next-generation astronomical observation projects like the E-ELT, given their high complexity and long design and development times. Finally, we present synthetic images obtained with the current version of the End-to-End simulator based on the E-ELT HIRES requirements (especially high radial-velocity accuracy). Once ingested in the Data Reduction Software (DRS), they will allow verification that the instrument design can achieve the radial-velocity accuracy needed by the HIRES science cases.

  15. Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO

    NASA Technical Reports Server (NTRS)

    Stallworth, R.; Meyers, C. A.; Stinson, H. C.

    1989-01-01

    Results are presented from a comparison study of two computer codes for crack growth analysis, NASCRAC and NASA/FLAGRO. The two codes gave comparable, conservative results when the part-through-crack analysis solutions were evaluated against experimental test data. The codes also correlated well with each other for the through-crack-at-a-lug solution, for which NASA/FLAGRO gave the most conservative results.
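    Codes of this kind integrate a crack-growth law cycle by cycle. A generic sketch of that computation, using the textbook Paris-Erdogan law rather than either code's actual (more elaborate) growth model:

```python
import math

def paris_growth(a0, a_crit, C, m, delta_sigma, Y=1.0, da=1e-5):
    """Cycles to grow a crack from a0 to a_crit (lengths in m) under the
    Paris-Erdogan law da/dN = C * (dK)**m, with stress-intensity range
    dK = Y * delta_sigma * sqrt(pi * a). With delta_sigma in MPa, C is
    in MPa*sqrt(m) units. Simple forward integration in steps of da."""
    a, cycles = a0, 0.0
    while a < a_crit:
        dK = Y * delta_sigma * math.sqrt(math.pi * a)
        cycles += da / (C * dK ** m)   # dN = da / (da/dN)
        a += da
    return cycles
```

    For a fixed geometry factor Y, doubling the stress range divides the predicted life by 2**m, which is why measured growth exponents matter so much in conservatism comparisons like the one above.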

  16. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    NASA Astrophysics Data System (ADS)

    Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.

    2011-12-01

    Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also some research and application programs being launched in academia and governments to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, which is an Infrastructure as a Service (IaaS) to deliver on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several GES DISC's applications to the Nebula as a proof of concept, including: a) The Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data process workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data process workflow to evaluate the Nebula. The AIRS data process workflow consists of a series of algorithms being used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The result shows that Nebula has significantly

  17. Creating Communications, Computing, and Networking Technology Development Road Maps for Future NASA Human and Robotic Missions

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul; Hayden, Jeffrey L.

    2005-01-01

    For human and robotic exploration missions in the Vision for Exploration, roadmaps are needed for capability development and investments based on advanced technology developments. A roadmap development process was undertaken for the needed communications and networking capabilities and technologies for future human and robotic missions. The underlying processes are derived from work carried out during development of the future space communications architecture, and NASA's Space Architect Office (SAO) defined formats and structures for accumulating data. Interrelationships were established among emerging requirements, the capability analysis and technology status, and performance data. After developing an architectural communications and networking framework structured around the assumed needs for human and robotic exploration in the vicinity of Earth, the Moon, along the path to Mars, and in the vicinity of Mars, information was gathered from expert participants. This information was used to identify the capabilities expected from the new infrastructure and the technological gaps in the way of obtaining them. We define realistic, long-term space communication architectures based on emerging needs and translate the needs into the interfaces, functions, and computer processing that will be required. In developing our roadmapping process, we defined requirements for achieving end-to-end activities that will be carried out by future NASA human and robotic missions. This paper describes: 1) the architectural framework developed for analysis; 2) our approach to gathering and analyzing data from NASA, industry, and academia; 3) an outline of the technology research to be done, including milestones for technology research and demonstrations with timelines; and 4) the technology roadmaps themselves.

  18. The NASA NASTRAN structural analysis computer program - New content

    NASA Technical Reports Server (NTRS)

    Weidman, D. J.

    1978-01-01

    Capabilities of a NASA-developed structural analysis computer program, NASTRAN, are evaluated with reference to finite-element modelling. Applications include the automotive industry as well as aerospace. It is noted that the range of sub-programs within NASTRAN has expanded, while keeping user cost low.

  19. SSR_pipeline--computer software for the identification of microsatellite sequences from paired-end Illumina high-throughput DNA sequence data

    USGS Publications Warehouse

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user-specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
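    The core of microsatellite detection (module 3 above) is finding short motifs repeated in tandem. A simplified regex-based sketch of the idea — an illustration only, not SSR_pipeline's actual implementation:

```python
import re

def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=4):
    """Find perfect simple sequence repeats in a DNA string.

    Returns (start, motif, repeat_count) tuples for motifs of length
    min_motif..max_motif repeated at least min_repeats times in tandem."""
    hits = []
    # The backreference \1 matches consecutive copies of the captured
    # motif; the lazy {m,n}? prefers the shortest motif (e.g. AC over ACAC).
    pattern = re.compile(r"([ACGT]{%d,%d}?)\1{%d,}"
                         % (min_motif, max_motif, min_repeats - 1))
    for m in pattern.finditer(seq.upper()):
        motif = m.group(1)
        hits.append((m.start(), motif, len(m.group(0)) // len(motif)))
    return hits

print(find_ssrs("TTACACACACACGGA"))  # [(2, 'AC', 5)]
```

    Real tools add handling for compound and interrupted repeats, quality filtering, and flanking-sequence extraction for primer design.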

  20. The NASA Computational Fluid Dynamics (CFD) program - Building technology to solve future challenges

    NASA Technical Reports Server (NTRS)

    Richardson, Pamela F.; Dwoyer, Douglas L.; Kutler, Paul; Povinelli, Louis A.

    1993-01-01

    This paper presents the NASA Computational Fluid Dynamics program in terms of a strategic vision and goals as well as NASA's financial commitment and personnel levels. The paper also identifies the CFD program customers and the support to those customers. In addition, the paper discusses technical emphasis and direction of the program and some recent achievements. NASA's Ames, Langley, and Lewis Research Centers are the research hubs of the CFD program while the NASA Headquarters Office of Aeronautics represents and advocates the program.

  1. The NASA computer aided design and test system

    NASA Technical Reports Server (NTRS)

    Gould, J. M.; Juergensen, K.

    1973-01-01

    A family of computer programs facilitating the design, layout, evaluation, and testing of digital electronic circuitry is described. CADAT (computer aided design and test system) is intended for use by NASA and its contractors and is aimed predominantly at providing cost effective microelectronic subsystems based on custom designed metal oxide semiconductor (MOS) large scale integrated circuits (LSIC's). CADAT software can be easily adopted by installations with a wide variety of computer hardware configurations. Its structure permits ease of update to more powerful component programs and to newly emerging LSIC technologies. The components of the CADAT system are described stressing the interaction of programs rather than detail of coding or algorithms. The CADAT system provides computer aids to derive and document the design intent, includes powerful automatic layout software, permits detailed geometry checks and performance simulation based on mask data, and furnishes test pattern sequences for hardware testing.

  2. Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics.

    PubMed

    Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo

    2016-01-01

    The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART, which requires reflective markers to be placed on the subject's body. Three practical working activities involving object lifting and displacement were investigated. The operational risk was evaluated according to the lifting equation proposed by the U.S. National Institute for Occupational Safety and Health (NIOSH). The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement with this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising for promoting the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER’S SUMMARY: The study is motivated by the increasing interest in on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
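    The lifting equation in question is the revised NIOSH equation, which scales a 23 kg load constant by a product of task multipliers; the "risk multipliers" compared in the study are these terms. A sketch using the published multiplier formulas (the frequency and coupling multipliers, which are table lookups in the full method, default to the ideal value of 1.0 here):

```python
def niosh_rwl(H, V, D, A, FM=1.0, CM=1.0):
    """Recommended Weight Limit (kg) from the revised NIOSH lifting equation.

    H: horizontal hand distance from the ankles (cm), V: vertical hand
    height at the lift origin (cm), D: vertical travel distance (cm),
    A: asymmetry angle (degrees)."""
    LC = 23.0                             # load constant, kg
    HM = min(1.0, 25.0 / H)               # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)      # vertical multiplier
    DM = min(1.0, 0.82 + 4.5 / D)         # distance multiplier
    AM = 1.0 - 0.0032 * A                 # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

def lifting_index(load_kg, rwl):
    """LI > 1 flags an elevated risk of lifting-related injury."""
    return load_kg / rwl
```

    Posture-capture systems like those compared above supply the H, V, D, and A inputs, which is why skeletal-tracking accuracy translates directly into multiplier accuracy.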

  3. An analysis of Space Shuttle countdown activities: Preliminaries to a computational model of the NASA Test Director

    NASA Technical Reports Server (NTRS)

    John, Bonnie E.; Remington, Roger W.; Steier, David M.

    1991-01-01

    Before all systems are go just prior to the launch of a Space Shuttle, thousands of operations and tests have been performed to ensure that all shuttle and support subsystems are operational and ready for launch. These steps, which range from activating the orbiter's flight computers to removing the launch pad from the itinerary of the NASA tour buses, are carried out by launch team members at various locations and with highly specialized fields of expertise. The responsibility for coordinating these diverse activities rests with the NASA Test Director (NTD) at NASA-Kennedy. The behavior of the NTD is being studied with the goal of building a detailed computational model of that behavior; the results of that analysis to date are given. The NTD's performance is described in detail, as a team member who must coordinate a complex task through efficient audio communication, as well as an individual taking notes and consulting manuals. A model of the routine cognitive skill used by the NTD to follow the launch countdown procedure manual was implemented using the Soar cognitive architecture. Several examples are given of how such a model could aid in evaluating proposed computer support systems.

  4. Managing the Risks Associated with End-User Computing.

    ERIC Educational Resources Information Center

    Alavi, Maryam; Weiss, Ira R.

    1986-01-01

    Identifies organizational risks of end-user computing (EUC) associated with different stages of the end-user applications life cycle (analysis, design, implementation). Generic controls are identified that address each of the risks enumerated in a manner that allows EUC management to select those most appropriate to their EUC environment. (5…

  5. Training leads to increased auditory brain-computer interface performance of end-users with motor impairments.

    PubMed

    Halder, S; Käthner, I; Kübler, A

    2016-02-01

    Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users who may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users, the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training; specifically, end-users may require more than one session to develop their full potential.
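    Information transfer rates like those reported above are conventionally computed with the Wolpaw formula, which combines the number of selectable classes, the selection accuracy, and the selection speed (the abstract does not state the paper's exact method, so this is the standard definition, not necessarily theirs):

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Information transfer rate in bits/min (Wolpaw definition).

    accuracy is the probability of a correct selection, 0 < accuracy <= 1;
    below-chance performance is clamped to 0 bits."""
    N, P = n_classes, accuracy
    if P == 1.0:
        bits = math.log2(N)
    else:
        bits = (math.log2(N) + P * math.log2(P)
                + (1.0 - P) * math.log2((1.0 - P) / (N - 1)))
    return max(bits, 0.0) * selections_per_min
```

    The formula makes explicit why both accuracy and speed matter: at chance-level accuracy the rate is zero regardless of how fast selections are made.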

  6. Computational Predictions of the Performance of Wright 'Bent End' Propellers

    NASA Technical Reports Server (NTRS)

    Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)

    2002-01-01

    Computational analyses of two reproductions of the 1911 Wright brothers 'Bent End' wooden propeller have been performed and compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of a project on the performance of the Wright 'Bent End' propellers, intended to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG code is a Computational Fluid Dynamics (CFD) solver based on the Navier-Stokes equations. It is mainly used to compute the lift coefficient and the drag coefficient at specified angles of attack at different radii. Those calculated data are intermediate results of the computation and part of the necessary input for the Propeller Design Analysis Code (based on the Adkins and Liebeck method), which is a propeller design code used to compute the propeller thrust coefficient, the propeller power coefficient, and the propeller propulsive efficiency.
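The thrust coefficient, power coefficient, and propulsive efficiency named above are related by the standard propeller nondimensionalizations. A small sketch of those definitions; the operating numbers are made up for illustration and are not Wright propeller data:

```python
def propeller_performance(thrust_N, shaft_power_W, rho, n_rev_per_s, D_m, V_m_per_s):
    """Standard propeller coefficients:
    CT = T / (rho * n^2 * D^4),  CP = P / (rho * n^3 * D^5),
    advance ratio J = V / (n * D),  efficiency eta = J * CT / CP  (= T*V/P).
    """
    ct = thrust_N / (rho * n_rev_per_s**2 * D_m**4)
    cp = shaft_power_W / (rho * n_rev_per_s**3 * D_m**5)
    j = V_m_per_s / (n_rev_per_s * D_m)
    eta = j * ct / cp
    return ct, cp, j, eta

# Illustrative case: 1000 N thrust at 30 m/s from 40 kW of shaft power
ct, cp, j, eta = propeller_performance(1000.0, 40.0e3, 1.225, 20.0, 2.0, 30.0)
print(round(eta, 3))  # -> 0.75, consistent with eta = T*V/P = 1000*30/40000
```

The identity eta = J·CT/CP = T·V/P is a useful consistency check on any propeller analysis output.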

  7. NASA Bioreactor

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Laptop computer sits atop the Experiment Control Computer for a NASA Bioreactor. The flight crew can change operating conditions in the Bioreactor by using the graphical interface on the laptop. The NASA Bioreactor provides a low turbulence culture environment which promotes the formation of large, three-dimensional cell clusters. The Bioreactor is rotated to provide gentle mixing of fresh and spent nutrient without inducing shear forces that would damage the cells. Due to their high level of cellular organization and specialization, samples constructed in the bioreactor more closely resemble the original tumor or tissue found in the body. The work is sponsored by NASA's Office of Biological and Physical Research. The bioreactor is managed by the Biotechnology Cell Science Program at NASA's Johnson Space Center (JSC). NASA-sponsored bioreactor research has been instrumental in helping scientists to better understand normal and cancerous tissue development. In cooperation with the medical community, the bioreactor design is being used to prepare better models of human colon, prostate, breast and ovarian tumors. Cartilage, bone marrow, heart muscle, skeletal muscle, pancreatic islet cells, liver and kidney are just a few of the normal tissues being cultured in rotating bioreactors by investigators.

  8. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally-intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  9. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal de-icing & anti-icing) and ANTICE (hot-gas & electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program, and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2, The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3, The Validation of ANTICE'.

  10. High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.

    2015-12-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing, making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable, user-friendly computational elements, its FPGA IP core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.
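The frequency-domain wind retrieval mentioned above amounts to locating the Doppler peak in a 4,096-point spectrum. A toy sketch of that step; the sample rate and Doppler shift are arbitrary illustration values, not HOPS parameters:

```python
import numpy as np

N = 4096                # FFT length, matching the 4,096-point transform noted above
fs = 1.0e6              # sample rate (Hz) -- illustrative, not a HOPS value
f_doppler = 50.0e3      # true Doppler shift (Hz) of the simulated return

t = np.arange(N) / fs
# Simulated complex baseband return: a Doppler-shifted tone plus noise
rng = np.random.default_rng(0)
signal = np.exp(2j * np.pi * f_doppler * t) + 0.1 * rng.standard_normal(N)

spectrum = np.fft.fft(signal)
freqs = np.fft.fftfreq(N, d=1.0 / fs)
f_est = freqs[np.argmax(np.abs(spectrum))]   # peak bin -> Doppler estimate
print(f_est)  # within one bin (fs/N ~ 244 Hz) of the true shift
```

In an FPGA implementation the FFT and peak search run in fixed-point pipelines, but the algorithmic content is the same spectral-peak localization.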

  11. NASA charging analyzer program: A computer tool that can evaluate electrostatic contamination

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.; Roche, J. C.; Mandell, M. J.

    1978-01-01

    A computer code, the NASA Charging Analyzer Program (NASCAP), was developed to study the surface charging of bodies subjected to geomagnetic substorm conditions. This program will treat the material properties of a surface in a self-consistent manner and calculate the electric fields in space due to the surface charge. Trajectories of charged particles in this electric field can be computed to determine if these particles enhance surface contamination. A preliminary model of the Spacecraft Charging At The High Altitudes (SCATHA) satellite was developed in the NASCAP code and subjected to a geomagnetic substorm environment to investigate the possibility of electrostatic contamination. The results indicate that differential voltages will exist between the spacecraft ground surfaces and the insulator surfaces. The electric fields from this differential charging can enhance the contamination of spacecraft surfaces.

  12. Development of a High Resolution Weather Forecast Model for Mesoamerica Using the NASA Ames Code I Private Cloud Computing Environment

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Case, Jonathan; Venner, Jason; Moreno-Madrinan, Max J.; Delgado, Francisco

    2012-01-01

    Two projects at NASA Marshall Space Flight Center have collaborated to develop a high resolution weather forecast model for Mesoamerica: the NASA Short-term Prediction Research and Transition (SPoRT) Center, which integrates unique NASA satellite and weather forecast modeling capabilities into the operational weather forecasting community, and NASA's SERVIR Program, which integrates satellite observations, ground-based data, and forecast models to improve disaster response in Central America, the Caribbean, Africa, and the Himalayas.

  13. The Lunar Laser Communication Demonstration: NASA's First Step Toward Very High Data Rate Support of Science and Exploration Missions

    NASA Astrophysics Data System (ADS)

    Boroson, Don M.; Robinson, Bryan S.

    2014-12-01

    Future NASA missions for both Science and Exploration will have needs for much higher data rates than are presently available, even with NASA's highly-capable Space- and Deep-Space Networks. As a first step towards this end, for one month in late 2013, NASA's Lunar Laser Communication Demonstration (LLCD) successfully demonstrated for the first time high-rate duplex laser communications between a satellite in lunar orbit, the Lunar Atmosphere and Dust Environment Explorer (LADEE), and multiple ground stations on the Earth. It constituted the longest-range laser communication link ever built and demonstrated the highest communication data rates ever achieved to or from the Moon.

  14. NASA's supercomputing experience

    NASA Technical Reports Server (NTRS)

    Bailey, F. Ron

    1990-01-01

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by discussion of activities carried out by the Numerical Aerodynamical Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

  15. High Productivity Computing Systems and Competitiveness Initiative

    DTIC Science & Technology

    2007-07-01

    planning committee for the annual, international Supercomputing Conference in 2004 and 2005. This is the leading HPC industry conference in the world. It…sector partnerships. Partnerships will form a key part of discussions at the 2nd High Performance Computing Users Conference, planned for July 13, 2005…other things an interagency roadmap for high-end computing core technologies and an accessibility improvement plan. Improving HPC Education and

  16. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. The three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  17. The Design and Implementation of NASA's Advanced Flight Computing Module

    NASA Technical Reports Server (NTRS)

    Alkakaj, Leon; Straedy, Richard; Jarvis, Bruce

    1995-01-01

    This paper describes a working flight computer Multichip Module (MCM) developed jointly, in a collaborative fashion, by JPL and TRW under their respective research programs. The MCM is fabricated by nCHIP and is packaged within a 2 by 4 inch Al package from Coors. This flight computer module is one of three modules under development by NASA's Advanced Flight Computer (AFC) program. Further development of the Mass Memory and the programmable I/O MCM modules will follow. The three building-block modules will then be stacked into a 3D MCM configuration. The mass and volume of the flight computer MCM, 89 grams and 1.5 cubic inches respectively, represent a major enabling technology for future deep space as well as commercial remote sensing applications.

  18. Computer modeling of high-voltage solar array experiment using the NASCAP/LEO (NASA Charging Analyzer Program/Low Earth Orbit) computer code

    NASA Astrophysics Data System (ADS)

    Reichl, Karl O., Jr.

    1987-06-01

    The relationship between the Interactions Measurement Payload for Shuttle (IMPS) flight experiment and the low Earth orbit plasma environment is discussed. Two interactions (parasitic current loss and electrostatic discharge on the array) may be detrimental to mission effectiveness. They result from the spacecraft's electrical potentials floating relative to plasma ground to achieve a charge flow equilibrium into the spacecraft. The floating potentials were driven by external biases applied to a solar array module of the Photovoltaic Array Space Power (PASP) experiment aboard the IMPS test pallet. The modeling was performed using the NASA Charging Analyzer Program/Low Earth Orbit (NASCAP/LEO) computer code which calculates the potentials and current collection of high-voltage objects in low Earth orbit. Models are developed by specifying the spacecraft, environment, and orbital parameters. Eight IMPS models were developed by varying the array's bias voltage and altering its orientation relative to its motion. The code modeled a typical low Earth equatorial orbit. NASCAP/LEO calculated a wide variety of possible floating potential and current collection scenarios. These varied directly with both the array bias voltage and with the vehicle's orbital orientation.

  19. Development of Advanced Computational Aeroelasticity Tools at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Bartels, R. E.

    2008-01-01

    NASA Langley Research Center has continued to develop its long standing computational tools to address new challenges in aircraft and launch vehicle design. This paper discusses the application and development of those computational aeroelastic tools. Four topic areas will be discussed: 1) Modeling structural and flow field nonlinearities; 2) Integrated and modular approaches to nonlinear multidisciplinary analysis; 3) Simulating flight dynamics of flexible vehicles; and 4) Applications that support both aeronautics and space exploration.

  20. NASA Computational Case Study SAR Data Processing: Ground-Range Projection

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Rincon, Rafael

    2013-01-01

    Radar technology is used extensively by NASA for remote sensing of the Earth and other planetary bodies. In this case study, we learn about different computational concepts for processing radar data. In particular, we learn how to correct a slanted radar image by projecting it onto the surface that was sensed by a radar instrument.
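For a flat surface, the slant-to-ground projection this case study teaches follows from the imaging triangle: the measured slant range is the hypotenuse and the platform altitude the vertical leg, so ground range = sqrt(slant² - altitude²). A minimal flat-Earth sketch; the function and variable names here are illustrative, not from the case study materials:

```python
import numpy as np

def slant_to_ground(slant_range, altitude):
    """Flat-Earth projection: ground = sqrt(slant^2 - altitude^2).
    Valid only where slant_range >= altitude (beyond nadir)."""
    slant = np.asarray(slant_range, dtype=float)
    return np.sqrt(slant**2 - altitude**2)

def project_image_row(row, slant_ranges, altitude, n_out):
    """Resample one slant-range image row onto a uniform ground-range grid."""
    ground = slant_to_ground(slant_ranges, altitude)
    ground_grid = np.linspace(ground[0], ground[-1], n_out)
    return np.interp(ground_grid, ground, row)

print(slant_to_ground(5.0, 3.0))  # classic 3-4-5 triangle -> 4.0
```

Because ground range varies nonlinearly with slant range, the resampling step stretches near-range pixels more than far-range ones, which is exactly the distortion the correction removes.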

  1. Proceedings of the Fifth NASA/NSF/DOD Workshop on Aerospace Computational Control

    NASA Technical Reports Server (NTRS)

    Wette, M. (Editor); Man, G. K. (Editor)

    1993-01-01

    The Fifth Annual Workshop on Aerospace Computational Control was one in a series of workshops sponsored by NASA, NSF, and the DOD. The purpose of these workshops is to address computational issues in the analysis, design, and testing of flexible multibody control systems for aerospace applications. The intention in holding these workshops is to bring together users, researchers, and developers of computational tools in aerospace systems (spacecraft, space robotics, aerospace transportation vehicles, etc.) for the purpose of exchanging ideas on the state of the art in computational tools and techniques.

  2. Development of the NASA High-Altitude Imaging Wind and Rain Airborne Profiler

    NASA Technical Reports Server (NTRS)

    Li, Lihua; Heymsfield, Gerald; Carswell, James; Schaubert, Dan; McLinden, Matthew; Vega, Manuel; Perrine, Martin

    2011-01-01

    The scope of this paper is the development and recent field deployments of the High-Altitude Imaging Wind and Rain Airborne Profiler (HIWRAP), which was funded under the NASA Instrument Incubator Program (IIP) [1]. HIWRAP is a dual-frequency (Ka- and Ku-band), dual-beam (30° and 40° incidence angles), conical scanning, Doppler radar system designed for operation on the NASA high-altitude (65,000 ft) Global Hawk Unmanned Aerial System (UAS). It utilizes solid-state transmitters along with a novel pulse compression scheme that results in a system with compact size, light weight, lower power consumption, and low cost compared to radars currently in use for precipitation and Doppler wind measurements. By combining measurements at Ku- and Ka-band, HIWRAP is able to image winds by measuring volume backscattering from clouds and precipitation. In addition, HIWRAP is also capable of measuring surface winds in an approach similar to SeaWinds on QuikSCAT. To this end, HIWRAP hardware and software development has been completed. It was installed on the NASA WB-57 for instrument test flights in March 2010 and then deployed on the NASA Global Hawk to support the Genesis and Rapid Intensification Processes (GRIP) field campaign in August-September 2010. This paper describes the scientific motivations for the development of HIWRAP as well as system hardware, aircraft integration, and flight missions. Preliminary data from GRIP science flights are also presented.
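The pulse-compression approach mentioned above pairs a long coded transmit pulse with a matched filter on receive, recovering fine range resolution at low peak power. A generic linear-FM illustration; none of these numbers are HIWRAP design values:

```python
import numpy as np

fs = 1.0e6            # sample rate (Hz) -- illustrative
T = 100e-6            # transmitted pulse length (s)
B = 200e3             # chirp bandwidth (Hz)

t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear-FM reference pulse

# Received echo: the chirp delayed by 300 samples inside a longer window
delay = 300
rx = np.zeros(1000, dtype=complex)
rx[delay:delay + chirp.size] = chirp

# Matched filter: correlate against the (conjugated) reference;
# the compressed peak lands at the true delay
compressed = np.correlate(rx, chirp, mode="valid")
print(int(np.argmax(np.abs(compressed))))  # -> 300
```

The compressed peak is as narrow as a short pulse of bandwidth B would be, which is why a solid-state transmitter with modest peak power can match the resolution of a high-power pulsed system.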

  3. Parallel Computing:. Some Activities in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  4. Fourth NASA Workshop on Computational Control of Flexible Aerospace Systems, part 2

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1991-01-01

    A collection of papers presented at the Fourth NASA Workshop on Computational Control of Flexible Aerospace Systems is given. The papers address modeling, systems identification, and control of flexible aircraft, spacecraft and robotic systems.

  5. Application of NASA General-Purpose Solver to Large-Scale Computations in Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Storaasli, Olaf O.

    2004-01-01

    Of several iterative and direct equation solvers evaluated previously for computations in aeroacoustics, the most promising was the NASA-developed General-Purpose Solver (winner of NASA's 1999 software of the year award). This paper presents detailed, single-processor statistics of the performance of this solver, which has been tailored and optimized for large-scale aeroacoustic computations. The statistics, compiled using an SGI ORIGIN 2000 computer with 12 Gb available memory (RAM) and eight available processors, are the central processing unit time, RAM requirements, and solution error. The equation solver is capable of solving 10 thousand complex unknowns in as little as 0.01 sec using 0.02 Gb RAM, and 8.4 million complex unknowns in slightly less than 3 hours using all 12 Gb. This latter solution is the largest aeroacoustics problem solved to date with this technique. The study was unable to detect any noticeable error in the solution, since noise levels predicted from these solution vectors are in excellent agreement with the noise levels computed from the exact solution. The equation solver provides a means for obtaining numerical solutions to aeroacoustics problems in three dimensions.
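The frequency-domain aeroacoustic discretizations described here reduce to large systems of complex linear equations. The NASA General-Purpose Solver is a tailored sparse direct code; the core operation can nonetheless be illustrated with a small dense system (the sizes and matrices below are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # tiny compared with the 8.4 million unknowns cited above

# Random, strongly diagonally weighted complex system standing in for a
# frequency-domain acoustic discretization
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A += n * np.eye(n)                      # keep the system well-conditioned
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = A @ x_true

x = np.linalg.solve(A, b)               # LU-based direct solve
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(residual)
```

Checking the relative residual, as above, mirrors the paper's validation step of comparing predicted noise levels against the exact solution.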

  6. NASA's Climate Data Services Initiative

    NASA Astrophysics Data System (ADS)

    McInerney, M.; Duffy, D.; Schnase, J. L.; Webster, W. P.

    2013-12-01

    Our understanding of the Earth's processes is based on a combination of observational data records and mathematical models. The size of NASA's space-based observational data sets is growing dramatically as new missions come online. However a potentially bigger data challenge is posed by the work of climate scientists, whose models are regularly producing data sets of hundreds of terabytes or more. It is important to understand that the 'Big Data' challenge of climate science cannot be solved with a single technological approach or an ad hoc assemblage of technologies. It will require a multi-faceted, well-integrated suite of capabilities that include cloud computing, large-scale compute-storage systems, high-performance analytics, scalable data management, and advanced deployment mechanisms in addition to the existing, well-established array of mature information technologies. It will also require a coherent organizational effort that is able to focus on the specific and sometimes unique requirements of climate science. Given that it is the knowledge that is gained from data that is of ultimate benefit to society, data publication and data analytics will play a particularly important role. In an effort to accelerate scientific discovery and innovation through broader use of climate data, NASA Goddard Space Flight Center's Office of Computational and Information Sciences and Technology has embarked on a determined effort to build a comprehensive, integrated data publication and analysis capability for climate science. The Climate Data Services (CDS) Initiative integrates people, expertise, and technology into a highly-focused, next-generation, one-stop climate science information service. The CDS Initiative is providing the organizational framework, processes, and protocols needed to deploy existing information technologies quickly using a combination of enterprise-level services and an expanding array of cloud services. Crucial to its effectiveness, the CDS

  7. NSI customer service representatives and user support office: NASA Science Internet

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Science Internet (NSI) was established in 1987 to provide NASA's Office of Space Science and Applications (OSSA) missions with transparent wide-area data connectivity to NASA's researchers, computational resources, and databases. The NSI Office at NASA/Ames Research Center has the lead responsibility for implementing a total, open networking program to serve the OSSA community. NSI is a full-service communications provider whose services include science network planning, network engineering, applications development, network operations, and network information center/user support services. NSI's mission is to provide reliable high-speed communications to the NASA science community. To this end, the NSI Office manages and operates the NASA Science Internet, a multiprotocol network currently supporting both DECnet and TCP/IP protocols. NSI utilizes state-of-the-art network technology to meet its customers' requirements. The NASA Science Internet interconnects with other national networks including the National Science Foundation's NSFNET, the Department of Energy's ESnet, and the Department of Defense's MILNET. NSI also has international connections to Japan, Australia, New Zealand, Chile, and several European countries. NSI cooperates with other government agencies as well as academic and commercial organizations to implement networking technologies which foster interoperability, improve reliability and performance, increase security and control, and expedite migration to the OSI protocols.

  8. Modern design of a fast front-end computer

    NASA Astrophysics Data System (ADS)

    Šoštarić, Z.; Aničić, D.; Sekolec, L.; Su, J.

    1994-12-01

    Front-end computers (FEC) at Paul Scherrer Institut provide access to accelerator CAMAC-based sensors and actuators by way of a local area network. In the scope of the new generation FEC project, a front-end is regarded as a collection of services. The functionality of one such service is described in terms of Yourdon's environment, behaviour, processor and task models. The computational model (software representation of the environment) of the service is defined separately, using the information model of the Shlaer-Mellor method and the Sather OO language. In parallel with the analysis and later with the design, a suite of test programmes was developed to evaluate the feasibility of different computing platforms for the project, and a set of rapid prototypes was produced to resolve different implementation issues. The past and future aspects of the project and its driving forces are presented. Justification of the choice of methodology, platform, and requirements is given. We conclude with a description of the present state, priorities, and limitations of our project.

  9. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    PubMed

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.
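The 433 images/s figure above was obtained by timing image transfers to the host PC. The measurement pattern itself is simple to sketch; the frame producer here is a stand-in callable, not the actual FPGA interface:

```python
import time

def measure_frame_rate(produce_frame, n_frames=200):
    """Return achieved frames per second by timing n_frames calls."""
    start = time.perf_counter()
    for _ in range(n_frames):
        produce_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Stand-in workload: allocate a dummy 'image' buffer each call
rate = measure_frame_rate(lambda: bytes(64 * 1024))
print(rate > 0)
```

Using a monotonic high-resolution clock such as `time.perf_counter` matters here; wall-clock time can jump and distort short-interval throughput measurements.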

  10. Low-Cost, High-Speed Back-End Processing System for High-Frequency Ultrasound B-Mode Imaging

    PubMed Central

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T.; Shung, K. Kirk

    2009-01-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution. PMID:19574160

  11. NASA Brevard Top Scholars

    NASA Image and Video Library

    2017-11-13

    Retired NASA astronaut Tom Jones is with top scholars from Brevard County public high schools in the Rocket Garden at the NASA Kennedy Space Center Visitor Complex in Florida. Top scholars from the high schools were invited to Kennedy Space Center for a tour of facilities, lunch and a roundtable discussion with engineers and scientists at the center. The 2017-2018 Brevard Top Scholars event was hosted by the center's Education Projects and Youth Engagement office to honor the top three scholars of the 2017-2018 graduating student class from each of Brevard County’s public high schools. The students received a personalized certificate at the end of the day.

  12. NASA Brevard Top Scholars

    NASA Image and Video Library

    2017-11-13

    Retired NASA astronaut Tom Jones talks to high school students during "Lunch with an Astronaut" at the NASA Kennedy Space Center Visitor Complex in Florida. Top scholars from Brevard County public high schools were invited to Kennedy Space Center for a tour of facilities, lunch and a roundtable discussion with engineers and scientists at the center. The 2017-2018 Brevard Top Scholars event was hosted by the center's Education Projects and Youth Engagement office to honor the top three scholars of the 2017-2018 graduating student class from each of Brevard County’s public high schools. The students received a personalized certificate at the end of the day.

  13. Combining Simulation Tools for End-to-End Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan; Gutkowski, Jeffrey; Craig, Scott; Dawn, Tim; Williams, Jacobs; Stein, William B.; Litton, Daniel; Lugo, Rafael; Qu, Min

    2015-01-01

    Trajectory simulations with advanced optimization algorithms are invaluable tools in the process of designing spacecraft. Due to the need for complex models, simulations are often highly tailored to the needs of the particular program or mission. NASA's Orion and SLS programs are no exception. While independent analyses are valuable for assessing individual spacecraft capabilities, a complete end-to-end trajectory from launch to splashdown maximizes potential performance and ensures a continuous solution. To obtain this end-to-end capability, Orion's in-space tool (Copernicus) was made to interface directly with the SLS ascent tool (POST2), and a new tool was created to optimize the full problem by operating both simulations simultaneously.

  14. Development of the NASA/FLAGRO computer program for analysis of airframe structures

    NASA Technical Reports Server (NTRS)

    Forman, R. G.; Shivakumar, V.; Newman, J. C., Jr.

    1994-01-01

    The NASA/FLAGRO (NASGRO) computer program was developed for fracture control analysis of space hardware and is currently the standard computer code in NASA, the U.S. Air Force, and the European Space Agency (ESA) for this purpose. The significant attributes of the NASGRO program are the numerous crack case solutions, the large materials file, the improved growth rate equation based on crack closure theory, and the user-friendly promptive input features. In support of the National Aging Aircraft Research Program (NAARP), NASGRO is being further developed to provide advanced state-of-the-art capability for damage tolerance and crack growth analysis of aircraft structural problems, including mechanical systems and engines. The project currently involves a cooperative development effort by NASA, the FAA, and ESA. The primary tasks underway are the incorporation of advanced methodology for crack growth rate retardation resulting from spectrum loading and improved analysis for determining crack instability. Also, the current weight function solutions in NASGRO for nonlinear stress gradient problems are being extended to more crack cases, and the 2-d boundary integral routine for stress analysis and stress-intensity factor solutions is being extended to 3-d problems. Lastly, effort is underway to enhance the program to operate on personal computers and workstations in a Windows environment. Because of the increasing and already wide usage of NASGRO, the code offers an excellent mechanism for technology transfer of new fatigue and fracture mechanics capabilities developed within NAARP.
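The kind of crack-growth life calculation a code like NASGRO automates can be illustrated with a bare-bones numerical integration of the classical Paris law. This is only a sketch: the actual NASGRO growth-rate equation adds crack-closure, threshold, and instability terms, and every constant below is a hypothetical placeholder, not material data from the program.

```python
# Illustrative Paris-law crack-growth integration (NOT the NASGRO equation;
# all constants are hypothetical placeholder values).
import math

def cycles_to_grow(a0, af, C, m, delta_sigma, geometry=1.12, steps=10000):
    """Integrate da/dN = C * (dK)^m from crack length a0 to af,
    with dK = geometry * delta_sigma * sqrt(pi * a) (edge-crack form)."""
    a = a0
    da = (af - a0) / steps
    cycles = 0.0
    for _ in range(steps):
        dK = geometry * delta_sigma * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        dadN = C * dK ** m                                    # m per cycle
        cycles += da / dadN
        a += da
    return cycles

# Hypothetical aluminum-like constants (units: MPa, m):
N = cycles_to_grow(a0=1e-3, af=1e-2, C=1e-11, m=3.0, delta_sigma=100.0)
print(f"Estimated life: {N:,.0f} cycles")
```

A production code layers many crack-case geometry factors, a materials database, and load-spectrum retardation models on top of exactly this kind of cycle-by-cycle integration.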

  15. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations, ranging from peta-flop supercomputers and high-end tera-flop facilities running a variety of operating systems and applications to mid-range and smaller computational clusters used for HPC application development, pilot runs, and prototype staging. What they all have in common is that they operate as stand-alone systems rather than as scalable, shared, user re-configurable resources. The advent of cloud computing has changed the traditional HPC implementation. In this article, we discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called the Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data showing that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).

  16. NASA HPCC Technology for Aerospace Analysis and Design

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine H.

    1999-01-01

    The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community, thus providing the community with key tools necessary to reduce design cycle times and increase fidelity in order to improve the safety, efficiency, and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community to the advantage of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design problems) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to do full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1, respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, 3-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.

  17. An Overview of Computational Aeroacoustic Modeling at NASA Langley

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2001-01-01

    The use of computational techniques in the area of acoustics is known as computational aeroacoustics and has shown great promise in recent years. Although an ultimate goal is to use computational simulations as a virtual wind tunnel, the problem is so complex that blind applications of traditional algorithms are typically unable to produce acceptable results. The phenomena of interest are inherently unsteady and cover a wide range of frequencies and amplitudes. Nonetheless, with appropriate simplifications and special care to resolve specific phenomena, currently available methods can be used to solve important acoustic problems. These simulations can be used to complement experiments, and often give much more detailed information than can be obtained in a wind tunnel. The use of acoustic analogy methods to inexpensively determine far-field acoustics from near-field unsteadiness has greatly reduced the computational requirements. A few examples of current applications of computational aeroacoustics at NASA Langley are given. There remains a large class of problems that require more accurate and efficient methods. Research to develop more advanced methods that are able to handle the geometric complexity of realistic problems using block-structured and unstructured grids are highlighted.

  18. Computational Fluid Dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Kutler, Paul

    1994-01-01

    Computational fluid dynamics (CFD) is beginning to play a major role in the aircraft industry of the United States because of the realization that CFD can be a new and effective design tool and thus could provide a company with a competitive advantage. It is also playing a significant role in research institutions, both governmental and academic, as a tool for researching new fluid physics, as well as supplementing and complementing experimental testing. In this presentation, some of the progress made to date in CFD at NASA Ames will be reviewed. The presentation addresses the status of CFD in terms of methods, examples of CFD solutions, and computer technology. In addition, the role CFD will play in supporting the revolutionary goals set forth by the Aeronautical Policy Review Committee established by the Office of Science and Technology Policy is noted. The need for validated CFD tools is also briefly discussed.

  19. A Queue Simulation Tool for a High Performance Scientific Computing Center

    NASA Technical Reports Server (NTRS)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
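The discrete-event approach mentioned above can be sketched in a few lines: jobs carry submit times, CPU counts, and runtimes, and a priority queue of finish events drives the simulated clock. This is a minimal sketch in the spirit of such a tool, not the actual NCCS simulator; the job mix and CPU count are hypothetical.

```python
# Minimal discrete-event simulation of an FCFS batch queue
# (illustrative sketch only; not the NCCS tool).
import heapq

def simulate_fcfs(jobs, total_cpus):
    """jobs: list of (submit_time, cpus_needed, runtime). Returns mean wait."""
    jobs = sorted(jobs, key=lambda j: j[0])  # FCFS by submit time (stable)
    free = total_cpus
    running = []                             # min-heap of (finish_time, cpus)
    clock, waits = 0.0, []
    for submit, cpus, runtime in jobs:
        clock = max(clock, submit)
        # Advance the clock through finish events until this job fits.
        while free < cpus:
            finish, c = heapq.heappop(running)
            clock = max(clock, finish)
            free += c
        waits.append(clock - submit)
        free -= cpus
        heapq.heappush(running, (clock + runtime, cpus))
    return sum(waits) / len(waits)

# Two 64-CPU jobs contend for a 100-CPU system; the second must wait.
print(simulate_fcfs([(0, 64, 10.0), (0, 64, 5.0)], total_cpus=100))  # -> 5.0
```

Alternative queue structures and allocation policies are then just different scheduling rules plugged into the same event loop.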

  20. NASA Gulf of Mexico Initiative Hypoxia Research

    NASA Technical Reports Server (NTRS)

    Armstrong, Curtis D.

    2012-01-01

    The Applied Science & Technology Project Office at Stennis Space Center (SSC) manages NASA's Gulf of Mexico Initiative (GOMI). Addressing short-term crises and long-term issues, GOMI participants seek to understand the environment using remote sensing, in-situ observations, laboratory analyses, field observations and computational models. New capabilities are transferred to end-users to help them make informed decisions. Some GOMI activities of interest to the hypoxia research community are highlighted.

  1. End-of-test Performance and Wear Characterization of NASA's Evolutionary Xenon Thruster (NEXT) Long-Duration Test

    NASA Technical Reports Server (NTRS)

    Shastry, Rohit; Herman, Daniel Andrew; Soulas, George C.; Patterson, Michael J.

    2014-01-01

    This presentation describes results from the end-of-test performance characterization of NASA's Evolutionary Xenon Thruster (NEXT) Long-Duration Test (LDT). Sub-component performance as well as overall thruster performance is presented and compared to results over the course of the test. Overall wear of critical thruster components is also described, and an update on the first failure mode of the thruster is provided.

  2. Using Frameworks in a Government Contracting Environment: Case Study at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    McGalliard, James

    2008-01-01

    A viewgraph describing the use of multiple frameworks by NASA, GSA, and U.S. Government agencies is presented. The contents include: 1) Federal Systems Integration and Management Center (FEDSIM) and NASA Center for Computational Sciences (NCCS) Environment; 2) Ruling Frameworks; 3) Implications; and 4) Reconciling Multiple Frameworks.

  3. Computer-Assisted Synthetic Planning: The End of the Beginning.

    PubMed

    Szymkuć, Sara; Gajewska, Ewa P; Klucznik, Tomasz; Molga, Karol; Dittwald, Piotr; Startek, Michał; Bajczyk, Michał; Grzybowski, Bartosz A

    2016-05-10

    Exactly half a century has passed since the launch of the first documented research project (1965 Dendral) on computer-assisted organic synthesis. Many more programs were created in the 1970s and 1980s, but the enthusiasm of these pioneering days had largely dissipated by the 2000s, and the challenge of teaching the computer how to plan organic syntheses earned itself the reputation of a "mission impossible". This is quite curious given that, in the meantime, computers have "learned" many other skills that had been considered exclusive domains of human intellect and creativity: for example, machines can nowadays play chess better than human world champions, and they can compose classical music pleasant to the human ear. Although there have been no similar feats in organic synthesis, this Review argues that to concede defeat would be premature. Indeed, by bringing together modern computational power, algorithms from graph/network theory, chemical rules (with full stereo- and regiochemistry) coded in appropriate formats, and elements of quantum mechanics, the machine can finally be "taught" how to plan syntheses of non-trivial organic molecules in a matter of seconds to minutes. The Review begins with an overview of some basic theoretical concepts essential for the big-data analysis of chemical syntheses. It progresses to the problem of optimizing pathways involving known reactions. It culminates with a discussion of algorithms that allow for a completely de novo and fully automated design of syntheses leading to relatively complex targets, including those that have not been made before. Of course, there are still things to be improved, but computers are finally becoming relevant and helpful to the practice of organic-synthetic planning. Paraphrasing Churchill's famous words after the Allies' first major victory over the Axis forces in Africa, it is not the end, it is not even the beginning of the end, but it is the end of the beginning for the
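The graph-theoretic core of such planners can be caricatured in a few lines: reaction rules map a product to candidate precursor sets, and a backward search recurses until everything bottoms out in purchasable starting materials. The molecules and rules below are symbolic placeholders, not real chemistry or the Review's actual algorithms.

```python
# Toy retrosynthetic search as backward graph traversal
# (symbolic placeholders only; not real reaction chemistry).
RULES = {                     # product -> list of candidate precursor tuples
    "target": [("intermediateA", "reagent1"), ("intermediateB",)],
    "intermediateA": [("startX",)],
    "intermediateB": [("startY", "startZ")],
}
PURCHASABLE = {"reagent1", "startX", "startY", "startZ"}

def plan(target):
    """Return a list of (product, precursors) steps, or None if no route."""
    if target in PURCHASABLE:
        return []                       # nothing to synthesize
    for precursors in RULES.get(target, []):
        subplans = [plan(p) for p in precursors]
        if all(sp is not None for sp in subplans):
            steps = [s for sp in subplans for s in sp]
            return steps + [(target, precursors)]
    return None                         # dead end: no applicable rule

print(plan("target"))
```

Real systems add scoring, stereochemical constraints, and pruning to keep this combinatorial search tractable, but the recursive structure is the same.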

  4. Flow Control Research at NASA Langley in Support of High-Lift Augmentation

    NASA Technical Reports Server (NTRS)

    Sellers, William L., III; Jones, Gregory S.; Moore, Mark D.

    2002-01-01

    The paper describes the efforts at NASA Langley to apply active and passive flow control techniques for improved high-lift systems, and advanced vehicle concepts utilizing powered high-lift techniques. The development of simplified high-lift systems utilizing active flow control is shown to provide significant weight and drag reduction benefits based on system studies. Active flow control that focuses on separation, and the development of advanced circulation control wings (CCW) utilizing unsteady excitation techniques will be discussed. The advanced CCW airfoils can provide multifunctional controls throughout the flight envelope. Computational and experimental data are shown to illustrate the benefits and issues with implementation of the technology.

  5. Terahertz computed tomography of NASA thermal protection system materials

    NASA Astrophysics Data System (ADS)

    Roth, D. J.; Reyes-Rodriguez, S.; Zimdars, D. A.; Rauser, R. W.; Ussery, W. W.

    2012-05-01

    A terahertz (THz) axial computed tomography system has been developed that uses time domain measurements to form cross-sectional image slices and three-dimensional volume renderings of terahertz-transparent materials. The system can inspect samples as large as 0.0283 m3 (1 ft3) with none of the safety concerns associated with x-ray computed tomography. In this study, the THz-CT system was evaluated for its ability to detect and characterize 1) an embedded void in Space Shuttle external fuel tank thermal protection system (TPS) foam material and 2) impact damage in a TPS configuration under consideration for use in NASA's multi-purpose Orion crew module (CM). Micro-focus x-ray CT is utilized to characterize the flaws and provide a baseline against which to compare the THz CT results.

  6. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

    There has been significant improvement in the performance of VLSI devices in terms of size, power consumption, and speed in recent years, and this trend may continue for the near future. However, it is well known that major obstacles, namely the physical limits of feature-size reduction and the ever-increasing cost of foundries, would prevent the long-term continuation of this trend. This has motivated the exploration of fundamentally new technologies that do not depend on the conventional feature-size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic sizes. Quantum computing, quantum-dot-based computing, DNA-based computing, and biologically inspired computing are examples of such new technologies. In particular, quantum-dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduced feature size (and hence increased integration level), reduced power consumption, and increased switching speed. Quantum-dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10(exp 11) - 10(exp 12) per square cm), low power consumption (no transfer of current), and potential for higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single-layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA
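At the logic level, the basic QCA building block is the three-input majority gate; fixing one input to 0 or 1 reduces it to AND or OR, which (with the QCA inverter) gives a complete logic family. A quick truth-table sketch of that reduction, pure Boolean logic with none of the device physics:

```python
# Majority-gate logic, the QCA primitive (Boolean sketch only).
def majority(a, b, c):
    return int(a + b + c >= 2)

def qca_and(a, b):   # majority with one input fixed at 0
    return majority(a, b, 0)

def qca_or(a, b):    # majority with one input fixed at 1
    return majority(a, b, 1)

for a in (0, 1):
    for b in (0, 1):
        assert qca_and(a, b) == (a & b)
        assert qca_or(a, b) == (a | b)
print("majority-gate AND/OR verified")
```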

  7. End-effector: Joint conjugates for robotic assembly of large truss structures in space: Extended concepts

    NASA Technical Reports Server (NTRS)

    Brewer, W. V.; Rasis, E. P.; Shih, H. R.

    1993-01-01

    Results from NASA/HBCU Grant No. NAG-1-1125 are summarized. Designs developed for model fabrication, exploratory concepts, the interface of the computer with the robot and end-effector, and capability enhancements are discussed.

  8. Parallel computation of fluid-structural interactions using high resolution upwind schemes

    NASA Astrophysics Data System (ADS)

    Hu, Zongjun

    An efficient and accurate solver is developed to simulate the non-linear fluid-structural interactions in turbomachinery flutter flows. A new low diffusion E-CUSP scheme, the Zha CUSP scheme, is developed to improve the efficiency and accuracy of the inviscid flux computation. The 3D unsteady Navier-Stokes equations with the Baldwin-Lomax turbulence model are solved using the finite volume method with the dual-time stepping scheme. The linearized equations are solved with Gauss-Seidel line iterations. The parallel computation is implemented using the MPI protocol. The solver is validated with 2D cases for its turbulence modeling, parallel computation and unsteady calculation. The Zha CUSP scheme is validated with 2D cases, including a supersonic flat plate boundary layer, a transonic converging-diverging nozzle and a transonic inlet diffuser. The Zha CUSP2 scheme is tested with 3D cases, including a circular-to-rectangular nozzle, a subsonic compressor cascade and a transonic channel. The Zha CUSP schemes are shown to be accurate, robust and efficient in these tests. The steady and unsteady separation flows in a 3D stationary cascade under high incidence and three inlet Mach numbers are calculated to study the steady-state separation flow patterns and their unsteady oscillation characteristics. Leading edge vortex shedding is the mechanism behind the unsteady characteristics of the high incidence separated flows. The separation flow characteristics are affected by the inlet Mach number. The blade aeroelasticity of a linear cascade with forced oscillating blades is studied using parallel computation. A simplified two-passage cascade with periodic boundary conditions is first calculated under a medium frequency and a low incidence. The full-scale cascade with 9 blades and two end walls is then studied more extensively under three oscillation frequencies and two incidence angles. The end wall influence and the blade stability are studied and compared under different

  9. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize resources for the embedding of computational task graphs in the system architecture and the reconfiguration of these tasks after a failure has occurred. The computational structures considered were the path and the complete binary tree, and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
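One concrete example of the kind of embedding discussed above: a path (a linear chain of tasks) maps into a hypercube with dilation 1 via the binary-reflected Gray code, because consecutive codewords differ in exactly one bit and are therefore hypercube neighbors. This is a standard textbook construction, sketched here for illustration; it is not claimed to be the paper's specific algorithm.

```python
# Dilation-1 embedding of a path into a hypercube via the
# binary-reflected Gray code (standard construction, illustrative).
def gray(i):
    return i ^ (i >> 1)

def embed_path_in_hypercube(n_dims):
    """Map path nodes 0..2^n - 1 to hypercube vertex labels."""
    return [gray(i) for i in range(2 ** n_dims)]

nodes = embed_path_in_hypercube(3)
# Dilation 1: consecutive images differ in exactly one bit.
assert all(bin(x ^ y).count("1") == 1 for x, y in zip(nodes, nodes[1:]))
print(nodes)  # -> [0, 1, 3, 2, 6, 7, 5, 4]
```

After a node failure, re-embedding amounts to finding a new Gray-code-like labeling that avoids the faulty vertex, which is where efficient reconfiguration algorithms come in.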

  10. Interpretive computer simulator for the NASA Standard Spacecraft Computer-2 (NSSC-2)

    NASA Technical Reports Server (NTRS)

    Smith, R. S.; Noland, M. S.

    1979-01-01

    An Interpretive Computer Simulator (ICS) for the NASA Standard Spacecraft Computer-II (NSSC-II) was developed as a code verification and testing tool for the Annular Suspension and Pointing System (ASPS) project. The simulator is written in the higher-level language PASCAL and implemented on the CDC CYBER series computer system. It is supported by a meta-assembler, a linkage loader for the NSSC-II, and a utility library to meet the application requirements. The architectural design of the NSSC-II is that of an IBM System/360 (S/360), and it supports all but four instructions of the S/360 standard instruction set. The structural design of the ICS is described with emphasis on the design differences between it and the NSSC-II hardware. The program flow is diagrammed, with the function of each procedure defined; the instruction implementation is discussed in broad terms; and the instruction timings used in the ICS are listed. An example of the steps required to process an assembly-level language program on the ICS is included. The example illustrates the control cards necessary to assemble, load, and execute assembly language code; the sample program to be executed; the executable load module produced by the loader; and the resulting output produced by the ICS.

  11. Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing

    NASA Astrophysics Data System (ADS)

    Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.

    2007-05-01

    uniquely powerful computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and to help solve exploration problems in a highly cost-effective way. Uniquely, the Kaleidoscope Project is simultaneously integrating software (algorithms) and hardware (Cell BE), steps that are traditionally taken sequentially. This unique integration of software and hardware will accelerate seismic imaging by several orders of magnitude compared to conventional solutions running on standard Linux Clusters.

  12. Assessment of computational issues associated with analysis of high-lift systems

    NASA Technical Reports Server (NTRS)

    Balasubramanian, R.; Jones, Kenneth M.; Waggoner, Edgar G.

    1992-01-01

    Thin-layer Navier-Stokes calculations for wing-fuselage configurations from subsonic to hypersonic flow regimes are now possible. However, efficient, accurate solutions using these codes for two- and three-dimensional high-lift systems have yet to be realized. A brief overview of salient experimental and computational research is presented. An assessment of the state of the art relative to high-lift system analysis is provided, along with identification of issues related to grid generation and flow physics which are crucial for computational success in this area. Research in support of the high-lift elements of NASA's High Speed Research and Advanced Subsonic Transport Programs which addresses some of the computational issues is presented. Finally, fruitful areas of concentrated research are identified to accelerate overall progress for high-lift system analysis and design.

  13. The NASA Carbon Monitoring System

    NASA Astrophysics Data System (ADS)

    Hurtt, G. C.

    2015-12-01

    Greenhouse gas emission inventories, forest carbon sequestration programs (e.g., Reducing Emissions from Deforestation and Forest Degradation (REDD and REDD+)), cap-and-trade systems, self-reporting programs, and their associated monitoring, reporting and verification (MRV) frameworks depend upon data that are accurate, systematic, practical, and transparent. A sustained, observationally-driven carbon monitoring system using remote sensing data has the potential to significantly improve the relevant carbon cycle information base for the U.S. and world. Initiated in 2010, NASA's Carbon Monitoring System (CMS) project is prototyping and conducting pilot studies to evaluate technological approaches and methodologies to meet carbon monitoring and reporting requirements for multiple users and over multiple scales of interest. NASA's approach emphasizes exploitation of the satellite remote sensing resources, computational capabilities, scientific knowledge, airborne science capabilities, and end-to-end system expertise that are major strengths of the NASA Earth Science program. Through user engagement activities, the NASA CMS project is taking specific actions to be responsive to the needs of stakeholders working to improve carbon MRV frameworks. The first phase of NASA CMS projects focused on developing products for U.S. biomass/carbon stocks and global carbon fluxes, and on scoping studies to identify stakeholders and explore other potential carbon products. The second phase built upon these initial efforts, with a large expansion in prototyping activities across a diversity of systems, scales, and regions, including research focused on prototype MRV systems and utilization of COTS technologies. Priorities for the future include: 1) utilizing future satellite sensors, 2) prototyping with commercial off-the-shelf technology, 3) expanding the range of prototyping activities, 4) rigorous evaluation, uncertainty quantification, and error characterization, 5) stakeholder

  14. End-of-Test Performance and Wear Characterization of NASA's Evolutionary Xenon Thruster (NEXT) Long-Duration Test

    NASA Technical Reports Server (NTRS)

    Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Patterson, Michael J.

    2014-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) program is developing the next-generation solar electric ion propulsion system with significant enhancements beyond the state-of-the-art NASA Solar Electric Propulsion Technology Application Readiness (NSTAR) ion propulsion system to provide future NASA science missions with enhanced capabilities. A Long-Duration Test (LDT) was initiated in June 2005 to validate the thruster service life modeling and to quantify the thruster propellant throughput capability. Testing was recently completed in February 2014, with the thruster accumulating 51,184 hours of operation, processing 918 kg of xenon propellant, and delivering 35.5 MN-s of total impulse. As part of the test termination procedure, a comprehensive performance characterization was performed across the entire NEXT throttle table. This was performed prior to planned repairs of numerous diagnostics that had become inoperable over the course of the test. After completion of these diagnostic repairs in November 2013, a comprehensive end-of-test performance and wear characterization was performed on the test article prior to exposure to atmosphere. These data have confirmed steady thruster performance with minimal degradation as well as mitigation of numerous life-limiting mechanisms encountered in the NSTAR design. Component erosion rates compare favorably to pretest predictions based on semi-empirical models used for the thruster service life assessment. Additional data relating to ion beam density profiles, facility backsputter rates, facility backpressure effects on thruster telemetry, and modulation of the neutralizer keeper current are presented as part of the end-of-test characterization. Presently the test article for the NEXT LDT has been exposed to atmosphere and placed within a clean room environment, with post-test disassembly and inspection underway.

  15. CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.

    2001-01-01

    For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.
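The tabulation CAP performs rests on evaluating the fitted polynomials; the classic 7-coefficient NASA form gives Cp/R = a1 + a2 T + a3 T^2 + a4 T^3 + a5 T^4, with a6 and a7 as the enthalpy and entropy integration constants. A sketch of that evaluation follows; the coefficient values are placeholders, not real species data, and the current NASA Glenn files also use a newer 9-coefficient record format not shown here.

```python
# Evaluating thermodynamic functions from NASA least-squares coefficients
# (classic 7-coefficient form; placeholder coefficients, not species data).
import math

def cp_over_R(a, T):
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

def h_over_RT(a, T):
    return (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
            + a[4]*T**4/5 + a[5]/T)

def s_over_R(a, T):
    return (a[0]*math.log(T) + a[1]*T + a[2]*T**2/2
            + a[3]*T**3/3 + a[4]*T**4/4 + a[6])

a = [3.5, 0.0, 0.0, 0.0, 0.0, -1000.0, 4.0]   # placeholder coefficients
print(cp_over_R(a, 1000.0))                    # -> 3.5 for this toy set
```

A tabulation program then simply loops such evaluations over a user-chosen temperature grid and converts to the requested energy units.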

  16. HIWRAP Radar Development for High-Altitude Operation on the NASA Global Hawk and ER-2

    NASA Technical Reports Server (NTRS)

    Li, Lihua; Heymsfield, Gerlad; Careswell, James; Schaubert, Dan; Creticos, Justin

    2011-01-01

    The NASA High-Altitude Imaging Wind and Rain Airborne Profiler (HIWRAP) is a solid-state transmitter-based, dual-frequency (Ka- and Ku-band), dual-beam (30 degree and 40 degree incidence angle), conical-scan Doppler radar system designed for operation on NASA high-altitude (20 km) aircraft, such as the Global Hawk Unmanned Aerial System (UAS). Supported by the NASA Instrument Incubator Program (IIP), HIWRAP was developed to provide high spatial and temporal resolution 3D wind and reflectivity data for research on tropical cyclones and severe storms. With simultaneous measurements at both Ku- and Ka-band and two incidence angles, HIWRAP is capable of imaging Doppler winds and volume backscattering from clouds and precipitation associated with tropical storms. In addition, HIWRAP is able to obtain ocean surface backscatter measurements for surface wind retrieval using an approach similar to QuikScat. There are three key technology advances in HIWRAP. First, a compact dual-frequency, dual-beam conical-scan antenna system was designed to fit the tight size and weight constraints of the aircraft platform. Second, the use of solid-state transmitters along with a novel transmit waveform and pulse compression scheme has resulted in a system with improved performance relative to size, weight, and power compared to the typical tube-based Doppler radars currently in use for cloud and precipitation measurements. Tube-based radars require a high-voltage power supply and pressurization of the transmitter and radar front end, which complicates system design and implementation. Solid-state technology also significantly improves system reliability. Finally, the HIWRAP technology advances include the development of a high-speed digital receiver and processor to handle the complex receive pulse sequences and high data rates resulting from multiple receiver channels and conical scanning. This paper describes HIWRAP technology development for dual-frequency operation at

  17. CHEETAH: circuit-switched high-speed end-to-end transport architecture

    NASA Astrophysics Data System (ADS)

    Veeraraghavan, Malathi; Zheng, Xuan; Lee, Hyuk; Gardner, M.; Feng, Wuchun

    2003-10-01

    Leveraging the dominance of Ethernet in LANs and SONET/SDH in MANs and WANs, we propose a service called CHEETAH (Circuit-switched High-speed End-to-End Transport ArcHitecture). The service concept is to provide end hosts with high-speed, end-to-end circuit connectivity on a call-by-call shared basis, where a "circuit" consists of Ethernet segments at the ends that are mapped into Ethernet-over-SONET long-distance circuits. This paper focuses on the file-transfer application for such circuits. For this application, the CHEETAH service is proposed as an add-on to the primary Internet access service already in place for enterprise hosts. This allows an end host that is sending a file to first attempt setting up an end-to-end Ethernet/EoS circuit, and if rejected, fall back to the TCP/IP path. If the circuit setup is successful, the end host will enjoy a much shorter file-transfer delay than on the TCP/IP path. To determine the conditions under which an end host with access to the CHEETAH service should attempt circuit setup, we analyze mean file-transfer delays as a function of call blocking probability in the circuit-switched network, probability of packet loss in the IP network, round-trip times, link rates, and so on.
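
    The delay comparison described above can be illustrated with a toy model. All rates, probabilities, and the simplified TCP throughput term (a Mathis-style loss bound) below are invented assumptions for illustration, not values or models from the paper.

```python
# Toy model of the CHEETAH file-transfer decision: attempt a circuit,
# fall back to the TCP/IP path if setup is blocked. All parameters are
# illustrative assumptions.

def circuit_delay(file_bits, circuit_rate_bps, setup_s):
    """Transfer delay over a dedicated Ethernet/EoS circuit."""
    return setup_s + file_bits / circuit_rate_bps

def tcp_delay(file_bits, bottleneck_bps, rtt_s, loss_prob):
    """Rough TCP delay using a Mathis-style throughput bound:
    rate ~ 1.22 * MSS / (RTT * sqrt(p)), capped by the bottleneck link."""
    mss_bits = 1460 * 8
    eff_rate = min(bottleneck_bps, 1.22 * mss_bits / (rtt_s * loss_prob ** 0.5))
    return file_bits / eff_rate

def mean_transfer_delay(file_bits, p_block, circuit_rate_bps, setup_s,
                        bottleneck_bps, rtt_s, loss_prob):
    """Expected delay: circuit path with probability 1 - p_block, else TCP."""
    return ((1 - p_block) * circuit_delay(file_bits, circuit_rate_bps, setup_s)
            + p_block * tcp_delay(file_bits, bottleneck_bps, rtt_s, loss_prob))
```

    Under these assumptions, a 1 Gb/s circuit with sub-second setup beats a lossy TCP path for large files unless the blocking probability is high, which is the trade-off the analysis quantifies.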

  18. Computational fluid dynamics at NASA Ames and the numerical aerodynamic simulation program

    NASA Technical Reports Server (NTRS)

    Peterson, V. L.

    1985-01-01

    Computers are playing an increasingly important role in the field of aerodynamics, such that they now serve as a major complement to wind tunnels in aerospace research and development. Factors pacing advances in computational aerodynamics are identified, including the amount of computational power required to take the next major step in the discipline. The four main areas of computational aerodynamics research at NASA Ames Research Center directed toward extending the state of the art are identified and discussed. Example results obtained from approximate forms of the governing equations are presented and discussed, both in the context of the levels of computer power required and the degree to which they either further the frontiers of research or apply to programs of practical importance. Finally, the Numerical Aerodynamic Simulation Program--with its 1988 target of achieving a sustained computational rate of 1 billion floating-point operations per second--is discussed in terms of its goals, status, and projected effect on the future of computational aerodynamics.

  19. High-Power Hall Propulsion Development at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Kamhawi, Hani; Manzella, David H.; Smith, Timothy D.; Schmidt, George R.

    2014-01-01

    The NASA Office of the Chief Technologist Game Changing Division is sponsoring the development and testing of enabling technologies to achieve efficient and reliable human space exploration. High-power solar electric propulsion has been proposed by NASA's Human Exploration Framework Team as an option to achieve these ambitious missions to near Earth objects. NASA Glenn Research Center (NASA Glenn) is leading the development of mission concepts for a solar electric propulsion Technical Demonstration Mission. The mission concepts are highlighted in this paper but are detailed in a companion paper. There are also multiple projects that are developing technologies to support a demonstration mission and are also extensible to NASA's goals of human space exploration. Specifically, the In-Space Propulsion technology development project at NASA Glenn has a number of tasks related to high-power Hall thrusters including performance evaluation of existing Hall thrusters; performing detailed internal discharge chamber, near-field, and far-field plasma measurements; performing detailed physics-based modeling with the NASA Jet Propulsion Laboratory's Hall2De code; performing thermal and structural modeling; and developing high-power efficient discharge modules for power processing. This paper summarizes the various technology development tasks and progress made to date.

  20. Introduction to Computational Methods for Stability and Control (COMSAC)

    NASA Technical Reports Server (NTRS)

    Hall, Robert M.; Fremaux, C. Michael; Chambers, Joseph R.

    2004-01-01

    This Symposium is intended to bring together the often distinct cultures of the Stability and Control (S&C) community and the Computational Fluid Dynamics (CFD) community. The COMSAC program is itself a new effort by NASA Langley to accelerate the application of high-end CFD methodologies to the demanding job of predicting stability and control characteristics of aircraft. This talk is intended to set the stage by motivating the need for a program like COMSAC; it is not intended to give details of the program itself. The topics include: 1) S&C challenges; 2) Aero prediction methodology; 3) CFD applications; 4) NASA COMSAC planning; 5) Objectives of the symposium; and 6) Closing remarks.

  1. Upgrading NASA/DOSE laser ranging system control computers

    NASA Technical Reports Server (NTRS)

    Ricklefs, Randall L.; Cheek, Jack; Seery, Paul J.; Emenheiser, Kenneth S.; Hanrahan, William P., III; Mcgarry, Jan F.

    1993-01-01

    Laser ranging systems now managed by the NASA Dynamics of the Solid Earth (DOSE) program and operated by the Bendix Field Engineering Corporation, the University of Hawaii, and the University of Texas have produced a wealth of interdisciplinary scientific data over the last three decades. Despite upgrades to most of the ranging station subsystems, the control computers remain a mix of 1970's vintage minicomputers. These encompass a wide range of vendors, operating systems, and languages, making hardware and software support increasingly difficult. Current technology allows replacement of the controller computers at relatively low cost while maintaining excellent processing power and a friendly operating environment. The new controller systems are being designed around IBM-PC-compatible 80486-based microcomputers, a real-time Unix operating system (LynxOS), and X-windows/Motif user interfaces; GPIB and serial hardware interfaces have been chosen. This design minimizes short- and long-term costs by relying on proven standards for both hardware and software components. Currently, the project is in the design and prototyping stage, with the first systems targeted for production in mid-1993.

  2. Evaluating Cloud Computing in the Proposed NASA DESDynI Ground Data System

    NASA Technical Reports Server (NTRS)

    Tran, John J.; Cinquini, Luca; Mattmann, Chris A.; Zimdars, Paul A.; Cuddy, David T.; Leung, Kon S.; Kwoun, Oh-Ig; Crichton, Dan; Freeborn, Dana

    2011-01-01

    The proposed NASA Deformation, Ecosystem Structure and Dynamics of Ice (DESDynI) mission would be a first-of-breed endeavor that would fundamentally change the paradigm by which Earth Science data systems at NASA are built. DESDynI is evaluating a distributed architecture where expert science nodes around the country all engage in some form of mission processing and data archiving. This is compared to the traditional NASA Earth Science missions where the science processing is typically centralized. What's more, DESDynI is poised to profoundly increase the amount of data collection and processing well into the 5 terabyte/day and tens of thousands of job range, both of which comprise a tremendous challenge to DESDynI's proposed distributed data system architecture. In this paper, we report on a set of architectural trade studies and benchmarks meant to inform the DESDynI mission and the broader community of the impacts of these unprecedented requirements. In particular, we evaluate the benefits of cloud computing and its integration with our existing NASA ground data system software called Apache Object Oriented Data Technology (OODT). The preliminary conclusions of our study suggest that the use of the cloud and OODT together synergistically form an effective, efficient and extensible combination that could meet the challenges of NASA science missions requiring DESDynI-like data collection and processing volumes at reduced costs.

  3. Data, Meet Compute: NASA's Cumulus Ingest Architecture

    NASA Technical Reports Server (NTRS)

    Quinn, Patrick

    2018-01-01

    NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30 PB of critical Earth Science data and, with upcoming missions, is expected to balloon to between 200 and 300 PB over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access, enabling complex visualizations, long time-series analysis, and cross-dataset research without needing to copy and manage massive amounts of data locally. NASA has looked to the cloud to address these needs, building its Cumulus system to manage the ingest of diverse data in a wide variety of formats into the cloud. In this talk, we look at what Cumulus is from a high level and then take a deep dive into how it manages the complexity and versioning associated with multiple AWS Lambda and ECS microservices communicating through AWS Step Functions across several disparate installations.

  4. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both conventional supercomputers, based on a small number of powerful vector processors, and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  5. Study of USGS/NASA land use classification system. [computer analysis from LANDSAT data

    NASA Technical Reports Server (NTRS)

    Spann, G. W.

    1975-01-01

    The results of a computer mapping project using LANDSAT data and the USGS/NASA land use classification system are summarized. During the computer mapping portion of the project, accuracies of 67 percent to 79 percent were achieved using Level II of the classification system and a 4,000 acre test site centered on Douglasville, Georgia. Analysis of responses to a questionnaire circulated to actual and potential LANDSAT data users reveals several important findings: (1) there is a substantial desire for additional information related to LANDSAT capabilities; (2) a majority of the respondents feel computer mapping from LANDSAT data could aid present or future projects; and (3) the costs of computer mapping are substantially less than those of other methods.
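
    Classification accuracies like the 67 to 79 percent quoted above are, in essence, the diagonal fraction of a pixel-level confusion matrix (rows: reference class, columns: mapped class). A minimal sketch, with an invented matrix for illustration:

```python
# Overall accuracy from a confusion matrix: the fraction of pixels whose
# mapped land-use class matches the reference class. The matrix values
# below are invented, not project data.

def overall_accuracy(confusion):
    """confusion[i][j] = pixel count with true class i mapped to class j."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Example: three Level II classes, 300 reference pixels in all.
confusion = [[70, 20, 10],
             [15, 75, 10],
             [5, 10, 85]]
acc = overall_accuracy(confusion)  # ~0.77, inside the reported range
```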

  6. Integrating Gridded NASA Hydrological Data into CUAHSI HIS

    NASA Technical Reports Server (NTRS)

    Rui, Hualan; Teng, William; Vollmer, Bruce; Mocko, David M.; Beaudoing, Hiroko K.; Whiteaker, Tim; Valentine, David; Maidment, David; Hooper, Richard

    2011-01-01

    The amount of hydrological data available from NASA remote sensing and modeling systems is vast and ever-increasing; but one challenge persists: increasing the usefulness of these data for, and thus their use by, end user communities. The Hydrology Data and Information Services Center (HDISC), part of the Goddard Earth Sciences DISC, has continually worked to better understand the hydrological data needs of different end users, to thus be better able to bridge the gap between NASA data and end user communities. One effective strategy is integrating the data into end user community tools and environments. There is an ongoing collaborative effort between NASA HDISC, the NASA Hydrological Sciences Branch, and CUAHSI to integrate NASA gridded hydrology data into the CUAHSI Hydrologic Information System (HIS).

  7. 1997 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Aerodynamic Performance Workshop on February 25-28, 1997. The workshop was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in areas of Configuration Aerodynamics (transonic and supersonic cruise drag, prediction and minimization), High-Lift, Flight Controls, Supersonic Laminar Flow Control, and Sonic Boom Prediction. The workshop objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single- and multi-point optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT Motion Simulator results were presented, along with executive summaries for all the Aerodynamic Performance technology areas.

  8. A New Look at NASA: Strategic Research In Information Technology

    NASA Technical Reports Server (NTRS)

    Alfano, David; Tu, Eugene (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on research undertaken by NASA to facilitate the development of information technologies. Specific ideas covered here include: 1) Bio/nano technologies: biomolecular and nanoscale systems and tools for assembly and computing; 2) Evolvable hardware: autonomous self-improving, self-repairing hardware and software for survivable space systems in extreme environments; 3) High Confidence Software Technologies: formal methods, high-assurance software design, and program synthesis; 4) Intelligent Controls and Diagnostics: Next generation machine learning, adaptive control, and health management technologies; 5) Revolutionary computing: New computational models to increase capability and robustness to enable future NASA space missions.

  9. Internet end-to-end performance monitoring for the High Energy Nuclear and Particle Physics community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, W.

    2000-02-22

    Modern High Energy Nuclear and Particle Physics (HENP) experiments at laboratories around the world present a significant challenge to wide area networks. Petabytes (10^15 bytes) or exabytes (10^18 bytes) of data will be generated during the lifetime of an experiment. Much of this data will be distributed via the Internet to the experiment's collaborators at universities and institutes throughout the world for analysis. In order to assess the feasibility of the computing goals of these and future experiments, the HENP networking community is actively monitoring performance across a large part of the Internet used by its collaborators. Since 1995, the pingER project has been collecting data on ping packet loss and round trip times. In January 2000, there are 28 monitoring sites in 15 countries gathering data on over 2,000 end-to-end pairs. HENP labs such as SLAC, Fermilab and CERN are using Advanced Network's Surveyor project and monitoring performance from one-way delay of UDP packets. More recently several HENP sites have become involved with NLANR's active measurement program (AMP). In addition SLAC and CERN are part of the RIPE test-traffic project and SLAC is home to a NIMI machine. This large end-to-end performance monitoring infrastructure allows the HENP networking community to chart long term trends and closely examine short term glitches across a wide range of networks and connections. The different methodologies provide opportunities to compare results based on different protocols and statistical samples. Understanding agreement and discrepancies between results provides particular insight into the nature of the network. This paper will highlight the practical side of monitoring by reviewing the special needs of High Energy Nuclear and Particle Physics experiments and provide an overview of the experience of measuring performance across a large number of interconnected networks throughout the world with various methodologies. In particular, results from each
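
    A pingER-style monitor ultimately reduces each monitored end-to-end pair to packet-loss and round-trip-time summaries. The sketch below shows that basic reduction with invented samples; it is not pingER code.

```python
# Reduce a series of ping probes for one end-to-end pair to the summary
# statistics a pingER-style monitor records. A sample of None marks a
# lost packet; the numbers in the example are invented.
from statistics import mean, median

def summarize_pings(rtts_ms):
    """rtts_ms: list of RTT samples in milliseconds; None = lost packet."""
    lost = sum(1 for r in rtts_ms if r is None)
    ok = [r for r in rtts_ms if r is not None]
    return {
        "loss": lost / len(rtts_ms),   # packet-loss fraction
        "min_ms": min(ok),             # best-case path RTT
        "median_ms": median(ok),       # robust central tendency
        "mean_ms": mean(ok),
    }

stats = summarize_pings([10.0, None, 12.0, 11.0])  # one probe lost
```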

  10. High Efficiency End-Pumped Ho:Tm:YLF Disk Amplifier

    NASA Technical Reports Server (NTRS)

    Yu, Jirong; Singh, Upendra N.; Petros, Mulugeta; Axenson, Theresa J.; Barnes, Norman P.

    1999-01-01

    Space based coherent lidar for global wind measurement requires an all solid state laser system with high energy, high efficiency and narrow linewidth that operates in the eye safe region. A Q-switched, diode pumped Ho:Tm:YLF 2 micrometer laser with output energy of as much as 125 mJ at 6 Hz with an optical-to-optical efficiency of 3% has been reported. Single frequency operation of the laser was achieved by injection seeding. The design of this laser is being incorporated into NASA's SPARCLE (SPAce Readiness Coherent Lidar Experiment) wind lidar mission. Laser output energy ranging from 500 mJ to 2 J is required for an operational space coherent lidar. We previously developed a high energy Ho:Tm:YLF master oscillator and side pumped power amplifier system and demonstrated a 600-mJ single frequency pulse at a repetition rate of 10 Hz. Although the output energy is high, the optical-to-optical efficiency is only about 2%. Designing a high energy, highly efficient, conductively cooled 2-micrometer laser remains a challenge. In this paper, the preliminary result of an end-pumped amplifier that has the potential to provide a factor-of-3 improvement in system efficiency is reported.
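
    The efficiency figures above imply simple energy bookkeeping: 600 mJ out at about 2% optical-to-optical efficiency corresponds to roughly 30 J of pump light per pulse, and a factor-of-3 efficiency gain at the same pump energy would yield about 1.8 J, within the stated 500 mJ to 2 J requirement. A sketch of that arithmetic (illustrative only):

```python
# Back-of-the-envelope optical-to-optical efficiency bookkeeping using
# the numbers quoted in the abstract.

def pump_energy_j(output_j, efficiency):
    """Pump energy implied by an output energy and efficiency."""
    return output_j / efficiency

def improved_output_j(pump_j, efficiency, factor):
    """Output at the same pump energy if efficiency improves by `factor`."""
    return pump_j * efficiency * factor

pump = pump_energy_j(0.600, 0.02)             # ~30 J of pump light per pulse
new_out = improved_output_j(pump, 0.02, 3.0)  # ~1.8 J at ~6% efficiency
```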

  11. High-Power Hall Propulsion Development at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Kamhawi, Hani; Manzella, David H.; Smith, Timothy D.; Schmidt, George R.

    2012-01-01

    The NASA Office of the Chief Technologist Game Changing Division is sponsoring the development and testing of enabling technologies to achieve efficient and reliable human space exploration. High-power solar electric propulsion has been proposed by NASA's Human Exploration Framework Team as an option to achieve these ambitious missions to near Earth objects. NASA Glenn Research Center is leading the development of mission concepts for a solar electric propulsion Technical Demonstration Mission. The mission concepts are highlighted in this paper but are detailed in a companion paper. There are also multiple projects that are developing technologies to support a demonstration mission and are also extensible to NASA's goals of human space exploration. Specifically, the In-Space Propulsion technology development project at NASA Glenn has a number of tasks related to high-power Hall thrusters including performance evaluation of existing Hall thrusters; performing detailed internal discharge chamber, near-field, and far-field plasma measurements; performing detailed physics-based modeling with the NASA Jet Propulsion Laboratory's Hall2De code; performing thermal and structural modeling; and developing high-power efficient discharge modules for power processing. This paper summarizes the various technology development tasks and progress made to date.

  12. The NASA Integrated Information Technology Architecture

    NASA Technical Reports Server (NTRS)

    Baldridge, Tim

    1997-01-01

    This document defines an Information Technology Architecture for the National Aeronautics and Space Administration (NASA), where Information Technology (IT) refers to the hardware, software, standards, protocols and processes that enable the creation, manipulation, storage, organization and sharing of information. An architecture provides an itemization and definition of these IT structures, a view of the relationship of the structures to each other and, most importantly, an accessible view of the whole. It is a fundamental assumption of this document that a useful, interoperable and affordable IT environment is key to the execution of the core NASA scientific and project competencies and business practices. This Architecture represents the highest level system design and guideline for NASA IT related activities and has been created on the authority of the NASA Chief Information Officer (CIO) and will be maintained under the auspices of that office. It addresses all aspects of general purpose, research, administrative and scientific computing and networking throughout the NASA Agency and is applicable to all NASA administrative offices, projects, field centers and remote sites. Through the establishment of five Objectives and six Principles this Architecture provides a blueprint for all NASA IT service providers: civil service, contractor and outsourcer. The most significant of the Objectives and Principles are the commitment to customer-driven IT implementations and the commitment to a simpler, cost-efficient, standards-based, modular IT infrastructure. In order to ensure that the Architecture is presented and defined in the context of the mission, project and business goals of NASA, this Architecture consists of four layers in which each subsequent layer builds on the previous layer. 
They are: 1) the Business Architecture: the operational functions of the business, or Enterprise, 2) the Systems Architecture: the specific Enterprise activities within the context

  13. Computational Investigation of the NASA Cascade Cyclonic Separation Device

    NASA Technical Reports Server (NTRS)

    Hoyt, Nathaniel C.; Kamotani, Yasuhiro; Kadambi, Jaikrishnan; McQuillen, John B.; Sankovic, John M.

    2008-01-01

    Devices designed to replace the absent buoyancy separation mechanism within a microgravity environment are of considerable interest to NASA, as the functionality of many spacecraft systems is dependent on the proper sequestration of interpenetrating gas and liquid phases. Accordingly, a full multifluid Euler-Euler computational fluid dynamics investigation has been undertaken to evaluate the performance characteristics of one such device, the Cascade Cyclonic Separator, across a full range of inlet volumetric quality with combined volumetric injection rates varying from 1 L/min to 20 L/min. These simulations have delimited the general modes of operation of this class of devices and have proven able to describe the complicated vortex structure and induced pressure gradients that arise. The computational work has furthermore been utilized to analyze design modifications that enhance the overall performance of these devices. The promising results indicate that proper CFD modeling may be successfully used as a tool for microgravity separator design.

  14. Automated Test for NASA CFS

    NASA Technical Reports Server (NTRS)

    McComas, David C.; Strege, Susanne L.; Carpenter, Paul B.; Hartman, Randy

    2015-01-01

    The core Flight System (cFS) is a flight software (FSW) product line developed by the Flight Software Systems Branch (FSSB) at NASA's Goddard Space Flight Center (GSFC). The cFS uses compile-time configuration parameters to implement variable requirements to enable portability across embedded computing platforms and to implement different end-user functional needs. The verification and validation of these requirements is proving to be a significant challenge. This paper describes the challenges facing the cFS and the results of a pilot effort to apply EXB Solution's testing approach to the cFS applications.

  15. CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients. Revised

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.

    2002-01-01

    For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.
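
    As a sketch of the kind of evaluation CAP performs, the classic NASA 7-term (Lewis) polynomial form for Cp/R, H/RT, and S/R can be tabulated as below. The coefficients shown are placeholders for illustration, not entries from the NASA Glenn thermodynamic data file, which defines its own coefficient sets and format.

```python
# Tabulate dimensionless thermodynamic functions from NASA 7-term
# polynomial coefficients a1..a7 (classic Lewis form). Placeholder
# coefficients only; not from the NASA Glenn data file.
import math

def cp_over_R(a, T):
    """Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4"""
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

def h_over_RT(a, T):
    """H/(R*T) = a1 + a2*T/2 + a3*T^2/3 + a4*T^3/4 + a5*T^4/5 + a6/T"""
    return (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
            + a[4]*T**4/5 + a[5]/T)

def s_over_R(a, T):
    """S/R = a1*ln(T) + a2*T + a3*T^2/2 + a4*T^3/3 + a5*T^4/4 + a7"""
    return (a[0]*math.log(T) + a[1]*T + a[2]*T**2/2
            + a[3]*T**3/3 + a[4]*T**4/4 + a[6])

def tabulate(a, temps):
    """Emit (T, Cp/R, H/RT, S/R) rows, CAP-table style."""
    return [(T, cp_over_R(a, T), h_over_RT(a, T), s_over_R(a, T))
            for T in temps]
```

    CAP additionally handles unit conversion and output formatting; this sketch covers only the polynomial evaluation step.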

  16. NASA Open Rotor Noise Research

    NASA Technical Reports Server (NTRS)

    Envia, Ed

    2010-01-01

    Owing to their inherent fuel burn efficiency advantage compared with the current generation high bypass ratio turbofan engines, there is resurgent interest in developing open rotor propulsion systems for powering the next generation commercial aircraft. However, to make open rotor systems truly competitive, they must be made to be acoustically acceptable too. To address this challenge, NASA in collaboration with industry is exploring the design space for low-noise open rotor propulsion systems. The focus is on the system level assessment of the open rotors compared with other candidate concepts like the ultra high bypass ratio cycle engines. To that end, there is an extensive research effort at NASA focused on component testing and diagnostics of the open rotor acoustic performance as well as assessment and improvement of open rotor noise prediction tools. In this presentation, an overview of the current NASA research on open rotor noise is provided. Two NASA projects, the Environmentally Responsible Aviation Project and the Subsonic Fixed Wing Project, have been funding this research effort.

  17. Refining, revising, augmenting, compiling and developing computer assisted instruction K-12 aerospace materials for implementation in NASA spacelink electronic information system

    NASA Technical Reports Server (NTRS)

    Blake, Jean A.

    1988-01-01

    The NASA Spacelink is an electronic information service operated by the Marshall Space Flight Center. The Spacelink contains extensive NASA news and educational resources that can be accessed by a computer and modem. Updates and information are provided on: current NASA news; aeronautics; space exploration: before the Shuttle; space exploration: the Shuttle and beyond; NASA installations; NASA educational services; materials for classroom use; and space program spinoffs.

  18. Verification methodology for fault-tolerant, fail-safe computers applied to maglev control computer systems. Final report, July 1991-May 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lala, J.H.; Nagle, G.A.; Harper, R.E.

    1993-05-01

    The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev control computer system has been designed using a design-for-validation methodology developed earlier under NASA and SDIO sponsorship for real-time aerospace applications. The present study starts by defining the maglev mission scenario and ends with the definition of a maglev control computer architecture. Key intermediate steps included definitions of functional and dependability requirements, synthesis of two candidate architectures, development of qualitative and quantitative evaluation criteria, and analytical modeling of the dependability characteristics of the two architectures. Finally, the applicability of the design-for-validation methodology was also illustrated by applying it to the German Transrapid TR07 maglev control system.

  19. X-ray computed tomography comparison of individual and parallel assembled commercial lithium iron phosphate batteries at end of life after high rate cycling

    NASA Astrophysics Data System (ADS)

    Carter, Rachel; Huhman, Brett; Love, Corey T.; Zenyuk, Iryna V.

    2018-03-01

    X-ray computed tomography (X-ray CT) across multiple length scales is utilized for the first time to investigate the physical abuse of high C-rate pulsed discharge on cells wired individually and in parallel. Manufactured lithium iron phosphate cells boasting high rate capability were pulse power tested in both wiring conditions with high discharge currents of 10C for a high number of cycles (up to 1200) until end of life (<80% of initial discharge capacity retained). The parallel assembly reached end of life more rapidly for reasons unknown prior to the CT investigations. The investigation revealed evidence of overdischarge in the most degraded cell from the parallel assembly, compared to more traditional failure in the individual cell. The parallel-wired cell exhibited dissolution of copper from the anode current collector and subsequent deposition throughout the separator near the cathode of the cell. This overdischarge-induced copper deposition, notably impossible to confirm with other state of health (SOH) monitoring methods, is diagnosed using CT by rendering the interior current collector without harm or alteration to the active materials. Correlation of CT observations to the electrochemical pulse data from the parallel-wired cells reveals the risk of parallel wiring during high C-rate pulse discharge.
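
    The end-of-life criterion used above (less than 80% of initial discharge capacity retained) reduces to a simple per-cycle retention check. A minimal sketch, with synthetic capacity data for illustration:

```python
# End-of-life detection from a per-cycle discharge-capacity record.
# Capacities below are synthetic, not measurements from the study.

def retention(capacity_ah, initial_ah):
    """Fraction of initial discharge capacity retained."""
    return capacity_ah / initial_ah

def end_of_life_cycle(capacities_ah, threshold=0.80):
    """First cycle index where retention drops below `threshold`,
    or None if the cell never reaches end of life in the record."""
    initial = capacities_ah[0]
    for cycle, cap in enumerate(capacities_ah):
        if retention(cap, initial) < threshold:
            return cycle
    return None

# Synthetic fade: end of life is reached at the 0.79 Ah cycle.
eol = end_of_life_cycle([1.00, 0.95, 0.90, 0.85, 0.79])
```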

  20. Cassini End of Mission

    NASA Image and Video Library

    2017-09-15

    A computer screen in mission control displays mission elapsed time for Cassini minutes after the spacecraft plunged into Saturn's atmosphere, Friday, Sept. 15, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators deliberately plunged the spacecraft into Saturn, with Cassini gathering science until the end. Loss of contact with the Cassini spacecraft occurred at 7:55 a.m. EDT (4:55 a.m. PDT). The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  1. NASA High-Speed 2D Photogrammetric Measurement System

    NASA Technical Reports Server (NTRS)

    Dismond, Harriett R.

    2012-01-01

    The object of this report is to provide users of the NASA high-speed 2D photogrammetric measurement system with procedures required to obtain drop-model trajectory and impact data for full-scale and sub-scale models. This guide focuses on use of the system for vertical drop testing at the NASA Langley Landing and Impact Research (LandIR) Facility.

  2. NASA Langley Research Center's distributed mass storage system

    NASA Technical Reports Server (NTRS)

    Pao, Juliet Z.; Humes, D. Creig

    1993-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existing mass storage system to the DMSS.

  3. High Resolution Aerospace Applications using the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha

    2005-01-01

    This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver and a fully automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.
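
    Scalability claims like "good scaling up to 2016 CPUs" are conventionally quantified with relative speedup and parallel efficiency. The sketch below uses invented wall-clock timings (not the paper's measurements) to show the arithmetic; the function name and the CPU counts other than 2016 are illustrative.

```python
# Illustrative sketch (invented timings, not the paper's data):
# strong-scaling summary via relative speedup and parallel efficiency.

def efficiency(t_base: float, t_n: float, n_base: int, n: int) -> float:
    """Parallel efficiency relative to the smallest measured run:
    (t_base / t_n) is the relative speedup; (n_base / n) normalizes
    by the increase in CPU count, so perfect scaling gives 1.0."""
    return (t_base / t_n) * (n_base / n)

# Hypothetical wall-clock seconds per multigrid cycle at each CPU count.
timings = {64: 120.0, 256: 31.5, 1024: 8.9, 2016: 5.1}
n0, t0 = 64, timings[64]
for n, t in sorted(timings.items()):
    print(f"{n:5d} CPUs: speedup {t0 / t:6.1f}x, "
          f"efficiency {efficiency(t0, t, n0, n):.2f}")
```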

  4. NASA Tech Briefs, November/December 1986, Special Edition

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Topics: Computing: The View from NASA Headquarters; Earth Resources Laboratory Applications Software: Versatile Tool for Data Analysis; The Hypercube: Cost-Effective Supercomputing; Artificial Intelligence: Rendezvous with NASA; NASA's Ada Connection; COSMIC: NASA's Software Treasurehouse; Golden Oldies: Tried and True NASA Software; Computer Technical Briefs; NASA TU Services; Digital Fly-by-Wire.

  5. Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks

    NASA Astrophysics Data System (ADS)

    Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji

    High speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes meet requirements on packet-processing durations, because a long processing duration causes performance degradation. This requires testers (the persons who perform the tests) to know precisely how long a packet is held by various network nodes. Without a tool's help, this task is time-consuming and error prone. We therefore propose a multi-point packet header analysis tool that extracts and records packet headers with synchronized timestamps at multiple observation points. The recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm that avoids packet drops.
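
    Once headers carry synchronized timestamps, the holding-duration calculation the abstract describes reduces to matching packet identifiers across observation points and differencing timestamps. The sketch below assumes a simplified data layout of (packet_id, timestamp) pairs; the names and numbers are illustrative, not the tool's actual format.

```python
# Sketch of the holding-duration calculation (assumed data layout):
# each observation point yields (packet_id, timestamp) pairs, with
# timestamps already synchronized across capture hosts.

def holding_durations(upstream, downstream):
    """For packets seen at both points, return packet_id -> seconds
    spent between the upstream and downstream observation points.
    Packets missing downstream (e.g. dropped) are simply omitted."""
    down = dict(downstream)
    return {pid: down[pid] - t_up
            for pid, t_up in upstream
            if pid in down}

# Illustrative captures on either side of a network node under test.
before_node = [("p1", 0.000), ("p2", 0.010), ("p3", 0.020)]
after_node  = [("p1", 0.004), ("p3", 0.031)]  # p2 never re-appeared
print(holding_durations(before_node, after_node))
```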

  6. High performance real-time flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computations and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control (CAMAC) technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computations to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  7. Crash in Australian outback ends NASA ballooning season

    NASA Astrophysics Data System (ADS)

    Harris, Margaret

    2010-06-01

    NASA has temporarily suspended all its scientific balloon launches after the balloon-borne Nuclear Compton Telescope (NCT) crashed during take-off, scattering a trail of debris across the remote launch site and overturning a nearby parked car.

  8. Brain-computer interface controlled gaming: evaluation of usability by severely motor restricted end-users.

    PubMed

    Holz, Elisa Mira; Höhne, Johannes; Staiger-Sälzer, Pit; Tangermann, Michael; Kübler, Andrea

    2013-10-01

    Connect-Four, a new sensorimotor rhythm (SMR) based brain-computer interface (BCI) gaming application, was evaluated by four severely motor restricted end-users; two were in the locked-in state and had unreliable eye-movement. Following the user-centred approach, usability of the BCI prototype was evaluated in terms of effectiveness (accuracy), efficiency (information transfer rate (ITR) and subjective workload) and users' satisfaction. Online performance varied strongly across users and sessions (median accuracy of end-users: A=.65; B=.60; C=.47; D=.77). Our results thus yielded low to medium effectiveness in three end-users and high effectiveness in one end-user. Consequently, ITR was low (0.05-1.44bits/min). Only two end-users were able to play the game in free-mode. Total workload was moderate but varied strongly across sessions. Main sources of workload were mental and temporal demand. Furthermore, frustration contributed to the subjective workload of two end-users. Nevertheless, most end-users accepted the BCI application well and rated satisfaction medium to high. Sources of dissatisfaction were (1) electrode gel and cap, (2) low effectiveness, (3) time-consuming adjustment and (4) not easy-to-use BCI equipment. All four end-users indicated ease of use as being one of the most important aspects of BCI. Effectiveness and efficiency are lower than in applications using the event-related potential as input channel. Nevertheless, the SMR-BCI application was satisfactorily accepted by the end-users and two of four could imagine using the BCI application in their daily life. Thus, despite moderate effectiveness and efficiency, BCIs might be an option when controlling an application for entertainment. Copyright © 2013 Elsevier B.V. All rights reserved.
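
    ITR figures like the 0.05-1.44 bits/min quoted above are conventionally computed with the Wolpaw formula from the number of classes, the selection accuracy, and the time per selection. The sketch below shows that calculation; the example parameters (two classes, one selection every 8 s) are assumptions for illustration, not taken from the study.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Information transfer rate in bits/min via the Wolpaw formula:
    bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)).
    Accuracy at or below chance level yields 0 bits/min."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# E.g. a hypothetical 2-class SMR paradigm, 65% accuracy, 8 s per selection:
print(round(wolpaw_itr(2, 0.65, 8.0), 2))
```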

  9. NASA Enterprise Managed Cloud Computing (EMCC): Delivering an Initial Operating Capability (IOC) for NASA use of Commercial Infrastructure-as-a-Service (IaaS)

    NASA Technical Reports Server (NTRS)

    O'Brien, Raymond

    2017-01-01

    In 2016, Ames supported the NASA CIO in delivering an initial operating capability for Agency use of commercial cloud computing. This presentation provides an overview of the project, the services approach followed, and the major components of the capability that was delivered. The presentation is being given at the request of Amazon Web Services to a contingent representing the Brazilian Federal Government and Defense Organization that is interested in the use of Amazon Web Services (AWS). NASA is currently a customer of AWS and delivered the Initial Operating Capability using AWS as its first commercial cloud provider. The IOC, however, was designed to also support other cloud providers in the future.

  10. NASA Exhibits

    NASA Technical Reports Server (NTRS)

    Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick; hide

    2001-01-01

    A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) DeBakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.

  11. NASA cancels carbon monitoring research program

    NASA Astrophysics Data System (ADS)

    Voosen, Paul

    2018-05-01

    The administration of President Donald Trump has waged a broad attack on climate science conducted by NASA, including proposals to cut the budget of earth science research and kill off the Orbiting Carbon Observatory 3 mission. Congress has fended these attacks off—with one exception. NASA has moved ahead with plans to end the Carbon Monitoring System, a $10-million-a-year research line that has helped stitch together observations of sources and sinks of methane and carbon dioxide into high-resolution models of the planet's flows of carbon, the agency confirmed to Science. The program, begun in 2010, has developed tools to improve estimates of carbon stocks in forests, especially, from Alaska to Indonesia. Ending it, researchers say, will complicate future efforts to monitor and verify national emission cuts stemming from the Paris climate deal.

  12. NASA's OCA Mirroring System: An Application of Multiagent Systems in Mission Control

    NASA Technical Reports Server (NTRS)

    Sierhuis, Maarten; Clancey, William J.; vanHoof, Ron J. J.; Seah, Chin H.; Scott, Michael S.; Nado, Robert A.; Blumenberg, Susan F.; Shafto, Michael G.; Anderson, Brian L.; Bruins, Anthony C.; hide

    2009-01-01

    Orbital Communications Adaptor (OCA) Flight Controllers, in NASA's International Space Station Mission Control Center, use different computer systems to uplink, downlink, mirror, archive, and deliver files to and from the International Space Station (ISS) in real time. The OCA Mirroring System (OCAMS) is a multiagent software system (MAS) that is operational in NASA's Mission Control Center. This paper presents OCAMS and its workings in an operational setting where flight controllers rely on the system 24x7. We also discuss the return on investment, based on a simulation baseline, six months of 24x7 operations at NASA Johnson Space Center in Houston, Texas, and a projection of future capabilities. This paper ends with a discussion of the value of MAS and future planned functionality and capabilities.

  13. Computational Intelligence and Its Impact on Future High-Performance Engineering Systems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    1996-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Intelligence held at the Virginia Consortium of Engineering and Science Universities, Hampton, Virginia, June 27-28, 1995. The presentations addressed activities in the areas of fuzzy logic, neural networks, and evolutionary computation. Workshop attendees represented NASA, the National Science Foundation, the Department of Energy, the National Institute of Standards and Technology (NIST), the Jet Propulsion Laboratory, industry, and academia. The workshop objectives were to assess the state of technology in the computational intelligence area and to provide guidelines for future research.

  14. NEXUS/NASCAD- NASA ENGINEERING EXTENDIBLE UNIFIED SOFTWARE SYSTEM WITH NASA COMPUTER AIDED DESIGN

    NASA Technical Reports Server (NTRS)

    Purves, L. R.

    1994-01-01

    NEXUS, the NASA Engineering Extendible Unified Software system, is a research set of computer programs designed to support the full sequence of activities encountered in NASA engineering projects. This sequence spans preliminary design, design analysis, detailed design, manufacturing, assembly, and testing. NEXUS primarily addresses the process of prototype engineering, the task of getting a single or small number of copies of a product to work. Prototype engineering is a critical element of large scale industrial production. The time and cost needed to introduce a new product are heavily dependent on two factors: 1) how efficiently required product prototypes can be developed, and 2) how efficiently required production facilities, also a prototype engineering development, can be completed. NEXUS extendibility and unification are achieved by organizing the system as an arbitrarily large set of computer programs accessed in a common manner through a standard user interface. The NEXUS interface is a multipurpose interactive graphics interface called NASCAD (NASA Computer Aided Design). NASCAD can be used to build and display two and three-dimensional geometries, to annotate models with dimension lines, text strings, etc., and to store and retrieve design related information such as names, masses, and power requirements of components used in the design. From the user's standpoint, NASCAD allows the construction, viewing, modification, and other processing of data structures that represent the design. Four basic types of data structures are supported by NASCAD: 1) three-dimensional geometric models of the object being designed, 2) alphanumeric arrays to hold data ranging from numeric scalars to multidimensional arrays of numbers or characters, 3) tabular data sets that provide a relational data base capability, and 4) procedure definitions to combine groups of system commands or other user procedures to create more powerful functions. NASCAD has extensive abilities to

  15. NASA Earth Observation Systems and Applications for Health: Moving from Research to Operational End Users

    NASA Astrophysics Data System (ADS)

    Haynes, J.; Estes, S. M.

    2017-12-01

    Health providers and researchers need environmental data to study and understand the geographic, environmental, and meteorological differences in disease. Satellite remote sensing of the environment offers a unique vantage point that can fill gaps in environmental, spatial, and temporal data for tracking disease. This presentation will demonstrate NASA's applied science program efforts to transition from research to operations to benefit society. Satellite earth observations present a unique vantage point on the earth's environment from space, which offers a wealth of health applications for the imaginative investigator. The presentation is directly related to Earth observing systems and global health surveillance, and will present results of remote sensing environmental observations of the earth and health applications that can contribute to health research. As part of its approach and methodology, NASA has used Earth observation systems and applications-for-health models to bridge gaps in environmental, spatial, and temporal data for tracking disease. The presentation will cover the results of both research and practice using satellite earth observations to study weather and its role in health research, and the transition to operational end users.

  16. Applying Trustworthy Computing to End-to-End Electronic Voting

    ERIC Educational Resources Information Center

    Fink, Russell A.

    2010-01-01

    "End-to-End (E2E)" voting systems provide cryptographic proof that the voter's intention is captured, cast, and tallied correctly. While E2E systems guarantee integrity independent of software, most E2E systems rely on software to provide confidentiality, availability, authentication, and access control; thus, end-to-end integrity is not…

  17. Federal Plan for High-End Computing. Report of the High-End Computing Revitalization Task Force (HECRTF)

    DTIC Science & Technology

    2004-07-01

    steadily for the past fifteen years, while memory latency and bandwidth have improved much more slowly. For example, Intel processor clock rates have... processor and memory performance) all greatly restrict the ability to achieve high levels of performance for science, engineering, and national... sub-nuclear distances. Guide experiments to identify the transition from quantum chromodynamics to quark-gluon plasma. Accelerator Physics: Accurate

  18. Commercialization of NASA's High Strength Cast Aluminum Alloy for High Temperature Applications

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan A.

    2003-01-01

    In this paper, the commercialization of a new high strength cast aluminum alloy, invented by NASA-Marshall Space Flight Center, for high temperature applications will be presented. Originally developed to meet U.S. automotive legislation requiring low exhaust emissions, the novel NASA aluminum alloy offers dramatic improvement in tensile and fatigue strengths at elevated temperatures (450 F-750 F), which can lead to reduced part weight and cost as well as improved performance for automotive engine applications. It is an ideal low cost material for cast components such as pistons, cylinder heads, cylinder liners, connecting rods, turbochargers, impellers, actuators, brake calipers and rotors. The NASA alloy also offers greater wear resistance, dimensional stability, and lower thermal expansion compared to conventional aluminum alloys, and the new alloy can be produced economically by sand, permanent mold and investment casting. Since 2001, this technology has been licensed to several companies for automotive and marine internal combustion engine applications.

  19. Opportunities for leveraging OS virtualization in high-end supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Patrick G.; Pedretti, Kevin Thomas Tauke

    2010-11-01

    This paper examines potential motivations for incorporating virtualization support in the system software stacks of high-end capability supercomputers. We advocate that this will increase the flexibility of these platforms significantly and enable new capabilities that are not possible with current fixed software stacks. Our results indicate that compute, virtual memory, and I/O virtualization overheads are low and can be further mitigated by utilizing well-known techniques such as large paging and VMM bypass. Furthermore, since the addition of virtualization support does not affect the performance of applications using the traditional native environment, there is essentially no disadvantage to its addition.
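
    The claim that "compute, virtual memory, and I/O virtualization overheads are low" is typically quantified as relative slowdown versus a native run of the same benchmark. A minimal sketch of that arithmetic follows; the run times are invented for illustration, not the paper's measurements.

```python
# Minimal sketch (invented numbers, not the paper's data): expressing
# virtualization overhead as relative slowdown versus native execution.

def overhead_pct(t_native: float, t_virtualized: float) -> float:
    """Virtualization overhead as a percentage of native runtime."""
    return 100.0 * (t_virtualized - t_native) / t_native

# Hypothetical wall-clock times for one benchmark (seconds): a plain
# guest, and a guest using large pages to mitigate paging overhead.
runs = {"native": 100.0, "guest": 103.0, "guest+large pages": 101.0}
for name, t in runs.items():
    print(f"{name:20s} {overhead_pct(runs['native'], t):5.1f}% overhead")
```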

  20. High Altitude Platform Aircraft at NASA Past, Present and Future

    NASA Technical Reports Server (NTRS)

    DelFrate, John H.

    2006-01-01

    This viewgraph presentation reviews NASA Dryden Flight Research Center's significant accomplishments from the Environment Research and Sensor Technology (ERAST) project, the present High Altitude Platform (HAP) needs and opportunities, NASA's Aeronautical focus shift, HAP Non-aeronautics challenges, and current HAP Capabilities.

  1. NASA strategic plan

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The NASA Strategic Plan is a living document. It provides far-reaching goals and objectives to create stability for NASA's efforts. The Plan presents NASA's top-level strategy: it articulates what NASA does and for whom; it differentiates between ends and means; it states where NASA is going and what NASA intends to do to get there. This Plan is not a budget document, nor does it present priorities for current or future programs. Rather, it establishes a framework for shaping NASA's activities and developing a balanced set of priorities across the Agency. Such priorities will then be reflected in the NASA budget. The document includes vision, mission, and goals; external environment; conceptual framework; strategic enterprises (Mission to Planet Earth, aeronautics, human exploration and development of space, scientific research, space technology, and synergy); strategic functions (transportation to space, space communications, human resources, and physical resources); values and operating principles; implementing strategy; and senior management team concurrence.

  2. NASA's Earth science flight program status

    NASA Astrophysics Data System (ADS)

    Neeck, Steven P.; Volz, Stephen M.

    2010-10-01

    NASA's strategic goal to "advance scientific understanding of the changing Earth system to meet societal needs" continues the agency's legacy of expanding human knowledge of the Earth through space activities, as mandated by the National Aeronautics and Space Act of 1958. Over the past 50 years, NASA has been the world leader in developing space-based Earth observing systems and capabilities that have fundamentally changed our view of our planet and have defined Earth system science. The U.S. National Research Council report "Earth Observations from Space: The First 50 Years of Scientific Achievements" published in 2008 by the National Academy of Sciences articulates those key achievements and the evolution of the space observing capabilities, looking forward to growing potential to address Earth science questions and enable an abundance of practical applications. NASA's Earth science program is an end-to-end one that encompasses the development of observational techniques and the instrument technology needed to implement them. This includes laboratory testing and demonstration from surface, airborne, or space-based platforms; research to increase basic process knowledge; incorporation of results into complex computational models to more fully characterize the present state and future evolution of the Earth system; and development of partnerships with national and international organizations that can use the generated information in environmental forecasting and in policy, business, and management decisions. Currently, NASA's Earth Science Division (ESD) has 14 operating Earth science space missions with 6 in development and 18 under study or in technology risk reduction. Two Tier 2 Decadal Survey climate-focused missions, Active Sensing of CO2 Emissions over Nights, Days and Seasons (ASCENDS) and Surface Water and Ocean Topography (SWOT), have been identified in conjunction with the U.S. Global Change Research Program and initiated for launch in the 2019

  3. Computer-based communication in support of scientific and technical work. [conferences on management information systems used by scientists of NASA programs

    NASA Technical Reports Server (NTRS)

    Vallee, J.; Wilson, T.

    1976-01-01

    Results are reported of the first experiments for a computer conference management information system at the National Aeronautics and Space Administration. Between August 1975 and March 1976, two NASA projects with geographically separated participants (NASA scientists) used the PLANET computer conferencing system for portions of their work. The first project was a technology assessment of future transportation systems. The second project involved experiments with the Communication Technology Satellite. As part of this project, pre- and postlaunch operations were discussed in a computer conference. These conferences also provided the context for an analysis of the cost of computer conferencing. In particular, six cost components were identified: (1) terminal equipment, (2) communication with a network port, (3) network connection, (4) computer utilization, (5) data storage and (6) administrative overhead.

  4. Mass Storage System Upgrades at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen; Macie, Medora; Saletta, Marty

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) provides supercomputing and mass storage services to over 1200 Earth and space scientists. During the past two years, the mass storage system at the NCCS went through a great deal of changes both major and minor. Tape drives, silo control software, and the mass storage software itself were upgraded, and the mass storage platform was upgraded twice. Some of these upgrades were aimed at achieving year-2000 compliance, while others were simply upgrades to newer and better technologies. In this paper we will describe these upgrades.

  5. In-Space Networking on NASA's SCAN Testbed

    NASA Technical Reports Server (NTRS)

    Brooks, David E.; Eddy, Wesley M.; Clark, Gilbert J.; Johnson, Sandra K.

    2016-01-01

    The NASA Space Communications and Navigation (SCaN) Testbed, an external payload onboard the International Space Station, is equipped with three software defined radios and a flight computer for supporting in-space communication research. New technologies being studied using the SCaN Testbed include advanced networking, coding, and modulation protocols designed to support the transition of NASA's mission systems from primarily point-to-point data links and preplanned routes towards the adaptive, autonomous internetworked operations needed to meet future mission objectives. Networking protocols implemented on the SCaN Testbed include the Advanced Orbiting Systems (AOS) link-layer protocol, Consultative Committee for Space Data Systems (CCSDS) Encapsulation Packets, Internet Protocol (IP), Space Link Extension (SLE), CCSDS File Delivery Protocol (CFDP), and Delay-Tolerant Networking (DTN) protocols including the Bundle Protocol (BP) and Licklider Transmission Protocol (LTP). The SCaN Testbed end-to-end system provides three S-band data links and one Ka-band data link to exchange space and ground data through NASA's Tracking and Data Relay Satellite System or a direct-to-ground link to ground stations. The multiple data links and nodes provide several upgradable elements on both the space and ground systems. This paper will provide a general description of the testbed's system design and capabilities, discuss in detail the design and lessons learned in the implementation of the network protocols, and describe future plans for continuing research to meet the communication needs of evolving global space systems.

  6. NASA Brevard Top Scholars

    NASA Image and Video Library

    2017-11-13

    Students from Brevard County public high schools arrive at the NASA Kennedy Space Center Visitor Complex in Florida. Top scholars from the high schools were invited to Kennedy Space Center for a tour of facilities, lunch and a roundtable discussion with engineers and scientists at the center. The 2017-2018 Brevard Top Scholars event was hosted by the center's Education Projects and Youth Engagement office to honor the top three scholars of the graduating student class from each of Brevard County’s public high schools. The students received a personalized certificate at the end of the day.

  7. NASA Brevard Top Scholars

    NASA Image and Video Library

    2017-11-13

    Top scholars from Brevard County public high schools participate in roundtable discussions with NASA engineers and scientists at the Public Engagement Center at Kennedy Space Center Visitor Complex in Florida. Top scholars from the high schools were invited to Kennedy Space Center for a tour of facilities, lunch and a roundtable discussion. The 2017-2018 Brevard Top Scholars event was hosted by the center's Education Projects and Youth Engagement office to honor the top three scholars of the graduating student class from each of Brevard County’s public high schools. The students received a personalized certificate at the end of the day.

  8. Distributed management of scientific projects - An analysis of two computer-conferencing experiments at NASA

    NASA Technical Reports Server (NTRS)

    Vallee, J.; Gibbs, B.

    1976-01-01

    Between August 1975 and March 1976, two NASA projects with geographically separated participants used a computer-conferencing system developed by the Institute for the Future for portions of their work. Monthly usage statistics for the system were collected in order to examine the group and individual participation figures for all conferences. The conference transcripts were analysed to derive observations about the use of the medium. In addition to the results of these analyses, the attitudes of users and the major components of the costs of computer conferencing are discussed.

  9. Geodetic Imaging Lidar: Applications for high-accuracy, large area mapping with NASA's upcoming high-altitude waveform-based airborne laser altimetry Facility

    NASA Astrophysics Data System (ADS)

    Blair, J. B.; Rabine, D.; Hofton, M. A.; Citrin, E.; Luthcke, S. B.; Misakonis, A.; Wake, S.

    2015-12-01

    Full waveform laser altimetry has demonstrated its ability to capture highly accurate surface topography and vertical structure (e.g. vegetation height and structure) even in the most challenging conditions. NASA's high-altitude airborne laser altimeter, LVIS (the Land, Vegetation, and Ice Sensor), has produced high-accuracy surface maps over a wide variety of science targets for the last two decades. Recently NASA has funded the transition of LVIS into a full-time NASA airborne Facility instrument to increase the amount and quality of the data and to decrease end-user costs, thereby expanding the utilization and application of this unique sensor capability. Based heavily on the existing LVIS sensor design, the Facility LVIS instrument includes numerous improvements for reliability, resolution, real-time performance monitoring and science products, decreased operational costs, and improved data turnaround time and consistency. The development of this Facility instrument is proceeding well, and it is scheduled to begin operations testing in mid-2016. A comprehensive description of the LVIS Facility capability will be presented along with several mission scenarios and science application examples. The sensor improvements include increased spatial resolution (footprints as small as 5 m), increased range precision (sub-cm single-shot range precision), expanded dynamic range, improved detector sensitivity, operational autonomy, real-time flight line tracking, and overall increased reliability and sensor calibration stability. The science customer mission planning and data product interface will be discussed. Science applications of the LVIS Facility include: cryosphere, terrestrial ecology and carbon cycle, hydrology, solid earth and natural hazards, and biodiversity.

  10. Overview of NASA Studies on High-Temperature Ceramic Fibers

    NASA Technical Reports Server (NTRS)

    DiCarlo, James A.; Yun, Hee Mann

    2001-01-01

    NASA, DOD, and DOE are currently looking to the NASA UEET Program to develop ceramic matrix composites (CMC) for hot-section components in advanced power and propulsion systems. Success will depend strongly on developing ceramic fibers with a variety of key thermostructural properties, in particular high as-produced tensile strength and retention of a large fraction of this strength for long times under the anticipated CMC service conditions. The current UEET approach centers on selecting the optimum fiber type from commercially available fibers, since the costs of developing advanced fibers are high and the markets for high-temperature CMC have yet to be established.

  11. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, users can add "Tags" to their threads to facilitate later searches. The "knowledge base" consists of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  12. First NASA/Industry High Speed Research Program Nozzle Symposium

    NASA Technical Reports Server (NTRS)

    Long-Davis, Mary Jo

    1999-01-01

    The First High Speed Research (HSR) Nozzle Symposium was hosted by NASA Lewis Research Center on November 17-19, 1992 in Cleveland, Ohio, and was sponsored by the HSR Source Noise Working Group. The purpose of this symposium was to provide a national forum for the government, industry, and university participants in the program to present and discuss important low-noise nozzle research results and technology issues related to the development of appropriate nozzles for a commercially viable, environmentally compatible, U.S. High-Speed Civil Transport. The HSR Phase I research program was initiated in FY90 and is approaching the first major milestone (end of FY92) relative to an initial FAR 36 Stage 3 nozzle noise assessment. Significant research results relative to that milestone were presented. The opening session provided a brief overview of the Program and status of the Phase II plan. The next five sessions were technically oriented and highlighted recent significant analytical and experimental accomplishments. The last session included a panel discussion by the session chairs, summarizing the progress seen to date and discussing issues relative to further advances in technology necessary to achieve the Program goals. Attendance at the Symposium was by invitation only and was limited to industry, academic, and government participants who were actively involved in the High-Speed Research Program. The technology presented in this meeting is considered commercially sensitive.

  13. Who uses NASA Earth Science Data? Connecting with Users through the Earthdata website and Social Media

    NASA Astrophysics Data System (ADS)

    Wong, M. M.; Brennan, J.; Bagwell, R.; Behnke, J.

    2015-12-01

    This poster will introduce and explore the various social media efforts, monthly webinar series, and redesigned website (https://earthdata.nasa.gov) established by the National Aeronautics and Space Administration's (NASA) Earth Observing System Data and Information System (EOSDIS) project. EOSDIS is a key core capability in NASA's Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA's Earth science data from various sources - satellites, aircraft, field measurements, and various other programs. It comprises twelve Distributed Active Archive Centers (DAACs), Science Computing Facilities (SCFs), data discovery and service access clients (Reverb and Earthdata Search), a dataset directory (Global Change Master Directory - GCMD), near real-time data (Land Atmosphere Near real-time Capability for EOS - LANCE), Worldview (an imagery visualization interface), Global Imagery Browse Services, the Earthdata Code Collaborative, and a host of other discipline-specific data discovery, data access, data subsetting, and visualization tools. We have embarked on these efforts to reach out to new audiences and potential new users and to engage our diverse end-user communities worldwide. One of the key objectives is to increase awareness of the breadth of Earth science data information, services, and tools that are publicly available while also highlighting how these data and technologies enable scientific research.

  14. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  15. The NASA teleconferencing system: An evaluation

    NASA Technical Reports Server (NTRS)

    Connors, M. M.; Lindsey, G.; Miller, R. H.

    1976-01-01

    The communication requirements of the Apollo project led to the development of a teleconferencing network which linked together, in an audio-fax mode, the several NASA centers and supporting contractors of the Apollo project. The usefulness of this communication linkage for the Apollo project suggested that the system might be extended to include all NASA centers, enabling them to conduct their in-house business more efficiently than by traveling to other centers. A pilot project was run in which seventeen NASA centers and subcenters, some with multiple facilities, were connected into the NASA teleconferencing network. During that year, costs were charted and, at the end of the year, an evaluation was made to determine how the system had been used and with what results. The year-end evaluation of the use of the NASA teleconferencing system is summarized.

  16. High Definition Sounding System Test and Integration with NASA Atmospheric Science Program Aircraft

    DTIC Science & Technology

    2013-09-30

    of the High Definition Sounding System (HDSS) on NASA high-altitude Airborne Science Program platforms, specifically the NASA P-3 and NASA WB-57. When...demonstrate the system reliability in a Global Hawk's 62,000' altitude regime of thin air and very cold temperatures. APPROACH: Mission Profile: One or more WB-57 test flights will prove airworthiness and verify the High Definition Sounding System (HDSS) is safe and functional at high altitudes, essentially

  17. Development of kinematic equations and determination of workspace of a 6 DOF end-effector with closed-kinematic chain mechanism

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Pooran, Farhad J.

    1989-01-01

    This report presents results from the research grant entitled "Active Control of Robot Manipulators," funded by the Goddard Space Flight Center under Grant NAG5-780, for the period July 1, 1988 to January 1, 1989. An analysis is presented of a 6 degree-of-freedom robot end-effector built to study telerobotic assembly of NASA hardware in space. Since the end-effector is required to perform high-precision motion in a limited workspace, closed-kinematic mechanisms were chosen for its design. A closed-form solution is obtained for the inverse kinematic problem, and an iterative procedure employing the Newton-Raphson method is proposed to solve the forward kinematic problem. A study of the end-effector workspace results in a general procedure for workspace determination based on link constraints. Computer simulation results are presented.
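    The iterative forward-kinematics approach described in this abstract (closed-form inverse kinematics, Newton-Raphson forward kinematics) can be sketched for an illustrative planar two-leg parallel mechanism. This is a minimal sketch, not the report's actual 6-DOF formulation; the geometry, anchor placement, and tolerances below are assumptions chosen only to show the iteration.

```python
import math

def fk_newton(leg1, leg2, base=1.0, x0=(0.5, 0.5), tol=1e-10, max_iter=50):
    """Forward kinematics of a toy planar 2-leg parallel mechanism:
    find the platform point (x, y) whose distances to base anchors
    (0, 0) and (base, 0) equal the commanded leg lengths."""
    x, y = x0
    for _ in range(max_iter):
        d1 = math.hypot(x, y)
        d2 = math.hypot(x - base, y)
        f1, f2 = d1 - leg1, d2 - leg2          # leg-length residuals
        if max(abs(f1), abs(f2)) < tol:
            break
        # analytic 2x2 Jacobian of the residuals w.r.t. (x, y)
        j11, j12 = x / d1, y / d1
        j21, j22 = (x - base) / d2, y / d2
        det = j11 * j22 - j12 * j21
        # Newton step: solve J * delta = -f by Cramer's rule
        dx = (-f1 * j22 + f2 * j12) / det
        dy = (f1 * j21 - f2 * j11) / det
        x, y = x + dx, y + dy
    return x, y

# equal leg lengths of 1.0 put the platform at (0.5, sqrt(3)/2)
x, y = fk_newton(1.0, 1.0)
```

The same structure scales to the 6-DOF case: the residual becomes a six-vector of actuator-length errors and the Jacobian a 6x6 matrix, but each Newton step is still "evaluate residuals, solve the linearized system, update the pose."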

  18. Lessons Learned While Exploring Cloud-Native Architectures for NASA EOSDIS Applications and Systems

    NASA Technical Reports Server (NTRS)

    Pilone, Dan; Mclaughlin, Brett; Plofchan, Peter

    2017-01-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long-term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a multi-petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services, from EOS instrument data collection to science data processing to full access to EOS and other Earth science data. On a daily basis, EOSDIS ingests, processes, archives, and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 6,000 data products across a wide range of science disciplines. EOSDIS has continually evolved to improve the discoverability, accessibility, and usability of high-impact NASA data spanning the multi-petabyte-scale archive of Earth science data products. Reviewed and approved by Chris Lynnes.

  19. A NASA high-power space-based laser research and applications program

    NASA Technical Reports Server (NTRS)

    Deyoung, R. J.; Walberg, G. D.; Conway, E. J.; Jones, L. W.

    1983-01-01

    Applications of high-power lasers are discussed which might fulfill the needs of NASA missions, and the technology characteristics of laser research programs are outlined. The status of the NASA programs on lasers, laser receivers, and laser propulsion is discussed, and recommendations are presented for a proposed expanded NASA program in these areas. Program elements that are critical are discussed in detail.

  20. High-Efficiency High-Resolution Global Model Developments at the NASA Goddard Data Assimilation Office

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; Atlas, Robert (Technical Monitor)

    2002-01-01

    The computational design of the dynamical core uses a hybrid distributed-shared memory programming paradigm that is portable to virtually any of today's high-end parallel supercomputing clusters.

  2. A multitasking, multisinked, multiprocessor data acquisition front end

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, R.; Au, R.; Molen, A.V.

    1989-10-01

    The authors have developed a generalized data acquisition front end system which is based on MC68020 processors running a commercial real-time kernel (pSOS) and implemented primarily in a high-level language (C). This system has been attached to the back end on-line computing system at NSCL via our high-performance Ethernet protocol. Data may be simultaneously sent to any number of back end systems. Fixed-fraction sampling along links to back end computing is also supported. A nonprocedural program generator simplifies the development of experiment-specific code.
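    The fan-out with fixed-fraction sampling that this abstract mentions can be sketched as follows. This is an illustrative model only, not the NSCL implementation: the `SampledLink` class, the credit-based sampling scheme, and the event counts are all assumptions used to show the idea of forwarding a fixed fraction of the stream down each back-end link.

```python
class SampledLink:
    """A link to one back-end consumer that forwards a fixed fraction
    of the event stream (fraction=1.0 forwards every event)."""
    def __init__(self, fraction):
        self.fraction = fraction
        self.received = []
        self._credit = 0.0

    def offer(self, event):
        # accumulate fractional credit; forward when a whole event is due
        self._credit += self.fraction
        if self._credit >= 1.0:
            self._credit -= 1.0
            self.received.append(event)

def broadcast(events, links):
    """Fan each event out to every back-end link; each link applies
    its own sampling fraction independently."""
    for ev in events:
        for link in links:
            link.offer(ev)

full = SampledLink(1.0)   # an analysis sink that takes every event
tape = SampledLink(0.25)  # a monitoring sink sampling one event in four
broadcast(range(100), [full, tape])
len(full.received), len(tape.received)  # → (100, 25)
```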

  3. CFD Modeling Activities at the NASA Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel

    2007-01-01

    A viewgraph presentation on NASA Stennis Space Center's Computational Fluid Dynamics (CFD) Modeling activities is shown. The topics include: 1) Overview of NASA Stennis Space Center; 2) Role of Computational Modeling at NASA-SSC; 3) Computational Modeling Tools and Resources; and 4) CFD Modeling Applications.

  4. In-Space Networking On NASA's SCaN Testbed

    NASA Technical Reports Server (NTRS)

    Brooks, David; Eddy, Wesley M.; Clark, Gilbert J., III; Johnson, Sandra K.

    2016-01-01

    The NASA Space Communications and Navigation (SCaN) Testbed, an external payload onboard the International Space Station, is equipped with three software defined radios (SDRs) and a programmable flight computer. The purpose of the Testbed is to conduct in-space research in the areas of communication, navigation, and networking in support of NASA missions and communication infrastructure. Multiple reprogrammable elements in the end-to-end system, along with several communication paths and a semi-operational environment, provide a unique opportunity to explore networking concepts and protocols envisioned for the future Solar System Internet (SSI). This paper will provide a general description of the system's design and the networking protocols implemented and characterized on the testbed, including Encapsulation, IP over CCSDS, and Delay-Tolerant Networking (DTN). Due to the research nature of the implementation, flexibility and robustness are considered in the design to enable expansion for future adaptive and cognitive techniques. Following a detailed design discussion, lessons learned and suggestions for future missions and communication infrastructure elements will be provided. Plans for the evolving research on SCaN Testbed as it moves towards a more adaptive, autonomous system will be discussed.

  5. LWS/SET End-to-End Data System

    NASA Technical Reports Server (NTRS)

    Giffin, Geoff; Sherman, Barry; Colon, Gilberto (Technical Monitor)

    2002-01-01

    This paper describes the concept for the end-to-end data system that will support NASA's Living With a Star Space Environment Testbed missions. NASA has initiated the Living With a Star (LWS) Program to develop a better scientific understanding to address the aspects of the connected Sun-Earth system that affect life and society. A principal goal of the program is to bridge the gap between science, engineering, and user application communities. The Space Environment Testbed (SET) Project is one element of LWS. The Project will enable future science, operational, and commercial objectives in space and atmospheric environments by improving engineering approaches to the accommodation and/or mitigation of the effects of solar variability on technological systems. The end-to-end data system allows investigators to access the SET control center, command their experiments, and receive data from their experiments back at their home facility, using the Internet. The logical functioning of major components of the end-to-end data system is described, including the GSFC Payload Operations Control Center (POCC), SET payloads, the GSFC SET Simulation Lab, SET Experiment PI Facilities, and Host Systems. Host Spacecraft Operations Control Centers (SOCC) and the host spacecraft are essential links in the end-to-end data system, but are not directly under the control of the SET Project. Formal interfaces will be established between these entities and elements of the SET Project. The paper describes data flow through the system, from PI facilities connecting to the SET operations center via the Internet, communications to SET carriers and experiments via host systems, to telemetry returns to investigators from their flight experiments. It also outlines the techniques that will be used to meet mission requirements while holding development and operational costs to a minimum. Additional information is included in the original extended abstract.

  6. Evolving Storage and Cyber Infrastructure at the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen; Duffy, Daniel; Spear, Carrie; Sinno, Scott; Vaughan, Garrison; Bowen, Michael

    2018-01-01

    This talk will describe recent developments at the NASA Center for Climate Simulation (NCCS), which is funded by NASA's Science Mission Directorate and supports the specialized data storage and computational needs of weather, ocean, and climate researchers, as well as astrophysicists, heliophysicists, and planetary scientists. To meet requirements for higher-resolution, higher-fidelity simulations, the NCCS augments its High Performance Computing (HPC) and storage/retrieval environment. As the petabytes of model and observational data grow, the NCCS is broadening its data services offerings and deploying and expanding virtualization resources for high-performance analytics.

  7. High-Fidelity Computational Aerodynamics of the Elytron 4S UAV

    NASA Technical Reports Server (NTRS)

    Ventura Diaz, Patricia; Yoon, Seokkwan; Theodore, Colin R.

    2018-01-01

    High-fidelity Computational Fluid Dynamics (CFD) simulations have been carried out for the Elytron 4S Unmanned Aerial Vehicle (UAV), also known as the converticopter "proto12". It is the scaled wind tunnel model of the Elytron 4S, an Urban Air Mobility (UAM) concept: a tilt-wing, box-wing rotorcraft capable of Vertical Take-Off and Landing (VTOL). The three-dimensional unsteady Navier-Stokes equations are solved on overset grids employing high-order accurate schemes, dual-time stepping, and a hybrid turbulence model using NASA's CFD code OVERFLOW. The Elytron 4S UAV has been simulated in airplane mode and in helicopter mode.

  8. Preliminary Computational Study for Future Tests in the NASA Ames 9 foot x 7 foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Pearl, Jason M.; Carter, Melissa B.; Elmiligui, Alaa A.; WInski, Courtney S.; Nayani, Sudheer N.

    2016-01-01

    The NASA Advanced Air Vehicles Program, Commercial Supersonics Technology Project seeks to advance tools and techniques to make over-land supersonic flight feasible. In this study, preliminary computational results are presented for future tests in the NASA Ames 9 foot x 7 foot supersonic wind tunnel to be conducted in early 2016. Shock-plume interactions and their effect on pressure signature are examined for six model geometries. Near-field pressure signatures are assessed using the CFD code USM3D to model the proposed test geometries in free air. Additionally, results obtained using the commercial grid generation software Pointwise (registered trademark) are compared to results using VGRID, the NASA Langley Research Center in-house mesh generation program.

  9. NASA Aeronautics: Research and Technology Program Highlights

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This report contains numerous color illustrations to describe the NASA programs in aeronautics. The basic ideas involved are explained in brief paragraphs. The seven chapters deal with Subsonic aircraft, High-speed transport, High-performance military aircraft, Hypersonic/Transatmospheric vehicles, Critical disciplines, National facilities and Organizations & installations. Some individual aircraft discussed are : the SR-71 aircraft, aerospace planes, the high-speed civil transport (HSCT), the X-29 forward-swept wing research aircraft, and the X-31 aircraft. Critical disciplines discussed are numerical aerodynamic simulation, computational fluid dynamics, computational structural dynamics and new experimental testing techniques.

  10. The 1995 NASA High-Speed Research Program Sonic Boom Workshop. Volume 1

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1996-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Sonic Boom Workshop on September 12-13, 1995. The workshop was designed to bring together NASA's scientists and engineers and their counterparts in industry, other Government agencies, and academia working together in the sonic boom element of NASA's High-Speed Research Program. Specific objectives of this workshop were to (1) report the progress and status of research in sonic boom propagation, acceptability, and design; (2) promote and disseminate this technology within the appropriate technical communities; (3) help promote synergy among the scientists working in the Program; and (4) identify technology pacing the development of viable reduced-boom High-Speed Civil Transport concepts. The Workshop included these sessions: Session 1 - Sonic Boom Propagation (Theoretical); Session 2 - Sonic Boom Propagation (Experimental); and Session 3 - Acceptability Studies - Human and Animal.

  11. NASA's Astrophysics Data Archives

    NASA Astrophysics Data System (ADS)

    Hasan, H.; Hanisch, R.; Bredekamp, J.

    2000-09-01

    The NASA Office of Space Science has established a series of archival centers where science data acquired through its space science missions are deposited. The availability of high-quality data to the general public through these open archives enables the maximization of the science return of the flight missions. The Astrophysics Data Centers Coordinating Council, an informal collaboration of archival centers, coordinates data from five archival centers distinguished primarily by the wavelength range of the data deposited there. Data are available in FITS format. An overview of NASA's data centers and services is presented in this paper. A standard front-end modifier called 'Astrobrowse' is described. Other catalog browsers and tools include WISARD and AMASE, supported by the National Space Science Data Center, as well as ISAIA, a follow-on to Astrobrowse.

  12. Integrating thematic web portal capabilities into the NASA Earthdata Web Infrastructure

    NASA Astrophysics Data System (ADS)

    Wong, M. M.; McLaughlin, B. D.; Huang, T.; Baynes, K.

    2015-12-01

    The National Aeronautics and Space Administration (NASA) acquires and distributes an abundance of Earth science data on a daily basis to a diverse user community worldwide. To assist the scientific community and general public in achieving a greater understanding of the interdisciplinary nature of Earth science and of key environmental and climate change topics, the NASA Earthdata web infrastructure is integrating new methods of presenting and providing access to Earth science information, data, research, and results. This poster will present the process of integrating thematic web portal capabilities into the NASA Earthdata web infrastructure, with examples from the Sea Level Change Portal. The Sea Level Change Portal will be a source of current NASA research, data, and information regarding sea level change. The portal will provide sea level change information through articles, graphics, videos and animations, an interactive tool to view and access sea level change data, and a dashboard showing sea level change indicators. Earthdata is a part of the Earth Observing System Data and Information System (EOSDIS) project. EOSDIS is a key core capability in NASA's Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA's Earth science data from various sources - satellites, aircraft, field measurements, and various other programs. It comprises twelve Distributed Active Archive Centers (DAACs), Science Computing Facilities (SCFs), data discovery and service access clients (Reverb and Earthdata Search), a dataset directory (Global Change Master Directory - GCMD), near real-time data (Land Atmosphere Near real-time Capability for EOS - LANCE), Worldview (an imagery visualization interface), Global Imagery Browse Services, the Earthdata Code Collaborative, and a host of other discipline-specific data discovery, data access, data subsetting, and visualization tools.

  13. Validation of NASA Thermal Ice Protection Computer Codes. Part 3; The Validation of Antice

    NASA Technical Reports Server (NTRS)

    Al-Khalil, Kamel M.; Horvath, Charles; Miller, Dean R.; Wright, William B.

    2001-01-01

    An experimental program was generated by the Icing Technology Branch at NASA Glenn Research Center to validate two ice protection simulation codes: (1) LEWICE/Thermal for transient electrothermal de-icing and anti-icing simulations, and (2) ANTICE for steady-state hot gas and electrothermal anti-icing simulations. An electrothermal ice protection system was designed and constructed integral to a 36 inch chord NACA0012 airfoil. The model was fully instrumented with thermocouples, RTDs, and heat flux gages. Tests were conducted at several icing environmental conditions during a two-week period at the NASA Glenn Icing Research Tunnel. Experimental results of running-wet and evaporative cases were compared to the ANTICE computer code predictions and are presented in this paper.

  14. Evaluating the Performance of the NASA LaRC CMF Motion Base Safety Devices

    NASA Technical Reports Server (NTRS)

    Gupton, Lawrence E.; Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    This paper describes the initial measured performance results of the previously documented NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base hardware safety devices. These safety systems are required to prevent excessive accelerations that could injure personnel and damage simulator cockpits or the motion base structure. Excessive accelerations may be caused by erroneous commands or hardware failures driving an actuator to the end of its travel at high velocity, stepping a servo valve, or instantly reversing servo direction. Such commands may result from single order failures of electrical or hydraulic components within the control system itself, or from aggressive or improper cueing commands from the host simulation computer. The safety systems must mitigate these high acceleration events while minimizing the negative performance impacts. The system accomplishes this by controlling the rate of change of valve signals to limit excessive commanded accelerations. It also aids hydraulic cushion performance by limiting valve command authority as the actuator approaches its end of travel. The design takes advantage of inherent motion base hydraulic characteristics to implement all safety features using hardware only solutions.

  15. NASA tech brief evaluations

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.

    1994-01-01

    A major step in transferring technology is to disseminate information about new developments to the appropriate sector(s). A useful vehicle for transferring technology from the government sector to industry has been demonstrated with the use of periodical and journal announcements to highlight technological achievements which may meet the needs of industries other than the one that developed the innovation. To this end, NASA has very successfully pursued the goal of identifying technical innovations through the national circulation publication NASA Tech Briefs. At one time the Technology Utilization Offices of the various centers coordinated the selection of appropriate technologies through a common channel. In recent years, each NASA field center has undertaken the task of evaluating submittals for Tech Brief publication independently of the others. The University of Alabama in Huntsville was selected to assist MSFC in evaluating technology developed under the various programs managed by the NASA center for publication in the NASA Tech Briefs journal. The primary motivation for the NASA Tech Briefs publication is to bring to the attention of industry the various NASA technologies which, in general, have been developed for a specific aerospace requirement but have application in other areas. Since there are a number of applications outside of NASA that can benefit from innovative concepts developed within the MSFC programs, the potential to transfer technology to other sectors is very high. In most cases, the innovator(s) are not always knowledgeable about other industries which might potentially benefit from their innovation. The evaluation process can therefore contribute to the list of potential users through a knowledgeable evaluator.

  16. Development of computational methods for unsteady aerodynamics at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.; Whitlow, Woodrow, Jr.

    1987-01-01

    The current scope, recent progress, and plans for research and development of computational methods for unsteady aerodynamics at the NASA Langley Research Center are reviewed. Both integral equations and finite difference methods for inviscid and viscous flows are discussed. Although the great bulk of the effort has focused on finite difference solution of the transonic small perturbation equation, the integral equation program is given primary emphasis here because it is less well known.

  18. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    Technical producer for NASA's Eyes at JPL, Jason Craig discusses the Cassini mission as seen through the NASA Eyes program during a NASA Social, Thursday, Sept. 14, 2017, at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  19. Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 6: Study issues report

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex (PTC) at the Marshall Space Flight Center (MSFC). The PTC will train the space station payload specialists and mission specialists to operate the wide variety of experiments that will be on-board the Freedom Space Station. This Simulation Computer System (SCS) study issues report summarizes the analysis and study done under task 1 of the SCS study contract: identify and analyze the SCS study issues. This work was performed over the first three months of the SCS study, which began in August of 1988. First, issues were identified from all sources. These included the NASA SOW, the TRW proposal, and working groups that focused the experience of NASA and the contractor team performing the study (TRW, Essex, and Grumman). The final list is organized into training-related issues and SCS development issues. To begin the analysis of the issues, a list of all the functions for which the SCS could be used was created, i.e., when the computer is turned on, what will it be doing. Analysis continued by creating an operational functions matrix of SCS users vs. SCS functions to ensure all the functions considered were valid, and to aid in identification of users as the analysis progressed. The functions will form the basis for the requirements, which are currently being developed under task 3 of the SCS study.

  20. End-user satisfaction of a patient education tool manual versus computer-generated tool.

    PubMed

    Tronni, C; Welebob, E

    1996-01-01

    This article reports a nonexperimental comparative study of end-user satisfaction before and after implementation of a vendor-supplied computerized system (Micromedex, Inc.) for providing up-to-date patient instructions regarding diseases, injuries, procedures, and medications. The purpose of this research was to measure the satisfaction of nurses who directly interact with a specific patient-education software application and to compare user satisfaction with manual versus computer-generated materials. A computing satisfaction questionnaire that uses a scale of 1 to 5 (1 being the lowest) was used to measure end-user computing satisfaction in five constructs: content, accuracy, format, ease of use, and timeliness. Summary statistics were used to calculate mean ratings for each of the questionnaire's 12 items and for each of the five constructs. Mean differences between the before- and after-implementation ratings for the five constructs were significant by paired t test. Total user satisfaction improved with the computerized system, and the computer-generated materials were given a higher rating than were the manual materials. Implications of these findings are discussed.
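    The before/after comparison described above is a textbook use of the paired t test: each rater scores both conditions, and the test is run on the per-rater differences. The sketch below illustrates the computation with entirely hypothetical ratings (the study's actual data are not reproduced here); the function name `paired_t` and the sample values are inventions for illustration.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic for before/after ratings of one construct."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    sd = stdev(diffs)                     # sample std. dev. of the differences
    t = mean(diffs) / (sd / math.sqrt(n))
    return t, n - 1                       # t statistic and degrees of freedom

# Hypothetical 1-to-5 ratings of one construct from 8 nurses
manual   = [3, 2, 4, 3, 2, 3, 4, 2]   # before (manual materials)
computer = [4, 4, 5, 4, 3, 4, 5, 4]   # after (computer-generated materials)
t, df = paired_t(manual, computer)
print(round(t, 2), df)   # → 7.64 7
```

A t value this large with 7 degrees of freedom would be significant at conventional levels, matching the qualitative direction reported in the abstract (higher ratings for the computerized system).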

  1. Custom Sky-Image Mosaics from NASA's Information Power Grid

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Collier, James; Craymer, Loring; Curkendall, David

    2005-01-01

    yourSkyG is the second generation of the software described in yourSky: Custom Sky-Image Mosaics via the Internet (NPO-30556), NASA Tech Briefs, Vol. 27, No. 6 (June 2003), page 45. Like its predecessor, yourSkyG supplies custom astronomical image mosaics of sky regions specified by requesters using client computers connected to the Internet. Whereas yourSky constructs mosaics on a local multiprocessor system, yourSkyG performs the computations on NASA's Information Power Grid (IPG), which is capable of performing much larger mosaicking tasks. (The IPG is a high-performance computation and data grid that integrates geographically distributed computers, databases, and instruments.) A user of yourSkyG can specify parameters describing a mosaic to be constructed. yourSkyG then constructs the mosaic on the IPG and makes it available for downloading by the user. The complexities of determining which input images are required to construct a mosaic, retrieving the required input images from remote sky-survey archives, uploading the images to the computers on the IPG, performing the computations remotely on the Grid, and downloading the resulting mosaic from the Grid are all transparent to the user.

  2. Mobile Computing for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Swietek, Gregory E. (Technical Monitor)

    1994-01-01

    The use of commercial computer technology in specific aerospace mission applications can reduce the cost and project cycle time required for the development of special-purpose computer systems. Additionally, the pace of technological innovation in the commercial market has made new computer capabilities available for demonstrations and flight tests. Three areas of research and development being explored by the Portable Computer Technology Project at NASA Ames Research Center are the application of commercial client/server network computing solutions to crew support and payload operations, the analysis of requirements for portable computing devices, and testing of wireless data communication links as extensions to the wired network. This paper will present computer architectural solutions to portable workstation design including the use of standard interfaces, advanced flat-panel displays and network configurations incorporating both wired and wireless transmission media. It will describe the design tradeoffs used in selecting high-performance processors and memories, interfaces for communication and peripheral control, and high resolution displays. The packaging issues for safe and reliable operation aboard spacecraft and aircraft are presented. The current status of wireless data links for portable computers is discussed from a system design perspective. An end-to-end data flow model for payload science operations from the experiment flight rack to the principal investigator is analyzed using capabilities provided by the new generation of computer products. A future flight experiment on-board the Russian MIR space station will be described in detail including system configuration and function, the characteristics of the spacecraft operating environment, the flight qualification measures needed for safety review, and the specifications of the computing devices to be used in the experiment. The software architecture chosen shall be presented. An analysis of the

  3. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    NASA relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960s there has hardly been a spacecraft launched that does not have a computer on board providing command and control services. There have been recent incidents where software has played a role in high-profile mission failures and hazardous incidents. For example, the Mars Orbiter, Mars Polar Lander, the DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. This new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner.
This update of the standard clearly delineates the minimum set of software safety requirements for a project without detailing the implementation for those

  4. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    NASA Technical Reports Server (NTRS)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. 
This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  5. Hurricane Intensity Forecasts with a Global Mesoscale Model on the NASA Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Atlas, Robert

    2006-01-01

    It is known that General Circulation Models (GCMs) have insufficient resolution to accurately simulate hurricane near-eye structure and intensity. The increasing capabilities of high-end computers (e.g., the NASA Columbia Supercomputer) have changed this. In 2004, the finite-volume General Circulation Model at a 1/4 degree resolution, doubling the resolution used by most operational NWP centers at that time, was implemented and run to obtain promising landfall predictions for major hurricanes (e.g., Charley, Frances, Ivan, and Jeanne). In 2005, we successfully implemented the 1/8 degree version and demonstrated its performance on intensity forecasts with hurricane Katrina (2005). It is found that the 1/8 degree model is capable of simulating the radius of maximum wind and near-eye wind structure, thereby providing promising intensity forecasts. In this study, we will further evaluate the model's performance on intensity forecasts of hurricanes Ivan, Jeanne, and Karl in 2004. Suggestions for further model development will be made at the end.

  6. Computer systems and software engineering

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  7. Lessons Learned while Exploring Cloud-Native Architectures for NASA EOSDIS Applications and Systems

    NASA Astrophysics Data System (ADS)

    Pilone, D.

    2016-12-01

    As new, high-data-rate missions begin collecting data, NASA's Earth Observing System Data and Information System (EOSDIS) archive is projected to grow roughly 20x to over 300 PB by 2025. To prepare for the dramatic increase in data and enable broad scientific inquiry into larger time series and datasets, NASA has been exploring the impact of applying cloud technologies throughout EOSDIS. In this talk we will provide an overview of NASA's prototyping and lessons learned in applying cloud architectures to: highly scalable and extensible ingest and archive of EOSDIS data; going "all-in" on cloud-based application architectures, including "serverless" data processing pipelines, and evaluating approaches to vendor lock-in; rethinking data distribution and approaches to analysis in a cloud environment; and incorporating and enforcing security controls while minimizing the barrier for research efforts to deploy to NASA-compliant operational environments. NASA's Earth Observing System (EOS) is a coordinated series of satellites for long-term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a multi-petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 6000 data products spanning various science disciplines. EOSDIS has continually evolved to improve the discoverability, accessibility, and usability of high-impact NASA data spanning the multi-petabyte-scale archive of Earth science data products.

  8. THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauschlicher, C. W.; Ricca, A.; Boersma, C.

    The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 µm (5000-5 cm^-1). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.
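    The quoted spectral range mixes the two unit conventions common in IR spectroscopy: wavelength in micrometers and wavenumber in inverse centimeters, related by λ(µm) = 10⁴ / ν̃(cm⁻¹). A minimal sketch (function names are illustrative, not part of the database's tooling) confirms that 2-2000 µm and 5000-5 cm⁻¹ describe the same range:

```python
def wavelength_um(wavenumber_cm):
    """Convert wavenumber (cm^-1) to wavelength (micrometers); 1 cm = 10^4 um."""
    return 1e4 / wavenumber_cm

def wavenumber_cm(wavelength_um):
    """Convert wavelength (micrometers) to wavenumber (cm^-1)."""
    return 1e4 / wavelength_um

# The database's quoted endpoints: 2-2000 um <-> 5000-5 cm^-1
print(wavelength_um(5000))   # → 2.0
print(wavelength_um(5))      # → 2000.0
print(wavenumber_cm(2))      # → 5000.0
```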

  9. NASA High Contrast Imaging for Exoplanets

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    2008-01-01

    Described is NASA's ongoing program for the detection and characterization of exosolar planets via high-contrast imaging. Some of the more promising proposed techniques under assessment may enable detection of life outside our solar system. In visible light, terrestrial planets are approximately 10^-10 times as bright as the parent star. Issues such as diffraction, scatter, wavefront, amplitude and polarization all contribute to a reduction in contrast. An overview of the techniques will be discussed.

  10. Guidelines in preparing computer-generated plots for NASA technical reports with the LaRC graphics output system

    NASA Technical Reports Server (NTRS)

    Taylor, N. L.

    1983-01-01

    In response to a need for improved computer-generated plots that are acceptable to the Langley publication process, the LaRC Graphics Output System has been modified to encompass the publication requirements, and a guideline has been established. This guideline deals only with the publication requirements of computer-generated plots. This report explains the capability that authors of NASA technical reports can use to obtain publication-quality computer-generated plots for the Langley publication process. The rules applied in developing this guideline and examples illustrating the rules are included.

  11. Computation Methods for NASA Data-streams for Agricultural Efficiency Applications

    NASA Astrophysics Data System (ADS)

    Shrestha, B.; O'Hara, C. G.; Mali, P.

    2007-12-01

    Temporal Map Algebra (TMA) is a novel technique for analyzing time series of satellite imagery using simple algebraic operators; it treats a time series of images as a three-dimensional dataset, where two dimensions encode planimetric position on the Earth's surface and the third dimension encodes time. Spatio-temporal analytical processing methods such as TMA that use moderate-spatial-resolution satellite imagery with high temporal resolution to create multi-temporal composites are data-intensive as well as computationally intensive. TMA analysis for multi-temporal composites provides dramatically enhanced usefulness that will yield previously unavailable capabilities to user communities, if deployment is coupled with significant High Performance Computing (HPC) capabilities and interfaces are designed to deliver the full potential of these new technological developments. In this research, cross-platform data fusion and adaptive filtering using TMA were employed to create highly useful daily datasets and cloud-free, high-temporal-resolution vegetation index (VI) composites with enhanced information content for vegetation and bio-productivity monitoring, surveillance, and modeling. Fusion of Normalized Difference Vegetation Index (NDVI) data created from Aqua and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) surface-reflectance data (MOD09) enables the creation of daily composites, which are of immense value to a broad spectrum of global and national applications. Additionally, these products are highly desired by many natural resources agencies such as USDA/FAS/PECAD. Utilizing data streams collected by similar sensors on different platforms that transit the same areas at slightly different times of the day offers the opportunity to develop fused data products that have enhanced cloud-free and reduced-noise characteristics. Establishing a Fusion Quality Confidence Code (FQCC) provides a metadata product that quantifies the method of fusion for a given
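    The core TMA idea of reducing a time-stacked image cube along its time axis can be sketched in a few lines. The example below is a minimal illustration, not the authors' implementation: it assumes NumPy, uses randomly generated stand-in NDVI scenes with NaN marking cloudy pixels, and applies a simple per-pixel maximum (maximum-value compositing, a standard cloud-screening reduction) over the time dimension.

```python
import numpy as np

# Hypothetical 8-scene NDVI stack: (time, rows, cols),
# with NaN marking cloud-contaminated pixels.
rng = np.random.default_rng(0)
stack = rng.uniform(0.0, 0.9, size=(8, 4, 4)).astype(np.float32)
cloud = rng.random(size=stack.shape) < 0.3     # ~30% of pixels flagged cloudy
stack[cloud] = np.nan

# A TMA-style algebraic reduction along the time axis: the per-pixel
# maximum suppresses clouds, which depress NDVI values.
composite = np.nanmax(stack, axis=0)

print(composite.shape)   # → (4, 4)
```

In practice the same axis-wise reduction would run over full MODIS tiles, which is what makes the method both data- and compute-intensive at daily cadence.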

  12. A Sample of NASA Langley Unsteady Pressure Experiments for Computational Aerodynamics Code Evaluation

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Scott, Robert C.; Bartels, Robert E.; Edwards, John W.; Bennett, Robert M.

    2000-01-01

    As computational fluid dynamics methods mature, code development is rapidly transitioning from prediction of steady flowfields to unsteady flows. This change in emphasis offers a number of new challenges to the research community, not the least of which is obtaining detailed, accurate unsteady experimental data with which to evaluate new methods. Researchers at NASA Langley Research Center (LaRC) have been actively measuring unsteady pressure distributions for nearly 40 years. Over the last 20 years, these measurements have focused on developing high-quality datasets for use in code evaluation. This paper provides a sample of unsteady pressure measurements obtained by LaRC and available for government, university, and industry researchers to evaluate new and existing unsteady aerodynamic analysis methods. A number of cases are highlighted and discussed with attention focused on the unique character of the individual datasets and their perceived usefulness for code evaluation. Ongoing LaRC research in this area is also presented.

  13. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Phased development plan

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  14. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Operations concept report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  15. Product or waste? Importation and end-of-life processing of computers in Peru.

    PubMed

    Kahhat, Ramzy; Williams, Eric

    2009-08-01

    This paper considers the importation of used personal computers (PCs) in Peru and domestic practices in their production, reuse, and end-of-life processing. The empirical pillars of this study are analysis of government data describing trade in used and new computers and surveys and interviews of computer sellers, refurbishers, and recyclers. The United States is the primary source of used PCs imported to Peru. Analysis of shipment value (as measured by trade statistics) shows that 87-88% of imported used computers had a price higher than the ideal recycle value of constituent materials. The official trade in end-of-life computers is thus driven by reuse as opposed to recycling. The domestic reverse supply chain of PCs is well developed with extensive collection, reuse, and recycling. Environmental problems identified include open burning of copper-bearing wires to remove insulation and landfilling of CRT glass. Distinct from informal recycling in China and India, printed circuit boards are usually not recycled domestically but exported to Europe for advanced recycling or to China for (presumably) informal recycling. It is notable that purely economic considerations lead to circuit boards being exported to Europe where environmental standards are stringent, presumably due to higher recovery of precious metals.
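    The paper's reuse-versus-recycling inference rests on a simple decision rule: a shipment declared at a price above the ideal recycle value of its constituent materials is presumably traded for reuse, not recycling. The sketch below illustrates that rule with invented numbers; the shipment records, the per-kilogram material value, and the function name are all hypothetical, not figures from the study.

```python
# Hypothetical shipment records: (declared value in USD, weight in kg)
shipments = [(120.0, 10.0), (45.0, 12.0), (8.0, 11.0), (200.0, 9.5)]

MATERIAL_VALUE_PER_KG = 1.5   # assumed ideal recycle value of constituent materials

def driven_by_reuse(value_usd, weight_kg, per_kg=MATERIAL_VALUE_PER_KG):
    """A shipment priced above its material recycle value implies reuse."""
    return value_usd > weight_kg * per_kg

# Share of shipments whose price exceeds material value (cf. the 87-88% reported)
reuse_share = sum(driven_by_reuse(v, w) for v, w in shipments) / len(shipments)
print(reuse_share)   # → 0.75
```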

  16. SPoRT - An End-to-End R2O Activity

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.

    2009-01-01

    Established in 2002 to demonstrate the weather and forecasting application of real-time EOS measurements, the Short-term Prediction Research and Transition (SPoRT) program has grown to be an end-to-end research to operations activity focused on the use of advanced NASA modeling and data assimilation approaches, nowcasting techniques, and unique high-resolution multispectral observational data applications from EOS satellites to improve short-term weather forecasts on a regional and local scale. SPoRT currently partners with several universities and other government agencies for access to real-time data and products, and works collaboratively with them and operational end users at 13 WFOs to develop and test the new products and capabilities in a "test-bed" mode. The test-bed simulates key aspects of the operational environment without putting constraints on the forecaster workload. Products and capabilities which show utility in the test-bed environment are then transitioned experimentally into the operational environment for further evaluation and assessment. SPoRT focuses on a suite of data and products from MODIS, AMSR-E, and AIRS on the NASA Terra and Aqua satellites, and total lightning measurements from ground-based networks. Some of the observations are assimilated into or used with various versions of the WRF model to provide supplemental forecast guidance to operational end users. SPoRT is enhancing partnerships with NOAA / NESDIS for new product development and data access to exploit the remote sensing capabilities of instruments on the NPOESS satellites to address short term weather forecasting problems. The VIIRS and CrIS instruments on the NPP and follow-on NPOESS satellites provide similar observing capabilities to the MODIS and AIRS instruments on Terra and Aqua. SPoRT will be transitioning existing and new capabilities into the AWIPS II environment to ensure the continuity of its activities.

  17. Consolidating NASA's Arc Jets

    NASA Technical Reports Server (NTRS)

    Balboni, John A.; Gokcen, Tahir; Hui, Frank C. L.; Graube, Peter; Morrissey, Patricia; Lewis, Ronald

    2015-01-01

    The paper describes the consolidation of NASA's high-powered arc-jet testing at a single location. The existing plasma arc-jet wind tunnels located at the Johnson Space Center were relocated to Ames Research Center while maintaining NASA's technical capability to ground-test thermal protection system materials under simulated atmospheric entry convective heating. The testing conditions at JSC were reproduced and successfully demonstrated at ARC through close collaboration between the two centers. New equipment was installed at Ames to provide test gases of pure nitrogen mixed with pure oxygen, and for future nitrogen-carbon dioxide mixtures. A new control system was custom designed, installed and tested. Tests demonstrated that the 10 MW constricted-segmented arc heater at Ames meets the requirements of the major customer, NASA's Orion program. Solutions from an advanced computational fluid dynamics code were used to aid in characterizing the properties of the plasma stream and the surface environment on the calorimeters in the supersonic flow stream produced by the arc heater.

  18. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    Associate administrator for NASA's Science Mission Directorate Thomas Zurbuchen speaks to NASA Social attendees about the Cassini mission, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  19. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    NASA JPL digital and social media lead Stephanie Smith introduces technical producer for NASA's Eyes at JPL, Jason Craig, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  20. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    Director of NASA's Planetary Science Division, Jim Green, speaks to NASA Social attendees, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  1. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    NASA Social attendees film director of NASA's Planetary Science Division, Jim Green, as he discusses the Cassini mission, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  2. An Overview of NASA's Intelligent Systems Program

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    NASA and the computer science research community are poised to enter a critical era, an era in which, it seems, each needs the other. Market forces, driven by the immediate economic viability of computer science research results, place computer science in a relatively novel position. These forces affect how research is done and could, in the worst case, drive the field away from significant innovation, opting instead for incremental advances that yield greater stability in the marketplace. NASA, however, requires significant advances in computer science research in order to accomplish the exploration and science agenda it has set for itself. NASA may indeed be poised to advance computer science research in this century much as it advanced aeronautics research in the last.

  3. NASA automatic subject analysis technique for extracting retrievable multi-terms (NASA TERM) system

    NASA Technical Reports Server (NTRS)

    Kirschbaum, J.; Williamson, R. E.

    1978-01-01

    Current methods for information processing and retrieval used at the NASA Scientific and Technical Information Facility are reviewed. A more cost-effective, computer-aided indexing system is proposed which automatically generates print terms (phrases) from natural text. Satisfactory print terms can be generated in a primarily automatic manner to produce a thesaurus (NASA TERMS) that extends all the mappings presently applied by indexers, specifies the worth of each posting term in the thesaurus, and indicates the areas of use of each thesaurus entry phrase. These print terms enable the computer to determine which of several terms in a hierarchy is desirable and to differentiate ambiguous terms. Steps in the NASA TERMS algorithm are discussed, and the processing of surrogate entry phrases is demonstrated using four previously manually indexed STAR abstracts for comparison. The simulation shows phrase isolation, text phrase reduction, NASA TERMS selection, and RECON display.
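The phrase-isolation step can be sketched in a few lines: the hypothetical `candidate_phrases` function below splits text at stopwords and counts contiguous multi-word runs as candidate print terms. This is an assumed simplification for illustration, not the actual NASA TERMS algorithm.

```python
import re
from collections import Counter

# Tiny illustrative stopword list (the real system would use a much larger one).
STOPWORDS = {"the", "a", "an", "of", "and", "in", "for", "is", "are", "to", "from"}

def candidate_phrases(text, max_len=3):
    """Split text into stopword-delimited word runs and count the
    contiguous sub-phrases of 2..max_len words as candidate terms."""
    words = re.findall(r"[a-z]+", text.lower())
    runs, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                runs.append(current)
            current = []
        else:
            current.append(w)
    if current:
        runs.append(current)
    phrases = Counter()
    for run in runs:
        for n in range(2, max_len + 1):
            for i in range(len(run) - n + 1):
                phrases[" ".join(run[i:i + n])] += 1
    return phrases

counts = candidate_phrases(
    "The adaptive control scheme controls the adaptive control gains")
```

Here "adaptive control" is counted twice, so a frequency cutoff could promote it to a retrievable multi-term.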

  4. Joint-space adaptive control of a 6 DOF end-effector with closed-kinematic chain mechanism

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1989-01-01

    The development of a joint-space adaptive scheme is presented that controls the joint positions of a six-degree-of-freedom (DOF) robot end-effector performing fine, precise motion within a very limited workspace. The end-effector was built to study autonomous assembly of NASA hardware in space. The design of the adaptive controller is based on the concept of model reference adaptive control (MRAC) and the Lyapunov direct method, under the assumption that the end-effector performs slowly varying motion. Computer simulation is performed to investigate the performance of the developed scheme for position control of the end-effector. Simulation results show that the adaptive control scheme provides excellent tracking of several test paths.
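As a toy illustration of MRAC with Lyapunov-derived adaptation laws, the sketch below adapts two gains so a first-order plant tracks a stable reference model. The plant, model, and gain values are invented for illustration and are far simpler than the 6-DOF end-effector in the paper.

```python
# Lyapunov-based MRAC for a first-order plant (illustrative values only).
def simulate_mrac(steps=20000, dt=1e-3, gamma=5.0):
    a, b = 1.0, 2.0          # "unknown" unstable plant: x' = a*x + b*u
    am, bm = -4.0, 4.0       # stable reference model: xm' = am*xm + bm*r
    x = xm = 0.0
    th1 = th2 = 0.0          # adaptive feedforward / feedback gains
    for _ in range(steps):
        r = 1.0              # constant reference command
        u = th1 * r + th2 * x
        e = x - xm           # tracking error
        # Adaptation laws from the Lyapunov direct method (b > 0 assumed):
        # V = e^2/2 + (b/2/gamma)*(th1_err^2 + th2_err^2) gives V' = am*e^2 <= 0.
        th1 -= gamma * e * r * dt
        th2 -= gamma * e * x * dt
        # Euler-integrate plant and reference model
        x += (a * x + b * u) * dt
        xm += (am * xm + bm * r) * dt
    return x, xm

x_end, xm_end = simulate_mrac()
```

After 20 simulated seconds the plant output has converged to the reference-model output (which settles at -bm/am * r = 1.0).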

  5. Mixed Layer Heights Derived from the NASA Langley Research Center Airborne High Spectral Resolution Lidar

    NASA Technical Reports Server (NTRS)

    Scarino, Amy J.; Burton, Sharon P.; Ferrare, Rich A.; Hostetler, Chris A.; Hair, Johnathan W.; Obland, Michael D.; Rogers, Raymond R.; Cook, Anthony L.; Harper, David B.; Fast, Jerome; hide

    2012-01-01

    The NASA airborne High Spectral Resolution Lidar (HSRL) has been deployed on board the NASA Langley Research Center's B200 aircraft to several locations in North America from 2006 to 2012 to aid in characterizing aerosol properties for over fourteen field missions. Measurements of aerosol extinction (532 nm), backscatter (532 and 1064 nm), and depolarization (532 and 1064 nm) during 349 science flights, many in coordination with other participating research aircraft, satellites, and ground sites, constitute a diverse data set for use in characterizing the spatial and temporal distribution of aerosols, as well as properties and variability of the Mixing Layer (ML) height. We describe the use of the HSRL data collected during these missions for computing ML heights and show how the HSRL data can be used to determine the fraction of aerosol optical thickness within and above the ML, which is important for air quality assessments. We describe the spatial and temporal variations in ML heights found in the diverse locations associated with these experiments. We also describe how the ML heights derived from HSRL have been used to help assess simulations of Planetary Boundary Layer (PBL) derived using various models, including the Weather Research and Forecasting Chemistry (WRF-Chem), NASA GEOS-5 model, and the ECMWF/MACC models.
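A common way to derive ML heights from a lidar backscatter profile is to locate the altitude where backscatter decreases most steeply with height. The sketch below is an assumed simplification of the wavelet/gradient techniques applied to HSRL data, with a synthetic profile.

```python
# Gradient-method sketch: the mixed-layer top is taken where the
# backscatter profile has its strongest negative vertical gradient.
def mixed_layer_height(altitudes_m, backscatter):
    """Return the altitude (m) of the strongest negative gradient."""
    best_i, best_grad = 0, 0.0
    for i in range(len(backscatter) - 1):
        grad = (backscatter[i + 1] - backscatter[i]) / (
            altitudes_m[i + 1] - altitudes_m[i])
        if grad < best_grad:
            best_i, best_grad = i, grad
    # report the midpoint of the layer where the drop occurs
    return 0.5 * (altitudes_m[best_i] + altitudes_m[best_i + 1])

alts = [100 * k for k in range(1, 31)]            # 100 m to 3000 m
prof = [1.0 if a <= 1500 else 0.2 for a in alts]  # sharp drop above 1.5 km
ml_top = mixed_layer_height(alts, prof)
```

For this synthetic profile the detected ML top is 1550 m, the midpoint of the sharp aerosol drop-off.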

  6. Advanced Methodologies for NASA Science Missions

    NASA Astrophysics Data System (ADS)

    Hurlburt, N. E.; Feigelson, E.; Mentzel, C.

    2017-12-01

    Most of NASA's commitment to computational space science involves the organization and processing of Big Data from space-based satellites, and the calculation of advanced physical models based on these datasets. But considerable thought is also needed about which computations are performed. The science questions addressed by space data are so diverse and complex that traditional analysis procedures are often inadequate. The knowledge and skills of the statistician, applied mathematician, and algorithmic computer scientist must be incorporated into programs that currently emphasize engineering and physical science. NASA's culture and administrative mechanisms fully recognize that major advances in space science are driven by improvements in instrumentation. But it is less well recognized that new instruments and science questions give rise to new challenges in the treatment of satellite data after they are telemetered to the ground. These issues can be divided into two stages: data reduction through software pipelines developed within NASA mission centers, and science analysis performed by hundreds of space scientists dispersed throughout NASA, U.S. universities, and abroad. Both stages benefit from the latest statistical and computational methods; in some cases, the science result is completely inaccessible using traditional procedures. This paper reviews the current state of NASA's efforts and presents example applications using modern methodologies.

  7. Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. 
In addition, RIACS collaborates with NASA scientists to apply information technology research to a variety of NASA application domains.

  8. High Voltage Hall Accelerator Propulsion System Development for NASA Science Missions

    NASA Technical Reports Server (NTRS)

    Kamhawi, Hani; Haag, Thomas; Huang, Wensheng; Shastry, Rohit; Pinero, Luis; Peterson, Todd; Dankanich, John; Mathers, Alex

    2013-01-01

    NASA's Science Mission Directorate In-Space Propulsion Technology Program is sponsoring the development of a 3.8 kW-class engineering development unit Hall thruster for implementation in NASA science and exploration missions. NASA Glenn Research Center and Aerojet are developing a high-fidelity high voltage Hall accelerator (HiVHAc) thruster that can achieve specific impulse magnitudes greater than 2,700 seconds and xenon throughput capability in excess of 300 kilograms. Performance, plume mapping, thermal characterization, and vibration tests of the HiVHAc engineering development unit thruster have been performed. In addition, the HiVHAc project is pursuing the development of a power processing unit (PPU) and xenon feed system (XFS) for integration with the thruster. Colorado Power Electronics and NASA Glenn Research Center have tested a brassboard PPU for more than 1,500 hours in a vacuum environment, and new brassboard and engineering model PPUs are under development. VACCO Industries developed a xenon flow control module that has undergone qualification testing and will be integrated with the HiVHAc thruster for extended-duration tests. Finally, recent mission studies have shown that the HiVHAc propulsion system has sufficient performance for four Discovery- and two New Frontiers-class NASA design reference missions.

  9. Overview of Fundamental High-Lift Research for Transport Aircraft at NASA

    NASA Technical Reports Server (NTRS)

    Leavitt, L. D.; Washburn, A. E.; Wahls, R. A.

    2007-01-01

    NASA has a long history in fundamental and applied high-lift research. Current programs focus on the validation of technologies and tools that will enable extremely short takeoff and landing coupled with efficient cruise performance, simple flaps with flow control for improved effectiveness, circulation control wing concepts, some exploration of new aircraft concepts, and partnership with the Air Force Research Laboratory in mobility. Transport high-lift development testing will shift more toward mid- and high-Rn facilities, at least until the question "How much Rn is required?" is answered. This viewgraph presentation provides an overview of high-lift research at NASA.

  10. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating the very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can shorten the schedule and cost: many experiments can be run quickly in a model that would take years and high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  11. Activities of the Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  12. Storage and network bandwidth requirements through the year 2000 for the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen

    1996-01-01

    The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.
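Capacity projections of this kind typically reduce to compound-growth arithmetic. The sketch below uses hypothetical figures, not the CERRC committee's numbers, to show the calculation.

```python
# Illustrative compound-growth projection of mass-storage demand
# (start value and growth rate are hypothetical).
def project_capacity(start_tb, annual_growth, years):
    """Project storage capacity (TB) under a fixed annual growth rate."""
    return [round(start_tb * (1 + annual_growth) ** y, 1)
            for y in range(years + 1)]

# e.g. 10 TB growing 60% per year over 4 years
projection = project_capacity(10.0, 0.60, 4)
```

Even a modest 60% annual growth rate more than sextuples demand within four years, which is why such committees plan network bandwidth and archive capacity together.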

  13. Going beyond the NASA Earthdata website: Reaching out to new audiences via social media and webinars

    NASA Astrophysics Data System (ADS)

    Bagwell, R.; Wong, M. M.; Brennan, J.; Murphy, K. J.; Behnke, J.

    2014-12-01

    This poster will introduce and explore the various social media efforts and monthly webinar series recently established by the National Aeronautics and Space Administration (NASA) Earth Observing System Data and Information System (EOSDIS) project. EOSDIS is a key core capability in NASA's Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA's Earth science data from various sources - satellites, aircraft, field measurements, and various other programs. Some of the capabilities include twelve Distributed Active Archive Centers (DAACs), Science Computing Facilities (SCFs), a data discovery and service access client (Reverb), dataset directory (Global Change Master Directory - GCMD), near real-time data (Land Atmosphere Near real-time Capability for EOS - LANCE), Worldview (an imagery visualization interface), Global Imagery Browse Services, the Earthdata Code Collaborative, and a host of other discipline specific data discovery, data access, data subsetting and visualization tools and services. We have embarked on these efforts to reach out to new audiences and potential new users and to engage our diverse end user communities world-wide. One of the key objectives is to increase awareness of the breadth of Earth science data information, services, and tools that are publicly available while also highlighting how these data and technologies enable scientific research.

  14. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Director of NASA's Planetary Science Division, Jim Green, is seen during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  15. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Preston Dyches, media relations specialist at NASA's Jet Propulsion Laboratory, speaks during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  16. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Jim Green, director of NASA's Planetary Science Division, answers questions during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  17. Eclipse Across America: Through the Eyes of NASA

    NASA Astrophysics Data System (ADS)

    Young, C. Alex; Heliophysics Education Consortium

    2018-01-01

    Monday, August 21, 2017, marked the first total solar eclipse to cross the continental United States coast-to-coast in almost a century. NASA scientists and educators, working alongside many partners, were spread across the entire country, both inside and outside the path of totality. Like many other organizations, NASA prepared for this eclipse for several years. The August 21 eclipse was NASA's biggest media event in recent history, and was made possible by the work of thousands of volunteers, collaborators and NASA employees. The agency supported science, outreach, and media communications activities along the path of totality and across the country. This culminated in a 3 ½-hour broadcast from Charleston, SC, showcasing the sights and sounds of the eclipse – starting with the view from a plane off the coast of Oregon and ending with images from the International Space Station as the Moon's inner shadow left the US East Coast. Along the way, NASA shared experiments and research from different groups of scientists, including 11 NASA-supported studies, 50+ high-altitude balloon launches, and 12 NASA and partner space-based assets. This talk shares the timeline of this momentous event from NASA's perspective, describing outreach successes and providing a glimpse at some of the science results available and yet to come.

  18. A self-analysis of the NASA-TLX workload measure.

    PubMed

    Noyes, Jan M; Bruneau, Daniel P J

    2007-04-01

    Computer use and, more specifically, the administration of tests and materials online continue to proliferate. A number of subjective, self-report workload measures exist, but the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is probably the most well known and used. The aim of this paper is to consider the workload costs associated with the computer-based and paper versions of the NASA-TLX measure. It was found that there is a significant difference between the workload scores for the two media, with the computer version of the NASA-TLX incurring more workload. This has implications for the practical use of the NASA-TLX as well as for other computer-based workload measures.
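The standard NASA-TLX overall score is a weighted mean: six subscale ratings (0-100) are weighted by tallies from the 15 pairwise comparisons and divided by 15. A minimal sketch with invented ratings:

```python
# Standard NASA-TLX scoring: six subscale ratings (0-100) weighted by
# tallies from the 15 pairwise comparisons, then divided by 15.
def nasa_tlx(ratings, weights):
    """Overall workload score; weights must sum to 15 (one per comparison)."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in ratings) / 15.0

# Hypothetical example ratings and pairwise-comparison tallies
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 30}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
score = nasa_tlx(ratings, weights)
```

For these example inputs the overall workload score is 55.0, dominated by the heavily weighted mental-demand subscale.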

  19. Benefit from NASA

    NASA Image and Video Library

    2001-09-01

    The high-tech art of digital signal processing (DSP) was pioneered at NASA's Jet Propulsion Laboratory (JPL) in the mid-1960s for use in the Apollo Lunar Landing Program. Designed to computer enhance pictures of the Moon, this technology became the basis for the Landsat Earth resources satellites and subsequently has been incorporated into a broad range of Earthbound medical and diagnostic tools. DSP is employed in advanced body imaging techniques including Computer-Aided Tomography, also known as CT and CATScan, and Magnetic Resonance Imaging (MRI). CT images are collected by irradiating a thin slice of the body with a fan-shaped x-ray beam from a number of directions around the body's perimeter. A tomographic (slice-like) picture is reconstructed from these multiple views by a computer. MRI employs a magnetic field and radio waves, rather than x-rays, to create images.
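Pixel-level enhancement of this kind can be illustrated with a simple 3x3 mean filter that suppresses noise in a grayscale image. This is an illustrative sketch of the digital filtering idea, not JPL's actual DSP code.

```python
# Toy DSP example: a 3x3 box (mean) filter smoothing a grayscale image,
# the kind of pixel-level operation used in early image enhancement.
def mean_filter(img):
    """Return img smoothed with a 3x3 box filter (edge pixels copied as-is)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(img[i + di][j + dj]
                            for di in (-1, 0, 1)
                            for dj in (-1, 0, 1)) / 9.0
    return out

noisy = [[0, 0, 0],
         [0, 90, 0],
         [0, 0, 0]]
smoothed = mean_filter(noisy)
```

The isolated bright spike of 90 is spread across its neighborhood, dropping the center pixel to 10.0.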

  20. 1999 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    Hahne, David E. (Editor)

    1999-01-01

    The High-Speed Research Program sponsored the NASA High-Speed Research Program Aerodynamic Performance Review on February 8-12, 1999 in Anaheim, California. The review was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in the areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization) and High-Lift. The review objectives were to: (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. The HSR AP Technical Review was held simultaneously with the annual review of the following airframe technology areas: Materials and Structures, Environmental Impact, Flight Deck, and Technology Integration. Thus, a fourth objective of the Review was to promote synergy between the Aerodynamic Performance technology area and the other technology areas within the airframe element of the HSR Program. This Volume 2/Part 1 publication presents the High-Lift Configuration Development session.

  1. The Kalman Filter and High Performance Computing at NASA's Data Assimilation Office (DAO)

    NASA Technical Reports Server (NTRS)

    Lyster, Peter M.

    1999-01-01

    Atmospheric data assimilation is a method of combining actual observations with model simulations to produce a more accurate description of the earth system than the observations alone provide. The output of data assimilation, sometimes called "the analysis," is a set of accurate, regular, gridded datasets of observed and unobserved variables. This is used not only for weather forecasting but is becoming increasingly important for climate research. For example, these datasets may be used to retrospectively assess energy budgets or the effects of trace gases such as ozone. This allows researchers to understand processes driving weather and climate, which have important scientific and policy implications. The primary goal of NASA's Data Assimilation Office (DAO) is to provide datasets for climate research and to support NASA satellite and aircraft missions. This presentation will: (1) describe ongoing work on the advanced Kalman/Lagrangian filter parallel algorithm for the assimilation of trace gases in the stratosphere; and (2) discuss the Kalman filter in relation to other presentations from the DAO on Four Dimensional Data Assimilation at this meeting. Although the designation "Kalman filter" is often used to describe the overarching work, the series of talks will show that the scientific software and the kinds of parallelization techniques being developed at the DAO are very different depending on the type of problem being considered, the extent to which the problem is mission critical, and the degree of software engineering that has to be applied.
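The analysis step can be illustrated with a minimal scalar Kalman filter that blends a model forecast with each new observation in proportion to their uncertainties. This toy example is far simpler than the DAO's Kalman/Lagrangian filter, but the predict/update structure is the same.

```python
# Minimal scalar Kalman filter: blend a forecast with an observation.
def kalman_step(x, p, z, q, r):
    """One predict/update cycle for a constant-state model.
    x, p: prior state estimate and its variance
    z:    new observation; q, r: process and observation noise variances."""
    # predict (identity dynamics, so the state carries over; variance grows)
    x_pred, p_pred = x, p + q
    # update: gain weighs observation against forecast by their variances
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Assimilate four noisy observations of a quantity whose true value is ~1
x, p = 0.0, 1.0
for z in [1.2, 0.8, 1.1, 0.9]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.25)
```

After four updates the estimate has moved close to the observations' mean and the analysis variance has shrunk well below the prior variance.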

  2. NASA/MSFC multilayer diffusion models and computer program for operational prediction of toxic fuel hazards

    NASA Technical Reports Server (NTRS)

    Dumbauld, R. K.; Bjorklund, J. R.; Bowers, J. F.

    1973-01-01

    The NASA/MSFC multilayer diffusion models are described, which are used in applying meteorological information to the estimation of toxic fuel hazards resulting from the launch of rocket vehicles and from accidental cold spills and leaks of toxic fuels. Background information, definitions of terms, and a description of the multilayer concept are presented, along with formulas for determining the buoyant rise of hot exhaust clouds or plumes from conflagrations, and descriptions of the multilayer diffusion models. A brief description of the computer program is given, and sample problems and their solutions are included. Derivations of the cloud rise formulas, user instructions, and computer program output lists are also included.
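The textbook single-layer Gaussian plume formula that underlies such dispersion models can be sketched directly; this is a simplified analogue for illustration, not the multilayer NASA/MSFC formulation.

```python
import math

# Textbook Gaussian plume: concentration downwind of a continuous point
# source, with ground reflection (Q in g/s, u in m/s, sigmas in m).
def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
    """Concentration (g/m^3) at crosswind offset y and height z,
    for effective release height h."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration for a ground-level release
c = plume_concentration(q=100.0, u=5.0, sigma_y=50.0, sigma_z=25.0,
                        y=0.0, z=0.0, h=0.0)
```

At the centerline of a ground-level release the expression collapses to Q / (pi * u * sigma_y * sigma_z), a useful sanity check when implementing the formula.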

  3. DORMAN computer program (study 2.5). Volume 1: Executive summary. [development of data bank for computerized information storage of NASA programs

    NASA Technical Reports Server (NTRS)

    Stricker, L. T.

    1973-01-01

    The DORCA Applications study has been directed at the development of a data bank management computer program identified as DORMAN. Because of the size of the DORCA data files and the manipulations required on that data to support analyses with the DORCA program, automated data-handling techniques are required to replace time-consuming manual input generation. The Dynamic Operations Requirements and Cost Analysis (DORCA) program was developed for use by NASA in planning future space programs. Both programs are designed for implementation on the UNIVAC 1108 computing system. The purpose of this executive summary report is to define for NASA management the basic functions of the DORMAN program and its capabilities.

  4. P2P Technology for High-Performance Computing: An Overview

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Berry, Jason

    2003-01-01

    The transition from cluster computing to peer-to-peer (P2P) high-performance computing has recently attracted the attention of the computer science community. It has been recognized that existing local networks and dedicated clusters of headless workstations can serve as inexpensive yet powerful virtual supercomputers. It has also been recognized that the vast number of lower-end computers connected to the Internet stay idle for as long as 90% of the time. The growing speed of Internet connections and the high availability of free CPU time encourage exploration of the possibility of using the whole Internet, rather than local clusters, as a massively parallel yet almost freely available P2P supercomputer. As part of a larger project on P2P high-performance computing, it has been my goal to compile an overview of the P2P paradigm. I have studied various P2P platforms and compiled systematic brief descriptions of their most important characteristics. I have also experimented and obtained hands-on experience with selected P2P platforms, focusing on those that seem promising with respect to P2P high-performance computing. I have also compiled relevant literature and web references. I have prepared a draft technical report and summarized my findings in a poster paper.

  5. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
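The core idea of a multi-stage workflow, threading each stage's output into the next, can be sketched generically. The code below is a hypothetical illustration of that pattern, not JMS's actual API.

```python
# Hypothetical sketch of a staged workflow: each stage consumes the
# previous stage's output, the way pipeline stages are chained before
# submission to a cluster scheduler.
def run_workflow(stages, data):
    """Run (name, func) stages in order, threading each result onward."""
    log = []
    for name, func in stages:
        data = func(data)
        log.append((name, data))  # record each stage's output for auditing
    return data, log

stages = [
    ("clean",  lambda xs: [x for x in xs if x is not None]),
    ("square", lambda xs: [x * x for x in xs]),
    ("sum",    lambda xs: sum(xs)),
]
result, log = run_workflow(stages, [1, None, 2, 3])
```

A real workflow manager adds what this sketch omits: queuing the stages through the cluster's resource manager, persisting intermediate results, and exposing progress through a web interface.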

  6. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  7. High temperature superconducting magnetic energy storage for future NASA missions

    NASA Technical Reports Server (NTRS)

    Faymon, Karl A.; Rudnick, Stanley J.

    1988-01-01

    Several NASA sponsored studies based on 'conventional' liquid helium temperature level superconductivity technology have concluded that superconducting magnetic energy storage has considerable potential for space applications. The advent of high temperature superconductivity (HTSC) may provide additional benefits over conventional superconductivity technology, making magnetic energy storage even more attractive. The proposed NASA space station is a possible candidate for the application of HTSC energy storage. Alternative energy storage technologies for this and other low Earth orbit missions are compared.

  8. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews the use of "soft computing," which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieves tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems have been or are being developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. The NASA applications reviewed are real-time (RT) anomaly detection, RT moving debris detection, and the Columbia investigation. The RT anomaly detection work reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial use: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and x-ray image enhancement.

  9. NASA-IGES Translator and Viewer

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Logan, Michael A.

    1995-01-01

    NASA-IGES Translator (NIGEStranslator) is a batch program that translates a general IGES (Initial Graphics Exchange Specification) file to a NASA-IGES-Nurbs-Only (NINO) file. IGES is the most popular geometry exchange standard among Computer Aided Geometric Design (CAD) systems. The NINO format is a subset of IGES, implementing the simple yet most popular NURBS (Non-Uniform Rational B-Splines) representation. NIGEStranslator converts a complex IGES file to the simpler NINO file to simplify the tasks of CFD grid generation for models in CAD format. The NASA-IGES Viewer (NIGESview) is an Open-Inventor-based, highly interactive viewer/editor for NINO files. Geometry in the IGES files can be viewed, copied, transformed, deleted, and inquired. Users can use NIGEStranslator to translate IGES files from CAD systems to NINO files. The geometry can then be examined with NIGESview. Extraneous geometries can be interactively removed, and the cleaned model can be written to an IGES file, ready to be used in grid generation.
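For readers unfamiliar with the NURBS representation that NINO restricts IGES to, a curve point is the rational combination C(u) = (sum of N_{i,p}(u) w_i P_i) / (sum of N_{i,p}(u) w_i), where N_{i,p} are the B-spline basis functions. Below is a minimal pure-Python sketch of this evaluation (not the translator's code; a clamped knot vector and u inside the knot range are assumed):

```python
def basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, degree, knots, points, weights):
    """Evaluate a 2-D NURBS curve point as the weighted rational sum."""
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(points, weights)):
        nw = basis(i, degree, u, knots) * w
        num_x += nw * x
        num_y += nw * y
        den += nw
    return (num_x / den, num_y / den)
```

With all weights equal to 1 this reduces to an ordinary B-spline; unequal weights are what let NURBS represent conics exactly, which is why a NURBS-only subset can still carry general CAD geometry.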

  10. High-torque open-end wrench

    NASA Technical Reports Server (NTRS)

    Giandomenico, A.; Dame, J. M.; Behimer, H. (Inventor)

    1978-01-01

    A wrench is described that is usable where limited access normally requires an open-end wrench, but which has substantially the high-torque capacity and small radial clearance characteristics of a closed-end wrench. The wrench includes a sleeve forming a nut-engageable socket with a gap in its side, and an adaptor forming a socket with a gap in its side, the adaptor closely surrounding the sleeve and extending across the gap in the sleeve. The sleeve and adaptor have surfaces that become fully engaged when a wrench handle is applied to the adaptor to turn it so as to tighten a nut engaged by the sleeve.

  11. 1998 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    McMillin, S. Naomi (Editor)

    1999-01-01

    NASA's High-Speed Research Program sponsored the 1998 Aerodynamic Performance Technical Review on February 9-13, in Los Angeles, California. The review was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in the areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization), High-Lift, and Flight Controls. The review objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single- and multi-point optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT simulation results were presented along with executive summaries for all the Aerodynamic Performance technology areas. The HSR Aerodynamic Performance Technical Review was held simultaneously with the annual review of the following airframe technology areas: Materials and Structures, Environmental Impact, Flight Deck, and Technology Integration. Thus, a fourth objective of the Review was to promote synergy between the Aerodynamic Performance technology area and the other technology areas of the HSR Program.

  12. NASA Automatic Information Security Handbook

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This handbook details the Automated Information Security (AIS) management process for NASA. Automated information system security is becoming an increasingly important issue for all NASA managers. Rapid advancements in computer and network technologies and the demanding nature of space exploration and space research have made NASA increasingly dependent on automated systems to store, process, and transmit vast amounts of mission support information, hence the need for AIS systems and management. This handbook provides consistent policies, procedures, and guidance to assure that an aggressive and effective AIS program is developed, implemented, and sustained at all NASA organizations and NASA support contractors.

  13. End User Computing at a South African Technikon: Enabling Disadvantaged Students To Meet Employers' Requirements.

    ERIC Educational Resources Information Center

    Marsh, Cecille

    A two-phase study examined the skills required of competent end-users of computers in the workplace and assessed the computing awareness and technological environment of first-year students entering historically disadvantaged technikons in South Africa. First, a DACUM (Developing a Curriculum) panel of nine representatives of local business and…

  14. 1997 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 1; Configuration Aerodynamics

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Aerodynamic Performance Workshop on February 25-28, 1997. The workshop was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in the areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization), High-Lift, Flight Controls, Supersonic Laminar Flow Control, and Sonic Boom Prediction. The workshop objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single- and multi-point optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT Motion Simulator results were presented along with executive summaries for all the Aerodynamic Performance technology areas.

  15. 1997 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 1; Configuration Aerodynamics

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Aerodynamic Performance Workshop on February 25-28, 1997. The workshop was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in the areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization), High-Lift, Flight Controls, Supersonic Laminar Flow Control, and Sonic Boom Prediction. The workshop objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single- and multi-point optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT Motion Simulator results were presented along with executive summaries for all the Aerodynamic Performance technology areas.

  16. 1997 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 1; Configuration Aerodynamics

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Aerodynamic Performance Workshop on February 25-28, 1997. The workshop was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in the areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization), High-Lift, Flight Controls, Supersonic Laminar Flow Control, and Sonic Boom Prediction. The workshop objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single- and multi-point optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT Motion Simulator results were presented along with executive summaries for all the Aerodynamic Performance technology areas.

  17. A study of the optimization method used in the NAVY/NASA gas turbine engine computer code

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.; Pines, S.

    1977-01-01

    Sources of numerical noise affecting the convergence properties of the Powell's Principal Axis Method of Optimization in the NAVY/NASA gas turbine engine computer code were investigated. The principal noise source discovered resulted from loose input tolerances used in terminating iterations performed in subroutine CALCFX to satisfy specified control functions. A minor source of noise was found to be introduced by an insufficient number of digits in stored coefficients used by subroutine THERM in polynomial expressions of thermodynamic properties. Tabular results of several computer runs are presented to show the effects on program performance of selective corrective actions taken to reduce noise.
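The THERM finding above (too few digits in stored polynomial coefficients) is easy to reproduce. The sketch below, using hypothetical coefficients, shows how rounding to four significant digits perturbs a polynomial evaluation by an amount that a loose convergence tolerance can mistake for real variation in the objective:

```python
def poly(coeffs, x):
    """Horner evaluation of sum(c_k * x**k), coefficients low order first."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def truncate(coeffs, digits):
    """Round each coefficient to `digits` significant figures, mimicking
    coefficients stored with too few digits."""
    return [float(f"%.{digits - 1}e" % c) for c in coeffs]

# Illustrative coefficients (not THERM's): compare full vs. truncated.
full = [1.0000456, -2.0003789, 3.1415926]
trunc = truncate(full, 4)
noise = max(abs(poly(full, x) - poly(trunc, x)) for x in (0.5, 1.0, 2.0))
```

Here `noise` is on the order of 1e-3; an optimizer terminating its line search when function values agree to a similar tolerance would see this storage artifact as genuine structure in the objective.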

  18. Benefit from NASA

    NASA Image and Video Library

    2001-01-01

    The high-tech art of digital signal processing (DSP) was pioneered at NASA's Jet Propulsion Laboratory (JPL) in the mid-1960s for use in the Apollo Lunar Landing Program. Designed to computer-enhance pictures of the Moon, this technology became the basis for the Landsat Earth resources satellites and subsequently has been incorporated into a broad range of Earthbound medical and diagnostic tools. DSP is employed in advanced body imaging techniques including Computer-Aided Tomography, also known as CT and CATScan, and Magnetic Resonance Imaging (MRI). CT images are collected by irradiating a thin slice of the body with a fan-shaped x-ray beam from a number of directions around the body's perimeter. A tomographic (slice-like) picture is reconstructed from these multiple views by a computer. MRI employs a magnetic field and radio waves, rather than x-rays, to create images. In this photograph, a patient undergoes an open MRI.

  19. The NASA CSTI high capacity power project

    NASA Technical Reports Server (NTRS)

    Winter, J.; Dudenhoefer, J.; Juhasz, A.; Schwarze, G.; Patterson, R.; Ferguson, D.; Titran, R.; Schmitz, P.; Vandersande, J.

    1992-01-01

    The SP-100 Space Nuclear Power Program was established in 1983 by DOD, DOE, and NASA as a joint program to develop technology for military and civil applications. Starting in 1986, NASA has funded a technology program to maintain the momentum of promising aerospace technology advancement started during Phase 1 of SP-100 and to strengthen, in key areas, the chances for successful development and growth capability of space nuclear reactor power systems for a wide range of future space applications. The elements of the Civilian Space Technology Initiative (CSTI) High Capacity Power Project include Systems Analysis, Stirling Power Conversion, Thermoelectric Power Conversion, Thermal Management, Power Management, Systems Diagnostics, Environmental Interactions, and Material/Structural Development. Technology advancement in all elements is required to provide the growth capability, high reliability and 7 to 10 year lifetime demanded for future space nuclear power systems. The overall project will develop and demonstrate the technology base required to provide a wide range of modular power systems compatible with the SP-100 reactor which facilitates operation during lunar and planetary day/night cycles as well as allowing spacecraft operation at any attitude or distance from the sun. Significant accomplishments in all of the project elements will be presented, along with revised goals and project timelines recently developed.

  20. The NASA CSTI high capacity power project

    NASA Astrophysics Data System (ADS)

    Winter, J.; Dudenhoefer, J.; Juhasz, A.; Schwarze, G.; Patterson, R.; Ferguson, D.; Titran, R.; Schmitz, P.; Vandersande, J.

    1992-08-01

    The SP-100 Space Nuclear Power Program was established in 1983 by DOD, DOE, and NASA as a joint program to develop technology for military and civil applications. Starting in 1986, NASA has funded a technology program to maintain the momentum of promising aerospace technology advancement started during Phase 1 of SP-100 and to strengthen, in key areas, the chances for successful development and growth capability of space nuclear reactor power systems for a wide range of future space applications. The elements of the Civilian Space Technology Initiative (CSTI) High Capacity Power Project include Systems Analysis, Stirling Power Conversion, Thermoelectric Power Conversion, Thermal Management, Power Management, Systems Diagnostics, Environmental Interactions, and Material/Structural Development. Technology advancement in all elements is required to provide the growth capability, high reliability and 7 to 10 year lifetime demanded for future space nuclear power systems. The overall project will develop and demonstrate the technology base required to provide a wide range of modular power systems compatible with the SP-100 reactor which facilitates operation during lunar and planetary day/night cycles as well as allowing spacecraft operation at any attitude or distance from the sun. Significant accomplishments in all of the project elements will be presented, along with revised goals and project timelines recently developed.

  1. Development of High-Power Hall Thruster Power Processing Units at NASA GRC

    NASA Technical Reports Server (NTRS)

    Pinero, Luis R.; Bozak, Karin E.; Santiago, Walter; Scheidegger, Robert J.; Birchenough, Arthur G.

    2015-01-01

    NASA GRC successfully designed, built, and tested four different power processor concepts for high power Hall thrusters. Each design satisfies unique goals, including the evaluation of a novel silicon carbide semiconductor technology, validation of innovative circuits to overcome the problems with high input voltage converter design, development of a direct-drive unit to demonstrate potential benefits, or simply identification of lessons learned from the development of a PPU using a conventional design approach. Any of these designs could be developed further to satisfy NASA's needs for high power electric propulsion in the near future.

  2. Reducing the complexity of NASA's space communications infrastructure

    NASA Technical Reports Server (NTRS)

    Miller, Raymond E.; Liu, Hong; Song, Junehwa

    1995-01-01

    This report describes the range of activities performed during the annual reporting period in support of the NASA Code O Success Team - Lifecycle Effectiveness for Strategic Success (COST LESS) team. The overall goal of the COST LESS team is to redefine success in a constrained fiscal environment and reduce the cost of success for end-to-end mission operations. This goal is more encompassing than the original proposal made to NASA for reducing complexity of NASA's Space Communications Infrastructure. The COST LESS team approach for reengineering the space operations infrastructure has a focus on reversing the trend of engineering special solutions to similar problems.

  3. 48 CFR 1825.003-70 - NASA definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false NASA definitions. 1825.003-70 Section 1825.003-70 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION 1825.003-70 NASA definitions. “Canadian end product...

  4. 48 CFR 1825.003-70 - NASA definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true NASA definitions. 1825.003-70 Section 1825.003-70 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION 1825.003-70 NASA definitions. “Canadian end product...

  5. 48 CFR 1825.003-70 - NASA definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false NASA definitions. 1825.003-70 Section 1825.003-70 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION 1825.003-70 NASA definitions. “Canadian end product...

  6. 48 CFR 1825.003-70 - NASA definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false NASA definitions. 1825.003-70 Section 1825.003-70 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION 1825.003-70 NASA definitions. “Canadian end product...

  7. 48 CFR 1825.003-70 - NASA definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false NASA definitions. 1825.003-70 Section 1825.003-70 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION 1825.003-70 NASA definitions. “Canadian end product...

  8. Telepresence master glove controller for dexterous robotic end-effectors

    NASA Technical Reports Server (NTRS)

    Fisher, Scott S.

    1987-01-01

    This paper describes recent research in the Aerospace Human Factors Research Division at NASA's Ames Research Center to develop a glove-like, control and data-recording device (DataGlove) that records and transmits to a host computer in real time, and at appropriate resolution, a numeric data-record of a user's hand/finger shape and dynamics. System configuration and performance specifications are detailed, and current research is discussed investigating its applications in operator control of dexterous robotic end-effectors and for use as a human factors research tool in evaluation of operator hand function requirements and performance in other specialized task environments.

  9. 2012 NASA Cost Estimating Handbook Highlights

    NASA Technical Reports Server (NTRS)

    Rosenberg, Leigh; Stukes, Sherry

    2012-01-01

    The major goal is to ensure that appropriate policy is adopted and that best practices are being developed, communicated, and used across the Agency. This is accomplished by engaging the NASA Cost Estimating Community representatives in the update, which is scheduled to be complete by the end of FY 2012. The document has been through three detailed reviews across NASA.

  10. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

    NREL operates high-performance computing (HPC) systems and operations dedicated to advancing energy efficiency and renewable energy technologies.

  11. NASA's Systems Engineering Approaches for Addressing Public Health Surveillance Requirements

    NASA Technical Reports Server (NTRS)

    Vann, Timi

    2003-01-01

    NASA's systems engineering has its heritage in space mission analysis and design, including the end-to-end approach to managing every facet of the extreme engineering required for successful space missions. NASA sensor technology, understanding of remote sensing, and knowledge of Earth system science can be powerful new tools for improved disease surveillance and environmental public health tracking. NASA's systems engineering framework facilitates the match between partner needs and decision support requirements in the areas of 1) Science/Data; 2) Technology; 3) Integration. Partnerships between NASA and other Federal agencies are diagrammed in this viewgraph presentation. NASA's role in these partnerships is to provide systemic and sustainable solutions that contribute to the measurable enhancement of a partner agency's disease surveillance efforts.

  12. Experimental and computational investigation of the NASA Low-Speed Centrifugal Compressor flow field

    NASA Technical Reports Server (NTRS)

    Hathaway, M. D.; Chriss, R. M.; Wood, J. R.; Strazisar, A. J.

    1992-01-01

    An experimental and computational investigation of the NASA Low-Speed Centrifugal Compressor (LSCC) flow field has been conducted using laser anemometry and Dawes' 3D viscous code. The experimental configuration consists of a backswept impeller followed by a vaneless diffuser. Measurements of the three-dimensional velocity field were acquired at several measurement planes through the compressor. The measurements describe both the throughflow and secondary velocity field along each measurement plane. In several cases the measurements provide details of the flow within the blade boundary layers. Insight into the complex flow physics within centrifugal compressors is provided by the computational analysis, and assessment of the CFD predictions is provided by comparison with the measurements. Five-hole probe and hot-wire surveys at the inlet and exit to the rotor as well as surface flow visualization along the impeller blade surfaces provide independent confirmation of the laser measurement technique.

  13. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    Cassini imaging science subsystem (ISS) team associate Mike Evans speaks with Cassini NASA Social attendees, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  14. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    Cassini interdisciplinary Titan scientist at Cornell University, Jonathan Lunine, speaks to NASA Social attendees about the Cassini mission, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  15. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    NASA Astrophysics Data System (ADS)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data
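As a rough illustration of why a periodogram search over many light curves is CPU-bound, here is a naive pure-Python Lomb-Scargle periodogram. This is an assumption-laden sketch, not the NASA Star and Exoplanet Database code (which, as the abstract notes, is written in C); it implements the classical Lomb-Scargle power with the standard tau offset:

```python
import math

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle power at each trial frequency (unnormalized).

    t, y: sample times and values; freqs: trial frequencies in cycles per
    unit time. The O(N * len(freqs)) double loop is what makes periodogram
    searches CPU-bound and trivially parallel per frequency (or per star).
    """
    mean = sum(y) / len(y)
    yc = [v - mean for v in y]
    powers = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # Time offset tau makes the sine and cosine terms orthogonal.
        s2 = sum(math.sin(2 * w * ti) for ti in t)
        c2 = sum(math.cos(2 * w * ti) for ti in t)
        tau = math.atan2(s2, c2) / (2 * w)
        cs = [math.cos(w * (ti - tau)) for ti in t]
        sn = [math.sin(w * (ti - tau)) for ti in t]
        ct = sum(v * c for v, c in zip(yc, cs))
        st = sum(v * s for v, s in zip(yc, sn))
        powers.append(0.5 * (ct ** 2 / sum(c * c for c in cs)
                             + st ** 2 / sum(s * s for s in sn)))
    return powers
```

Scanning hundreds of thousands of Kepler light curves over a dense frequency grid with a loop like this is exactly the kind of embarrassingly parallel, compute-heavy workload the workflow schedulers in the abstract are designed to distribute.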

  16. Generic Divide and Conquer Internet-Based Computing

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Radenski, Atanas

    2003-01-01

    The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of Peer to Peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide-and-conquer computations. The service engine is intended to provide free, useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor. Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize idle processor time of
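The divide-and-conquer model that such an architecture generically supports can be sketched as below. The thread pool is only a stand-in for the dynamically changing set of contributor nodes (a real P2P system would also handle node churn, retries, and result verification), and every name here is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def split_to_leaves(task, split, threshold):
    """Divide step: recursively split until sub-tasks are small enough."""
    if len(task) <= threshold:
        return [task]
    leaves = []
    for sub in split(task):
        leaves.extend(split_to_leaves(sub, split, threshold))
    return leaves

def divide_and_conquer(task, split, solve, combine, threshold, workers=4):
    """Conquer step: leaf tasks go to the worker pool (standing in for
    contributed lower-end nodes); partial results are then combined."""
    leaves = split_to_leaves(task, split, threshold)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(solve, leaves))
    return combine(partials)

# Example: summing a large range by recursive halving.
halve = lambda xs: (xs[:len(xs) // 2], xs[len(xs) // 2:])
total = divide_and_conquer(list(range(10000)), halve, sum, sum, threshold=1000)
```

The generic driver takes the three problem-specific callables (split, solve, combine), which is the sense in which a master server can support divide-and-conquer computations from any application domain.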

  17. Production version of the extended NASA-Langley Vortex Lattice FORTRAN computer program. Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Lamar, J. E.; Herbert, H. E.

    1982-01-01

    The latest production version, MARK IV, of the NASA-Langley vortex lattice computer program is summarized. All viable subcritical aerodynamic features of previous versions were retained. This version extends the previously documented program capabilities to four planforms, 400 panels, and enables the user to obtain vortex-flow aerodynamics on cambered planforms, flowfield properties off the configuration in attached flow, and planform longitudinal load distributions.

  18. NASA thesaurus aeronautics vocabulary

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The controlled vocabulary used by the NASA Scientific and Technical Information effort to index documents in the area of aeronautics is presented. The terms comprise a subset of the 1988 edition of the NASA Thesaurus and its supplements issued through the end of 1990. The Aeronautics Vocabulary contains over 4700 terms presented in a hierarchical display format. In addition to aeronautics per se, the vocabulary covers supporting terminology from areas such as fluid dynamics, propulsion engineering, and test facilities and instrumentation.

  19. Information Systems for NASA's Aeronautics and Space Enterprises

    NASA Technical Reports Server (NTRS)

    Kutler, Paul

    1998-01-01

    The aerospace industry is being challenged to reduce costs and development time as well as utilize new technologies to improve product performance. Information technology (IT) is the key to providing revolutionary solutions to the challenges posed by the increasing complexity of NASA's aeronautics and space missions and the sophisticated nature of the systems that enable them. The NASA Ames vision is to develop technologies enabling the information age, expanding the frontiers of knowledge for aeronautics and space, improving America's competitive position, and inspiring future generations. Ames' missions to accomplish that vision include: 1) performing research to support the American aviation community through the unique integration of computation, experimentation, simulation and flight testing; 2) studying the health of our planet, understanding living systems in space and the origins of the universe, and developing technologies for space flight; and 3) researching, developing, and delivering information technologies and applications. Information technology may be defined as the use of advanced computing systems to generate data, analyze data, transform data into knowledge, and to serve as an aid in the decision-making process. The knowledge from transformed data can be displayed in visual, virtual, and multimedia environments. The decision-making process can be fully autonomous or aided by cognitive processes, i.e., computational aids designed to leverage human capacities. IT systems can learn as they go, developing the capability to make decisions or aid the decision-making process on the basis of experiences gained using limited data inputs. In the future, information systems will be used to aid space mission synthesis, virtual aerospace system design, aid damaged aircraft during landing, perform robotic surgery, and monitor the health and status of spacecraft and planetary probes. NASA Ames through the Center of Excellence for Information Technology Office is leading the

  20. The NASA Space Communications Data Networking Architecture

    NASA Technical Reports Server (NTRS)

    Israel, David J.; Hooke, Adrian J.; Freeman, Kenneth; Rush, John J.

    2006-01-01

    The NASA Space Communications Architecture Working Group (SCAWG) has recently been developing an integrated agency-wide space communications architecture in order to provide the necessary communication and navigation capabilities to support NASA's new Exploration and Science Programs. A critical element of the space communications architecture is the end-to-end Data Networking Architecture, which must provide a wide range of services required for missions ranging from planetary rovers to human spaceflight, and from sub-orbital space to deep space. Requirements for a higher degree of user autonomy and interoperability between a variety of elements must be accommodated within an architecture that necessarily features minimum operational complexity. The architecture must also be scalable and evolvable to meet mission needs for the next 25 years. This paper will describe the recommended NASA Data Networking Architecture, present some of the rationale for the recommendations, and will illustrate an application of the architecture to example NASA missions.

  1. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

NASA emphasizes crew safety and system reliability, but several unfortunate failures have occurred. The Apollo 1 fire was not anticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of the shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for the shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to the deliberate dismantling of traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  2. End-to-End Modeling with the Heimdall Code to Scope High-Power Microwave Systems

    DTIC Science & Technology

    2007-06-01

END-TO-END MODELING WITH THE HEIMDALL CODE TO SCOPE HIGH-POWER MICROWAVE SYSTEMS. John A. Swegle, Savannah River National Laboratory, 743A...describe the expert-system code HEIMDALL, which is used to model full high-power microwave systems using over 60 systems-engineering models, developed in...of our calculations of the mass of a Supersystem producing 500-MW, 15-ns output pulses in the X band for bursts of 1 s, interspersed with 10-s

  3. NASA and Industry Benefits of ACTS High Speed Network Interoperability Experiments

    NASA Technical Reports Server (NTRS)

    Zernic, M. J.; Beering, D. R.; Brooks, D. E.

    2000-01-01

This paper provides synopses of the design, implementation, and results of key high data rate communications experiments utilizing the technologies of NASA's Advanced Communications Technology Satellite (ACTS). Specifically, the network protocol and interoperability performance aspects will be highlighted. The objectives of these key experiments will be discussed in their relevant context to NASA missions, as well as to the broader communications industry. Discussion of the experiment implementation will highlight the technical aspects of hybrid network connectivity, a variety of high-speed interoperability architectures, a variety of network node platforms, protocol layers, internet-based applications, and new work focused on distinguishing between link errors and congestion. In addition, this paper describes the impact of leveraging government-industry partnerships to achieve technical progress and forge synergistic relationships. These relationships will be the key to success as NASA seeks to combine commercially available technology with its own internal technology developments to realize more robust and cost-effective communications for space operations.

  4. Storage system software solutions for high-end user needs

    NASA Technical Reports Server (NTRS)

    Hogan, Carole B.

    1992-01-01

    Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.

  5. 1999 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    Hahne, David E. (Editor)

    1999-01-01

NASA's High-Speed Research Program sponsored the 1999 Aerodynamic Performance Technical Review on February 8-12, 1999 in Anaheim, California. The review was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in the areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization), High Lift, and Flight Controls. The review objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single and midpoint optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT simulation results were presented, along with executive summaries for all the Aerodynamic Performance technology areas. The HSR Aerodynamic Performance Technical Review was held simultaneously with the annual review of the following airframe technology areas: Materials and Structures, Environmental Impact, Flight Deck, and Technology Integration. Thus, a fourth objective of the Review was to promote synergy between the Aerodynamic Performance technology area and the other technology areas of the HSR Program. This Volume 2/Part 2 publication covers the tools and methods development session.

  6. Xenon Acquisition Strategies for High-Power Electric Propulsion NASA Missions

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.; Unfried, Kenneth G.

    2015-01-01

Solar electric propulsion (SEP) has been used for station-keeping of geostationary communications satellites since the 1980s. Solar electric propulsion has also benefitted from success on NASA science missions such as Deep Space One and Dawn. The xenon propellant loads for these applications have been in the 100s of kilograms range. Recent studies performed for NASA's Human Exploration and Operations Mission Directorate (HEOMD) have demonstrated that SEP is critically enabling for both near-term and future exploration architectures. The high payoff for both human and science exploration missions and technology investment from NASA's Space Technology Mission Directorate (STMD) are providing the necessary convergence and impetus for a 30-kilowatt-class SEP mission. Multiple 30-50-kilowatt Solar Electric Propulsion Technology Demonstration Mission (SEP TDM) concepts have been developed based on the maturing electric propulsion and solar array technologies by STMD, with recent efforts focusing on an Asteroid Redirect Robotic Mission (ARRM). Xenon is the optimal propellant for the existing state-of-the-art electric propulsion systems considering efficiency, storability, and contamination potential. NASA mission concepts developed, and those proposed by contracted efforts, for the 30-kilowatt-class demonstration have a range of xenon propellant loads from 100s of kilograms up to 10,000 kilograms. This paper examines the status of the xenon industry worldwide, including historical xenon supply and pricing. The paper will provide updated information on the xenon market relative to previous papers that discussed xenon production relative to NASA mission needs. The paper will discuss the various approaches for acquiring on the order of 10 metric tons of xenon propellant to support potential near-term NASA missions. Finally, the paper will discuss acquisition strategies for larger NASA missions requiring 100s of metric tons of xenon.

  7. The Nasa-Isro SAR Mission Science Data Products and Processing Workflows

    NASA Astrophysics Data System (ADS)

    Rosen, P. A.; Agram, P. S.; Lavalle, M.; Cohen, J.; Buckley, S.; Kumar, R.; Misra-Ray, A.; Ramanujam, V.; Agarwal, K. M.

    2017-12-01

The NASA-ISRO SAR (NISAR) Mission is currently in the development phase and in the process of specifying its suite of data products and algorithmic workflows, responding to inputs from the NISAR Science and Applications Team. NISAR will provide raw data (Level 0), full-resolution complex imagery (Level 1), and interferometric and polarimetric image products (Level 2) for the entire data set, in both natural radar and geocoded coordinates. NASA and ISRO are coordinating the formats, metadata layers, and algorithms for these products, for both the NASA-provided L-band radar and the ISRO-provided S-band radar. Higher-level products will also be generated for the purpose of calibration and validation over large areas of Earth, including tectonic plate boundaries, ice sheets and sea ice, and areas of ecosystem disturbance and change. This level of comprehensive product generation is unprecedented for SAR missions and leads to storage and processing challenges for the production system and the archive center. Further, recognizing the potential to support applications that require low-latency product generation and delivery, the NISAR team is optimizing the entire end-to-end ground data system for such response, including exploring the advantages of cloud-based processing, algorithmic acceleration using GPUs, and on-demand processing schemes that minimize computational and transport costs but allow rapid delivery to science and applications users. This paper will review the current products and workflows, and discuss the scientific and operational trade space of mission capabilities.

  8. High-Performance Computing User Facility | Computational Science | NREL

    Science.gov Websites

The High-Performance Computing (HPC) User Facility at NREL provides computing systems, including the Peregrine supercomputer and the Gyrfalcon Mass Storage System, along with information about these systems and how to access them.

  9. Cloud Computing for DoD

    DTIC Science & Technology

    2012-05-01

NASA Nebula Platform • Cloud computing pilot program at NASA Ames • Integrates open-source components into seamless, self...Mission support • Education and public outreach (NASA Nebula, 2010). NSF Supported Cloud Research • Support for Cloud Computing in...Mell, P. & Grance, T. (2011). The NIST Definition of Cloud Computing. NIST Special Publication 800-145 • NASA Nebula (2010). Retrieved from

  10. NASA thrusts in high-speed aeropropulsion research and development: An overview

    NASA Technical Reports Server (NTRS)

    Ziemianski, Joseph A.

    1990-01-01

NASA is conducting aeronautical research over a broad range of Mach numbers. In addition to the advanced conventional takeoff and landing (CTOL) propulsion research described elsewhere, NASA Lewis has intensified its efforts towards propulsion technology for selected high-speed flight applications. In a companion program, NASA Langley has also accomplished significant research in supersonic combustion ramjet (SCRAM) propulsion. An unclassified review is presented of the propulsion research results that are applicable for supersonic to hypersonic vehicles. This overview not only provides a preview of the more detailed presentations which follow, it also presents a viewpoint on future research directions by calling attention to the unique cycles, components, and facilities involved in this expanding area of work.

  11. Data Serving Climate Simulation Science at the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2011-01-01

    The NASA Center for Climate Simulation (NCCS) provides high performance computational resources, a multi-petabyte archive, and data services in support of climate simulation research and other NASA-sponsored science. This talk describes the NCCS's data-centric architecture and processing, which are evolving in anticipation of researchers' growing requirements for higher resolution simulations and increased data sharing among NCCS users and the external science community.

  12. NASA Weather Support 2017

    NASA Technical Reports Server (NTRS)

    Carroll, Matt

    2017-01-01

In the mid-to-late 1980s, as NASA was studying ways to improve weather forecasting capabilities to reduce excessive weather launch delays and excessive weather Launch Commit Criteria (LCC) waivers, the Challenger accident and the AC-67 mishap occurred.[1] NASA and USAF weather personnel had advance knowledge of the extremely high levels of weather hazards that ultimately caused or contributed to both of these accidents. In both cases, key knowledge of the risks posed by violations of weather LCC was not in the possession of final decision makers on the launch teams. In addition to convening the mishap boards for these two lost missions, NASA convened expert meteorological boards focusing on weather support. These meteorological boards recommended the development of a dedicated organization with the highest levels of weather expertise and influence to support all of American spaceflight. NASA immediately established the Weather Support Office (WSO) in the Office of Space Flight (OSF) and, in coordination with the United States Air Force (USAF), initiated an overhaul of the organization and an improvement in the technology used for weather support, as recommended. Soon after, the USAF established a senior civilian Launch Weather Officer (LWO) position to provide meteorological support and continuity of weather expertise and knowledge over time. The Applied Meteorology Unit (AMU) was established by NASA, the USAF, and the National Weather Service to support initiatives to place new tools and methods into an operational status. At the end of the Shuttle Program, after several weather office reorganizations, the WSO function had been assigned to a weather branch at Kennedy Space Center (KSC). This branch was dismantled in steps due to further reorganization, loss of key personnel, and loss of budget line authority. NASA is facing the loss of sufficient expertise and leadership required to provide current levels of weather support. The recommendation proposed

  13. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 23: Information technology and aerospace knowledge diffusion: Exploring the intermediary-end user interface in a policy framework

    NASA Technical Reports Server (NTRS)

    Pinelli, Thomas E.; Barclay, Rebecca O.; Bishop, Ann P.; Kennedy, John M.

    1992-01-01

Federal attempts to stimulate technological innovation have been unsuccessful because of the application of an inappropriate policy framework that lacks conceptual and empirical knowledge of the process of technological innovation and fails to acknowledge the relationship between knowledge production, transfer, and use as equally important components of the process of knowledge diffusion. It is argued that the potential contributions of high-speed computing and networking systems will be diminished unless empirically derived knowledge about the information-seeking behavior of the members of the social system is incorporated into a new policy framework. Findings from the NASA/DoD Aerospace Knowledge Diffusion Research Project are presented in support of this assertion.

  14. NASA/DoD Aerospace Knowledge Diffusion Research Project. XXIII - Information technology and aerospace knowledge diffusion: Exploring the intermediary-end user interface in a policy framework

    NASA Technical Reports Server (NTRS)

    Pinelli, Thomas E.; Barclay, Rebecca O.; Bishop, Ann P.; Kennedy, John M.

    1992-01-01

    Federal attempts to stimulate technological innovation have been unsuccessful because of the application of an inappropriate policy framework that lacks conceptual and empirical knowledge of the process of technological innovation and fails to acknowledge the relationship between knowledge production, transfer, and use as equally important components of the process of knowledge diffusion. This article argues that the potential contributions of high-speed computing and networking systems will be diminished unless empirically derived knowledge about the information-seeking behavior of members of the social system is incorporated into a new policy framework. Findings from the NASA/DoD Aerospace Knowledge Diffusion Research Project are presented in support of this assertion.

  15. Control research in the NASA high-alpha technology program

    NASA Technical Reports Server (NTRS)

    Gilbert, William P.; Nguyen, Luat T.; Gera, Joseph

    1990-01-01

    NASA is conducting a focused technology program, known as the High-Angle-of-Attack Technology Program, to accelerate the development of flight-validated technology applicable to the design of fighters with superior stall and post-stall characteristics and agility. A carefully integrated effort is underway combining wind tunnel testing, analytical predictions, piloted simulation, and full-scale flight research. A modified F-18 aircraft has been extensively instrumented for use as the NASA High-Angle-of-Attack Research Vehicle used for flight verification of new methods and concepts. This program stresses the importance of providing improved aircraft control capabilities both by powered control (such as thrust-vectoring) and by innovative aerodynamic control concepts. The program is accomplishing extensive coordinated ground and flight testing to assess and improve available experimental and analytical methods and to develop new concepts for enhanced aerodynamics and for effective control, guidance, and cockpit displays essential for effective pilot utilization of the increased agility provided.

  16. NASA STI Program Coordinating Council Eleventh Meeting: NASA STI Modernization Plan

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The theme of this NASA Scientific and Technical Information Program Coordinating Council Meeting was the modernization of the STI Program. Topics covered included the activities of the Engineering Review Board in the creation of the Infrastructure Upgrade Plan, the progress of the RECON Replacement Project, the use and status of Electronic SCAN (Selected Current Aerospace Notices), the Machine Translation Project, multimedia, electronic document interchange, the NASA Access Mechanism, computer network upgrades, and standards in the architectural effort.

  17. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    NASA Astrophysics Data System (ADS)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
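The throughput figures quoted in this record can be cross-checked with a short back-of-envelope script (a sketch only; the total Kepler target-star count of ~200,000 used below is an assumed round number, not stated in the record):

```python
# Back-of-envelope throughput check for the "shallow" FLTI experiment.
# Figures from the record: ~16 injections per core per hour, ~2000
# injections per target star, 16% of targets, ~200 hours of wall-clock
# time. The total Kepler target count below is an assumption.

INJECTIONS_PER_CORE_HOUR = 16
INJECTIONS_PER_STAR = 2000
KEPLER_TARGETS = 200_000       # assumed total target-star count
FRACTION_OF_TARGETS = 0.16     # 16% of targets, per the record
WALLCLOCK_HOURS = 200          # stated experiment duration

core_hours_per_star = INJECTIONS_PER_STAR / INJECTIONS_PER_CORE_HOUR
n_stars = int(FRACTION_OF_TARGETS * KEPLER_TARGETS)
total_core_hours = n_stars * core_hours_per_star
cores_needed = total_core_hours / WALLCLOCK_HOURS

print(f"{core_hours_per_star:.0f} core-hours per star")
print(f"{total_core_hours:,.0f} core-hours in total")
print(f"~{cores_needed:,.0f} cores busy for {WALLCLOCK_HOURS} h")
```

Under these assumptions the experiment consumes 125 core-hours per star and would keep roughly 20,000 cores busy for the stated 200 hours, which illustrates why the stripped-down transit search was needed to make the experiment affordable even on Pleiades.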

  18. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Volume 2: Baseline architecture report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  19. Space Station Simulation Computer System (SCS) study for NASA/MSFC. Volume 1: Baseline architecture report

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA's Space Station Freedom Program (SSFP) planning efforts have identified a need for a payload training simulator system to serve as both a training facility and as a demonstrator to validate operational concepts. The envisioned MSFC Payload Training Complex (PTC) required to meet this need will train the Space Station payload scientists, station scientists, and ground controllers to operate the wide variety of experiments that will be onboard the Space Station Freedom. The Simulation Computer System (SCS) is made up of the computer hardware, software, and workstations that will support the Payload Training Complex at MSFC. The purpose of this SCS Study is to investigate issues related to the SCS, alternative requirements, simulator approaches, and state-of-the-art technologies to develop candidate concepts and designs.

  20. A New Time Measurement Method Using a High-End Global Navigation Satellite System to Analyze Alpine Skiing

    ERIC Educational Resources Information Center

    Supej, Matej; Holmberg, Hans-Christer

    2011-01-01

    Accurate time measurement is essential to temporal analysis in sport. This study aimed to (a) develop a new method for time computation from surveyed trajectories using a high-end global navigation satellite system (GNSS), (b) validate its precision by comparing GNSS with photocells, and (c) examine whether gate-to-gate times can provide more…
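The core of such a gate-timing method can be sketched as linear interpolation of the crossing time where the surveyed trajectory intersects a gate line (an illustrative sketch only; the paper's actual algorithm and all names below are assumptions, not taken from this record):

```python
# Hypothetical sketch of gate-to-gate timing from a surveyed GNSS
# trajectory: find where the track crosses a gate line and linearly
# interpolate the crossing time between the two bracketing fixes.
# The gate is modeled as a line through point `g` with normal `n`.

def gate_crossing_time(track, g, n):
    """track: list of (t, x, y) fixes; returns interpolated crossing time."""
    def side(x, y):  # signed distance of a point from the gate line
        return (x - g[0]) * n[0] + (y - g[1]) * n[1]
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        d0, d1 = side(x0, y0), side(x1, y1)
        if d0 <= 0 < d1 or d1 <= 0 < d0:     # sign change => crossing
            frac = d0 / (d0 - d1)            # linear interpolation factor
            return t0 + frac * (t1 - t0)
    return None  # track never crossed the gate

# 10 Hz fixes moving in +x; gate is the vertical line x = 1.5
track = [(0.0, 0.0, 0.0), (0.1, 1.0, 0.0), (0.2, 2.0, 0.0)]
t_cross = gate_crossing_time(track, g=(1.5, 0.0), n=(1.0, 0.0))
print(t_cross)
```

Gate-to-gate time is then simply the difference between two such crossing times, which is why interpolation between fixes matters: it recovers sub-sample timing precision from a finite GNSS sampling rate.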

  1. NASA Technology Investments in Electric Propulsion: New Directions in the New Millennium

    NASA Technical Reports Server (NTRS)

    Sankovic, John M.

    2002-01-01

The last decade was a period of unprecedented acceptance of NASA-developed electric propulsion by the user community. The benefits of high-performance electric propulsion systems are now widely recognized, and new technologies have been accepted across the community. NASA clearly recognizes the need for new, high-performance electric propulsion technologies for future solar system missions and is sponsoring aggressive efforts in this area. These efforts are mainly conducted under the Office of Aerospace Technology. Plans over the next six years include the development of next-generation ion thrusters for end-of-decade missions. Additional efforts are planned for the development of very high power thrusters, including magnetoplasmadynamic, pulsed inductive, and VASIMR, and clusters of Hall thrusters. In addition to the in-house technology efforts, NASA continues to work closely with both supplier and user communities to maximize the acceptance of new technology in a timely and cost-effective manner. This paper provides an overview of NASA's activities in the area of electric propulsion with an emphasis on future program directions.

  2. Cassini End of Mission

    NASA Image and Video Library

    2017-09-15

    Cassini program manager at JPL, Earl Maize, center row, calls out the end of the Cassini mission, Friday, Sept. 15, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators deliberately plunged the spacecraft into Saturn, as Cassini gathered science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  3. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

Cassini imaging science subsystem (ISS) team associate Mike Evans discusses an image of Saturn's moon Daphnis with Cassini NASA Social attendees, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  4. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

Cassini NASA Social attendees speak with members of the Cassini mission team in the Charles Elachi Mission Control Center in the Space Flight Operation Center, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  5. Benefit from NASA

    NASA Image and Video Library

    1998-01-01

    Don Sirois, an Auburn University research associate, and Bruce Strom, a mechanical engineering Co-Op Student, are evaluating the dimensional characteristics of an aluminum automobile engine casting. More accurate metal casting processes may reduce the weight of some cast metal products used in automobiles, such as engines. Research in low gravity has taken an important first step toward making metal products used in homes, automobiles, and aircraft less expensive, safer, and more durable. Auburn University and industry are partnering with NASA to develop one of the first accurate computer model predictions of molten metals and molding materials used in a manufacturing process called casting. Ford Motor Company's casting plant in Cleveland, Ohio is using NASA-sponsored computer modeling information to improve the casting process of automobile and light-truck engine blocks.

  6. Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    NASA Technical Reports Server (NTRS)

    Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.

    1992-01-01

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

  7. Improving Situational Awareness for First Responders via Mobile Computing

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; Mah, Robert W.; Papasin, Richard; Del Mundo, Rommel; McIntosh, Dawn M.; Jorgensen, Charles

    2005-01-01

This project looks to improve first responder situational awareness using tools and techniques of mobile computing. The prototype system combines wireless communication, real-time location determination, digital imaging, and three-dimensional graphics. Responder locations are tracked in an outdoor environment via GPS and uploaded to a central server via GPRS or an 802.11 network. Responders can also wirelessly share digital images and text reports, both with other responders and with the incident commander. A pre-built three-dimensional graphics model of a particular emergency scene is used to visualize responder and report locations. Responders have a choice of information end points, ranging from programmable cellular phones to tablet computers. The system also employs location-aware computing to make responders aware of particular hazards as they approach them. The prototype was developed in conjunction with the NASA Ames Disaster Assistance and Rescue Team and has undergone field testing during responder exercises at NASA Ames.

  8. Improving Situational Awareness for First Responders via Mobile Computing

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; Mah, Robert W.; Papasin, Richard; Del Mundo, Rommel; McIntosh, Dawn M.; Jorgensen, Charles

    2006-01-01

    This project looks to improve first responder situational awareness and incident command through an appropriately managed flow of information, using mobile computing techniques. The prototype system combines wireless communication, real-time location determination, digital imaging, and three-dimensional graphics. Responder locations are tracked in an outdoor environment via GPS and uploaded to a central server via GPRS or an 802.11 network. Responders can also wirelessly share digital images and text reports, both with other responders and with the incident commander. A pre-built three-dimensional graphics model of the emergency scene is used to visualize responder and report locations. Responders have a choice of information end points, ranging from programmable cellular phones to tablet computers. The system also employs location-aware computing to make responders aware of particular hazards as they approach them. The prototype was developed in conjunction with the NASA Ames Disaster Assistance and Rescue Team and has undergone field testing during responder exercises at NASA Ames.

  9. Ceramic end seal design for high temperature high voltage nuclear instrumentation cables

    DOEpatents

    Meiss, James D.; Cannon, Collins P.

    1979-01-01

    A coaxial, hermetically sealed end structure is described for electrical instrumentation cables. A generally tubular ceramic body is hermetically sealed within a tubular sheath which is in turn sealed to the cable sheath. One end of the elongated tubular ceramic insulator is sealed to a metal end cap. The other end of the elongated tubular insulator has an end surface which is shaped concave relative to a central conductor which extends out of this end surface. When the end seal is hermetically sealed to an instrumentation cable device and the central conductor is maintained at a high positive potential relative to the tubular metal sheath, the electric field between the central conductor and the outer sheath tends to collect electrons from the concave end surface of the insulator. This minimizes breakdown pulse noise generation when instrumentation potentials are applied to the central conductor.

  10. Evaluating the Efficacy of the Cloud for Cluster Computation

    NASA Technical Reports Server (NTRS)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Compute Cloud (EC2), which promise improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
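The abstract's HPL figures imply the cluster's theoretical peak and per-core rate. A back-of-envelope sketch using only the numbers reported above (the per-core value is derived, not sourced):

```python
# Figures from the abstract: 240 cores, 2 TFLOPS measured HPL, 70% efficiency.
cores = 240
measured_tflops = 2.0              # reported HPL result (Rmax)
efficiency = 0.70                  # reported fraction of theoretical peak

peak_tflops = measured_tflops / efficiency       # implied theoretical peak (Rpeak)
per_core_gflops = peak_tflops * 1000.0 / cores   # implied per-core peak
```

This puts the implied peak near 2.86 TFLOPS, i.e. roughly 12 GFLOPS per core, which is consistent with the RISC/x86 HPC instances of that era.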

  11. NASA's Role in Aeronautics: A Workshop. Volume 6: Aeronautical research

    NASA Technical Reports Server (NTRS)

    1981-01-01

    While each aspect of its aeronautical technology program is important to the current preeminence of the United States in aeronautics, the most essential contributions of NASA derive from its research. Successes and challenges in NASA's efforts to improve civil and military aviation are discussed for the following areas: turbulence, noise, supercritical aerodynamics, computational aerodynamics, fuels, high temperature materials, composite materials, single crystal components, powder metallurgy, and flight controls. Spin offs to engineering and other sciences explored include NASTRAN, lubricants, and composites.

  12. SPENVIS Implementation of End-of-Life Solar Cell Calculations Using the Displacement Damage Dose Methodology

    NASA Technical Reports Server (NTRS)

    Walters, Robert; Summers, Geoffrey P.; Warner, Jeffrey H.; Messenger, Scott; Lorentzen, Justin R.; Morton, Thomas; Taylor, Stephen J.; Evans, Hugh; Heynderickx, Daniel; Lei, Fan

    2007-01-01

    This paper presents a method for using the SPENVIS on-line computational suite to implement the displacement damage dose (D(sub d)) methodology for calculating end-of-life (EOL) solar cell performance for a specific space mission. This paper builds on our previous work that has validated the D(sub d) methodology against both measured space data [1,2] and calculations performed using the equivalent fluence methodology developed by NASA JPL [3]. For several years, the space solar community has considered general implementation of the D(sub d) method, but no computer program exists to enable this implementation. In a collaborative effort, NRL, NASA and OAI have produced the Solar Array Verification and Analysis Tool (SAVANT) under NASA funding, but this program has not progressed beyond the beta stage [4]. The SPENVIS suite with the Multi-Layered Shielding Simulation Software (MULASSIS) contains all of the necessary components to implement the D(sub d) methodology in a format complementary to that of SAVANT [5]. NRL is currently working with ESA and BIRA to include the D(sub d) method of solar cell EOL calculations as an integral part of SPENVIS. This paper describes how this can be accomplished.
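At the heart of the D(sub d) method is a single characteristic degradation curve measured on the ground and then entered at the mission's total displacement damage dose. A minimal sketch of that curve, assuming illustrative fit parameters C and D(sub x) (real values come from ground-test data for a specific cell type, not from this abstract):

```python
import math

def remaining_factor(dd, c=0.2, dx=1.0e9):
    """Characteristic curve P/P0 = 1 - C*log10(1 + Dd/Dx).

    dd : displacement damage dose for the mission (MeV/g)
    c, dx : ILLUSTRATIVE fit constants; in practice fitted to
            ground irradiation data for the cell technology.
    """
    return 1.0 - c * math.log10(1.0 + dd / dx)

# Beginning of life: no dose, no degradation.
bol = remaining_factor(0.0)
# One decade above Dx: degradation of roughly one "C" per decade.
eol = remaining_factor(9.0e9)
```

The EOL power is then the beginning-of-life power multiplied by this remaining factor at the mission dose.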

  13. Evolution of Software-Only-Simulation at NASA IV and V

    NASA Technical Reports Server (NTRS)

    McCarty, Justin; Morris, Justin; Zemerick, Scott

    2014-01-01

    Software-Only-Simulations have been an emerging but quickly developing field of study throughout NASA. The NASA Independent Verification & Validation (IV&V) Independent Test Capability (ITC) team has been rapidly building a collection of simulators for a wide range of NASA missions. ITC specializes in full end-to-end simulations that enable developers, V&V personnel, and operators to test-as-you-fly. In four years, the team has delivered a wide variety of spacecraft simulations, ranging from low-complexity science missions such as the Global Precipitation Measurement (GPM) satellite and the Deep Space Climate Observatory (DSCOVR) to extremely complex missions such as the James Webb Space Telescope (JWST) and the Space Launch System (SLS). This paper describes the evolution of ITC's technologies and processes that have been utilized to design, implement, and deploy end-to-end simulation environments for various NASA missions. A comparison of mission simulators is discussed, with focus on technology and lessons learned in complexity, hardware modeling, and continuous integration. The paper also describes the methods for executing the missions' unmodified flight software binaries (not cross-compiled) for verification and validation activities.

  14. Highly Loaded Composite Strut Test Results

    NASA Technical Reports Server (NTRS)

    Wu, K. C.; Jegley, Dawn C.; Barnard, Ansley; Phelps, James E.; McKeney, Martin J.

    2011-01-01

    Highly loaded composite struts from a proposed truss-based Altair lunar lander descent stage concept were selected for development under NASA's Advanced Composites Technology program. Predicted compressive member forces during launch and ascent, exceeding 100,000 lbs in magnitude, were much greater than the tensile loads. Therefore, compressive failure modes, including structural stability, were primary design considerations. NASA's industry partner designed and built highly loaded struts that were delivered to NASA for testing. Their design, fabricated on a washout mandrel, had a uniform-diameter composite tube with composite tapered ends. Each tapered end contained a titanium end fitting with facing conical ramps that were overlaid and overwrapped with composite materials. The highly loaded struts were loaded in both tension and compression, with ultimate failure produced in compression. Results for the two struts tested are presented and discussed, along with measured deflections, strains, and observed failure mechanisms.
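The structural-stability concern the abstract cites is, at its simplest, an Euler column-buckling check on the tube. A sketch with entirely assumed geometry and modulus (these are not the Altair strut's actual dimensions or materials), showing the kind of sizing arithmetic involved:

```python
import math

# ASSUMED values for illustration only, not the flight strut design:
E = 9.0e6            # effective axial modulus of the layup, psi
L = 60.0             # pinned-pinned strut length, in
r_o, r_i = 2.0, 1.8  # tube outer / inner radii, in

# Area moment of inertia of a thin-walled circular tube.
I = math.pi / 4.0 * (r_o**4 - r_i**4)

# Euler critical load for a pinned-pinned column: P_cr = pi^2 * E * I / L^2.
P_cr = math.pi**2 * E * I / L**2
```

For these assumed numbers the critical load comes out just above the 100,000 lb compressive requirement, illustrating why stability, rather than material strength in tension, sized the member.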

  15. Real-Time On-Board Airborne Demonstration of High-Speed On-Board Data Processing for Science Instruments (HOPS)

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April 2012. The HOPS team recently completed two flight campaigns during the summer of 2014 on two different aircraft with two different science instruments. The first flight campaign was in July 2014, based at NASA Langley Research Center (LaRC) in Hampton, VA, on NASA's HU-25 aircraft. The science instrument that flew with HOPS was the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) funded by NASA's Instrument Incubator Program (IIP). The second campaign was in August 2014, based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA, on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Excelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm real-time on-board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.

  16. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2005-01-01

    NASA (National Aeronautics and Space Administration) relies more and more on software to control, monitor, and verify its safety critical systems, facilities, and operations. Since the 1960s there has hardly been a spacecraft (manned or unmanned) launched that did not have a computer on board providing vital command and control services. Despite this growing dependence on software control and monitoring, there has been no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard (NASA-STD-8719.13B) has recently undergone a significant update in an attempt to provide that consistency. This paper will discuss the key features of the new NASA Software Safety Standard. It will start with a brief history of the use and development of software in safety critical applications at NASA. It will then give a brief overview of the NASA Software Working Group and the approach it took to revise the software engineering process across the Agency.

  17. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with the high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).
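The eigenvalue computation mentioned above has a well-known closed form: for the one-dimensional Euler equations the flux Jacobian has eigenvalues u - c, u, and u + c. A numerical spot-check of that symbolic result (the state values here are illustrative, not from the workshop problems):

```python
import numpy as np

# Illustrative state: rho, u, p chosen arbitrarily; gamma for a perfect gas.
gamma = 1.4
rho, u, p = 1.0, 0.5, 1.0
E = p / (gamma - 1.0) + 0.5 * rho * u**2   # total energy per unit volume
c = np.sqrt(gamma * p / rho)               # speed of sound

# Flux Jacobian dF/dU in conservative variables U = (rho, rho*u, E).
A = np.array([
    [0.0, 1.0, 0.0],
    [0.5 * (gamma - 3.0) * u**2, (3.0 - gamma) * u, gamma - 1.0],
    [(gamma - 1.0) * u**3 - gamma * u * E / rho,
     gamma * E / rho - 1.5 * (gamma - 1.0) * u**2,
     gamma * u],
])

# Numerical eigenvalues, sorted; analytically these are u - c, u, u + c.
eig = np.sort(np.linalg.eigvals(A).real)
```

The characteristic speeds u and u ± c are exactly what the radiation (non-reflecting) boundary conditions in that benchmark category are built from.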

  18. Exploratory Mixed-Method Study of End-User Computing within an Information Technology Infrastructure Library U.S. Army Service Delivery Environment

    ERIC Educational Resources Information Center

    Manzano, Sancho J., Jr.

    2012-01-01

    Empirical studies have been conducted on what is known as end-user computing from as early as the 1980s to present-day IT employees. There have been many studies on using quantitative instruments by Cotterman and Kumar (1989) and Rockart and Flannery (1983). Qualitative studies on end-user computing classifications have been conducted by…

  19. End-to-End Trajectory for Conjunction Class Mars Missions Using Hybrid Solar-Electric/Chemical Transportation System

    NASA Technical Reports Server (NTRS)

    Chai, Patrick R.; Merrill, Raymond G.; Qu, Min

    2016-01-01

    NASA's Human Spaceflight Architecture Team is developing a reusable hybrid transportation architecture in which both chemical and solar-electric propulsion systems are used to deliver crew and cargo to exploration destinations. By combining chemical and solar-electric propulsion into a single spacecraft and applying each where it is most effective, the hybrid architecture enables a series of Mars trajectories that are more fuel-efficient than an all-chemical propulsion architecture without significant increases to trip time. The architecture calls for the aggregation of exploration assets in cislunar space prior to departure for Mars and utilizes high-energy lunar-distant high Earth orbits for the final staging prior to departure. This paper presents the detailed analysis of various cislunar operations for the Evolvable Mars Campaign (EMC) Hybrid architecture as well as the results of the higher-fidelity end-to-end trajectory analysis to understand the implications of the design choices on the Mars exploration campaign.

  20. Comparison of High-Fidelity Computational Tools for Wing Design of a Distributed Electric Propulsion Aircraft

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Derlaga, Joseph M.; Stoll, Alex M.

    2017-01-01

    A variety of tools, from fundamental to high-order, have been used to better understand applications of distributed electric propulsion to aid the wing and propulsion system design of the Leading Edge Asynchronous Propulsion Technology (LEAPTech) project and the X-57 Maxwell airplane. Three high-fidelity, Navier-Stokes computational fluid dynamics codes used during the project, with results presented here, are FUN3D, STAR-CCM+, and OVERFLOW. These codes employ various turbulence models to predict fully turbulent and transitional flow. Results from these codes are compared for two distributed electric propulsion configurations: the wing tested at NASA Armstrong on the Hybrid-Electric Integrated Systems Testbed truck, and the wing designed for the X-57 Maxwell airplane. Results from these computational tools for the two high-lift wings compare reasonably well with one another. All of the computational codes confirmed that the X-57 wing and distributed electric propulsion system design achieves or exceeds the required C(sub L) = 3.95 for the target stall speed.
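The requirement of a lift coefficient of 3.95 follows from the standard relation between stall speed and wing loading, C_L = 2W / (rho * V^2 * S). A sketch with purely illustrative numbers (these are assumed values, not published X-57 specifications), showing why a small high-aspect-ratio wing at a low stall speed demands such a high C_L:

```python
# ASSUMED aircraft parameters for illustration only:
W = 13_000.0     # weight, N
rho = 1.225      # sea-level air density, kg/m^3
S = 6.2          # wing reference area, m^2
V_stall = 30.0   # target stall speed, m/s

# Required maximum lift coefficient at the stall condition:
# lift equals weight => C_L = 2W / (rho * V^2 * S)
C_L_required = 2.0 * W / (rho * V_stall**2 * S)
```

With these assumed numbers the required C_L lands near 3.8, the same regime as the 3.95 target above, which is well beyond what an unblown wing of this size can deliver and is why the distributed propellers are used to augment lift.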

  1. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    DTIC Science & Technology

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ...Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at...atomic and molecular level, he said. He noted that “every general would like to have” a Star Trek -like holodeck, where holographic avatars could

  2. NASA software specification and evaluation system: Software verification/validation techniques

    NASA Technical Reports Server (NTRS)

    1977-01-01

    NASA software requirement specifications were used in the development of a system for validating and verifying computer programs. The software specification and evaluation system (SSES) provides for the effective and efficient specification, implementation, and testing of computer software programs. The system as implemented will produce structured FORTRAN or ANSI FORTRAN programs, but the principles upon which SSES is designed allow it to be easily adapted to other high order languages.

  3. The growth of the UniTree mass storage system at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen

    1993-01-01

    In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPUs on the facility's Cray Y-MP (the primary data source for UniTree), and the necessity for an aggressive regimen for repacking sparse tapes and hierarchical 'vaulting' of old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.

  4. Xenon Acquisition Strategies for High-Power Electric Propulsion NASA Missions

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.; Unfried, Kenneth G.

    2015-01-01

    The benefits of high-power solar electric propulsion (SEP) for both NASA's human and science exploration missions, combined with the technology investment from the Space Technology Mission Directorate, have enabled the development of a 50 kW-class SEP mission. NASA mission concepts developed, including the Asteroid Redirect Robotic Mission, and those proposed by contracted efforts for the 30 kW-class demonstration have a range of xenon propellant loads from hundreds of kilograms up to 10,000 kg. A xenon propellant load of 10 metric tons represents greater than 10% of the global annual production rate of xenon. A single procurement of this size with short-term delivery can disrupt the xenon market, driving up pricing and making the propellant costs for the mission prohibitive. This paper examines the status of the xenon industry worldwide, including historical xenon supply and pricing. The paper discusses approaches for acquiring on the order of 10 MT of xenon propellant considering realistic programmatic constraints to support potential near-term NASA missions. Finally, the paper discusses acquisition strategies for mission campaigns utilizing multiple high-power solar electric propulsion vehicles requiring hundreds of metric tons of xenon over an extended period of time, where a longer-term acquisition approach could be implemented.
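The market-share point above is simple arithmetic: if a 10 MT load exceeds 10% of annual output, global production must be under 100 MT per year. A back-of-envelope sketch assuming an illustrative production figure (the exact number is not given in the abstract):

```python
# ASSUMED global xenon production for illustration; the abstract only
# implies it is below 100 metric tons per year.
annual_production_mt = 70.0
mission_load_mt = 10.0           # single-mission propellant load from the abstract

share = mission_load_mt / annual_production_mt   # fraction of one year's output

# Spreading the buy over several years shrinks the annual market impact:
years = 5
annual_share = share / years
```

Even a modest multi-year spread drops the annual draw below a few percent of supply, which is the core of the longer-term acquisition strategy the paper argues for.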

  5. Data base management system analysis and performance testing with respect to NASA requirements

    NASA Technical Reports Server (NTRS)

    Martin, E. A.; Sylto, R. V.; Gough, T. L.; Huston, H. A.; Morone, J. J.

    1981-01-01

    Several candidate Data Base Management Systems (DBMSs) that could support the NASA End-to-End Data System's Integrated Data Base Management System (IDBMS) Project, later rescoped and renamed the Packet Management System (PMS), were evaluated. The candidate systems, which had to run on the Digital Equipment Corporation VAX 11/780 computer system, were ORACLE, SEED, and RIM. ORACLE and RIM are both based on the relational data base model, while SEED employs a CODASYL network approach. A single data base application which managed stratospheric temperature profiles was studied. The primary reasons for using this application were an insufficient volume of available PMS-like data, a mandate to use actual rather than simulated data, and the abundance of available temperature profile data.

  6. High-Productivity Computing in Computational Physics Education

    NASA Astrophysics Data System (ADS)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at Ben-Gurion University. This elective course for 3rd-year undergraduates and M.Sc. students is taught over one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also deal with High-Productivity Computing. The traditional approach of teaching Computational Physics emphasizes ``Correctness'' and then ``Accuracy,'' and we add ``Performance.'' Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to ``Mini-Courses'' on topics such as: High-Throughput Computing with Condor, Parallel Programming with MPI and OpenMP, How to Build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; instead, it focuses on an integrated approach to solving problems, starting from the physics problem, through the corresponding mathematical solution and numerical scheme, to writing an efficient computer code and finally analysis and visualization.
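The correctness-accuracy-performance progression described above can be sketched on a toy problem (this example is illustrative, not taken from the course materials): a plain-Python trapezoid rule establishes correctness and accuracy, and a vectorized version adds performance without changing the answer.

```python
import math
import numpy as np

def trapezoid_loop(f, a, b, n):
    """Plain-Python trapezoid rule: correct and accurate, but slow at large n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def trapezoid_vec(a, b, n):
    """Vectorized NumPy version of the same rule for f = sin: same answer,
    far better performance for large n."""
    x = np.linspace(a, b, n + 1)
    y = np.sin(x)
    h = (b - a) / n
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# Test problem with a known exact answer: integral of sin over [0, pi] is 2.
loop_result = trapezoid_loop(math.sin, 0.0, math.pi, 1000)
vec_result = trapezoid_vec(0.0, math.pi, 1000)
```

Timing the two versions with `timeit` at large n is the natural follow-on exercise; the numerical results agree to near machine precision while the vectorized form runs orders of magnitude faster.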

  7. Review of NASA's (National Aeronautics and Space Administration) Numerical Aerodynamic Simulation Program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    NASA has planned a supercomputer for computational fluid dynamics research since the mid-1970's. With the approval of the Numerical Aerodynamic Simulation Program as a FY 1984 new start, Congress requested an assessment of the program's objectives, projected short- and long-term uses, program design, computer architecture, user needs, and handling of proprietary and classified information. Specifically requested was an examination of the merits of proceeding with multiple high-speed processor (HSP) systems contrasted with a single high-speed processor system. The panel found NASA's objectives and projected uses sound, and the projected distribution of users as realistic as possible at this stage. The multiple-HSP approach, whereby new, more powerful state-of-the-art HSPs would be integrated into a flexible network, was judged to present major advantages over any single-HSP system.

  8. NASA Rotor 37 CFD Code Validation: Glenn-HT Code

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2010-01-01

    In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.

  9. NASA spinoffs to public service

    NASA Technical Reports Server (NTRS)

    Ault, L. A.; Cleland, J. G.

    1989-01-01

    The National Aeronautics and Space Administration (NASA) Technology Utilization (TU) Division of the Office of Commercial Programs has been quite successful in directing the transfer of technology into the public sector. NASA developments of particular interest have been those in the areas of aerodynamics and aviation transport, safety, sensors, electronics and computing, and satellites and remote sensing. NASA technology has helped law enforcement, firefighting, public transportation, education, search and rescue, and practically every other sector of activity serving the U.S. public. NASA works closely with public service agencies and associations, especially those serving local needs of citizens, to expedite technology transfer benefits. A number of examples demonstrate the technology transfer process and the opportunities NASA spinoffs offer to public service.

  10. Overview of NASA communications infrastructure

    NASA Technical Reports Server (NTRS)

    Arnold, Ray J.; Fuechsel, Charles

    1991-01-01

    The infrastructure of NASA communications systems for effecting coordination across NASA offices and with the national and international research and technological communities is discussed. The offices and networks of the communication system include the Office of Space Science and Applications (OSSA), which manages all NASA missions, and the Office of Space Operations, which furnishes communication support through the NASCOM, the mission critical communications support network, and the Program Support Communications network. The NASA Science Internet was established by OSSA to centrally manage, develop, and operate an integrated computer network service dedicated to NASA's space science and application research. Planned for the future is the National Research and Education Network, which will provide communications infrastructure to enhance science resources at a national level.

  11. An Overview: NASA LeRC Structures Programs

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.

    1998-01-01

    A workshop on National Structures Programs was held, jointly sponsored by the AIAA Structures Technical Committee, the University of Virginia's Center for Advanced Computational Technology and NASA. The Objectives of the Workshop were to: provide a forum for discussion of current Government-sponsored programs in the structures area; identify high potential research areas for future aerospace systems; and initiate suitable interaction mechanisms with the managers of structures programs. The presentations covered structures programs at NASA, DOD (AFOSR, ONR, ARO and DARPA), and DOE. This publication is the presentation of the Structures and Acoustics Division of the NASA Lewis Research Center. The Structures and Acoustics Division has its genesis dating back to 1943. It is responsible for NASA research related to rotating structures and structural hot sections of both airbreathing and rocket engines. The work of the division encompasses but is not limited to aeroelasticity, structural life prediction and reliability, fatigue and fracture, mechanical components such as bearings, gears, and seals, and aeroacoustics. These programs are discussed and the names of responsible individuals are provided for future reference.

  12. NASA Information Technology Implementation Plan

    NASA Technical Reports Server (NTRS)

    2000-01-01

    NASA's Information Technology (IT) resources and IT support continue to be a growing and integral part of all NASA missions. Furthermore, the growing IT support requirements are becoming more complex and diverse. The following are a few examples of the growing complexity and diversity of NASA's IT environment. NASA is conducting basic IT research in the Intelligent Synthesis Environment (ISE) and Intelligent Systems (IS) Initiatives. IT security, infrastructure protection, and privacy of data are requiring more and more management attention and an increasing share of the NASA IT budget. Outsourcing of IT support is becoming a key element of NASA's IT strategy as exemplified by Outsourcing Desktop Initiative for NASA (ODIN) and the outsourcing of NASA Integrated Services Network (NISN) support. Finally, technology refresh is helping to provide improved support at lower cost. Recently the NASA Automated Data Processing (ADP) Consolidation Center (NACC) upgraded its bipolar technology computer systems with Complementary Metal Oxide Semiconductor (CMOS) technology systems. This NACC upgrade substantially reduced the hardware maintenance and software licensing costs, significantly increased system speed and capacity, and reduced customer processing costs by 11 percent.

  13. NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Carter, David; Wetzel, Scott

    2000-01-01

    The NASA SLR Operational Center is responsible for: 1) NASA SLR network control, sustaining engineering, and logistics; 2) ILRS mission operations; and 3) ILRS and NASA SLR data operations. NASA SLR network control and sustaining engineering tasks include technical support, daily system performance monitoring, system scheduling, operator training, station status reporting, system relocation, logistics, and support of the ILRS Networks and Engineering Working Group. These activities ensure the NASA SLR systems are meeting ILRS and NASA mission support requirements. ILRS mission operations tasks include mission planning, mission analysis, mission coordination, development of mission support plans, and support of the ILRS Missions Working Group. These activities ensure that new mission and campaign requirements are coordinated with the ILRS. Global Normal Point (NP) data, NASA SLR Full-Rate (FR) data, and satellite predictions are managed as part of data operations. Part of this operation includes supporting the ILRS Data Formats and Procedures Working Group. Global NP data operations consist of receipt, format and data integrity verification, archiving, and merging. This activity culminates in the daily electronic transmission of NP files to the CDDIS. Currently, all of these functions are automated. However, to ensure the timely and accurate flow of data, regular monitoring and maintenance of the operational software systems, computer systems, and computer networking are performed. Tracking statistics between the stations and the data centers are compared periodically to eliminate lost data. Future activities in this area include sub-daily (i.e., hourly) NP data management, more stringent data integrity tests, and automatic station notification of format and data integrity issues.

  14. Center for Advanced Computational Technology

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design, prototyping, and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  15. U.S. Supersonic Commercial Aircraft: Assessing NASA's High Speed Research Program

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The legislatively mandated objectives of the National Aeronautics and Space Administration (NASA) include "the improvement of the usefulness, performance, speed, safety, and efficiency of aeronautical and space vehicles" and "preservation of the United States' preeminent position in aeronautics and space through research and technology development related to associated manufacturing processes." Most of NASA's activities are focused on the space-related aspects of these objectives. However, NASA also conducts important work related to aeronautics. NASA's High Speed Research (HSR) Program is a focused technology development program intended to enable the commercial development of a high speed (i.e., supersonic) civil transport (HSCT). However, the HSR Program will not design or test a commercial airplane (i.e., an HSCT); it is industry's responsibility to use the results of the HSR Program to develop an HSCT. An HSCT would be a second generation aircraft with much better performance than first generation supersonic transports (i.e., the Concorde and the Soviet Tu-144). The HSR Program is a high risk effort: success requires overcoming many challenging technical problems involving the airframe, propulsion system, and integrated aircraft. The ability to overcome all of these problems to produce an affordable HSCT is far from certain. Phase I of the HSR Program was completed in fiscal year 1995; it produced critical information about the ability of an HSCT to satisfy environmental concerns (i.e., noise and engine emissions). Phase II (the final phase according to current plans) is scheduled for completion in 2002. Areas of primary emphasis are propulsion, airframe materials and structures, flight deck systems, aerodynamic performance, and systems integration.

  16. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    NASA Astrophysics Data System (ADS)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio, in cooperation with its Modeling, Analysis, and Prediction program, intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large, such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users, such as scientists from different domains, policy makers, and teachers. Another obstacle is that access to the high performance computing (HPC) accounts on which the models run can be restrictive, with long wait times in job queues and delays caused by an arduous account-application process, especially for foreign nationals. This project explores the utility of desktop supercomputers in providing a complete, ready-to-use toolkit of climate research products to investigators, along with on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting them. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means of dealing with the data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  17. Computational Structures Technology for Airframes and Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Housner, Jerrold M. (Compiler); Starnes, James H., Jr. (Compiler); Hopkins, Dale A. (Compiler); Chamis, Christos C. (Compiler)

    1992-01-01

    This conference publication contains the presentations and discussions from the joint University of Virginia (UVA)/NASA Workshops. The presentations included NASA Headquarters perspectives on High Speed Civil Transport (HSCT), goals and objectives of the UVA Center for Computational Structures Technology (CST), NASA and Air Force CST activities, CST activities for airframes and propulsion systems in industry, and CST activities at Sandia National Laboratory.

  18. High-Speed Jet Noise Reduction NASA Perspective

    NASA Technical Reports Server (NTRS)

    Huff, Dennis L.; Handy, J. (Technical Monitor)

    2001-01-01

    History shows that the problem of high-speed jet noise reduction is difficult to solve. The good news is that high-performance military aircraft noise is dominated by a single source called 'jet noise' (commercial aircraft have several sources). The bad news is that this source has been the subject of research for the past 50 years, and progress has been incremental. Major jet noise reduction has been achieved by changing the cycle of the engine to reduce the jet exit velocity. Smaller reductions have been achieved using suppression devices such as mixing enhancement and acoustic liners. Significant jet noise reduction without any performance loss is probably not possible! Recent NASA noise reduction research programs include the High Speed Research Program, the Advanced Subsonic Technology Noise Reduction Program, the Aerospace Propulsion and Power Program - Fundamental Noise, and the Quiet Aircraft Technology Program.

  19. NASA directives master list and index

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This Handbook sets forth in two parts the information for the guidance of users of the NASA Management Directives System. Complementary to this Handbook is the NASA Online Directives Information System (NODIS), an electronic computer text retrieval system. The first part contains the Master List of Management Directives in force as of 30 Sep. 1993. The second part contains an Index to NASA Management Directives in force as of 30 Sep. 1993.

  20. Cold-end Subsystem Testing for the Fission Power System Technology Demonstration Unit

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell; Gibson, Marc; Ellis, David; Sanzi, James

    2013-01-01

    The Fission Power System (FPS) Technology Demonstration Unit (TDU) consists of a pumped sodium-potassium (NaK) loop that provides heat to a Stirling Power Conversion Unit (PCU), which converts some of that heat into electricity and rejects the waste heat to a pumped water loop. Each of the TDU subsystems is being tested independently prior to full system testing at the NASA Glenn Research Center. The pumped NaK loop is being tested at NASA Marshall Space Flight Center; the Stirling PCU and electrical controller are being tested by Sunpower Inc.; and the pumped water loop is being tested at Glenn. This paper describes cold-end subsystem setup and testing at Glenn. The TDU cold end has been assembled in Vacuum Facility 6 (VF 6) at Glenn, the same chamber that will be used for TDU testing. Cold-end testing in VF 6 will demonstrate functionality; validate the cold-end fill, drain, and emergency backup systems; and generate pump performance and system pressure drop data used to validate models. In addition, a low-cost proof-of-concept radiator has been built and tested at Glenn, validating the design and demonstrating the feasibility of using low-cost metal radiators as an alternative to high-cost composite radiators in an end-to-end TDU test.

  1. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Director of NASA's Planetary Science Division, Jim Green, left, Cassini program manager at JPL, Earl Maize, second from left, Cassini project scientist at JPL, Linda Spilker, second from right, and principal investigator for the Ion and Neutral Mass Spectrometer (INMS) at the Southwest Research Institute, Hunter Waite, right, are seen during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  2. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Principal investigator for the Ion and Neutral Mass Spectrometer (INMS) at the Southwest Research Institute, Hunter Waite, right, speaks during a press conference previewing Cassini's End of Mission as director of NASA's Planetary Science Division, Jim Green, left, Cassini program manager at JPL, Earl Maize, second from left, and Cassini project scientist at JPL, Linda Spilker, second from right, look on, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  3. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Director of NASA's Planetary Science Division, Jim Green, left, speaks during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Also participating in the press conference were Cassini program manager at JPL, Earl Maize, second from right, Cassini project scientist at JPL, Linda Spilker, second from left, and principal investigator for the Ion and Neutral Mass Spectrometer (INMS) at the Southwest Research Institute, Hunter Waite, right. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  4. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation, and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform, the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
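    The recognition step described above, comparing a measured ISAR image against a database of simulated templates, can be sketched with a simple normalized cross-correlation matcher. This is a hedged illustration of the general technique, not the paper's actual algorithm; the function names, image sizes, and database layout are assumptions.

    ```python
    # Toy template matcher: score a measured image against each database
    # image by correlation coefficient and return the best-matching key.
    import numpy as np

    def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
        """Correlation coefficient between two equally sized images."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def identify_target(measured: np.ndarray, database: dict) -> str:
        """Return the database key whose template best matches the image."""
        return max(database,
                   key=lambda k: normalized_correlation(measured, database[k]))
    ```

    The paper's point about computational burden shows up directly here: the cost grows linearly with database size and image area, which is why the implementation platform matters for real-time identification.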

  5. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation, and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform, the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  6. Activities of the Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  7. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D. G.

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  8. Technology transfer at NASA - A librarian's view

    NASA Technical Reports Server (NTRS)

    Buchan, Ronald L.

    1991-01-01

    The NASA programs, publications, and services promoting the transfer and utilization of aerospace technology developed by and for NASA are briefly surveyed. Topics addressed include the corporate sources of NASA technical information and its interest for corporate users of information services; the IAA and STAR abstract journals; NASA/RECON, NTIS, and the AIAA Aerospace Database; the RECON Space Commercialization file; the Computer Software Management and Information Center file; company information in the RECON database; and services to small businesses. Also discussed are the NASA publications Tech Briefs and Spinoff, the Industrial Applications Centers, NASA continuing bibliographies on management and patent abstracts (indexed using the NASA Thesaurus), the Index to NASA News Releases and Speeches, and the Aerospace Research Information Network (ARIN).

  9. Optical Computers and Space Technology

    NASA Technical Reports Server (NTRS)

    Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela

    1995-01-01

    The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to overcome the severe limitations that conventional electronic logic circuits impose on the growth in speed and complexity of modern computing. This new optical technology has increased the demand for high quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high quality optical materials can be processed in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We will discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties which are quite useful for optical computer technology.

  10. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    NASA Astrophysics Data System (ADS)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. 
Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the

  11. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chokchai "Box" Leangsuksun

    2011-05-31

    Our project is a multi-institutional research effort that adopts an interplay of reliability, availability, and serviceability (RAS) aspects to address resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  12. Cyberinfrastructure for End-to-End Environmental Explorations

    NASA Astrophysics Data System (ADS)

    Merwade, V.; Kumar, S.; Song, C.; Zhao, L.; Govindaraju, R.; Niyogi, D.

    2007-12-01

    The design and implementation of a cyberinfrastructure for End-to-End Environmental Exploration (C4E4) is presented. The C4E4 framework addresses the need for an integrated data/computation platform for studying broad environmental impacts by combining heterogeneous data resources with state-of-the-art modeling and visualization tools. With Purdue being a TeraGrid Resource Provider, C4E4 builds on top of the Purdue TeraGrid data management system and Grid resources, and integrates them through a service-oriented workflow system. It allows researchers to construct environmental workflows for data discovery, access, transformation, modeling, and visualization. Using the C4E4 framework, we have implemented an end-to-end SWAT simulation and analysis workflow that connects our TeraGrid data and computation resources. It enables researchers to conduct comprehensive studies on the impact of land management practices in the St. Joseph watershed using data from various sources in hydrologic, atmospheric, agricultural, and other related disciplines.

  13. Experimental and computational investigation of the NASA low-speed centrifugal compressor flow field

    NASA Technical Reports Server (NTRS)

    Hathaway, Michael D.; Chriss, Randall M.; Wood, Jerry R.; Strazisar, Anthony J.

    1993-01-01

    An experimental and computational investigation of the NASA Lewis Research Center's low-speed centrifugal compressor (LSCC) flow field was conducted using laser anemometry and Dawes' three-dimensional viscous code. The experimental configuration consisted of a backswept impeller followed by a vaneless diffuser. Measurements of the three-dimensional velocity field were acquired at several measurement planes through the compressor. The measurements describe both the throughflow and secondary velocity field along each measurement plane. In several cases the measurements provide details of the flow within the blade boundary layers. Insight into the complex flow physics within centrifugal compressors is provided by the computational fluid dynamics analysis (CFD), and assessment of the CFD predictions is provided by comparison with the measurements. Five-hole probe and hot-wire surveys at the inlet and exit to the impeller as well as surface flow visualization along the impeller blade surfaces provided independent confirmation of the laser measurement technique. The results clearly document the development of the throughflow velocity wake that is characteristic of unshrouded centrifugal compressors.

  14. Cassini End of Mission

    NASA Image and Video Library

    2017-09-15

    Associate administrator for NASA's Science Mission Directorate Thomas Zurbuchen, left, Cassini project scientist at JPL, Linda Spilker, second from left, director of NASA's Jet Propulsion Laboratory, Michael Watkins, center, director of NASA's Planetary Science Division, Jim Green, second from right, and director of the interplanetary network directorate at NASA's Jet Propulsion Laboratory, Keyur Patel, right, are seen in mission control, Friday, Sept. 15, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators deliberately plunged the spacecraft into Saturn, as Cassini gathered science until the end. Loss of contact with the Cassini spacecraft occurred at 7:55 a.m. EDT (4:55 a.m. PDT). The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  15. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of database logic and improvement techniques for producing reliable computing systems.

  16. NASA Lewis' IITA K-12 Program

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The NASA Lewis Research Center's Information Infrastructure Technology and Applications for Kindergarten to 12th Grade (IITA K-12) Program is designed to introduce into school systems computing and communications technology that benefits math and science studies. By incorporating this technology into K-12 curriculums, we hope to increase the proficiency and interest in math and science subjects by K-12 students so that they continue to study technical subjects after their high school careers are over.

  17. Cassini End of Mission Press Conference

    NASA Image and Video Library

    2017-09-15

    Director of NASA's Jet Propulsion Laboratory, Michael Watkins speaks during a press conference held after the end of the Cassini mission, Friday, Sept. 15, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators deliberately plunged the spacecraft into Saturn, as Cassini gathered science until the end. Loss of contact with the Cassini spacecraft occurred at 7:55 a.m. EDT (4:55 a.m. PDT). The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  18. Cassini End of Mission Press Conference

    NASA Image and Video Library

    2017-09-15

    Associate administrator for NASA's Science Mission Directorate Thomas Zurbuchen speaks during a press conference held after the end of the Cassini mission, Friday, Sept. 15, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators deliberately plunged the spacecraft into Saturn, as Cassini gathered science until the end. Loss of contact with the Cassini spacecraft occurred at 7:55 a.m. EDT (4:55 a.m. PDT). The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  19. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    NASA Social attendees are seen during a science panel discussion with Cassini project scientist at JPL, Linda Spilker, Cassini interdisciplinary Titan scientist at Cornell University, Jonathan Lunine, Cassini Composite Infrared Spectrometer (CIRS) instrument deputy principal investigator Connor Nixon, and Cassini assistant project science systems engineer Morgan Cable, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  20. End-to-end plasma bubble PIC simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava

    2017-10-01

    Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.
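    The core of the particle-in-cell work described above is the particle "push", the kernel that codes like PSC adapt to GPUs. As a hedged, toy illustration of the technique (not the PSC code itself), the leapfrog scheme for particles in a uniform field can be written as follows; NumPy stands in for the CUDA kernels, and all values are illustrative.

    ```python
    # Leapfrog particle push: velocities live at half steps, positions at
    # full steps. For a constant acceleration this scheme is exact.
    import numpy as np

    def push_particles(x, v, E, q_over_m, dt, steps):
        """Advance particle positions x and velocities v for `steps` steps
        in a uniform electric field E (acceleration a = q_over_m * E)."""
        v = v + 0.5 * q_over_m * E * dt        # initial half kick
        for _ in range(steps):
            x = x + v * dt                     # drift on the staggered velocity
            v = v + q_over_m * E * dt          # full kick
        return x, v - 0.5 * q_over_m * E * dt  # re-center velocity on full step
    ```

    In a production PIC code this loop runs per particle on the GPU, interleaved with field interpolation and current deposition; the staggered time-centering above is what gives the method its second-order accuracy.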

  1. NASA Briefing for Unidata

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher

    2016-01-01

    The NASA representative to the Unidata Strategic Committee presented a semiannual update on NASA's work with and use of Unidata technologies. The talk covered the program of cloud computing prototypes being undertaken for the Earth Observing System Data and Information System (EOSDIS). Also discussed were dataset interoperability recommendations ratified via the EOSDIS Standards Office and the HDF Product Designer tool with respect to its possible applicability to data in network Common Data Form (NetCDF) version 4.

  2. An inside look at NASA planetology

    NASA Technical Reports Server (NTRS)

    Dwornik, S. E.

    1976-01-01

    Staffing, financing and budget controls, and research grant allocations of NASA are reviewed with emphasis on NASA-supported research in planetary geological sciences: studies of the composition, structure, and history of solar system planets. Programs, techniques, and research grants for studies of Mars photographs acquired through Mariner 6-10 flights are discussed at length, and particularly the handling of computer-enhanced photographic data. Scheduled future NASA-sponsored planet exploration missions (to Mars, Jupiter, Saturn, Uranus) are mentioned.

  3. NASA Releases New High-Resolution Earthrise Image

    NASA Image and Video Library

    2017-12-08

    must be rolled to the side (in this case 67 degrees), then the spacecraft slews with the direction of travel to maximize the width of the lunar horizon in LROC's Narrow Angle Camera image. All this takes place while LRO is traveling faster than 3,580 miles per hour (over 1,600 meters per second) relative to the lunar surface below the spacecraft! The high-resolution Narrow Angle Camera (NAC) on LRO takes black-and-white images, while the lower resolution Wide Angle Camera (WAC) takes color images, so you might wonder how we got a high-resolution picture of the Earth in color. Since the spacecraft, Earth, and moon are all in motion, we had to do some special processing to create an image that represents the view of the Earth and moon at one particular time. The final Earth image contains both WAC and NAC information. WAC provides the color, and the NAC provides high-resolution detail. "From the Earth, the daily moonrise and moonset are always inspiring moments," said Mark Robinson of Arizona State University in Tempe, principal investigator for LROC. "However, lunar astronauts will see something very different: viewed from the lunar surface, the Earth never rises or sets. Since the moon is tidally locked, Earth is always in the same spot above the horizon, varying only a small amount with the slight wobble of the moon. The Earth may not move across the 'sky', but the view is not static. Future astronauts will see the continents rotate in and out of view and the ever-changing pattern of clouds will always catch one's eye, at least on the nearside. The Earth is never visible from the farside; imagine a sky with no Earth or moon - what will farside explorers think with no Earth overhead?" NASA's first Earthrise image was taken with the Lunar Orbiter 1 spacecraft in 1966. Perhaps NASA's most iconic Earthrise photo was taken by the crew of the Apollo 8 mission as the spacecraft entered lunar orbit on Christmas Eve Dec. 24, 1968. 
That evening, the astronauts -- Commander

  4. NASA Shared Services Center breaks ground

    NASA Image and Video Library

    2006-02-24

    NASA officials and elected leaders were on hand for the groundbreaking ceremony of the NASA Shared Services Center Feb. 24, 2006, on the grounds of Stennis Space Center. The NSSC provides agency centralized administrative processing, human resources, procurement and financial services. From left, Louisiana Economic Development Secretary Mike Olivier, Stennis Space Center Director Rick Gilbrech, Computer Sciences Corp. President Michael Laphen, NASA Deputy Administrator Shana Dale, Rep. Gene Taylor, Sen. Trent Lott, Mississippi Gov. Haley Barbour, NASA Administrator Mike Griffin and Shared Services Center Executive Director Arbuthnot use golden shovels to break ground at the site.

  5. NASA Shared Services Center breaks ground

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA officials and elected leaders were on hand for the groundbreaking ceremony of the NASA Shared Services Center Feb. 24, 2006, on the grounds of Stennis Space Center. The NSSC provides agency centralized administrative processing, human resources, procurement and financial services. From left, Louisiana Economic Development Secretary Mike Olivier, Stennis Space Center Director Rick Gilbrech, Computer Sciences Corp. President Michael Laphen, NASA Deputy Administrator Shana Dale, Rep. Gene Taylor, Sen. Trent Lott, Mississippi Gov. Haley Barbour, NASA Administrator Mike Griffin and Shared Services Center Executive Director Arbuthnot use golden shovels to break ground at the site.

  6. High-school Student Teams in a National NASA Microgravity Science Competition

    NASA Technical Reports Server (NTRS)

    DeLombard, Richard; Hodanbosi, Carol; Stocker, Dennis

    2003-01-01

    The Dropping In a Microgravity Environment or DIME competition for high-school-aged student teams has completed the first year for nationwide eligibility after two regional pilot years. With the expanded geographic participation and increased complexity of experiments, new lessons were learned by the DIME staff. A team participating in DIME will research the field of microgravity, develop a hypothesis, and prepare a proposal for an experiment to be conducted in a NASA microgravity drop tower. A team of NASA scientists and engineers will select the top proposals and then the selected teams will design and build their experiment apparatus. When completed, team representatives will visit NASA Glenn in Cleveland, Ohio to operate their experiment in the 2.2 Second Drop Tower and participate in workshops and center tours. NASA participates in a wide variety of educational activities including competitive events. There are competitive events sponsored by NASA (e.g. NASA Student Involvement Program) and student teams mentored by NASA centers (e.g. For Inspiration and Recognition of Science and Technology Robotics Competition). This participation by NASA in these public forums serves to bring the excitement of aerospace science to students and educators. Researchers from academic institutions, NASA, and industry utilize the 2.2 Second Drop Tower at NASA Glenn Research Center in Cleveland, Ohio for microgravity research. The researcher may be able to complete the suite of experiments in the drop tower but many experiments are precursor experiments for spaceflight experiments. The short turnaround time for an experiment's operations (45 minutes) and ready access to experiment carriers make the facility amenable for use in a student program. The pilot year for DIME was conducted during the 2000-2001 school year with invitations sent out to Ohio-based schools and organizations. 
A second pilot year was conducted during the 2001-2002 school year for teams in the six-state region

  7. NASA FY 2000 Accountability Report

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This Accountability Report consolidates reports required by various statutes and summarizes NASA's program accomplishments and its stewardship over budget and financial resources. It is a culmination of NASA's management process, which begins with mission definition and program planning, continues with the formulation and justification of budgets for the President and Congress, and ends with scientific and engineering program accomplishments. The report covers activities from October 1, 1999, through September 30, 2000. Achievements are highlighted in the Statement of the Administrator and summarized in the Report.

  8. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
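    The kind of flight-versus-ground comparison described here is typically reported as a log2 fold change per gene. As a minimal illustrative sketch only (the expression values below are invented, not STS-108 data, and a real analysis would add normalization and significance testing):

```python
import math

def log2_fold_change(flight, ground):
    """Log2 ratio of mean expression in flight samples vs. ground
    controls; +1.0 means roughly a two-fold increase in flight."""
    mean_flight = sum(flight) / len(flight)
    mean_ground = sum(ground) / len(ground)
    return math.log2(mean_flight / mean_ground)

# Hypothetical expression values for one gene across three replicates.
fc = log2_fold_change([820.0, 790.0, 810.0], [400.0, 410.0, 390.0])
```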

  9. NASA/BAE SYSTEMS SpaceWire Effort

    NASA Technical Reports Server (NTRS)

    Rakow, Glenn Parker; Schnurr, Richard G.; Kapcio, Paul

    2003-01-01

    This paper discusses the state of the NASA and BAE SYSTEMS developments of SpaceWire. NASA has developed intellectual property that implements SpaceWire in Register Transfer Level (RTL) VHDL for a SpaceWire link and router. This design has been extensively verified using directed tests from the SpaceWire Standard and design specification, as well as being randomly tested to flush out hard-to-find bugs in the code. The high level features of the design will be discussed, including the support for multiple time code masters, which will be useful for the James Webb Space Telescope electrical architecture. This design is now ready to be targeted to FPGAs and ASICs. Target utilization and performance information will be presented for spaceflight-worthy FPGAs, and a discussion of the ASIC implementations will be addressed. In particular, the BAE SYSTEMS ASIC will be highlighted, which will be implemented on their 0.25-micron rad-hard line. The chip will implement a 4-port router with the ability to tie chips together to make larger routers without external glue logic. This part will have integrated LVDS drivers/receivers, include a PLL, and include skew control logic. It will be targeted to run at greater than 300 MHz and include the implementation for the proposed SpaceWire transport layer. The need to provide a reliable transport mechanism for SpaceWire has been identified by both NASA and ESA, who are attempting to define a transport layer standard that utilizes a low overhead, low latency connection oriented approach that works end-to-end. This layer needs to be implemented in hardware to prevent bottlenecks.

  10. Supporting NASA Facilities Through GIS

    NASA Technical Reports Server (NTRS)

    Ingham, Mary E.

    2000-01-01

    The NASA GIS Team supports NASA facilities and partners in the analysis of spatial data. Geographic Information System (GIS) is an integration of computer hardware, software, and personnel linking topographic, demographic, utility, facility, image, and other geo-referenced data. The system provides a graphic interface to relational databases and supports decision making processes such as planning, design, maintenance and repair, and emergency response.

  11. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Cassini program manager at JPL, Earl Maize is seen during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  12. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Cassini program manager at JPL, Earl Maize speaks during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  13. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    A model of the Cassini-Huygens spacecraft is seen during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  14. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Cassini project scientist at JPL, Linda Spilker speaks during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  15. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Cassini program manager at JPL, Earl Maize, speaks during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  16. Exploring Cognition Using Software Defined Radios for NASA Missions

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale J.; Reinhart, Richard C.

    2016-01-01

    NASA missions typically operate using a communication infrastructure that requires significant schedule planning with limited flexibility when the needs of the mission change. Parameters such as modulation, coding scheme, frequency, and data rate are fixed for the life of the mission. This is due to antiquated hardware and software for both the space and ground assets and a very complex set of mission profiles. Automated techniques used by commercial telecommunication companies are being explored by NASA to determine their usability for reducing cost and increasing science return. Adding cognition, the ability to learn from past decisions and adjust behavior, is also being investigated. Software Defined Radios are an ideal way to implement cognitive concepts. Cognition can be considered in many different aspects of the communication system. Radio functions, such as frequency, modulation, data rate, coding and filters, can be adjusted based on measurements of signal degradation. Data delivery mechanisms and route changes based on past successes and failures can be made to more efficiently deliver the data to the end user. Automated antenna pointing can be added to improve gain, coverage, or adjust the target. Scheduling improvements and automation to reduce the dependence on humans provide more flexible capabilities. The Cognitive Communications project, funded by the Space Communication and Navigation Program, is exploring these concepts and using the SCaN Testbed on board the International Space Station to implement them as they evolve. The SCaN Testbed contains three Software Defined Radios and a flight computer. These four computing platforms, along with a tracking antenna system and the supporting ground infrastructure, will be used to implement various concepts in a system similar to those used by missions. Multiple universities and SBIR companies are supporting this investigation. 
This paper will describe the cognitive system ideas under consideration and
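    One of the radio adjustments the record names, changing modulation and coding based on measured signal quality, reduces to a table lookup in its simplest form. A hypothetical sketch of that idea (the mode names, thresholds, and efficiencies below are invented for illustration, not SCaN Testbed parameters):

```python
# Candidate operating points, highest throughput first:
# (minimum SNR in dB, mode name, spectral efficiency in bits/symbol).
MODES = [
    (18.0, "16APSK r7/8", 3.5),
    (12.0, "QPSK r3/4", 1.5),
    (6.0,  "QPSK r1/2", 1.0),
    (0.0,  "BPSK r1/2", 0.5),
]

def select_mode(snr_db):
    """Return the highest-throughput mode whose SNR threshold is met;
    a cognitive radio would call this as link measurements update."""
    for threshold, name, bits_per_symbol in MODES:
        if snr_db >= threshold:
            return name, bits_per_symbol
    return "no link", 0.0

mode, efficiency = select_mode(14.2)   # a mid-quality link measurement
```

    A learning radio would go a step further than this static table, adjusting the thresholds themselves from the history of successful and failed transfers.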

  17. A large-scale computer facility for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F., Jr.

    1985-01-01

    As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.

  18. NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report

    NASA Technical Reports Server (NTRS)

    Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ

    2013-01-01

    The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities

  19. The use of computer models to predict temperature and smoke movement in high bay spaces

    NASA Technical Reports Server (NTRS)

    Notarianni, Kathy A.; Davis, William D.

    1993-01-01

    The Building and Fire Research Laboratory (BFRL) was given the opportunity to make measurements during fire calibration tests of the heat detection system in an aircraft hangar with a nominal 30.4 m (100 ft) ceiling height near Dallas, TX. Fire gas temperatures resulting from an approximately 8250 kW isopropyl alcohol pool fire were measured above the fire and along the ceiling. The results of the experiments were then compared to predictions from the computer fire models DETACT-QS, FPETOOL, and LAVENT. In section A of the analysis, DETACT-QS and FPETOOL significantly underpredicted the gas temperature. LAVENT, at the position below the ceiling corresponding to maximum temperature and velocity, provided better agreement with the data. For large spaces, hot gas transport time and an improved fire plume dynamics model should be incorporated into the computer fire model activation routines. A computational fluid dynamics (CFD) model, HARWELL FLOW3D, was then used to model the hot gas movement in the space. Reasonable agreement was found between the temperatures predicted from the CFD calculations and the temperatures measured in the aircraft hangar. In section B, an existing NASA high bay space was modeled using the CFD model. The NASA space was a clean room, 27.4 m (90 ft) high with forced horizontal laminar flow. The purpose of this analysis was to determine how the existing fire detection devices would respond to various size fires in the space. The analysis was conducted for 32 MW, 400 kW, and 40 kW fires.
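    Zone models of the kind named here estimate detector response from ceiling-jet correlations, and a quick correlation estimate shows why tall spaces are hard: the excess temperature falls steeply with ceiling height. As an illustrative sketch using Alpert's classic ceiling-jet correlation (not the exact expressions inside DETACT-QS or FPETOOL), applied to roughly the hangar test conditions in this record:

```python
def alpert_ceiling_jet_dT(Q_kW, H_m, r_m):
    """Alpert's ceiling-jet correlation for steady excess gas temperature
    (degrees C above ambient) under a smooth unconfined ceiling.
    Q_kW: total heat release rate; H_m: ceiling height above the fire;
    r_m: radial distance from the plume axis along the ceiling."""
    if r_m / H_m <= 0.18:                         # plume impingement region
        return 16.9 * Q_kW ** (2 / 3) / H_m ** (5 / 3)
    return 5.38 * (Q_kW / r_m) ** (2 / 3) / H_m   # ceiling-jet region

# 8250 kW pool fire under a 30.4 m ceiling: even at the plume centerline
# the predicted rise is only a few tens of degrees, which is why
# fixed-temperature detection in high bay spaces responds slowly.
dT_centerline = alpert_ceiling_jet_dT(8250.0, 30.4, 0.0)
```

    The centerline estimate comes out near 23 °C above ambient, and drops further away from the plume axis, consistent with the record's observation that simple zone models and long transport times are a challenge in such spaces.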

  20. Overhauling, updating and augmenting NASA spacelink electronic information system

    NASA Technical Reports Server (NTRS)

    Blake, Jean A.

    1991-01-01

    NASA/Spacelink is a collection of NASA information and educational materials stored on a computer at the MSFC. It is provided by the NASA Educational Affairs Division and is operated by the Education Branch of the Marshall Center Public Affairs Office. It is designed to communicate with a wide variety of computers and modems, especially those most commonly found in classrooms and homes. It was made available to the public in February, 1988. The system may be accessed by educators and the public over regular telephone lines. NASA/Spacelink is free except for the cost of long distance calls. The overhaul and update were undertaken to refurbish NASA/Spacelink, a very valuable resource medium. Several new classroom activities and miscellaneous topics were edited and entered into Spacelink. One of the areas that received a major overhaul (under the guidance of Amos Crisp) was the SPINOFFS BENEFITS, the great benefits resulting from America's space explorations. The Spinoff Benefits include information on a variety of topics including agriculture, communication, the computer, consumer, energy, equipment and materials, food, health, home, industry, medicine, natural resources, public services, recreation, safety, sports, and transportation. In addition to the Space Program Spinoff Benefits, the following is a partial list of some of the material updated and introduced: Astronaut Biographies, Miscellaneous Aeronautics Classroom Activities, Miscellaneous Astronomy Classroom Activities, Miscellaneous Rocketry Classroom Activities, Miscellaneous Classroom Activities, NASA and Its Center, NASA Areas of Research, NASA Patents, Licensing, NASA Technology Transfer, Pictures from Space Classroom Activities, Status of Current NASA Projects, Using Art to Teach Science, and Word Puzzles for Use in the Classroom.

  1. High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems

    NASA Technical Reports Server (NTRS)

    Makivic, Miloje S.

    1996-01-01

    This is the final technical report for the project entitled: "High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems", funded at NPAC by the DAO at NASA/GSFC. First, the motivation for the project is given in the introductory section, followed by the executive summary of major accomplishments and the list of project-related publications. Detailed analysis and description of research results is given in subsequent chapters and in the Appendix.

  2. NASA Computational Fluid Dynamics Conference. Volume 2: Sessions 7-12

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The objectives of the conference were to disseminate CFD research results to industry and university CFD researchers, to promote synergy among NASA CFD researchers, and to permit feedback from researchers outside of NASA on issues pacing the discipline of CFD. The focus of the conference was on the application of CFD technology but also included fundamental activities.

  3. NASA's Microgravity Research Program

    NASA Technical Reports Server (NTRS)

    Woodard, Dan R. (Editor); Henderson, Robin N. (Technical Monitor)

    2000-01-01

    The Fiscal Year 1999 Annual Report describes key elements of the NASA Microgravity Research Program. The Program's goals, approach taken to achieve those goals, and program resources are summarized. A review of the Program's status at the end of FY1999 and highlights of the ground and flight research are provided.

  4. End-to-End QoS for Differentiated Services and ATM Internetworking

    NASA Technical Reports Server (NTRS)

    Su, Hongjun; Atiquzzaman, Mohammed

    2001-01-01

    The Internet was initially designed for non-real-time data communications and hence does not provide any Quality of Service (QoS). The next generation Internet will be characterized by high speed and QoS guarantees. The aim of this paper is to develop a prioritized early packet discard (PEPD) scheme for ATM switches to provide service differentiation and QoS guarantees to end applications running over the next generation Internet. The proposed PEPD scheme differs from previous schemes by taking into account the priority of packets generated from different applications. We develop a Markov chain model for the proposed scheme and verify the model with simulation. Numerical results show that the results from the model and computer simulation are in close agreement. Our PEPD scheme provides service differentiation to the end-to-end applications.
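    The threshold idea behind prioritized early packet discard can be sketched in a few lines: each priority class gets its own buffer-occupancy threshold, and an arriving packet whose class threshold is already exceeded is dropped whole, before any of its cells enter the switch. This is a minimal discrete sketch of that idea only, not the paper's Markov-chain model; the thresholds and packet format are invented for illustration:

```python
from collections import deque

def pepd_enqueue(queue, capacity, thresholds, packet):
    """Prioritized early packet discard (sketch): accept a packet only if
    buffer occupancy is below the threshold for its priority class.
    thresholds[p] is the discard threshold for priority p (0 = highest),
    so low-priority traffic is shed first as the buffer fills."""
    occupancy = len(queue)
    if occupancy >= capacity or occupancy >= thresholds[packet["priority"]]:
        return False                      # discard the entire packet early
    queue.append(packet)
    return True

queue = deque()
thresholds = {0: 10, 1: 6}                # priority 1 dropped sooner
accepted = [pepd_enqueue(queue, 10, thresholds, {"priority": 1, "id": i})
            for i in range(8)]
```

    Under congestion, dropping whole packets at admission (rather than individual cells mid-packet) avoids wasting link capacity on packet fragments the receiver would discard anyway, which is the motivation for early packet discard in ATM.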

  5. Recommended Computer End-User Skills for Business Students by Inc. 500 Executives and Office Systems Educators.

    ERIC Educational Resources Information Center

    Zhao, Jensen J.; Ray, Charles M.; Dye, Lee J.; Davis, Rodney

    1998-01-01

    Executives (n=63) and office-systems educators (n=88) recommended for workers the following categories of computer end-user skills: hardware, operating systems, word processing, spreadsheets, database, desktop publishing, and presentation. (SK)

  6. Development of a Dynamic, End-to-End Free Piston Stirling Convertor Model

    NASA Astrophysics Data System (ADS)

    Regan, Timothy F.; Gerber, Scott S.; Roth, Mary Ellen

    2003-01-01

    A dynamic model for a free-piston Stirling convertor is being developed at the NASA Glenn Research Center. The model is an end-to-end system model that includes the cycle thermodynamics, the dynamics, and electrical aspects of the system. The subsystems of interest are the heat source, the springs, the moving masses, the linear alternator, the controller and the end-user load. The envisioned use of the model will be in evaluating how changes in a subsystem could affect the operation of the convertor. The model under development will speed the evaluation of improvements to a subsystem and aid in determining areas in which most significant improvements may be found. One of the first uses of the end-to-end model will be in the development of controller architectures. Another related area is in evaluating changes to details in the linear alternator.
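    At its core, the mechanical subsystem of such an end-to-end model is a driven mass-spring-damper per moving mass, with the alternator load entering as an additional damping-like force. A minimal lumped-parameter sketch under that assumption (the parameter values are illustrative, not from the NASA Glenn model, and a real convertor model would couple this to the cycle thermodynamics and alternator electrical equations):

```python
import numpy as np

# One free-piston mover: m*x'' + c*x' + k*x = F(t), where c lumps
# together gas damping and the linear-alternator load.
m, c, k = 1.0, 0.3, 100.0            # kg, N·s/m, N/m (illustrative)
omega_n = np.sqrt(k / m)             # natural frequency, rad/s

def step(x, v, F, dt):
    """One semi-implicit Euler step of the piston dynamics."""
    a = (F - c * v - k * x) / m
    v = v + dt * a
    x = x + dt * v                   # use the updated velocity (symplectic)
    return x, v

# Unforced ring-down: release from 1 cm and integrate 2 s; the
# damping term extracts energy, as the alternator load would.
x, v, dt = 0.01, 0.0, 1e-4
for _ in range(20000):
    x, v = step(x, v, 0.0, dt)
energy = 0.5 * k * x**2 + 0.5 * m * v**2
```

    Swapping in different spring, mass, or load parameters and re-running is exactly the kind of subsystem-change evaluation the record says the end-to-end model is meant to speed up.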

  7. Computers in Spaceflight: the NASA Experience

    NASA Technical Reports Server (NTRS)

    Tomayko, James E.

    1988-01-01

    This book examines the computer systems used in actual spaceflight or in close support of it. Each chapter deals with either a specific program, such as Gemini or Apollo onboard computers, or a closely related set of systems, such as launch processing or mission control. A glossary of computer terms is included.

  8. 1995 NASA High-Speed Research Program Sonic Boom Workshop. Volume 2; Configuration Design, Analysis, and Testing

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Sonic Boom Workshop on September 12-13, 1995. The workshop was designed to bring together NASA's scientists and engineers and their counterparts in industry, other Government agencies, and academia working in the sonic boom element of NASA's High-Speed Research Program. Specific objectives of this workshop were to: (1) report the progress and status of research in sonic boom propagation, acceptability, and design; (2) promote and disseminate this technology within the appropriate technical communities; (3) help promote synergy among the scientists working in the Program; and (4) identify technology pacing the development of viable reduced-boom High-Speed Civil Transport concepts. The Workshop was organized in four sessions: Session 1 - Sonic Boom Propagation (Theoretical); Session 2 - Sonic Boom Propagation (Experimental); Session 3 - Acceptability Studies - Human and Animal; and Session 4 - Configuration Design, Analysis, and Testing.

  9. Strategic plan : providing high precision search to NASA employees using the NASA engineering network

    NASA Technical Reports Server (NTRS)

    Dutra, Jayne E.; Smith, Lisa

    2006-01-01

    The goal of this plan is to briefly describe new technologies available to us in the arenas of information discovery and discuss the strategic value they have for the NASA enterprise with some considerations and suggestions for near term implementations using the NASA Engineering Network (NEN) as a delivery venue.

  10. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  11. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the Numerical Aerodynamic Simulation (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, both of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.
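    Once a reaction's activation energy has been computed from first principles, the rate constant it feeds into flow codes typically follows an Arrhenius form. As an illustrative sketch only (the pre-exponential factor and activation energy below are invented round numbers, not computed values):

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def arrhenius_rate(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T)).
    A: pre-exponential factor; Ea: activation energy in J/mol;
    T: temperature in K. This is the kind of rate datum the record
    says computational chemistry supplies to real-gas flow codes."""
    return A * math.exp(-Ea / (R * T))

# For Ea = 100 kJ/mol, a modest 10 K rise increases the rate severalfold,
# which is why accurate activation energies matter for aerothermodynamics.
k_300 = arrhenius_rate(1e13, 100e3, 300.0)
k_310 = arrhenius_rate(1e13, 100e3, 310.0)
```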

  12. High speed jet noise research at NASA Lewis

    NASA Astrophysics Data System (ADS)

    Krejsa, Eugene A.; Cooper, B. A.; Kim, C. M.; Khavaran, Abbas

    1992-04-01

    The source noise portion of the High Speed Research Program at NASA LeRC is focused on jet noise reduction. A number of jet noise reduction concepts are being investigated. These include two concepts, the Pratt & Whitney ejector suppressor nozzle and the General Electric (GE) 2D-CD mixer ejector nozzle, that rely on ejectors to entrain significant amounts of ambient air to mix with the engine exhaust to reduce the final exhaust velocity. Another concept, the GE 'Flade Nozzle', uses fan bypass air at takeoff to reduce the mixed exhaust velocity and to create a fluid shield around a mixer suppressor. Additional concepts are being investigated at the Georgia Tech Research Institute and at NASA LeRC. These will be discussed in more detail in later figures. Analytical methods for jet noise prediction are also being developed. Efforts in this area include upgrades to the GE MGB jet mixing noise prediction procedure, evaluation of shock noise prediction procedures, and efforts to predict jet noise directly from the unsteady Navier-Stokes equations.

  13. High speed jet noise research at NASA Lewis

    NASA Technical Reports Server (NTRS)

    Krejsa, Eugene A.; Cooper, B. A.; Kim, C. M.; Khavaran, Abbas

    1992-01-01

    The source noise portion of the High Speed Research Program at NASA LeRC is focused on jet noise reduction. A number of jet noise reduction concepts are being investigated. These include two concepts, the Pratt & Whitney ejector suppressor nozzle and the General Electric (GE) 2D-CD mixer ejector nozzle, that rely on ejectors to entrain significant amounts of ambient air to mix with the engine exhaust to reduce the final exhaust velocity. Another concept, the GE 'Flade Nozzle', uses fan bypass air at takeoff to reduce the mixed exhaust velocity and to create a fluid shield around a mixer suppressor. Additional concepts are being investigated at the Georgia Tech Research Institute and at NASA LeRC. These will be discussed in more detail in later figures. Analytical methods for jet noise prediction are also being developed. Efforts in this area include upgrades to the GE MGB jet mixing noise prediction procedure, evaluation of shock noise prediction procedures, and efforts to predict jet noise directly from the unsteady Navier-Stokes equations.

  14. A radiation-hardened computer for satellite applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaona, J.I. Jr.

    1996-08-01

    This paper describes high-reliability, radiation-hardened computers built by Sandia for application aboard DOE satellite programs requiring 32-bit processing. The computers feature a radiation-hardened (10 kGy(Si)) R3000 executing up to 10 million reduced instruction set computer (RISC) instructions per second (MIPS), a dual-purpose module control bus used for real-time fault and power management which allows extended mission operation on as little as 1.2 watts, and a local area network capable of 480 Mbits/s. The central processing unit (CPU) is the NASA Goddard R3000, nicknamed the 'Mongoose' or 'Mongoose 1'. The Sandia Satellite Computer (SSC) uses Rational's Ada compiler, debugger, operating system kernel, and enhanced floating point emulation library targeted at the Mongoose. The SSC gives Sandia the capability of processing complex spacecraft attitude determination and control algorithms and of modifying programmed control laws via ground command. In general, the SSC offers end users the ability to process data onboard the spacecraft that would normally have been sent to the ground, which allows reconsideration of traditional space-ground partitioning options.

  15. Understanding the Manager of the Project Front-End

    NASA Technical Reports Server (NTRS)

    Mulenburg, Gerald M.; Imprescia, Cliff (Technical Monitor)

    2000-01-01

    Historical data and new findings from interviews with managers of major National Aeronautics and Space Administration (NASA) projects confirm literature reports about the criticality of the front-end phase of project development, where systems engineering plays such a key role. Recent research into the management of ten contemporary NASA projects, combined with the author's personal experience in NASA, provides some insight into the relevance and importance of the project manager in this initial part of the project life cycle. The research findings provide evidence of similar approaches taken by NASA project managers.

  16. Exploration of operator method digital optical computers for application to NASA

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Digital optical computer design has focused primarily on parallel (single point-to-point interconnection) implementation. This architecture is compared to currently developing VHSIC systems. Using demonstrated multichannel acousto-optic devices, a figure of merit can be formulated. The focus is on a figure of merit termed the Gate Interconnect Bandwidth Product (GIBP). Conventional parallel optical digital computer architecture demonstrates only marginal competitiveness at best when compared to projected semiconductor implementations. Global, analog global, quasi-digital, and full digital interconnects are briefly examined as alternatives to parallel digital computer architecture. Digital optical computing is becoming a very tough competitor to semiconductor technology, since it can support a very high degree of three-dimensional interconnect density and high degrees of fan-in without capacitive loading effects at very low power consumption levels.

  17. NASA / Pratt and Whitney Collaborative Partnership Research in Ultra High Bypass Cycle Propulsion Concepts

    NASA Technical Reports Server (NTRS)

    Hughes, Chris; Lord, Wes

    2008-01-01

    Current collaborative research with Pratt & Whitney on Ultra High Bypass Engine Cycle noise, performance, and emissions improvements as part of the Subsonic Fixed Wing Project Ultra High Bypass Engine Partnership Element is discussed. The Subsonic Fixed Wing Project goals are reviewed, as well as their relative technology level compared to previous NASA noise program goals. Progress by the UHB Partnership toward achieving the Subsonic Fixed Wing Project goals in this area of research over the 2008 fiscal year is reviewed. The current research activity in Ultra High Bypass Engine Cycle technology, specifically the Pratt & Whitney Geared Turbofan, at NASA and Pratt & Whitney is discussed, including the contributions each entity brings to the research project, and technical plans and objectives. Pratt & Whitney Geared Turbofan current and future technology and business plans are also discussed, including the role the NASA SFW UHB partnership plays toward achieving those goals.

  18. NASA Microgravity Research Program

    NASA Technical Reports Server (NTRS)

    Woodard, Dan

    1999-01-01

    The Fiscal Year 1998 Annual Report describes key elements of the NASA Microgravity Research Program. The Program's goals, approach taken to achieve those goals, and program resources are summarized. A review of the Program's status at the end of FY1998 and highlights of the ground- and-flight-based research are provided.

  19. NASA thesaurus. Volume 3: Definitions

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Publication of NASA Thesaurus definitions began with Supplement 1 to the 1985 NASA Thesaurus. The definitions given here represent the complete file of over 3,200 definitions, complemented by nearly 1,000 use references. Definitions of more common or general scientific terms are given a NASA slant if one exists. Certain terms are not defined as a matter of policy: common names, chemical elements, specific models of computers, and nontechnical terms. The NASA Thesaurus predates by a number of years the systematic effort to define terms; therefore, not all Thesaurus terms have been defined. Nevertheless, definitions of older terms are continually being added. The following data are provided for each entry: term in uppercase/lowercase form, definition, source, and year the term (not the definition) was added to the NASA Thesaurus. The NASA History Office is the authority for capitalization in satellite and spacecraft names. Definitions with no source given were constructed by lexicographers at the NASA Scientific and Technical Information (STI) Facility, who rely on the following sources for their information: experts in the field, literature searches from the NASA STI database, and specialized references.

  20. NASA and the National Climate Assessment: Promoting awareness of NASA Earth science

    NASA Astrophysics Data System (ADS)

    Leidner, A. K.

    2014-12-01

    NASA Earth science observations, models, analyses, and applications made significant contributions to numerous aspects of the Third National Climate Assessment (NCA) report and are contributing to sustained climate assessment activities. The agency's goal in participating in the NCA was to ensure that NASA scientific resources were made available to understand the current state of climate change science and climate change impacts. By working with federal agency partners and stakeholder communities to develop and write the report, the agency was able to raise awareness of NASA climate science with audiences beyond the traditional NASA community. To support assessment activities within the NASA community, the agency sponsored two competitive programs that not only funded research and tools for current and future assessments, but also increased capacity within our community to conduct assessment-relevant science and to participate in writing assessments. Such activities fostered the ability of graduate students, post-docs, and senior researchers to learn about the science needs of climate assessors and end-users, which can guide future research activities. NASA also contributed to developing the Global Change Information System, which deploys information from the NCA to scientists, decision makers, and the public, and thus contributes to climate literacy. Finally, NASA satellite imagery and animations used in the Third NCA helped the public and decision makers visualize climate changes and were frequently used in social media to communicate the report's key findings. These resources are also key for developing educational materials that help teachers and students explore regional climate change impacts and opportunities for responses.

  1. A review of high-speed, convective, heat-transfer computation methods

    NASA Technical Reports Server (NTRS)

    Tauber, Michael E.

    1989-01-01

    The objective of this report is to provide useful engineering formulations and to instill a modest degree of physical understanding of the phenomena governing convective aerodynamic heating at high flight speeds. Some physical insight is not only essential to the application of the information presented here, but also to the effective use of computer codes which may be available to the reader. A discussion is given of cold-wall, laminar boundary layer heating. A brief presentation of the complex boundary layer transition phenomenon follows. Next, cold-wall turbulent boundary layer heating is discussed. This topic is followed by a brief coverage of separated flow-region and shock-interaction heating. A review of heat protection methods follows, including the influence of mass addition on laminar and turbulent boundary layers. Also included are a discussion of finite-difference computer codes and a comparison of some results from these codes. An extensive list of references is also provided from sources such as the various AIAA journals and NASA reports which are available in the open literature.
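    As an illustration of the kind of engineering formulation such a review covers (this particular correlation is not quoted from the report), the widely used Sutton-Graves relation estimates cold-wall laminar stagnation-point convective heating from freestream density, nose radius, and velocity; the constant below is the commonly quoted SI value for Earth entry and should be checked before any real use:

    ```python
    import math

    # Sutton-Graves stagnation-point convective heating correlation (illustrative):
    #   q = k * sqrt(rho / R_n) * V^3
    # With SI inputs (rho in kg/m^3, R_n in m, V in m/s) and the commonly quoted
    # Earth-atmosphere constant below, q comes out in W/m^2.
    K_SUTTON_GRAVES = 1.7415e-4  # assumed value; verify against a primary source

    def stagnation_heating(rho, nose_radius, velocity):
        """Cold-wall laminar stagnation-point heat flux estimate (W/m^2)."""
        return K_SUTTON_GRAVES * math.sqrt(rho / nose_radius) * velocity**3

    # Hypothetical example: 7.5 km/s flight where rho = 1e-4 kg/m^3, 1 m nose radius.
    q = stagnation_heating(1e-4, 1.0, 7500.0)
    ```

    The cube-of-velocity and inverse-square-root-of-nose-radius scalings are the main physical takeaways: blunting a vehicle's nose directly reduces stagnation heating.
    
    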

  2. A review of high-speed, convective, heat-transfer computation methods

    NASA Technical Reports Server (NTRS)

    Tauber, Michael E.

    1989-01-01

    The objective is to provide useful engineering formulations and to instill a modest degree of physical understanding of the phenomena governing convective aerodynamic heating at high flight speeds. Some physical insight is not only essential to the application of the information presented here, but also to the effective use of computer codes which may be available to the reader. Given first is a discussion of cold-wall, laminar boundary layer heating. A brief presentation of the complex boundary layer transition phenomenon follows. Next, cold-wall turbulent boundary layer heating is discussed. This topic is followed by a brief coverage of separated flow-region and shock-interaction heating. A review of heat protection methods follows, including the influence of mass addition on laminar and turbulent boundary layers. Next is a discussion of finite-difference computer codes and a comparison of some results from these codes. An extensive list of references is also provided from sources such as the various AIAA journals and NASA reports which are available in the open literature.

  3. Cassini End of Mission

    NASA Image and Video Library

    2017-09-15

    Spacecraft operations team manager for the Cassini mission at Saturn, Julie Webster is seen after the end of the Cassini mission, Friday, Sept. 15, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators deliberately plunged the spacecraft into Saturn, as Cassini gathered science until the end. Loss of contact with the Cassini spacecraft occurred at 7:55 a.m. EDT (4:55 a.m. PDT). The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  4. Cassini End of Mission

    NASA Image and Video Library

    2017-09-15

    Cassini program manager at JPL, Earl Maize packs up his workspace in mission control after the end of the Cassini mission, Friday, Sept. 15, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators deliberately plunged the spacecraft into Saturn, as Cassini gathered science until the end. Loss of contact with the Cassini spacecraft occurred at 7:55 a.m. EDT (4:55 a.m. PDT). The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  5. Analysis of wavelet technology for NASA applications

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    The purpose of this grant was to introduce a broad group of NASA researchers and administrators to wavelet technology and to determine its future role in research and development at NASA JSC. The activities of several briefings held between NASA JSC scientists and Rice University researchers are discussed. An attached paper, 'Recent Advances in Wavelet Technology', summarizes some aspects of these briefings. Two proposals submitted to NASA reflect the primary areas of common interest. They are image analysis and numerical solutions of partial differential equations arising in computational fluid dynamics and structural mechanics.

  6. NASA / GE Aviation Collaborative Partnership Research in Ultra High Bypass Cycle Propulsion Concepts

    NASA Technical Reports Server (NTRS)

    Hughes, Christopher E.; Zeug, Theresa

    2008-01-01

    Current collaborative research with General Electric Aviation on Open Rotor propulsion as part of the Subsonic Fixed Wing Project Ultra High Bypass Engine Partnership Element is discussed. The Subsonic Fixed Wing Project goals are reviewed, as well as their relative technology level compared to previous NASA noise program goals. The current Open Rotor propulsion research activity at NASA and GE is discussed, including the contributions each entity brings to the research project, and technical plans and objectives. GE Open Rotor propulsion technology and business plans, current and future, are also discussed, including the role the NASA SFW UHB partnership plays toward achieving those goals.

  7. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
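    The default time integrator named here, the 3-stage/3rd-order strong-stability-preserving (SSP) Runge-Kutta scheme, has a standard Shu-Osher form built from convex combinations of forward Euler stages. A minimal Python sketch of one step (the actual code is Fortran 2008; this is only the textbook scheme, not the NASA implementation):

    ```python
    def ssp_rk3_step(f, u, t, dt):
        """One step of the 3-stage, 3rd-order SSP Runge-Kutta scheme
        (Shu-Osher form). Each stage is a convex combination of forward
        Euler updates, which is what preserves strong stability."""
        u1 = u + dt * f(t, u)                                  # Euler stage 1
        u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))       # stage 2
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))  # stage 3

    # Example: advance du/dt = -u from u(0) = 1 over one unit of time.
    u, t, dt = 1.0, 0.0, 0.01
    for _ in range(100):
        u = ssp_rk3_step(lambda tt, y: -y, u, t, dt)
        t += dt
    # u now approximates exp(-1) with 3rd-order accuracy
    ```

    In a real FR solver, `f` would be the spatial residual of the discretized Navier-Stokes equations and `u` the vector of solution degrees of freedom.
    
    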

  8. System and Propagation Availability Analysis for NASA's Advanced Air Transportation Technologies

    NASA Technical Reports Server (NTRS)

    Ugweje, Okechukwu C.

    2000-01-01

    This report summarizes the research on System and Propagation Availability Analysis for NASA's project on Advanced Air Transportation Technologies (AATT). The objectives of the project were to determine the communication system requirements and architecture, and to investigate the effect of propagation on the transmission of space information. In this report, results from the first-year investigation are presented and limitations are highlighted. To study the propagation links, an understanding of the total system architecture is necessary, since the links form the major component of the overall architecture. This study was conducted by way of analysis, modeling, and simulation of the system communication links. The overall goal was to develop an understanding of the space communication requirements relevant to the AATT project, and then analyze the links taking into consideration system availability under adverse atmospheric weather conditions. This project began with a preliminary study of the end-to-end system architecture by modeling a representative communication system in MATLAB SIMULINK. Based on the defining concepts, the possibility of computer modeling was determined. The investigations continued with parametric studies of the communication system architecture. These studies were also carried out with SIMULINK modeling and simulation. After a series of modifications, two end-to-end communication links were identified as the most probable models for the communication architecture. Link budget calculations were then performed in MATHCAD and MATLAB for the identified communication scenarios. A remarkable outcome of this project is the development of a graphical user interface (GUI) program for the computation of the link budget parameters in real time. Using this program, one can interactively compute the link budget requirements after supplying a few necessary parameters. It provides a framework for the eventual automation of several computations.
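    The abstract does not give the actual link parameters, but the kind of link budget computation described (performed in the study with MATHCAD and MATLAB) can be sketched in a few lines; every parameter value below is hypothetical, and only the standard free-space-loss and carrier-to-noise-density formulas are assumed:

    ```python
    import math

    BOLTZMANN_DBW = -228.6  # 10*log10(Boltzmann constant), dBW/(K*Hz)

    def free_space_loss_db(freq_hz, range_m):
        """Free-space path loss in dB: 20*log10(4*pi*d/lambda)."""
        wavelength = 299_792_458.0 / freq_hz
        return 20.0 * math.log10(4.0 * math.pi * range_m / wavelength)

    def cn0_dbhz(eirp_dbw, rx_gt_dbk, freq_hz, range_m, atm_loss_db=0.0):
        """Carrier-to-noise-density ratio in dB-Hz:
        C/N0 = EIRP + G/T - L_freespace - L_atmosphere - 10*log10(k)."""
        return (eirp_dbw + rx_gt_dbk - free_space_loss_db(freq_hz, range_m)
                - atm_loss_db - BOLTZMANN_DBW)

    # Hypothetical example: 50 dBW EIRP, 5 dB/K receiver G/T, 1 GHz, 100 km slant range.
    cn0 = cn0_dbhz(eirp_dbw=50.0, rx_gt_dbk=5.0, freq_hz=1e9, range_m=100e3)
    ```

    The `atm_loss_db` term is where the adverse-weather availability analysis described above would enter, e.g. as a rain attenuation value exceeded only some small percentage of the time.
    
    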

  9. Support System Effects on the NASA Common Research Model

    NASA Technical Reports Server (NTRS)

    Rivers, S. Melissa B.; Hunter, Craig A.

    2012-01-01

    An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and the NASA Ames 11-Foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations were collected, a large difference in moment values was seen between the experimental data and the computational data from the 4th Drag Prediction Workshop. This difference led to the present work. In this study, a computational assessment has been undertaken to investigate model support system interference effects on the Common Research Model. The configurations computed during this investigation were the wing/body/tail=0deg without the support system and the wing/body/tail=0deg with the support system. The results from this investigation confirm that the addition of the support system to the computational cases does shift the pitching moment in the direction of the experimental results.

  10. Machine-aided indexing for NASA STI

    NASA Technical Reports Server (NTRS)

    Wilson, John

    1987-01-01

    One of the major components of the NASA/STI processing system is machine-aided indexing (MAI). MAI is a computer process that generates a set of indexing terms selected from NASA's thesaurus; it is used for indexing technical reports, is based on text, and is reviewed by indexers. This paper summarizes the MAI objectives and discusses the NASA Lexical Dictionary, subject switching, and phrase matching of natural language text. The benefits of using MAI are mentioned, and MAI production improvement and the future of MAI are briefly addressed.
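    The core idea of thesaurus-based phrase matching can be sketched as a toy example (this is not the actual NASA MAI implementation, and the sample lexical-dictionary entries below are hypothetical): scan the text for controlled-vocabulary trigger phrases, preferring longer phrases so that more specific terms win, and emit the mapped thesaurus terms as indexing candidates for human review.

    ```python
    # Hypothetical lexical-dictionary entries: trigger phrase -> thesaurus term.
    SAMPLE_THESAURUS = {
        "heat transfer": "HEAT TRANSFER",
        "boundary layer": "BOUNDARY LAYERS",
        "wind tunnel": "WIND TUNNELS",
    }

    def machine_aided_index(text, thesaurus=SAMPLE_THESAURUS):
        """Return thesaurus terms whose trigger phrases occur in the text.
        Longer phrases are checked first so specific matches take priority."""
        text_lc = text.lower()
        terms = []
        for phrase in sorted(thesaurus, key=len, reverse=True):
            if phrase in text_lc:
                terms.append(thesaurus[phrase])
        return terms
    ```

    A production system would add stemming, word-boundary matching, and the "subject switching" step mentioned above, which maps terms from one controlled vocabulary into another.
    
    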

  11. NASA Automated Fiber Placement Capabilities: Similar Systems, Complementary Purposes

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Jackson, Justin R.; Pelham, Larry I.; Stewart, Brian K.

    2015-01-01

    New automated fiber placement systems at the NASA Langley Research Center and NASA Marshall Space Flight Center provide state-of-art composites capabilities to these organizations. These systems support basic and applied research at Langley, complementing large-scale manufacturing and technology development at Marshall. These systems each consist of a multi-degree of freedom mobility platform including a commercial robot, a commercial tool changer mechanism, a bespoke automated fiber placement end effector, a linear track, and a rotational tool support structure. In addition, new end effectors with advanced capabilities may be either bought or developed with partners in industry and academia to extend the functionality of these systems. These systems will be used to build large and small composite parts in support of the ongoing NASA Composites for Exploration Upper Stage Project later this year.

  12. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    A model of the Cassini spacecraft is seen during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Participants in the press conference were: Director of NASA's Planetary Science Division, Jim Green, left, Cassini program manager at JPL, Earl Maize, second from left, Cassini project scientist at JPL, Linda Spilker, second from right, and principal investigator for the Ion and Neutral Mass Spectrometer (INMS) at the Southwest Research Institute, Hunter Waite, right. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  13. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Cassini project scientist at JPL, Linda Spilker answers questions from members of the media during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  14. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Cassini program manager at JPL, Earl Maize, center, answers questions from members of the media during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  15. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    A model of the Cassini-Huygens spacecraft is seen in the von Kármán Auditorium during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  16. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Principal investigator for the Ion and Neutral Mass Spectrometer (INMS) at the Southwest Research Institute, Hunter Waite, speaks during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  17. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Cassini project scientist at JPL, Linda Spilker, right, looks on as Cassini program manager at JPL, Earl Maize speaks during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, as Cassini gathers science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  18. NASA GRC's High Pressure Burner Rig Facility and Materials Test Capabilities

    NASA Technical Reports Server (NTRS)

    Robinson, R. Craig

    1999-01-01

    The High Pressure Burner Rig (HPBR) at NASA Glenn Research Center is a high-velocity, pressurized combustion test rig used for high-temperature environmental durability studies of advanced materials and components. The facility burns jet fuel and air in controlled ratios, simulating combustion gas chemistries and temperatures that are realistic to those in gas turbine engines. In addition, the test section is capable of simulating the pressures and gas velocities representative of today's aircraft. The HPBR provides a relatively inexpensive, yet sophisticated, means for researchers to study the high-temperature oxidation of advanced materials. The facility has the unique capability of operating under both fuel-lean and fuel-rich gas mixtures, using a fume incinerator to eliminate any harmful byproduct emissions (CO, H2S) of rich-burn operation. Test samples are easily accessible for ongoing inspection and documentation of weight change, thickness, cracking, and other metrics. Temperature measurement is available in the form of both thermocouples and optical pyrometry, and the facility is equipped with quartz windows for observation and videotaping. Operating conditions include: (1) 1.0 kg/sec (2.0 lbm/sec) combustion and secondary cooling airflow capability; (2) equivalence ratios of 0.5-1.0 (lean) to 1.5-2.0 (rich), with typically 10% H2O vapor pressure; (3) gas temperatures ranging 700-1650 C (1300-3000 F); (4) test pressures ranging 4-12 atmospheres; (5) gas flow velocities ranging 10-30 m/s (50-100 ft/sec); and (6) cyclic and steady-state exposure capabilities. The facility has historically been used to test coupon-size materials, including metals and ceramics. However, complex-shaped components have also been tested, including cylinders, airfoils, and film-cooled end walls. The facility has also been used to develop thin-film temperature measurement sensors.

  19. Analysis of Computational Fluid Dynamics and Particle Image Velocimetry Models of Distal-End Side-to-Side and End-to-Side Anastomoses for Coronary Artery Bypass Grafting in a Pulsatile Flow.

    PubMed

    Shintani, Yoshiko; Iino, Kenji; Yamamoto, Yoshitaka; Kato, Hiroki; Takemura, Hirofumi; Kiwata, Takahiro

    2017-12-25

    Intimal hyperplasia (IH) is a major cause of graft failure. Hemodynamic factors such as stagnation and disturbed blood flow are involved in IH formation. The aim of this study was to perform a comparative analysis of distal-end side-to-side (deSTS) and end-to-side (ETS) anastomoses using computational fluid dynamics (CFD), after validating the results via particle image velocimetry (PIV). Methods and Results: We investigated the characteristics of our target flow fields using CFD under steady and pulsatile flows. The CFD was validated via PIV under steady flow in a 10-times-actual-size model. The CFD analysis revealed a recirculation zone in the heel region in both the deSTS and ETS anastomoses, as well as at the distal end of the graft and just distal to the toe of the host artery in the deSTS anastomoses. The recirculation zone sizes changed with the phase shift. We found regions of low wall shear stress and high oscillatory shear index in the same areas. The PIV and CFD results were similar. The CFD and PIV analyses demonstrated the hemodynamic differences between the deSTS and ETS anastomoses; that is, in the deSTS anastomoses, the flow peripheral to the distal end of the graft, at the distal end, and just distal to the toe of the host artery is involved in IH formation.
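    The oscillatory shear index (OSI) referred to above has a standard definition, OSI = 0.5 * (1 - |time-averaged WSS| / time-averaged |WSS|), ranging from 0 for unidirectional flow to 0.5 for fully reversing flow. A minimal sketch over sampled wall-shear-stress values (illustrative data, not from the study):

    ```python
    def oscillatory_shear_index(wss_samples):
        """OSI = 0.5 * (1 - |mean WSS| / mean |WSS|).

        Returns 0.0 for purely unidirectional flow and 0.5 when the
        flow reverses so completely that the mean WSS is zero.
        """
        n = len(wss_samples)
        mean_wss = abs(sum(wss_samples)) / n
        mean_mag = sum(abs(t) for t in wss_samples) / n
        return 0.5 * (1.0 - mean_wss / mean_mag)

    print(oscillatory_shear_index([1.0, 1.2, 0.8]))        # unidirectional: 0.0
    print(oscillatory_shear_index([1.0, -1.0, 1.0, -1.0])) # fully reversing: 0.5
    ```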

  20. Flight Test 4 Preliminary Results: NASA Ames SSI

    NASA Technical Reports Server (NTRS)

    Isaacson, Doug; Gong, Chester; Reardon, Scott; Santiago, Confesor

    2016-01-01

    Realization of the expected proliferation of Unmanned Aircraft System (UAS) operations in the National Airspace System (NAS) depends on the development and validation of performance standards for UAS Detect and Avoid (DAA) Systems. RTCA Special Committee 228 is charged with leading the development of draft Minimum Operational Performance Standards (MOPS) for UAS DAA Systems. NASA, as a participating member of RTCA SC-228, is committed to supporting the development and validation of draft requirements as well as the safety substantiation and end-to-end assessment of DAA system performance. The Unmanned Aircraft System (UAS) Integration into the National Airspace System (NAS) Project conducted a flight test program, referred to as Flight Test 4, at Armstrong Flight Research Center from April to June 2016. Some of the test flights were dedicated to the NASA Ames-developed Detect and Avoid (DAA) System referred to as JADEM (Java Architecture for DAA Extensibility and Modeling). The encounter scenarios, which involved NASA's Ikhana UAS and a manned intruder aircraft, were designed to collect data on DAA system performance in real-world conditions and uncertainties with four different surveillance sensor systems. Flight Test 4 had four objectives: (1) validate DAA requirements in stressing cases that drive MOPS requirements, including a high-speed cooperative intruder, a low-speed non-cooperative intruder, a high-vertical-closure-rate encounter, and a Mode C/S-only intruder (i.e., without ADS-B); (2) validate the TCAS/DAA alerting and guidance interoperability concept in the presence of realistic sensor, tracking, and navigational errors and in multiple-intruder encounters against both cooperative and non-cooperative intruders; (3) validate Well Clear Recovery guidance in the presence of realistic sensor, tracking, and navigational errors; and (4) validate DAA alerting and guidance requirements in the presence of realistic sensor, tracking, and navigational errors. The results will be
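    For context, the Phase 1 DAA Well Clear boundary developed within SC-228 is commonly characterized by a horizontal threshold of about 4000 ft, a vertical threshold of about 450 ft, and a modified tau of 35 s. The simplified check below uses only the distance thresholds and omits the time-based tau term; the threshold values are assumptions drawn from the Phase 1 MOPS discussions, not from this abstract:

    ```python
    # Simplified, illustrative loss-of-DAA-well-clear check using only the
    # horizontal and vertical distance thresholds (no tau term).
    HMD_FT = 4000.0   # horizontal threshold (assumed Phase 1 value)
    H_FT = 450.0      # vertical threshold (assumed Phase 1 value)

    def loses_well_clear(horizontal_range_ft, vertical_sep_ft):
        """True when both separations are simultaneously inside the thresholds."""
        return horizontal_range_ft < HMD_FT and abs(vertical_sep_ft) < H_FT

    print(loses_well_clear(3000.0, 200.0))   # True: inside both thresholds
    print(loses_well_clear(6000.0, 200.0))   # False: horizontally well clear
    ```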

  1. NASA Update for Unidata Stratcomm

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris

    2017-01-01

    The NASA representative to the Unidata Strategic Committee presented a semiannual update on NASA's work with and use of Unidata technologies. The talk updated Unidata on the program of cloud computing prototypes underway for the Earth Observing System Data and Information System (EOSDIS). Also discussed was a trade study on the use of the Open-source Project for a Network Data Access Protocol (OPeNDAP) with Web Object Storage in the cloud.

  2. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing Modernization Program (HPCMP) and the NASA Advanced Supercomputing (NAS) Division, a study was conducted to assess the role of supercomputers in the computational aeroelasticity of aerospace vehicles. The study is based mostly on responses to a web-based questionnaire designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  3. NASA CORE (Central Operation of Resources for Educators) Educational Materials Catalog

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This educational materials catalog presents NASA CORE (Central Operation of Resources for Educators). The topics include: 1) Videocassettes (Aeronautics, Earth Resources, Weather, Space Exploration/Satellites, Life Sciences, Careers); 2) Slide Programs; 3) Computer Materials; 4) NASA Memorabilia/Miscellaneous; 5) NASA Educator Resource Centers; and 6) NASA Resources.

  4. Data Mining and Knowledge Discover - IBM Cognitive Alternatives for NASA KSC

    NASA Technical Reports Server (NTRS)

    Velez, Victor Hugo

    2016-01-01

    Cognitive computing tools with the potential to transform industries have been found favorable and profitable by several Directorates at NASA KSC. This study shows how cognitive computing systems can be useful to NASA when computers are trained to gain knowledge over time in the same way humans are. The applications created by IBM empower the artificial intelligence in a cognitive computing system by increasing knowledge through senses, learning, and the accumulation of events. Over the last decades, NASA has explored and applied artificial intelligence, specifically cognitive computing, in a few projects that adopt models similar to those proposed by IBM Watson. However, the use of semantic technologies by IBM's dedicated business unit allows these cognitive computing applications to outperform the existing in-house tools and to deliver analyses that facilitate decision making for managers and leads in a management information system.

  5. High Energy Astrophysics and Cosmology from Space: NASA's Physics of the Cosmos Program

    NASA Astrophysics Data System (ADS)

    Bautz, Marshall

    2017-01-01

    We summarize currently-funded NASA activities in high energy astrophysics and cosmology embodied in the NASA Physics of the Cosmos program, including updates on technology development and mission studies. The portfolio includes participation in a space mission to measure gravitational waves from a variety of astrophysical sources, including binary black holes, throughout most of cosmic history, and in another to map the evolution of black hole accretion by means of the accompanying X-ray emission. These missions are envisioned as collaborations with the European Space Agency's Large 3 (L3) and Athena programs, respectively. It also features definition of a large, NASA-led X-ray Observatory capable of tracing the surprisingly rapid growth of supermassive black holes during the first billion years of cosmic history. The program also includes the study of cosmic rays and high-energy gamma-ray photons resulting from a range of physical processes, and efforts to characterize both the physics of inflation associated with the birth of the universe and the nature of the dark energy that dominates its mass-energy content today. Finally, we describe the activities of the Physics of the Cosmos Program Analysis Group, which serves as a forum for community analysis and input to NASA.

  6. Computer Programs (Turbomachinery)

    NASA Technical Reports Server (NTRS)

    1978-01-01

    NASA computer programs are extensively used in the design of industrial equipment. Available from the Computer Software Management and Information Center (COSMIC) at the University of Georgia, these programs are employed as analysis tools in design, test, and development processes, providing savings in time and money. For example, two NASA computer programs are used daily in the design of turbomachinery by the Delaval Turbine Division, Trenton, New Jersey. The company uses the NASA spline interpolation routine for analysis of turbine blade vibration and of the performance of compressors and condensers. A second program, the NASA print plot routine, analyzes turbine rotor response and produces graphs for project reports. The photos show examples of Delaval test operations in which the computer programs play a part. In the large photo below, a 24-inch turbine blade is undergoing test; in the smaller photo, a steam turbine rotor is being prepared for stress measurements under actual operating conditions; the "spaghetti" is wiring for test instrumentation.

  7. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    Spacecraft operations team manager for the Cassini mission at Saturn, Julie Webster, second from right, talks about her experiences with Cassini during the Cassini NASA Social, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Also participating in the engineering panel were Cassini program manager at JPL, Earl Maize, right; guidance and control engineer for the Cassini mission at Saturn, Luis Andrade, second from left; and mission planner for the Cassini mission at Saturn, Molly Bittner, left. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  8. Cassini NASA Social

    NASA Image and Video Library

    2017-09-14

    Cassini project scientist at JPL, Linda Spilker, left; Cassini interdisciplinary Titan scientist at Cornell University, Jonathan Lunine, second from left; Cassini Composite Infrared Spectrometer (CIRS) instrument deputy principal investigator Connor Nixon, second from right; and Cassini assistant project science systems engineer Morgan Cable, right, participate in a Cassini science panel discussion during the Cassini NASA Social, Thursday, Sept. 14, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from all around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  9. NASA's Earth Science Enterprise's Water and Energy Cycle Focus Area

    NASA Astrophysics Data System (ADS)

    Entin, J. K.

    2004-05-01

    Understanding the water and energy cycles is critical to improving our understanding of climate change, as well as of its consequences. In addition, results from water and energy cycle research can help improve water resource management, agricultural efficiency, disaster management, and public health. To address this, NASA's Earth Science Enterprise (ESE) has an end-to-end Water and Energy Cycle Focus Area, which, along with the ESE's other five focus areas, will help NASA answer key Earth science questions. In an effort to build upon the pre-existing discipline programs, which focus on precipitation, radiation sciences, and terrestrial hydrology, NASA has begun planning efforts to create an implementation plan for integrative research to improve our understanding of the water and energy cycles. The basics of this planning process and the core aspects of the implementation plan will be discussed. Roadmaps will also be used to show the future direction of the entire focus area. Included in the discussion will be aspects of the end-to-end nature of the Focus Area that encompass current and potential activities to extend research results to operational agencies, enabling improved performance of policy and management decision support systems.

  10. NASA Applications of Molecular Nanotechnology

    NASA Technical Reports Server (NTRS)

    Globus, Al; Bailey, David; Han, Jie; Jaffe, Richard; Levit, Creon; Merkle, Ralph; Srivastava, Deepak

    1998-01-01

    Laboratories throughout the world are rapidly gaining atomically precise control over matter. As this control extends to an ever wider variety of materials, processes and devices, opportunities for applications relevant to NASA's missions will be created. This document surveys a number of future molecular nanotechnology capabilities of aerospace interest. Computer applications, launch vehicle improvements, and active materials appear to be of particular interest. We also list a number of applications for each of NASA's enterprises. If advanced molecular nanotechnology can be developed, almost all of NASA's endeavors will be radically improved. In particular, a sufficiently advanced molecular nanotechnology can arguably bring large scale space colonization within our grasp.

  11. Twenty-first Century Space Science in The Urban High School Setting: The NASA/John Dewey High School Educational Outreach Partnership

    NASA Astrophysics Data System (ADS)

    Fried, B.; Levy, M.; Reyes, C.; Austin, S.

    2003-05-01

    A unique and innovative partnership has recently developed between NASA and John Dewey High School, infusing Space Science into the curriculum. This partnership builds on an existing relationship with MUSPIN/NASA and their regional center at the City University of New York based at Medgar Evers College. As an outgrowth of the success and popularity of our Remote Sensing Research Program, sponsored by the New York State Committee for the Advancement of Technology Education (NYSCATE), and the National Science Foundation and stimulated by MUSPIN-based faculty development workshops, our science department has branched out in a new direction - the establishment of a Space Science Academy. John Dewey High School, located in Brooklyn, New York, is an innovative inner city public school with students of a diverse multi-ethnic population and a variety of economic backgrounds. Students were recruited from this broad spectrum, which covers the range of learning styles and academic achievement. This collaboration includes students of high, average, and below average academic levels, emphasizing participation of students with learning disabilities. In this classroom without walls, students apply the strategies and methodologies of problem-based learning in solving complicated tasks. The cooperative learning approach simulates the NASA method of problem solving, as students work in teams, share research and results. Students learn to recognize the complexity of certain tasks as they apply Earth Science, Mathematics, Physics, Technology and Engineering to design solutions. Their path very much follows the NASA model as they design and build various devices. Our Space Science curriculum presently consists of a one-year sequence of elective classes taken in conjunction with Regents-level science classes. This sequence consists of Remote Sensing, Planetology, Mission to Mars (NASA sponsored research program), and Microbiology, where future projects will be astronomy related. This

  12. Nasa's Ant-Inspired Swarmie Robots

    NASA Technical Reports Server (NTRS)

    Leucht, Kurt W.

    2016-01-01

    As humans push further beyond the grasp of earth, robotic missions in advance of human missions will play an increasingly important role. These robotic systems will find and retrieve valuable resources as part of an in-situ resource utilization (ISRU) strategy. They will need to be highly autonomous while maintaining high task performance levels. NASA Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots to be used as a ground-based research platform for ISRU missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in a previously unmapped environment and return those resources to a central site. This talk will guide the audience through the Swarmie robot project from its conception by students in a New Mexico research lab to its robot trials in an outdoor parking lot at NASA. The software technologies and techniques used on the project will be discussed, as well as various challenges and solutions that were encountered by the development team along the way.

  13. Cyberinfrastructure to support Real-time, End-to-End, High Resolution, Localized Forecasting

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Lindholm, D.; Baltzer, T.; Domenico, B.

    2004-12-01

    From natural disasters such as flooding and forest fires to man-made disasters such as toxic gas releases, the impact of weather-influenced severe events on society can be profound. Understanding, predicting, and mitigating such local, mesoscale events calls for a cyberinfrastructure to integrate multidisciplinary data, tools, and services, as well as the capability to generate and use high resolution data (such as wind and precipitation) from localized models. The need for such end-to-end systems -- including data collection, distribution, integration, assimilation, regionalized mesoscale modeling, analysis, and visualization -- has been realized to some extent in many academic and quasi-operational environments, especially for atmospheric sciences data. However, many challenges still remain in the integration and synthesis of data from multiple sources and in the development of interoperable data systems and services across those disciplines. Over the years, the Unidata Program Center has developed several tools that have either directly or indirectly facilitated these local modeling activities. For example, the community is using Unidata technologies such as the Internet Data Distribution (IDD) system, Local Data Manager (LDM), decoders, netCDF libraries, Thematic Realtime Environmental Distributed Data Services (THREDDS), and the Integrated Data Viewer (IDV) in their real-time prediction efforts. In essence, these technologies for data reception and processing, local and remote access, cataloging, and analysis and visualization, coupled with technologies from others in the community, are becoming the foundation of a cyberinfrastructure to support an end-to-end regional forecasting system. To build on these capabilities, the Unidata Program Center is pleased to be a significant contributor to the Linked Environments for Atmospheric Discovery (LEAD) project, an NSF-funded multi-institutional large Information Technology Research effort. The goal of LEAD is to create an

  14. Programmatic status of NASA's CSTI high capacity power Stirling space power converter program

    NASA Technical Reports Server (NTRS)

    Dudenhoefer, James E.

    1990-01-01

    An overview is presented of the NASA Lewis Research Center Free-Piston Stirling Space Power Converter Technology Development Program. This work is being conducted under NASA's Civil Space Technology Initiative (CSTI). The goal of the CSTI High Capacity Power element is to develop the technology base needed to meet the long duration, high capacity power requirements for future NASA space initiatives. Efforts are focused upon increasing system thermal and electric energy conversion efficiency at least fivefold over current SP-100 technology, and on achieving systems that are compatible with space nuclear reactors. The status of test activities with the Space Power Research Engine (SPRE) is discussed. Design deficiencies are gradually being corrected and the power converter is now outputting 11.5 kWe at a temperature ratio of 2 (design output is 12.5 kWe). Detail designs were completed for the 1050 K Component Test Power Converter (CTPC). The success of these and future designs is dependent upon supporting research and technology efforts including heat pipes, gas bearings, superalloy joining technologies and high efficiency alternators. An update of progress in these technologies is provided.

  15. Initial Flight Test of the Production Support Flight Control Computers at NASA Dryden Flight Research Center

    NASA Technical Reports Server (NTRS)

    Carter, John; Stephenson, Mark

    1999-01-01

    The NASA Dryden Flight Research Center has completed the initial flight test of a modified set of F/A-18 flight control computers that gives the aircraft a research control law capability. The production support flight control computers (PSFCC) provide an increased capability for flight research in the control law, handling qualities, and flight systems areas. The PSFCC feature a research flight control processor that is "piggybacked" onto the baseline F/A-18 flight control system. This research processor allows for pilot selection of research control law operation in flight. To validate flight operation, a replication of a standard F/A-18 control law was programmed into the research processor and flight-tested over a limited envelope. This paper provides a brief description of the system, summarizes the initial flight test of the PSFCC, and describes future experiments for the PSFCC.

  16. NASA Data Archive Evaluation

    NASA Technical Reports Server (NTRS)

    Holley, Daniel C.; Haight, Kyle G.; Lindstrom, Ted

    1997-01-01

    The purpose of this study was to expose a range of naive individuals to the NASA Data Archive and to obtain feedback from them, with the goal of learning how useful people with varied backgrounds would find the Archive for research and other purposes. We processed 36 subjects in four experimental categories, designated in this report as C+R+, C+R-, C-R+ and C-R-, for computer experienced researchers, computer experienced non-researchers, non-computer experienced researchers, and non-computer experienced non-researchers, respectively. This report includes an assessment of general patterns of subject responses to the various aspects of the NASA Data Archive. Some of the aspects examined were interface-oriented, addressing such issues as whether the subject was able to locate information, figure out how to perform desired information retrieval tasks, etc. Other aspects were content-related. In doing these assessments, answers given to different questions were sometimes combined. This practice reflects the tendency of the subjects to provide answers expressing their experiences across question boundaries. Patterns of response are cross-examined by subject category in order to bring out deeper understandings of why subjects reacted the way they did to the archive. After the general assessment, there will be a more extensive summary of the replies received from the test subjects.

  17. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    PubMed

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.
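    The start/stop/pause/roll-back capability described above can be illustrated with a toy compartmental model. This is a generic discrete-time SIR sketch with state checkpointing, not the individual-based model used by the environment; all parameter values are illustrative:

    ```python
    # Toy discrete-time SIR model with checkpoint/rollback, illustrating the
    # kind of interactive steering (pause, roll back, intervene) described above.
    class SIRSimulation:
        def __init__(self, s, i, r, beta=0.3, gamma=0.1):
            self.state = (s, i, r)          # susceptible, infected, recovered
            self.beta, self.gamma = beta, gamma
            self.checkpoints = []

        def step(self):
            s, i, r = self.state
            n = s + i + r
            new_inf = self.beta * s * i / n
            new_rec = self.gamma * i
            self.state = (s - new_inf, i + new_inf - new_rec, r + new_rec)

        def checkpoint(self):
            self.checkpoints.append(self.state)

        def rollback(self):
            self.state = self.checkpoints.pop()

    def peak_infected(sim, steps):
        peak = sim.state[1]
        for _ in range(steps):
            sim.step()
            peak = max(peak, sim.state[1])
        return peak

    sim = SIRSimulation(990.0, 10.0, 0.0)
    sim.checkpoint()                      # save state before the experiment
    baseline_peak = peak_infected(sim, 100)
    sim.rollback()                        # roll back to the saved state
    sim.beta = 0.15                       # hypothetical contact-reduction intervention
    intervention_peak = peak_infected(sim, 100)
    print(intervention_peak < baseline_peak)   # True: intervention lowers the peak
    ```

    The checkpoint list plays the role of the system's state-assessment feature: an analyst can inspect the saved state, formulate a dynamic intervention, and re-run from that point.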

  18. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology

    PubMed Central

    Deodhar, Suruchi; Bisset, Keith R.; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V.

    2014-01-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity. PMID:25530914

  19. Opera: reconstructing optimal genomic scaffolds with high-throughput paired-end sequences.

    PubMed

    Gao, Song; Sung, Wing-Kin; Nagarajan, Niranjan

    2011-11-01

    Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/ ).
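    The gap-size estimation that Opera computes exactly via quadratic programming can be illustrated with the basic single-gap case: for a read pair spanning two adjacent contigs, the gap is roughly the library insert size minus the portions of the pair that lie within each contig. A hypothetical sketch, not Opera's actual formulation:

    ```python
    def estimate_gap(insert_size, contig1_len, read1_start, read2_end):
        """Estimate the gap between two adjacent, forward-oriented contigs
        from one spanning read pair.

        read1_start: leftmost mapping coordinate of read 1 on contig 1
        read2_end:   rightmost mapping coordinate of read 2 on contig 2
        The pair spans (contig1_len - read1_start) bases of contig 1,
        read2_end bases of contig 2, and the gap in between.
        """
        return insert_size - (contig1_len - read1_start) - read2_end

    def mean_gap(insert_size, spanning_pairs, contig1_len):
        """Average per-pair estimates to reduce insert-size noise."""
        gaps = [estimate_gap(insert_size, contig1_len, s, e)
                for s, e in spanning_pairs]
        return sum(gaps) / len(gaps)

    # Insert size 500 bp; read 1 starts 100 bp from the end of contig 1,
    # read 2 ends 150 bp into contig 2 -> gap of about 250 bp.
    print(estimate_gap(500, 1000, 900, 150))  # 250
    ```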

  20. NASA Symposium 76. [opportunities for minorities and women in NASA programs

    NASA Technical Reports Server (NTRS)

    1976-01-01

    New Mexico State University and the National Aeronautics and Space Administration hosted a symposium to promote NASA's efforts to increase the available pool of minority and women scientists and engineers to meet affirmative hiring goals. The conferences also provided an opportunity for key NASA officials to meet with appropriate officials of participating institutions to stimulate greater academic interest (among professors and students) in NASA's research and development programs. Minority aerospace scientists and engineers had the opportunity to interact with the minority community, particularly with young people at the junior high, high school, and college levels. One aim was to raise the minority community's level of understanding regarding NASA's Regional Distribution System for storage and retrieval of scientific and technical information.

  1. Development of a Dynamic, End-to-End Free Piston Stirling Convertor Model

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Gerber, Scott S.; Roth, Mary Ellen

    2004-01-01

    A dynamic model for a free-piston Stirling convertor is being developed at the NASA Glenn Research Center. The model is an end-to-end system model that includes the cycle thermodynamics, the dynamics, and the electrical aspects of the system. The subsystems of interest are the heat source, the springs, the moving masses, the linear alternator, the controller, and the end-user load. The envisioned use of the model will be in evaluating how changes in a subsystem could affect the operation of the convertor. The model under development will speed the evaluation of improvements to a subsystem and aid in determining areas in which the most significant improvements may be found. One of the first uses of the end-to-end model will be in the development of controller architectures. Another related area is in evaluating changes to details in the linear alternator.
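    At its simplest, the dynamics portion of such an end-to-end model reduces the piston to a damped mass-spring system, with the linear alternator contributing an electrical damping term. A highly simplified sketch (illustrative parameter values, not from the NASA model):

    ```python
    # Minimal piston dynamics sketch: m*x'' = -k*x - (c_mech + c_alt)*x',
    # integrated with semi-implicit Euler. The alternator is reduced to a
    # single linear damping coefficient c_alt; all values are illustrative.
    def simulate_piston(m=1.0, k=400.0, c_mech=0.5, c_alt=2.0,
                        x0=0.01, steps=10000, dt=1e-4):
        x, v = x0, 0.0
        c = c_mech + c_alt
        for _ in range(steps):
            a = (-k * x - c * v) / m
            v += a * dt          # update velocity first (semi-implicit Euler)
            x += v * dt
        return x

    # With damping present, the oscillation decays toward x = 0.
    print(abs(simulate_piston()) < 0.01)   # True: amplitude below initial 0.01
    ```

    In the full system model, c_alt would be replaced by the coupled alternator/controller/load electrical equations, which is precisely the kind of subsystem swap the end-to-end model is meant to make easy to evaluate.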

  2. NASA Simulation Capabilities

    NASA Technical Reports Server (NTRS)

    Grabbe, Shon R.

    2017-01-01

    This presentation provides a high-level overview of NASA's Future ATM Concepts Evaluation Tool (FACET) with a high-level description of the system's inputs and outputs. This presentation is designed to support the joint simulations that NASA and the Chinese Aeronautical Establishment (CAE) will conduct under an existing Memorandum of Understanding.

  3. Establishing Esri ArcGIS Enterprise Platform Capabilities to Support Response Activities of the NASA Earth Science Disasters Program

    NASA Astrophysics Data System (ADS)

    Molthan, A.; Seepersad, J.; Shute, J.; Carriere, L.; Duffy, D.; Tisdale, B.; Kirschbaum, D.; Green, D. S.; Schwizer, L.

    2017-12-01

    NASA's Earth Science Disasters Program promotes the use of Earth observations to improve the prediction of, preparation for, response to, and recovery from natural and technological disasters. NASA Earth observations and those of domestic and international partners are combined with in situ observations and models by NASA scientists and partners to develop products supporting disaster mitigation, response, and recovery activities among several end-user partners. These products are accompanied by training to ensure proper integration and use of these materials in their organizations. Many products are integrated along with other observations available from other sources in GIS-capable formats to improve situational awareness and response efforts before, during and after a disaster. Large volumes of NASA observations support the generation of disaster response products by NASA field center scientists, partners in academia, and other institutions. For example, a prediction of high streamflows and inundation from a NASA-supported model may provide spatial detail of flood extent that can be combined with GIS information on population density, infrastructure, and land value to facilitate a prediction of who will be affected, and the economic impact. To facilitate the sharing of these outputs in a common framework that can be easily ingested by downstream partners, the NASA Earth Science Disasters Program partnered with Esri and the NASA Center for Climate Simulation (NCCS) to establish a suite of Esri/ArcGIS services to support the dissemination of routine and event-specific products to end users. This capability has been demonstrated to key partners including the Federal Emergency Management Agency using a case-study example of Hurricane Matthew, and will also help to support future domestic and international disaster events. 
The Earth Science Disasters Program has also established a longer-term vision to leverage scientists' expertise in the development and delivery of
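
    The flood-impact overlay described above can be illustrated in miniature: a flood-extent mask intersected with a co-registered population grid gives a first-order estimate of the affected population. The grid and data below are synthetic; this stands in for, rather than reproduces, the NCCS/Esri workflow.

```python
import random

# Toy overlay: a flood-extent mask combined with a co-registered
# population grid to estimate affected population. Synthetic data only.
random.seed(0)
cells = 100 * 100
flooded = [random.random() > 0.8 for _ in range(cells)]      # ~20% flooded
population = [random.randint(0, 500) for _ in range(cells)]  # people/cell

affected = sum(p for p, f in zip(population, flooded) if f)
print(f"estimated affected population: {affected}")
```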

  4. Alliance for Computational Science Collaboration: HBCU Partnership at Alabama A&M University Continuing High Performance Computing Research and Education at AAMU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Xiaoqing; Deng, Z. T.

    2009-11-10

    This is the final report for the Department of Energy (DOE) project DE-FG02-06ER25746, entitled "Continuing High Performance Computing Research and Education at AAMU". This three-year project started on August 15, 2006, and ended on August 14, 2009. The objective of this project was to enhance high performance computing research and education capabilities at Alabama A&M University (AAMU), and to train African-American and other minority students and scientists in the computational science field for eventual employment with DOE. AAMU successfully completed all the proposed research and educational tasks. Through the support of DOE, AAMU was able to provide opportunities to minority students through summer internships and the DOE computational science scholarship program. In the past three years, AAMU (1) supported three graduate research assistants in image processing for a hypersonic shockwave control experiment and in computational-science-related areas; (2) recruited and provided full financial support for six AAMU undergraduate summer research interns to participate in the Research Alliance in Math and Science (RAMS) program at Oak Ridge National Laboratory (ORNL); (3) awarded 30 highly competitive DOE High Performance Computing Scholarships ($1,500 each) to qualified top AAMU undergraduate students in science and engineering majors; (4) improved the high performance computing laboratory at AAMU with the addition of three high performance Linux workstations; and (5) conducted image analysis for the electromagnetic shockwave control experiment and computation of shockwave interactions to verify the design and operation of the AAMU supersonic wind tunnel. The high performance computing research and education activities at AAMU had a great impact on minority students.
As praised by the Accreditation Board for Engineering and Technology (ABET) in 2009: "The work on high performance computing that is funded by the Department of Energy provides scholarships to undergraduate students

  5. Cassini End of Mission Preview

    NASA Image and Video Library

    2017-09-13

    Principal investigator for the Ion and Neutral Mass Spectrometer (INMS) at the Southwest Research Institute, Hunter Waite, points to the location of the INMS during a press conference previewing Cassini's End of Mission, Wednesday, Sept. 13, 2017 at NASA's Jet Propulsion Laboratory in Pasadena, California. Since its arrival in 2004, the Cassini-Huygens mission has been a discovery machine, revolutionizing our knowledge of the Saturn system and captivating us with data and images never before obtained with such detail and clarity. On Sept. 15, 2017, operators will deliberately plunge the spacecraft into Saturn, with Cassini gathering science until the end. The “plunge” ensures Saturn’s moons will remain pristine for future exploration. During Cassini’s final days, mission team members from around the world gathered at NASA’s Jet Propulsion Laboratory, Pasadena, California, to celebrate the achievements of this historic mission. Photo Credit: (NASA/Joel Kowsky)

  6. Data handling and visualization for NASA's science programs

    NASA Technical Reports Server (NTRS)

    Bredekamp, Joseph H. (Editor)

    1995-01-01

    Advanced information systems capabilities are essential to conducting NASA's scientific research mission. Access to these capabilities is no longer a luxury for a select few within the science community, but rather an absolute necessity for carrying out scientific investigations. The dependence on high performance computing and networking, as well as ready and expedient access to science data, metadata, and analysis tools, is the fundamental underpinning for the entire research endeavor. At the same time, advances in the whole range of information technologies continue on an almost explosive growth path, reaching beyond the research community to affect the population as a whole. Capitalizing on and exploiting these advances is critical to the continued success of space science investigations. NASA must remain abreast of developments in the field and strike an appropriate balance between being a smart buyer and a direct investor in the technology which serves its unique requirements. Another key theme deals with the need for the space and computer science communities to collaborate as partners to more fully realize the potential of information technology in the space science research environment.

  7. NASA Spacecraft Monitors Flooding in Algeria

    NASA Image and Video Library

    2012-03-09

    Extremely heavy rains fell at the end of February 2012 in the northern Algerian province of El Tarf, near the Tunisian border. The rainfall total was the greatest recorded in the last 30 years. This image is from NASA Terra spacecraft.

  8. The NASA Commercial Crew Program (CCP) Mission Assurance Process

    NASA Technical Reports Server (NTRS)

    Canfield, Amy

    2016-01-01

    In 2010, NASA established the Commercial Crew Program in order to provide human access to the International Space Station and low Earth orbit via the commercial (non-governmental) sector. A particular challenge for NASA has been how to determine that a commercial provider's transportation system complies with Programmatic safety requirements. The process used in this determination is the Safety Technical Review Board, which reviews and approves provider-submitted Hazard Reports. One significant product of the review is a set of hazard control verifications. In past NASA programs, 100 percent of these safety-critical verifications were typically confirmed by NASA. The traditional Safety and Mission Assurance (SMA) model does not support the nature of the Commercial Crew Program. To that end, NASA SMA is implementing a Risk Based Assurance (RBA) process to determine which hazard control verifications require NASA authentication. Additionally, a Shared Assurance Model is being developed to efficiently use the available resources to execute the verifications. This paper describes the evolution of the CCP Mission Assurance process from the beginning of the Program to its current incarnation. Topics covered include a short history of the CCP; the development of the Programmatic mission assurance requirements; the current safety review process; a description of the RBA process and its products; and a description of the Shared Assurance Model.

  9. Computer Interactives for the Mars Atmosphere and Volatile EvolutioN (MAVEN) Mission through NASA's "Project Spectra!"

    NASA Astrophysics Data System (ADS)

    Wood, E. L.

    2014-12-01

    "Project Spectra!" is a standards-based E-M spectrum and engineering program that includes paper-and-pencil activities as well as Flash-based computer games that help students solidify understanding of high-level planetary and solar physics. Using computer interactive games, students experience and manipulate information, making abstract concepts accessible, solidifying understanding, and enhancing retention of knowledge. Since students can choose what to watch and explore, the interactives accommodate a broad range of learning styles. Students can go back and forth through the interactives if they've missed a concept or wish to view something again. In the end, students are asked critical thinking questions and conduct web-based research. As part of the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission education programming, we've developed two new interactives. The MAVEN mission will study volatiles in the upper atmosphere to help piece together Mars' climate history. In the first interactive, students explore black body radiation, albedo, and a simplified greenhouse effect to establish what factors contribute to overall planetary temperature. Students design a planet that is able to maintain liquid water on the surface. In the second interactive, students are asked to consider conditions needed for Mars to support water on the surface, keeping some variables fixed. Ideally, students will walk away with the very basic and critical elements required for climate studies, which has far-reaching implications beyond the study of Mars. These interactives were pilot tested at Arvada High School in Colorado.
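
    The physics behind the first interactive reduces to a familiar energy-balance calculation: absorbed sunlight depends on solar flux and albedo, and a simple one-layer greenhouse factor warms the surface above the effective temperature. A rough sketch, with approximate values for Earth (the exact formulation in the interactive may differ):

```python
# Energy-balance sketch: effective temperature from solar flux and albedo,
# plus a one-layer greenhouse correction. Input values are approximate.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(solar_flux, albedo, greenhouse=0.0):
    """Surface temperature in K; greenhouse in [0, 1) is the fraction
    of outgoing IR absorbed by the atmosphere (one-layer model)."""
    absorbed = solar_flux * (1.0 - albedo) / 4.0   # averaged over the sphere
    t_eff = (absorbed / SIGMA) ** 0.25
    return t_eff / (1.0 - greenhouse / 2.0) ** 0.25

# Earth: S ~ 1361 W/m^2, albedo ~ 0.3
print(equilibrium_temp(1361, 0.3))        # ~255 K without an atmosphere
print(equilibrium_temp(1361, 0.3, 0.77))  # ~288 K with a greenhouse factor
```

    Sliding albedo or the greenhouse factor and watching the temperature respond is exactly the design loop the interactive asks students to explore.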

  10. The IBM PC at NASA Ames

    NASA Technical Reports Server (NTRS)

    Peredo, James P.

    1988-01-01

    Like many large companies, Ames relies heavily on its computing power to get work done. And, like many other large companies, Ames has found the IBM PC a reliable tool and uses it for many of the same types of functions. Presentation and clarification needs demand much of graphics packages, while programming and text editing call for simple yet powerful tools. The monumental amounts of data that Ames must store for NASA's scientists and users demand database packages that are both large and easy to use. Access to the Micom Switching Network combines the power of the IBM PC with the capabilities of other computers and mainframes and allows users to communicate electronically. These four primary capabilities of the PC are vital to the needs of NASA's users and help support the vast amounts of work done by NASA employees.

  11. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  12. NASA Administrative Data Base Management Systems, 1984

    NASA Technical Reports Server (NTRS)

    Radosevich, J. D. (Editor)

    1984-01-01

    Strategies for converting to a data base management system (DBMS) and the implementation of the software packages necessary are discussed. Experiences with DBMS at various NASA centers are related, including Langley's ADABAS/NATURAL and the NEMS subsystem of the NASA metrology information system. The value of the integrated workstation with a personal computer is explored.

  13. NASA/Army Rotorcraft Transmission Research, a Review of Recent Significant Accomplishments

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    1994-01-01

    A joint helicopter transmission research program between NASA Lewis Research Center and the U.S. Army Research Lab has existed since 1970. Research goals are to reduce weight and noise while increasing life, reliability, and safety. These research goals are achieved by the NASA/Army Mechanical Systems Technology Branch through both in-house research and cooperative research projects with university and industry partners. Some recent significant technical accomplishments produced by this cooperative research are reviewed. The following research projects are reviewed: oil-off survivability of tapered roller bearings, design and evaluation of high contact ratio gearing, finite element analysis of spiral bevel gears, computer numerical control grinding of spiral bevel gears, gear dynamics code validation, computer program for life and reliability of helicopter transmissions, planetary gear train efficiency study, and the Advanced Rotorcraft Transmission (ART) program.

  14. NASA Studies Lightning Storms Using High-Flying, Uninhabited Vehicle

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A NASA team studying the causes of electrical storms and their effects on our home planet achieved a milestone on August 21, 2002, completing the study's longest-duration research flight and monitoring four thunderstorms in succession. Based at the Naval Air Station Key West, Florida, researchers with the Altus Cumulus Electrification Study (ACES) used the Altus II remotely-piloted aircraft to study thunderstorms in the Atlantic Ocean off Key West and west of the Everglades. The ACES lightning study used the Altus II twin turbo uninhabited aerial vehicle, built by General Atomics Aeronautical Systems, Inc. of San Diego. The Altus II was chosen for its slow flight speed of 75 to 100 knots (80 to 115 mph), long endurance, and high-altitude flight (up to 65,000 feet). These qualities gave the Altus II the ability to fly near and around thunderstorms for long periods of time, allowing investigations to be conducted over the entire life cycle of storms. The vehicle has a wing span of 55 feet and a payload capacity of over 300 lbs. With dual goals of gathering weather data safely and testing the adaptability of the uninhabited aircraft, the ACES study is a collaboration among the Marshall Space Flight Center, the University of Alabama in Huntsville, NASA's Goddard Space Flight Center in Greenbelt, Maryland, Pennsylvania State University in University Park, and General Atomics Aeronautical Systems, Inc.

  15. Technological Innovations from NASA

    NASA Technical Reports Server (NTRS)

    Pellis, Neal R.

    2006-01-01

    The challenge of human space exploration places demands on technology that push concepts and development to the leading edge. In biotechnology and biomedical equipment development, NASA science has been the seed for numerous innovations, many of which are in the commercial arena. The biotechnology effort has led to rational drug design, analytical equipment, and cell culture and tissue engineering strategies. Biomedical research and development has resulted in medical devices that enable diagnosis and treatment advances. NASA biomedical developments are exemplified in the new laser light scattering analysis for cataracts, the axial flow left ventricular-assist device, non-contact electrocardiography, and the guidance system for LASIK surgery. Many more developments are in progress. NASA will continue to advance technologies, incorporating new approaches from basic and applied research, nanotechnology, computational modeling, and database analyses.

  16. New coplanar waveguide to rectangular waveguide end launcher

    NASA Technical Reports Server (NTRS)

    Simons, R. N.; Taub, S. R.

    1992-01-01

    A new coplanar waveguide to rectangular waveguide end launcher is experimentally demonstrated. The end launcher operates over the Ka-band frequencies that are designated for the NASA Advanced Communication Technology Satellite uplink. The measured insertion loss and return loss are better than 0.5 and -10 dB, respectively.
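
    For context, the quoted figures can be restated in reflection terms: a return loss of -10 dB corresponds to a voltage reflection coefficient magnitude of about 0.32 and a VSWR of about 1.9. A small illustrative helper (not part of the original work):

```python
# Convert a return-loss figure (dB) into reflection coefficient magnitude
# and VSWR -- standard definitions, shown here purely for illustration.
def reflection_stats(return_loss_db):
    """|Gamma| and VSWR from return loss in dB (sign-insensitive)."""
    gamma = 10 ** (-abs(return_loss_db) / 20.0)  # |Gamma| = 10^(-RL/20)
    vswr = (1 + gamma) / (1 - gamma)
    return gamma, vswr

gamma, vswr = reflection_stats(-10.0)
print(f"|Gamma| = {gamma:.3f}, VSWR = {vswr:.2f}")
```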

  17. Emerging and Future Computing Paradigms and Their Impact on the Research, Training, and Design Environments of the Aerospace Workforce

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    2003-01-01

    The document contains the proceedings of the training workshop on Emerging and Future Computing Paradigms and their impact on the Research, Training and Design Environments of the Aerospace Workforce. The workshop was held at NASA Langley Research Center, Hampton, Virginia, March 18 and 19, 2003. The workshop was jointly sponsored by Old Dominion University and NASA. Workshop attendees came from NASA, other government agencies, industry, and universities. The objectives of the workshop were to a) provide broad overviews of the diverse activities related to new computing paradigms, including grid computing, pervasive computing, high-productivity computing, and the IBM-led autonomic computing; and b) identify future directions for research that have high potential for future aerospace workforce environments. The format of the workshop included twenty-one half-hour overview-type presentations and three exhibits by vendors.

  18. Computer simulation of multiple pilots flying a modern high performance helicopter

    NASA Technical Reports Server (NTRS)

    Zipf, Mark E.; Vogt, William G.; Mickle, Marlin H.; Hoelzeman, Ronald G.; Kai, Fei; Mihaloew, James R.

    1988-01-01

    A computer simulation of a human response pilot mechanism within the flight control loop of a high-performance modern helicopter is presented. A human response mechanism, implemented by a low-order, linear transfer function, is used in a decoupled single-variable configuration that exploits the dominant vehicle characteristics by associating cockpit controls and instrumentation with specific vehicle dynamics. Low-order helicopter models obtained from evaluations of the time and frequency domain responses of a nonlinear simulation model, provided by NASA Lewis Research Center, are presented and considered in the discussion of the pilot development. Pilot responses and reactions to test maneuvers are presented and discussed. Higher-level implementations using the pilot mechanisms are discussed and considered for their use in a comprehensive control structure.
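
    A low-order, linear human-response model of the kind described can be illustrated as a gain plus first-order lag closed around a simple rate-response vehicle. The gains, time constants, and plant below are illustrative assumptions, not values from the NASA Lewis simulation model.

```python
# Sketch of a single-axis pilot-in-the-loop simulation: the pilot is a
# first-order lag with gain (a common low-order human-operator model),
# closed around a pure-integrator vehicle response. Values are illustrative.
def closed_loop_step(kp=2.0, tau=0.3, t_end=10.0, dt=0.01):
    """Pilot: u' = (kp*e - u)/tau; vehicle: y' = u; unit step command."""
    y, u = 0.0, 0.0
    out = []
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                     # tracking error seen by the pilot
        u += dt * (kp * e - u) / tau    # pilot lag dynamics
        y += dt * u                     # vehicle (rate) response
        out.append(y)
    return out

resp = closed_loop_step()
print(f"final value: {resp[-1]:.3f}")  # settles near the command of 1.0
```

    Swapping in a different plant model per axis mirrors the decoupled single-variable configuration the abstract describes.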

  19. Space Images for NASA/JPL

    NASA Technical Reports Server (NTRS)

    Boggs, Karen; Gutheinz, Sandy C.; Watanabe, Susan M.; Oks, Boris; Arca, Jeremy M.; Stanboli, Alice; Peez, Martin; Whatmore, Rebecca; Kang, Minliang; Espinoza, Luis A.

    2010-01-01

    Space Images for NASA/JPL is an Apple iPhone application that allows the general public to access featured images from the Jet Propulsion Laboratory (JPL). A back-end infrastructure stores, tracks, and retrieves space images from the JPL Photojournal Web server, and catalogs the information into a streamlined rating infrastructure.

  20. NASA Environmentally Responsible Aviation's Highly-Loaded Front Block Compressor Demonstration

    NASA Technical Reports Server (NTRS)

    Celestina, Mark

    2017-01-01

    The ERA project was created in 2009 as part of NASA's Aeronautics Research Mission Directorate's (ARMD) Integrated Aviation Systems Program (IASP). The purpose of the ERA project was to explore and document the feasibility, benefit, and technical risk of vehicle concepts and enabling technologies to reduce aviation's impact on the environment. The metrics for this technology are given in Figure 1, with the N+2 metrics highlighted in green. It is anticipated that the United States air transportation system will continue to expand significantly over the next few decades, adversely impacting the environment unless new technology is incorporated to simultaneously reduce nitrogen oxides (NOx), noise, and fuel consumption. In order to achieve the overall goals and meet the technology insertion challenges, these goals were divided into technical challenges to be achieved during the execution of the ERA project. Technical challenges were accomplished through test campaigns conducted by Integrated Technology Demonstrations (ITDs). ERA's technical performance period ended in 2015.