Sample records for computer storage space

  1. Shared Storage Usage Policy | High-Performance Computing | NREL

    Science.gov Websites

    Shared Storage Usage Policy Shared Storage Usage Policy To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. /projects NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource and project

  2. Storage and network bandwidth requirements through the year 2000 for the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen

    1996-01-01

    The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.

  3. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Regression analyses on tabular aerodynamic data are provided, along with a representative aerodynamic model for coefficient estimation; this also reduced the storage requirements for the 'normal' model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed. The routines were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.

  4. A Science Cloud: OneSpaceNet

    NASA Astrophysics Data System (ADS)

    Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.

    2010-12-01

    The main methodologies of Solar-Terrestrial Physics (STP) so far have been theoretical, experimental and observational, and computer-simulation approaches. Recently, "informatics" is expected to become a new (fourth) approach to STP studies: a methodology for analyzing large-scale data (observational data and computer simulation data) to obtain new findings using a variety of data-processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are now developing a new research environment named "OneSpaceNet". OneSpaceNet is a cloud-computing environment specialized for scientific work, which connects many researchers through a high-speed network (JGN: Japan Gigabit Network). JGN is a wide-area backbone network operated by NICT; it provides a 10G network and many access points (APs) over Japan. OneSpaceNet also provides rich computer resources for research, such as supercomputers, large-scale data storage, licensed applications, visualization devices (like a tiled display wall: TDW), databases/DBMS, cluster computers (4-8 nodes) for data processing, and communication devices. What is remarkable about using the science cloud is that a user needs to prepare only a terminal (a low-cost PC): once the PC is connected to JGN2plus, the user can make full use of the rich resources of the science cloud. Using communication devices such as video-conference systems, streaming and reflector servers, and media players, users on OneSpaceNet can carry out research communications as if they belonged to the same laboratory: they are members of a virtual laboratory. The specifications of the computer resources on OneSpaceNet are as follows. The data storage we have developed so far amounts to almost 1 PB, and the number of data files managed on the cloud storage is now more than 40,000,000. Notably, the disks forming the large-scale storage are distributed over 5 data centers across Japan, yet the storage system performs as one disk. Three supercomputers are allocated on the cloud: one in Tokyo, one in Osaka, and one in Nagoya. A user's simulation job data on any of the supercomputers are saved to the cloud data storage (in the same directory); it is a kind of virtual computing environment. The tiled display wall has 36 panels acting as one display, with a resolution as large as 18000x4300 pixels; this is enough to preview or analyze large-scale computer simulation data, and it allows many researchers to view multiple images (e.g., 100 pictures) on one screen together. In our talk we also present a brief report of initial results using OneSpaceNet for Global MHD simulations as an example of successful use of our science cloud: (i) ultra-high time-resolution visualization of Global MHD simulations on the large-scale storage and parallel-processing system on the cloud, (ii) a database of real-time Global MHD simulations and statistical analyses of the data, and (iii) a 3D Web service of Global MHD simulations.

  5. Data systems and computer science space data systems: Onboard networking and testbeds

    NASA Technical Reports Server (NTRS)

    Dalton, Dan

    1991-01-01

    The technical objectives are to develop high-performance, space-qualifiable, onboard computing, storage, and networking technologies. The topics are presented in viewgraph form and include the following: justification; technology challenges; program description; and state-of-the-art assessment.

  6. Mass Storage System Upgrades at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Tarshish, Adina; Salmon, Ellen; Macie, Medora; Saletta, Marty

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) provides supercomputing and mass storage services to over 1200 Earth and space scientists. During the past two years, the mass storage system at the NCCS underwent many changes, both major and minor. Tape drives, silo control software, and the mass storage software itself were upgraded, and the mass storage platform was upgraded twice. Some of these upgrades were aimed at achieving year-2000 compliance, while others were simply upgrades to newer and better technologies. In this paper we describe these upgrades.

  7. Square Footage Requirements for Use in Developing the Local Facilities Plans and State Capital Outlay Applications for Funding.

    ERIC Educational Resources Information Center

    Georgia State Dept. of Education, Atlanta. Facilities Services Unit.

    This document presents the space requirements for Georgia's elementary, middle, and high schools. All square footage requirements are computed by using inside dimensions of a room; the square footage of support spaces in suites may be included when computing the square footage of the suite. Examples of support spaces include storage rooms,…

  8. Space station data system analysis/architecture study. Task 2: Options development DR-5. Volume 1: Technology options

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The second task in the Space Station Data System (SSDS) Analysis/Architecture Study is the development of an information base that will support the conduct of trade studies and provide sufficient data to make key design/programmatic decisions. This volume identifies the preferred options in the technology category and characterizes these options with respect to performance attributes, constraints, cost, and risk. The technology category includes advanced materials, processes, and techniques that can be used to enhance the implementation of SSDS design structures. The specific areas discussed are mass storage, including space and ground on-line storage and off-line storage; man/machine interface; data processing hardware, including flight computers and advanced/fault tolerant computer architectures; and software, including data compression algorithms, on-board high level languages, and software tools. Also discussed are artificial intelligence applications and hard-wire communications.

  9. Data systems and computer science space data systems: Onboard memory and storage

    NASA Technical Reports Server (NTRS)

    Shull, Tom

    1991-01-01

    The topics are presented in viewgraph form and include the following: technical objectives; technology challenges; state-of-the-art assessment; mass storage comparison; SODR drive and system concepts; program description; vertical Bloch line (VBL) device concept; relationship to external programs; and backup charts for memory and storage.

  10. Modular thermal analyzer routine, volume 1

    NASA Technical Reports Server (NTRS)

    Oren, J. A.; Phillips, M. A.; Williams, D. R.

    1972-01-01

    The Modular Thermal Analyzer Routine (MOTAR) is a general thermal analysis routine with strong capabilities for performing thermal analysis of systems containing flowing fluids, fluid system controls (valves, heat exchangers, etc.), life support systems, and thermal radiation situations. Its modular organization permits the analysis of a very wide range of thermal problems, from simple problems containing a few conduction nodes to those requiring complicated flow and radiation analysis, with each problem type analyzed with peak computational efficiency and maximum ease of use. The organization and programming methods applied to MOTAR achieve a high degree of computer utilization efficiency in terms of execution time and storage space required for a given problem. The computer time required to perform a given problem with MOTAR is approximately 40 to 50 percent of that required by the currently existing, widely used routines. The computer storage requirement for MOTAR is approximately 25 percent more than the most commonly used routines for the simplest problems, but the data storage techniques for the more complicated options should save a considerable amount of space.

  11. Production planning, production systems for flexible automation

    NASA Astrophysics Data System (ADS)

    Spur, G.; Mertins, K.

    1982-09-01

    Trends in flexible manufacturing system (FMS) applications are reviewed. Machining systems contain machines which complement each other and can replace each other. Computer-controlled storage systems are widespread, with central storage capacity ranging from 20 pallet spaces to 200 magazine spaces. The handling function is fulfilled by pallet changers in over 75% of FMSs. The degree of data-system automation varies considerably. No trends are noted for transport systems.

  12. Research on phone contacts online status based on mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Wang, Wen-jing; Ge, Wei

    2013-03-01

    Because of the limited storage space and CPU processing power of mobile phones, it is difficult to realize complex applications on them. With the development of cloud computing, however, computation and storage can be placed in the cloud to provide users with rich cloud services, and helping users complete various functions through the browser has become the trend for future mobile communication. This article takes phone contacts' online status as an example to analyze the development and application of mobile cloud computing.

  13. Data management applications

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Kennedy Space Center's primary institutional computer is a 4-megabyte IBM 4341 with 3.175 billion characters of IBM 3350 disk storage. This system utilizes the Software AG product known as ADABAS, with the online user-oriented features of NATURAL and COMPLETE, as a Data Base Management System (DBMS). It is operational under OS/VS1 and is currently supporting batch/online applications such as Personnel, Training, Physical Space Management, Procurement, Office Equipment Maintenance, and Equipment Visibility. A third, and by far the largest, DBMS application is the Shuttle Inventory Management System (SIMS), which is operational on a dedicated Honeywell 6660 computer system utilizing Honeywell Integrated Data Storage I (IDSI) as the DBMS. The SIMS application is designed to provide central supply system acquisition, inventory control, receipt, storage, and issue of spares, supplies, and materials.

  14. Mass storage system experiences and future needs at the National Center for Atmospheric Research

    NASA Technical Reports Server (NTRS)

    Olear, Bernard T.

    1991-01-01

    A summary and viewgraphs of a discussion presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop are included. Some of the experiences of the Scientific Computing Division at the National Center for Atmospheric Research (NCAR) in dealing with the 'data problem' are discussed. A brief history and a development of some basic mass storage system (MSS) principles are given, and an attempt is made to show how these principles apply to the integration of various components into NCAR's MSS. MSS needs for future computing environments are also discussed.

  15. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely recognized global environmental problems, and it has altered watershed hydrological processes in their time and space distribution, especially in the world's large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a very large amount of calculation, especially for large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. The current parallel methods mostly parallelize in the space and time dimensions: based on a distributed hydrological model, they calculate the natural features in order, grid by grid (unit by unit, basin by basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the time and space runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing-power units. The method has strong adaptability and extensibility: it makes full use of computing and storage resources under the condition of limited computing resources, and its computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.

  16. SANs and Large Scale Data Migration at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2004-01-01

    Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.

  17. New Information Dispersal Techniques for Trustworthy Computing

    ERIC Educational Resources Information Center

    Parakh, Abhishek

    2011-01-01

    Information dispersal algorithms (IDA) are used for distributed data storage because they simultaneously provide security, reliability and space efficiency, constituting a trustworthy computing framework for many critical applications, such as cloud computing, in the information society. In the most general sense, this is achieved by dividing data…
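
    Since the snippet is cut off, a toy illustration of the underlying idea may help: in the spirit of Rabin's information dispersal, data is split into n fragments, each about 1/k the size of the original, such that any k fragments reconstruct it. The sketch below, written for this summary over the prime field GF(257) (production schemes use GF(2^8)), is not the construction from the dissertation.

    ```python
    # Toy (k, n) information dispersal: any k of the n fragments rebuild the
    # data, and each fragment holds ~1/k of the input symbols.
    P = 257  # prime modulus; field elements may not fit in a single byte

    def _lagrange_eval(points, x):
        """Evaluate at x the unique polynomial through (xi, yi) pairs, mod P."""
        total = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    def disperse(data: bytes, k: int, n: int):
        """Return n fragments (lists of field elements) and the pad length."""
        pad = (-len(data)) % k
        padded = data + b"\x00" * pad
        frags = {x: [] for x in range(1, n + 1)}
        for i in range(0, len(padded), k):
            # each k-byte block defines the polynomial f with f(1..k) = block
            pts = list(zip(range(1, k + 1), padded[i:i + k]))
            for x in range(1, n + 1):
                frags[x].append(_lagrange_eval(pts, x))
        return frags, pad

    def reconstruct(frags: dict, k: int, pad: int) -> bytes:
        """Rebuild the original bytes from any k surviving fragments."""
        xs = sorted(frags)[:k]
        out = bytearray()
        for b in range(len(frags[xs[0]])):
            pts = [(x, frags[x][b]) for x in xs]
            out.extend(_lagrange_eval(pts, i) for i in range(1, k + 1))
        return bytes(out[:len(out) - pad] if pad else out)

    frags, pad = disperse(b"trustworthy computing", k=3, n=5)
    survivors = {x: frags[x] for x in (2, 4, 5)}  # any two fragments lost
    assert reconstruct(survivors, k=3, pad=pad) == b"trustworthy computing"
    ```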

  18. System Resource Allocations | High-Performance Computing | NREL

    Science.gov Websites

    Allocations System Resource Allocations To use NREL's high-performance computing (HPC) resources: Compute hours on NREL HPC systems, including Peregrine and Eagle; Storage space (in terabytes) on Peregrine, Eagle, and Gyrfalcon. Allocations are principally done in response to an annual call for allocation

  19. Experimental Results from the Thermal Energy Storage-1 (TES-1) Flight Experiment

    NASA Technical Reports Server (NTRS)

    Wald, Lawrence W.; Tolbert, Carol; Jacqmin, David

    1995-01-01

    The Thermal Energy Storage-1 (TES-1) is a flight experiment that flew on the Space Shuttle Columbia (STS-62) in March 1994 as part of the OAST-2 mission. TES-1 is the first experiment in a four-experiment suite designed to provide data for understanding the long-duration microgravity behavior of thermal energy storage fluoride salts that undergo repeated melting and freezing. Such data have never been obtained before and have direct application to the development of space-based solar dynamic (SD) power systems. These power systems will store solar energy in a thermal energy salt such as lithium fluoride or calcium fluoride. The stored energy is extracted during the shade portion of the orbit, enabling the solar dynamic power system to provide constant electrical power over the entire orbit. Analytical computer codes have been developed for predicting the performance of a space-based solar dynamic power system, but experimental verification of the analytical predictions is needed before the analytical results can be used for future space power design applications. The four TES flight experiments will be used to obtain the needed experimental data. This paper focuses on the flight results from the first experiment, TES-1, in comparison with the predicted results from the Thermal Energy Storage Simulation (TESSIM) analytical computer code. The TES-1 conceptual development, hardware design, final development, and system verification testing were accomplished at the NASA Lewis Research Center (LeRC). TES-1 was developed under the In-Space Technology Experiment Program (IN-STEP), which sponsors NASA, industry, and university flight experiments designed to enable and enhance space flight technology. The IN-STEP Program is sponsored by the Office of Space Access and Technology (OSAT).

  20. A Comprehensive Study on Energy Efficiency and Performance of Flash-based SSD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seon-Yeon; Kim, Youngjae; Urgaonkar, Bhuvan

    2011-01-01

    Use of flash memory as a storage medium is becoming popular in diverse computing environments. However, because of differences in interface, flash memory requires a hard-disk-emulation layer, called the FTL (flash translation layer). Although the FTL enables flash memory storages to replace conventional hard disks, it induces significant computational and space overhead. Despite the low power consumption of flash memory, this overhead leads to significant power consumption in the overall storage system. In this paper, we analyze the characteristics of flash-based storage devices from the viewpoint of power consumption and energy efficiency by using various methodologies. First, we utilize simulation to investigate the interior operation of flash-based storages. Subsequently, we measure the performance and energy efficiency of commodity flash-based SSDs by using microbenchmarks to identify their block-device-level characteristics and macrobenchmarks to reveal their filesystem-level characteristics.

  1. Applying a cloud computing approach to storage architectures for spacecraft

    NASA Astrophysics Data System (ADS)

    Baldor, Sue A.; Quiroz, Carlos; Wood, Paul

    As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
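
    As a rough illustration of the abstraction described above (an invented sketch, not the authors' SpaceWire-based design), a storage layer can expose a single put/get interface while internally choosing which physical memory device absorbs each write, e.g. to balance wear:

    ```python
    # Sketch: applications see one storage interface; the layer behind it
    # spreads writes across devices (naive wear-leveling by write count).
    from dataclasses import dataclass, field

    @dataclass
    class Device:
        name: str
        writes: int = 0
        blocks: dict = field(default_factory=dict)

    class CloudStore:
        def __init__(self, devices):
            self.devices = devices
            self.index = {}  # key -> device currently holding it

        def put(self, key: str, data: bytes) -> None:
            dev = min(self.devices, key=lambda d: d.writes)  # least-worn device
            dev.blocks[key] = data
            dev.writes += 1
            self.index[key] = dev

        def get(self, key: str) -> bytes:
            return self.index[key].blocks[key]

    store = CloudStore([Device("mem0"), Device("mem1"), Device("mem2")])
    for i in range(6):
        store.put(f"telemetry/{i}", b"\x00" * 64)
    assert {d.writes for d in store.devices} == {2}  # writes spread evenly
    ```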

  2. Storage Information Management System (SIMS) Spaceflight Hardware Warehousing at Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Kubicko, Richard M.; Bingham, Lindy

    1995-01-01

    Goddard Space Flight Center (GSFC) on site and leased warehouses contain thousands of items of ground support equipment (GSE) and flight hardware including spacecraft, scaffolding, computer racks, stands, holding fixtures, test equipment, spares, etc. The control of these warehouses, and the management, accountability, and control of the items within them, is accomplished by the Logistics Management Division. To facilitate this management and tracking effort, the Logistics and Transportation Management Branch, is developing a system to provide warehouse personnel, property owners, and managers with storage and inventory information. This paper will describe that PC-based system and address how it will improve GSFC warehouse and storage management.

  3. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet, and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so application systems can obtain computing power, storage space, and software services according to demand. It can concentrate all the computing resources and manage them automatically through software, without human intervention. This frees application providers from tedious details so they can focus on their business, which favors innovation and reduces cost. The ultimate goal of cloud computing is to provide calculation, services, and applications as a public utility, so that people can use computer resources just like water, electricity, gas, and telephone service. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing: SAAS, PAAS, and IAAS; compares the definitions of cloud computing given by Google, Amazon, IBM, and other companies; summarizes the basic characteristics of cloud computing; and emphasizes key technologies such as data storage, data management, virtualization, and the programming model.

  4. Pen-based computers: Computers without keys

    NASA Technical Reports Server (NTRS)

    Conklin, Cheryl L.

    1994-01-01

    The National Space Transportation System (NSTS) is comprised of many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging because of the many separate processing locations where data are generated. This presents a significant problem when timely collection, transfer, collation, and storage of data are required. This paper describes how a new technology, referred to as pen-based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.

  5. Understanding I/O workload characteristics of a Peta-scale storage system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high-performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, with over 250 thousand compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for this peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
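
    To illustrate the modeling result, the snippet below draws synthetic request inter-arrival times from a Pareto distribution, as one would when generating the kind of synthesized workloads the paper mentions; the shape and scale values are placeholders, not figures from the Spider study.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    shape, scale = 1.5, 0.01  # assumed alpha and x_m (seconds), illustrative only

    # numpy's pareto() samples the Lomax form; shifting by 1 and scaling by
    # x_m gives the classic Pareto with density alpha * x_m^alpha / x^(alpha+1).
    gaps = (rng.pareto(shape, size=100_000) + 1.0) * scale

    print(f"mean gap: {gaps.mean():.4f}s, "
          f"99.9th percentile: {np.quantile(gaps, 0.999):.4f}s")  # heavy tail
    ```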

  6. Computational complexities and storage requirements of some Riccati equation solvers

    NASA Technical Reports Server (NTRS)

    Utku, Senol; Garba, John A.; Ramesh, A. V.

    1989-01-01

    The linear optimal control problem of an nth-order time-invariant dynamic system with a quadratic performance functional is usually solved by the Hamilton-Jacobi approach. This leads to the solution of the differential matrix Riccati equation with a terminal condition. The bulk of the computation for the optimal control problem is related to the solution of this equation. There are various algorithms in the literature for solving the matrix Riccati equation. However, computational complexities and storage requirements as a function of numbers of state variables, control variables, and sensors are not available for all these algorithms. In this work, the computational complexities and storage requirements for some of these algorithms are given. These expressions show the immensity of the computational requirements of the algorithms in solving the Riccati equation for large-order systems such as the control of highly flexible space structures. The expressions are also needed to compute the speedup and efficiency of any implementation of these algorithms on concurrent machines.
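
    For reference, the differential matrix Riccati equation with a terminal condition that the abstract refers to takes the following form in standard finite-horizon LQR notation (our notation, not necessarily the paper's); integrating it backward from the terminal time dominates the cost of the optimal control problem:

    ```latex
    % Finite-horizon LQR: minimize
    %   J = x(t_f)^T S x(t_f) + \int_0^{t_f} ( x^T Q x + u^T R u ) \, dt
    % subject to \dot{x} = A x + B u.
    \[
      -\dot{P}(t) = A^{\mathsf T} P(t) + P(t) A
                    - P(t) B R^{-1} B^{\mathsf T} P(t) + Q,
      \qquad P(t_f) = S,
    \]
    % with optimal feedback u^*(t) = -R^{-1} B^{\mathsf T} P(t) x(t).
    % For n state variables, each integration step involves dense n x n matrix
    % products, i.e. O(n^3) work and O(n^2) storage, which is what makes
    % large flexible-structure models expensive.
    ```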

  7. Performance Analysis and Parametric Study of a Natural Convection Solar Air Heater With In-built Oil Storage

    NASA Astrophysics Data System (ADS)

    Dhote, Yogesh; Thombre, Shashikant

    2016-10-01

    This paper presents the thermal performance of the proposed double-flow natural convection solar air heater with in-built liquid (oil) sensible heat storage. Unused engine oil was used as the thermal energy storage medium due to its good heat-retaining capacity even at high temperatures without evaporation. The performance evaluation was carried out for a day in March for the climatic conditions of Nagpur (India). A self-reliant computational model was developed in C++; the program computes the performance parameters for any day of the year and can be used for major cities in India. The effect of changes in storage oil quantity and inclination (tilt angle) on the overall efficiency of the solar air heater was studied. The performance was tested initially at storage oil quantities of 25, 50, 75, and 100 l for a plate spacing of 0.04 m at an inclination of 36°. It was found that the solar air heater gives the best performance at a storage oil quantity of 50 l. The performance of the proposed solar air heater was further tested for various combinations of storage oil quantity (50, 75, and 100 l) and inclination (0°, 15°, 30°, 45°, 60°, 75°, 90°). It was found that the proposed solar air heater with in-built oil storage shows its best performance for the combination of 50 l storage oil quantity and 60° inclination. Finally, the results of the parametric study are presented in the form of graphs for a fixed storage oil quantity of 25 l, a plate spacing of 0.03 m, and an inclination of 36°, to study the behaviour of the various heat transfer and fluid flow parameters of the solar air heater.

  8. NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report

    NASA Technical Reports Server (NTRS)

    Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ

    2013-01-01

    The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities.

  9. A Queue Simulation Tool for a High Performance Scientific Computing Center

    NASA Technical Reports Server (NTRS)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high-performance, highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long-running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures, and resource allocation policies.

  10. Proposed CMG momentum management scheme for space station

    NASA Technical Reports Server (NTRS)

    Bishop, L. R.; Bishop, R. H.; Lindsay, K. L.

    1987-01-01

    A discrete control moment gyro (CMG) momentum management scheme (MMS) applicable to spacecraft with principal axes misalignments, such as the proposed NASA dual keel space station, is presented in this paper. The objective of the MMS is to minimize CMG angular momentum storage requirements for maintaining the space station near local vertical in the presence of environmental disturbances. It utilizes available environmental disturbances, namely gravity gradient torques, to minimize CMG momentum storage. The MMS is executed once per orbit and generates a commanded torque equilibrium attitude (TEA) time history which consists of a yaw, pitch, and roll angle command profile. Although the algorithm is called only once per orbit to compute the TEA profile, the space station will maneuver several discrete times each orbit.
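
    For context, the gravity-gradient torque the MMS exploits is the standard rigid-body result (textbook form, not taken from the paper):

    ```latex
    % For a spacecraft with inertia tensor I in a circular orbit of rate n,
    % with \hat{o}_3 the nadir-pointing unit vector expressed in body axes:
    \[
      \boldsymbol{\tau}_{\mathrm{gg}}
        = 3 n^{2} \, \hat{\mathbf o}_{3} \times ( \mathbf{I} \, \hat{\mathbf o}_{3} ).
    \]
    % Commanding a torque equilibrium attitude (TEA) biases this torque so it
    % cancels secular disturbances instead of accumulating in the CMGs.
    ```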

  11. A new technique in reference based DNA sequence compression algorithm: Enabling partial decompression

    NASA Astrophysics Data System (ADS)

    Banerjee, Kakoli; Prasad, R. A.

    2014-10-01

    The whole gamut of genetic data is increasing exponentially. The human genome in its base format occupies almost thirty terabytes of data, doubling in size every two and a half years. It is well-known that computational resources are limited. The most important resource genetic data requires in its collection, storage, and retrieval is storage space, and storage is limited. Computational performance also depends on storage and execution time, and transmission capability depends directly on the size of the data. Hence data compression techniques become an issue of utmost importance when we confront the task of handling gigantic databases like GenBank; decompression is likewise an issue when such huge databases are handled. This paper is intended not only to provide genetic data compression but also to partially decompress the genetic sequences.
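
    A minimal sketch of the two ideas the abstract combines, reference-based encoding and partial decompression, follows. The substitution-only format is invented for illustration and is not the authors' algorithm (real genomic data also requires indel handling):

    ```python
    def compress(target: str, reference: str):
        """Store only (position, substituted base) pairs against the reference."""
        assert len(target) == len(reference)  # toy: substitutions only, no indels
        subs = [(i, t) for i, (t, r) in enumerate(zip(target, reference)) if t != r]
        return len(target), subs

    def decompress_range(model, reference: str, start: int, end: int) -> str:
        """Partial decompression: rebuild only target[start:end]."""
        _, subs = model
        window = list(reference[start:end])
        for pos, base in subs:
            if start <= pos < end:
                window[pos - start] = base  # re-apply substitutions in range
        return "".join(window)

    ref = "ACGTACGTACGT"
    tgt = "ACGAACGTACTT"
    model = compress(tgt, ref)
    assert decompress_range(model, ref, 2, 7) == tgt[2:7]  # no full decompression
    ```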

  12. A Framework for Managing Inter-Site Storage Area Networks using Grid Technologies

    NASA Technical Reports Server (NTRS)

    Kobler, Ben; McCall, Fritz; Smorul, Mike

    2006-01-01

    The NASA Goddard Space Flight Center and the University of Maryland Institute for Advanced Computer Studies are studying mechanisms for installing and managing Storage Area Networks (SANs) that span multiple independent collaborating institutions using Storage Area Network Routers (SAN Routers). We present a framework for managing inter-site distributed SANs that uses Grid Technologies to balance the competing needs to control local resources, share information, delegate administrative access, and manage the complex trust relationships between the participating sites.

  13. A brief description of the Medical Information Computer System (MEDICS). [real time minicomputer system

    NASA Technical Reports Server (NTRS)

    Moseley, E. C.

    1974-01-01

    The Medical Information Computer System (MEDICS) is a time shared, disk oriented minicomputer system capable of meeting storage and retrieval needs for the space- or non-space-related applications of at least 16 simultaneous users. At the various commercially available low cost terminals, the simple command and control mechanism and the generalized communication activity of the system permit multiple form inputs, real-time updating, and instantaneous retrieval capability with a full range of options.

  14. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations is getting higher and higher because of the tremendous advancement of supercomputers. A further technology is Grid Computing, which integrates distributed computational resources to provide scalable computing power. In simulation research, it is effective for researchers to design their own physical models, perform calculations on a supercomputer, and analyze and visualize the results with familiar methods. A supercomputer, however, is far from an analysis and visualization environment. In general, a researcher analyzes and visualizes on a workstation (WS) managed at hand, because installing and operating software on a WS is easy; it is therefore necessary to copy data from the supercomputer to the WS manually, and the time needed to transfer data through a long-delay network actually hampers high-accuracy simulations. In terms of usefulness, seamlessly integrating a supercomputer with an analysis and visualization environment, through a researcher's familiar methods, is important. NICT has been developing such a cloud computing environment (the NICT Space Weather Cloud). In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs for data analysis and visualization, all connected to JGN2plus, a high-speed network for research and development. Distributed virtual high-capacity storage is also constructed with Grid Datafarm (Gfarm v2). Huge data sets output from the supercomputer are transferred to the virtual storage through JGN2plus, so a researcher can concentrate on research with familiar methods, without regard to the distance between the supercomputer and the analysis and visualization environment. At present, 16 disk servers in total are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected over JGN2plus and constitute 1 PB (physical size) of virtual storage under Gfarm v2. These disk servers are connected with the supercomputers of NICT and Osaka University, and a system has been built in which data output from the supercomputers is automatically transferred to the virtual storage. The transfer rate is about 50 GB/hour by actual measurement, a performance estimated to be reasonable for a representative simulation and analysis task, the reconstruction of coronal magnetic fields. This work doubles as an experiment on the system itself, and verification of its practicality is proceeding in parallel. Herein we introduce an overview of the space weather cloud system we have developed so far and demonstrate several scientific results obtained with it. We also introduce several web applications offered as services of the space weather cloud under the name "e-SpaceWeather" (e-SW); e-SW provides a variety of space weather online services from many aspects.

  15. Storage media for computers in radiology.

    PubMed

    Dandu, Ravi Varma

    2008-11-01

    The introduction and wide acceptance of digital technology in medical imaging has resulted in an exponential increase in the amount of data produced by the radiology department. There is an insatiable need for storage space to archive this ever-growing volume of image data. Healthcare facilities should plan the type and size of the storage media that they needed, based not just on the volume of data but also on considerations such as the speed and ease of access, redundancy, security, costs, as well as the longevity of the archival technology. This article reviews the various digital storage media and compares their merits and demerits.

  16. Storages Are Not Forever

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cambria, Erik; Chattopadhyay, Anupam; Linn, Eike

    Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. Here, we chose to look closely into one concern in this paper, namely the limited amount of data storage. By a simple extrapolatory analysis, it is shown that we are on the way to exhaust our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiative in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tap onto the interplay between storage and computing to minimize storage allocation; thirdly, explore ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness on the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.
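
    The flavor of such an extrapolatory analysis can be reproduced with one line of arithmetic. All constants below are illustrative assumptions, not the authors' figures, which is why the resulting horizon differs from the paper's two-century estimate:

    ```python
    # Compound-growth extrapolation: years until stored data hits a ceiling.
    from math import log

    stored_zb = 50.0   # assumed: zettabytes stored worldwide today
    growth = 0.25      # assumed: 25% annual growth in stored data
    ceiling_zb = 1e5   # assumed: physical/technological capacity ceiling

    years = log(ceiling_zb / stored_zb) / log(1 + growth)
    print(f"ceiling reached in ~{years:.0f} years")  # ~34 years at these rates
    ```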

  17. Storages Are Not Forever

    DOE PAGES

    Cambria, Erik; Chattopadhyay, Anupam; Linn, Eike; ...

    2017-05-27

    Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. Here, we chose to look closely into one concern in this paper, namely the limited amount of data storage. By a simple extrapolatory analysis, it is shown that we are on the way to exhaust our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiative in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tap onto the interplay between storage and computing to minimize storage allocation; thirdly, explore ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness on the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.

  18. Grid data access on widely distributed worker nodes using scalla and SRM

    NASA Astrophysics Data System (ADS)

    Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.

    2008-07-01

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning of files), storage policies, or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing the 350 TB Storage Elements, and the experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and our approach to making access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans, and status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  19. Local active information storage as a tool to understand distributed neural information processing

    PubMed Central

    Wibral, Michael; Lizier, Joseph T.; Vögler, Sebastian; Priesemann, Viola; Galuske, Ralf

    2013-01-01

    Every act of information processing can in principle be decomposed into the component operations of information storage, transfer, and modification. Yet, while this is easily done for today's digital computers, the application of these concepts to neural information processing was hampered by the lack of proper mathematical definitions of these operations on information. Recently, definitions were given for the dynamics of these information processing operations on a local scale in space and time in a distributed system, and the specific concept of local active information storage was successfully applied to the analysis and optimization of artificial neural systems. However, no attempt to measure the space-time dynamics of local active information storage in neural data has been made to date. Here we measure local active information storage on a local scale in time and space in voltage sensitive dye imaging data from area 18 of the cat. We show that storage reflects neural properties such as stimulus preferences and surprise upon unexpected stimulus change, and in area 18 reflects the abstract concept of an ongoing stimulus despite the locally random nature of this stimulus. We suggest that LAIS will be a useful quantity to test theories of cortical function, such as predictive coding. PMID:24501593

  20. Engineering study for the functional design of a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Miller, J. S.; Vandever, W. H.; Stanten, S. F.; Avakian, A. E.; Kosmala, A. L.

    1972-01-01

    The results are presented of a study to generate a functional system design of a multiprocessing computer system capable of satisfying the computational requirements of a space station. These data management system requirements were specified to include: (1) real time control, (2) data processing and storage, (3) data retrieval, and (4) remote terminal servicing.

  1. Consumer Security Perceptions and the Perceived Influence on Adopting Cloud Computing: A Quantitative Study Using the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Paquet, Katherine G.

    2013-01-01

    Cloud computing may provide cost benefits for organizations by eliminating the overhead costs of software, hardware, and maintenance (e.g., license renewals, upgrading software, servers and their physical storage space, administration along with funding a large IT department). In addition to the promised savings, the organization may require…

  2. Data systems and computer science programs: Overview

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  3. What CFOs should know before venturing into the cloud.

    PubMed

    Rajendran, Janakan

    2013-05-01

    There are three major trends in the use of cloud-based services for healthcare IT: Cloud computing involves the hosting of health IT applications in a service provider cloud. Cloud storage is a data storage service that can involve, for example, long-term storage and archival of information such as clinical data, medical images, and scanned documents. Data center colocation involves rental of secure space in the cloud from a vendor, an approach that allows a hospital to share power capacity and proven security protocols, reducing costs.

  4. Experimental Results From the Thermal Energy Storage-1 (TES-1) Flight Experiment

    NASA Technical Reports Server (NTRS)

    Jacqmin, David

    1995-01-01

    The Thermal Energy Storage (TES) experiments are designed to provide data to help researchers understand the long-duration microgravity behavior of thermal energy storage fluoride salts that undergo repeated melting and freezing. Such data, which have never been obtained before, have direct application to space-based solar dynamic power systems. These power systems will store solar energy in a thermal energy salt, such as lithium fluoride (LiF) or a eutectic of lithium fluoride/calcium difluoride (LiF-CaF2) (which melts at a lower temperature). The energy will be stored as the latent heat of fusion when the salt is melted by absorbing solar thermal energy. The stored energy will then be extracted during the shade portion of the orbit, enabling the solar dynamic power system to provide constant electrical power over the entire orbit. Analytical computer codes have been developed to predict the performance of a spacebased solar dynamic power system. However, the analytical predictions must be verified experimentally before the analytical results can be used for future space power design applications. Four TES flight experiments will be used to obtain the needed experimental data. This article focuses on the flight results from the first experiment, TES-1, in comparison to the predicted results from the Thermal Energy Storage Simulation (TESSIM) analytical computer code.

  5. Storage media for computers in radiology

    PubMed Central

    Dandu, Ravi Varma

    2008-01-01

    The introduction and wide acceptance of digital technology in medical imaging has resulted in an exponential increase in the amount of data produced by the radiology department. There is an insatiable need for storage space to archive this ever-growing volume of image data. Healthcare facilities should plan the type and size of the storage media that they needed, based not just on the volume of data but also on considerations such as the speed and ease of access, redundancy, security, costs, as well as the longevity of the archival technology. This article reviews the various digital storage media and compares their merits and demerits. PMID:19774182

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    Novel implementations based on dense tensor storage are presented here for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square and the cube of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single-core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.

  7. Grid Data Access on Widely Distributed Worker Nodes Using Scalla and SRM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakl, Pavel (Prague, Inst. Phys.); Lauret, Jerome

    2011-11-10

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning of files), storage policies, or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing the 350 TB Storage Elements, and the experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and our approach to making access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans, and status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  8. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    PubMed

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.
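
    One plausible formalization of the memory notion sketched here (our reading, not necessarily the authors' exact definition) treats memory as the largest amount of information the state trajectory carries across any instant:

    ```latex
    % For a system with state trajectory (X_t), define its memory as
    \[
      M = \sup_{t} \; I\!\left( X_{\le t} \, ; \, X_{> t} \right),
    \]
    % the maximal mutual information between past and future. The SBCT then
    % asserts that the long-term behavior of a system with M <= s bits is
    % predictable by an algorithm running in space polynomial in s.
    ```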

  9. Storage quality-of-service in cloud-based scientific environments: a standardization approach

    NASA Astrophysics Data System (ADS)

    Millar, Paul; Fuhrmann, Patrick; Hardt, Marcus; Ertl, Benjamin; Brzezniak, Maciej

    2017-10-01

    When preparing the Data Management Plan for larger scientific endeavors, PIs have to balance the most appropriate qualities of storage space along the planned data life-cycle against price and available funding. Storage properties can include the media type (implicitly determining access latency and durability of stored data), the number and locality of replicas, and the available access protocols or authentication mechanisms. Negotiations between the scientific community and the responsible infrastructures generally happen upfront, when the amount of storage space, the media types (disk, tape, SSD), and the foreseeable data life-cycles are negotiated. With the introduction of cloud management platforms, both in computing and storage, resources can be brokered to achieve the best price per unit of a given quality. However, in order to allow the platform orchestrator to programmatically negotiate the most appropriate resources, a standard vocabulary for the different properties of resources and a commonly agreed protocol to communicate them have to be available. In order to agree on a basic vocabulary for storage space properties, the storage infrastructure group in INDIGO-DataCloud, together with INDIGO-associated and external scientific groups, created a working group under the umbrella of the Research Data Alliance (RDA). As the communication protocol to query and negotiate storage qualities, the Cloud Data Management Interface (CDMI) has been selected. Necessary extensions to CDMI are defined in regular meetings between INDIGO and the Storage Networking Industry Association (SNIA). Furthermore, INDIGO is contributing to the SNIA CDMI reference implementation as the basis for interfacing the various storage systems in INDIGO to the agreed protocol and to provide an official open-source skeleton for systems not maintained by INDIGO partners.
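
    As a hedged sketch of what such programmatic negotiation could look like, the Python snippet below queries a CDMI endpoint for its storage capabilities. The host is hypothetical; only the /cdmi_capabilities/ path, the X-CDMI-Specification-Version header, and the application/cdmi-capability media type come from the CDMI specification, and the INDIGO QoS extensions may expose different keys.

    ```python
    import requests

    BASE = "https://storage.example.org"  # hypothetical CDMI endpoint

    resp = requests.get(
        f"{BASE}/cdmi_capabilities/",
        headers={
            "X-CDMI-Specification-Version": "1.1",
            "Accept": "application/cdmi-capability",
        },
        timeout=10,
    )
    resp.raise_for_status()
    caps = resp.json()

    # A QoS-aware endpoint might advertise capability containers per storage
    # class (e.g. disk, tape, replicated); inspect each child to compare
    # latency, durability, and replica properties before placing data.
    for child in caps.get("children", []):
        print("storage class:", child)
    ```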

  10. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele

    2012-12-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
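
    The client-scaling observation can be reproduced in miniature with a script like the following, which measures aggregate read throughput for a growing number of concurrent reader processes. File sizes and counts are toy values, and unlike IOZone this sketch makes no attempt to defeat page-cache effects.

    ```python
    # Toy benchmark: aggregate read throughput vs. number of reader processes.
    import os, time, tempfile
    from multiprocessing import Pool

    BLOCK = 1 << 20  # 1 MiB reads

    def read_file(path):
        n = 0
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK):
                n += len(chunk)
        return n

    if __name__ == "__main__":
        paths = []
        for _ in range(8):                          # 8 files of 64 MiB each
            fd, p = tempfile.mkstemp()
            os.write(fd, os.urandom(BLOCK) * 64)
            os.close(fd)
            paths.append(p)
        for n_clients in (1, 2, 4, 8):
            t0 = time.perf_counter()
            with Pool(n_clients) as pool:
                total = sum(pool.map(read_file, paths))
            dt = time.perf_counter() - t0
            print(f"{n_clients:2d} clients: {total / dt / 1e6:8.0f} MB/s")
        for p in paths:
            os.remove(p)
    ```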

  11. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems

    NASA Astrophysics Data System (ADS)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-01

    We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in the absence of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  12. An effective and secure key-management scheme for hierarchical access control in E-medicine system.

    PubMed

    Odelu, Vanga; Das, Ashok Kumar; Goswami, Adrijit

    2013-04-01

    Recently, several hierarchical access control schemes have been proposed in the literature to provide security in e-medicine systems. However, most of them are either insecure against man-in-the-middle attacks or require high storage and computational overheads. Wu and Chen proposed a key management method to solve dynamic access control problems in a user hierarchy based on a hybrid cryptosystem. Though their scheme improves computational efficiency over Nikooghadam et al.'s approach, it suffers from a large storage requirement for public parameters in the public domain and computational inefficiency due to costly elliptic curve point multiplication. Recently, Nikooghadam and Zakerolhosseini showed that Wu-Chen's scheme is vulnerable to the man-in-the-middle attack. To remedy this security weakness, they proposed a secure scheme that is again based on ECC (elliptic curve cryptography) and an efficient one-way hash function. However, their scheme incurs a huge computational cost for verifying public information in the public domain, since it uses ECC digital signatures, which are costly compared to a symmetric-key cryptosystem. In this paper, we propose an effective access control scheme for a user hierarchy based only on a symmetric-key cryptosystem and an efficient one-way hash function. We show that our scheme significantly reduces the storage space for both public and private domains, as well as the computational complexity, when compared to Wu-Chen's scheme, Nikooghadam-Zakerolhosseini's scheme, and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against various attacks, including the man-in-the-middle attack. Moreover, dynamic access control problems are solved more efficiently than in other related schemes, making our scheme much more suitable for practical applications in e-medicine systems.
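
    The core symmetric-key idea, deriving each descendant key from its parent with a one-way hash so that access flows only downward in the hierarchy, can be sketched as follows. This illustrates the general technique, not the exact key-derivation rule of the proposed scheme.

    ```python
    # General technique only: ancestors can recompute descendant keys via a
    # one-way hash, but the reverse direction would require inverting SHA-256.
    import hashlib

    def derive_key(parent_key: bytes, child_id: str) -> bytes:
        return hashlib.sha256(parent_key + child_id.encode()).digest()

    root = b"\x00" * 32                      # placeholder root security-class key
    k_doctors = derive_key(root, "doctors")
    k_record = derive_key(k_doctors, "patient-42")
    print(k_record.hex())                    # holder of root can derive this;
                                             # holder of k_record cannot go back up
    ```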

  13. Impact of the Columbia Supercomputer on NASA Space and Exploration Mission

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott

    2006-01-01

    NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.

  14. Selected Mechanized Scientific and Technical Information Systems.

    ERIC Educational Resources Information Center

    Ackerman, Lynn, Ed.; And Others

    The publication describes the following thirteen computer-based, operational systems designed primarily for the announcement, storage, retrieval and secondary distribution of scientific and technical reports: Defense Documentation Center; Highway Research Board; National Aeronautics and Space Administration; National Library of Medicine; U.S.…

  15. Enabling Co-Design of Multi-Layer Exascale Storage Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carothers, Christopher

    Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables 'what if' exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.
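
    The flavor of such a storage co-design simulation can be conveyed with a drastically scaled-down discrete-event sketch (the actual work used a massively parallel simulator, not the simpy package assumed here): compute nodes periodically dump checkpoints through a limited pool of storage servers, and queueing delay emerges as servers saturate. All parameters are invented.

    ```python
    # Requires the simpy package; all parameters are invented for illustration.
    import random
    import simpy

    random.seed(1)
    CKPT_GB, SERVER_GBPS, N_SERVERS, N_NODES = 4, 2.0, 4, 32

    def node(env, servers, log):
        while True:
            yield env.timeout(random.expovariate(1 / 60))   # ~60 s of compute
            t0 = env.now
            with servers.request() as req:
                yield req                                   # queue for a server
                yield env.timeout(CKPT_GB / SERVER_GBPS)    # write the checkpoint
            log.append(env.now - t0)

    env = simpy.Environment()
    servers = simpy.Resource(env, capacity=N_SERVERS)
    log = []
    for _ in range(N_NODES):
        env.process(node(env, servers, log))
    env.run(until=3600)                                     # one simulated hour
    print(f"mean checkpoint latency: {sum(log) / len(log):.1f} s over {len(log)} dumps")
    ```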

  16. Cost-effective description of strong correlation: Efficient implementations of the perfect quadruples and perfect hextuples models

    DOE PAGES

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2016-10-07

    Novel implementations based on dense tensor storage are presented here for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square, and as the cube, of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single-core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.
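
    A back-of-the-envelope check of the quoted scaling makes the storage claim concrete; the prefactor below is a made-up illustration, not a figure from the paper.

    ```python
    # c is an invented prefactor (amplitude words per n^power), 8 bytes/double.
    def mem_gb(n, power, c=100):
        return c * n**power * 8 / 1e9

    for n in (40, 140):
        print(f"n = {n:3d}:  PQ ~ {mem_gb(n, 2):6.3f} GB   PH ~ {mem_gb(n, 3):7.2f} GB")
    ```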

  17. Cost-effective description of strong correlation: Efficient implementations of the perfect quadruples and perfect hextuples models

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Parkhill, John; Head-Gordon, Martin

    2016-10-01

    Novel implementations based on dense tensor storage are presented for the singlet-reference perfect quadruples (PQ) [J. A. Parkhill et al., J. Chem. Phys. 130, 084101 (2009)] and perfect hextuples (PH) [J. A. Parkhill and M. Head-Gordon, J. Chem. Phys. 133, 024103 (2010)] models. The methods are obtained as block decompositions of conventional coupled-cluster theory that are exact for four electrons in four orbitals (PQ) and six electrons in six orbitals (PH), but that can also be applied to much larger systems. PQ and PH have storage requirements that scale as the square, and as the cube of the number of active electrons, respectively, and exhibit quartic scaling of the computational effort for large systems. Applications of the new implementations are presented for full-valence calculations on linear polyenes (CnHn+2), which highlight the excellent computational scaling of the present implementations that can routinely handle active spaces of hundreds of electrons. The accuracy of the models is studied in the π space of the polyenes, in hydrogen chains (H50), and in the π space of polyacene molecules. In all cases, the results compare favorably to density matrix renormalization group values. With the novel implementation of PQ, active spaces of 140 electrons in 140 orbitals can be solved in a matter of minutes on a single core workstation, and the relatively low polynomial scaling means that very large systems are also accessible using parallel computing.

  18. A Collection of Technical Papers

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Papers presented at the 6th Space Logistics Symposium covered such areas as: The International Space Station; The Hubble Space Telescope; Launch site computer simulation; Integrated logistics support; The Baikonur Cosmodrome; Probabilistic tools for high confidence repair; A simple space station rescue vehicle; Integrated Traffic Model for the International Space Station; Packaging the maintenance shop; Leading edge software support; Storage information management system; Consolidated maintenance inventory logistics planning; Operation concepts for a single stage to orbit vehicle; Mission architecture for human lunar exploration; Logistics of a lunar based solar power satellite scenario; Just in time in space; NASA acquisitions/logistics; Effective transition management; Shuttle logistics; and Revitalized space operations through total quality control management.

  19. 2001 Research Reports NASA/ASEE Summer Faculty Fellowship Program

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This document is a collection of technical reports on research conducted by the participants in the 2001 NASA/ASEE Summer Faculty Fellowship Program at the Kennedy Space Center (KSC). Research areas are broad. Some of the topics addressed include: project management, space shuttle safety risks induced by human factor errors, body-wearable computers as a feasible delivery system for 'work authorization documents', gas leak detection using remote sensing technologies, a history of the Kennedy Space Center, and design concepts for collapsible cryogenic storage vessels.

  20. Hardware implementation of CMAC neural network with reduced storage requirement.

    PubMed

    Ker, J S; Kuo, Y H; Wen, R C; Liu, B D

    1997-01-01

    The cerebellar model articulation controller (CMAC) neural network has the advantages of fast convergence and low computational complexity. However, it suffers from a low storage space utilization rate in its weight memory. In this paper, we propose a direct weight address mapping approach, which can reduce the required weight memory size with a utilization rate near 100%. Based on this address mapping approach, we developed a pipeline architecture to efficiently perform the addressing operations. The proposed direct weight address mapping approach also speeds up the generation of weight addresses. In addition, a CMAC hardware prototype used for color calibration has been implemented to confirm the proposed approach and architecture.
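
    For readers unfamiliar with CMAC addressing, the sketch below shows the conventional scheme that the paper's direct weight address mapping improves upon: several shifted quantizations (tilings) of the input each activate one weight, and a naive weight table wastes space on cells that are never addressed. This is a conceptual illustration, not the paper's hardware design.

    ```python
    # Conceptual CMAC, not the paper's direct-mapping hardware design.
    import numpy as np

    n_tilings, n_bins = 8, 16
    weights = np.zeros((n_tilings, n_bins, n_bins))   # naive (wasteful) table

    def active_cells(x, y):
        """Cells of each shifted tiling activated by input (x, y) in [0, 1)."""
        cells = []
        for t in range(n_tilings):
            off = t / n_tilings                       # diagonal tiling offset
            i = int(x * n_bins + off) % n_bins
            j = int(y * n_bins + off) % n_bins
            cells.append((t, i, j))
        return cells

    def cmac_output(x, y):
        return sum(weights[c] for c in active_cells(x, y))

    def cmac_train(x, y, target, lr=0.5):
        err = target - cmac_output(x, y)
        for c in active_cells(x, y):                  # LMS update, spread evenly
            weights[c] += lr * err / n_tilings

    rng = np.random.default_rng(0)
    for _ in range(5000):                             # learn f(x, y) = x + y
        x, y = rng.random(2)
        cmac_train(x, y, x + y)
    print(cmac_output(0.25, 0.5))                     # close to 0.75
    ```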

  1. A modeling of dynamic storage assignment for order picking in beverage warehousing with Drive-in Rack system

    NASA Astrophysics Data System (ADS)

    Hadi, M. Z.; Djatna, T.; Sugiarto

    2018-04-01

    This paper develops a dynamic storage assignment model to solve the storage assignment problem (SAP) for beverage order picking in a drive-in rack warehousing system, determining the appropriate storage location and space for each beverage product dynamically so that system performance can be improved. The study constructs a graph model to represent drive-in rack storage positions, then combines association rule mining, class-based storage policies, and an arrangement rule algorithm to determine appropriate storage locations and product arrangements according to dynamic customer orders. The performance of the proposed model is measured by rule adjacency accuracy, travel distance (for the picking process), and the probability that a product expires, using a Last Come First Serve (LCFS) queue approach. Finally, the proposed model is implemented through computer simulation and its performance is compared against other storage assignment methods. The results indicate that the proposed model outperforms the other methods.
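
    The class-based ingredient of such a model can be sketched very simply: rank products by picking frequency and give the most-picked class the rack lanes that are fastest to reach. The frequencies and distances below are invented; the paper's full model additionally uses association rules and LCFS-based expiry risk.

    ```python
    # Invented picking frequencies (picks/week) and lane travel distances (m).
    freq = {"cola": 120, "water": 90, "juice": 30, "tea": 10}
    lane_dist = {"L1": 5, "L2": 8, "L3": 12, "L4": 20}

    products = sorted(freq, key=freq.get, reverse=True)   # most picked first
    lanes = sorted(lane_dist, key=lane_dist.get)          # closest lane first
    assignment = dict(zip(products, lanes))
    print(assignment)   # {'cola': 'L1', 'water': 'L2', 'juice': 'L3', 'tea': 'L4'}
    ```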

  2. CSUNSat-1 CubeSat – ELaNa XVII

    NASA Image and Video Library

    2017-04-04

    The primary mission of CSUNSat1 is to space-test an innovative low-temperature-capable energy storage system developed by the Jet Propulsion Laboratory, raising its technology readiness level (TRL) from 4-5 to 7. The success of this energy storage system will enable future missions, especially those in deep space, to do more science while requiring less energy, mass, and volume. This CubeSat was designed, built, programmed, and tested by a team of over 70 engineering and computer science students at CSUN. The primary source of funding for CSUNSat1 comes from NASA's Smallest Technology Partnership program. Launched by NASA's CubeSat Launch Initiative on the ELaNa XVII mission (NET April 18, 2017) aboard the seventh Orbital-ATK Cygnus Commercial Resupply Services flight (OA-7) to the International Space Station; deployed on tbd.

  3. Using Archives for Education.

    ERIC Educational Resources Information Center

    MacKenzie, Douglas

    1996-01-01

    Discusses the use of computer systems for archival applications based on experiences at the Demarco European Arts Foundation (Scotland) and the TAMH Project, an attempt to build a virtual museum of Tay Valley maritime history. Highlights include hardware; development software; data representation, including storage space versus quality;…

  4. A new scheme for perturbative triples correction to (0,1) sector of Fock space multi-reference coupled cluster method: theory, implementation, and examples.

    PubMed

    Dutta, Achintya Kumar; Vaval, Nayana; Pal, Sourav

    2015-01-28

    We propose a new, elegant strategy to implement a third-order triples correction, in the light of many-body perturbation theory, to the Fock space multi-reference coupled cluster method for the ionization problem. Computational scaling as well as storage requirements are key concerns in any many-body calculation. Our proposed approach scales as N^6, does not require the storage of triples amplitudes, and gives superior agreement over all previous attempts. This approach is capable of calculating multiple roots in a single calculation, in contrast to the inclusion of perturbative triples in the equation-of-motion variant of coupled cluster theory, where each root must be computed in a state-specific way and requires both the left and right state vectors. The performance of the newly implemented scheme is tested by applying it to methylene, the boron nitride (B2N) anion, nitrogen, water, carbon monoxide, acetylene, formaldehyde, and the thymine monomer, a DNA base.

  5. Design Considerations for Computer-Based Interactive Map Display Systems

    DTIC Science & Technology

    1979-02-01

    Five Dimensions for Map Display System Options … Summary of … the most advanced and exotic technologies (space, optical, computer, and graphic production); the focusing of vast organizational efforts; and the results … Information retrieval: "Where are all the radar sites in sector 12?", "What's the name of this hill?", "Where's the hill named B243?" Information storage …

  6. Vent System Analysis for the Cryogenic Propellant Storage Transfer Ground Test Article

    NASA Technical Reports Server (NTRS)

    Hedayat, A

    2013-01-01

    To test and validate key capabilities and technologies required for future exploration elements, such as large cryogenic propulsion stages and propellant depots, NASA is leading efforts to develop and design the Cryogenic Propellant Storage and Transfer (CPST) Cryogenic Fluid Management (CFM) payload. The primary objectives of the CPST payload are to demonstrate: (1) in-space storage of cryogenic propellants for long-duration applications; and (2) in-space transfer of cryogenic propellants. The Ground Test Article (GTA) is a technology development version of the CPST payload. The GTA consists of flight-sized and flight-like storage and transfer tanks, liquid acquisition devices, and transfer and pressurization systems with all of the CPST functionality. The GTA is designed to perform integrated passive and active thermal storage and transfer performance testing with liquid hydrogen (LH2) in a vacuum environment. The GTA storage tank is designed to store liquid hydrogen, and the transfer tank is sized at 5% of the storage tank volume. The LH2 transfer subsystem is designed to transfer propellant from one tank to the other using pressure or a pump. The LH2 vent subsystem is designed to prevent over-pressurization of the storage and transfer tanks. An in-house general-purpose computer program was used to model and simulate the vent subsystem operation. The modeling, analysis, and results will be presented in the final paper.
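
    Purely to illustrate what a lumped vent-subsystem model does (the in-house program is far more detailed, and real cryogenic tanks involve two-phase physics ignored here), the sketch below heats an ideal-gas ullage in a fixed volume and bleeds mass whenever pressure exceeds a relief set point. All numbers are placeholders.

    ```python
    # Placeholder physics: single-phase ideal-gas ullage, constant heat leak,
    # constant-rate vent above the relief set point. Not the GTA model.
    V, R_H2 = 5.0, 4124.0            # volume [m^3], H2 gas constant [J/(kg K)]
    cv, Q_leak = 10_000.0, 50.0      # specific heat [J/(kg K)], heat leak [W]
    m, T = 2.0, 25.0                 # gas mass [kg], temperature [K]
    P_RELIEF, MDOT_VENT = 300e3, 5e-4   # set point [Pa], vent rate [kg/s]

    dt = 1.0                         # explicit Euler, 1 s steps
    for _ in range(100_000):         # ~28 hours of simulated time
        T += Q_leak / (m * cv) * dt          # heat leak warms the gas
        P = m * R_H2 * T / V                 # ideal-gas pressure
        if P > P_RELIEF:
            m -= MDOT_VENT * dt              # vent holds P near the set point
    print(f"final: P = {m * R_H2 * T / V / 1e3:.0f} kPa, m = {m:.2f} kg")
    ```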

  7. Techniques for increasing the efficiency of Earth gravity calculations for precision orbit determination

    NASA Technical Reports Server (NTRS)

    Smith, R. L.; Lyubomirsky, A. S.

    1981-01-01

    Two techniques were analyzed. The first is a representation using Chebyshev expansions in three-dimensional cells. The second technique employs a temporary file for storing the components of the nonspherical gravity force. Computer storage requirements and relative CPU time requirements are presented. The Chebyshev gravity representation can provide a significant reduction in CPU time in precision orbit calculations, but at the cost of a large amount of direct-access storage space, which is required for a global model.
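
    The first technique can be sketched as follows: each three-dimensional cell stores a Chebyshev coefficient array for a force component, and evaluation maps the query point into the cell's canonical cube. The coefficients below are random stand-ins; a real model would fit them to the nonspherical gravity field cell by cell.

    ```python
    # Random stand-in coefficients; a real model fits them per cell.
    import numpy as np
    from numpy.polynomial.chebyshev import chebval3d

    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=(6, 6, 6))      # degree-5 expansion, one component

    def eval_in_cell(p, lo, hi, c):
        """Map p from the cell [lo, hi]^3 onto [-1, 1]^3 and evaluate."""
        u = 2.0 * (np.asarray(p, dtype=float) - lo) / (hi - lo) - 1.0
        return chebval3d(u[0], u[1], u[2], c)

    lo = np.array([0.0, 0.0, 0.0])           # cell bounds in, say, km
    hi = np.array([100.0, 100.0, 100.0])
    print(eval_in_cell([10.0, 40.0, 75.0], lo, hi, coeffs))
    ```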

  8. Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 2

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Results of a Space Station Data System Analysis/Architecture Study for the Goddard Space Flight Center are presented. This study, which emphasized a system engineering design for a complete, end-to-end data system, was divided into six tasks: (1); Functional requirements definition; (2) Options development; (3) Trade studies; (4) System definitions; (5) Program plan; and (6) Study maintenance. The Task inter-relationship and documentation flow are described. Information in volume 2 is devoted to Task 3: trade Studies. Trade Studies have been carried out in the following areas: (1) software development test and integration capability; (2) fault tolerant computing; (3) space qualified computers; (4) distributed data base management system; (5) system integration test and verification; (6) crew workstations; (7) mass storage; (8) command and resource management; and (9) space communications. Results are presented for each task.

  9. Really big data: Processing and analysis of large datasets

    USDA-ARS?s Scientific Manuscript database

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidly…

  10. Fast localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates.

    PubMed

    Subotnik, Joseph E; Dutoi, Anthony D; Head-Gordon, Martin

    2005-09-15

    We present here an algorithm for computing stable, well-defined, localized orthonormal virtual orbitals which depend smoothly on nuclear coordinates. The algorithm is very fast, limited only by the diagonalization of two matrices whose dimension is the number of virtual orbitals. Furthermore, we require no more than quadratic (in the number of electrons) storage. The basic premise behind our algorithm is that one can decompose any given atomic-orbital (AO) vector space into a minimal-basis space (which includes the occupied and valence virtual spaces) and a hard-virtual (HV) space (which includes everything else). The valence virtual space localizes easily with standard methods, while the hard-virtual space is constructed to be atom-centered and automatically local. The orbitals presented here may be computed almost as quickly as projecting the AO basis onto the virtual space and are almost as local (according to orbital variance), while being orthonormal (rather than redundant and nonorthogonal). We expect this algorithm to find use in local-correlation methods.
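
    Two of the building blocks mentioned above, projection of the AO basis onto the virtual space followed by orthonormalization, can be sketched in a few lines of numpy (assuming an orthonormal AO basis for brevity). The paper's actual algorithm adds the minimal-basis/hard-virtual split and the smoothness guarantees that this toy version lacks.

    ```python
    # Toy version assuming an orthonormal AO basis; not the paper's algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    n_ao, n_occ = 10, 3
    C_occ = np.linalg.qr(rng.normal(size=(n_ao, n_occ)))[0]   # occupied orbitals
    P_virt = np.eye(n_ao) - C_occ @ C_occ.T   # projector onto the virtual space

    X = P_virt                 # projected AOs: local, but redundant/nonorthogonal
    s = X.T @ X                # their overlap matrix
    w, U = np.linalg.eigh(s)
    keep = w > 1e-8            # discard the n_occ redundant directions
    C_virt = X @ U[:, keep] / np.sqrt(w[keep])   # canonical orthonormalization
    print(C_virt.shape)                                          # (10, 7)
    print(np.allclose(C_virt.T @ C_virt, np.eye(n_ao - n_occ)))  # True
    ```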

  11. Hemispherical reflectance model for passive images in an outdoor environment.

    PubMed

    Kim, Charles C; Thai, Bea; Yamaoka, Neil; Aboutalib, Omar

    2015-05-01

    We present a hemispherical reflectance model for simulating passive images in an outdoor environment where illumination is provided by natural sources such as the sun and the clouds. While the bidirectional reflectance distribution function (BRDF) accurately produces radiance from any objects after the illumination, using the BRDF in calculating radiance requires double integration. Replacing the BRDF by hemispherical reflectance under the natural sources transforms the double integration into a multiplication. This reduces both storage space and computation time. We present the formalism for the radiance of the scene using hemispherical reflectance instead of BRDF. This enables us to generate passive images in an outdoor environment taking advantage of the computational and storage efficiencies. We show some examples for illustration.
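
    The reduction the authors exploit can be stated compactly: for a surface whose BRDF is well approximated as diffuse under broad natural illumination, the double integral collapses to a multiplication by the hemispherical reflectance ρ_h.

    ```latex
    % Standard radiometry; the approximation step assumes a diffuse surface,
    % f_r \approx \rho_h / \pi, which is the regime the paper targets.
    \begin{align*}
      L_o(\omega_o) &= \int_{\Omega} f_r(\omega_i,\omega_o)\,
          L_i(\omega_i)\cos\theta_i \,\mathrm{d}\omega_i
          && \text{(BRDF form: double integral over the hemisphere)}\\
      L_o &\approx \frac{\rho_h}{\pi}\, E,
          \qquad E = \int_{\Omega} L_i(\omega_i)\cos\theta_i \,\mathrm{d}\omega_i
          && \text{(hemispherical-reflectance form: one multiplication)}
    \end{align*}
    ```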

  12. Integrating High-Throughput Parallel Processing Framework and Storage Area Network Concepts Into a Prototype Interactive Scientific Visualization Environment for Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Smuga-Otto, M. J.; Garcia, R. K.; Knuteson, R. O.; Martin, G. D.; Flynn, B. M.; Hackel, D.

    2006-12-01

    The University of Wisconsin-Madison Space Science and Engineering Center (UW-SSEC) is developing tools to help scientists realize the potential of high-spectral-resolution instruments for atmospheric science. Upcoming satellite spectrometers like the Cross-track Infrared Sounder (CrIS), experimental instruments like the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and proposed instruments like the Hyperspectral Environmental Suite (HES) within the GOES-R project will present a challenge in the form of overwhelmingly large amounts of continuously generated data. Current and near-future workstations will have neither the storage space nor the computational capacity to cope with raw spectral data spanning more than a few minutes of observations from these instruments. Schemes exist for processing raw data from hyperspectral instruments currently in testing that involve distributed computation across clusters. Data, which for an instrument like GIFTS can amount to over 1.5 Terabytes per day, are carefully managed on Storage Area Networks (SANs), with attention paid to proper maintenance of associated metadata. The UW-SSEC is preparing a demonstration integrating these back-end capabilities as part of a larger visualization framework, to assist scientists in developing new products from high-spectral data, sourcing data volumes they could not otherwise manage. This demonstration focuses on managing storage so that only the data specifically needed for the desired product are pulled from the SAN, and on running computationally expensive intermediate processing on a back-end cluster, with the final product being sent to a visualization system on the scientist's workstation. Where possible, existing software and solutions are used to reduce development cost. The heart of the computing component is the GIFTS Information Processing System (GIPS), developed at the UW-SSEC to allow distribution of processing tasks such as conversion of raw GIFTS interferograms into calibrated radiance spectra, and retrieval of temperature and water vapor profiles from these spectra. The hope is that by demonstrating the capabilities afforded by a composite system like the one described here, scientists can be convinced to contribute further algorithms in support of this model of computing and visualization.

  13. Investigation of storage options for scientific computing on Grid and Cloud facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on bare metal nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  14. The Need for Optical Means as an Alternative for Electronic Computing

    NASA Technical Reports Server (NTRS)

    Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Demand for faster computers is growing rapidly to keep pace with the rapid growth of the Internet, space communications, and the robotics industry. Unfortunately, Very Large Scale Integration technology is approaching fundamental limits beyond which devices become unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide a way out of the extreme limitations imposed on the growth of speed and complexity of today's computations by conventional electronics. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building a future optical computer.

  15. DNA MemoChip: Long-Term and High Capacity Information Storage and Select Retrieval.

    PubMed

    Stefano, George B; Wang, Fuzhou; Kream, Richard M

    2018-02-26

    Over the course of history, human beings have never stopped seeking effective methods for information storage. From rocks to paper, and through the past several decades of using computer disks, USB sticks, and on to the thin silicon "chips" and "cloud" storage of today, it would seem that we have reached an era of efficiency for managing innumerable and ever-expanding data. Astonishingly, when tracing this technological path, one realizes that our ancient methods of informational storage far outlast paper (10,000 vs. 1,000 years, respectively), let alone the computer-based memory devices that only last, on average, 5 to 25 years. During this time of fast-paced information generation, it becomes increasingly difficult for current storage methods to retain such massive amounts of data, and to maintain appropriate speeds with which to retrieve it, especially when in demand by a large number of users. Others have proposed that DNA-based information storage provides a way forward for information retention as a result of its temporal stability. It is now evident that DNA represents a potentially economical and sustainable mechanism for storing information, as demonstrated by its decoding from a 700,000 year-old horse genome. The fact that the human genome is present in a cell, containing also the varied mitochondrial genome, indicates DNA's great potential for large data storage in a 'smaller' space.
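
    The basic encoding idea is easy to sketch: two bits per nucleotide. Real DNA-storage schemes add error correction, avoid homopolymer runs, and respect synthesis constraints, none of which this toy version does.

    ```python
    # Two bits per nucleotide; no error correction or synthesis constraints.
    ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
    DEC = {v: k for k, v in ENC.items()}

    def to_dna(data: bytes) -> str:
        bits = "".join(f"{b:08b}" for b in data)
        return "".join(ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def from_dna(seq: str) -> bytes:
        bits = "".join(DEC[base] for base in seq)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    seq = to_dna(b"storage")
    print(seq)             # 28 nucleotides for the 7-byte payload
    print(from_dna(seq))   # b'storage'
    ```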

  16. DNA MemoChip: Long-Term and High Capacity Information Storage and Select Retrieval

    PubMed Central

    Wang, Fuzhou; Kream, Richard M.

    2018-01-01

    Over the course of history, human beings have never stopped seeking effective methods for information storage. From rocks to paper, and through the past several decades of using computer disks, USB sticks, and on to the thin silicon “chips” and “cloud” storage of today, it would seem that we have reached an era of efficiency for managing innumerable and ever-expanding data. Astonishingly, when tracing this technological path, one realizes that our ancient methods of informational storage far outlast paper (10,000 vs. 1,000 years, respectively), let alone the computer-based memory devices that only last, on average, 5 to 25 years. During this time of fast-paced information generation, it becomes increasingly difficult for current storage methods to retain such massive amounts of data, and to maintain appropriate speeds with which to retrieve it, especially when in demand by a large number of users. Others have proposed that DNA-based information storage provides a way forward for information retention as a result of its temporal stability. It is now evident that DNA represents a potentially economical and sustainable mechanism for storing information, as demonstrated by its decoding from a 700,000 year-old horse genome. The fact that the human genome is present in a cell, containing also the varied mitochondrial genome, indicates DNA’s great potential for large data storage in a ‘smaller’ space. PMID:29481548

  17. CESDIS

    NASA Technical Reports Server (NTRS)

    1994-01-01

    CESDIS, the Center of Excellence in Space Data and Information Sciences, was developed jointly by NASA, the Universities Space Research Association (USRA), and the University of Maryland in 1988 to focus on the design of advanced computing techniques and data systems to support NASA Earth and space science research programs. CESDIS is operated by USRA under contract to NASA. The Director, Associate Director, Staff Scientists, and administrative staff are located on-site at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The primary CESDIS mission is to increase the connection between computer science and engineering research programs at colleges and universities and NASA groups working with computer applications in Earth and space science. Research areas of primary interest at CESDIS include: 1) High performance computing, especially software design and performance evaluation for massively parallel machines; 2) Parallel input/output and data storage systems for high performance parallel computers; 3) Database and intelligent data management systems for parallel computers; 4) Image processing; 5) Digital libraries; and 6) Data compression. CESDIS funds multiyear projects at U.S. universities and colleges. Proposals are accepted in response to calls for proposals and are selected on the basis of peer review. Funds are provided to support faculty and graduate students working at their home institutions. Project personnel visit Goddard during academic recess periods to attend workshops, present seminars, and collaborate with NASA scientists on research projects. Additionally, CESDIS takes on shorter-duration computer science research tasks requested by NASA Goddard scientists.

  18. From sequencer to supercomputer: an automatic pipeline for managing and processing next generation sequencing data.

    PubMed

    Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun

    2012-01-01

    Next Generation Sequencing (NGS) is highly resource intensive. NGS tasks related to data processing, management, and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis, built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network-attached storage device expandable up to 40 TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.

  19. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing, at lower cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids give researchers, scientists, and engineers their first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  20. High Definition Information Systems. Hearings before the Subcommittee on Technology and Competitiveness of the Committee on Science, Space, and Technology. U.S. House of Representatives, One Hundred Second Congress, First Session (May 14, 21, 1991).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.

    The report of these two hearings on high definition information systems begins by noting that they are digital, and that they are likely to handle computing, telecommunications, home security, computer imaging, storage, fiber optics networks, multi-dimensional libraries, and many other local, national, and international systems. (It is noted that…

  1. A data-management system for detailed areal interpretive data

    USGS Publications Warehouse

    Ferrigno, C.F.

    1986-01-01

    A data storage and retrieval system has been developed to organize and preserve areal interpretive data. This system can be used by any study where there is a need to store areal interpretive data that generally is presented in map form. This system provides the capability to grid areal interpretive data for input to groundwater flow models at any spacing and orientation. The data storage and retrieval system is designed to be used for studies that cover small areas such as counties. The system is built around a hierarchically structured data base consisting of related latitude-longitude blocks. The information in the data base can be stored at different levels of detail, with the finest detail being a block of 6 sec of latitude by 6 sec of longitude (approximately 0.01 sq mi). This system was implemented on a mainframe computer using a hierarchical data base management system. The computer programs are written in Fortran IV and PL/1. The design and capabilities of the data storage and retrieval system, and the computer programs that are used to implement the system are described. Supplemental sections contain the data dictionary, user documentation of the data-system software, changes that would need to be made to use this system for other studies, and information on the computer software tape. (Lantz-PTT)
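
    The finest-level addressing described above suggests a simple block-key computation like the following; the key format is invented for illustration and does not reproduce the system's hierarchical database layout.

    ```python
    # Invented key format for illustration only.
    def block_key(lat_deg: float, lon_deg: float) -> tuple[int, int]:
        """Index of the 6-arcsec x 6-arcsec block containing the point."""
        sec_per_block = 6
        lat_blk = int((lat_deg + 90.0) * 3600 / sec_per_block)
        lon_blk = int((lon_deg + 180.0) * 3600 / sec_per_block)
        return lat_blk, lon_blk

    print(block_key(38.8895, -77.0353))   # a point near Washington, DC
    ```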

  2. Replacing the Measles Ten-Dose Vaccine Presentation with the Single-Dose Presentation in Thailand

    PubMed Central

    Lee, Bruce Y.; Assi, Tina-Marie; Rookkapan, Korngamon; Connor, Diana L.; Rajgopal, Jayant; Sornsrivichai, Vorasith; Brown, Shawn T.; Welling, Joel S.; Norman, Bryan A.; Chen, Sheng-I; Bailey, Rachel R.; Wiringa, Ann E.; Wateska, Angela R.; Jana, Anirban; Van Panhuis, Willem G.; Burke, Donald S.

    2011-01-01

    Introduced to minimize open vial wastage, single-dose vaccine vials require more storage space and therefore may affect vaccine supply chains (i.e., the series of steps and processes entailed to deliver vaccines from manufacturers to patients). We developed a computational model of Thailand’s Trang province vaccine supply chain to analyze the effects of switching from a ten-dose measles vaccine presentation to each of the following: a single-dose Measles-Mumps-Rubella vaccine (which Thailand is currently considering) and a single-dose measles vaccine. While the Trang province vaccine supply chain would generally have enough storage and transport capacity to accommodate the switches, the added volume could push some locations’ storage and transport space utilization close to their limits. Single-dose vaccines would allow for more precise ordering and decrease open vial waste, but decrease reserves for unanticipated demand. Moreover, the added disposal and administration costs could far outweigh the costs saved from preventing open vial wastage. PMID:21439313
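
    The storage pressure is a matter of simple arithmetic: packed cold-chain volume per dose is much larger for single-dose vials. The volumes below are invented placeholders, not Thailand's actual vaccine data.

    ```python
    # Invented packed volumes [cm^3 per vial], including packaging.
    VOL_10DOSE, VOL_1DOSE = 25.0, 10.0

    per_dose_10 = VOL_10DOSE / 10   # cm^3 of cold-chain space per dose
    per_dose_1 = VOL_1DOSE / 1
    print(f"per dose: {per_dose_10:.1f} -> {per_dose_1:.1f} cm^3 "
          f"({per_dose_1 / per_dose_10:.0f}x the storage volume)")
    ```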

  3. Research and Development Annual Report, 1992

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Issued as a companion to Johnson Space Center's Research and Technology Annual Report, which reports JSC accomplishments under NASA Research and Technology Operating Plan (RTOP) funding, this report describes 42 additional JSC projects that are funded through sources other than the RTOP. Emerging technologies in four major disciplines are summarized: space systems technology, medical and life sciences, mission operations, and computer systems. Although these projects focus on support of human spacecraft design, development, and safety, most have wide civil and commercial applications in areas such as advanced materials, superconductors, advanced semiconductors, digital imaging, high density data storage, high performance computers, optoelectronics, artificial intelligence, robotics and automation, sensors, biotechnology, medical devices and diagnosis, and human factors engineering.

  4. The JSC Research and Development Annual Report 1993

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Issued as a companion to Johnson Space Center's Research and Technology Annual Report, which reports JSC accomplishments under NASA Research and Technology Operating Plan (RTOP) funding, this report describes 47 additional projects that are funded through sources other than the RTOP. Emerging technologies in four major disciplines are summarized: space systems technology, medical and life sciences, mission operations, and computer systems. Although these projects focus on support of human spacecraft design, development, and safety, most have wide civil and commercial applications in areas such as advanced materials, superconductors, advanced semiconductors, digital imaging, high density data storage, high performance computers, optoelectronics, artificial intelligence, robotics and automation, sensors, biotechnology, medical devices and diagnosis, and human factors engineering.

  5. Experimental and Computational Investigations of Phase Change Thermal Energy Storage Canisters

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir; Kerslake, Thomas; Sokolov, Pavel; Tolbert, Carol

    1996-01-01

    Two sets of experimental data, from ground and space experiments, are examined in this paper for cylindrical canisters with thermal energy storage applications. A 2-D computational model was developed for unsteady heat transfer (conduction and radiation) with phase change. The radiation heat transfer employed a finite volume method. The following was found in this study: (1) Ground experiments: convection heat transfer is as important as radiation heat transfer; radiation heat transfer in the liquid is more significant than that in the void; including radiation heat transfer in the liquid resulted in lower temperatures (about 15 K) and increased the melting time (about 10 min); generally, most of the heat flow takes place in the radial direction. (2) Space experiments: radiation heat transfer in the void is more significant than that in the liquid (exactly the opposite of the ground experiments); accordingly, the location and size of the void affect performance considerably; including radiation heat transfer in the void resulted in lower temperatures (about 40 K).

  6. Nano Goes Magnetic to Attract Big Business

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Glenn Research Center has combined state-of-the-art electrical designs with complex, computer-aided analyses to develop some of today's most advanced power systems, in space and on Earth. The center's Power and On-Board Propulsion Technology Division is the brain behind many of these power systems. For space, this division builds technologies that help power the International Space Station, the Hubble Space Telescope, and Earth-orbiting satellites. For Earth, it has woven advanced aerospace power concepts into commercial energy applications that include solar and nuclear power generation, battery and fuel cell energy storage, communications and telecommunications satellites, cryocoolers, hybrid and electric vehicles, and heating and air-conditioning systems.

  7. Void space inside the developing seed of Brassica napus and the modelling of its function

    PubMed Central

    Verboven, Pieter; Herremans, Els; Borisjuk, Ljudmilla; Helfen, Lukas; Ho, Quang Tri; Tschiersch, Henning; Fuchs, Johannes; Nicolaï, Bart M; Rolletschek, Hardy

    2013-01-01

    The developing seed essentially relies on external oxygen to fuel aerobic respiration, but it is currently unknown how oxygen diffuses into and within the seed, which structural pathways are used and what finally limits gas exchange. By applying synchrotron X-ray computed tomography to developing oilseed rape seeds we uncovered void spaces, and analysed their three-dimensional assembly. Both the testa and the hypocotyl are well endowed with void space, but in the cotyledons, spaces were small and poorly inter-connected. In silico modelling revealed a three orders of magnitude range in oxygen diffusivity from tissue to tissue, and identified major barriers to gas exchange. The oxygen pool stored in the voids is consumed about once per minute. The function of the void space was related to the tissue-specific distribution of storage oils, storage protein and starch, as well as oxygen, water, sugars, amino acids and the level of respiratory activity, analysed using a combination of magnetic resonance imaging, specific oxygen sensors, laser micro-dissection, biochemical and histological methods. We conclude that the size and inter-connectivity of void spaces are major determinants of gas exchange potential, and locally affect the respiratory activity of a developing seed. PMID:23692271
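
    The in silico part of such a study boils down to reaction-diffusion estimates. A minimal one-dimensional version, steady oxygen diffusion into a respiring slab with a fixed surface concentration and zero flux at the center, is sketched below with placeholder diffusivity and consumption values.

    ```python
    # Placeholder values: steady 1-D diffusion-consumption, D c'' = q.
    import numpy as np

    n, L = 100, 1e-3        # grid points, slab half-thickness [m]
    D, q = 1e-9, 1e-4       # O2 diffusivity [m^2/s], consumption [mol/(m^3 s)]
    c_ext = 0.25            # surface O2 concentration [mol/m^3]
    dx = L / (n - 1)

    A = np.zeros((n, n))
    b = np.full(n, q * dx**2 / D)
    A[0, 0] = 1.0; b[0] = c_ext                 # fixed concentration at surface
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A[-1, -1], A[-1, -2] = 1.0, -1.0; b[-1] = 0.0   # zero flux at the center

    c = np.linalg.solve(A, b)
    print(f"O2 at the center: {c[-1]:.4f} mol/m^3 (surface: {c_ext})")
    ```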

  8. A Stochastic Dynamic Programming Model With Fuzzy Storage States Applied to Reservoir Operation Optimization

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas

    2004-08-01

    Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for discretization of the state variables. Discretization of reservoir storage volume, an important state variable, has a pronounced effect on the computational effort. The error caused by storage volume discretization is examined by treating storage as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations on fuzzy numbers: instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP with a much finer discretized space. This advantage of the fuzzy SDP model is believed to be due to the smooth transitions between storage intervals, which benefit from soft boundaries.
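
    The fuzzy-arithmetic ingredient can be sketched with triangular fuzzy numbers, for which addition under the extension principle is component-wise; the SDP recursion the paper wraps around this is not reproduced here.

    ```python
    # Triangular fuzzy numbers; addition under the extension principle is
    # component-wise. The surrounding SDP recursion is not reproduced.
    from dataclasses import dataclass

    @dataclass
    class TriFuzzy:
        a: float   # left end of the support
        m: float   # peak (membership = 1)
        b: float   # right end of the support

        def __add__(self, other):
            return TriFuzzy(self.a + other.a, self.m + other.m, self.b + other.b)

        def membership(self, x: float) -> float:
            if self.a < x <= self.m:
                return (x - self.a) / (self.m - self.a)
            if self.m < x < self.b:
                return (self.b - x) / (self.b - self.m)
            return 1.0 if x == self.m else 0.0

    storage = TriFuzzy(40, 50, 60)   # fuzzy storage interval (e.g., 10^6 m^3)
    inflow = TriFuzzy(10, 15, 25)    # fuzzy inflow over the period
    end = storage + inflow
    print(end)                       # TriFuzzy(a=50, m=65, b=85)
    print(end.membership(65.0))      # 1.0 at the peak
    ```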

  9. Incorporating Oracle on-line space management with long-term archival technology

    NASA Technical Reports Server (NTRS)

    Moran, Steven M.; Zak, Victor J.

    1996-01-01

    The storage requirements of today's organizations are exploding. As computers continue to escalate in processing power, applications grow in complexity and data files grow in size and number. As a result, organizations are forced to procure more and more storage space. This paper focuses on how to expand the storage capacity of a Very Large Database (VLDB) cost-effectively within an Oracle7 data warehouse system by integrating long-term archival storage subsystems with traditional magnetic media. The Oracle architecture described in this paper was based on an actual proof of concept for a customer looking to store archived data on optical disks yet still have access to this data without user intervention. The customer had a requirement to maintain 10 years' worth of data on-line. Data less than a year old still had the potential to be updated and thus resides on conventional magnetic disks. Data older than a year is considered archived and is placed on optical disks. The ability to archive data to optical disk and still have access to that data gives the system a means to retain large amounts of readily accessible data while significantly reducing the cost of total system storage. The cost benefits of archival storage devices can therefore be incorporated into the Oracle storage medium and I/O subsystem without losing any transaction-processing functionality, while providing an organization access to all of its data.

  10. CLOCS (Computer with Low Context-Switching Time) Operating System Reference Documents

    DTIC Science & Technology

    1988-05-06

    …system are met. In sum, real-time constraints make programming harder in general, because they add a whole new dimension, the time dimension. … cannot be preempted until it allows itself to be. More is stored; less is computed: Alan Jay Smith, of Berkeley, has said that any program can be made five times as swift to run, at the expense of five times the storage space. While his numbers may be questioned, his premise may not: programs can be made…

  11. 1999 NCCS Highlights

    NASA Technical Reports Server (NTRS)

    Bennett, Jerome (Technical Monitor)

    2002-01-01

    The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.

  12. Actual versus predicted performance of an active solar heating system - A comparison using FCHART 4.0

    NASA Astrophysics Data System (ADS)

    Wetzel, P. E.

    1981-11-01

    The performance of an active solar heating system added to a house in Denver, CO was compared with predictions made by the FCHART 4.0 computer program. The house featured 43.23 sq m of collectors with an ethylene-glycol/water heat transfer fluid, and a 3.23 cu m storage tank. The house hot water was preheated in the storage tank, and home space heat was furnished whenever the storage water was above 32 C. Actual meteorological and heating demand data were used for the comparison, rather than long-term averages. Although monthly predictions by the FCHART program were found to diverge from measured data, the annual demand and supply predictions provided a good fit, i.e. within 9%, and were within 1% of the measured solar energy contributed to storage.

  13. Nonlinear Analysis of a Bolted Marine Riser Connector Using NASTRAN Substructuring

    NASA Technical Reports Server (NTRS)

    Fox, G. L.

    1984-01-01

    Results of an investigation of the behavior of a bolted, flange-type marine riser connector are reported. The method used to account for the nonlinear effect of connector separation due to bolt preload and axial tension load is described. The automated multilevel substructuring capability of COSMIC/NASTRAN was employed at considerable savings in computer run time. Simplified formulas for computer resources, i.e., computer run times for modules SDCOMP, FBS, and MPYAD, as well as disk storage space, are presented. Actual run-time data on a VAX-11/780 are compared with the formulas presented.

  14. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular-diagonal (UD) factorized covariance arrays and vector-stored upper-triangular square-root information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and only a one-dimensional scratch array is required. To make the method efficient for large arrays on a virtual-memory machine, computations are arranged so as to avoid expensive page faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
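
    The retriangularization step can be verified numerically: permuting the columns of an upper-triangular square-root information array and restoring triangular form with an orthogonal transformation leaves the information matrix (permuted consistently) unchanged. Full QR is used below for brevity where the paper uses cheaper cyclic Givens rotations or reflections.

    ```python
    # Full QR used for brevity; the paper's point is that cheap cyclic Givens
    # rotations suffice. The information matrix R^T R is preserved either way.
    import numpy as np

    rng = np.random.default_rng(0)
    R = np.triu(rng.normal(size=(5, 5)) + 5 * np.eye(5))   # square-root info array
    perm = [1, 2, 3, 4, 0]                                  # cyclic state permutation

    R_new = np.linalg.qr(R[:, perm])[1]    # permute columns, retriangularize
    Lam = R.T @ R
    print(np.allclose(R_new.T @ R_new, Lam[np.ix_(perm, perm)]))   # True
    ```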

  15. Data Processing Center of Radioastron Project: 3 years of operation.

    NASA Astrophysics Data System (ADS)

    Shatskaya, Marina

    The ASC Data Processing Center (DPC) of the Radioastron Project is a fail-safe, centralized complex of interconnected software and hardware components along with organizational procedures. The tasks facing the scientific data processing center are the organization of service-information exchange, the collection of scientific data, the storage of all scientific data, and science-oriented data processing. The DPC takes part in informational exchange with two tracking stations in Pushchino (Russia) and Green Bank (USA), about 30 ground telescopes, the ballistic center, tracking headquarters, and the session scheduling center. Enormous flows of information go to the Astro Space Center, and to handle these data volumes we have developed specialized network infrastructure, Internet channels, and storage. The computer complex was designed at the Astro Space Center (ASC) of the Lebedev Physical Institute and includes: 800 TB of on-line storage; a 2000 TB hard-drive archive; a backup system on magnetic tapes (2000 TB); 24 TB of redundant storage at Pushchino Radio Astronomy Observatory; Web and FTP servers; and DPC management and data transmission networks. The structure and functions of the ASC Data Processing Center are fully adequate to the data processing requirements of the Radioastron Mission, as was successfully confirmed during the Fringe Search, the Early Science Program, and the first year of the Key Science Program.

  16. Towards Efficient Scientific Data Management Using Cloud Storage

    NASA Technical Reports Server (NTRS)

    He, Qiming

    2013-01-01

    A software prototype allows users to backup and restore data to/from both public and private cloud storage such as Amazon's S3 and NASA's Nebula. Unlike other off-the-shelf tools, this software ensures user data security in the cloud (through encryption), and minimizes users' operating costs by using space- and bandwidth-efficient compression and incremental backup. Parallel data processing utilities have also been developed by using massively scalable cloud computing in conjunction with cloud storage. One of the innovations in this software is using modified open source components to work with a private cloud like NASA Nebula. Another innovation is porting the complex backup-to-cloud software to embedded Linux, running on home networking devices, in order to benefit more users.
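
    As a rough illustration of the compress-encrypt-upload cycle with hash-based incremental backup, here is a minimal sketch; the bucket name, key file, and manifest layout are placeholders, and it uses Amazon S3 via boto3 plus a symmetric Fernet key rather than the prototype's actual cipher, chunking, or NASA Nebula integration.

```python
import hashlib, json, os, zlib

import boto3
from cryptography.fernet import Fernet

BUCKET = "my-backup-bucket"                       # hypothetical bucket
s3 = boto3.client("s3")
fernet = Fernet(open("backup.key", "rb").read())  # pre-generated symmetric key

def backup(paths, manifest_file="manifest.json"):
    """Upload only files whose content hash changed since the last run."""
    manifest = {}
    if os.path.exists(manifest_file):
        manifest = json.load(open(manifest_file))
    for path in paths:
        data = open(path, "rb").read()
        digest = hashlib.sha256(data).hexdigest()
        if manifest.get(path) == digest:
            continue                               # unchanged: skip upload
        blob = fernet.encrypt(zlib.compress(data)) # compress, then encrypt
        s3.put_object(Bucket=BUCKET, Key=digest, Body=blob)
        manifest[path] = digest
    json.dump(manifest, open(manifest_file, "w"))
```

    Compressing before encrypting matters: well-encrypted data looks random and no longer compresses, so reversing the order would forfeit the bandwidth savings.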

  17. Study of data entry requirements at Marshall Space Flight Computation Center

    NASA Technical Reports Server (NTRS)

    Sherman, G. R.

    1975-01-01

    An economic and systems analysis of a data center was conducted. Current facilities for data storage of documentation are shown to be inadequate and outmoded for efficient data handling. Redesign of documents, condensation of the keypunching operation, upgrading of hardware, and retraining of personnel are the solutions proposed to improve the present data system.

  18. Development of an automated electrical power subsystem testbed for large spacecraft

    NASA Technical Reports Server (NTRS)

    Hall, David K.; Lollar, Louis F.

    1990-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed two autonomous electrical power system breadboards. The first breadboard, the autonomously managed power system (AMPS), is a two-power-channel system featuring energy generation and storage and 24 kW of switchable loads, all under computer control. The second breadboard, the space station module/power management and distribution (SSM/PMAD) testbed, is a two-bus 120-Vdc model of the Space Station power subsystem featuring smart switchgear and multiple knowledge-based control systems. NASA/MSFC is combining these two breadboards to form a complete autonomous source-to-load power system called the large autonomous spacecraft electrical power system (LASEPS). LASEPS is a high-power, intelligent, physical electrical power system testbed which can be used to derive and test new power system control techniques, new power switching components, and new energy storage elements in a more accurate and realistic fashion. LASEPS has the potential to be interfaced with other spacecraft subsystem breadboards in order to simulate an entire space vehicle. The two individual systems, the combined systems (hardware and software), and the current and future uses of LASEPS are described.

  19. CSUNSat-1 Team working on their CubeSat at California State University Northridge

    NASA Image and Video Library

    2015-03-02

    CSUNSat-1 Team (Adam Kaplan, James Flynn, Donald Eckels) working on their CubeSat at California State University Northridge. The primary mission of CSUNSat1 is to space-test an innovative low-temperature-capable energy storage system developed by the Jet Propulsion Laboratory, raising its Technology Readiness Level (TRL) from 4-5 to 7. The success of this energy storage system will enable future missions, especially those in deep space, to do more science while requiring less energy, mass and volume. This CubeSat was designed, built, programmed, and tested by a team of over 70 engineering and computer science students at CSUN. The primary source of funding for CSUNSat1 comes from NASA's Smallsat Technology Partnership program. Launched through NASA's CubeSat Launch Initiative on the ELaNa XVII mission, NET April 18, 2017, aboard the seventh Orbital ATK Cygnus Commercial Resupply Services flight (OA-7) to the International Space Station; deployment date TBD.

  20. Computational design of molecules for an all-quinone redox flow battery.

    PubMed

    Er, Süleyman; Suh, Changwon; Marshak, Michael P; Aspuru-Guzik, Alán

    2015-02-01

    Inspired by the electron transfer properties of quinones in biological systems, we recently showed that quinones are also very promising electroactive materials for stationary energy storage applications. Due to the practically infinite chemical space of organic molecules, the discovery of additional quinones or other redox-active organic molecules for energy storage applications is an open field of inquiry. Here, we introduce a high-throughput computational screening approach that we applied to an accelerated study of a total of 1710 quinone (Q) and hydroquinone (QH2) (i.e., two-electron two-proton) redox couples. We identified promising candidates for both the negative and positive sides of organic-based aqueous flow batteries, thus enabling an all-quinone battery. To further aid the development of additional interesting electroactive small molecules, we also provide emerging quantitative structure-property relationships.

  1. Computational design of molecules for an all-quinone redox flow battery

    PubMed Central

    Er, Süleyman; Suh, Changwon; Marshak, Michael P.

    2015-01-01

    Inspired by the electron transfer properties of quinones in biological systems, we recently showed that quinones are also very promising electroactive materials for stationary energy storage applications. Due to the practically infinite chemical space of organic molecules, the discovery of additional quinones or other redox-active organic molecules for energy storage applications is an open field of inquiry. Here, we introduce a high-throughput computational screening approach that we applied to an accelerated study of a total of 1710 quinone (Q) and hydroquinone (QH2) (i.e., two-electron two-proton) redox couples. We identified promising candidates for both the negative and positive sides of organic-based aqueous flow batteries, thus enabling an all-quinone battery. To further aid the development of additional interesting electroactive small molecules, we also provide emerging quantitative structure-property relationships. PMID:29560173
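
    To make the screening loop concrete, here is a toy sketch of the enumerate-predict-filter pattern described in the record above; the substituent set, the additive shift model, the 0.70 V parent potential, and both cutoffs are invented placeholders, not the paper's DFT-calibrated chemistry.

```python
from itertools import product

R_GROUPS = ["H", "OH", "NH2", "CN", "NO2", "SO3H"]

# Hypothetical per-substituent shifts (V) to the parent couple's potential.
SHIFT = {"H": 0.0, "OH": -0.09, "NH2": -0.14,
         "CN": 0.17, "NO2": 0.21, "SO3H": 0.08}

def predicted_potential(substituents):
    """Additive model: each substituent shifts the parent potential."""
    return 0.70 + sum(SHIFT[s] for s in substituents)

# Enumerate two-site substitution patterns and score each candidate couple.
couples = [(subs, predicted_potential(subs))
           for subs in product(R_GROUPS, repeat=2)]

# Illustrative cutoffs for the two electrolytes of an all-quinone cell.
negolyte = [c for c in couples if c[1] < 0.50]   # low-potential side
posolyte = [c for c in couples if c[1] > 1.00]   # high-potential side
print(len(couples), len(negolyte), len(posolyte))
```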

  2. Space Science Cloud: a Virtual Space Science Research Platform Based on Cloud Model

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoyan; Tong, Jizhou; Zou, Ziming

    Through independent and co-operative science missions, the Strategic Pioneer Program (SPP) on Space Science, the new space science initiative in China approved by CAS and implemented by the National Space Science Center (NSSC), is dedicated to seeking new discoveries and new breakthroughs in space science, thus deepening the understanding of the universe and planet Earth. In the framework of this program, in order to support the operations of space science missions and satisfy the demand of related research activities for e-Science, NSSC is developing a virtual space science research platform based on the cloud model, namely the Space Science Cloud (SSC). In order to support mission demonstration, SSC integrates an interactive satellite orbit design tool, a satellite structure and payload layout design tool, a payload observation coverage analysis tool, etc., to help scientists analyze and verify space science mission designs. Another important function of SSC is supporting mission operations, which run through the space satellite data pipelines. Mission operators can acquire and process observation data, then distribute the data products to other systems or issue the data and archives with the services of SSC. In addition, SSC provides useful data, tools and models for space researchers. Several databases in the field of space science are integrated, and an efficient retrieval system is being developed. Common tools for data visualization, deep processing (e.g., smoothing and filtering tools), analysis (e.g., an FFT analysis tool and a minimum variance analysis tool) and mining (e.g., a proton event correlation analysis tool) are also integrated to help researchers better utilize the data. The space weather models on SSC include a magnetic storm forecast model, a multi-station middle and upper atmospheric climate model, a solar energetic particle propagation model and so on. All the services mentioned above are based on the e-Science infrastructures of CAS, e.g., cloud storage and cloud computing. SSC also provides its users with self-service storage and computing resources. At present, the prototyping of SSC is underway and the platform is expected to be put into trial operation in August 2014. We hope that as SSC develops, our vision of Digital Space may come true someday.

  3. KSC-06pd0545

    NASA Image and Video Library

    2006-03-24

    KENNEDY SPACE CENTER, FLA. -- Kennedy Space Center Deputy Director Bill Parsons explains the significance of the Operations Support Building II (behind him) to guests at the ribbon-cutting ceremony. The Operations Support Building II is an Agency safety and health initiative project to replace 198,466 square feet of substandard modular housing and trailers in the Launch Complex 39 area at Kennedy Space Center. The five-story building, which sits south of the Vehicle Assembly Building and faces the launch pads, includes 960 office spaces, 16 training rooms, computer and multimedia conference rooms, a Mission Conference Center with an observation deck, technical libraries, an Exchange store, storage, break areas, and parking. Photo credit: NASA/George Shelton

  4. An adaptive process-based cloud infrastructure for space situational awareness applications

    NASA Astrophysics Data System (ADS)

    Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce

    2014-06-01

    Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate that can meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical virtual machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale and a prototype are examined in detail. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of a more granular and flexible allocation of cloud computing resources are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.

  5. Fuel cell energy storage for Space Station enhancement

    NASA Technical Reports Server (NTRS)

    Stedman, J. K.

    1990-01-01

    Viewgraphs on fuel cell energy storage for space station enhancement are presented. Topics covered include: power profile; solar dynamic power system; photovoltaic battery; space station energy demands; orbiter fuel cell power plant; space station energy storage; fuel cell system modularity; energy storage system development; and survival power supply.

  6. Performance/price estimates for cortex-scale hardware: a design space exploration.

    PubMed

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression

    NASA Astrophysics Data System (ADS)

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-08-01

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.

  8. Reducing Disk Storage of Full-3D Seismic Waveform Tomography (F3DT) Through Lossy Online Compression

    DOE PAGES

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-05-05

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
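
    An error-bounded lossy compressor in the same spirit is Lindstrom's zfp library, so the store/reload cycle can be sketched against its zfpy bindings; the array shape and the 1e-6 tolerance below are illustrative, not values from the study.

```python
import numpy as np
import zfpy

strain = np.random.rand(6, 128, 128)             # stand-in strain components

compressed = zfpy.compress_numpy(strain, tolerance=1e-6)  # bytes to write out
restored = zfpy.decompress_numpy(compressed)              # read back for kernels

assert np.max(np.abs(restored - strain)) <= 1e-6          # error bound holds
print(f"compression ratio: {strain.nbytes / len(compressed):.1f}x")
```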

  9. MARC ES: a computer program for estimating medical information storage requirements.

    PubMed

    Konoske, P J; Dobbins, R W; Gauker, E D

    1998-01-01

    During combat, documentation of medical treatment information is critical for maintaining continuity of patient care. However, knowledge of prior status and treatment of patients is limited to the information noted on a paper field medical card. The Multi-technology Automated Reader Card (MARC), a smart card, has been identified as a potential storage mechanism for casualty medical information. Focusing on data capture and storage technology, this effort developed a Windows program, MARC ES, to estimate storage requirements for the MARC. The program calculates storage requirements for a variety of scenarios using medical documentation requirements, casualty rates, and casualty flows and provides the user with a tool to estimate the space required to store medical data at each echelon of care for selected operational theaters. The program can also be used to identify the point at which data must be uploaded from the MARC if size constraints are imposed. Furthermore, this model can be readily extended to other systems that store or transmit medical information.
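
    The kind of estimate MARC ES produces reduces to simple arithmetic over a casualty's care path, sketched below; the echelon names, record sizes, and example flow are hypothetical placeholders, not figures from the program.

```python
# Bytes accumulated on one card as a casualty moves through echelons of care.
BYTES_PER_ENTRY = {                     # assumed per-encounter record sizes
    "echelon_1_first_aid": 200,
    "echelon_2_treatment": 800,
    "echelon_3_hospital": 2400,
}

def card_bytes(flow):
    """Total bytes written for a care path of (echelon, encounters) pairs."""
    return sum(BYTES_PER_ENTRY[echelon] * n for echelon, n in flow)

path = [("echelon_1_first_aid", 2),
        ("echelon_2_treatment", 1),
        ("echelon_3_hospital", 1)]
print(card_bytes(path), "bytes")   # compare against the card's capacity to
                                   # find where data must be uploaded
```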

  10. Computer-operated analytical platform for the determination of nutrients in hydroponic systems.

    PubMed

    Rius-Ruiz, F Xavier; Andrade, Francisco J; Riu, Jordi; Rius, F Xavier

    2014-03-15

    Hydroponics is a water, energy, space, and cost efficient system for growing plants in constrained spaces or land exhausted areas. Precise control of hydroponic nutrients is essential for growing healthy plants and producing high yields. In this article we report for the first time on a new computer-operated analytical platform which can be readily used for the determination of essential nutrients in hydroponic growing systems. The liquid-handling system uses inexpensive components (i.e., peristaltic pump and solenoid valves), which are discretely computer-operated to automatically condition, calibrate and clean a multi-probe of solid-contact ion-selective electrodes (ISEs). These ISEs, which are based on carbon nanotubes, offer high portability, robustness and easy maintenance and storage. With this new computer-operated analytical platform we performed automatic measurements of K(+), Ca(2+), NO3(-) and Cl(-) during tomato plants growth in order to assure optimal nutritional uptake and tomato production. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Feasibility of using Extreme Ultraviolet Explorer (EUVE) reaction wheels to satisfy Space Infrared Telescope Facility (SIRTF) maneuver requirements

    NASA Technical Reports Server (NTRS)

    Lightsey, W. D.

    1990-01-01

    A digital computer simulation is used to determine if the extreme ultraviolet explorer (EUVE) reaction wheels can provide sufficient torque and momentum storage capability to meet the space infrared telescope facility (SIRTF) maneuver requirements. A brief description of the pointing control system (PCS) and the sensor and actuator dynamic models used in the simulation is presented. A model to represent a disturbance such as fluid sloshing is developed. Results developed with the simulation, and a discussion of these results are presented.

  12. Geocoded data structures and their applications to Earth science investigations

    NASA Technical Reports Server (NTRS)

    Goldberg, M.

    1984-01-01

    A geocoded data structure is a means for digitally representing a geographically referenced map or image. The characteristics of representative cellular, linked, and hybrid geocoded data structures are reviewed. The data processing requirements of Earth science projects at the Goddard Space Flight Center and the basic tools of geographic data processing are described. Specific ways that new geocoded data structures can be used to adapt these tools to scientists' needs are presented. These include: expanding analysis and modeling capabilities; simplifying the merging of data sets from diverse sources; and saving computer storage space.

  13. Efficient Computation of Coherent Synchrotron Radiation Taking into Account 6D Phase Space Distribution of Emitting Electrons

    NASA Astrophysics Data System (ADS)

    Chubar, O.; Couprie, M.-E.

    2007-01-01

    A CPU-efficient method for calculation of the frequency-domain electric field of Coherent Synchrotron Radiation (CSR), taking into account the 6D phase space distribution of electrons in a bunch, is proposed. As an application example, calculation results are presented for the CSR emitted by an electron bunch with small longitudinal and large transverse sizes. Such a situation can be realized in storage rings or ERLs by transverse deflection of the electron bunches in special crab-type RF cavities, i.e., using the technique proposed for the generation of femtosecond X-ray pulses (A. Zholents et al., 1999). The computation, performed for the parameters of the SOLEIL storage ring, shows that if the transverse size of the electron bunch is larger than the diffraction limit for single-electron SR at a given wavelength, then the angular distribution of the CSR at this wavelength is affected and the coherent flux is reduced. Nevertheless, for transverse bunch dimensions up to several millimeters and a longitudinal bunch size smaller than a hundred micrometers, the resulting CSR flux in the far infrared spectral range is still many orders of magnitude higher than the flux of incoherent SR, and therefore can be considered for practical use.

  14. Efficiently mapping structure-property relationships of gas adsorption in porous materials: application to Xe adsorption.

    PubMed

    Kaija, A R; Wilmer, C E

    2017-09-08

    Designing better porous materials for gas storage or separations applications frequently leverages known structure-property relationships. Reliable structure-property relationships, however, only reveal themselves when adsorption data on many porous materials are aggregated and compared. Gathering enough data experimentally is prohibitively time consuming, and even approaches based on large-scale computer simulations face challenges. Brute force computational screening approaches that do not efficiently sample the space of porous materials may be ineffective when the number of possible materials is too large. Here we describe a general and efficient computational method for mapping structure-property spaces of porous materials that can be useful for adsorption related applications. We describe an algorithm that generates random porous "pseudomaterials", for which we calculate structural characteristics (e.g., surface area, pore size and void fraction) and also gas adsorption properties via molecular simulations. Here we chose to focus on void fraction and Xe adsorption at 1 bar, 5 bar, and 10 bar. The algorithm then identifies pseudomaterials with rare combinations of void fraction and Xe adsorption and mutates them to generate new pseudomaterials, thereby selectively adding data only to those parts of the structure-property map that are the least explored. Use of this method can help guide the design of new porous materials for gas storage and separations applications in the future.
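
    A compact sketch of the generate-bin-mutate loop described above follows; the two-parameter pseudomaterials and the toy uptake model are drastic simplifications of the paper's structures and GCMC simulations.

```python
import random

def random_pseudomaterial():
    return {"void_fraction": random.uniform(0.0, 1.0),
            "epsilon": random.uniform(1.0, 500.0)}    # assumed LJ well depth

def simulate_xe_uptake(m):
    """Placeholder property model standing in for molecular simulation."""
    return m["void_fraction"] * m["epsilon"] / 500.0

def bin_of(m):
    """Bin a material in (void fraction, uptake) structure-property space."""
    return (int(m["void_fraction"] * 10), int(simulate_xe_uptake(m) * 10))

population = [random_pseudomaterial() for _ in range(1000)]
for _ in range(50):                                   # mutation generations
    counts = {}
    for m in population:
        counts[bin_of(m)] = counts.get(bin_of(m), 0) + 1
    rarest = min(population, key=lambda m: counts[bin_of(m)])
    child = dict(rarest)
    child["epsilon"] *= random.uniform(0.8, 1.2)      # perturb one parameter
    population.append(child)              # selectively fills sparse regions
```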

  15. Transitioning to digital radiography.

    PubMed

    Drost, Wm Tod

    2011-04-01

    To describe the different forms of digital radiography (DR), image file formats, supporting equipment and services required for DR, storage of digital images, and teleradiology. Purchasing a DR system is a major investment for a veterinary practice. Types of DR systems include computed radiography, charge coupled devices, and direct or indirect DR. Comparison of workflow for analog and DR is presented. On the surface, switching to DR involves the purchase of DR acquisition hardware. The X-ray machine, table and grids used in analog radiography are the same for DR. Realistically, a considerable infrastructure supports the image acquisition hardware. This infrastructure includes monitors, computer workstations, a robust computer network and internet connection, a plan for storage and backup of images, and service contracts. Advantages of DR compared with analog radiography include improved image quality (when used properly), ease of use (more forgiving of errors in radiographic technique), speed of making a complete study (important for critically ill patients), fewer repeat radiographs, less time looking for imaging studies, less physical storage space, and the ability to easily send images for consultation. With an understanding of the infrastructure requirements, capabilities and limitations of DR, an informed veterinary practice should be better able to make a sound decision about transitioning to DR. © Veterinary Emergency and Critical Care Society 2011.

  16. Sparse Matrices in MATLAB: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Moler, Cleve; Schreiber, Robert

    1992-01-01

    The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
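
    The same storage idea is easy to demonstrate with SciPy's compressed sparse column (CSC) format, the layout MATLAB's sparse matrices also use: memory scales with the number of nonzeros rather than with n^2.

```python
import numpy as np
from scipy.sparse import csc_matrix

n = 10_000
rows = cols = np.arange(n)
diag = csc_matrix((np.ones(n), (rows, cols)), shape=(n, n))  # identity, n nnz

sparse_bytes = diag.data.nbytes + diag.indices.nbytes + diag.indptr.nbytes
print(sparse_bytes)          # O(n): a few hundred kilobytes
print(n * n * 8)             # dense float64 storage would be 800 MB

y = diag @ np.ones(n)        # operations cost O(nnz), not O(n^2)
```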

  17. Efficient proof of ownership for cloud storage systems

    NASA Astrophysics Data System (ADS)

    Zhong, Weiwei; Liu, Zhusong

    2017-08-01

    Cloud storage systems use deduplication technology to save disk space and bandwidth, but the use of this technology has attracted targeted security attacks: an attacker can deceive the server into granting ownership of a file merely by obtaining the hash value of the original file. In order to solve the above security problems and accommodate the different security requirements of files in a cloud storage system, an efficient and information-theoretically secure proof-of-ownership scheme is proposed that supports file rating. File rating is implemented with the K-means algorithm, and random-seed and pre-calculation techniques are used to make the proof of ownership safe and efficient. The scheme is information-theoretically secure and achieves better performance in the most sensitive areas of client-side I/O and computation.
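
    For orientation, here is a generic block-challenge proof-of-ownership sketch, not the paper's rated, information-theoretically secure scheme: the server demands hashes of randomly chosen file blocks, which an attacker holding only the file's overall hash cannot produce.

```python
import hashlib
import os
import random

BLOCK = 4096

def server_challenge(num_blocks, k=8):
    """Pick k random block indices for the client to prove it holds."""
    rng = random.Random(os.urandom(16))
    return rng.sample(range(num_blocks), min(k, num_blocks))

def client_response(path, indices):
    """Hash the challenged blocks of the locally held file."""
    proofs = []
    with open(path, "rb") as f:
        for i in indices:
            f.seek(i * BLOCK)
            proofs.append(hashlib.sha256(f.read(BLOCK)).hexdigest())
    return proofs

def server_verify(stored_block_hashes, indices, proofs):
    """Compare against per-block hashes recorded at first upload."""
    return all(stored_block_hashes[i] == p
               for i, p in zip(indices, proofs))
```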

  18. Performance of evacuated tubular solar collectors in a residential heating and cooling system. Final report, 1 October 1978-30 September 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duff, W.S.; Loef, G.O.G.

    1981-03-01

    Operation of CSU Solar House I during the heating season of 1978-1979 and during the 1979 cooling season was based on the use of systems comprising an experimental evacuated tubular solar collector, a non-freezing aqueous collection medium, heat exchange to an insulated conventional vertical cylindrical storage tank and to a built-up rectangular insulated storage tank, heating of circulating air by solar heated water and by electric auxiliary in an off-peak heat storage unit, space cooling by lithium bromide absorption chiller, and service water heating by solar exchange and electric auxiliary. Automatic system control and automatic data acquisition and computation are provided. This system is compared with others evaluated in CSU Solar Houses I, II and III, and with computer predictions based on mathematical models. Of the 69,513 MJ total energy requirement for space heating and hot water during a record cold winter, solar provided 33,281 MJ, equivalent to 48 percent. Thirty percent of the incident solar energy was collected and 29 percent was delivered and used for heating and hot water. Of 33,320 MJ required for cooling and hot water during the summer, 79 percent or 26,202 MJ were supplied by solar. Thirty-five percent of the incident solar energy was collected and 26 percent was used for hot water and cooling in the summer. Although not as efficient as the Corning evacuated tube collector previously used, the Philips experimental collector provides solar heating and cooling with minimum operational problems. Improved performance, particularly for cooling, resulted from the use of a very well-insulated heat storage tank. Daytime (on-peak) electric auxiliary heating was completely avoided by use of off-peak electric heat storage. A well-designed and operated solar heating and cooling system provided 56 percent of the total energy requirements for heating, cooling, and hot water.

  19. HFEM3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Chester J

    Software solves the three-dimensional Poisson equation div(k grad(u)) = f by the finite element method for the case when material properties, k, are distributed over a hierarchy of edges, facets and tetrahedra in the finite element mesh. The method is described in Weiss, CJ, Finite element analysis for model parameters distributed on a hierarchy of geometric simplices, Geophysics, v82, E155-167, doi:10.1190/GEO2017-0058.1 (2017). A standard finite element method for solving Poisson's equation is augmented by including in the 3D stiffness matrix additional 2D and 1D stiffness matrices representing the contributions from material properties associated with mesh faces and edges, respectively. The resulting linear system is solved iteratively using the conjugate gradient method with Jacobi preconditioning. To minimize computer storage during program execution, the linear solver computes matrix-vector contractions element-by-element over the mesh, without explicit storage of the global stiffness matrix. Program output is VTK-compliant for visualization and rendering by 3rd-party software. The program uses dynamic memory allocation, so there are no hard limits on problem size beyond those imposed by the operating system and configuration on which the software is run. The dimension, N, of the finite element solution vector is constrained by the addressable space in 32- versus 64-bit operating systems. Total working space required for the program is approximately 13*N double-precision words.
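
    The element-by-element, matrix-free solve is worth a sketch. Below is a minimal Jacobi-preconditioned conjugate gradient in NumPy that applies the operator from local element matrices without ever assembling the global stiffness matrix; the `elements` and `diag` inputs are assumed data structures for illustration, not HFEM3D's actual internals.

```python
import numpy as np

def apply_A(x, elements, n):
    """Apply the global operator from (dof_indices, local_matrix) pairs."""
    y = np.zeros(n)
    for dofs, Ke in elements:            # element-by-element contraction
        y[dofs] += Ke @ x[dofs]
    return y

def jacobi_pcg(elements, diag, b, tol=1e-8, maxit=1000):
    """Conjugate gradient with Jacobi (diagonal) preconditioning."""
    n = b.size
    x = np.zeros(n)
    r = b - apply_A(x, elements, n)
    z = r / diag                         # Jacobi preconditioning step
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = apply_A(p, elements, n)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = r / diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```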

  20. A computing method for sound propagation through a nonuniform jet stream

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Liu, C. H.

    1974-01-01

    The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables, requiring the solution of a system of nonlinear algebraic equations. The nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed with special emphasis on its suitability for numerical solution on fast computers. The technique is very attractive because the resulting equations are linear in the new variables. The computer solution to such a linear system of algebraic equations may be obtained by well-defined and direct means which are conservative of computer time and storage space. Typical examples are illustrated and computational results are compared with available numerical and experimental data.

  1. Heliophysics Legacy Data Restoration

    NASA Astrophysics Data System (ADS)

    Candey, R. M.; Bell, E. V., II; Bilitza, D.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Grayzeck, E. J.; Harris, B. T.; Hills, H. K.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; McCaslin, P. W.; McGuire, R. E.; Papitashvili, N. E.; Rhodes, S. A.; Roberts, D. A.; Yurow, R. E.

    2016-12-01

    The Space Physics Data Facility (SPDF), in collaboration with the National Space Science Data Coordinated Archive (NSSDCA), is converting datasets from older NASA missions to online storage. Valuable science is still buried within these datasets, particularly by applying modern algorithms on computers with vastly more storage and processing power than available when originally measured, and when analyzed in conjunction with other data and models. The data were also not readily accessible as archived on 7- and 9-track tapes, microfilm and microfiche and other media. Although many datasets have now been moved online in formats that are readily analyzed, others will still require some deciphering to puzzle out the data values and scientific meaning. There is an ongoing effort to convert the datasets to a modern Common Data Format (CDF) and add metadata for use in browse and analysis tools such as CDAWeb.
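
    A minimal sketch of the final conversion step, writing decoded values into a CDF file, can be given with spacepy's pycdf bindings (which require the NASA CDF C library); the file, variable, and attribute names are illustrative, not SPDF's actual metadata conventions.

```python
from datetime import datetime, timedelta

from spacepy import pycdf

epochs = [datetime(1975, 1, 1) + timedelta(hours=i) for i in range(24)]
flux = [float(i) for i in range(24)]   # stand-in for values decoded from tape

with pycdf.CDF("legacy_restored.cdf", "") as cdf:   # '' creates a new file
    cdf["Epoch"] = epochs              # stored as a CDF epoch variable
    cdf["proton_flux"] = flux
    cdf["proton_flux"].attrs["UNITS"] = "1/(cm^2 s sr MeV)"
    cdf.attrs["Title"] = "Restored legacy dataset (example)"
```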

  2. Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.

    2014-12-01

    The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and even run models to better understand the impacts of a rapidly changing climate for areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data intensive, science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.

  3. KSC-06pd0546

    NASA Image and Video Library

    2006-03-24

    KENNEDY SPACE CENTER, FLA. -- Scott Kerr, director of Engineering Development at Kennedy Space Center, addresses guests at a ribbon-cutting ceremony for the Operations Support Building II (behind him). He and other key Center personnel and guests attended the significant event. The Operations Support Building II is an Agency safety and health initiative project to replace 198,466 square feet of substandard modular housing and trailers in the Launch Complex 39 area at Kennedy Space Center. The five-story building, which sits south of the Vehicle Assembly Building and faces the launch pads, includes 960 office spaces, 16 training rooms, computer and multimedia conference rooms, a Mission Conference Center with an observation deck, technical libraries, an Exchange store, storage, break areas, and parking. Photo credit: NASA/George Shelton

  4. KSC-06pd0544

    NASA Image and Video Library

    2006-03-24

    KENNEDY SPACE CENTER, FLA. -- Kennedy Space Center Deputy Director Bill Parsons talks to guests at a ribbon-cutting ceremony for the Operations Support Building II (behind him). He and other key Center personnel and guests attended the significant event. The Operations Support Building II is an Agency safety and health initiative project to replace 198,466 square feet of substandard modular housing and trailers in the Launch Complex 39 area at Kennedy Space Center. The five-story building, which sits south of the Vehicle Assembly Building and faces the launch pads, includes 960 office spaces, 16 training rooms, computer and multimedia conference rooms, a Mission Conference Center with an observation deck, technical libraries, an Exchange store, storage, break areas, and parking. Photo credit: NASA/George Shelton

  5. Overview of Energy Storage Technologies for Space Applications

    NASA Technical Reports Server (NTRS)

    Surampudi, Subbarao

    2006-01-01

    This presentation gives an overview of the energy storage technologies that are being used in space applications. Energy storage systems have been used in 99% of the robotic and human space missions launched since 1960. Energy storage is used in space missions to provide primary electrical power to launch vehicles, crew exploration vehicles, planetary probes, and astronaut equipment; to store electrical energy in solar powered orbital and surface missions and provide electrical energy during eclipse periods; and to meet peak power demands in nuclear powered rovers, landers, and planetary orbiters. The power source service life (discharge hours) dictates the choice of energy storage technology (capacitors, primary batteries, rechargeable batteries, fuel cells, regenerative fuel cells, flywheels). NASA is planning a number of robotic and human missions for the exploration of space. These missions will require energy storage devices with mass and volume efficiency, long-life capability, and the ability to operate safely in extreme environments. Advanced energy storage technologies continue to be developed to meet future space mission needs.

  6. Paging memory from random access memory to backing storage in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  7. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

    Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
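
    The factored projector is straightforward to sketch with SciPy sparse matrices: the full system matrix is never formed, and forward and back projection apply the three factors in sequence. The shapes and random sparsity patterns below are placeholders, not the blurring models estimated in the paper.

```python
import numpy as np
import scipy.sparse as sp

n_pix, n_bins = 128 * 128, 180 * 192
B_img = sp.random(n_pix, n_pix, density=1e-4, format="csr")    # image blur
G = sp.random(n_bins, n_pix, density=1e-4, format="csr")       # line integrals
B_sino = sp.random(n_bins, n_bins, density=1e-4, format="csr") # detector blur

def forward(x):
    return B_sino @ (G @ (B_img @ x))       # y = P x without materializing P

def back(y):
    return B_img.T @ (G.T @ (B_sino.T @ y)) # matched transpose projector

y = forward(np.ones(n_pix))
x_bp = back(y)
```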

  8. Study of space shuttle orbiter system management computer function. Volume 2: Automated performance verification concepts

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The findings are presented of investigations on concepts and techniques in automated performance verification. The investigations were conducted to provide additional insight into the design methodology and to develop a consolidated technology base from which to analyze performance verification design approaches. Other topics discussed include data smoothing, function selection, flow diagrams, data storage, and shuttle hydraulic systems.

  9. The Czech National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest part of the CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided into the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.

  10. Computer-based communication in support of scientific and technical work. [conferences on management information systems used by scientists of NASA programs

    NASA Technical Reports Server (NTRS)

    Vallee, J.; Wilson, T.

    1976-01-01

    Results are reported of the first experiments for a computer conference management information system at the National Aeronautics and Space Administration. Between August 1975 and March 1976, two NASA projects with geographically separated participants (NASA scientists) used the PLANET computer conferencing system for portions of their work. The first project was a technology assessment of future transportation systems. The second project involved experiments with the Communication Technology Satellite. As part of this project, pre- and postlaunch operations were discussed in a computer conference. These conferences also provided the context for an analysis of the cost of computer conferencing. In particular, six cost components were identified: (1) terminal equipment, (2) communication with a network port, (3) network connection, (4) computer utilization, (5) data storage and (6) administrative overhead.

  11. Fluid management in the optimization of space construction

    NASA Technical Reports Server (NTRS)

    Snyder, Howard

    1990-01-01

    Fluid management impacts strongly on the optimization of space construction. Large quantities of liquids are needed for propellants and life support. The mass of propellant liquids is comparable to that required for the structures. There may be a strong dynamic interaction between the stored liquids and the space structure unless the design minimizes the interaction. The constraints of cost and time required optimization of the supply/resupply strategy. The proper selection and design of the fluid management methods for: slosh control; stratification control; acquisition; transfer; gauging; venting; dumping; contamination control; selection of tank configuration and size; the storage state and the control system can improve the entire system performance substantially. Our effort consists of building mathematical/computer models of the various fluid management methods and testing them against the available experimental data. The results of the models are used as inputs to the system operations studies. During the past year, the emphasis has been on modeling: the transfer of cryogens; sloshing and the storage configuration. The work has been intermeshed with ongoing NASA design and development studies to leverage the funds provided by the Center.

  12. Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs.

    PubMed

    Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang

    2017-06-17

    Fog (from core to edge) computing is a newly emerging computing platform, which utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, thus having great development potential. However, the issue of security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform is crucial for preserving the security of a huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiency of the LM scheme, which achieves a long network lifetime only at the cost of large storage space. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink. A large marking probability is assigned to nodes in remote areas (areas at a long distance from the sink), while a small marking probability is applied to nodes in nearby areas (areas at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information will be migrated to nodes at a longer distance from the sink to increase the amount of stored marking information, thus enhancing the security performance in the process of migration. The experimental simulation shows that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12-1.28 times the amount of marking information that the equal probability marking approach achieves, and has 1.15-1.26 times the storage utilization efficiency of other schemes.
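
    The distance-weighted marking rule can be sketched in a few lines: nodes far from the sink mark packets with high probability, nearby nodes with low probability. The linear interpolation and both endpoint probabilities below are an assumed form for illustration, not the UPLM scheme's exact function.

```python
import random

def marking_probability(hops_to_sink, max_hops, p_near=0.05, p_far=0.60):
    """Interpolate from p_near (beside the sink) to p_far (network edge)."""
    return p_near + (p_far - p_near) * (hops_to_sink / max_hops)

def maybe_mark(packet_marks, node_id, hops_to_sink, max_hops):
    if random.random() < marking_probability(hops_to_sink, max_hops):
        packet_marks.append(node_id)   # marking info later migrates outward

marks = []
maybe_mark(marks, node_id=17, hops_to_sink=9, max_hops=10)  # likely marked
```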

  13. Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs

    PubMed Central

    Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang

    2017-01-01

    Fog (from core to edge) computing is a newly emerging computing platform, which utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, thus having great development potential. However, the issue of security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform is crucial for preserving the security of a huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiency of the LM scheme, which achieves a long network lifetime only at the cost of large storage space. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink. A large marking probability is assigned to nodes in remote areas (areas at a long distance from the sink), while a small marking probability is applied to nodes in nearby areas (areas at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information will be migrated to nodes at a longer distance from the sink to increase the amount of stored marking information, thus enhancing the security performance in the process of migration. The experimental simulation shows that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12–1.28 times the amount of marking information that the equal probability marking approach achieves, and has 1.15–1.26 times the storage utilization efficiency of other schemes. PMID:28629135

  14. Centralized Duplicate Removal Video Storage System with Privacy Preservation in IoT.

    PubMed

    Yan, Hongyang; Li, Xuan; Wang, Yu; Jia, Chunfu

    2018-06-04

    In recent years, the Internet of Things (IoT) has found wide application and attracted much attention. Since most of the end-terminals in IoT have limited capabilities for storage and computing, it has become a trend to outsource data from local devices to cloud computing. To further reduce the communication bandwidth and storage space, data deduplication has been widely adopted to eliminate the redundant data. However, since data collected in IoT are sensitive and closely related to users' personal information, the privacy protection of users' information becomes a challenge. As the channels, like the wireless channels between the terminals and the cloud servers in IoT, are public and the cloud servers are not fully trusted, data have to be encrypted before being uploaded to the cloud. However, encryption makes deduplication by the cloud server difficult because the ciphertext will be different even if the underlying plaintext is identical. In this paper, we build a centralized privacy-preserving duplicate removal storage system, which supports both file-level and block-level deduplication. In order to avoid the leakage of statistical information of data, Intel Software Guard Extensions (SGX) technology is utilized to protect the deduplication process on the cloud server. The results of the experimental analysis demonstrate that the new scheme can significantly improve the deduplication efficiency and enhance the security. It is envisioned that the duplicate removal system with privacy preservation will be of great use in the centralized storage environment of IoT.
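
    Block-level deduplication itself reduces to content-addressed storage, sketched below; the paper's per-block encryption and the SGX-shielded comparison are deliberately omitted from this minimal sketch.

```python
import hashlib

BLOCK = 4096
store = {}          # digest -> block bytes (each unique block stored once)

def ingest(data):
    """Split data into blocks, store only unseen ones, return the recipe."""
    recipe = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # duplicate blocks cost nothing
        recipe.append(digest)
    return recipe

def restore(recipe):
    return b"".join(store[d] for d in recipe)

r1 = ingest(b"A" * 10000)
r2 = ingest(b"A" * 10000)       # second copy adds no new blocks
assert restore(r2) == b"A" * 10000
```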

  15. Development of climate data storage and processing model

    NASA Astrophysics Data System (ADS)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes on local and regional scales. The model is based on a «shared nothing» distributed computing architecture and assumes a computing network where each computing node is independent and self-sufficient. Each node holds dedicated software for the processing and visualization of geospatial data, providing programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data are represented by collections of netCDF files stored in a hierarchy of directories within a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed according to the data storage and processing model. It contains descriptions of the space-time features of the datasets available for processing, their locations, as well as descriptions and run options of the software components for data analysis and visualization. The model and the metadata database together will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.
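
    The third approach, caching and reuse of previously obtained products, can be sketched as memoization keyed by the request signature; `compute_product` and the on-disk cache layout are hypothetical stand-ins for a node's actual processing software.

```python
import hashlib
import json
import os
import pickle

CACHE_DIR = "product_cache"

def cached_product(request, compute_product):
    """Return a processed product, recomputing only on a cache miss.
    `request` is any JSON-serializable description of dataset, region,
    period, and operation."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha256(
        json.dumps(request, sort_keys=True).encode()).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".pkl")
    if os.path.exists(path):                  # reuse a previous product
        with open(path, "rb") as f:
            return pickle.load(f)
    result = compute_product(request)         # otherwise compute and store
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result
```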

  16. 41 CFR 302-8.103 - Where may my HHG be stored?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... stored? 302-8.103 Section 302-8.103 Public Contracts and Property Management Federal Travel Regulation... Government-owned storage space; or (b) Suitable commercial storage space obtained by the Government if: (1) Government-owned space is not available, or (2) Commercial storage space is more economical or suitable...

  17. 41 CFR 302-8.103 - Where may my HHG be stored?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... stored? 302-8.103 Section 302-8.103 Public Contracts and Property Management Federal Travel Regulation... Government-owned storage space; or (b) Suitable commercial storage space obtained by the Government if: (1) Government-owned space is not available, or (2) Commercial storage space is more economical or suitable...

  18. 41 CFR 302-8.103 - Where may my HHG be stored?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... stored? 302-8.103 Section 302-8.103 Public Contracts and Property Management Federal Travel Regulation... Government-owned storage space; or (b) Suitable commercial storage space obtained by the Government if: (1) Government-owned space is not available, or (2) Commercial storage space is more economical or suitable...

  19. 41 CFR 302-8.103 - Where may my HHG be stored?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... stored? 302-8.103 Section 302-8.103 Public Contracts and Property Management Federal Travel Regulation... Government-owned storage space; or (b) Suitable commercial storage space obtained by the Government if: (1) Government-owned space is not available, or (2) Commercial storage space is more economical or suitable...

  20. 41 CFR 302-8.103 - Where may my HHG be stored?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... stored? 302-8.103 Section 302-8.103 Public Contracts and Property Management Federal Travel Regulation... Government-owned storage space; or (b) Suitable commercial storage space obtained by the Government if: (1) Government-owned space is not available, or (2) Commercial storage space is more economical or suitable...

  1. Low temperature storage container for transporting perishables to space station

    NASA Technical Reports Server (NTRS)

    Dean, William G (Inventor); Owen, James W. (Inventor)

    1988-01-01

    This invention is directed to the long-term storage of frozen and refrigerated food and biological samples carried by the space shuttle to the space station. A storage container is utilized which has a passive system, so that fluid/thermal and electrical interfaces with the logistics module are not required. The storage container comprises two units, each having an inner storage shell and an outer shell receiving the inner shell and spaced about it. The novelty appears to lie in the integration of thermally efficient cryogenic storage techniques with phase change materials, including the multilayer metalized-surface thin plastic film insulation and the vacuum between the shells. Additionally, the fiberglass shells with fiberglass honeycomb portions, and the foil lining of the space between the shells, combine to form a storage container which may keep food and biological samples at very low temperatures for very long periods of time using a passive system.

  2. Computer simulation of thermal and fluid systems for MIUS integration and subsystems test /MIST/ laboratory. [Modular Integrated Utility System

    NASA Technical Reports Server (NTRS)

    Rochelle, W. C.; Liu, D. K.; Nunnery, W. J., Jr.; Brandli, A. E.

    1975-01-01

    This paper describes the application of the SINDA (systems improved numerical differencing analyzer) computer program to simulate the operation of the NASA/JSC MIUS integration and subsystems test (MIST) laboratory. The MIST laboratory is designed to test the integration capability of the following subsystems of a modular integrated utility system (MIUS): (1) electric power generation, (2) space heating and cooling, (3) solid waste disposal, (4) potable water supply, and (5) waste water treatment. The SINDA/MIST computer model is designed to simulate the response of these subsystems to externally impressed loads. The computer model determines the amount of waste heat recovered from the prime mover exhaust, water jacket, and oil/aftercooler, and from the incinerator. In the model, this recovered waste heat is used to heat potable water, to provide space heating and absorption air conditioning, to sterilize waste water, and to charge thermal storage. The details of the thermal and fluid simulation of MIST, including the system configuration, the modes of operation modeled, the SINDA model characteristics, and the results of several analyses, are described.

  3. Star Identification Without Attitude Knowledge: Testing with X-Ray Timing Experiment Data

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor

    1997-01-01

    As the budget for the scientific exploration of space shrinks, the need for more autonomous spacecraft increases. For a spacecraft with a star tracker, the ability to determine attitude autonomously from a lost-in-space state requires the capability to identify the stars in the tracker's field of view. Although there have been efforts to produce autonomous star trackers which perform this function internally, many programs cannot afford these sensors. The author previously presented a method for identifying stars without a priori attitude knowledge, specifically targeted at onboard computers because it minimizes the necessary computer storage. The method had previously been tested only with simulated data. This paper provides results of star identification without a priori attitude knowledge using flight data from two 8-by-8-degree charge-coupled-device star trackers onboard the X-Ray Timing Experiment.
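
    The attitude-independent core that lost-in-space identification methods of this kind rest on is the inter-star angle: the angle between two observed stars does not depend on spacecraft attitude, so observed pairs can be matched against a precomputed catalog of pair angles. The sketch below illustrates that principle only; it is not the author's algorithm, and the small-angle centroid-to-vector conversion and the 0.01-degree tolerance are assumptions.

        import numpy as np

        def unit_vectors(centroids_deg):
            """Small-angle conversion of (x, y) detector centroids, given in
            degrees from boresight, to unit vectors in the tracker frame."""
            out = []
            for x, y in centroids_deg:
                v = np.array([np.radians(x), np.radians(y), 1.0])
                out.append(v / np.linalg.norm(v))
            return np.array(out)

        def pair_angles(vectors):
            """Inter-star angles (deg) for every pair; attitude-independent."""
            angles = {}
            for i in range(len(vectors)):
                for j in range(i + 1, len(vectors)):
                    c = np.clip(vectors[i] @ vectors[j], -1.0, 1.0)
                    angles[(i, j)] = np.degrees(np.arccos(c))
            return angles

        def match_pairs(observed, catalog, tol_deg=0.01):
            """Candidate catalog pairs for each observed pair, within tolerance."""
            return {obs: [cat for cat, a in catalog.items()
                          if abs(a - ang) <= tol_deg]
                    for obs, ang in observed.items()}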

  4. Atmospheric density models

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An atmospheric model developed by Jacchia, quite accurate but requiring a large amount of computer storage and execution time, was found to be ill-suited for the space shuttle onboard program. The development of a simple atmospheric density model to approximate the Jacchia model was therefore studied. The required characteristics, including variation with solar activity, diurnal variation, variation with geomagnetic activity, semiannual variation, and variation with height, were met by the new atmospheric density model.
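
    As a flavor of what such a simplified model can look like, the toy function below combines an exponential falloff with altitude and a diurnal bulge peaking in mid-afternoon. All coefficients are invented for illustration; this is in no way the flight model described.

        import numpy as np

        def density(h_km, local_solar_hour, rho0=3.8e-12, h0=400.0,
                    scale_height=58.0, diurnal_amp=0.3):
            """Toy thermospheric density (kg/m^3): exponential falloff with
            altitude plus a diurnal bulge peaking near 14:00 local solar time.
            All coefficients are illustrative, not fitted values."""
            base = rho0 * np.exp(-(h_km - h0) / scale_height)
            phase = 2.0 * np.pi * (local_solar_hour - 14.0) / 24.0
            return base * (1.0 + diurnal_amp * np.cos(phase))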

  5. Simulation of multistage turbine flows

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Mulac, Richard A.

    1987-01-01

    A flow model has been developed for analyzing multistage turbomachinery flows. This model, referred to as the average passage flow model, describes the time-averaged flow field within a typical passage of a blade row embedded in a multistage configuration. Computer resource requirements, supporting empirical modeling, formulation and code development, and multitasking and storage are discussed. Illustrations from simulations of the space shuttle main engine (SSME) fuel turbine performed to date are given.

  6. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  7. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  8. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, B.A.

    1999-07-27

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity. 12 figs.

  9. Method of locating related items in a geometric space for data mining

    DOEpatents

    Hendrickson, Bruce A.

    1999-01-01

    A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity.
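
    The core computation the patent describes, from similarity matrix through eigendecomposition to coordinates, can be sketched in a few lines of numpy. This is a generic classical-MDS-style illustration under our own scaling choice, not the patented method itself.

        import numpy as np

        def embed(similarity, dim=2):
            """Place items in a geometric space from a symmetric similarity
            matrix: coordinates come from the leading eigenvectors, scaled by
            the square roots of their eigenvalues."""
            S = np.asarray(similarity, dtype=float)
            S = 0.5 * (S + S.T)                    # enforce symmetry
            vals, vecs = np.linalg.eigh(S)         # ascending eigenvalues
            order = np.argsort(vals)[::-1][:dim]   # keep the largest ones
            return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

        # Toy usage: three items, where items 0 and 1 are strongly related,
        # so they land close together in the embedding.
        S = np.array([[1.0, 0.9, 0.1],
                      [0.9, 1.0, 0.2],
                      [0.1, 0.2, 1.0]])
        print(embed(S))  # row i = coordinates of item i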

  10. Analysis of Big Data from Space

    NASA Astrophysics Data System (ADS)

    Tan, J.; Osborne, B.

    2017-09-01

    Massive data have been collected through various space mission. To maximize the investment, the data need to be exploited to the fullest. In this paper, we address key topics on big data from space about the status and future development using the system engineering method. First, we summarized space data including operation data and mission data, on their sources, access way, characteristics of 5Vs and application models based on the concept of big data, as well as the challenges they faced in application. Second, we gave proposals on platform design and architecture to meet the demand and challenges on space data application. It has taken into account of features of space data and their application models. It emphasizes high scalability and flexibility in the aspects of storage, computing and data mining. Thirdly, we suggested typical and promising practices for space data application, that showed valuable methodologies for improving intelligence on space application, engineering, and science. Our work will give an interdisciplinary knowledge to space engineers and information engineers.

  11. Provenance based data integrity checking and verification in cloud environments

    PubMed Central

    Haq, Inam Ul; Jan, Bilal; Khan, Fakhri Alam; Ahmad, Awais

    2017-01-01

    Cloud computing is a recent trend in IT that moves computing and data away from desktop and hand-held devices into large-scale processing hubs and data centers. It has been proposed as an effective solution for data outsourcing and on-demand computing to control the rising cost of IT setups and management in enterprises. However, with Cloud platforms users’ data is moved into remotely located storage such that users lose control over their data. This unique feature of the Cloud raises many security and privacy challenges which need to be clearly understood and resolved. One of the important concerns that needs to be addressed is providing proof of data integrity, i.e., correctness of the user’s data stored in Cloud storage. The data in Clouds is physically not accessible to the users. Therefore, a mechanism is required through which users can check whether the integrity of their valuable data is maintained or compromised. For this purpose some methods have been proposed, such as mirroring, checksumming, and using third-party auditors, amongst others. However, these methods either use extra storage space by maintaining multiple copies of data or require the presence of a third-party verifier. In this paper, we address the problem of proving data integrity in Cloud computing by proposing a scheme through which users are able to check the integrity of their data stored in Clouds. In addition, users can track any violation of data integrity that occurs. For this purpose, we utilize a relatively new concept in Cloud computing called “Data Provenance”. Our scheme reduces the need for third-party services, additional hardware support, and the replication of data items on the client side for integrity checking. PMID:28545151

  12. Provenance based data integrity checking and verification in cloud environments.

    PubMed

    Imran, Muhammad; Hlavacs, Helmut; Haq, Inam Ul; Jan, Bilal; Khan, Fakhri Alam; Ahmad, Awais

    2017-01-01

    Cloud computing is a recent trend in IT that moves computing and data away from desktop and hand-held devices into large-scale processing hubs and data centers. It has been proposed as an effective solution for data outsourcing and on-demand computing to control the rising cost of IT setups and management in enterprises. However, with Cloud platforms users' data is moved into remotely located storage such that users lose control over their data. This unique feature of the Cloud raises many security and privacy challenges which need to be clearly understood and resolved. One of the important concerns that needs to be addressed is providing proof of data integrity, i.e., correctness of the user's data stored in Cloud storage. The data in Clouds is physically not accessible to the users. Therefore, a mechanism is required through which users can check whether the integrity of their valuable data is maintained or compromised. For this purpose some methods have been proposed, such as mirroring, checksumming, and using third-party auditors, amongst others. However, these methods either use extra storage space by maintaining multiple copies of data or require the presence of a third-party verifier. In this paper, we address the problem of proving data integrity in Cloud computing by proposing a scheme through which users are able to check the integrity of their data stored in Clouds. In addition, users can track any violation of data integrity that occurs. For this purpose, we utilize a relatively new concept in Cloud computing called "Data Provenance". Our scheme reduces the need for third-party services, additional hardware support, and the replication of data items on the client side for integrity checking.
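
    One common way to make provenance records tamper-evident, sketched below, is a hash chain in which each record commits to the digest of its predecessor, so any later modification of the data's history fails verification. This is a generic illustration, not the authors' exact scheme.

        import hashlib, json

        def chain_append(chain, record):
            """Append a provenance record, linked to the previous entry's hash."""
            prev = chain[-1]["digest"] if chain else "0" * 64
            body = json.dumps(record, sort_keys=True)
            digest = hashlib.sha256((prev + body).encode()).hexdigest()
            chain.append({"record": record, "prev": prev, "digest": digest})

        def chain_verify(chain):
            """Recompute every link; tampering anywhere changes some digest."""
            prev = "0" * 64
            for entry in chain:
                body = json.dumps(entry["record"], sort_keys=True)
                if entry["prev"] != prev:
                    return False
                if hashlib.sha256((prev + body).encode()).hexdigest() != entry["digest"]:
                    return False
                prev = entry["digest"]
            return True

        chain = []
        chain_append(chain, {"op": "upload", "object": "report.csv", "by": "alice"})
        chain_append(chain, {"op": "update", "object": "report.csv", "by": "bob"})
        assert chain_verify(chain)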

  13. Development of a software interface for optical disk archival storage for a new life sciences flight experiments computer

    NASA Technical Reports Server (NTRS)

    Bartram, Peter N.

    1989-01-01

    The current Life Sciences Laboratory Equipment (LSLE) microcomputer for life sciences experiment data acquisition is now obsolete. Among the weaknesses of the current microcomputer are small memory size, relatively slow analog data sampling rates, and the lack of a bulk data storage device. While life science investigators normally prefer data to be transmitted to Earth as it is taken, this is not always possible. No down-link exists for experiments performed in the Shuttle middeck region. One important aspect of a replacement microcomputer is provision for in-flight storage of experimental data. The Write Once, Read Many (WORM) optical disk was studied because of its high storage density, data integrity, and the availability of a space-qualified unit. In keeping with the goals for a replacement microcomputer based upon commercially available components and standard interfaces, the system studied includes a Small Computer System Interface (SCSI) for interfacing the WORM drive. The system itself is designed around the STD bus, using readily available boards. Configurations examined were: (1) master processor board and slave processor board with the SCSI interface; (2) master processor with SCSI interface; (3) master processor with SCSI and Direct Memory Access (DMA); (4) master processor controlling a separate STD bus SCSI board; and (5) master processor controlling a separate STD bus SCSI board with DMA.

  14. Novel systems and methods for quantum communication, quantum computation, and quantum simulation

    NASA Astrophysics Data System (ADS)

    Gorshkov, Alexey Vyacheslavovich

    Precise control over quantum systems can enable the realization of fascinating applications such as powerful computers, secure communication devices, and simulators that can elucidate the physics of complex condensed matter systems. However, the fragility of quantum effects makes it very difficult to harness the power of quantum mechanics. In this thesis, we present novel systems and tools for gaining fundamental insights into the complex quantum world and for bringing practical applications of quantum mechanics closer to reality. We first optimize and show equivalence between a wide range of techniques for storage of photons in atomic ensembles. We describe experiments demonstrating the potential of our optimization algorithms for quantum communication and computation applications. Next, we combine the technique of photon storage with strong atom-atom interactions to propose a robust protocol for implementing the two-qubit photonic phase gate, which is an important ingredient in many quantum computation and communication tasks. In contrast to photon storage, many quantum computation and simulation applications require individual addressing of closely-spaced atoms, ions, quantum dots, or solid state defects. To meet this requirement, we propose a method for coherent optical far-field manipulation of quantum systems with a resolution that is not limited by the wavelength of radiation. While alkali atoms are currently the system of choice for photon storage and many other applications, we develop new methods for quantum information processing and quantum simulation with ultracold alkaline-earth atoms in optical lattices. We show how multiple qubits can be encoded in individual alkaline-earth atoms and harnessed for quantum computing and precision measurements applications. We also demonstrate that alkaline-earth atoms can be used to simulate highly symmetric systems exhibiting spin-orbital interactions and capable of providing valuable insights into strongly correlated physics of transition metal oxides, heavy fermion materials, and spin liquid phases. While ultracold atoms typically exhibit only short-range interactions, numerous exotic phenomena and practical applications require long-range interactions, which can be achieved with ultracold polar molecules. We demonstrate the possibility to engineer a repulsive interaction between polar molecules, which allows for the suppression of inelastic collisions, efficient evaporative cooling, and the creation of novel phases of polar molecules.

  15. Mass Storage Systems.

    ERIC Educational Resources Information Center

    Ranade, Sanjay; Schraeder, Jeff

    1991-01-01

    Presents an overview of the mass storage market and discusses mass storage systems as part of computer networks. Systems for personal computers, workstations, minicomputers, and mainframe computers are described; file servers are explained; system integration issues are raised; and future possibilities are suggested. (LRW)

  16. Using dCache in Archiving Systems oriented to Earth Observation

    NASA Astrophysics Data System (ADS)

    Garcia Gil, I.; Perez Moreno, R.; Perez Navarro, O.; Platania, V.; Ozerov, D.; Leone, R.

    2012-04-01

    The objective of the LAST activity (Long term data Archive Study on new Technologies) is to perform an independent study on best practices and an assessment of different archiving technologies mature for operation in the short and mid-term time frame, or available in the long term, with emphasis on technologies best suited to satisfy the requirements of ESA, LTDP and other European and Canadian EO partners in terms of digital information preservation and data accessibility and exploitation. During the last phase of the project, several archiving solutions were tested in order to evaluate their suitability; in particular dCache, which aims to provide a file-system tree view of the data repository, exchanging data with backend (tertiary) storage systems and providing space management, pool attraction, dataset replication, hot-spot determination, and recovery from disk or node failures. Connected to a tertiary storage system, dCache simulates unlimited direct-access storage space; data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. dCache was created to meet the requirements of large computer centers and universities with large amounts of data, which pooled their efforts and founded EMI (European Middleware Initiative). At present, dCache is mature enough to be deployed, being used by several research centers of relevance (e.g. the LHC, storing up to 50 TB/day). This solution has not been used so far in Earth Observation, and the results of the study are summarized in this article, focusing on its capacity, over a simulated environment, to meet the ESA requirements for geographically distributed storage. The challenge of a geographically distributed storage system can be summarized as providing maximum quality for storage and dissemination services at minimum cost.

  17. The raw disk I/O performance of Compaq StorageWorks RAID arrays under Tru64 UNIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselton, A C

    2000-10-19

    We report on the raw disk I/O performance of a set of Compaq StorageWorks RAID arrays connected to our cluster of Compaq ES40 computers via Fibre Channel. The best cumulative peak sustained data rate is 117 MB/s per node for reads and 77 MB/s per node for writes. This value occurs for a configuration in which a node has two Fibre Channel interfaces to a switch, which in turn has two connections to each of two Compaq StorageWorks RAID arrays. Each RAID array has two HSG80 RAID controllers controlling (together) two 5+P RAID chains. A 10% more space-efficient arrangement using a single 11+P RAID chain in place of the two 5+P chains is 25% slower for reads and 40% slower for writes.

  18. High Storage Efficiency and Large Fractional Delay of EIT-Based Memory

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Hsin; Lee, Meng-Jung; Wang, I.-Chung; Du, Shengwang; Chen, Yong-Fan; Chen, Ying-Cheng; Yu, Ite

    2013-05-01

    In long-distance quantum communication and optical quantum computation, an efficient and long-lived quantum memory is an important component. We first demonstrated experimentally that a time-space-reversing method plus an optimum pulse shape can improve the storage efficiency (SE) of light pulses to 78% in a cold atomic medium, based on the effect of electromagnetically induced transparency (EIT). We obtain a large fractional delay of 74 at 50% SE, which is the best record so far. The measured classical fidelity of the recalled pulse is higher than 90% and nearly independent of the storage time, implying that the optical memory maintains excellent phase coherence. Our results suggest that this approach may be readily applied to single-photon quantum states owing to the quantum nature of the EIT light-matter interaction. This study advances EIT-based quantum memory toward practical quantum information applications.

  19. A guide to the National Space Science Data Center

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This is the second edition of a document that was published to acquaint space and Earth research scientists with an overview of the services offered by the NSSDC. As previously stated, the NSSDC was established by NASA to be the long term archive for data from its space missions. However, the NSSDC has evolved into an organization that provides a multitude of services for scientists throughout the world. Brief articles are presented which discuss these services. At the end of each article is the name, address, and telephone number of the person to contact for additional information. Online Information and Data Systems, Electronic Access, Offline Data Archive, Value Added Services, Mass Storage Activities, and Computer Science Research are all detailed.

  20. Reusable module for the storage, transportation, and supply of multiple propellants in a space environment

    NASA Technical Reports Server (NTRS)

    Mazanek, Daniel D. (Inventor); Mankins, John C. (Inventor)

    2004-01-01

    A space module has an outer structure designed for traveling in space, a docking mechanism for facilitating a docking operation therewith in space, a first storage system storing a first propellant that burns as a result of a chemical reaction therein, a second storage system storing a second propellant that burns as a result of electrical energy being added thereto, and a bi-directional transfer interface coupled to each of the first and second storage systems to transfer the first and second propellants into and out thereof. The space module can be part of a propellant supply architecture that includes at least two of the space modules placed in an orbit in space.

  1. Analysis and Research on Spatial Data Storage Model Based on Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Hu, Yong

    2017-12-01

    In this paper, the data processing and storage characteristics of cloud computing are analyzed and studied. On this basis, a cloud computing data storage model based on a BP (backpropagation) neural network is proposed. In this data storage model, the server cluster is chosen according to the attributes of the data, yielding a spatial data storage model with a load-balancing function that has demonstrable feasibility and practical advantages.
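
    A minimal sketch of the paper's central ingredient, under invented features and a synthetic labeling: a small backpropagation-trained network that maps per-object attributes (size, access frequency, spatial extent) to one of three server clusters.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic data: 3 attributes per object, 3 cluster labels.
        X = rng.random((200, 3))
        y = (X @ np.array([2.0, -1.0, 1.5]) > 1.2).astype(int) + (X[:, 1] > 0.8)
        Y = np.eye(3)[np.clip(y, 0, 2)]

        W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
        W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)

        def forward(X):
            """One hidden tanh layer, softmax output over the 3 clusters."""
            h = np.tanh(X @ W1 + b1)
            z = h @ W2 + b2
            p = np.exp(z - z.max(axis=1, keepdims=True))
            return h, p / p.sum(axis=1, keepdims=True)

        lr = 0.5
        for _ in range(500):                 # plain backpropagation steps
            h, p = forward(X)
            dz = (p - Y) / len(X)            # softmax cross-entropy gradient
            dW2 = h.T @ dz; db2 = dz.sum(0)
            dh = dz @ W2.T * (1 - h ** 2)    # tanh derivative
            dW1 = X.T @ dh; db1 = dh.sum(0)
            W2 -= lr * dW2; b2 -= lr * db2
            W1 -= lr * dW1; b1 -= lr * db1

        _, p = forward(X)
        print("training accuracy:", (p.argmax(1) == Y.argmax(1)).mean())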

  2. Robust holographic storage system design.

    PubMed

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, which are all exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can readily provide the demanded large storage capability. In particular, holographic storage systems without a rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities can cause severe problems that prevent reading of all contents of the holographic memory: the turn-off failure mode of a laser array. This paper therefore proposes a recovery method for the turn-off failure mode of a laser array in a holographic storage system, and describes results of an experimental demonstration. © 2011 Optical Society of America

  3. Study of the longitudinal space charge compensation and longitudinal instability of the ferrite inductive inserts in the Los Alamos Proton Storage Ring

    NASA Astrophysics Data System (ADS)

    Beltran, Chris

    Future high intensity synchrotrons will have a large space charge effect. It has been demonstrated in the Proton Storage Ring (PSR) at the Los Alamos National Laboratory (LANL) that ferrite inductive inserts can be used to compensate for the longitudinal space charge effect. However, simply installing ferrite inductors in the PSR led to longitudinal instabilities that were not tolerable. It was proposed that heating the ferrite would change the material properties in such a way as to reduce the instability. This proposal was tested in the PSR, and found to be true. This dissertation investigates and describes the complex permeability of the ferrite at room temperature and at an elevated temperature. The derived complex permeability is then used to obtain an impedance at the two temperatures. The impedance is used to determine the amount of space charge compensation supplied by the inductors and predict the growth time and frequency range of the longitudinal instability. The impedance is verified by comparing the experimental growth time and frequency range of the longitudinal instability to theoretical and computer simulated growth times and frequency ranges of the longitudinal instability. Lastly, an approach to mitigating the longitudinal instability that does not involve heating the ferrite is explored.

  4. DORMAN computer program (study 2.5). Volume 1: Executive summary. [development of data bank for computerized information storage of NASA programs

    NASA Technical Reports Server (NTRS)

    Stricker, L. T.

    1973-01-01

    The DORCA Applications study has been directed at development of a data bank management computer program identified as DORMAN. Because of the size of the DORCA data files and the manipulations required on that data to support analyses with the DORCA program, automated data techniques to replace time-consuming manual input generation are required. The Dynamic Operations Requirements and Cost Analysis (DORCA) program was developed for use by NASA in planning future space programs. Both programs are designed for implementation on the UNIVAC 1108 computing system. The purpose of this Executive Summary Report is to define for the NASA management the basic functions of the DORMAN program and its capabilities.

  5. Sampling from a Discrete Distribution While Preserving Monotonicity.

    DTIC Science & Technology

    1982-02-01

    in a table beforehand, this procedure, known as the inverse transform method, requires n storage spaces and EX comparisons on average, which may prove...limitations that deserve attention: a. In general, the alias method does not preserve a monotone relationship between U and X as does the inverse transform method...uses the inverse transform approach but with more information computed beforehand, as in the alias method. The proposed method is not new having been
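
    The technique named in this record is standard enough to sketch: tabulate the CDF once (the n storage spaces mentioned), then map each uniform draw U to X by search, which keeps X a monotone function of U, unlike the alias method. A binary search, as below, also caps the cost at about log2 n comparisons rather than the E[X] average of a linear scan.

        import bisect, itertools, random

        def make_sampler(probabilities):
            """Inverse transform sampling from a discrete distribution.
            Precomputes the CDF (n storage spaces); each draw is a binary
            search, and X is a monotone function of U."""
            cdf = list(itertools.accumulate(probabilities))
            cdf[-1] = 1.0  # guard against floating-point shortfall
            def sample(u=None):
                u = random.random() if u is None else u
                return bisect.bisect_left(cdf, u)
            return sample

        draw = make_sampler([0.2, 0.5, 0.3])
        assert draw(0.1) <= draw(0.6) <= draw(0.95)  # monotone in U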

  6. Analysis and Design of Cryogenic Pressure Vessels for Automotive Hydrogen Storage

    NASA Astrophysics Data System (ADS)

    Espinosa-Loza, Francisco Javier

    Cryogenic pressure vessels maximize hydrogen storage density by combining the high pressure (350-700 bar) typical of today's composite pressure vessels with the cryogenic temperature (as low as 25 K) typical of low pressure liquid hydrogen vessels. Cryogenic pressure vessels comprise a high-pressure inner vessel made of carbon fiber-coated metal (similar to those used for storage of compressed gas), a vacuum space filled with numerous sheets of highly reflective metalized plastic (for high performance thermal insulation), and a metallic outer jacket. High density of hydrogen storage is key to practical hydrogen-fueled transportation by enabling (1) long-range (500+ km) transportation with high capacity vessels that fit within available spaces in the vehicle, and (2) reduced cost per kilogram of hydrogen stored through reduced need for expensive structural material (carbon fiber composite) necessary to make the vessel. Low temperature of storage also leads to reduced expansion energy (by an order of magnitude or more vs. ambient temperature compressed gas storage), potentially providing important safety advantages. All this is accomplished while simultaneously avoiding fuel venting typical of cryogenic vessels for all practical use scenarios. This dissertation describes the work necessary for developing and demonstrating successive generations of cryogenic pressure vessels demonstrated at Lawrence Livermore National Laboratory. The work included (1) conceptual design, (2) detailed system design (3) structural analysis of cryogenic pressure vessels, (4) thermal analysis of heat transfer through cryogenic supports and vacuum multilayer insulation, and (5) experimental demonstration. Aside from succeeding in demonstrating a hydrogen storage approach that has established all the world records for hydrogen storage on vehicles (longest driving range, maximum hydrogen storage density, and maximum containment of cryogenic hydrogen without venting), the work also demonstrated a methodology for computationally efficient detailed modeling of cryogenic pressure vessels. The work continues with support of the US Department of Energy to demonstrate a new generation of cryogenic vessels anticipated to improve on the hydrogen storage performance figures previously imposed in this project. The author looks forward to further contributing to a future of long-range, inexpensive, and safe zero emissions transportation.

  7. 14 CFR 420.67 - Storage or handling of liquid propellants.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Storage or handling of liquid propellants. 420.67 Section 420.67 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... Licensee § 420.67 Storage or handling of liquid propellants. (a) For an explosive hazard facility where...

  8. 14 CFR 420.67 - Storage or handling of liquid propellants.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Storage or handling of liquid propellants. 420.67 Section 420.67 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... Licensee § 420.67 Storage or handling of liquid propellants. (a) For an explosive hazard facility where...

  9. 14 CFR 420.67 - Storage or handling of liquid propellants.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Storage or handling of liquid propellants. 420.67 Section 420.67 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION, FEDERAL AVIATION... Licensee § 420.67 Storage or handling of liquid propellants. (a) For an explosive hazard facility where...

  10. Electrodynamic tether system study

    NASA Technical Reports Server (NTRS)

    1987-01-01

    The purpose of this program is to define an Electrodynamic Tether System (ETS) that could be erected from the space station and/or platforms to function as an energy storage device. A schematic representation of the ETS concept mounted on the space station is presented. In addition to the hardware design and configuration efforts, studies are also documented involving simulations of the Earth's magnetic field and the effects this has on overall system efficiency calculations. Also discussed are some preliminary computer simulations of orbit perturbations caused by the cyclic day/night operations of the ETS. System cost estimates, an outline of future development testing for the ETS, and conclusions and recommendations are also provided.

  11. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. Besides, it benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new search strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
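
    The efficiency claim about Hamming distance is easy to see in code: with codes packed into machine words, the distance is one XOR plus a population count. The sketch below is generic and not part of the FSB algorithm itself.

        def hamming(a: int, b: int) -> int:
            """Hamming distance between two binary codes stored as ints:
            XOR marks the differing bits, popcount counts them."""
            return (a ^ b).bit_count()  # Python >= 3.10; else bin(a ^ b).count("1")

        # Toy 8-bit codes: nearest-neighbor search reduces to integer ops.
        q  = 0b1011_0010
        db = [0b1011_0000, 0b0100_1101, 0b1111_0010]
        print(min(db, key=lambda c: hamming(q, c)))  # code nearest the query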

  12. Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Blackwell, Kim; Blasso, Len (Editor); Lipscomb, Ann (Editor)

    1991-01-01

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  13. Remote direct memory access

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.

    2012-12-11

    Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request to send message, the request to send message specifying data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request to send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.

  14. Solar-Terrestrial and Astronomical Research Network (STAR-Network) - A Meaningful Practice of New Cyberinfrastructure on Space Science

    NASA Astrophysics Data System (ADS)

    Hu, X.; Zou, Z.

    2017-12-01

    For the next decades, comprehensive big data application environment is the dominant direction of cyberinfrastructure development on space science. To make the concept of such BIG cyberinfrastructure (e.g. Digital Space) a reality, these aspects of capability should be focused on and integrated, which includes science data system, digital space engine, big data application (tools and models) and the IT infrastructure. In the past few years, CAS Chinese Space Science Data Center (CSSDC) has made a helpful attempt in this direction. A cloud-enabled virtual research platform on space science, called Solar-Terrestrial and Astronomical Research Network (STAR-Network), has been developed to serve the full lifecycle of space science missions and research activities. It integrated a wide range of disciplinary and interdisciplinary resources, to provide science-problem-oriented data retrieval and query service, collaborative mission demonstration service, mission operation supporting service, space weather computing and Analysis service and other self-help service. This platform is supported by persistent infrastructure, including cloud storage, cloud computing, supercomputing and so on. Different variety of resource are interconnected: the science data can be displayed on the browser by visualization tools, the data analysis tools and physical models can be drived by the applicable science data, the computing results can be saved on the cloud, for example. So far, STAR-Network has served a series of space science mission in China, involving Strategic Pioneer Program on Space Science (this program has invested some space science satellite as DAMPE, HXMT, QUESS, and more satellite will be launched around 2020) and Meridian Space Weather Monitor Project. Scientists have obtained some new findings by using the science data from these missions with STAR-Network's contribution. We are confident that STAR-Network is an exciting practice of new cyberinfrastructure architecture on space science.

  15. Radiation Shielding Materials Containing Hydrogen, Boron, and Nitrogen: Systematic Computational and Experimental Study. Phase I

    NASA Technical Reports Server (NTRS)

    Thibeault, Sheila A.; Fay, Catharine C.; Lowther, Sharon E.; Earle, Kevin D.; Sauti, Godfrey; Kang, Jin Ho; Park, Cheol; McMullen, Amelia M.

    2012-01-01

    The key objectives of this study are to investigate, both computationally and experimentally, which forms, compositions, and layerings of hydrogen, boron, and nitrogen containing materials will offer the greatest shielding in the most structurally robust combination against galactic cosmic radiation (GCR), secondary neutrons, and solar energetic particles (SEP). The objectives and expected significance of this research are to develop a space radiation shielding materials system that has high efficacy for shielding radiation and that also has high strength for load bearing primary structures. Such a materials system does not yet exist. The boron nitride nanotube (BNNT) can theoretically be processed into structural BNNT and used for load bearing structures. Furthermore, the BNNT can be incorporated into high hydrogen polymers and the combination used as matrix reinforcement for structural composites. BNNT's molecular structure is attractive for hydrogen storage and hydrogenation. There are two methods or techniques for introducing hydrogen into BNNT: (1) hydrogen storage in BNNT, and (2) hydrogenation of BNNT (hydrogenated BNNT). In the hydrogen storage method, nanotubes are favored to store hydrogen over particles and sheets because they have much larger surface areas and higher hydrogen binding energy. The carbon nanotube (CNT) and BNNT have been studied as potentially outstanding hydrogen storage materials since 1997. Our study of hydrogen storage in BNNT - as a function of temperature, pressure, and hydrogen gas concentration - will be performed with a hydrogen storage chamber equipped with a hydrogen generator. The second method of introducing hydrogen into BNNT is hydrogenation of BNNT, where hydrogen is covalently bonded onto boron, nitrogen, or both. Hydrogenation of BN and BNNT has been studied theoretically. Hyper-hydrogenated BNNT has been theoretically predicted with hydrogen coverage up to 100% of the individual atoms. This is a higher hydrogen content than possible with hydrogen storage; however, a systematic experimental hydrogenation study has not been reported. A combination of the two approaches may be explored to provide yet higher hydrogen content. The hydrogen containing BNNT produced in our study will be characterized for hydrogen content and thermal stability in simulated space service environments. These new materials systems will be tested for their radiation shielding effectiveness against high energy protons and high energy heavy ions at the HIMAC facility in Japan, or a comparable facility. These high energy particles simulate exposure to SEP and GCR environments. They will also be tested in the LaRC Neutron Exposure Laboratory for their neutron shielding effectiveness, an attribute that determines their capability to shield against the secondary neutrons found inside structures and on lunar and planetary surfaces. The potential significance is to produce a radiation protection enabling technology for future exploration missions. Crew on deep space human exploration missions greater than approximately 90 days cannot remain below current crew Permissible Exposure Limits without shielding and/or biological countermeasures. The intent of this research is to bring the Agency closer to extending space missions beyond the 90-day limit, with 1 year as a long-term goal. We are advocating a systems solution with a structural materials component. Our intent is to develop the best materials system for that materials component. 
In this Phase I study, we have shown, computationally, that hydrogen containing BNNT is effective for shielding against GCR, SEP, and neutrons over a wide range of energies. This is why we are focusing on hydrogen containing BNNT as an innovative advanced concept. In our future work, we plan to demonstrate, experimentally, that hydrogen, boron, and nitrogen based materials can provide mechanically strong, thermally stable, structural materials with effective radiation shielding against GCR, SEP, and neutrons.

  16. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
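
    The strided shared-file layout the patent describes reduces to simple offset arithmetic: with N writers and fixed-size chunks, process r places its k-th chunk at offset (kN + r) times the chunk size, so writers never overlap and need no coordination. A sketch with hypothetical parameters:

        def strided_offset(rank, chunk_index, nprocs, chunk_bytes):
            """File offset of the chunk_index-th chunk of process `rank` when
            N processes write one shared checkpoint file in a strided layout."""
            return (chunk_index * nprocs + rank) * chunk_bytes

        # 4 processes, 1 MiB chunks: rank 2's first two chunks land at
        # offsets 2 MiB and 6 MiB; no two processes ever overlap.
        for k in range(2):
            print(strided_offset(rank=2, chunk_index=k, nprocs=4,
                                 chunk_bytes=1 << 20))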

  17. Experimental Results from the Thermal Energy Storage-2 (TES-2) Flight Experiment

    NASA Technical Reports Server (NTRS)

    Tolbert, Carol

    2000-01-01

    Thermal Energy Storage-2 (TES-2) is a flight experiment that flew on the Space Shuttle Endeavour (STS-72) in January 1996. TES-2 originally flew with TES-1 as part of the OAST-2 Hitchhiker payload on the Space Shuttle Columbia (STS-62) in early 1994. The two experiments, TES-1 and TES-2, were identical except for the fluoride salts to be characterized: TES-1 provided data on lithium fluoride (LiF), and TES-2 provided data on a fluoride eutectic (LiF/CaF2). Each experiment was a complex autonomous payload in a Get-Away-Special payload canister. TES-1 operated flawlessly for 22 hr; its results were reported in a paper entitled "Effect of Microgravity on Materials Undergoing Melting and Freezing - The TES Experiment," by David Namkoong et al. A software failure in TES-2 caused its shutdown after 4 sec of operation. TES-1 and TES-2 were the first experiments in a four-experiment suite designed to provide data for understanding the long-duration microgravity behavior of thermal energy storage salts that undergo repeated melting and freezing. Such data had never been obtained before and have direct application to the development of space-based solar dynamic (SD) power systems. These power systems store energy in a thermal energy storage salt such as lithium fluoride or a eutectic of lithium fluoride/calcium difluoride. The stored energy is extracted during the shade portion of the orbit, enabling the solar dynamic power system to provide constant electrical power over the entire orbit. Analytical computer codes were developed for predicting the performance of a space-based solar dynamic power system. Experimental verification of the analytical predictions was needed before the analytical results could be used for future space power design applications. The four TES flight experiments were to be used to obtain the needed experimental data. This paper addresses the flight results from the first and second experiments, TES-1 and TES-2, in comparison with the predicted results from the Thermal Energy Storage Simulation (TESSIM) analytical computer code. An analysis of the TES-2 data was conducted by Professor Mounir Ibrahim of Cleveland State University. TESSIM validation was based on two types of results: the temperature history of various points on the containment vessel, and the TES material distribution within the vessel upon return from flight. The TESSIM prediction showed close agreement with the flight data. The distribution of the TES material within the vessel was obtained by a tomographic imaging process; the frozen TES material was concentrated toward the colder end of the canister, and the TESSIM prediction indicated a similar pattern. With agreement between TESSIM and the flight data, a computerized representation was produced to show the movement and behavior of the void during the entire melting and freezing cycle.

  18. Telemetry data storage systems technology for the Space Station Freedom era

    NASA Technical Reports Server (NTRS)

    Dalton, John T.

    1989-01-01

    This paper examines the requirements and functions of the telemetry data recording and storage systems and the data-storage-system technology projected for the Space Station, with particular attention given to the Space Optical Disk Recorder, an on-board storage subsystem based on 160-gigabit erasable optical disk units each capable of operating at 300 megabits per second. Consideration is also given to storage systems for ground transport recording, which include systems for data capture, buffering, processing, and delivery on the ground. These can be categorized as first-in-first-out storage, fast random-access storage, and slow-access storage with staging. Based on projected mission manifests and data rates, worst-case requirements were developed for these three storage architecture functions. The results of the analysis are presented.

  19. Performance of the engineering analysis and data system 2 common file system

    NASA Technical Reports Server (NTRS)

    Debrunner, Linda S.

    1993-01-01

    The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large-scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in an RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users had been migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II, including both disk and tape storage; UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs, which have been corrected; other problems were associated with hardware. However, the use of NFS in the UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper first describes EADS II with emphasis on the CFS. Then a discussion of mass storage systems is presented, and methods of measuring the performance of the Common File System are outlined. Finally, areas for further study are identified and conclusions are drawn.

  20. Shuttle orbiter storage locker system: A study

    NASA Technical Reports Server (NTRS)

    Butler, D. R.; Schowalter, D. T.; Weil, D. C.

    1973-01-01

    A study has been made to assure maximum utility of storage space and crew member facilities in the planned space shuttle orbiter. Techniques discussed in this study should be of interest to designers of storage facilities in which space is at a premium and vibration is severe. Manufacturers of boats, campers, house trailers, and aircraft could also benefit from it.

  1. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, disaster response requires rapid access to large data volumes, substantial storage space, and high-performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation covers work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open-source process code developed on a local prototype platform, and then transitioning this code, with its associated environment requirements, onto an analogous but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
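
    The two indices named are one-line array operations, which is part of why they port so easily to a cloud platform. A numpy sketch (the band arrays and the epsilon guard against division by zero are our assumptions):

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            """Normalized Difference Vegetation Index: (NIR - Red)/(NIR + Red)."""
            return (nir - red) / (nir + red + eps)

        def ndmi(nir, swir, eps=1e-9):
            """Normalized Difference Moisture Index: (NIR - SWIR)/(NIR + SWIR)."""
            return (nir - swir) / (nir + swir + eps)

        # Toy 2x2 scene; real inputs would be full image bands read from disk.
        nir = np.array([[0.8, 0.6], [0.4, 0.7]])
        red = np.array([[0.2, 0.3], [0.3, 0.1]])
        print(ndvi(nir, red))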

  2. A 'two-tank' seasonal storage concept for solar space heating of buildings

    NASA Astrophysics Data System (ADS)

    Cha, B. K.; Connor, D. W.; Mueller, R. O.

    This paper presents an analysis of a novel 'two-tank' water storage system, consisting of a large primary water tank for seasonal storage of solar energy plus a much smaller secondary water tank for storage of solar energy collected during the heating season. The system offers the advantage of high collection efficiency during the early stages of the heating season, a period when the temperature of the primary tank is generally high. By preferentially drawing energy from the small secondary tank to meet the load, the secondary tank's temperature can be kept well below that of the larger primary tank, thereby providing a lower-temperature source for collector inlet fluid. The resulting improvement in annual system efficiency through the addition of a small secondary tank is found to be substantial: for the site considered in the paper (Madison, Wisconsin), the relative percentage gain in annual performance is in the range of 10 to 20%. A simple computer model permits accurate hour-by-hour transient simulation of thermal performance over a yearly cycle. The paper presents results of detailed simulations of collector and storage sizing and design trade-offs for solar energy systems supplying 90% to 100% of the annual heating load requirement.
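
    The hour-by-hour transient idea can be sketched very compactly. The toy model below is only an illustration of the mechanism (the tank capacities, collector coefficients, and charge/draw policy are assumed values, not the paper's validated model); it shows why a cool secondary tank helps, since the collector efficiency term falls as inlet temperature rises:

    ```python
    # Toy hourly energy balance for the two-tank concept.
    C_PRI, C_SEC = 4.2e9, 2.1e8    # tank heat capacities [J/K] (assumed sizes)
    ETA0, U_LOSS = 0.75, 5.0       # collector optical efficiency, loss coeff. [W/m^2/K]
    AREA, T_AMB = 100.0, 0.0       # collector area [m^2], ambient temperature [C]
    T_USEFUL = 30.0                # minimum tank temperature able to serve the load [C]

    def collector_gain(irradiance, t_inlet):
        """Flat-plate-style gain [W]: efficiency falls as inlet temperature rises."""
        return max(AREA * (ETA0 * irradiance - U_LOSS * (t_inlet - T_AMB)), 0.0)

    def step_hour(t_pri, t_sec, irradiance, load_w):
        # Collect into the secondary tank, whose low temperature keeps the
        # collector inlet cool and the collection efficiency high.
        t_sec += collector_gain(irradiance, t_sec) * 3600.0 / C_SEC
        # Draw the load preferentially from the secondary tank (keeping it cool);
        # fall back to the seasonal primary store when the secondary is depleted.
        if t_sec > T_USEFUL:
            t_sec -= load_w * 3600.0 / C_SEC
        else:
            t_pri -= load_w * 3600.0 / C_PRI
        return t_pri, t_sec
    ```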

  3. Designing mixed metal halide ammines for ammonia storage using density functional theory and genetic algorithms.

    PubMed

    Jensen, Peter Bjerre; Lysgaard, Steen; Quaade, Ulrich J; Vegge, Tejs

    2014-09-28

    Metal halide ammines have great potential as a future, high-density energy carrier in vehicles. The materials known so far, e.g. Mg(NH3)6Cl2 and Sr(NH3)8Cl2, are not suitable for automotive fuel cell applications, because the release of ammonia is a multi-step reaction requiring too much heat to be supplied, making the total efficiency lower. Here, we apply density functional theory (DFT) calculations to predict new mixed metal halide ammines with improved storage capacities and the ability to release the stored ammonia in one step, at temperatures suitable for system integration with polymer electrolyte membrane fuel cells (PEMFC). We use genetic algorithms (GAs) to search for materials containing up to three different metals (alkaline-earth, 3d and 4d) and two different halides (Cl, Br and I) - almost 27,000 combinations - and have identified novel mixtures with significantly improved storage capacities. The size of the search space and the chosen fitness function make it possible to verify that the found candidates are the best possible candidates in the search space, proving that the GA implementation is ideal for this kind of computational materials design, requiring calculations on less than two percent of the candidates to identify the global optimum.
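
    The search loop itself is a standard genetic algorithm. A minimal sketch follows (the composition encoding, operators, and the stand-in fitness function are illustrative assumptions; in the paper, fitness is computed from DFT-predicted storage capacity and one-step release temperature):

    ```python
    import random

    METALS = ["Mg", "Ca", "Sr", "Ba", "Mn", "Zn"]   # illustrative subset
    HALIDES = ["Cl", "Br", "I"]

    # Stand-in for the DFT evaluation: a fixed random score per element.
    SCORE = {el: random.random() for el in METALS + HALIDES}

    def random_candidate():
        """A mixture of up to three metals and two halides."""
        return tuple(random.sample(METALS, 3)), tuple(random.sample(HALIDES, 2))

    def fitness(cand):
        metals, halides = cand
        return sum(SCORE[el] for el in metals + halides)

    def crossover(a, b):
        # Toy one-point mixing; a real encoding would also enforce distinct elements.
        return (a[0][0], b[0][1], a[0][2]), (b[1][0], a[1][1])

    def mutate(cand):
        metals, halides = list(cand[0]), list(cand[1])
        metals[random.randrange(3)] = random.choice(METALS)
        return tuple(metals), tuple(halides)

    pop = [random_candidate() for _ in range(20)]
    for _ in range(50):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                       # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(10)]
    print(max(pop, key=fitness))
    ```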

  4. Simulation and evaluation of latent heat thermal energy storage

    NASA Technical Reports Server (NTRS)

    Sigmon, T. W.

    1980-01-01

    The relative value of thermal energy storage (TES) for heat pump storage (heating and cooling) as a function of storage temperature, mode of storage (hot-side or cold-side), geographic location, and utility time-of-use rate structure was derived. Computer models used to simulate the performance of a number of TES/heat pump configurations are described. The models are based on existing performance data of heat pump components, available building thermal load computational procedures, and generalized TES subsystem design. Life-cycle costs computed for each site, configuration, and rate structure are discussed.

  5. The Petascale Data Storage Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Garth; Long, Darrell; Honeyman, Peter

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.

  6. Above the cloud computing orbital services distributed data model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.
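
    A minimal sketch of what such a distributed data record might look like (every field name here is a hypothetical illustration of the identification and business-model tags the abstract describes, not the paper's actual schema):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class OrbitalDataRecord:
        """One unit of data held by a provider craft in an ATCC-style model."""
        record_id: str            # globally unique ID for the distributed environment
        owner_craft: str          # who owns the data (business-model constraint)
        storing_craft: str        # who currently holds it
        integrity_hash: str       # supports end-to-end integrity checking
        confidential: bool        # whether the item needs confidentiality protection
        retention_until: float    # epoch seconds; storing craft may discard after this
        resale_allowed: bool      # contractual right of the storing craft to resell
        replicas: list[str] = field(default_factory=list)  # other craft holding copies
    ```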

  7. Phase change energy storage for solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Chiaramonte, F. P.; Taylor, J. D.

    1992-01-01

    This paper presents the results of a transient computer simulation that was developed to study phase change energy storage techniques for Space Station Freedom (SSF) solar dynamic (SD) power systems. Such SD systems may be used in future growth SSF configurations. Two solar dynamic options are considered in this paper: Brayton and Rankine. Model elements consist of a single-node receiver and concentrator, and take into account overall heat engine efficiency and power distribution characteristics. The simulation not only computes the energy stored in the receiver phase change material (PCM), but also the amount of the PCM required for various combinations of load demands and power system mission constraints. For a solar dynamic power system in low earth orbit, the amount of stored PCM energy is calculated by balancing the solar energy input and the energy consumed by the loads corrected by an overall system efficiency. The model assumes an average 75 kW SD power system load profile which is connected to user loads via dedicated power distribution channels. The model then calculates the stored energy in the receiver and subsequently estimates the quantity of PCM necessary to meet peaking and contingency requirements. The model can also be used to conduct trade studies on the performance of SD power systems using different storage materials.
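
    A back-of-the-envelope version of the PCM sizing step (the eclipse duration, efficiency, and latent heat below are illustrative assumptions; the paper's model works from a detailed load profile and system efficiency):

    ```python
    # Energy the receiver PCM must hold to carry a load through eclipse,
    # then the PCM mass implied by the material's latent heat of fusion.
    P_LOAD = 75e3          # average SD load [W] (from the abstract)
    ECLIPSE = 36 * 60      # LEO eclipse duration [s] (typical value, assumed)
    ETA_SYS = 0.30         # overall heat-engine/distribution efficiency (assumed)
    LATENT_HEAT = 0.79e6   # latent heat of a fluoride-salt PCM [J/kg] (assumed)

    thermal_energy = P_LOAD * ECLIPSE / ETA_SYS   # heat to store in the PCM [J]
    pcm_mass = thermal_energy / LATENT_HEAT       # minimum PCM mass [kg]
    print(f"Stored heat: {thermal_energy/1e6:.0f} MJ -> PCM mass: {pcm_mass:.0f} kg")
    ```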

  8. Phase change energy storage for solar dynamic power systems

    NASA Astrophysics Data System (ADS)

    Chiaramonte, F. P.; Taylor, J. D.

    This paper presents the results of a transient computer simulation that was developed to study phase change energy storage techniques for Space Station Freedom (SSF) solar dynamic (SD) power systems. Such SD systems may be used in future growth SSF configurations. Two solar dynamic options are considered in this paper: Brayton and Rankine. Model elements consist of a single-node receiver and concentrator, and take into account overall heat engine efficiency and power distribution characteristics. The simulation not only computes the energy stored in the receiver phase change material (PCM), but also the amount of the PCM required for various combinations of load demands and power system mission constraints. For a solar dynamic power system in low earth orbit, the amount of stored PCM energy is calculated by balancing the solar energy input and the energy consumed by the loads corrected by an overall system efficiency. The model assumes an average 75 kW SD power system load profile which is connected to user loads via dedicated power distribution channels. The model then calculates the stored energy in the receiver and subsequently estimates the quantity of PCM necessary to meet peaking and contingency requirements. The model can also be used to conduct trade studies on the performance of SD power systems using different storage materials.

  9. Results of the Second U.S. Manned Orbital Space Flight

    DTIC Science & Technology

    1962-05-24

    [The abstract in this record is garbled by OCR. Recoverable fragments refer to UHF voice transmission at ranges of approximately 250 miles, the SARAH beacon, a photographic study of horizon exposures, and tracking data from each station being stored automatically in the core storage of IBM computers.]

  10. A real-space stochastic density matrix approach for density functional electronic structure.

    PubMed

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful in a future of increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral, as opposed to the extensive matrix operations in traditional approaches.

  11. A Bookless Library, Part I: Relocating Print Materials to Off-Site Storage

    ERIC Educational Resources Information Center

    Sewell, Bethany B.

    2013-01-01

    This article presents an analysis of the feasibility of a bookless library in a research setting. As spaces for collections are being converted for increased study and community spaces, many libraries have been moving low-use collections to off-site storage. Issues regarding the types of storage spaces available are addressed. Concerns and…

  12. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source software platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across the mainland of China. Users can use and analyze data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.

  13. FFTs in external or hierarchical memory

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1989-01-01

    A description is given of advanced techniques for computing an ordered FFT on a computer with external or hierarchical memory. These algorithms (1) require as few as two passes through the external data set, (2) use strictly unit stride, long vector transfers between main memory and external storage, (3) require only a modest amount of scratch space in main memory, and (4) are well suited for vector and parallel computation. Performance figures are included for implementations of some of these algorithms on Cray supercomputers. Of interest is the fact that a main memory version outperforms the current Cray library FFT routines on the Cray-2, the Cray X-MP, and the Cray Y-MP systems. Using all eight processors on the Cray Y-MP, this main memory routine runs at nearly 2 Gflops.
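
    The unit-stride, pass-structured approach described above is in the spirit of the classic "four-step" FFT factorization, in which a length-N transform becomes many short contiguous FFTs plus a twiddle multiply and a reordering. A small in-memory sketch of that factorization (this illustrates the decomposition only, not Bailey's actual out-of-core implementation):

    ```python
    import numpy as np

    def four_step_fft(x: np.ndarray, n1: int, n2: int) -> np.ndarray:
        """FFT of length n1*n2 via the four-step factorization."""
        assert x.size == n1 * n2
        # View x[j1 + n1*j2] as a matrix a[j1, j2].
        a = x.reshape(n2, n1).T
        # Step 1: n1 FFTs of length n2 (along contiguous rows).
        b = np.fft.fft(a, axis=1)
        # Step 2: twiddle factors exp(-2*pi*i * j1 * k2 / N).
        j1 = np.arange(n1).reshape(-1, 1)
        k2 = np.arange(n2).reshape(1, -1)
        b *= np.exp(-2j * np.pi * j1 * k2 / (n1 * n2))
        # Step 3: n2 FFTs of length n1 (along columns).
        c = np.fft.fft(b, axis=0)
        # Step 4: reorder so element [k1, k2] lands at output index k2 + n2*k1.
        return c.reshape(-1)

    x = np.random.rand(8 * 16) + 1j * np.random.rand(8 * 16)
    assert np.allclose(four_step_fft(x, 8, 16), np.fft.fft(x))
    ```

    In the external-memory setting, each "step" becomes one pass over the data set, which is what keeps the transfers long and unit-stride.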

  14. 46 CFR 95.16-20 - Extinguishing agent: Cylinder storage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... cylinder storage room and the protected spaces must meet the insulation criteria for Class A-60, as defined... pneumatic heat actuator as well as a remote manual control. (c) The cylinder storage space must be properly...

  15. 46 CFR 95.16-20 - Extinguishing agent: Cylinder storage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... cylinder storage room and the protected spaces must meet the insulation criteria for Class A-60, as defined... pneumatic heat actuator as well as a remote manual control. (c) The cylinder storage space must be properly...

  16. 46 CFR 95.16-20 - Extinguishing agent: Cylinder storage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... cylinder storage room and the protected spaces must meet the insulation criteria for Class A-60, as defined... pneumatic heat actuator as well as a remote manual control. (c) The cylinder storage space must be properly...

  17. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 3

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the National Space Science Data Center (NSSDC) Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

  18. Space Station tethered refueling facility operations

    NASA Technical Reports Server (NTRS)

    Kiefel, E. R.; Rudolph, L. K.; Fester, D. A.

    1986-01-01

    The space-based orbital transfer vehicle will require a large cryogenic fuel storage facility at the Space Station. An alternative to fuel storage onboard the Space Station is a tethered orbital refueling facility (TORF), separated from the Space Station by a distance sufficient to induce a gravity gradient that settles the propellants. Facility operations are a major concern for a tethered LO2/LH2 storage depot. A study was carried out to analyze these operations in order to identify the preferred TORF deployment direction (up or down) and whether the TORF should be permanently or intermittently deployed. The analyses considered safety, contamination, rendezvous, servicing, transportation rate, communication, and viewing. An upwardly, intermittently deployed facility is the preferred configuration for tethered cryogenic fuel storage.

  19. Beam induced electron cloud resonances in dipole magnetic fields

    DOE PAGES

    Calvey, J. R.; Hartung, W.; Makita, J.; ...

    2016-07-01

    The buildup of low energy electrons in an accelerator, known as electron cloud, can be severely detrimental to machine performance. Under certain beam conditions, the beam can become resonant with the cloud dynamics, accelerating the buildup of electrons. This paper will examine two such effects: multipacting resonances, in which the cloud development time is resonant with the bunch spacing, and cyclotron resonances, in which the cyclotron period of electrons in a magnetic field is a multiple of bunch spacing. Both resonances have been studied directly in dipole fields using retarding field analyzers installed in the Cornell Electron Storage Ring. These measurements are supported by both analytical models and computer simulations.
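
    As a worked example of the cyclotron-resonance condition (a simple kinematics estimate following the abstract's statement, not a result from the paper): with cyclotron period T_c = 2*pi*m_e/(e*B), requiring T_c = n * t_b fixes the resonant dipole fields for a given bunch spacing t_b:

    ```python
    import math

    M_E = 9.109e-31   # electron mass [kg]
    Q_E = 1.602e-19   # elementary charge [C]

    def resonant_field(bunch_spacing_s: float, n: int) -> float:
        """Dipole field [T] at which the electron cyclotron period equals
        n times the bunch spacing: T_c = 2*pi*m/(e*B) = n * t_b."""
        return 2 * math.pi * M_E / (Q_E * n * bunch_spacing_s)

    # Illustrative bunch spacing of 14 ns (an assumed value, not from the paper):
    for n in (1, 2, 3):
        print(n, f"{resonant_field(14e-9, n) * 1e4:.1f} gauss")
    ```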

  20. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1992-01-01

    In the future, NASA expects to gather over a terabyte per day of data, requiring multiple levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation, where data rates of over 1 billion bits per second are achievable with current technology.

  1. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1993-01-01

    In the future, NASA expects to gather over a terabyte per day of data, requiring multiple levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation, where data rates of over 1 billion bits per second are achievable with current technology.

  2. A Pseudo-Temporal Multi-Grid Relaxation Scheme for Solving the Parabolized Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    White, J. A.; Morrison, J. H.

    1999-01-01

    A multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized forms of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows. The space-marching algorithms developed to improve convergence rate and/or reduce computational cost are emphasized. The algorithms presented are extensions to the class of implicit pseudo-time iterative, upwind space-marching schemes. A full approximation storage, full multi-grid scheme is also described, which is used to accelerate the convergence of a Gauss-Seidel relaxation method. The multi-grid algorithm is shown to significantly improve convergence on high-aspect-ratio grids.

  3. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'Connection Machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has performed important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.

  4. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  5. Visualization Techniques in Space and Atmospheric Sciences

    NASA Technical Reports Server (NTRS)

    Szuszczewicz, E. P. (Editor); Bredekamp, Joseph H. (Editor)

    1995-01-01

    Unprecedented volumes of data will be generated by research programs that investigate the Earth as a system and the origin of the universe, which will in turn require analysis and interpretation that will lead to meaningful scientific insight. Providing a widely distributed research community with the ability to access, manipulate, analyze, and visualize these complex, multidimensional data sets depends on a wide range of computer science and technology topics. Data storage and compression, data base management, computational methods and algorithms, artificial intelligence, telecommunications, and high-resolution display are just a few of the topics addressed. A unifying theme throughout the papers with regards to advanced data handling and visualization is the need for interactivity, speed, user-friendliness, and extensibility.

  6. Application of a simple cerebellar model to geologic surface mapping

    USGS Publications Warehouse

    Hagens, A.; Doveton, J.H.

    1991-01-01

    Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater. © 1991.
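
    A minimal sketch of a CMAC-style learner for surface mapping (the tiling count, learning rate, and hash-table size are illustrative choices, not the parameters of the cited study): several overlapping coarse tilings map an (x, y) location to a small set of active cells, the prediction is the sum of the active cells' weights, and each observation nudges those weights toward the measured elevation.

    ```python
    import numpy as np

    class CMAC:
        """CMAC-style associative memory: overlapping tilings, hashed weight table."""
        def __init__(self, n_tilings=8, tile_size=1.0, table_size=4096, lr=0.2):
            self.n_tilings, self.tile = n_tilings, tile_size
            self.size, self.lr = table_size, lr
            self.w = np.zeros((n_tilings, table_size))

        def _cells(self, x, y):
            # Each tiling is offset by a fraction of a tile; tile coordinates are
            # hashed into a fixed table (the memory saving over a dense grid).
            for t in range(self.n_tilings):
                off = t * self.tile / self.n_tilings
                ix, iy = int((x + off) // self.tile), int((y + off) // self.tile)
                yield t, hash((ix, iy)) % self.size

        def predict(self, x, y):
            return sum(self.w[t, c] for t, c in self._cells(x, y))

        def learn(self, x, y, z):
            err = z - self.predict(x, y)       # feedback on the elevation error
            for t, c in self._cells(x, y):
                self.w[t, c] += self.lr * err / self.n_tilings

    cmac = CMAC()
    for _ in range(2000):                      # iterative learning from samples
        x, y = np.random.uniform(0, 10, 2)
        cmac.learn(x, y, np.sin(x) + np.cos(y))   # toy 'surface elevation'
    print(cmac.predict(5.0, 5.0), np.sin(5.0) + np.cos(5.0))
    ```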

  7. Computation of tightly-focused laser beams in the FDTD method

    PubMed Central

    Çapoğlu, İlker R.; Taflove, Allen; Backman, Vadim

    2013-01-01

    We demonstrate how a tightly-focused coherent TEMmn laser beam can be computed in the finite-difference time-domain (FDTD) method. The electromagnetic field around the focus is decomposed into a plane-wave spectrum, and approximated by a finite number of plane waves injected into the FDTD grid using the total-field/scattered-field (TF/SF) method. We provide an error analysis, and guidelines for the discrete approximation. We analyze the scattering of the beam from layered spaces and individual scatterers. The described method should be useful for the simulation of confocal microscopy and optical data storage. An implementation of the method can be found in our free and open source FDTD software (“Angora”). PMID:23388899

  8. Computation of tightly-focused laser beams in the FDTD method.

    PubMed

    Capoğlu, Ilker R; Taflove, Allen; Backman, Vadim

    2013-01-14

    We demonstrate how a tightly-focused coherent TEMmn laser beam can be computed in the finite-difference time-domain (FDTD) method. The electromagnetic field around the focus is decomposed into a plane-wave spectrum, and approximated by a finite number of plane waves injected into the FDTD grid using the total-field/scattered-field (TF/SF) method. We provide an error analysis, and guidelines for the discrete approximation. We analyze the scattering of the beam from layered spaces and individual scatterers. The described method should be useful for the simulation of confocal microscopy and optical data storage. An implementation of the method can be found in our free and open source FDTD software ("Angora").

  9. Analysis of Expandability and Modifiability of Computer Configuration Concepts for ATC. Volume I. Distributed Concept.

    DTIC Science & Technology

    1979-11-01

    [The excerpt in this record is table-of-contents and table residue garbled by OCR. Recoverable fragments mention parameters describing baseline ATC operation, buffer storage sized by the input/output load from each sensor, a processing estimate of the form P = K + KR (with the definition of R truncated in the source), and a division of operational roles including preliminary processing and support services.]

  10. Longitudinal space charge compensation at PSR

    NASA Astrophysics Data System (ADS)

    Neri, Filippo

    1998-11-01

    The longitudinal space-charge force in a neutron spallation source compressor ring, or in other high-intensity proton storage rings, can be compensated by introducing an inductive insert in the ring. Because the space-charge impedance is capacitive, the effect of the inductor is to cancel all or part of the space-charge potential. The Proton Storage Ring at Los Alamos National Laboratory is a compressor ring used to produce short pulses of spallation neutrons. The design of inductive inserts for space-charge compensation at the Los Alamos Proton Storage Ring is described.
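
    In the standard textbook form (quoted here as general accelerator-physics background, not from the paper itself), the longitudinal space-charge impedance per harmonic is capacitive, so an inserted inductance L cancels it when the two terms sum to zero:

    ```latex
    \frac{Z_\parallel^{\mathrm{sc}}}{n} = -\,i\,\frac{g Z_0}{2\beta\gamma^2},
    \qquad
    \frac{Z_\parallel^{L}}{n} = +\,i\,\omega_0 L,
    \qquad
    \omega_0 L = \frac{g Z_0}{2\beta\gamma^2}
    \quad \text{(full compensation)},
    ```

    where $g$ is a geometry factor set by the beam and pipe radii, $Z_0 \approx 377\ \Omega$ is the impedance of free space, $\beta$ and $\gamma$ are the relativistic factors, and $\omega_0$ is the revolution frequency.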

  11. SciServer Compute brings Analysis to Big Data in the Cloud

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally - but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.

  12. RAID Unbound: Storage Fault Tolerance in a Distributed Environment

    NASA Technical Reports Server (NTRS)

    Ritchie, Brian

    1996-01-01

    Mirroring, data replication, backup, and more recently, redundant arrays of independent disks (RAID) are all technologies used to protect and ensure access to critical company data. A new set of problems has arisen as data becomes more and more geographically distributed. Each of the technologies listed above provides important benefits, but each has failed to adapt fully to the realities of distributed computing. The key to high data availability and protection is to take the technologies' strengths and 'virtualize' them across a distributed network. RAID and mirroring offer high data availability, while data replication and backup provide strong data protection. If we take these concepts at a very granular level (defining user, record, block, file, or directory types) and then liberate them from the physical subsystems with which they have traditionally been associated, we have the opportunity to create highly scalable, network-wide storage fault tolerance. The network becomes the virtual storage space in which the traditional concepts of data high availability and protection are implemented without their corresponding physical constraints.

  13. Specimen Sample Preservation for Cell and Tissue Cultures

    NASA Technical Reports Server (NTRS)

    Meeker, Gabrielle; Ronzana, Karolyn; Schibner, Karen; Evans, Robert

    1996-01-01

    The era of the International Space Station with its longer duration missions will pose unique challenges to microgravity life sciences research. The Space Station Biological Research Project (SSBRP) is responsible for addressing these challenges and defining the science requirements necessary to conduct life science research on board the International Space Station. Space Station will support a wide range of cell and tissue culture experiments for durations of 1 to 30 days. Space Shuttle flights to bring experimental samples back to Earth for analyses will only occur every 90 days. Therefore, samples may have to be retained for periods up to 60 days. This presents a new challenge in fresh specimen sample storage for cell biology. Fresh specimen samples are defined as samples that are preserved by means other than fixation and cryopreservation. The challenge of long-term storage of fresh specimen samples includes the need to suspend or inhibit proliferation and metabolism pending return to Earth-based laboratories. With this challenge being unique to space research, there have not been any ground-based studies performed to address this issue. It was decided by SSBRP that experiment support studies were needed to address the following issues: Fixative Solution Management; Media Storage Conditions; Fresh Specimen Sample Storage of Mammalian Cell/Tissue Cultures; Fresh Specimen Sample Storage of Plant Cell/Tissue Cultures; Fresh Specimen Sample Storage of Aquatic Cell/Tissue Cultures; and Fresh Specimen Sample Storage of Microbial Cell/Tissue Cultures. The objective of these studies was to derive a set of conditions and recommendations that can be used in a long duration microgravity environment such as Space Station that will permit extended storage of cell and tissue culture specimens in a state consistent with zero or minimal growth, while at the same time maintaining their stability and viability.

  14. A Computing Method for Sound Propagation Through a Nonuniform Jet Stream

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Liu, C. H.

    1974-01-01

    Understanding the principles of jet noise propagation is an essential ingredient of systematic noise reduction research. High-speed computer methods offer a unique potential for dealing with complex real-life physical systems, whereas analytical solutions are restricted to sophisticated idealized models. The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions, and a more suitable approach was needed. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables, requiring the solution of a system of nonlinear algebraic equations; the nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed, with special emphasis on its suitability for numerical solution on fast computers. The technique is very attractive because the resulting equations are linear in the new (non-waving) variables. The computer solution to such a linear system of algebraic equations may be obtained by well-defined and direct means that are conservative of computer time and storage space. Typical examples are illustrated, and computational results are compared with available numerical and experimental data.

  15. Reconciling Scratch Space Consumption, Exposure, and Volatility to Achieve Timely Staging of Job Input Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monti, Henri; Butt, Ali R; Vazhkudai, Sudharshan S

    2010-04-01

    Innovative scientific applications and emerging dense data sources are creating a data deluge for high-end computing systems. Processing such large input data typically involves copying (or staging) it onto the supercomputer's specialized high-speed storage, scratch space, for sustained high I/O throughput. The current practice of conservatively staging data as early as possible makes the data vulnerable to storage failures, which may entail re-staging and consequently reduced job throughput. To address this, we present a timely staging framework that uses a combination of job startup time predictions, user-specified intermediate nodes, and decentralized data delivery to make input data staging coincide with job start-up. By delaying staging to when it is necessary, the exposure to failures and its effects can be reduced. Evaluation using both PlanetLab and simulations based on three years of Jaguar (No. 1 in Top500) job logs shows as much as 85.9% reduction in staging times compared to direct transfers, 75.2% reduction in wait time on scratch, and 2.4% reduction in usage/hour.
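
    The core scheduling idea reduces to simple arithmetic. A minimal sketch (the function and variable names are hypothetical illustrations; a real system like the one described would add prediction error handling and re-planning):

    ```python
    def staging_start_time(predicted_job_start: float, data_bytes: float,
                           bandwidth_bytes_per_s: float,
                           margin_s: float = 600.0) -> float:
        """Latest sensible time to begin staging input data onto scratch:
        late enough to shrink the window in which scratch failures can hit
        the staged data, early enough to finish before the job starts."""
        transfer_s = data_bytes / bandwidth_bytes_per_s
        return predicted_job_start - transfer_s - margin_s

    # 10 TB over a 5 GB/s link, job predicted to start at t = 100,000 s:
    t0 = staging_start_time(100_000.0, 10e12, 5e9)
    print(f"begin staging at t = {t0:.0f} s "
          f"({(100_000 - t0) / 60:.1f} min before the predicted start)")
    ```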

  16. Flight Computer Design for the Space Technology 5 (ST-5) Mission

    NASA Technical Reports Server (NTRS)

    Speer, David; Jackson, George; Raphael, Dave; Day, John H. (Technical Monitor)

    2001-01-01

    As part of NASA's New Millennium Program, the Space Technology 5 mission will validate a variety of technologies for nano-satellite and constellation mission applications. Included are: a miniaturized and low-power X-band transponder, a constellation communication and navigation transceiver, a cold gas micro-thruster, two different variable emittance (thermal) controllers, flex cables for solar array power collection, autonomous ground-based constellation management tools, and a new CMOS ultra-low-power, radiation-tolerant, +0.5 volt logic technology. The ST-5 focus is on small size and low power. A single-processor, multi-function flight computer will implement direct digital and analog interfaces to all of the other spacecraft subsystems and components. There will not be a distributed data system that uses a standardized serial bus such as MIL-STD-1553 or MIL-STD-1773. The flight software running on the single processor will be responsible for all real-time processing associated with: guidance, navigation and control, command and data handling (C&DH) including uplink/downlink, power switching and battery charge management, science data analysis and storage, intra-constellation communications, and housekeeping data collection and logging. As a nanosatellite trail-blazer for future constellations of up to 100 separate space vehicles, ST-5 will demonstrate a compact (single-board), low-power (5.5 watts) solution to the data acquisition, control, communications, processing and storage requirements that have traditionally required an entire network of separate circuit boards and/or avionics boxes. In addition to the New Millennium technologies, other major spacecraft subsystems include the power system electronics, a lithium-ion battery, triple-junction solar cell arrays, a science-grade magnetometer, a miniature spinning sun sensor, and a propulsion system.

  17. A review of emerging non-volatile memory (NVM) technologies and applications

    NASA Astrophysics Data System (ADS)

    Chen, An

    2016-11-01

    This paper will review emerging non-volatile memory (NVM) technologies, with the focus on phase change memory (PCM), spin-transfer-torque random-access-memory (STTRAM), resistive random-access-memory (RRAM), and ferroelectric field-effect-transistor (FeFET) memory. These promising NVM devices are evaluated in terms of their advantages, challenges, and applications. Their performance is compared based on reported parameters of major industrial test chips. Memory selector devices and cell structures are discussed. Changing market trends toward low power (e.g., mobile, IoT) and data-centric applications create opportunities for emerging NVMs. High-performance and low-cost emerging NVMs may simplify memory hierarchy, introduce non-volatility in logic gates and circuits, reduce system power, and enable novel architectures. Storage-class memory (SCM) based on high-density NVMs could fill the performance and density gap between memory and storage. Some unique characteristics of emerging NVMs can be utilized for novel applications beyond the memory space, e.g., neuromorphic computing, hardware security, etc. In the beyond-CMOS era, emerging NVMs have the potential to fulfill more important functions and enable more efficient, intelligent, and secure computing systems.

  18. KSC-06pd0547

    NASA Image and Video Library

    2006-03-24

    KENNEDY SPACE CENTER, FLA. -- With the ribbon-cutting ceremony, the new Operations Support Building II is officially in business. Participating in the event are (left to right) Aris Garcia, vice president of the architecture firm Wolfgang Alvarez; Mark Nappi, associate program manager of Ground Operations for United Space Alliance; Donald Minderman, NASA project manager; Scott Kerr, director of Engineering Development at Kennedy; Bill Parsons, deputy director of Kennedy Space Center; Miguel Morales, with NASA Engineering Development; Mike Wetmore, director of Shuttle Processing; and Tim Clancy, president of the construction firm Clancy & Theys. The Operations Support Building II is an Agency safety and health initiative project to replace 198,466 square feet of substandard modular housing and trailers in the Launch Complex 39 area at Kennedy Space Center. The five-story building, which sits south of the Vehicle Assembly Building and faces the launch pads, includes 960 office spaces, 16 training rooms, computer and multimedia conference rooms, a Mission Conference Center with an observation deck, technical libraries, an Exchange store, storage, break areas, and parking. Photo credit: NASA/George Shelton

  19. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
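
    A minimal sketch of the general recipe (reduce both spaces, then emulate in the reduced coordinates): the PCA-plus-GP pipeline below is a common way to do this and is an illustrative stand-in, not necessarily the paper's exact construction.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Toy stand-ins: 200 simulator runs, 500-dim inputs (e.g. a permeability
    # field) and 1000-dim outputs (e.g. a concentration field on a grid).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 500))
    Y = np.tanh(X[:, :1]) + 0.1 * rng.normal(size=(200, 1000))

    # Simultaneously reduce input and output spaces to a few latent coordinates.
    pca_in, pca_out = PCA(n_components=10), PCA(n_components=5)
    Xr = pca_in.fit_transform(X)
    Yr = pca_out.fit_transform(Y)

    # One GP emulator per retained output component, trained in latent space.
    gps = [GaussianProcessRegressor(kernel=RBF(length_scale=5.0)).fit(Xr, Yr[:, j])
           for j in range(Yr.shape[1])]

    def emulate(x_new: np.ndarray) -> np.ndarray:
        """Predict the full spatial field for new high-dimensional inputs."""
        xr = pca_in.transform(x_new)
        yr = np.column_stack([gp.predict(xr) for gp in gps])
        return pca_out.inverse_transform(yr)   # back to the 1000-dim field

    field = emulate(rng.normal(size=(3, 500)))   # three cheap surrogate runs
    ```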

  20. Chemical Space: Big Data Challenge for Molecular Diversity.

    PubMed

    Awale, Mahendra; Visini, Ricardo; Probst, Daniel; Arús-Pous, Josep; Reymond, Jean-Louis

    2017-10-25

    Chemical space describes all possible molecules as well as multi-dimensional conceptual spaces representing the structural diversity of these molecules. Part of this chemical space is available in public databases ranging from thousands to billions of compounds. Exploiting these databases for drug discovery represents a typical big data problem limited by computational power, data storage and data access capacity. Here we review recent developments of our laboratory, including progress in the chemical universe databases (GDB) and the fragment subset FDB-17, tools for ligand-based virtual screening by nearest neighbor searches, such as our multi-fingerprint browser for the ZINC database to select purchasable screening compounds, and their application to discover potent and selective inhibitors for calcium channel TRPV6 and Aurora A kinase, the polypharmacology browser (PPB) for predicting off-target effects, and finally interactive 3D-chemical space visualization using our online tools WebDrugCS and WebMolCS. All resources described in this paper are available for public use at www.gdb.unibe.ch.

  1. Evolutionary growth for Space Station Freedom electrical power system

    NASA Technical Reports Server (NTRS)

    Marshall, Matthew Fisk; Mclallin, Kerry; Zernic, Mike

    1989-01-01

    Over an operational lifetime of at least 30 yr, Space Station Freedom will encounter increased Space Station user requirements and advancing technologies. The Space Station electrical power system is designed with the flexibility to accommodate these emerging technologies and expert systems and is being designed with the necessary software hooks and hardware scars to accommodate increased growth demand. The electrical power system is planned to grow from the initial 75 kW up to 300 kW. The Phase 1 station will utilize photovoltaic arrays to produce the electrical power; however, for growth to 300 kW, solar dynamic power modules will be utilized. Pairs of 25 kW solar dynamic power modules will be added to the station to reach the power growth level. The addition of solar dynamic power in the growth phase places constraints in the initial Space Station systems such as guidance, navigation, and control, external thermal, truss structural stiffness, computational capabilities and storage, which must be planned-in, in order to facilitate the addition of the solar dynamic modules.

  2. Distributed state-space generation of discrete-state stochastic models

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Gluckman, Joshua; Nicol, David

    1995-01-01

    High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of logical and numerical properties of these models often requires the generation and storage of the entire underlying state space. This imposes practical limitations on the types of systems which can be modeled. Because of the vast amount of memory consumed, we investigate distributed algorithms for the generation of state-space graphs. The distributed construction allows us to take advantage of the combined memory readily available on a network of workstations. The key technical problem is to find effective methods for on-the-fly partitioning, so that the state space is evenly distributed among processors. In this paper we report on the implementation of a distributed state-space generator that may be linked to a number of existing system modeling tools. We discuss partitioning strategies in the context of Petri net models, and report on performance observed on a network of workstations, as well as on a distributed-memory multi-computer.
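
    The core of such a generator is the partitioning function that assigns each newly discovered state to an owning processor. A minimal sketch (hash partitioning with one work queue per processor, simulated sequentially here rather than over a real network; the toy next-state function stands in for a Petri-net firing rule):

    ```python
    from collections import deque

    N_PROCS = 4
    partition = lambda state: hash(state) % N_PROCS   # on-the-fly owner assignment

    def successors(state):
        """Toy next-state function: a 5x5 cyclic state space (25 states)."""
        a, b = state
        return [((a + 1) % 5, b), (a, (b + 1) % 5)]

    # Each processor keeps its own frontier and its own slice of the state space.
    frontiers = [deque() for _ in range(N_PROCS)]
    known = [set() for _ in range(N_PROCS)]
    initial = (0, 0)
    frontiers[partition(initial)].append(initial)

    while any(frontiers):
        for p in range(N_PROCS):            # round-robin stands in for parallelism
            while frontiers[p]:
                s = frontiers[p].popleft()
                if s in known[p]:
                    continue
                known[p].add(s)
                for nxt in successors(s):
                    owner = partition(nxt)  # 'send' the state to its owner
                    frontiers[owner].append(nxt)

    print(sum(len(k) for k in known), "states generated")   # 25 for this toy model
    ```

    A hash that spreads states evenly keeps the per-processor memory balanced, which is exactly the on-the-fly partitioning problem the abstract identifies.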

  3. An Isotope-Powered Thermal Storage unit for space applications

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E.; Rose, M. F.

    1991-01-01

    An Isotope-Powered Thermal Storage Unit (ITSU), that would store and utilize heat energy in a 'pulsed' fashion in space operations, is described. Properties of various radioisotopes are considered in conjunction with characteristics of thermal energy storage materials, to evaluate possible implementation of such a device. The utility of the unit is discussed in light of various space applications, including rocket propulsion, power generation, and spacecraft thermal management.

  4. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the ability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1, with the results compared against actual measurements of leak sounds made by a one-atmosphere-to-vacuum leak through a small hole in the pressure wall of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). While E-FEM represents a reverberant sound field calculation, of importance to this application is the requirement to also handle the direct-field effect of the sound generation. It was also important to be able to compute the sound fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.

  5. PC Software graphics tool for conceptual design of space/planetary electrical power systems

    NASA Technical Reports Server (NTRS)

    Truong, Long V.

    1995-01-01

    This paper describes the Decision Support System (DSS), a personal computer software graphics tool for designing conceptual space and/or planetary electrical power systems. By using the DSS, users can obtain desirable system design and operating parameters, such as system weight, electrical distribution efficiency, and bus power. With this tool, a large-scale specific power system was designed in a matter of days. It is an excellent tool to help designers make tradeoffs between system components, hardware architectures, and operation parameters in the early stages of the design cycle. The DSS is a user-friendly, menu-driven tool with online help and a custom graphical user interface. An example design and results are illustrated for a typical space power system with multiple types of power sources, frequencies, energy storage systems, and loads.

  6. A universal computer control system for motors

    NASA Technical Reports Server (NTRS)

    Szakaly, Zoltan F. (Inventor)

    1991-01-01

    A control system for a multi-motor system such as a space telerobot, having a remote computational node and a local computational node interconnected with one another by a high-speed data link, is described. A Universal Computer Control System (UCCS) for the telerobot is located at each node. Each node is provided with a multibus computer system which is characterized by a plurality of processors connected to a common bus, including at least one command processor. The command processor communicates over the bus with a plurality of joint controller cards. A plurality of direct-current torque motors, of the type used in telerobot joints and telerobot hand-held controllers, are connected to the controller cards and respond to digital control signals from the command processor. Essential motor operating parameters are sensed by analog sensing circuits, and the sensed analog signals are converted to digital signals for storage at the controller cards, where such signals can be read during an address read/write cycle of the command processor.

  7. Tchebichef moment transform on image dithering for mobile applications

    NASA Astrophysics Data System (ADS)

    Ernawan, Ferda; Abu, Nur Azman; Rahmalan, Hidayah

    2012-04-01

    Currently, mobile image applications spend a large share of their computation on displaying images. A true-color raw image contains billions of colors and consumes high computational power in most mobile image applications. At the same time, mobile devices are only expected to be equipped with modest processing power and minimal storage space. Image dithering is a popular technique to reduce the number of bits per pixel at the expense of lower-quality image display. This paper proposes a novel approach to image dithering using a 2x2 Tchebichef moment transform (TMT). TMT integrates a simple mathematical framework using matrices, and TMT coefficients consist of real rational numbers. Image dithering based on TMT has the potential to provide better efficiency and simplicity. A preliminary experiment shows promising results in terms of reconstruction error and image visual texture.
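
    For contrast with the TMT approach, the classical ordered-dithering baseline also works on small pixel blocks. A minimal sketch using the standard 2x2 Bayer threshold matrix (this illustrates dithering in general, not the paper's TMT method):

    ```python
    import numpy as np

    # Standard 2x2 Bayer matrix, normalized to thresholds in (0, 1).
    BAYER2 = (np.array([[0, 2],
                        [3, 1]]) + 0.5) / 4.0

    def ordered_dither(gray: np.ndarray) -> np.ndarray:
        """Reduce an 8-bit grayscale image to 1 bit/pixel by tiled thresholding."""
        h, w = gray.shape
        thresholds = np.tile(BAYER2, (h // 2 + 1, w // 2 + 1))[:h, :w]
        return (gray / 255.0 > thresholds).astype(np.uint8)  # 0 or 1 per pixel

    ramp = np.tile(np.linspace(0, 255, 64, dtype=np.uint8), (16, 1))
    print(ordered_dither(ramp))   # halftone pattern approximating the ramp
    ```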

  8. Study on Global GIS architecture and its key technologies

    NASA Astrophysics Data System (ADS)

    Cheng, Chengqi; Guan, Li; Lv, Xuefeng

    2009-09-01

    Global GIS (G2IS) is a system that supports huge-data processing and direct global manipulation on a global grid based on a spheroid or ellipsoid surface. Based on the global subdivision grid (GSG), a Global GIS architecture is presented in this paper, taking advantage of computer cluster theory, space-time integration technology, and virtual reality technology. The Global GIS system architecture is composed of five layers: the data storage layer, data representation layer, network and cluster layer, data management layer, and data application layer. Within this architecture, a four-level protocol framework and a three-layer data management pattern for the organization, management, and publication of spatial information are designed. Three kinds of core supporting technologies (computer cluster theory, space-time integration technology, and virtual reality technology) and their application patterns in the Global GIS are introduced in detail. The primary ideas of Global GIS in this paper represent an important development tendency for GIS.

  9. Study on Global GIS architecture and its key technologies

    NASA Astrophysics Data System (ADS)

    Cheng, Chengqi; Guan, Li; Lv, Xuefeng

    2010-11-01

    Global GIS (G2IS) is a system that supports huge-data processing and direct global manipulation on a global grid based on a spheroid or ellipsoid surface. Based on the global subdivision grid (GSG), a Global GIS architecture is presented in this paper, taking advantage of computer cluster theory, space-time integration technology, and virtual reality technology. The Global GIS system architecture is composed of five layers: the data storage layer, data representation layer, network and cluster layer, data management layer, and data application layer. Within this architecture, a four-level protocol framework and a three-layer data management pattern for the organization, management, and publication of spatial information are designed. Three kinds of core supporting technologies (computer cluster theory, space-time integration technology, and virtual reality technology) and their application patterns in the Global GIS are introduced in detail. The primary ideas of Global GIS in this paper represent an important development tendency for GIS.

  10. A third-order implicit discontinuous Galerkin method based on a Hermite WENO reconstruction for time-accurate solution of the compressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Liu, Xiaodong; Luo, Hong

    2015-06-01

    Here, a space and time third-order discontinuous Galerkin method based on a Hermite weighted essentially non-oscillatory reconstruction is presented for the unsteady compressible Euler and Navier-Stokes equations. At each time step, a lower-upper symmetric Gauss-Seidel preconditioned generalized minimal residual solver is used to solve the systems of linear equations arising from an explicit first stage, single diagonal coefficient, diagonally implicit Runge-Kutta time integration scheme. The performance of the developed method is assessed through a variety of unsteady flow problems. Numerical results indicate that this method is able to deliver the designed third-order accuracy of convergence in both space and time, while requiring remarkably less storage than standard third-order discontinuous Galerkin methods, and less computing time than lower-order discontinuous Galerkin methods to achieve the same level of temporal accuracy for computing unsteady flow problems.

  11. KSC-07pd2416

    NASA Image and Video Library

    2007-09-10

    KENNEDY SPACE CENTER, FLA. -- In bay 3 of the Orbiter Processing Facility, a tool storage assembly unit is being moved for storage in Discovery's payload bay. The tools may be used on a spacewalk, yet to be determined, during mission STS-120. In an unusual operation, the payload bay doors had to be reopened after closure to accommodate the storage. Space shuttle Discovery is targeted to launch Oct. 23 to the International Space Station. It will carry the U.S. Node 2, a connecting module, named Harmony, for assembly on the space station. Photo credit: NASA/Amanda Diller

  12. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One commonly used dental storage approach is cloud storage: patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computing needs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.

  13. High temperature superconducting magnetic energy storage for future NASA missions

    NASA Technical Reports Server (NTRS)

    Faymon, Karl A.; Rudnick, Stanley J.

    1988-01-01

    Several NASA sponsored studies based on 'conventional' liquid helium temperature level superconductivity technology have concluded that superconducting magnetic energy storage has considerable potential for space applications. The advent of high temperature superconductivity (HTSC) may provide additional benefits over conventional superconductivity technology, making magnetic energy storage even more attractive. The proposed NASA space station is a possible candidate for the application of HTSC energy storage. Alternative energy storage technologies for this and other low Earth orbit missions are compared.

  14. Experimental and Numerical Investigation of Reduced Gravity Fluid Slosh Dynamics for the Characterization of Cryogenic Launch and Space Vehicle Propellants

    NASA Technical Reports Server (NTRS)

    Walls, Laurie K.; Kirk, Daniel; deLuis, Kavier; Haberbusch, Mark S.

    2011-01-01

    As space programs increasingly investigate various options for long duration space missions, the accurate prediction of propellant behavior over long periods of time in a microgravity environment has become increasingly imperative. This has driven the development of a detailed, physics-based understanding of the slosh behavior of cryogenic propellants over a range of conditions and environments that are relevant for rocket and space storage applications. Recent advancements in computational fluid dynamics (CFD) models and hardware capabilities have enabled the modeling of complex fluid behavior in a microgravity environment. Historically, launch vehicles with moderate duration upper stage coast periods have contained very limited instrumentation to quantify propellant stratification and boil-off in these environments, thus the ability to benchmark these complex computational models is of great consequence. To benchmark enhanced CFD models, recent work focuses on establishing an extensive experimental database of liquid slosh under a wide range of relevant conditions. In addition, a mass gauging system specifically designed to provide high fidelity measurements for both liquid stratification and liquid/ullage position in a microgravity environment has been developed. This publication will summarize the various experimental programs established to produce this comprehensive database and unique flight measurement techniques.

  15. 19 CFR 19.30 - Domestic wheat not to be allowed in bonded space.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Domestic wheat not to be allowed in bonded space... THEREIN Space Bonded for the Storage of Wheat § 19.30 Domestic wheat not to be allowed in bonded space. The presence of domestic wheat in space bonded for the storage of imported wheat shall not be...

  16. Semantics-based distributed I/O with the ParaMEDIC framework.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaji, P.; Feng, W.; Lin, H.

    2008-01-01

    Many large-scale applications simultaneously rely on multiple resources for efficient execution. For example, such applications may require both large compute and storage resources; however, very few supercomputing centers can provide large quantities of both. Thus, data generated at the compute site oftentimes has to be moved to a remote storage site for either storage or visualization and analysis. Clearly, this is not an efficient model, especially when the two sites are distributed over a wide-area network. Thus, we present a framework called 'ParaMEDIC: Parallel Metadata Environment for Distributed I/O and Computing' which uses application-specific semantic information to convert the generated data to orders-of-magnitude smaller metadata at the compute site, transfer the metadata to the storage site, and re-process the metadata at the storage site to regenerate the output. Specifically, ParaMEDIC trades a small amount of additional computation (in the form of data post-processing) for a potentially significant reduction in data that needs to be transferred in distributed environments.
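
    The trade described above can be made concrete with a toy sketch (illustrative only; the function names and the shared "database" are our assumptions, not the ParaMEDIC API). Both sites hold the same reference data, so the compute site ships only small (offset, length) metadata over the WAN and the storage site regenerates the full matched records locally:

        # Toy model of semantics-based distributed I/O: ship metadata, not bulk data.
        SHARED_DB = "the quick brown fox jumps over the lazy dog " * 1000

        def compute_site(query: str) -> list:
            """Heavy computation: find matches, emit compact (offset, length) pairs."""
            hits, start = [], 0
            while True:
                i = SHARED_DB.find(query, start)
                if i == -1:
                    break
                hits.append((i, len(query)))
                start = i + 1
            return hits  # orders of magnitude smaller than the matched records

        def storage_site(hits: list) -> list:
            """Cheap re-processing: regenerate full output from the metadata."""
            return [SHARED_DB[i:i + n] for i, n in hits]

        assert storage_site(compute_site("fox")) == ["fox"] * 1000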

  17. Characterization of heat transfer in nutrient materials. [space flight feeding

    NASA Technical Reports Server (NTRS)

    Witte, L. C.

    1985-01-01

    The processing and storage of foodstuffs in zero-g environments such as in Skylab and the space shuttle were investigated. Particular attention was given to the efficient heating of foodstuffs. The thermophysical properties of various foods were cataloged and critiqued. The low temperature storage of biological samples as well as foodstuffs during shuttle flights was studied. Research and development requirements related to food preparation and storage on the space station are discussed.

  18. 14 CFR 249.5 - Storage of records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Storage of records. 249.5 Section 249.5 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS PRESERVATION OF AIR CARRIER RECORDS General Instructions § 249.5 Storage of records. Each carrier...

  19. 14 CFR 249.5 - Storage of records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Storage of records. 249.5 Section 249.5 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS PRESERVATION OF AIR CARRIER RECORDS General Instructions § 249.5 Storage of records. Each carrier...

  20. 14 CFR 249.5 - Storage of records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Storage of records. 249.5 Section 249.5 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS PRESERVATION OF AIR CARRIER RECORDS General Instructions § 249.5 Storage of records. Each carrier...

  1. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.

  2. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  3. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  4. Optimization of thermal protection systems for the space shuttle vehicle. Volume 1: Final report

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A study performed to continue development of computational techniques for the Space Shuttle Thermal Protection System is reported. The resulting computer code was used to perform some additional optimization studies on several TPS configurations. The program was developed in Fortran 4 for the CDC 6400, and it was converted to Fortran 5 to be used on the Univac 1108. The computational methodology is developed in modular fashion to facilitate changes and updating of the techniques and to allow overlaying the computer code to fit into approximately 131,000 octal words of core storage. The program logic involves subroutines which handle input and output of information between computer and user, and thermodynamic, stress, dynamic, and weight/estimate analyses of a variety of panel configurations. These include metallic, ablative, RSI (with and without an underlying phase change material), and a thermodynamic analysis only of carbon-carbon systems applied to the leading edge and flat cover panels. Two different thermodynamic analyses are used. The first is a two-dimensional, explicit procedure with variable time steps which is used to describe the behavior of metallic and carbon-carbon leading edges. The second is a one-dimensional implicit technique used to predict temperature in the charring ablator and the noncharring RSI. The latter analysis is performed simply by suppressing the chemical reactions and pyrolysis of the TPS material.

  5. Operational Numerical Weather Prediction at the Met Office and potential ways forward for operational space weather prediction systems

    NASA Astrophysics Data System (ADS)

    Jackson, David


  6. Activities of NICT space weather project

    NASA Astrophysics Data System (ADS)

    Murata, Ken T.; Nagatsuma, Tsutomu; Watari, Shinichi; Shinagawa, Hiroyuki; Ishii, Mamoru

    NICT (National Institute of Information and Communications Technology) has been in charge of the space weather forecast service in Japan for more than 20 years. The main target region of space weather is the geo-space in the vicinity of the Earth, where human activities are dominant. In the geo-space, serious damage to satellites, international space stations and astronauts takes place, caused by energetic particles or electromagnetic disturbances; the origin of these causes is the dynamically changing solar activity. Positioning systems via GPS satellites have also become important recently. Since the most significant source of positioning error is disturbances of the ionosphere, it is crucial to estimate the time-dependent modulation of electron density profiles in the ionosphere. NICT is one of the 13 members of the ISES (International Space Environment Service), an international assembly of space weather forecast centers under UNESCO. With the help of geo-space environment data exchanged among the member nations, NICT operates a daily space weather forecast service to provide information on forecasts of solar flares, geomagnetic disturbances, solar proton events, and radio-wave propagation conditions in the ionosphere. The space weather forecast at NICT is conducted based on three methodologies: observations, simulations and informatics (OSI model). For real-time or quasi-real-time reporting of space weather, we conduct our own observations: the Hiraiso solar observatory to monitor solar activity (solar flares, coronal mass ejections, and so on), a domestic ionosonde network, magnetometer and HF radar observations in far-east Siberia, and the south-east Asia low-latitude ionosonde network (SEALION). Real-time observation data to monitor solar and solar-wind activities are obtained through antennae at NICT from the ACE and STEREO satellites. We have a middle-class supercomputer (NEC SX-8R) to maintain real-time computer simulations of the solar wind, magnetosphere and ionosphere. The three simulations are directly or indirectly connected to each other based on real-time observation data to reproduce a virtual geo-space region on the supercomputer. Informatics is a new methodology for making precise forecasts of space weather. Based on new information and communication technologies (ICT), it provides more information in both quality and quantity. At NICT, we have been developing a cloud-computing system named "space weather cloud" based on a high-speed network system (JGN2+). Huge-scale distributed storage (1 PB), cluster computers, visualization systems and other resources are expected to yield new findings and services for space weather forecasting. The final goal of the NICT space weather service is to predict near-future space weather conditions and disturbances that cause satellite malfunctions, telecommunication problems, and errors in GPS navigation. In the present talk, we introduce our recent activities on the space weather services and discuss how we are going to develop the services from the viewpoints of space science and practical uses.

  7. Design and Analysis of a Flexible, Reliable Deep Space Life Support System

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    This report describes a flexible, reliable, deep space life support system design approach that uses either storage or recycling or both together. The design goal is to provide the needed life support performance with the required ultra reliability for the minimum Equivalent System Mass (ESM). Recycling life support systems used with multiple redundancy can have sufficient reliability for deep space missions but they usually do not save mass compared to mixed storage and recycling systems. The best deep space life support system design uses water recycling with sufficient water storage to prevent loss of crew if recycling fails. Since the amount of water needed for crew survival is a small part of the total water requirement, the required amount of stored water is significantly less than the total to be consumed. Water recycling with water, oxygen, and carbon dioxide removal material storage can achieve the high reliability of full storage systems with only half the mass of full storage and with less mass than the highly redundant recycling systems needed to achieve acceptable reliability. Improved recycling systems with lower mass and higher reliability could perform better than systems using storage.

  8. Advanced dosimetry systems for the space transport and space station

    NASA Technical Reports Server (NTRS)

    Wailly, L. F.; Schneider, M. F.; Clark, B. C.

    1972-01-01

    Advanced dosimetry system concepts are described that will provide automated and instantaneous measurement of dose and particle spectra. Systems are proposed for measuring dose rate from cosmic radiation background to greater than 3600 rads/hr. Charged particle spectrometers, both internal and external to the spacecraft, are described for determining mixed field energy spectra and particle fluxes for both real time onboard and ground-based computer evaluation of the radiation hazard. Automated passive dosimetry systems consisting of thermoluminescent dosimeters and activation techniques are proposed for recording the dose levels for twelve or more crew members. This system will allow automatic onboard readout and data storage of the accumulated dose and can be transmitted to ground after readout or data records recovered with each crew rotation.

  9. Low-rank approximation in the numerical modeling of the Farley-Buneman instability in ionospheric plasma

    NASA Astrophysics Data System (ADS)

    Dolgov, S. V.; Smirnov, A. P.; Tyrtyshnikov, E. E.

    2014-04-01

    We consider numerical modeling of the Farley-Buneman instability in the Earth's ionosphere plasma. The ion behavior is governed by the kinetic Vlasov equation with the BGK collisional term in the four-dimensional phase space, and since the finite difference discretization on a tensor product grid is used, this equation becomes the most computationally challenging part of the scheme. To relax the complexity and memory consumption, an adaptive model reduction using the low-rank separation of variables, namely the Tensor Train format, is employed. The approach was verified via a prototype MATLAB implementation. Numerical experiments demonstrate the possibility of efficient separation of space and velocity variables, resulting in the solution storage reduction by a factor of order tens.
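
    The "low-rank separation of variables" named above can be written out; in the standard Tensor Train notation (a textbook sketch, not the authors' specific ranks or grids), a d-dimensional array is approximated by a chain of small matrix products:

        f(i_1, i_2, \dots, i_d) \approx G_1(i_1)\, G_2(i_2) \cdots G_d(i_d),

    where each core G_k(i_k) is an r_{k-1} x r_k matrix with r_0 = r_d = 1, so storage falls from O(n^d) for the full tensor to O(d n r^2) for maximal rank r, which is the source of the reported memory reduction.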

  10. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
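
    A minimal sketch of the log-structured conversion idea (our illustration; the patented middleware and PLFS's real on-disk format are more involved): many per-process checkpoint files are appended into one object body, and a small JSON index records each file's offset and length so that individual files remain addressable in a flat object store:

        import json

        def pack(files: dict) -> tuple:
            """Append all files into one log object; return (log_bytes, json_index)."""
            log, index, offset = bytearray(), {}, 0
            for name, data in files.items():
                index[name] = [offset, len(data)]
                log += data
                offset += len(data)
            return bytes(log), json.dumps(index)

        def unpack_one(log: bytes, index_json: str, name: str) -> bytes:
            """Fetch a single original file back out of the packed object."""
            off, size = json.loads(index_json)[name]
            return log[off:off + size]

        log, idx = pack({"ckpt.0": b"rank0-state", "ckpt.1": b"rank1-state"})
        assert unpack_one(log, idx, "ckpt.1") == b"rank1-state"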

  11. Damsel: A Data Model Storage Library for Exascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koziol, Quincey

    The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.

  12. Role of Laboratory Plasma Experiments in exploring the Physics of Solar Eruptions

    NASA Astrophysics Data System (ADS)

    Tripathi, S.

    2017-12-01

    Solar eruptive events are triggered over a broad range of spatio-temporal scales by a variety of fundamental processes (e.g., force imbalance, magnetic reconnection, electrical-current driven instabilities) associated with arched magnetoplasma structures in the solar atmosphere. Contemporary research on solar eruptive events is at the forefront of solar and heliospheric physics due to its relevance to space weather. Details on the formation of magnetized plasma structures on the Sun, the storage of magnetic energy in such structures over long periods (several Alfven transit times), and their impulsive eruptions have been recorded in numerous observations and simulated in computer models. Inherent limitations of space observations and the uncontrolled nature of solar eruptions pose significant challenges in testing theoretical models and developing the predictive capability for space weather. The pace of scientific progress in this area can be significantly boosted by tapping the potential of appropriately scaled laboratory plasma experiments to complement solar observations, theoretical models, and computer simulations. As an example, recent results from a laboratory plasma experiment on arched magnetic flux ropes will be presented and future challenges will be discussed. (Work supported by the National Science Foundation, USA under award number 1619551)

  13. CyVerse Data Commons: lessons learned in cyberinfrastructure management and data hosting from the Life Sciences

    NASA Astrophysics Data System (ADS)

    Swetnam, T. L.; Walls, R.; Merchant, N.

    2017-12-01

    CyVerse is a US National Science Foundation funded initiative "to design, deploy, and expand a national cyberinfrastructure for life sciences research, and to train scientists in its use," supporting and enabling cross-disciplinary collaborations across institutions. CyVerse's free, open-source cyberinfrastructure is being adopted in biogeoscience and space sciences research. CyVerse's data-science-agnostic platforms provide shared data storage, high performance computing, and cloud computing that allow analysis of very large data sets (including incomplete or work-in-progress data sets). Part of CyVerse's success has been in addressing the handling of data through its entire lifecycle, from creation to final publication in a digital data repository to reuse in new analyses. CyVerse developers and user communities have learned many lessons that are germane to Earth and Environmental Science. We present an overview of the tools and services available through CyVerse including: interactive computing with the Discovery Environment (https://de.cyverse.org/), an interactive data science workbench featuring data storage and transfer via the Data Store; cloud computing with Atmosphere (https://atmo.cyverse.org); and access to HPC via the Agave API (https://agaveapi.co/). Each CyVerse service emphasizes access to long term data storage, including our own Data Commons (http://datacommons.cyverse.org), as well as external repositories. The Data Commons service manages, organizes, preserves, publishes, and allows for discovery and reuse of data. All data published to CyVerse's Curated Data receive a permanent identifier (PID) in the form of a DOI (Digital Object Identifier) or ARK (Archival Resource Key). Data that is more fluid can also be published in the Data Commons through Community Collaborated data. The Data Commons provides landing pages, permanent DOIs or ARKs, and supports data reuse and citation through features such as open data licenses and downloadable citations. The ability to access and compute on data within the CyVerse framework, or with external compute resources when necessary, has proven highly beneficial to our user community, which has grown continuously since the inception of CyVerse nine years ago.

  14. Perspectives on energy storage wheels for space station application

    NASA Technical Reports Server (NTRS)

    Oglevie, R. E.

    1984-01-01

    Several of the issues of the workshop are addressed from the perspective of a potential Space Station developer and energy wheel user. Systems considerations are emphasized rather than component technology. The potential of the energy storage wheel (ESW) concept is discussed. The current status of the technology base is described. Justification for advanced technology development is also discussed. The study concludes that energy storage in wheels is an attractive concept for immediate technology development and future Space Station application.

  15. FORCEnet Net Centric Architecture - A Standards View

    DTIC Science & Technology

    2006-06-01

    [Extraction residue from a layered architecture diagram; the recoverable layer names are: user-facing services; shared services; networking/communications; storage; computing platform; data interchange/integration; data management; application; service platform; service framework.]

  16. KENNEDY SPACE CENTER, FLA. - Workers in the Columbia Debris Hangar pull items from storage containers to transfer to storage in the Vehicle Assembly Building. About 83,000 pieces were shipped to KSC during search and recovery efforts in East Texas.

    NASA Image and Video Library

    2003-09-02

    KENNEDY SPACE CENTER, FLA. - Workers in the Columbia Debris Hangar pull items from storage containers to transfer to storage in the Vehicle Assembly Building. About 83,000 pieces were shipped to KSC during search and recovery efforts in East Texas.

  17. Energy storage options for space power

    NASA Astrophysics Data System (ADS)

    Hoffman, H. W.; Martin, J. F.; Olszewski, M.

    Including energy storage in a space power supply enhances the feasibility of using thermal power cycles (Rankine or Brayton) and providing high-power pulses. Superconducting magnets, capacitors, electrochemical batteries, thermal phase-change materials (PCM), and flywheels are assessed; the results obtained suggest that flywheels and phase-change devices hold the most promise. Latent heat storage using inorganic salts and metallic eutectics offers thermal energy storage densities of 1500 kJ/kg to 2000 kJ/kg at temperatures to 1675 K. Innovative techniques allow these media to operate in direct contact with the heat engine working fluid. Enhancing thermal conductivity and/or modifying PCM crystallization habit provide other options. Flywheels of low-strain graphite and Kevlar fibers have achieved mechanical energy storage densities of 300 kJ/kg. With high-strain graphite fibers, storage densities appropriate to space power needs (about 500 kJ/kg) seem feasible. Coupling advanced flywheels with emerging high power density homopolar generators and compulsators could result in electric pulse-power storage modules of significantly higher energy density.
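
    To put the quoted figures in more familiar battery units (a direct unit conversion, since 1 Wh = 3.6 kJ):

        1500~\mathrm{kJ/kg} \approx 417~\mathrm{Wh/kg}, \qquad 300~\mathrm{kJ/kg} \approx 83~\mathrm{Wh/kg}, \qquad 500~\mathrm{kJ/kg} \approx 139~\mathrm{Wh/kg}.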

  18. Flash drive memory apparatus and method

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor)

    2010-01-01

    A memory apparatus includes a non-volatile computer memory, a USB mass storage controller connected to the non-volatile computer memory, the USB mass storage controller including a daisy chain component, a male USB interface connected to the USB mass storage controller, and at least one other interface for a memory device, other than a USB interface, the at least one other interface being connected to the USB mass storage controller.

  19. Scalable cloud without dedicated storage

    NASA Astrophysics Data System (ADS)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without the separate dedicated storage. The dedicated storage is replaced by the distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built on the basis of the open source components like OpenStack, CEPH, etc.

  20. 76 FR 5120 - Highway-Rail Grade Crossing; Safe Clearance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-28

    ... driver from entering onto a highway-rail grade crossing unless there is sufficient space to drive... crossing unless there is sufficient space to drive completely through the grade crossing without stopping... as the ``clear storage distance.'' \\1\\ Chapter 8 guidance material also refers to ``storage space...

  1. Storage Costs and Heuristics Interact to Produce Patterns of Aphasic Sentence Comprehension Performance

    PubMed Central

    Clark, David Glenn

    2012-01-01

    Background: Despite general agreement that aphasic individuals exhibit difficulty understanding complex sentences, the nature of sentence complexity itself is unresolved. In addition, aphasic individuals appear to make use of heuristic strategies for understanding sentences. This research is a comparison of predictions derived from two approaches to the quantification of sentence complexity, one based on the hierarchical structure of sentences, and the other based on dependency locality theory (DLT). Complexity metrics derived from these theories are evaluated under various assumptions of heuristic use. Method: A set of complexity metrics was derived from each general theory of sentence complexity and paired with assumptions of heuristic use. Probability spaces were generated that summarized the possible patterns of performance across 16 different sentence structures. The maximum likelihood of comprehension scores of 42 aphasic individuals was then computed for each probability space and the expected scores from the best-fitting points in the space were recorded for comparison to the actual scores. Predictions were then compared using measures of fit quality derived from linear mixed effects models. Results: All three of the metrics that provide the most consistently accurate predictions of patient scores rely on storage costs based on the DLT. Patients appear to employ an Agent–Theme heuristic, but vary in their tendency to accept heuristically generated interpretations. Furthermore, the ability to apply the heuristic may be degraded in proportion to aphasia severity. Conclusion: DLT-derived storage costs provide the best prediction of sentence comprehension patterns in aphasia. Because these costs are estimated by counting incomplete syntactic dependencies at each point in a sentence, this finding suggests that aphasia is associated with reduced availability of cognitive resources for maintaining these dependencies. PMID:22590462

  2. Storage costs and heuristics interact to produce patterns of aphasic sentence comprehension performance.

    PubMed

    Clark, David Glenn

    2012-01-01

    Despite general agreement that aphasic individuals exhibit difficulty understanding complex sentences, the nature of sentence complexity itself is unresolved. In addition, aphasic individuals appear to make use of heuristic strategies for understanding sentences. This research is a comparison of predictions derived from two approaches to the quantification of sentence complexity, one based on the hierarchical structure of sentences, and the other based on dependency locality theory (DLT). Complexity metrics derived from these theories are evaluated under various assumptions of heuristic use. A set of complexity metrics was derived from each general theory of sentence complexity and paired with assumptions of heuristic use. Probability spaces were generated that summarized the possible patterns of performance across 16 different sentence structures. The maximum likelihood of comprehension scores of 42 aphasic individuals was then computed for each probability space and the expected scores from the best-fitting points in the space were recorded for comparison to the actual scores. Predictions were then compared using measures of fit quality derived from linear mixed effects models. All three of the metrics that provide the most consistently accurate predictions of patient scores rely on storage costs based on the DLT. Patients appear to employ an Agent-Theme heuristic, but vary in their tendency to accept heuristically generated interpretations. Furthermore, the ability to apply the heuristic may be degraded in proportion to aphasia severity. DLT-derived storage costs provide the best prediction of sentence comprehension patterns in aphasia. Because these costs are estimated by counting incomplete syntactic dependencies at each point in a sentence, this finding suggests that aphasia is associated with reduced availability of cognitive resources for maintaining these dependencies.
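
    How DLT-style storage costs are counted can be illustrated with a toy proxy (our simplification; the study's metric counts incomplete syntactic dependencies at each point, which this open-arc count only approximates): at each word, count the dependencies that span it, and note that object-extracted relative clauses keep more dependencies open than subject-extracted ones:

        def storage_profile(n_words: int, deps: list) -> list:
            """Cost at position k = number of arcs (i, j) still open at k,
            i.e. with min(i, j) <= k < max(i, j)."""
            return [sum(min(i, j) <= k < max(i, j) for i, j in deps)
                    for k in range(n_words)]

        # "The reporter who the senator attacked admitted the error"
        #    0     1      2   3     4       5        6       7    8
        deps = [(1, 6), (2, 5), (4, 5), (1, 2), (6, 8)]  # hand-coded arcs
        print(storage_profile(9, deps))  # [0, 2, 2, 2, 3, 1, 1, 1, 0]

    The cost peaks inside the relative clause, where the main-clause subject-verb dependency is still unresolved.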

  3. Evaluation of the matrix exponential for use in ground-water-flow and solute-transport simulations; theoretical framework

    USGS Publications Warehouse

    Umari, A.M.; Gorelick, S.M.

    1986-01-01

    It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability through use of the matrix condition number, have to be weighed against the increased relative computational and storage requirements of this approach as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of time steps used in the latter. A numerical example illustrates application of the META method to a sample ground-water-flow problem. (Author's abstract)
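
    The analytic advancement described here can be stated compactly (a standard derivation, with a constant source vector assumed for simplicity): spatial discretization alone turns the flow equation into a linear ODE system, whose exact solution involves the matrix exponential and thus requires no intermediate time steps:

        \frac{d\mathbf{h}}{dt} = A\,\mathbf{h} + \mathbf{q}
        \quad\Longrightarrow\quad
        \mathbf{h}(t) = e^{At}\,\mathbf{h}(0) + A^{-1}\!\left(e^{At} - I\right)\mathbf{q},

    valid for invertible A; heads (or concentrations) at any future time t follow from a single exponentiation, which is where the symmetric and non-symmetric exponentiation algorithms come in.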

  4. An Investigation to Advance the Technology Readiness Level of the Centaur Derived On-orbit Propellant Storage and Transfer System

    NASA Astrophysics Data System (ADS)

    Silvernail, Nathan L.

    This research was carried out in collaboration with the United Launch Alliance (ULA) to advance an innovative Centaur-based on-orbit propellant storage and transfer system that takes advantage of rotational settling to simplify Fluid Management (FM), specifically enabling settled fluid transfer between two tanks and settled pressure control. This research consists of two specific objectives: (1) technique and process validation and (2) computational model development. In order to raise the Technology Readiness Level (TRL) of this technology, the corresponding FM techniques and processes must be validated in a series of experimental tests, including laboratory/ground testing, microgravity flight testing, suborbital flight testing, and orbital testing. Researchers from Embry-Riddle Aeronautical University (ERAU) have joined with the Massachusetts Institute of Technology (MIT) Synchronized Position Hold Engage and Reorient Experimental Satellites (SPHERES) team to develop a prototype FM system for operations aboard the International Space Station (ISS). Testing of the integrated system in a representative environment will raise the FM system to TRL 6. The tests will demonstrate the FM system and provide unique data pertaining to the vehicle's rotational dynamics while undergoing fluid transfer operations. These data sets provide insight into the behavior and physical tendencies of the on-orbit refueling system. Furthermore, they provide a baseline for comparison against the data produced by various computational models, thus verifying the accuracy of the models' output and validating the modeling approach. Once these preliminary models have been validated, the parameters defined by them will provide the basis for development of accurate simulations of full-scale, on-orbit systems. The completion of this project and the models being developed will accelerate the commercialization of on-orbit propellant storage and transfer technologies as well as all in-space technologies that utilize or will utilize similar FM techniques and processes.

  5. Aero-thermo-dynamic analysis of the Spaceliner-7.1 vehicle in high altitude flight

    NASA Astrophysics Data System (ADS)

    Zuppardi, Gennaro; Morsa, Luigi; Sippel, Martin; Schwanekamp, Tobias

    2014-12-01

    SpaceLiner, designed by DLR, is a visionary, extremely fast passenger transportation concept. It consists of two stages: a winged booster and a passenger vehicle. After separation of the two stages, the booster makes a controlled re-entry and returns to the launch site. According to the current project, version 7-1 of SpaceLiner (SpaceLiner-7.1), the vehicle should be brought to an altitude of 75 km and then released, undertaking the descent path. In the perspective that the vehicle of SpaceLiner-7.1 could be brought to altitudes higher than 75 km, e.g. 100 km or above, and also for a speculative purpose, in this paper the aerodynamic parameters of the SpaceLiner-7.1 vehicle are calculated in the whole transition regime, from continuum low density to free molecular flows. Computer simulations have been carried out with three codes: two DSMC codes, DS3V in the altitude interval 100-250 km for the evaluation of the global aerodynamic coefficients and DS2V at the altitude of 60 km for the evaluation of the heat flux and pressure distributions along the vehicle nose, and the DLR HOTSOSE code for the evaluation of the global aerodynamic coefficients in continuum, hypersonic flow at the altitude of 44.6 km. The effectiveness of the flaps with a deflection angle of -35 deg was evaluated in the above-mentioned altitude interval. The vehicle showed longitudinal stability over the whole altitude interval even with no flap. The global bridging formulae proved suitable for the evaluation of the aerodynamic coefficients in the altitude interval 80-100 km, where the computations cannot be fulfilled either by CFD, because of the failure of the classical equations computing the transport coefficients, or by DSMC, because of the requirement of very high computer resources both in terms of core storage (a high number of simulated molecules is needed) and of very long processing times.

  6. Solar space- and water-heating system at Stanford University. Central Food Services Building

    NASA Astrophysics Data System (ADS)

    1980-05-01

    The closed-loop drain-back system is described as offering dependability of gravity drain-back freeze protection, low maintenance, minimal costs, and simplicity. The system features an 840 square-foot collector and storage capacity of 1550 gallons. The acceptance testing and the predicted system performance data are briefly described. Solar performance calculations were performed using a computer design program (FCHART). Bidding, costs, and economics of the system are reviewed. Problems are discussed and solutions and recommendations given. An operation and maintenance manual is given.

  7. Tesla: An application for real-time data analysis in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Aaij, R.; Amato, S.; Anderlini, L.; Benson, S.; Cattaneo, M.; Clemencic, M.; Couturier, B.; Frank, M.; Gligorov, V. V.; Head, T.; Jones, C.; Komarov, I.; Lupton, O.; Matev, R.; Raven, G.; Sciascia, B.; Skwarnicki, T.; Spradlin, P.; Stahl, S.; Storaci, B.; Vesterinen, M.

    2016-11-01

    Upgrades to the LHCb computing infrastructure in the first long shutdown of the LHC have allowed for high quality decay information to be calculated by the software trigger making a separate offline event reconstruction unnecessary. Furthermore, the storage space of the triggered candidate is an order of magnitude smaller than the entire raw event that would otherwise need to be persisted. Tesla is an application designed to process the information calculated by the trigger, with the resulting output used to directly perform physics measurements.

  8. Future remote-sensing programs

    NASA Technical Reports Server (NTRS)

    Schweickart, R. L.

    1975-01-01

    User requirements and methods developed to fulfill them are discussed. Quick-look data, data storage on computer-compatible tape, and an integrated capability for production of images from the whole class of earth-viewing satellites are among the new developments briefly described. The increased capability of LANDSAT-C and Nimbus G and the needs of specialized applications such as urban land use planning, cartography, accurate measurement of small agricultural fields, thermal mapping and coastal zone management are examined. The effect of the space shuttle on remote sensing technology through increased capability is considered.

  9. Experimental, Numerical and Analytical Characterization of Slosh Dynamics Applied to In-Space Propellant Storage, Management and Transfer

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah M.; Kirk, Daniel; Gutierrez, Hector; Marsell, Brandon; Schallhorn, Paul; Lapilli, Gabriel D.

    2015-01-01

    Experimental and numerical results are presented from a new cryogenic fluid slosh program at the Florida Institute of Technology (FIT). Water and cryogenic liquid nitrogen are used in various ground-based tests with an approximately 30 cm diameter spherical tank to characterize damping, slosh mode frequencies, and slosh forces. The experimental results are compared to a computational fluid dynamics (CFD) model for validation. An analytical model is constructed from prior work for comparison. Good agreement is seen between experimental, numerical, and analytical results.

  10. AIAA/NASA International Symposium on Space Information Systems, 2nd, Pasadena, CA, Sept. 17-19, 1990, Proceedings. Vols. 1 & 2

    NASA Technical Reports Server (NTRS)

    Tavenner, Leslie A. (Editor)

    1991-01-01

    These proceedings overview major space information system projects and lessons learned from current missions. Other topics include the science information system requirements for the 1990s, an information systems design approach for major programs, technology needs and projections, standards for space data information systems, artificial intelligence technology and applications, international interoperability, and spacecraft data systems and architectures for advanced communications. Further topics include software engineering technology and applications, multimission multidiscipline information system architectures, distributed planning and scheduling systems and operations, and computer and information systems architectures. Papers presented include prospects for scientific data analysis systems for solar-terrestrial physics in the 1990s, the Columbus data management system, data storage technologies for the future, the German aerospace research establishment, and launching artificial intelligence in NASA ground systems.

  11. Space charge effects on the dielectric response of polymer nanocomposites

    NASA Astrophysics Data System (ADS)

    Shen, Zhong-Hui; Wang, Jian-Jun; Zhang, Xin; Lin, Yuanhua; Nan, Ce-Wen; Chen, Long-Qing; Shen, Yang

    2017-08-01

    Adding high-κ ceramic nanoparticles into polymers is a general strategy to improve the performances in energy storage. Classic effective medium theories may fail to predict the effective permittivity in polymer nanocomposites wherein the space charge effects are important. In this work, a computational model is developed to understand the space charge effects on the frequency-dependent dielectric properties including the real permittivity and the loss for polymer nanocomposites with both randomly distributed and aggregated nanoparticle fillers. It is found that the real permittivity of the SrTiO3/polyethylene (12% SrTiO3 in volume fraction) nanocomposite can be increased to as high as 60 when there is nanoparticle aggregation and the ion concentration in the bulk polymer is around 10^16 cm^-3. This model can be employed to quantitatively predict the frequency-dependent dielectric properties for polymer nanocomposites with arbitrary microstructures.
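
    For context, the "classic effective medium theories" the entry says can fail include the Maxwell-Garnett mixing rule (quoted here as the textbook baseline, not the authors' model), which for spherical fillers of permittivity \varepsilon_f at volume fraction \varphi in a matrix \varepsilon_m predicts

        \varepsilon_{\mathrm{eff}} = \varepsilon_m\,
        \frac{\varepsilon_f + 2\varepsilon_m + 2\varphi(\varepsilon_f - \varepsilon_m)}
             {\varepsilon_f + 2\varepsilon_m - \varphi(\varepsilon_f - \varepsilon_m)},

    a prediction far below the permittivity of about 60 reported above once space charge and filler aggregation dominate.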

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yiqi; Shi, Zheng; Lu, Xingjie

    Terrestrial ecosystems have absorbed roughly 30% of anthropogenic CO2 emissions over the past decades, but it is unclear whether this carbon (C) sink will endure into the future. Despite extensive modeling and experimental and observational studies, what fundamentally determines transient dynamics of terrestrial C storage under global change is still not very clear. Here we develop a new framework for understanding transient dynamics of terrestrial C storage through mathematical analysis and numerical experiments. Our analysis indicates that the ultimate force driving ecosystem C storage change is the C storage capacity, which is jointly determined by ecosystem C input (e.g., net primary production, NPP) and residence time. Since both C input and residence time vary with time, the C storage capacity is time-dependent and acts as a moving attractor that actual C storage chases. The rate of change in C storage is proportional to the C storage potential, which is the difference between the current storage and the storage capacity. The C storage capacity represents instantaneous responses of the land C cycle to external forcing, whereas the C storage potential represents the internal capability of the land C cycle to influence the C change trajectory in the next time step. The influence happens through redistribution of net C pool changes in a network of pools with different residence times. Moreover, this and our other studies have demonstrated that one matrix equation can replicate simulations of most land C cycle models (i.e., physical emulators). As a result, simulation outputs of those models can be placed into a three-dimensional (3-D) parameter space to measure their differences. The latter can be decomposed into traceable components to track the origins of model uncertainty. In addition, the physical emulators make data assimilation computationally feasible so that both C flux- and pool-related datasets can be used to better constrain model predictions of land C sequestration. Overall, this new mathematical framework offers new approaches to understanding, evaluating, diagnosing, and improving land C cycle models.
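
    The quantities named above can be written down for a one-pool illustration (our simplification of the multi-pool matrix framework): with C input u(t) and residence time \tau_E(t),

        X_c(t) = u(t)\,\tau_E(t), \qquad
        X_p(t) = X_c(t) - X(t), \qquad
        \frac{dX}{dt} = u(t) - \frac{X(t)}{\tau_E(t)} = \frac{X_p(t)}{\tau_E(t)},

    so the actual storage X chases the moving attractor X_c at a rate proportional to the storage potential X_p, matching the verbal description in the abstract.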

  13. 19 CFR 19.29 - Sealing of bins or other bonded space.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Bonded for the Storage of Wheat § 19.29 Sealing of bins or other bonded space. The outlets to all bins or other space bonded for the storage of imported wheat shall be sealed by affixing locks or in bond seals... which will effectively prevent the removal of, or access to, the wheat in the bonded space except under...

  14. Solar energy system performance evaluation report for Solaron-Duffield, Duffield, Virginia

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The Solaron Duffield Solar Energy System was designed to provide 51 percent of the space heating and 49 percent of the domestic hot water (DHW) to a two-story, 1940 square foot residence using air as the transport medium. The system consists of a 429 square foot collector array, a 265 cubic foot rock thermal storage bin, heat exchangers, an 80 gallon DHW preheat tank, pumps, blowers, controls, air ducting and associated plumbing. An air-to-liquid heat pump coupled with a 1,000-gallon water storage tank provides for auxiliary space heating and can also be used for space cooling. A 52 gallon electric DHW tank using the solar preheated water provides domestic hot water to the residence. The solar system, which became operational in July 1979, has the following modes of operation: First stage: (1) collector to storage and DHW; (2) collector to space heating; (3) storage to load. Second stage: (4) heat pump auxiliary direct; (5) auxiliary heat from heat pump storage. Third stage: (6) electrical resistance (strip) heat.

  15. Short-term storage allocation in a filmless hospital

    NASA Astrophysics Data System (ADS)

    Strickland, Nicola H.; Deshaies, Marc J.; Reynolds, R. Anthony; Turner, Jonathan E.; Allison, David J.

    1997-05-01

    Optimizing limited short term storage (STS) resources requires gradual, systematic changes, monitored and modified within an operational PACS environment. Optimization of the centralized storage requires a balance of exam numbers and types in STS to minimize lengthy retrievals from long term archive. Changes to STS parameters and work procedures were made while monitoring the effects on resource allocation by analyzing disk space temporally. Proportions of disk space allocated to each patient category on STS were measured to approach the desired proportions in a controlled manner. Key factors for STS management were: (1) sophisticated exam prefetching algorithms: HIS/RIS-triggered, body part-related and historically-selected, and (2) a 'storage onion' design allocating various exam categories to layers with differential deletion protection. Hospitals planning for STS space should consider the needs of radiology, wards, outpatient clinics and clinicoradiological conferences for new and historical exams; desired on-line time; and potential increase in image throughput and changing resources, such as an increase in short term storage disk space.
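
    The "storage onion" can be sketched as a small policy routine (a hypothetical illustration of the layering idea, not the PACS product's code): exams sit in layers with increasing deletion protection, and space is reclaimed from the outermost layer first, oldest exams first:

        # Layers ordered from least to most deletion-protected.
        LAYERS = ["prefetched-historical", "outpatient", "ward", "new-exam"]
        RANK = {layer: i for i, layer in enumerate(LAYERS)}

        def reclaim(exams: list, bytes_needed: int) -> list:
            """Choose deletion victims: outer (least protected) layers, oldest first."""
            victims, freed = [], 0
            for exam in sorted(exams, key=lambda e: (RANK[e["layer"]], e["date"])):
                if freed >= bytes_needed:
                    break
                victims.append(exam)
                freed += exam["size"]
            return victims

        exams = [
            {"id": "A", "layer": "new-exam", "date": "2024-01-02", "size": 500},
            {"id": "B", "layer": "prefetched-historical", "date": "2023-11-01", "size": 300},
            {"id": "C", "layer": "outpatient", "date": "2023-12-15", "size": 400},
        ]
        assert [e["id"] for e in reclaim(exams, 600)] == ["B", "C"]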

  16. Solar energy system performance evaluation report for Solaron-Duffield, Duffield, Virginia

    NASA Astrophysics Data System (ADS)

    1980-07-01

    The Solaron Duffield Solar Energy System was designed to provide 51 percent of the space heating and 49 percent of the domestic hot water (DHW) to a two-story, 1940 square foot residence using air as the transport medium. The system consists of a 429 square foot collector array, a 265 cubic foot rock thermal storage bin, heat exchangers, an 80 gallon DHW preheat tank, pumps, blowers, controls, air ducting and associated plumbing. An air-to-liquid heat pump coupled with a 1,000-gallon water storage tank provides for auxiliary space heating and can also be used for space cooling. A 52 gallon electric DHW tank using the solar preheated water provides domestic hot water to the residence. The solar system, which became operational in July 1979, has the following modes of operation: First stage: (1) collector to storage and DHW; (2) collector to space heating; (3) storage to load. Second stage: (4) heat pump auxiliary direct; (5) auxiliary heat from heat pump storage. Third stage: (6) electrical resistance (strip) heat.

  17. CASKS (Computer Analysis of Storage casKS): A microcomputer based analysis system for storage cask design review. User's manual to Version 1b (including program reference)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.F.; Gerhard, M.A.; Trummer, D.J.

    CASKS (Computer Analysis of Storage casKS) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for evaluating safety analysis reports on spent-fuel storage casks. The bulk of the complete program and this user's manual are based upon the SCANS (Shipping Cask ANalysis System) program previously developed at LLNL. A number of enhancements and improvements were added to the original SCANS program to meet requirements unique to storage casks. CASKS is an easy-to-use system that calculates the global response of storage casks to impact loads, pressure loads and thermal conditions. This provides reviewers with a tool for an independent check on analyses submitted by licensees. CASKS is based on microcomputers compatible with the IBM-PC family of computers. The system is composed of a series of menus, input programs, cask analysis programs, and output display programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  18. Classified one-step high-radix signed-digit arithmetic units

    NASA Astrophysics Data System (ADS)

    Cherri, Abdallah K.

    1998-08-01

    High-radix number systems enable higher information storage density, less complexity, fewer system components, and fewer cascaded gates and operations. A simple one-step fully parallel high-radix signed-digit arithmetic is proposed for parallel optical computing based on new joint spatial encodings. This reduces hardware requirements and improves throughput by reducing the space-bandwidth product needed. The high-radix signed-digit arithmetic operations are based on classifying the neighboring input digit pairs into various groups to reduce the computation rules. A new joint spatial encoding technique is developed to present both the operands and the computation rules. This technique increases the spatial bandwidth product of the spatial light modulators of the system. An optical implementation of the proposed high-radix signed-digit arithmetic operations is also presented. It is shown that our one-step trinary signed-digit and quaternary signed-digit arithmetic units are much simpler and better than all previously reported high-radix signed-digit techniques.
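
    As a small illustration of the redundant digit-set idea underlying signed-digit arithmetic (this sketch shows only the representation; the paper's contribution, the one-step classified adder and its optical joint spatial encoding, is not reproduced here):

        def to_signed_radix4(n: int) -> list:
            """Radix-4 digits in {-1, 0, 1, 2} (a subset of the redundant set
            {-2,...,2}), least significant first; one of several valid encodings."""
            digits = []
            while n:
                d = n % 4            # d in {0, 1, 2, 3}
                if d == 3:           # fold 3 into -1, pushing a carry upward
                    d = -1
                digits.append(d)
                n = (n - d) // 4
            return digits or [0]

        def from_signed_radix4(digits: list) -> int:
            return sum(d * 4**k for k, d in enumerate(digits))

        for n in range(-50, 51):
            assert from_signed_radix4(to_signed_radix4(n)) == n

    The redundancy (several encodings for the same value) is what lets an adder absorb carries within a fixed neighborhood of digit pairs, independent of word length.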

  19. KENNEDY SPACE CENTER, FLA. - Storage boxes and other containers of Columbia debris wait in the Columbia Debris Hangar for transfer to storage in the Vehicle Assembly Building. About 83,000 pieces were shipped to KSC during search and recovery efforts in East Texas.

    NASA Image and Video Library

    2003-09-02

    KENNEDY SPACE CENTER, FLA. - Storage boxes and other containers of Columbia debris wait in the Columbia Debris Hangar for transfer to storage in the Vehicle Assembly Building. About 83,000 pieces were shipped to KSC during search and recovery efforts in East Texas.

  20. Flexible Graphene-based Energy Storage Devices for Space Application Project

    NASA Technical Reports Server (NTRS)

    Calle, Carlos I.

    2014-01-01

    Develop prototype graphene-based reversible energy storage devices that are flexible, thin, lightweight, durable, and that can be easily attached to spacesuits, rovers, landers, and equipment used in space.

  1. Optimizing the Use of Storage Systems Provided by Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H.; Potter, N.; Byrne, D. A.; Ogata, J.; Relph, J.

    2013-12-01

    Cloud computing systems present a set of features that include familiar computing resources (albeit augmented to support dynamic scaling of processing power) bundled with a mix of conventional and unconventional storage systems. The Linux base on which many cloud environments (e.g., Amazon) are built makes it tempting to assume that any Unix software will run efficiently in this environment without change. OPeNDAP and NODC collaborated on a short project to explore how the S3 and Glacier storage systems provided by the Amazon cloud computing infrastructure could be used with a data server developed primarily to access data stored in a traditional Unix file system. Our work used the Amazon cloud system, but we strove for designs that could be adapted easily to other systems like OpenStack. Lastly, we evaluated different architectures from a computer security perspective. We found that there are considerable issues associated with treating S3 as if it were a traditional file system, even though doing so is conceptually simple. These issues include performance penalties: a software tool that emulates a traditional file system to store data in S3 performs poorly compared to storing data directly in S3. We also found there are important benefits beyond performance to ensuring that data written to S3 can be accessed directly, without relying on a specific software tool. To provide a hierarchical organization to the data stored in S3, we wrote 'catalog' files, using XML. These catalog files map discrete files to S3 access keys. Like a traditional file system's directories, the catalogs can also contain references to other catalogs, providing a simple but effective hierarchy overlaid on top of S3's flat storage space. An added benefit of these catalogs is that they can be viewed in a web browser; our storage scheme provides both efficient access for the data server and access via a web browser. We also looked at the Glacier storage system and found that its response characteristics are very different from those of a traditional file system or database; it behaves like a near-line storage system. To be used by a traditional data server, the underlying access protocol must support asynchronous accesses: the Glacier system takes a minimum of four hours to deliver any data object, so systems built with the expectation of instant access (i.e., most web systems) must be fundamentally changed to use Glacier. Part of a related project has been to develop an asynchronous access mode for OPeNDAP, and we have developed a design using that new addition to the DAP protocol with Glacier as a near-line mass store. In summary, we found that both S3 and Glacier require special treatment to be used effectively by a data server. It is important to add new interfaces to data servers that enable them to use these storage devices through their native interfaces. We also found that our designs could easily map to a cloud environment based on OpenStack. Lastly, we noted that while these designs invite more liberal use of remote references to data objects, such references can expose software to new security risks.
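
    The abstract's XML 'catalog' idea can be sketched as follows; element and attribute names are invented for illustration and are not the project's actual schema.

      import xml.etree.ElementTree as ET

      def build_catalog(name, files, child_catalogs):
          """files: {display name: S3 key}; child_catalogs: {name: catalog's S3 key}."""
          root = ET.Element("catalog", name=name)
          for fname, s3_key in files.items():
              ET.SubElement(root, "file", name=fname, key=s3_key)
          for cname, s3_key in child_catalogs.items():
              ET.SubElement(root, "catalogRef", name=cname, key=s3_key)
          return ET.tostring(root, encoding="unicode")

      print(build_catalog(
          "sst",                                  # acts like a directory name
          {"sst_2013_07.nc": "data/8f3a9c"},      # leaf data objects in S3
          {"2012": "catalogs/sst-2012.xml"},      # nested 'directory'
      ))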

  2. Care and Handling of Computer Magnetic Storage Media.

    ERIC Educational Resources Information Center

    Geller, Sidney B.

    Intended for use by data processing installation managers, operating personnel, and technical staff, this publication provides a comprehensive set of care and handling guidelines for the physical/chemical preservation of computer magnetic storage media--principally computer magnetic tapes--and their stored data. Emphasis is placed on media…

  3. Analysis of thermal energy storage material with change-of-phase volumetric effects

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Ibrahim, Mounir B.

    1990-01-01

    The proposed hybrid power system for NASA's Space Station Freedom includes photovoltaic arrays with nickel-hydrogen batteries for energy storage and solar dynamic collectors driving Brayton heat engines with change-of-phase Thermal Energy Storage (TES) devices. A TES device comprises multiple metallic, annular canisters which contain a eutectic-composition LiF-CaF2 Phase Change Material (PCM) that melts at 1040 K. A moderately sophisticated LiF-CaF2 PCM computer model is being developed in three stages considering 1-D, 2-D, and 3-D canister geometries, respectively. The 1-D model results indicate that the void has a marked effect on the phase change process due to PCM displacement and dynamic void heat transfer resistance. Equally influential are the effects of different boundary conditions and liquid PCM natural convection. For the second stage, successful numerical techniques used in the 1-D phase change model are extended to a 2-D (r,z) PCM containment canister model. A prototypical PCM containment canister is analyzed and the results are discussed.
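
    A generic illustration of the class of 1-D phase-change model described above, using the explicit enthalpy method with made-up material properties (not the LiF-CaF2 data, and without the paper's void or convection treatment):

      import numpy as np

      k, rho, cp = 1.7, 2100.0, 1770.0       # conductivity, density, specific heat
      L, Tm = 4.0e5, 1040.0                  # latent heat (J/kg), melt temperature (K)
      nx, dx = 50, 0.002                     # 0.1 m slab
      dt = 0.4 * rho * cp * dx**2 / k        # inside the explicit stability limit

      def temperature(H):
          """Specific enthalpy (J/kg, zero at melt onset) -> temperature."""
          T = np.where(H < 0.0, Tm + H / cp, Tm)            # solid, else mushy
          return np.where(H > L, Tm + (H - L) / cp, T)      # liquid

      H = np.full(nx, -cp * 40.0)            # start 40 K below the melting point
      for _ in range(5000):
          T = temperature(H)
          Tb = np.concatenate(([Tm + 60.0], T, [T[-1]]))    # hot wall / adiabatic end
          H += dt * k * (Tb[2:] - 2.0 * Tb[1:-1] + Tb[:-2]) / (rho * dx**2)

      print("melt fraction:", float(np.clip(H / L, 0.0, 1.0).mean()))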

  4. Manyscale Computing for Sensor Processing in Support of Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.

    2014-09-01

    Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include performance analysis and results in terms of execution time as well as storage, power, and energy consumption for bus-connected and/or networked architectures. The feasibility of the manyscale paradigm is demonstrated by addressing four principal challenges: (1) architectural/structural diversity, parallelism, and locality, (2) masking of I/O and memory latencies, (3) scalability of design as well as implementation, and (4) efficient representation/expression of parallel applications. Examples will demonstrate how manyscale computing helps solve these challenges efficiently on real-world computing systems.
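
    The work-atom idea can be sketched in miniature: a job is partitioned into fixed-size tiles and the tiles are mapped onto a pool of workers standing in for thread blocks. This mirrors the described mapping primitives in spirit only; it is not the Manyscale API.

      from concurrent.futures import ThreadPoolExecutor
      import numpy as np

      def make_atoms(shape, tile):
          """Yield (row, col) slice pairs covering an array of the given shape."""
          for r in range(0, shape[0], tile):
              for c in range(0, shape[1], tile):
                  yield (slice(r, r + tile), slice(c, c + tile))

      image = np.random.rand(1024, 1024)
      out = np.empty_like(image)

      def process(atom):
          out[atom] = image[atom] * 0.5    # trivial per-atom kernel

      with ThreadPoolExecutor(max_workers=8) as pool:   # 8 stand-in "thread blocks"
          list(pool.map(process, make_atoms(image.shape, 128)))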

  5. A system for the input and storage of data in the Besm-6 digital computer

    NASA Technical Reports Server (NTRS)

    Schmidt, K.; Blenke, L.

    1975-01-01

    Computer programs used for the decoding and storage of large volumes of data on the BESM-6 computer are described. The following factors are discussed: the programming control language allows the programs to be run as part of a modular programming system used in data processing; data control is executed in a hierarchically built file on magnetic tape with sequential index storage; and the programs are not dependent on the structure of the data.

  6. Regenerative fuel cell systems for space station

    NASA Technical Reports Server (NTRS)

    Hoberecht, M. A.; Sheibley, D. W.

    1985-01-01

    Regenerative fuel cell (RFC) systems are the leading energy storage candidates for Space Station. Key design features are the advanced state of technology readiness and a high degree of system-level design flexibility. Technology readiness was demonstrated through testing at the single cell, cell stack, mechanical ancillary component, subsystem, and breadboard levels. Design flexibility characteristics include independent sizing of the power and energy storage portions of the system, integration of common reactants with other space station systems, and a wide range of maintenance approaches. These design features led to the selection of an RFC system as the sole electrochemical energy storage technology option for the space station advanced development program.

  7. QoS support for end users of I/O-intensive applications using shared storage systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2011-01-19

    I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While the performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems, because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change in time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to meaningful performance guarantees such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the user's performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.
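
    The two pieces of such a scheme can be sketched as follows, with the machine-learning translation replaced by a deliberately naive proportional model: a user's execution-time goal is turned into an I/O throughput bound, and the bound is enforced by token-bucket pacing. The names and the fixed I/O fraction are invented for illustration.

      import time

      def required_throughput(total_io_bytes, target_runtime_s, io_fraction=0.6):
          """Naive stand-in for the learned model: assume a fixed fraction of the
          runtime is I/O, so the bound is total bytes / (fraction * runtime)."""
          return total_io_bytes / (io_fraction * target_runtime_s)

      class TokenBucket:
          """Paces requests to a bytes-per-second bound (burst: one second)."""
          def __init__(self, rate_bytes_per_s):
              self.rate = rate_bytes_per_s
              self.tokens = 0.0
              self.last = time.monotonic()

          def admit(self, nbytes):             # nbytes must fit the burst size
              while True:
                  now = time.monotonic()
                  self.tokens = min(self.tokens + (now - self.last) * self.rate,
                                    self.rate)
                  self.last = now
                  if self.tokens >= nbytes:
                      self.tokens -= nbytes
                      return
                  time.sleep((nbytes - self.tokens) / self.rate)

      bucket = TokenBucket(required_throughput(10 * 2**30, 3600.0))  # 10 GiB in 1 h
      bucket.admit(4 * 2**20)                  # pace a 4 MiB request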

  8. Space environmental considerations for a long-term cryogenic storage vessel

    NASA Technical Reports Server (NTRS)

    Nakanishi, Shigeo

    1987-01-01

    Information is given on the kind of protection that is needed against impact and perforation of a long-term cryogenic storage vessel in space by meteoroids and space debris. The long-term effects of the space environment on thermal control surfaces and coatings, and the question of whether the insulation and thermal control surfaces should be encased in a vacuum jacket shell are discussed.

  9. In-space inertial energy storage design

    NASA Technical Reports Server (NTRS)

    Studer, P. A.; Evans, H. E.

    1981-01-01

    Flywheel energy storage is a means of significantly improving the performance of space power systems. Two study contracts have been completed to investigate the merits of a magnetically suspended, ironless armature, ring rotor 'Mechanical Capacitor' design. The design of a suitable energy storage system is evaluated, taking into account baseline requirements, the motor generator, details regarding the suspension design, power conditioning, the rotor, and an example design. It appears on the basis of this evaluation that the inertial (flywheel) energy storage design is feasible.
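
    A back-of-envelope sketch of the underlying physics: a thin-ring rotor stores E = 0.5 * I * omega^2 with I = m * r^2, so specific energy grows with the square of rim speed. The numbers below are illustrative, not the study's design point.

      import math

      def ring_flywheel_energy(mass_kg, radius_m, rpm):
          """Kinetic energy (J) of a thin-ring rotor: E = 0.5 * m * r^2 * omega^2."""
          omega = rpm * 2.0 * math.pi / 60.0    # shaft speed, rad/s
          return 0.5 * mass_kg * radius_m**2 * omega**2

      e = ring_flywheel_energy(mass_kg=50.0, radius_m=0.5, rpm=20000)
      print(round(e / 3.6e6, 1), "kWh stored;", round(e / 50.0 / 3600.0), "Wh/kg")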

  10. KENNEDY SPACE CENTER, FLA. - Storage boxes filled with Columbia debris (left) await transfer to storage in the Vehicle Assembly Building. Empty boxes at right wait to be filled with more of the approximately 83,000 pieces shipped to KSC during search and recovery efforts in East Texas.

    NASA Image and Video Library

    2003-09-02

    KENNEDY SPACE CENTER, FLA. - Storage boxes filled with Columbia debris (left) await transfer to storage in the Vehicle Assembly Building. Empty boxes at right wait to be filled with more of the approximately 83,000 pieces shipped to KSC during search and recovery efforts in East Texas.

  11. Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Anderson, Thomas E.; Ousterhout, John K.; Patterson, David A.

    1991-01-01

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are on the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
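
    The interleaving goal can be illustrated with a toy byte-level striping scheme: data are split round-robin across several lanes (devices) so transfers can proceed in parallel. This is a sketch of the principle only, not the project's design.

      STRIPE = 64 * 1024    # stripe unit in bytes

      def stripe(data, n_lanes):
          """Distribute stripe units round-robin across n_lanes 'devices'."""
          lanes = [[] for _ in range(n_lanes)]
          for u, i in enumerate(range(0, len(data), STRIPE)):
              lanes[u % n_lanes].append(data[i:i + STRIPE])
          return lanes

      def unstripe(lanes):
          """Reassemble by visiting lanes in the same round-robin order."""
          out, u = [], 0
          while any(lanes):
              lane = lanes[u % len(lanes)]
              if lane:
                  out.append(lane.pop(0))
              u += 1
          return b"".join(out)

      data = bytes(range(256)) * 2048          # 512 KiB test stream
      assert unstripe(stripe(data, 4)) == data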

  12. How to Use Removable Mass Storage Memory Devices

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2004-01-01

    Mass storage refers to the variety of ways to keep large amounts of information that are used on a computer. Over the years, removable storage devices have grown smaller, increased in capacity, and transferred information to the computer faster. The 8" floppy disk of the early 1970s stored 100 kilobytes, or about 60 typewritten, double-spaced…

  13. Computer memory: the LLL experience. [Octopus computer network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.

    1976-02-01

    Those aspects of Octopus computer network design are reviewed that relate to memory and storage. Emphasis is placed on the difficulties and problems that arise because of the limitations of present storage devices, and indications are made of the directions in which technological advance could be of most value. (auth)

  14. ArrayBridge: Interweaving declarative array processing with high-performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.
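
    The file-side view that ArrayBridge exposes can be approximated with the standard HDF5 API (h5py here): arrays written to HDF5 remain directly readable, including partial (hyperslab) reads, without going through the array database. The dataset names below are invented.

      import h5py
      import numpy as np

      with h5py.File("sim_output.h5", "w") as f:
          f.create_dataset("temperature", data=np.random.rand(1000, 1000),
                           chunks=(100, 100))           # chunked for partial reads

      with h5py.File("sim_output.h5", "r") as f:        # any HDF5 tool can do this
          tile = f["temperature"][200:300, 400:500]     # hyperslab read, no full load
          print(float(tile.mean()))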

  15. 14 CFR 420.66 - Separation distance requirements for storage of hydrogen peroxide, hydrazine, and liquid hydrogen...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Separation distance requirements for storage of hydrogen peroxide, hydrazine, and liquid hydrogen and any incompatible energetic liquids stored within an intraline distance. 420.66 Section 420.66 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION...

  16. 14 CFR 420.66 - Separation distance requirements for storage of hydrogen peroxide, hydrazine, and liquid hydrogen...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Separation distance requirements for storage of hydrogen peroxide, hydrazine, and liquid hydrogen and any incompatible energetic liquids stored within an intraline distance. 420.66 Section 420.66 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION...

  17. The SpaceCube Family of Hybrid On-Board Science Data Processors: An Update

    NASA Astrophysics Data System (ADS)

    Flatley, T.

    2012-12-01

    SpaceCube is an FPGA based on-board hybrid science data processing system developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. The SpaceCube design strategy incorporates commercial rad-tolerant FPGA technology and couples it with an upset mitigation software architecture to provide "order of magnitude" improvements in computing power over traditional rad-hard flight systems. Many of the missions proposed in the Earth Science Decadal Survey (ESDS) will require "next generation" on-board processing capabilities to meet their specified mission goals. Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the ESDS missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information. Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board, thus reducing on-board storage and downlink requirements. This presentation will highlight a number of SpaceCube technology developments to date and describe current and future efforts, including the collaboration with the U.S. Department of Defense - Space Test Program (DoD/STP) on the STP-H4 ISS experiment pallet (launch June 2013) that will demonstrate SpaceCube 2.0 technology on-orbit.

  18. Active holographic interconnects for interfacing volume storage

    NASA Astrophysics Data System (ADS)

    Domash, Lawrence H.; Schwartz, Jay R.; Nelson, Arthur R.; Levin, Philip S.

    1992-04-01

    In order to achieve the promise of terabit/cm^3 data storage capacity for volume holographic optical memory, two technological challenges must be met. Satisfactory storage materials must be developed, and input/output architectures able to match their capacity with corresponding data access rates must also be designed. To date the materials problem has received more attention than devices and architectures for access and addressing. Two philosophies of parallel data access to 3-D storage have been discussed. The bit-oriented approach, represented by recent work on two-photon memories, attempts to store bits at local sites within a volume without affecting neighboring bits. High speed acousto-optic or electro-optic scanners together with dynamically focused lenses not presently available would be required. The second philosophy is that volume optical storage is essentially holographic in nature, and that each data write or read is to be distributed throughout the material volume on the basis of angle multiplexing or other schemes consistent with the principles of holography. The requirements for free space optical interconnects for digital computers and fiber optic network switching interfaces are also closely related to this class of devices. Interconnects, beamlet generators, angle multiplexers, scanners, fiber optic switches, and dynamic lenses are all devices which may be implemented by holographic or microdiffractive devices of various kinds, which we shall refer to collectively as holographic interconnect devices. At present, holographic interconnect devices are either fixed holograms or spatial light modulators. Optically or computer generated holograms (submicron resolution, 2-D or 3-D, encoding 10^13 bits, nearly 100% diffraction efficiency) can implement sophisticated mathematical design principles, but of course once fabricated they cannot be changed. Spatial light modulators offer high speed programmability but have limited resolution (512 x 512 pixels, encoding about 10^6 bits of data) and limited diffraction efficiency. For any application, one must choose between high diffractive performance and programmability.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokhansanj, Shahabaddine; Kuang, Xingya; Shankar, T.S.

    Few papers have been published in the open literature on the emissions from biomass fuels, including wood pellets, during storage and transportation, or on their potential health impacts. The purpose of this study is to provide data on the concentrations, emission factors, and emission rate factors of CO2, CO, and CH4 from wood pellets stored with different headspace to container volume ratios and different initial oxygen levels, in order to develop methods to reduce toxic off-gas emissions and their accumulation in storage spaces. Metal containers (45 l, 305 mm diameter by 610 mm long) were used to study the effect of headspace and oxygen levels on the off-gas emissions from wood pellets. Concentrations of CO2, CO, and CH4 in the headspace were measured using a gas chromatograph as a function of storage time. The results showed that the headspace ratio and the initial oxygen level in the storage space significantly affected the off-gas emissions from wood pellets stored in a sealed container. Higher peak emission factors and higher emission rates are associated with higher headspace ratios. Lower emissions of CO2 and CO were generated at room temperature under lower oxygen levels, whereas CH4 emission is insensitive to the oxygen level. Replacing oxygen with inert gases in the storage space is thus a potentially effective method to reduce biomass degradation and toxic off-gas emissions. Proper ventilation of the storage space can also be used to maintain a high oxygen level and low concentrations of toxic off-gassing compounds in the storage space, which is especially useful during loading and unloading operations to control the hazards associated with the storage and transportation of wood pellets.

  20. Methods and Apparatus for Aggregation of Multiple Pulse Code Modulation Channels into a Single Time Division Multiplexing Stream

    NASA Technical Reports Server (NTRS)

    Chang, Chen J. (Inventor); Liaghati, Jr., Amir L. (Inventor); Liaghati, Mahsa L. (Inventor)

    2018-01-01

    Methods and apparatus are provided for telemetry processing using a telemetry processor. The telemetry processor can include a plurality of communications interfaces, a computer processor, and data storage. The telemetry processor can buffer sensor data by: receiving a frame of sensor data using a first communications interface and clock data using a second communications interface, receiving an end of frame signal using a third communications interface, and storing the received frame of sensor data in the data storage. After buffering the sensor data, the telemetry processor can generate an encapsulated data packet including a single encapsulated data packet header, the buffered sensor data, and identifiers identifying telemetry devices that provided the sensor data. A format of the encapsulated data packet can comply with a Consultative Committee for Space Data Systems (CCSDS) standard. The telemetry processor can send the encapsulated data packet using a fourth and a fifth communications interfaces.
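
    A minimal sketch of the packet-building step: buffered sensor frames are placed behind a single CCSDS-style Space Packet primary header (six octets). The header layout follows the public CCSDS Space Packet standard; the payload framing and field choices here are illustrative, not the patent's format.

      import struct

      def ccsds_packet(apid, seq_count, payload):
          """Prepend a 6-octet CCSDS Space Packet primary header (version 0,
          telemetry type, no secondary header, unsegmented user data)."""
          word0 = (0 << 13) | (0 << 12) | (0 << 11) | (apid & 0x7FF)
          word1 = (0b11 << 14) | (seq_count & 0x3FFF)
          return struct.pack(">HHH", word0, word1, len(payload) - 1) + payload

      # invented payload framing: each buffered frame tagged with a device id
      frames = [(1, b"\x10\x20\x30"), (2, b"\x40\x50")]
      payload = b"".join(struct.pack(">BH", dev, len(d)) + d for dev, d in frames)
      pkt = ccsds_packet(apid=0x42, seq_count=7, payload=payload)
      assert len(pkt) == 6 + len(payload)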

  1. Data Storage and sharing for the long tail of science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B.; Pouchard, L.; Smith, P. M.

    Research data infrastructure such as storage must now accommodate new requirements resulting from trends in research data management that require researchers to store their data for the long term and make it available to other researchers. We propose Data Depot, a system and service that provides capabilities for shared space within a group, shared applications, flexible access patterns and ease of transfer at Purdue University. We evaluate Depot as a solution for storing and sharing multi-terabytes of data produced in the long tail of science with a use case in soundscape ecology studies from the Human-Environment Modeling and Analysis Laboratory. We observe that with the capabilities enabled by Data Depot, researchers can easily deploy fine-grained data access control, manage data transfer and sharing, as well as integrate their workflows into a High Performance Computing environment.

  2. Lost in space: Onboard star identification using CCD star tracker data without an a priori attitude

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor A.; Tolson, Robert H.

    1993-01-01

    There are many algorithms in use today which determine spacecraft attitude by identifying stars in the field of view of a star tracker. Some methods, which date from the early 1960's, compare the angular separation between observed stars with a small catalog. In the last 10 years, several methods have been developed which speed up the process and reduce the amount of memory needed, a key element to onboard attitude determination. However, each of these methods requires some a priori knowledge of the spacecraft attitude. Although the Sun and magnetic field generally provide the necessary coarse attitude information, there are occasions when a spacecraft could get lost at times when it is not prudent to wait for sunlight. Also, the possibility of efficient attitude determination using only the highly accurate CCD star tracker could lead to fully autonomous spacecraft attitude determination. The need for redundant coarse sensors could thus be eliminated at substantial cost reduction. Some groups have extended their algorithms to implement a computation-intensive full sky scan. Some require large databases. Both storage and speed are concerns for autonomous onboard systems. Neural network technology is even being explored by some as a possible solution, but because of the limited number of patterns that can be stored and large overhead, nothing concrete has resulted from these efforts. This paper presents an algorithm which, by discretizing the sky and filtering by the visual magnitude of the brightest observed star, speeds up the lost-in-space star identification process while reducing the amount of necessary onboard computer storage compared to existing techniques.
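
    The two pruning ideas, discretizing the sky into cells and filtering the catalog by visual magnitude, can be sketched as follows; the catalog layout, cell size, and magnitude threshold are illustrative (and right-ascension wraparound is ignored for brevity).

      from collections import defaultdict

      CELL_DEG = 5.0       # cell size; real systems tune this

      def cell(ra_deg, dec_deg):
          return (int(ra_deg // CELL_DEG), int(dec_deg // CELL_DEG))

      def build_index(catalog, mag_limit=5.0):
          """catalog: iterable of (ra, dec, visual magnitude). Keeping only
          bright stars is what shrinks onboard storage and the search space."""
          index = defaultdict(list)
          for ra, dec, mag in catalog:
              if mag <= mag_limit:
                  index[cell(ra, dec)].append((ra, dec, mag))
          return index

      def candidates(index, ra, dec):
          """Bright catalog stars in the 3x3 block of cells around a direction."""
          c0, c1 = cell(ra, dec)
          return [s for dr in (-1, 0, 1) for dd in (-1, 0, 1)
                  for s in index.get((c0 + dr, c1 + dd), [])]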

  3. Discrepancy between mRNA and protein abundance: Insight from information retrieval process in computers

    PubMed Central

    Wang, Degeng

    2008-01-01

    Discrepancy between the abundance of cognate protein and RNA molecules is frequently observed. A theoretical understanding of this discrepancy remains elusive, and it is frequently described in the literature in terms of surprises and/or technical difficulties. Protein and RNA represent different steps of the multi-stepped cellular genetic information flow process, in which they are dynamically produced and degraded. This paper explores a comparison with a similar process in computers - the multi-step flow of information from the storage level to the execution level. Functional similarities can be found in almost every facet of the retrieval process. Firstly, a common architecture is shared, as the ribonome (RNA space) and the proteome (protein space) are functionally similar to the computer primary memory and the computer cache memory respectively. Secondly, the retrieval process functions, in both systems, to support the operation of dynamic networks – biochemical regulatory networks in cells and, in computers, the virtual networks (of CPU instructions) that the CPU travels through while executing computer programs. Moreover, many regulatory techniques are implemented in computers at each step of the information retrieval process, with a goal of optimizing system performance. Cellular counterparts can be easily identified for these regulatory techniques. In other words, this comparative study attempts to use theoretical insight from computer system design principles as a catalyst to sketch an integrative view of the gene expression process, that is, how it functions to ensure efficient operation of the overall cellular regulatory network. In the context of this bird’s-eye view, the discrepancy between protein and RNA abundance becomes a logical observation one would expect. It is suggested that this discrepancy, when interpreted in the context of system operation, serves as a potential source of information to decipher the regulatory logic underneath biochemical network operation. PMID:18757239

  4. Discrepancy between mRNA and protein abundance: insight from information retrieval process in computers.

    PubMed

    Wang, Degeng

    2008-12-01

    Discrepancy between the abundance of cognate protein and RNA molecules is frequently observed. A theoretical understanding of this discrepancy remains elusive, and it is frequently described in the literature in terms of surprises and/or technical difficulties. Protein and RNA represent different steps of the multi-stepped cellular genetic information flow process, in which they are dynamically produced and degraded. This paper explores a comparison with a similar process in computers - the multi-step flow of information from the storage level to the execution level. Functional similarities can be found in almost every facet of the retrieval process. Firstly, a common architecture is shared, as the ribonome (RNA space) and the proteome (protein space) are functionally similar to the computer primary memory and the computer cache memory, respectively. Secondly, the retrieval process functions, in both systems, to support the operation of dynamic networks - biochemical regulatory networks in cells and, in computers, the virtual networks (of CPU instructions) that the CPU travels through while executing computer programs. Moreover, many regulatory techniques are implemented in computers at each step of the information retrieval process, with a goal of optimizing system performance. Cellular counterparts can be easily identified for these regulatory techniques. In other words, this comparative study attempts to use theoretical insight from computer system design principles as a catalyst to sketch an integrative view of the gene expression process, that is, how it functions to ensure efficient operation of the overall cellular regulatory network. In the context of this bird's-eye view, the discrepancy between protein and RNA abundance becomes a logical observation one would expect. It is suggested that this discrepancy, when interpreted in the context of system operation, serves as a potential source of information to decipher the regulatory logic underneath biochemical network operation.

  5. Global Software Development with Cloud Platforms

    NASA Astrophysics Data System (ADS)

    Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya

    Offshore and outsourced distributed software development models and processes are facing challenges, previously unknown, with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design, and our first implementation results for three cloud forms - a compute cloud, a storage cloud, and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, our "storage cloud" offers storage (block or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD, the lessons learned with our prototypes, and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means of supporting an ecosystem of clients, developers, and other key stakeholders.

  6. Plant engineers solar energy handbook. [Includes glossaries]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1978-01-21

    This handbook provides plant engineers with factual information on solar energy technology and on the various methods for assessing the future potential of this alternative energy source. The following areas are covered: solar components and systems (collectors, storage, service hot-water systems, space heating with liquid and air systems, space cooling, heat pumps and controls); computer programs for system optimization; local solar and weather data; a description of buildings and plants in the San Francisco Bay Area applying solar technology; current Federal and California solar legislation; standards, codes, and performance testing information; a listing of manufacturers, distributors, and professional services that are available in Northern California; and information access. Finally, solar design checklists are provided for those engineers who wish to design their own systems. (MHR)

  7. Metal Hydrides, MOFs, and Carbon Composites as Space Radiation Shielding Mitigators

    NASA Technical Reports Server (NTRS)

    Atwell, William; Rojdev, Kristina; Liang, Daniel; Hill, Matthew

    2014-01-01

    Recently, metal hydrides and MOFs (Metal-Organic Framework/microporous organic polymer composites, studied for their hydrogen and methane storage capabilities) have been examined for applications in fuel cell technology. We have investigated a dual use of these materials, together with carbon composites (CNT-HDPE), for space radiation shielding mitigation. In this paper we present the results of a detailed study in which we analyzed 64 materials. We used the Band-fit spectra for the combined 19-24 October 1989 solar proton events as the input source-term radiation environment. These computational analyses were performed with the NASA high-energy particle transport/dose code HZETRN. Through this analysis we have identified several materials that have excellent radiation shielding properties, and the details of this analysis will be discussed further in the paper.

  8. Archiving and access systems for remote sensing: Chapter 6

    USGS Publications Warehouse

    Faundeen, John L.; Percivall, George; Baros, Shirley; Baumann, Peter; Becker, Peter H.; Behnke, J.; Benedict, Karl; Colaiacomo, Lucio; Di, Liping; Doescher, Chris; Dominguez, J.; Edberg, Roger; Ferguson, Mark; Foreman, Stephen; Giaretta, David; Hutchison, Vivian; Ip, Alex; James, N.L.; Khalsa, Siri Jodha S.; Lazorchak, B.; Lewis, Adam; Li, Fuqin; Lymburner, Leo; Lynnes, C.S.; Martens, Matt; Melrose, Rachel; Morris, Steve; Mueller, Norman; Navale, Vivek; Navulur, Kumar; Newman, D.J.; Oliver, Simon; Purss, Matthew; Ramapriyan, H.K.; Rew, Russ; Rosen, Michael; Savickas, John; Sixsmith, Joshua; Sohre, Tom; Thau, David; Uhlir, Paul; Wang, Lan-Wei; Young, Jeff

    2016-01-01

    Focuses on major developments inaugurated by the Committee on Earth Observation Satellites, the Group on Earth Observations System of Systems, and the International Council for Science World Data System at the global level; initiatives at national levels to create data centers (e.g. the National Aeronautics and Space Administration (NASA) Distributed Active Archive Centers and other international space agency counterparts), and non-government systems (e.g. Center for International Earth Science Information Network). Other major elements focus on emerging tool sets, requirements for metadata, data storage and refresh methods, the rise of cloud computing, and questions about what and how much data should be saved. The sub-sections of the chapter address topics relevant to the science, engineering and standards used for state-of-the-art operational and experimental systems.

  9. Water recovery and management test support modeling for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Mohamadinejad, Habib; Bacskay, Allen S.

    1990-01-01

    The water-recovery and management (WRM) subsystem proposed for the Space Station Freedom program is outlined, and its computerized modeling and simulation based on a Computer Aided System Engineering and Analysis (CASE/A) program are discussed. A WRM test model consisting of a pretreated urine processing (TIMES), hygiene water processing (RO), RO brine processing using TIMES, and hygiene water storage is presented. Attention is drawn to such end-user equipment characteristics as the shower, dishwasher, clotheswasher, urine-collection facility, and handwash. The transient behavior of pretreated-urine, RO waste-hygiene, and RO brine tanks is assessed, as well as the total input/output to or from the system. The model is considered to be beneficial for pretest analytical predictions as a program cost-saving feature.

  10. The Virtual Earth-Solar Observatory of the SCiESMEX

    NASA Astrophysics Data System (ADS)

    De la Luz, V.; Gonzalez-Esparza, A.; Cifuentes-Nava, G.

    2015-12-01

    The Mexican Space Weather Service (SCiESMEX, http://www.sciesmex.unam.mx) started operations in October 2014. The project includes the Virtual Earth-Solar Observatory (VESO, http://www.veso.unam.mx). VESO is an expanded project whose objective is to integrate the space weather instrumentation network of the National Autonomous University of Mexico (UNAM). The network includes the Mexican Array Radiotelescope (MEXART), the Callisto receiver (at MEXART), a Neutron Telescope, a Cosmic Ray Telescope, the Schumann Antenna, the National Magnetic Service, and the Mexican GPS network (TlalocNet). The VESO facility is located at the Geophysics Institute campus in Michoacan (UNAM). We offer data storage, real-time data, and quasi-real-time data services. The VESO hardware includes a High Performance Computing (HPC) system dedicated especially to big data storage.

  11. Subjective evaluation with FAA criteria: A multidimensional scaling approach. [ground track control management

    NASA Technical Reports Server (NTRS)

    Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.

    1975-01-01

    Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds, as opposed to the 5 minutes of simulated flight, using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.

  12. Advanced program development management software system. Software description and user's manual

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The objectives of this project were to apply emerging techniques and tools from the computer science discipline of paperless management to the activities of the Space Transportation and Exploration Office (PT01) in Marshall Space Flight Center (MSFC) Program Development, thereby enhancing the productivity of the workforce, the quality of the data products, and the collection, dissemination, and storage of information. The approach used to accomplish the objectives emphasized the utilization of finished form (off-the-shelf) software products to the greatest extent possible without impacting the performance of the end product, to pursue developments when necessary in the rapid prototyping environment to provide a mechanism for frequent feedback from the users, and to provide a full range of user support functions during the development process to promote testing of the software.

  13. Measuring the impact of computer resource quality on the software development process and product

    NASA Technical Reports Server (NTRS)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process was speculated to have measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product as exemplified by the subject NASA data was examined. Based upon the results, a number of computer resource-related implications are provided.

  14. Automation of electromagnetic compatability (EMC) test facilities

    NASA Technical Reports Server (NTRS)

    Harrison, C. A.

    1986-01-01

    Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedures, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.

  15. Standardised Embedded Data framework for Drones [SEDD]

    NASA Astrophysics Data System (ADS)

    Wyngaard, J.; Barbieri, L.; Peterson, F. S.

    2015-12-01

    A number of barriers to entry remain for UAS use in science. One in particular is that of implementing an experiment- and UAS-specific software stack. Currently this stack is most often developed in-house and customised for a particular UAS-sensor pairing - limiting its reuse. Alternatively, a suitable commercial package may be used when one is adaptable, but such systems are both costly and usually suboptimal. In order to address this challenge, the Standardised Embedded Data framework for Drones [SEDD] is being developed in μpython. SEDD provides an open source, reusable, and scientist-accessible drop-in solution for drone data capture and triage. Targeted at embedded hardware, and offering easy access to standard I/O interfaces, SEDD provides an easy solution for simply capturing data from a sensor. However, the intention is rather to enable more complex systems of multiple sensors, computer hardware, and feedback loops, via three primary components. A data asset manager ensures data assets are associated with appropriate metadata as they are captured. Thereafter, the asset is easily archived or otherwise redirected, possibly to onboard storage, onboard compute resources for processing, an interface for transmission, another sensor control system, remote storage and processing (such as EarthCube's CHORDS), or any combination of the above. A service workflow manager enables easy implementation of complex onboard systems via dedicated control of multiple continuous and periodic services. Such services will include the housekeeping chores of operating a UAS and multiple sensors, but will also permit a scientist to drop in initial scientific data processing code utilising on-board compute resources beyond the autopilot. Having such capabilities firstly enables easy creation of real-time feedback, to the human- or auto-pilot, or other sensors, on data quality or needed flight path changes. Secondly, compute hardware provides the opportunity to carry out real-time data triage, for the purposes of conserving on-board storage space or transmission bandwidth in inherently poor connectivity environments. A compute manager is finally included. Depending on system complexity, and given the need for power-efficient parallelism, it can quickly become necessary to provide a scheduling service for multiple workflows.
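
    A hedged sketch of the data asset manager role (the names and fields are invented, not the SEDD API): each captured reading is wrapped with metadata at capture time and handed to one or more configurable sinks.

      import json, time

      def make_asset(sensor_id, reading, flight_id="flight-001"):
          """Wrap a raw reading with capture-time metadata."""
          return {"sensor": sensor_id, "flight": flight_id,
                  "t_unix": time.time(), "data": reading}

      def archive_sink(asset, path="assets.jsonl"):
          with open(path, "a") as f:          # append-only capture log
              f.write(json.dumps(asset) + "\n")

      sinks = [archive_sink]                  # a radio or triage sink could be added
      for sink in sinks:
          sink(make_asset("pth-probe", {"temp_C": 21.4, "rh_pct": 55.0}))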

  16. Computational Analysis of Nanoparticles-Molten Salt Thermal Energy Storage for Concentrated Solar Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Vinod

    2017-05-05

    High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank nanofluidized molten salt thermocline TES system under various concentrations and sizes of the suspended particles. Our objective was to utilize sensible heat with the least irreversibility by exploiting nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has the potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.

  17. Computer predictions of ground storage effects on performance of Galileo and ISPM generators

    NASA Technical Reports Server (NTRS)

    Chmielewski, A.

    1983-01-01

    Radioisotope Thermoelectric Generators (RTG) that will supply electrical power to the Galileo and International Solar Polar Mission (ISPM) spacecraft are exposed to several degradation mechanisms during the prolonged ground storage before launch. To assess the effect of storage on the RTG flight performance, a computer code has been developed which simulates all known degradation mechanisms that occur in an RTG during storage and flight. The modeling of these mechanisms and their impact on the RTG performance are discussed.
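
    A generic stand-in for this kind of model: RTG thermal power declines with Pu-238 decay (87.7-year half-life) while the thermoelectric converter degrades at different rates in storage and in flight. The degradation rate constants below are invented for illustration; only the half-life is physical.

      import math

      def rtg_power(p_bol_watts, years_total, years_storage,
                    k_storage=0.010, k_flight=0.008):
          """Electrical power after years_total, the first years_storage on the
          ground. Fuel decay applies throughout; converter degradation rates
          differ between ground storage and flight (rates here are invented)."""
          fuel = math.exp(-math.log(2.0) / 87.7 * years_total)   # Pu-238 decay
          t_s = min(years_total, years_storage)
          t_f = max(0.0, years_total - years_storage)
          converter = math.exp(-k_storage * t_s - k_flight * t_f)
          return p_bol_watts * fuel * converter

      print(round(rtg_power(285.0, 8.0, 3.0), 1))   # e.g. 3 y storage + 5 y flight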

  18. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 2

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Application. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include the following: magnetic disk and tape technologies; optical disk and tape; software storage and file management systems; and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  19. An efficient sparse matrix multiplication scheme for the CYBER 205 computer

    NASA Technical Reports Server (NTRS)

    Lambiotte, Jules J., Jr.

    1988-01-01

    This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types were chosen to be efficient on the CYBER 205 for diagonals whose nonzero structure is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no single storage type is most efficient with respect to both resource requirements, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
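
    The diagonal-based product itself can be sketched as follows: each stored diagonal contributes a long vector operation d[i] * x[i + offset], the access pattern the CYBER 205 favored. The per-diagonal storage-type selection and user weighting described above are not reproduced.

      import numpy as np

      def dia_matvec(n, diagonals, x):
          """diagonals: {offset k: entries of A[i, i+k], length n - |k|}."""
          y = np.zeros(n)
          for k, d in diagonals.items():
              if k >= 0:
                  y[:n - k] += d * x[k:]       # super-diagonal: long vector op
              else:
                  y[-k:] += d * x[:n + k]      # sub-diagonal
          return y

      n = 6
      A = {0: 2.0 * np.ones(6), 1: -np.ones(5), -1: -np.ones(5)}  # 1-D Laplacian
      x = np.arange(6.0)
      assert np.allclose(dia_matvec(n, A, x),
                         2.0 * x - np.r_[x[1:], 0.0] - np.r_[0.0, x[:-1]])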

  20. Compendium of Authenticated Systems and Logistics Terms, Definitions and Acronyms

    DTIC Science & Technology

    1981-04-01

    OTHER NON-WAREHOUSE SPACE: Space being used for storage within a structure designed for other than storage operations.

  1. Transient dynamics of terrestrial carbon storage: Mathematical foundation and numeric examples

    DOE PAGES

    Luo, Yiqi; Shi, Zheng; Lu, Xingjie; ...

    2016-09-16

    Terrestrial ecosystems have absorbed roughly 30% of anthropogenic CO2 emissions since the preindustrial era, but it is unclear whether this carbon (C) sink will endure into the future. Despite extensive modeling, experimental, and observational studies, what fundamentally determines the transient dynamics of terrestrial C storage under climate change is still not very clear. Here we develop a new framework for understanding transient dynamics of terrestrial C storage through mathematical analysis and numerical experiments. Our analysis indicates that the ultimate force driving ecosystem C storage change is the C storage capacity, which is jointly determined by ecosystem C input (e.g., net primary production, NPP) and residence time. Since both C input and residence time vary with time, the C storage capacity is time-dependent and acts as a moving attractor that actual C storage chases. The rate of change in C storage is proportional to the C storage potential, the difference between the current storage and the storage capacity. The C storage capacity represents instantaneous responses of the land C cycle to external forcing, whereas the C storage potential represents the internal capability of the land C cycle to influence the C change trajectory in the next time step. The influence happens through redistribution of net C pool changes in a network of pools with different residence times. Furthermore, this and our other studies have demonstrated that one matrix equation can exactly replicate simulations of most land C cycle models (i.e., physical emulators). As a result, simulation outputs of those models can be placed into a three-dimensional (3D) parameter space to measure their differences. The latter can be decomposed into traceable components to track the origins of model uncertainty. Moreover, the emulators make data assimilation computationally feasible so that both C flux- and pool-related datasets can be used to better constrain model predictions of land C sequestration. We also propose that the C storage potential be the targeted variable for research, market trading, and government negotiation for C credits.
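
    A one-pool numeric toy of the framework: storage X chases a time-varying capacity Xc(t) = u(t) * tau(t), and the rate of change is proportional to the storage potential Xc - X. The forcing functions are invented for illustration.

      import numpy as np

      dt = 0.1
      t = np.arange(0.0, 200.0, dt)            # years
      u = 50.0 * (1.0 + 0.002 * t)             # rising C input (e.g. NPP), g C/m2/yr
      tau = 20.0 * (1.0 - 0.001 * t)           # residence time shrinking with warming
      Xc = u * tau                             # moving attractor: storage capacity

      X = np.empty_like(t)
      X[0] = Xc[0]                             # start at the initial capacity
      for i in range(1, len(t)):
          # dX/dt = u - X/tau = (Xc - X)/tau: rate proportional to the potential
          X[i] = X[i - 1] + dt * (u[i - 1] - X[i - 1] / tau[i - 1])

      print("capacity, storage, potential at year 200:",
            Xc[-1].round(1), X[-1].round(1), (Xc[-1] - X[-1]).round(1))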

  3. ROI-Based On-Board Compression for Hyperspectral Remote Sensing Images on GPU.

    PubMed

    Giordano, Rossella; Guccione, Pietro

    2017-05-19

    In recent years, hyperspectral sensors for Earth remote sensing have become very popular. Such systems are able to provide the user with images having both spectral and spatial information. The current hyperspectral spaceborne sensors are able to capture large areas with increased spatial and spectral resolution. For this reason, the volume of acquired data needs to be reduced on board in order to avoid a low orbital duty cycle due to limited storage space. Recently, the literature has focused on efficient ways to perform on-board data compression. This is a challenging task due to the difficult environment (outer space) and the limited time, power, and computing resources. Often, the hardware properties of Graphics Processing Units (GPUs) have been exploited to reduce processing time through parallel computing. The current work proposes a framework for on-board operation on a GPU, using NVIDIA's CUDA (Compute Unified Device Architecture). The algorithm aims at performing on-board compression using a target-related strategy. In detail, the main operations are: the automatic recognition of land cover types, or the detection of events in near real time in regions of interest (a user-related choice), with an unsupervised classifier; the compression of specific regions at space-variant bit rates using Principal Component Analysis (PCA), wavelet transforms, and arithmetic coding; and management of the data volume delivered to the ground station. Experiments are provided using a real dataset taken from an AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) airborne sensor over a harbor area.
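
    A minimal sketch of the ROI-driven, space-variant bit-rate idea, assuming a crude k-means stand-in for the unsupervised classifier; the class labels, bit-rate table, and function names are illustrative, not taken from the paper:

        # Classify pixels of a hyperspectral cube (H x W x B) into ROIs,
        # then budget more bits per pixel for the interesting classes.
        # The real system follows classification with PCA + wavelet +
        # arithmetic coding per region; here only the budget is computed.
        import numpy as np

        def assign_rois(cube, n_classes=3, iters=10):
            """Cluster pixels by spectral signature (toy k-means)."""
            h, w, b = cube.shape
            pixels = cube.reshape(-1, b).astype(float)
            centers = pixels[np.random.choice(len(pixels), n_classes, replace=False)]
            for _ in range(iters):
                d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(1)
                for k in range(n_classes):
                    if (labels == k).any():
                        centers[k] = pixels[labels == k].mean(0)
            return labels.reshape(h, w)

        def budget_bits(labels, bpp={0: 0.5, 1: 2.0, 2: 4.0}):
            """Total bit budget: background cheap, regions of interest expensive."""
            return sum(bpp[k] * (labels == k).sum() for k in bpp)

        cube = np.random.rand(64, 64, 32)            # synthetic scene
        rois = assign_rois(cube)
        print(f"on-board budget: {budget_bits(rois) / 8 / 1024:.1f} KiB")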

  4. Advanced energy storage for space applications: A follow-up

    NASA Technical Reports Server (NTRS)

    Halpert, Gerald; Surampudi, Subbarao

    1994-01-01

    Viewgraphs on advanced energy storage for space applications are presented. Topics covered include: categories of space missions using batteries; battery challenges; properties of SOA and advanced primary batteries; lithium primary cell applications; advanced rechargeable battery applications; present limitations of advanced battery technologies; and status of Li-TiS2, Ni-MH, and Na-NiCl2 cell technologies.

  5. Space Power Architectures for NASA Missions: The Applicability and Benefits of Advanced Power and Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Hoffman, David J.

    2001-01-01

    The relative importance of electrical power systems as compared with other spacecraft bus systems is examined. The quantified benefits of advanced space power architectures for NASA Earth Science, Space Science, and Human Exploration and Development of Space (HEDS) missions are then presented. Advanced space power technologies highlighted include high specific power solar arrays, regenerative fuel cells, Stirling radioisotope power sources, flywheel energy storage and attitude control, lithium ion polymer energy storage, and advanced power management and distribution.

  6. Damsel: A Data Model Storage Library for Exascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Liao, Wei-keng

    Computational science applications have been described as having one of seven motifs (the "seven dwarfs"), each with a particular pattern of computation and communication. From a storage and I/O perspective, these applications can also be grouped into a number of data model motifs describing the way data is organized and accessed during simulation, analysis, and visualization. Major storage data model efforts of the 1990s, such as the Network Common Data Format (netCDF) and Hierarchical Data Format (HDF) projects, created support for more complex data models. The development of both netCDF and HDF5 was influenced by multi-dimensional dataset storage requirements, but their access models and formats were designed with sequential storage in mind (e.g., a POSIX I/O model). Although these and other high-level I/O libraries have had a beneficial impact on large parallel applications, they do not always attain a high percentage of peak I/O performance due to fundamental design limitations, and they do not address the full range of current and future computational science data models. The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. The project consists of three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community. The product of this project, the Damsel library, is openly available for download from http://cucis.ece.northwestern.edu/projects/DAMSEL. Several case studies and an application programming interface reference are also available to help new users learn to use the library.

  7. VIEW OF INTERIOR SPACE WITH ANODIZING TANK AND LIQUID BIN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF INTERIOR SPACE WITH ANODIZING TANK AND LIQUID BIN STORAGE TANK IN FOREGROUND, FACING NORTH. - Douglas Aircraft Company Long Beach Plant, Aircraft Parts Receiving & Storage Building, 3855 Lakewood Boulevard, Long Beach, Los Angeles County, CA

  8. Comprehensive monitoring for heterogeneous geographically distributed storage

    DOE PAGES

    Ratnikova, Natalia; Karavakis, E.; Lammel, S.; ...

    2015-12-23

    Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014, and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted as centrally managed storage space, will be increased to up to 40%. For comprehensive tracking and monitoring of storage utilization across all participating sites, CMS developed a space monitoring system which provides a central view of the geographically dispersed heterogeneous storage systems. The first prototype was deployed at pilot sites in summer 2014, and has been substantially reworked since then. In this paper, we discuss the functionality of the system and our experience with its deployment and operation at the full CMS scale.

  9. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File System techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
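
    A minimal sketch of the write-side compression split described above, with the log-structured layout mimicked by an append-only byte log plus an in-memory index; all class and function names are illustrative, not from the patent:

        # Client compresses before shipping the chunk; the storage node
        # appends compressed records and keeps an index for reads.
        import zlib

        class StorageNode:
            def __init__(self):
                self.log = bytearray()      # append-only data log
                self.index = {}             # logical offset -> (pos, clen, ulen)

            def write(self, offset, compressed, ulen):
                self.index[offset] = (len(self.log), len(compressed), ulen)
                self.log += compressed

            def read(self, offset):
                pos, clen, ulen = self.index[offset]
                data = zlib.decompress(bytes(self.log[pos:pos + clen]))
                assert len(data) == ulen    # sanity check on decompressed size
                return data

        def client_write(node, offset, chunk):
            """Client compresses the chunk, then ships it to the storage node."""
            node.write(offset, zlib.compress(chunk), len(chunk))

        node = StorageNode()
        client_write(node, 0, b"x" * 4096)
        print(node.read(0) == b"x" * 4096)  # True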

  10. Space-charge-sustained microbunch structure in the Los Alamos Proton Storage Ring

    NASA Astrophysics Data System (ADS)

    Cousineau, S.; Danilov, V.; Holmes, J.; Macek, R.

    2004-09-01

    We present experimental data from the Los Alamos Proton Storage Ring (PSR) showing long-lived linac microbunch structure during beam storage with no rf bunching. Analysis of the experimental data and particle-in-cell simulations of the experiments indicate that space charge, coupled with energy spread effects, is responsible for the sustained microbunch structure. The simulated longitudinal phase space of the beam reveals a well-defined separatrix in the phase space between linac microbunches, with particles executing unbounded motion outside of the separatrix. We show that the longitudinal phase space of the beam was near steady state during the PSR experiments, such that the separatrix persisted for long periods of time. Our simulations indicate that the steady state is very sensitive to the experimental conditions. Finally, we solve the steady-state problem in an analytic, self-consistent fashion for a set of periodic longitudinal space-charge potentials.

  11. 29 CFR 1926.857 - Storage.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 8 2012-07-01 2012-07-01 false Storage. 1926.857 Section 1926.857 Labor Regulations...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Demolition § 1926.857 Storage. (a) The storage of waste... provide storage space for debris, provided falling material is not permitted to endanger the stability of...

  12. 29 CFR 1926.857 - Storage.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 8 2011-07-01 2011-07-01 false Storage. 1926.857 Section 1926.857 Labor Regulations...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Demolition § 1926.857 Storage. (a) The storage of waste... provide storage space for debris, provided falling material is not permitted to endanger the stability of...

  13. 29 CFR 1926.857 - Storage.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 8 2014-07-01 2014-07-01 false Storage. 1926.857 Section 1926.857 Labor Regulations...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Demolition § 1926.857 Storage. (a) The storage of waste... provide storage space for debris, provided falling material is not permitted to endanger the stability of...

  14. 29 CFR 1926.857 - Storage.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 8 2013-07-01 2013-07-01 false Storage. 1926.857 Section 1926.857 Labor Regulations...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Demolition § 1926.857 Storage. (a) The storage of waste... provide storage space for debris, provided falling material is not permitted to endanger the stability of...

  15. 29 CFR 1926.857 - Storage.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 8 2010-07-01 2010-07-01 false Storage. 1926.857 Section 1926.857 Labor Regulations...) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Demolition § 1926.857 Storage. (a) The storage of waste... provide storage space for debris, provided falling material is not permitted to endanger the stability of...

  16. A connectivity-based modeling approach for representing hysteresis in macroscopic two-phase flow properties

    DOE PAGES

    Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; ...

    2014-12-31

    During CO2 injection and storage in deep reservoirs, the injected CO2 enters an initially brine-saturated porous medium; after the injection stops, natural groundwater flow eventually displaces the injected mobile-phase CO2, leaving behind residual non-wetting fluid. Accurate modeling of two-phase flow processes is needed for predicting the fate and transport of injected CO2, evaluating environmental risks, and designing more effective storage schemes. The entrapped non-wetting fluid saturation is typically a function of the spatially varying maximum saturation at the end of injection. At the pore scale, the distribution of void sizes and the connectivity of void space play a major role in the macroscopic hysteresis behavior and capillary entrapment of wetting and non-wetting fluids. This paper presents the development of an approach based on the connectivity of void space for modeling hysteretic capillary pressure-saturation-relative permeability relationships. The new approach uses the void-size distribution and a measure of void-space connectivity to compute the hysteretic constitutive functions and to predict entrapped fluid phase saturations. Two functions, the drainage connectivity function and the wetting connectivity function, are introduced to characterize the connectivity of fluids in void space during drainage and wetting processes. These functions can be estimated through pore-scale simulations in computer-generated porous media or from traditional experimental measurements of primary drainage and main wetting curves. The hysteresis model for saturation-capillary pressure is tested successfully by comparing the model-predicted residual saturation and scanning curves with data sets obtained from column experiments found in the literature. A numerical two-phase model simulator with the new hysteresis functions is tested against laboratory experiments conducted in a quasi-two-dimensional flow cell (91.4 cm × 5.6 cm × 61 cm) packed with homogeneous and heterogeneous sands. Initial results show that the model can predict the spatial and temporal distribution of the injected fluid during the experiments reasonably well. However, further analyses are needed to comprehensively test the ability of the model to predict transient two-phase flow processes and capillary entrapment in geological reservoirs during geological carbon sequestration.

  17. Momentum management strategy during Space Station buildup

    NASA Technical Reports Server (NTRS)

    Bishop, Lynda; Malchow, Harvey; Hattis, Philip

    1988-01-01

    The use of momentum storage devices as control effectors for Space Station attitude control throughout the buildup sequence is discussed. Particular attention is given to the problem of providing satisfactory management of momentum storage effectors throughout buildup while experiencing variable torque loading. Continuous and discrete control strategies are compared, and the effects of alternative control moment gyro strategies on peak momentum storage requirements and on commanded maneuver characteristics are described.

  18. Problems in the long-term storage of data obtained from scientific space experiments

    NASA Technical Reports Server (NTRS)

    Zlotin, G. N.; Khovanskiy, Y. D.

    1975-01-01

    It is shown that long-term data storage systems can be achieved when the system which organizes and conducts the scientific space experiments is equipped with a specialized subsystem: the information filing system. Its main functions are described along with the necessity of stage-by-stage development and compatibility with the data processing systems. The requirements for long-term data storage media are discussed.

  19. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

    Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element - a large-scale zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.

  20. A simulation model for wind energy storage systems. Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Warren, A. W.; Edsinger, R. W.; Chan, Y. K.

    1977-01-01

    A comprehensive computer program for modeling wind energy and storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel, and pneumatic) was developed. The level of detail of the Simulation Model for Wind Energy Storage (SIMWEST) is consistent with its role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind source/storage/application systems from user specifications, using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for conveyance of variables. The SIMWEST program, as described, runs on UNIVAC 1100 series computers.

  1. Parallel checksumming of data chunks of a shared data object using a log-structured file system

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-09-06

    Checksum values are generated and used to verify data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
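
    A minimal sketch of per-chunk checksumming on the write path, following the abstract's client/storage-node division; CRC32 stands in for whatever checksum the real PLFS-based system uses, and the dict-backed "storage node" is illustrative:

        # Client computes the checksum and ships it with the chunk; on
        # read, the checksum is re-evaluated to verify integrity.
        import zlib

        def client_store(storage, offset, chunk):
            storage[offset] = (chunk, zlib.crc32(chunk))

        def client_read(storage, offset):
            chunk, stored_crc = storage[offset]
            if zlib.crc32(chunk) != stored_crc:
                raise IOError(f"checksum mismatch at offset {offset}")
            return chunk

        shared_object = {}                    # toy stand-in for a storage node
        client_store(shared_object, 0, b"payload")
        print(client_read(shared_object, 0))  # b'payload'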

  2. Transient dynamics of terrestrial carbon storage: Mathematical foundation and its applications

    DOE PAGES

    Luo, Yiqi; Shi, Zheng; Lu, Xingjie; ...

    2017-01-12

    Terrestrial ecosystems have absorbed roughly 30% of anthropogenic CO2 emissions over the past decades, but it is unclear whether this carbon (C) sink will endure into the future. Despite extensive modeling, experimental, and observational studies, what fundamentally determines the transient dynamics of terrestrial C storage under global change is still not very clear. Here we develop a new framework for understanding the transient dynamics of terrestrial C storage through mathematical analysis and numerical experiments. Our analysis indicates that the ultimate force driving ecosystem C storage change is the C storage capacity, which is jointly determined by ecosystem C input (e.g., net primary production, NPP) and residence time. Since both C input and residence time vary with time, the C storage capacity is time-dependent and acts as a moving attractor that the actual C storage chases. The rate of change in C storage is proportional to the C storage potential, which is the difference between the current storage and the storage capacity. The C storage capacity represents instantaneous responses of the land C cycle to external forcing, whereas the C storage potential represents the internal capability of the land C cycle to influence the C change trajectory in the next time step. The influence happens through redistribution of net C pool changes in a network of pools with different residence times. Moreover, this and our other studies have demonstrated that one matrix equation can replicate simulations of most land C cycle models (i.e., physical emulators). As a result, simulation outputs of those models can be placed into a three-dimensional (3-D) parameter space to measure their differences. The latter can be decomposed into traceable components to track the origins of model uncertainty. In addition, the physical emulators make data assimilation computationally feasible, so that both C flux- and pool-related datasets can be used to better constrain model predictions of land C sequestration. Altogether, this new mathematical framework offers new approaches to understanding, evaluating, diagnosing, and improving land C cycle models.

  4. Mean PB To Failure - Initial results from a long-term study of disk storage patterns at the RACF

    NASA Astrophysics Data System (ADS)

    Caramarcu, C.; Hollowell, C.; Rao, T.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, S. A.

    2015-12-01

    The RACF (RHIC-ATLAS Computing Facility) has operated a large, multi-purpose dedicated computing facility since the mid-1990s, serving a worldwide, geographically diverse scientific community that is a major contributor to various HEPN projects. A central component of the RACF is the Linux-based worker node cluster that is used for both computing and data storage purposes. It currently has nearly 50,000 computing cores and over 23 PB of storage capacity distributed over 12,000+ (non-SSD) disk drives. The majority of these disk drives provide a cost-effective solution for dCache/XRootD-managed storage, and a key concern is the reliability of this solution over the lifetime of the hardware, particularly as the number of disk drives and the storage capacity of individual drives grow. We report initial results of a long-term study to measure the lifetime PB read and written to disk drives in the worker node cluster. We discuss the historical disk drive mortality rate, disk drive manufacturers' published MPTF (Mean PB to Failure) data, and how they correlate with our results. The results help the RACF understand the productivity and reliability of its storage solutions and have implications for other highly available storage systems (NFS, GPFS, CVMFS, etc.) with large I/O requirements.
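
    A back-of-envelope version of the "mean PB to failure" arithmetic in the spirit of the study: lifetime bytes moved by the fleet divided by observed failures. All figures below are invented for illustration, not measurements from the RACF:

        drives          = 12_000
        avg_tb_per_year = 18.0      # TB read+written per drive per year (assumed)
        years_observed  = 4.0
        failures        = 240       # drives that died in the window (assumed)

        total_pb = drives * avg_tb_per_year * years_observed / 1000.0
        mptf_pb  = total_pb / failures      # mean PB moved per failure
        print(f"fleet I/O: {total_pb:.0f} PB, MPTF: {mptf_pb:.1f} PB/failure")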

  5. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  6. Computer program and user documentation medical data tape retrieval system

    NASA Technical Reports Server (NTRS)

    Anderson, J.

    1971-01-01

    This volume provides several levels of documentation for the program module of the NASA medical directorate mini-computer storage and retrieval system. A biomedical information system overview describes some of the reasons for the development of the mini-computer storage and retrieval system. It briefly outlines all of the program modules which constitute the system.

  7. Acquisition and analysis of primate physiologic data for the Space Shuttle

    NASA Astrophysics Data System (ADS)

    Eberhart, Russell C.; Hogrefe, Arthur F.; Radford, Wade E.; Sanders, Kermit H.; Dobbins, Roy W.

    1988-03-01

    This paper describes the design and prototypes of the Physiologic Acquisition and Telemetry System (PATS), a multichannel system designed for large primates for the acquisition, telemetry, storage, and analysis of physiological data. PATS is expected to acquire data from units implanted in the abdominal cavities of rhesus monkeys that will be flown aboard the Spacelab. The system will telemeter both stored and real-time internal physiologic measurements to an external Flight Support Station (FSS) computer system. The implanted Data Acquisition and Telemetry Subsystem subunit will be externally activated, controlled, and reprogrammed from the FSS.

  8. Use of Computer Statistical Packages to Generate Quality Control Reports on Training

    DTIC Science & Technology

    1980-01-01

    Quality Control Statistical Analysis ... Obtaining timely and efficient ... EXTREMELY DISSATISFIED ... EXTREMELY SATISFIED ... HOW MANY MEN IN YOUR UNIT WANT TO DO A GOOD JOB IN TRAINING ... FEW OF THEM ... SOME OF THEM ... permanent disk storage space within the computer account. The user may not wish to run the "Audit" program in the same batch flow as the ... three

  9. Computational design of the basic dynamical processes of the UCLA general circulation model

    NASA Technical Reports Server (NTRS)

    Arakawa, A.; Lamb, V. R.

    1977-01-01

    The 12-layer UCLA general circulation model encompassing troposphere and stratosphere (and superjacent 'sponge layer') is described. Prognostic variables are: surface pressure, horizontal velocity, temperature, water vapor and ozone in each layer, planetary boundary layer (PBL) depth, temperature, moisture and momentum discontinuities at PBL top, ground temperature and water storage, and mass of snow on ground. Selection of space finite-difference schemes for homogeneous incompressible flow, with/without a free surface, nonlinear two-dimensional nondivergent flow, enstrophy conserving schemes, momentum advection schemes, vertical and horizontal difference schemes, and time differencing schemes are discussed.

  10. NSSDC activities with 12-inch optical disk drives

    NASA Technical Reports Server (NTRS)

    Lowrey, Barbara E.; Lopez-Swafford, Brian

    1986-01-01

    The development status of optical-disk data transfer and storage technology at the National Space Science Data Center (NSSDC) is surveyed. The aim of the R&D program is to facilitate the exchange of large volumes of data. Current efforts focus on a 12-inch 1-Gbyte write-once/read-many disk and a disk drive which interfaces with VAX/VMS computer systems. The history of disk development at NSSDC is traced; the results of integration and performance tests are summarized; the operating principles of the 12-inch system are explained and illustrated with diagrams; and the need for greater standardization is indicated.

  11. Converting information from paper to optical media

    NASA Technical Reports Server (NTRS)

    Deaton, Timothy N.; Tiller, Bruce K.

    1990-01-01

    The technology of converting large amounts of paper into electronic form is described for use in information management systems based on optical disk storage. The space savings and photographic nature of microfiche are combined in these systems with the advantages of computerized data (fast and flexible retrieval of graphics and text, simultaneous instant access for multiple users, and easy manipulation of data). It is noted that electronic imaging systems offer a unique opportunity to dramatically increase the productivity and profitability of information systems. Particular attention is given to the CALS (Computer-aided Acquisition and Logistic Support) system.

  12. The Torque of the Planet: NASA Researcher Uses NCCS Computers to Probe Atmosphere-Land-Ocean Coupling

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The study of Earth science is like a giant puzzle, says Braulio Sanchez. "The more you know about the individual pieces, the easier it is to fit them together." A researcher with Goddard's Space Geodesy Branch, Sanchez has been using NCCS supercomputer and mass storage resources to show how the angular momenta of the atmosphere, the oceans, and the solid Earth are dynamically coupled. Sanchez has calculated the magnitude of atmospheric torque on the planet and has determined some of the possible effects that torque has on Earth's rotation.

  13. A Simplified Shuttle Payload Thermal Analyzer /SSPTA/ program

    NASA Technical Reports Server (NTRS)

    Bartoszek, J. T.; Huckins, B.; Coyle, M.

    1979-01-01

    A simple thermal analysis program for Space Shuttle payloads has been developed to accommodate the user who requires an easily understood but dependable analytical tool. The thermal analysis program includes several thermal subprograms traditionally employed in spacecraft thermal studies, a data management system for data generated by the subprograms, and a master program to coordinate the data files and thermal subprograms. The language and logic used to run the thermal analysis program are designed for the small user. In addition, analytical and storage techniques which conserve computer time and minimize core requirements are incorporated into the program.

  14. Storage peak gas-turbine power unit

    NASA Technical Reports Server (NTRS)

    Tsinkotski, B.

    1980-01-01

    A storage gas-turbine power plant using a two-cylinder compressor with intermediate cooling is studied. On the basis of measured characteristics of a 0.25 MW compressor, computer calculations of the parameters of the loading process of a constant-capacity storage unit (05.3 million cu m) were carried out. The required compressor power as a function of time, with and without final cooling, was computed. Parameters of maximum loading and discharging of the storage unit were calculated, and it was found that for the complete loading of a fully unloaded storage unit, a capacity of 1 to 1.5 million cubic meters is required, depending on the final cooling.

  15. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
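
    A minimal sketch of the decoupling idea: checkpoint files are converted to objects by a log-structured middleware layer and pushed to a cloud object store. The object store is faked with a dict, and the bucket/key naming is illustrative, not from the patent:

        # One object per checkpoint file, plus a manifest recording the
        # original paths, written last so readers can reassemble state.
        import json

        class LogStructuredMiddleware:
            def __init__(self, object_store):
                self.store = object_store
                self.manifest = []          # ordered log of converted files

            def convert_and_put(self, path, data):
                key = f"ckpt/{len(self.manifest):08d}"
                self.store[key] = data      # one object per checkpoint file
                self.manifest.append({"path": path, "key": key, "len": len(data)})

            def flush_manifest(self):
                self.store["ckpt/manifest"] = json.dumps(self.manifest).encode()

        cloud = {}                          # stand-in for an object store
        mw = LogStructuredMiddleware(cloud)
        for rank in range(4):               # e.g., one file per MPI process
            mw.convert_and_put(f"/scratch/ckpt.{rank}", f"state-{rank}".encode())
        mw.flush_manifest()
        print(sorted(cloud)[:2])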

  16. Computers, the Human Mind, and My In-Laws' House.

    ERIC Educational Resources Information Center

    Esque, Timm J.

    1996-01-01

    Discussion of human memory, computer memory, and the storage of information focuses on a metaphor that can account for memory without storage and can set the stage for systemic research around a more comprehensive, understandable theory. (Author/LRW)

  17. Generation system impacts of storage heating and storage water heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gellings, C.W.; Quade, A.W.; Stovall, J.P.

    Thermal energy storage systems offer the electric utility a means to change customer energy use patterns. At present, however, the costs and benefits to both the customers and the utility are uncertain. As part of a nationwide demonstration program, Public Service Electric and Gas Company installed storage space heating and storage water heating appliances in residential homes. Both the test homes and similar homes using conventional space and water heating appliances were monitored, allowing detailed comparisons between the two systems. The purpose of this paper is to detail the methodology used and the results of studies completed on the generation system impacts of storage space and water heating systems. Other electric system impacts, involving service entrance size, metering, secondary distribution, and primary distribution, were detailed in two previous IEEE papers. This paper is organized into three main sections. The first gives background data on PSE&G and its experience in a nationwide thermal storage demonstration project. The second section details results of the demonstration project and studies that have been performed on the impacts of thermal storage equipment. The last section reports the conclusions reached concerning the impacts of thermal storage on generation. The study was conducted in early 1982 using the data available at that time; while PSE&G system plans have changed since then, the conclusions remain pertinent and valuable to those contemplating the impacts of thermal energy storage.

  18. GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments

    NASA Astrophysics Data System (ADS)

    Chen, Zhanlong; Wu, Xin-cai; Wu, Liang

    2008-12-01

    Computational grids enable the coordinated sharing of large-scale, distributed, heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of grid middleware that resides between the networks and traditional GIS applications. Integrating multi-source, heterogeneous spatial information, managing distributed spatial resources, and sharing spatial data and grid services cooperatively are the key problems to solve in developing a grid GIS. The spatial index mechanism is a key technology of grid GIS and spatial databases, and its performance affects the overall performance of a GIS in grid environments. To improve the efficiency of parallel processing of massive spatial data in a distributed, parallel grid computing environment, this paper presents GSHR-Tree, a new grid-slot-hash parallel spatial index. Based on a hash table and dynamic spatial slots, it improves the structure of the classical parallel R-tree index and exploits the strengths of both the R-tree and the hash data structure, yielding a parallel spatial index that can meet the needs of parallel grid computing over massive spatial data in a distributed network. The algorithm splits space into multiple slots by repeated subdivision and maps these slots to sites in the distributed, parallel system; each site builds the spatial objects in its slots into an R-tree. On top of this structure, the index data are distributed among multiple nodes in the grid network using a large-node R-tree method, and load imbalance during processing can be adjusted quickly by a dynamic rebalancing algorithm. The design accounts for the distribution, replication, and transfer of spatial index data in the grid environment, ensures load balance during parallel computation, and is well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparisons of spatial objects used in the original R-tree, the algorithm builds the spatial index with binary code operations, which computers execute more efficiently, and uses extended dynamic hash codes for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a full node must be split. We describe a flexible allocation protocol that copes with temporary shortages of storage resources: a distributed, balanced binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. An application manipulates the GSHR-Tree structure from a node in the grid environment, addressing the tree through a local image that splits can render outdated; the resulting addressing errors are resolved by forwarding among the servers. This paper also proposes a spatial index data distribution algorithm that limits the number of servers, improving storage utilization at the cost of additional messages. We believe this grid spatial index scheme will fit the needs of new applications using ever larger sets of spatial data.
    Our proposal constitutes a flexible storage allocation method for a distributed spatial index. The insertion policy can be tuned dynamically to cope with periods of storage shortage; in such cases storage balancing should be favored for better space utilization, at the price of extra message exchanges between servers. The structure strikes a compromise between updating the duplicated index and transforming the spatial index data and is flexible enough to satisfy future needs of grid computing. The GSHR-Tree provides R-tree capabilities for large spatial datasets stored over interconnected servers. Our analysis, including experiments, confirmed the efficiency of the design choices; the scheme should fit the needs of new applications of spatial data using ever larger datasets. Using the system response time of a parallel spatial range query as the performance evaluation factor, the simulated experiments indicate that the GSHR-Tree design is sound and that the index structure presented in this paper performs well.
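
    A toy sketch of the slot-to-site idea behind GSHR-Tree: space is split into slots by recursive halving, slot codes are hashed to servers, and each server indexes only the objects falling in its slots. The per-server "R-tree" is reduced to a plain list here, and all names are illustrative:

        import hashlib

        def slot_code(x, y, depth=8, extent=1024.0):
            """Interleaved binary code of the slot containing point (x, y)."""
            code = 0
            lo_x, hi_x, lo_y, hi_y = 0.0, extent, 0.0, extent
            for _ in range(depth):
                mx, my = (lo_x + hi_x) / 2, (lo_y + hi_y) / 2
                bit_x, bit_y = int(x >= mx), int(y >= my)
                code = (code << 2) | (bit_x << 1) | bit_y
                lo_x, hi_x = (mx, hi_x) if bit_x else (lo_x, mx)
                lo_y, hi_y = (my, hi_y) if bit_y else (lo_y, my)
            return code

        def site_for(code, n_sites):
            """Hash the slot code onto a site (server)."""
            h = hashlib.sha1(code.to_bytes(4, "big")).digest()
            return int.from_bytes(h[:4], "big") % n_sites

        sites = {i: [] for i in range(4)}   # 4 servers, each holding a sub-index
        for pt in [(10.0, 20.0), (900.0, 30.0), (512.0, 512.0)]:
            sites[site_for(slot_code(*pt), 4)].append(pt)
        print({k: len(v) for k, v in sites.items()})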

  19. Cheaper Adjoints by Reversing Address Computations

    DOE PAGES

    Hascoët, L.; Utke, J.; Naumann, U.

    2008-01-01

    The reverse mode of automatic differentiation is widely used in science and engineering. A severe bottleneck for the performance of the reverse mode, however, is the necessity to recover certain intermediate values of the program in reverse order. Among these values are computed addresses, which traditionally are recovered through forward recomputation and storage in memory. We propose an alternative approach for recovery that uses inverse computation based on dependency information. Address storage constitutes a significant portion of the overall storage requirements. An example illustrates substantial gains that the proposed approach yields, and we show use cases in practical applications.
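
    A minimal sketch of recovering addresses by inverse computation instead of taping them: when an index is an invertible affine function of the loop counter, the reverse sweep can regenerate each address from its successor rather than storing all of them. The example problem and function names are illustrative:

        def forward(x, n, stride=3, base=1):
            """Forward sweep: y[i] = x[base + stride*i] ** 2."""
            return [x[base + stride * i] ** 2 for i in range(n)]

        def reverse_adjoint(x, ybar, n, stride=3, base=1):
            """Reverse sweep: addresses are re-derived, not read from a tape."""
            xbar = [0.0] * len(x)
            addr = base + stride * (n - 1)          # last address, computed once
            for i in range(n - 1, -1, -1):
                xbar[addr] += 2.0 * x[addr] * ybar[i]   # d(x^2)/dx = 2x
                addr -= stride                      # inverse of "addr += stride"
            return xbar

        x = [float(v) for v in range(12)]
        y = forward(x, 3)                           # uses x[1], x[4], x[7]
        print(reverse_adjoint(x, [1.0, 1.0, 1.0], 3))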

  20. Analysis and assessment of STES technologies

    NASA Astrophysics Data System (ADS)

    Brown, D. R.; Blahnik, D. E.; Huber, H. D.

    1982-12-01

    Technical and economic assessments completed in FY 1982 in support of the Seasonal Thermal Energy Storage (STES) segment of the Underground Energy Storage Program included: (1) a detailed economic investigation of the cost of heat storage in aquifers, (2) documentation for AQUASTOR, a computer model for analyzing aquifer thermal energy storage (ATES) coupled with district heating or cooling, and (3) a technical and economic evaluation of several ice storage concepts. This paper summarizes the research efforts and main results of each of these three activities. In addition, a detailed economic investigation of the cost of chill storage in aquifers is currently in progress. The work parallels that done for ATES heat storage with technical and economic assumptions being varied in a parametric analysis of the cost of ATES delivered chill. The computer model AQUASTOR is the principal analytical tool being employed.

  1. Storage Media for Microcomputers.

    ERIC Educational Resources Information Center

    Trautman, Rodes

    1983-01-01

    Reviews computer storage devices designed to provide additional memory for microcomputers--chips, floppy disks, hard disks, optical disks--and describes how secondary storage is used (file transfer, formatting, ingredients of incompatibility); disk/controller/software triplet; magnetic tape backup; storage volatility; disk emulator; and…

  2. A special planning technique for stream-aquifer systems

    USGS Publications Warehouse

    Jenkins, C.T.; Taylor, O. James

    1974-01-01

    The potential effects of water-management plans on stream-aquifer systems in several countries have been simulated using electric-analog or digital-computer models. Many of the electric-analog models require large amounts of hardware preparation for each problem to be solved, and some become so bulky that they present serious space and access problems. Digital-computer models require no special hardware preparation, but often they require so many repetitive solutions of equations that the calculations become unduly unwieldy and expensive, even on the latest generation of computers. Further, the more detailed digital models require a vast amount of core storage, leaving insufficient storage for evaluation of the many possible schemes of water management. A concept introduced in 1968 by the senior author of this report offers a solution to these problems. The concept is that the effects on streamflow of ground-water withdrawal or recharge (stress) at any point in such a system can be approximated using two classical equations and a value of time that reflects the integrated effect of the following: irregular impermeable boundaries; stream meanders; aquifer properties and their areal variations; distance of the point from the stream; and imperfect hydraulic connection between the stream and the aquifer. The value of time is called the stream depletion factor (sdf). Results of a relatively few tests on detailed models can be summarized on maps showing lines through points of equal sdf. Sensitivity analyses of models of two large stream-aquifer systems in the State of Colorado show that the sdf technique described in this report provides results within tolerable ranges of error. The sdf technique is extremely versatile, allowing water managers to choose the degree of detail that best suits their needs and available computational hardware. Simple arithmetic, using, for example, only a slide rule and charts or tables of dimensionless values, will be sufficient for many calculations. If a large digital computer is available, a detailed description of the system and its stresses will require only a fraction of the core storage, leaving the greater part of the storage available for sophisticated analyses, such as optimization. Once these analyses have been made, the model is ready to perform its principal task--prediction of streamflow and changes in ground-water storage. In the two systems described in this report, direct diversion from the streams is the principal source of irrigation water, but it is supplemented by numerous wells. The streamflow depends largely on snowmelt. Estimates of both the amount and timing of runoff from snowmelt during the irrigation season are available on a monthly basis during the spring and early summer. These estimates become increasingly accurate as the season progresses, hence frequent changes of stress on the predictive model are necessary. The sdf technique is especially well suited to this purpose, because such changes are easy to make, resulting in more up-to-date estimates of the availability of streamflow and ground-water storage. These estimates can be made for any time and any location in the system.
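
    The record does not reproduce the two "classical equations"; a minimal sketch under the common Glover-Jenkins formulation, in which the depletion fraction is q/Q = erfc(sqrt(sdf / 4t)) with sdf = a²S/T, is given below. Treat the formula as an assumption about which classical result is meant; the sdf value would normally be read off a map of equal-sdf lines:

        import math

        def depletion_fraction(sdf_days, t_days):
            """Fraction of the pumping rate supplied by the stream after t days."""
            if t_days <= 0:
                return 0.0
            return math.erfc(math.sqrt(sdf_days / (4.0 * t_days)))

        sdf = 200.0                     # days, from the sdf map (illustrative)
        for t in (10, 100, 1000):
            print(f"t = {t:5d} d  q/Q = {depletion_fraction(sdf, t):.3f}")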

  3. 75 FR 27798 - Notice of Issuance of Final Determination Concerning Certain Commodity-Based Clustered Storage Units

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... device to function as a cloud computing device similar to a network storage RAID array (HDDs strung... contract. This final determination, in HQ H082476, was issued at the request of Scale Computing under... response to your request dated October 15, 2009, made on behalf of Scale Computing (``Scale''). You ask for...

  4. Inertial energy storage for advanced space station applications

    NASA Technical Reports Server (NTRS)

    Van Tassel, K. E.; Simon, W. E.

    1985-01-01

    Because the NASA Space Station will spend approximately one-third of its orbital time in the earth's shadow, depriving it of solar energy and requiring an energy storage system to meet system demands, attention has been given to flywheel energy storage systems. These systems promise high mechanical efficiency, long life, light weight, flexible design, and easily monitored depth of discharge. An assessment is presently made of three critical technology areas: rotor materials, magnetic suspension bearings, and motor-generators for energy conversion. Conclusions are presented regarding the viability of inertial energy storage systems and of problem areas requiring further technology development efforts.

  5. Bethune-Cookman University STEM Research Lab. DOE Renovation Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Herbert W.

    DOE funding was used to renovate 4,500 square feet of aging laboratories and classrooms that support science, engineering, and mathematics disciplines (specifically environmental science and computer engineering). The expansion of the labs was needed to support robotics and environmental science research and to better accommodate a wide variety of teaching situations. The renovated space includes a robotics laboratory, two multi-use labs, safe spaces for the storage of instrumentation, modern ventilation equipment, and other "smart" learning venues. The renovated areas feature technologies that are environmentally friendly with reduced energy costs. A campus showcase, the laboratories are a reflection of the University's commitment to the environment and to research as a tool for teaching. As anticipated, the labs facilitate the exploration of emerging technologies that are compatible with local and regional economic plans.

  6. One GHz digitizer for space based laser altimeter

    NASA Technical Reports Server (NTRS)

    Staples, Edward J.

    1991-01-01

    This is the final report for the research and development of the one GHz digitizer for a space-based laser altimeter. A feasibility model was designed, built, and tested; only partial testing of the essential functions of the digitizer was completed. Hybrid technology was incorporated which allows analog storage (memory) of the digitally sampled data. The actual sampling rate is 62.5 MHz, executed in 16 parallel channels to provide an effective sampling rate of one GHz. The average power consumption of the one GHz digitizer is not more than 1.5 Watts. A one GHz oscillator is incorporated for timing purposes; this signal is also made available externally for system timing. A software package was also developed for internal use (controls, commands, etc.) and for data communication with the host computer. The digitizer is equipped with an onboard microprocessor for this purpose.

  7. Prefixed-threshold real-time selection method in free-space quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Wenyuan; Xu, Feihu; Lo, Hoi-Kwong

    2018-03-01

    Free-space quantum key distribution allows two parties to share a random key with unconditional security, between ground stations, between mobile platforms, and even in satellite-ground quantum communications. Atmospheric turbulence causes fluctuations in transmittance, which further affect the quantum bit error rate and the secure key rate. Previous postselection methods to combat atmospheric turbulence require a threshold value determined after all quantum transmission. In contrast, here we propose a method where we predetermine the optimal threshold value even before quantum transmission. Therefore, the receiver can discard useless data immediately, thus greatly reducing data storage requirements and computing resources. Furthermore, our method can be applied to a variety of protocols, including, for example, not only single-photon BB84 but also asymptotic and finite-size decoy-state BB84, which can greatly increase its practicality.
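
    A minimal sketch of the prefixed-threshold selection idea: with a threshold fixed before transmission, samples whose measured transmittance falls below it are discarded on the fly, so they never consume storage or computation. The log-normal turbulence model and the signal proxy below are illustrative assumptions, not the paper's model:

        import random, math

        def simulate(eta_T, n=100_000, sigma=0.9, mean_db=-30.0):
            mu = mean_db / 10.0 * math.log(10)          # ln of mean transmittance
            kept, signal = 0, 0.0
            for _ in range(n):
                eta = math.exp(random.gauss(mu, sigma)) # turbulent transmittance
                if eta >= eta_T:                        # prefixed threshold test
                    kept += 1
                    signal += eta                       # crude stand-in for key rate
            return kept / n, signal / n

        for eta_T in (0.0, 1e-4, 1e-3):
            frac, sig = simulate(eta_T)
            print(f"threshold {eta_T:7.0e}: keep {frac:5.1%}, mean signal {sig:.2e}")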

  8. Influence of technology on magnetic tape storage device characteristics

    NASA Technical Reports Server (NTRS)

    Gniewek, John J.; Vogel, Stephen M.

    1994-01-01

    There are available today many data storage devices that serve the diverse application requirements of the consumer, professional entertainment, and computer data processing industries. Storage technologies include semiconductors, several varieties of optical disk, optical tape, magnetic disk, and many varieties of magnetic tape. In some cases, devices are developed with specific characteristics to meet specification requirements. In other cases, an existing storage device is modified and adapted to a different application. For magnetic tape storage devices, examples of the former case are the 3480/3490 and QIC device types, developed for the high-end and low-end segments of the data processing industry respectively; the VHS, Beta, and 8 mm formats, developed for consumer video applications; and the D-1, D-2, and D-3 formats, developed for professional video applications. Examples of modified and adapted devices include 4 mm, 8 mm, 12.7 mm, and 19 mm computer data storage devices derived from consumer and professional audio and video applications. With the conversion of the consumer and professional entertainment industries from analog to digital storage and signal processing, there have been increasing references to the 'convergence' of the computer data processing and entertainment industry technologies. There has yet to be seen, however, any evidence of convergence of data storage device types. There are several reasons for this: the diversity of application requirements results in varying degrees of importance for each of the tape storage characteristics.

  9. Evaluation of powertrain solutions for future tactical truck vehicle systems

    NASA Astrophysics Data System (ADS)

    Pisu, Pierluigi; Cantemir, Codrin-Gruie; Dembski, Nicholas; Rizzoni, Giorgio; Serrao, Lorenzo; Josephson, John R.; Russell, James

    2006-05-01

    The article presents the results of a large-scale design space exploration for the hybridization of two off-road vehicles, part of the Future Tactical Truck System (FTTS) family: the Maneuver Sustainment Vehicle (MSV) and the Utility Vehicle (UV). Series hybrid architectures are examined. The objective of the paper is to illustrate a novel design methodology that allows for the choice of the optimal values of several vehicle parameters. The methodology consists of an extensive design space exploration, which involves running a large number of computer simulations with systematically varied vehicle design parameters, where each variant is paced through several different mission profiles and multiple attributes of performance are measured. The resulting designs are filtered to choose the design trade-offs that best satisfy the performance and fuel economy requirements. In the end, a few promising vehicle configuration designs are selected that will need additional detailed investigation, including neglected metrics like ride and drivability. Several powertrain architectures have been simulated. The design parameters include the number of axles in the vehicle (2 or 3), the number of electric motors per axle (1 or 2), the type of internal combustion engine, and the type and quantity of energy storage system devices (batteries, electrochemical capacitors, or both together). An energy management control strategy has also been developed to provide efficiency and performance; its tunable control parameters have been included in the design space exploration. The results show that the internal combustion engine and the energy storage system devices are extremely important for vehicle performance.
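
    A minimal sketch of the exhaustive exploration loop described above: enumerate parameter combinations, score each variant, then filter to the designs meeting the requirements. The parameters, surrogate metrics, and thresholds are invented placeholders, not the paper's models:

        from itertools import product

        axles   = (2, 3)
        motors  = (1, 2)                      # electric motors per axle
        storage = ("battery", "ultracap", "hybrid")

        def evaluate(n_axles, n_motors, ess):
            """Toy surrogate for running the full vehicle simulation."""
            fuel_econ = 6.0 - 0.4 * n_axles + 0.3 * n_motors \
                        + {"battery": 0.2, "ultracap": 0.0, "hybrid": 0.5}[ess]
            accel_ok = n_motors * n_axles >= 3
            return fuel_econ, accel_ok

        candidates = []
        for n_axles, n_motors, ess in product(axles, motors, storage):
            mpg, ok = evaluate(n_axles, n_motors, ess)
            if ok and mpg >= 5.5:             # requirement filter
                candidates.append((mpg, n_axles, n_motors, ess))

        for mpg, *cfg in sorted(candidates, reverse=True)[:3]:
            print(f"{cfg}: {mpg:.2f} mpg-equivalent")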

  10. Interoperating Cloud-based Virtual Farms

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.

    2015-12-01

    The present work aims at optimizing the use of computing resources available at the Italian grid Tier-2 sites of the ALICE experiment at the CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic ("on-demand") provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, avoiding the need to mirror data across them: high data access efficiency is guaranteed by location-aware analysis software and storage interfaces, transparently from the end-user perspective. Moreover, interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. Tests of the investigated solutions for both cloud computing and distributed storage on a wide area network will be presented.

  11. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process the image data stored in the RRAM arrays. The proposed image storage architecture achieves better speed and device-consumption efficiency than the previous kernel storage architecture. Further, we improve the architecture for high-accuracy and low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performance, including: (1) almost 100% accuracy under 20% LRS variation and 90% HRS variation; (2) a speed boost of more than 67 times; (3) a 71.4% energy saving.
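
    For reference, the digital computation the RRAM crossbar evaluates in analogue is the set of sliding-window dot products below. A minimal NumPy sketch for the paper's 28 × 28 image and ten 3 × 3 kernels, with random data standing in for the stored values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((28, 28))      # image stored in the array (floats for illustration)
    kernels = rng.random((10, 3, 3))  # 10 kernels of size 3x3, as in the paper's example

    def convolve2d_valid(img, k):
        """Direct 'valid' 2-D convolution (no padding): one sliding-window
        dot product per output pixel."""
        kh, kw = k.shape
        oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
        return out

    feature_maps = np.stack([convolve2d_valid(image, k) for k in kernels])
    print(feature_maps.shape)  # (10, 26, 26)
    ```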

  12. A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.

    PubMed

    Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin

    2014-01-01

    This paper presents new, simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot is kept arbitrary, which helps reduce error when solving an ill-conditioned system. Computation of the determinant has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, saving unnecessary calculations. The algorithms are classroom oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may easily avoid the development of fractions in most cases. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
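
    A minimal sketch of the determinant side of this idea: pick an arbitrary nonzero pivot (the selection rule is pluggable, e.g. one chosen to avoid fractions), eliminate, and reduce the matrix order at each iteration. This is an illustrative reconstruction, not the authors' published algorithm:

    ```python
    import numpy as np

    def default_pivot(a):
        # Largest-magnitude entry; any rule returning a nonzero entry works.
        return np.unravel_index(np.argmax(np.abs(a)), a.shape)

    def determinant(a, choose_pivot=default_pivot):
        """Determinant by repeated pivoting and order reduction."""
        a = np.array(a, dtype=float)
        det = 1.0
        while a.shape[0] > 1:
            p, q = choose_pivot(a)
            pivot = a[p, q]
            if pivot == 0.0:  # with the default rule this means an all-zero matrix
                return 0.0
            det *= pivot * (-1.0) ** (p + q)  # cofactor sign for expansion about (p, q)
            rows = [r for r in range(a.shape[0]) if r != p]
            a = np.array([a[r] - a[r, q] / pivot * a[p] for r in rows])  # clear column q
            a = np.delete(a, q, axis=1)  # drop the pivot row and column
        return det * a[0, 0]

    print(determinant([[1, 2], [3, 4]]))  # -2.0
    ```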

  13. A variable resolution x-ray detector for computed tomography: II. Imaging theory and performance.

    PubMed

    DiBianca, F A; Zou, P; Jordan, L M; Laughter, J S; Zeman, H D; Sebes, J

    2000-08-01

    A computed tomography (CT) imaging technique called variable resolution x-ray (VRX) detection provides variable image resolution ranging from that of clinical body scanning (1 cy/mm) to that of microscopy (100 cy/mm). In this paper, an experimental VRX CT scanner based on a rotating subject table and an angulated storage phosphor screen detector is described and tested. The measured projection resolution of the scanner is ≥ 20 lp/mm. Using this scanner, 4.8-s CT scans are made of specimens of human extremities and of in vivo hamsters. In addition, the system's projected spatial resolution is calculated to exceed 100 cy/mm for a future on-line CT scanner incorporating smaller focal spots (0.1 mm) than those currently used and a 1008-channel VRX detector with 0.6-mm cell spacing.

  14. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model-Report 2

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the capability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1 and to compare the results with simulated leak sounds. A series of electronically generated structural ultrasonic noise sources were created in the pressure wall to emulate leak signals at different locations of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). The exact sound source profiles created within the pressure wall at the source were unknown, but were estimated from the closest sensor measurement. The E-FEM method represents a reverberant sound field calculation, and of importance to this application is the requirement to correctly handle the direct field effect of the sound generation. It was also important to be able to compute the sound energy fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.

  15. Space shuttle/food system study. Volume 2, Appendix G: Ground support system analysis. Appendix H: Galley functional details analysis

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The capabilities for preflight feeding of flight personnel and the supply and control of the space shuttle flight food system were investigated to determine ground support requirements; and the functional details of an onboard food system galley are shown in photographic mockups. The elements which were identified as necessary to the efficient accomplishment of ground support functions include the following: (1) administration; (2) dietetics; (3) analytical laboratories; (4) flight food warehouse; (5) stowage module assembly area; (6) launch site module storage area; (7) alert crew restaurant and dispersed crew galleys; (8) ground food warehouse; (9) manufacturing facilities; (10) transport; and (11) computer support. Each element is discussed according to the design criteria of minimum cost, maximum flexibility, reliability, and efficiency consistent with space shuttle requirements. The galley mockup overview illustrates the initial operation configuration, food stowage locations, meal assembly and serving trays, meal preparation configuration, serving, trash management, and the logistics of handling and cleanup equipment.

  16. Data storage systems technology for the Space Station era

    NASA Technical Reports Server (NTRS)

    Dalton, John; Mccaleb, Fred; Sos, John; Chesney, James; Howell, David

    1987-01-01

    The paper presents the results of an internal NASA study to determine if economically feasible data storage solutions are likely to be available to support the ground data transport segment of the Space Station mission. An internal NASA effort to prototype a portion of the required ground data processing system is outlined. It is concluded that the requirements for all ground data storage functions can be met with commercial disk and tape drives assuming conservative technology improvements and that, to meet Space Station data rates with commercial technology, the data will have to be distributed over multiple devices operating in parallel and in a sustained maximum throughput mode.

  17. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    NASA Technical Reports Server (NTRS)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100 to 1,000 times greater than those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures that achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing the time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system; a dual ARM A9 processor system; a hybrid quad ARM A53 and FPGA system; and a hybrid quad ARM A53 and DSP system.
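
    The abstract's description of ArchGen suggests a declarative specification of processor type, count, and connectivity. The snippet below sketches what such a specification could look like; every field name and value is hypothetical and does not reflect the actual ArchGen input format:

    ```python
    # Hypothetical ArchGen-style architecture specification (illustrative only).
    candidate = {
        "processors": [
            {"type": "ARM_A53", "count": 4},
            {"type": "FPGA", "count": 1, "ip_cores": ["fft", "compression"]},
        ],
        "interconnect": {"topology": "AXI_bus", "width_bits": 128},
        "memory": {"dram_mb": 2048, "shared": True},
    }

    total = sum(p["count"] for p in candidate["processors"])
    print(f"{total} processing elements on {candidate['interconnect']['topology']}")
    ```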

  18. Through-Space Intervalence Charge Transfer as a Mechanism for Charge Delocalisation in Metal-Organic Frameworks.

    PubMed

    Hua, Carol; Doheny, Patrick William; Ding, Bowen; Chan, Bun; Yu, Michelle; Kepert, Cameron J; D'Alessandro, Deanna M

    2018-05-04

    Understanding the nature of charge transfer mechanisms in 3-dimensional Metal-Organic Frameworks (MOFs) is an important goal owing to the possibility of harnessing this knowledge to design conductive frameworks. These materials have been implicated as the basis for the next generation of technological devices for applications in energy storage and conversion, including electrochromic devices, electrocatalysts, and battery materials. After nearly two decades of intense research into MOFs, the mechanisms of charge transfer remain relatively poorly understood, and new strategies to achieve charge mobility remain elusive and challenging to experimentally explore, validate and model. We now demonstrate that aromatic stacking interactions in Zn(II) frameworks containing cofacial thiazolo[5,4-d]thiazole units lead to a mixed-valence state upon electrochemical or chemical reduction. This through-space Intervalence Charge Transfer (IVCT) phenomenon represents a new mechanism for charge delocalisation in MOFs. Computational modelling of the optical data combined with application of Marcus-Hush theory to the IVCT bands for the mixed-valence framework has enabled quantification of the degree of delocalisation using both in situ and ex situ electro- and spectro-electrochemical methods. A distance dependence for the through-space electron transfer has also been identified on the basis of experimental studies and computational calculations. This work provides a new window into electron transfer phenomena in 3-dimensional coordination space, of relevance to electroactive MOFs where new mechanisms for charge transfer are highly sought after, and to understanding biological light harvesting systems where through-space mixed-valence interactions are operative.

  19. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, for 3D data migration, the resource requirements of the algorithm grow with the data size. This challenges its practical implementation even on current-generation high performance computing systems, so a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for poststack and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in a parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations that depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series supercomputer. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
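
    A minimal sketch of the flexi-depth idea: split the output depth axis into as many iterations as the node memory allows, so each slab (and its traveltime tables) fits in core. The memory model and names are illustrative assumptions, not the paper's implementation:

    ```python
    def plan_depth_iterations(n_depth_samples, bytes_per_depth_slice, node_memory_bytes):
        """Return (start, stop) depth ranges sized so each iteration's image
        slab fits in the available node memory."""
        slices_per_iter = max(1, node_memory_bytes // bytes_per_depth_slice)
        return [(z, min(z + slices_per_iter, n_depth_samples))
                for z in range(0, n_depth_samples, slices_per_iter)]

    # Example: 2000 depth samples, 80 MB per depth slice, 16 GB usable per node
    for z0, z1 in plan_depth_iterations(2000, 80 * 2**20, 16 * 2**30):
        pass  # migrate_depth_range(z0, z1): compute traveltimes, image this slab
    ```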

  20. Making Space: Automated Storage and Retrieval.

    ERIC Educational Resources Information Center

    Tanis, Norman; Ventuleth, Cindy

    1987-01-01

    Describes a pilot project in automated storage and retrieval of library materials which uses miniload cranes to retrieve bins of materials, and an interface with an online catalog that patrons use to request materials. Savings in space and money and potential problems with the system are discussed. (CLB)

  1. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.

  2. Efficiently modelling urban heat storage: an interface conduction scheme in an urban land surface model (aTEB v2.0)

    NASA Astrophysics Data System (ADS)

    Lipson, Mathew J.; Hart, Melissa A.; Thatcher, Marcus

    2017-03-01

    Intercomparison studies of models simulating the partitioning of energy over urban land surfaces have shown that the heat storage term is often poorly represented. In this study, two implicit discrete schemes representing heat conduction through urban materials are compared. We show that a well-established method of representing conduction systematically underestimates the magnitude of heat storage compared with exact solutions of one-dimensional heat transfer. We propose an alternative method of similar complexity that is better able to match exact solutions at typically employed resolutions. The proposed interface conduction scheme is implemented in an urban land surface model and its impact assessed over a 15-month observation period for a site in Melbourne, Australia, resulting in improved overall model performance for a variety of common material parameter choices and aerodynamic heat transfer parameterisations. The proposed scheme has the potential to benefit land surface models where computational constraints require a high level of discretisation in time and space, for example at neighbourhood/city scales, and where realistic material properties are preferred, for example in studies investigating impacts of urban planning changes.
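
    The schemes compared in the paper are implicit discretizations of one-dimensional heat conduction. As background, the sketch below shows a generic backward Euler step for the 1-D heat equation with fixed surface temperatures, the unconditionally stable building block such schemes share. It assumes uniform material properties and is not the paper's interface scheme itself:

    ```python
    import numpy as np

    def backward_euler_step(T, alpha, dx, dt, T_left, T_right):
        """One implicit step of dT/dt = alpha * d2T/dx2 on interior nodes,
        solved simultaneously (unconditionally stable)."""
        n = len(T)
        r = alpha * dt / dx**2
        A = np.zeros((n, n))
        b = T.copy()
        for i in range(n):
            A[i, i] = 1 + 2 * r
            if i > 0:
                A[i, i - 1] = -r
            if i < n - 1:
                A[i, i + 1] = -r
        b[0] += r * T_left     # boundary temperatures enter the right-hand side
        b[-1] += r * T_right
        return np.linalg.solve(A, b)

    # Example: concrete-like diffusivity, 10 interior nodes through a 0.2 m wall
    T = np.full(10, 20.0)
    for _ in range(60):  # one hour in 60 s steps
        T = backward_euler_step(T, alpha=7e-7, dx=0.02, dt=60.0,
                                T_left=35.0, T_right=20.0)
    ```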

  3. Computational Evaluation of Latent Heat Energy Storage Using a High Temperature Phase Change Material

    DTIC Science & Technology

    2012-05-01

    This work computationally evaluates a thermal energy storage system using molten silicon as a phase change material. A cylindrical receiver, absorber, converter system was evaluated for high-temperature operation. Conventional storage media (e.g., molten salts) offer a low power density and a low thermal conductivity, leading to a limited rate of charging and discharging (4).

  4. Production Management System for AMS Computing Centres

    NASA Astrophysics Data System (ADS)

    Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.

    2017-10-01

    The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction, Monte-Carlo (MC) simulation [2], data and MC production, as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquisition, submission, monitoring, transfer, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model, and implemented with script languages, Python and Perl, and the built-in sqlite3 database on Linux operating systems. Different batch management systems, file system storage, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
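
    A minimal sketch of the deterministic-finite-automaton pattern the abstract names, using Python and sqlite3 as the text describes: job states live in a table, and a transition table is the only way they change. The specific states, events, and schema are illustrative assumptions, not AMS's actual ones:

    ```python
    import sqlite3

    # Hypothetical job lifecycle; each (state, event) pair has one successor.
    TRANSITIONS = {
        ("defined", "submit"): "submitted",
        ("submitted", "start"): "running",
        ("running", "finish"): "validating",
        ("validating", "ok"): "transferred",
        ("running", "fail"): "defined",  # failed jobs return for resubmission
    }

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT)")
    db.execute("INSERT INTO jobs (state) VALUES ('defined')")

    def advance(job_id, event):
        """Apply one DFA transition; illegal events raise instead of corrupting state."""
        (state,) = db.execute("SELECT state FROM jobs WHERE id=?", (job_id,)).fetchone()
        new_state = TRANSITIONS.get((state, event))
        if new_state is None:
            raise ValueError(f"event {event!r} not allowed in state {state!r}")
        db.execute("UPDATE jobs SET state=? WHERE id=?", (new_state, job_id))

    advance(1, "submit")
    advance(1, "start")
    ```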

  5. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

    For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. Firstly, deflections of the vertical (DOVs) on dense grids are calculated with the SHM on an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously on that computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS. PMID:27669261
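
    The core step, fitting a two-dimensional second-order polynomial to gridded DOV values, is ordinary least squares on six monomial terms. A minimal sketch with a synthetic grid standing in for the SHM-derived values:

    ```python
    import numpy as np

    def fit_quadratic_2d(lat, lon, dov):
        """Return c for dov ~ c0 + c1*lat + c2*lon + c3*lat^2 + c4*lat*lon
        + c5*lon^2 (inputs are flattened 1-D arrays)."""
        A = np.column_stack([np.ones_like(lat), lat, lon, lat**2, lat*lon, lon**2])
        c, *_ = np.linalg.lstsq(A, dov, rcond=None)
        return c

    def eval_quadratic_2d(c, lat, lon):
        return c[0] + c[1]*lat + c[2]*lon + c[3]*lat**2 + c[4]*lat*lon + c[5]*lon**2

    # Synthetic 5x5 grid; a real run would use DOVs computed from the SHM.
    lat, lon = np.meshgrid(np.linspace(30, 31, 5), np.linspace(110, 111, 5))
    dov = 2.0 + 0.3 * lat - 0.1 * lon
    c = fit_quadratic_2d(lat.ravel(), lon.ravel(), dov.ravel())
    ```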

  6. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

    For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. Firstly, deflections of the vertical (DOVs) on dense grids are calculated with the SHM on an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously on that computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.

  7. Assessment of time-dependent density functional theory with the restricted excitation space approximation for excited state calculations of large systems

    NASA Astrophysics Data System (ADS)

    Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.

    2018-06-01

    The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.

  8. Stabilizing stored PuO2 with addition of metal impurities

    NASA Astrophysics Data System (ADS)

    Moten, Shafaq; Huda, Muhammad

    Plutonium oxide is of widespread significance due to its applications in nuclear fuels and space missions, as well as the long-term storage of plutonium from spent fuel and nuclear weapons. The processes used to refine and store plutonium bring many other elements into contact with the plutonium metal and thereby affect its chemistry. Pure plutonium metal corrodes to an oxide in air; the most stable form of this oxide is stoichiometric plutonium dioxide, PuO2. Defects such as impurities and vacancies can form in the plutonium dioxide before, during and after the refining processes, as well as during storage. An impurity defect manifests itself at the bottom of the conduction band and affects the band gap of the unit cell. Studying the interaction between transition metals and plutonium dioxide is critical for better, more efficient storage plans, as well as for gaining insights to provide a better response to potential threats of exposure to the environment. Our study explores the interaction of a few metals within the plutonium dioxide structure that have a likelihood of being exposed to the plutonium dioxide powder. Using density functional theory, we calculated a substituted metal impurity in a PuO2 supercell. We repeated the calculations with an additional oxygen vacancy. Our results reveal an interesting volume contraction of the PuO2 supercell when one plutonium atom is substituted with a metal atom. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin and High Performance Computing (HPC) at The University of Texas at Arlington.

  9. Two-dimensional model of a Space Station Freedom thermal energy storage canister

    NASA Astrophysics Data System (ADS)

    Kerslake, Thomas W.; Ibrahim, Mounir B.

    1990-08-01

    The Solar Dynamic Power Module being developed for Space Station Freedom uses a eutectic mixture of LiF-CaF2 phase change salt contained in toroidal canisters for thermal energy storage. Results are presented from heat transfer analyses of the phase change salt containment canister. A 2-D, axisymmetric finite difference computer program which models the canister walls, salt, void, and heat engine working fluid coolant was developed. Analyses included effects of conduction in canister walls and solid salt, conduction and free convection in liquid salt, conduction and radiation across salt vapor filled void regions, and forced convection in the heat engine working fluid. Void shape, location, and growth or shrinkage (due to the density difference between the solid and liquid salt phases) were prescribed based on engineering judgement. The salt phase change process was modeled using the enthalpy method. Discussion of results focuses on the role of free convection in the liquid salt on canister heat transfer performance. This role is shown to be important for interpreting the relationship between ground-based canister performance (in 1-g) and expected on-orbit performance (in micro-g). Attention is also focused on the influence of void heat transfer on canister wall temperature distributions. The large thermal resistance of void regions is shown to accentuate canister hot spots and temperature gradients.
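
    The enthalpy method mentioned here tracks enthalpy rather than temperature, so the latent-heat plateau at the melting point is handled without explicitly tracking the phase front. A minimal 1-D explicit sketch follows; the property values are rough placeholders for the LiF-CaF2 eutectic, not the canister model's actual data:

    ```python
    import numpy as np

    cp = 1770.0   # specific heat, J/(kg K) (assumed)
    L = 790e3     # latent heat of fusion, J/kg (assumed)
    Tm = 1040.0   # eutectic melting temperature, K (approximate)

    def temperature(h):
        """Invert the enthalpy-temperature relation: sensible heating below Tm,
        an isothermal latent plateau at Tm, sensible heating again once molten."""
        T = np.where(h < cp * Tm, h / cp, Tm)
        return np.where(h > cp * Tm + L, Tm + (h - cp * Tm - L) / cp, T)

    def step(h, k, rho, dx, dt, T_left, T_right):
        """One explicit update: enthalpy changes by the divergence of heat flux."""
        T = temperature(h)
        Tf = np.concatenate(([T_left], T, [T_right]))
        lap = (Tf[2:] - 2.0 * Tf[1:-1] + Tf[:-2]) / dx**2
        return h + dt * (k / rho) * lap

    h = np.full(50, cp * 1000.0)  # solid salt initially at 1000 K
    for _ in range(1000):         # hot wall on the left drives melting
        h = step(h, k=4.0, rho=2100.0, dx=1e-3, dt=0.05,
                 T_left=1100.0, T_right=1000.0)
    ```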

  10. Two-dimensional model of a Space Station Freedom thermal energy storage canister

    NASA Astrophysics Data System (ADS)

    Kerslake, Thomas W.; Ibrahim, Mounir B.

    The Solar Dynamic Power Module being developed for Space Station Freedom uses a eutectic mixture of LiF-CaF2 phase change salt contained in toroidal canisters for thermal energy storage. Results are presented from heat transfer analyses of the phase-change salt containment canister. A 2-D, axisymmetric finite-difference computer program which models the canister walls, salt, void, and heat engine working fluid coolant was developed. Analyses included effects of conduction in canister walls and solid salt, conduction and free convection in liquid salt, conduction and radiation across salt vapor filled void regions, and forced convection in the heat engine working fluid. Void shape, location, and growth or shrinkage (due to density difference between the solid and liquid salt phases) were prescribed based on engineering judgement. The salt phase change process was modeled using the enthalpy method. Discussion of results focuses on the role of free-convection in the liquid salt on canister heat transfer performance. This role is shown to be important for interpreting the relationship between groundbased canister performance (in 1-g) and expected on-orbit performance (in micro-g). Attention is also focused on the influence of void heat transfer on canister wall temperature distributions. The large thermal resistance of void regions is shown to accentuate canister hot spots and temperature gradients.

  11. Two-dimensional model of a Space Station Freedom thermal energy storage canister

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Ibrahim, Mounir B.

    1990-01-01

    The Solar Dynamic Power Module being developed for Space Station Freedom uses a eutectic mixture of LiF-CaF2 phase change salt contained in toroidal canisters for thermal energy storage. Results are presented from heat transfer analyses of the phase-change salt containment canister. A 2-D, axisymmetric finite-difference computer program which models the canister walls, salt, void, and heat engine working fluid coolant was developed. Analyses included effects of conduction in canister walls and solid salt, conduction and free convection in liquid salt, conduction and radiation across salt vapor filled void regions, and forced convection in the heat engine working fluid. Void shape, location, and growth or shrinkage (due to density difference between the solid and liquid salt phases) were prescribed based on engineering judgement. The salt phase change process was modeled using the enthalpy method. Discussion of results focuses on the role of free-convection in the liquid salt on canister heat transfer performance. This role is shown to be important for interpreting the relationship between groundbased canister performance (in 1-g) and expected on-orbit performance (in micro-g). Attention is also focused on the influence of void heat transfer on canister wall temperature distributions. The large thermal resistance of void regions is shown to accentuate canister hot spots and temperature gradients.

  12. Feasibility study for measurement of insulation compaction in the cryogenic rocket fuel storage tanks at Kennedy Space Center by fast/thermal neutron techniques

    NASA Astrophysics Data System (ADS)

    Livingston, R. A.; Schweitzer, J. S.; Parsons, A. M.; Arens, E. E.

    2014-02-01

    The liquid hydrogen and oxygen cryogenic storage tanks at John F. Kennedy Space Center (KSC) use expanded perlite as thermal insulation. Some of the perlite may have compacted over time, compromising the thermal performance and also the structural integrity of the tanks. Neutrons can readily penetrate the 1.75 cm outer steel shell and the entire 120 cm thick perlite zone. Neutron interactions with materials produce characteristic gamma rays, which are then detected. In compacted perlite, the count rates in the individual peaks of the gamma ray spectrum will increase. Portable neutron generators can produce simultaneous neutron fluxes in two energy ranges: fast (14 MeV) and thermal (25 meV). Fast neutrons produce gamma rays by inelastic scattering, which is sensitive to Si, Al, Fe and O. Thermal neutrons produce gamma rays by radiative capture in prompt gamma neutron activation (PGNA), which is sensitive to Si, Al, Na, K and H, among others. The results of computer simulations using the software MCNP and measurements on a test article suggest that the most promising approach would be to operate the system in time-of-flight mode by pulsing the neutron generator and observing the subsequent die-away curve in the PGNA signal.

  13. Two-dimensional model of a Space Station Freedom thermal energy storage canister

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Ibrahim, Mounir B.

    1990-01-01

    The Solar Dynamic Power Module being developed for Space Station Freedom uses a eutectic mixture of LiF-CaF2 phase change salt contained in toroidal canisters for thermal energy storage. Results are presented from heat transfer analyses of the phase change salt containment canister. A 2-D, axisymmetric finite difference computer program which models the canister walls, salt, void, and heat engine working fluid coolant was developed. Analyses included effects of conduction in canister walls and solid salt, conduction and free convection in liquid salt, conduction and radiation across salt vapor filled void regions, and forced convection in the heat engine working fluid. Void shape, location, and growth or shrinkage (due to the density difference between the solid and liquid salt phases) were prescribed based on engineering judgement. The salt phase change process was modeled using the enthalpy method. Discussion of results focuses on the role of free convection in the liquid salt on canister heat transfer performance. This role is shown to be important for interpreting the relationship between ground-based canister performance (in 1-g) and expected on-orbit performance (in micro-g). Attention is also focused on the influence of void heat transfer on canister wall temperature distributions. The large thermal resistance of void regions is shown to accentuate canister hot spots and temperature gradients.

  14. Proposal for implementation of CCSDS standards for use with spacecraft engineering/housekeeping data

    NASA Technical Reports Server (NTRS)

    Welch, Dave

    1994-01-01

    Many of today's low Earth orbiting spacecraft use the Consultative Committee for Space Data Systems (CCSDS) protocol to better optimize downlink RF bandwidth and onboard storage space. However, most of the associated housekeeping data has continued to be generated and downlinked in a synchronous, Time Division Multiplexed (TDM) fashion. The CCSDS protocol allows many economies in the use of the available bandwidth and storage space to optimize housekeeping data for operational trending and analysis work. By outputting only what is currently important or of interest, finer resolution of critical items can be obtained. This can be accomplished by better utilizing the normally allocated housekeeping downlink and storage areas rather than taking space reserved for science.
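
    For context, CCSDS telemetry rides in Space Packets whose 6-octet primary header is defined by the public CCSDS 133.0-B recommendation. The sketch below packs that header; the APID, sequence count, and length values are illustrative:

    ```python
    import struct

    def ccsds_primary_header(apid, seq_count, data_length,
                             packet_type=0, sec_hdr=0, seq_flags=0b11, version=0):
        """Pack the 6-octet CCSDS Space Packet primary header (CCSDS 133.0-B).
        data_length is the packet data field length in octets; the header
        field stores (length - 1)."""
        word1 = (version << 13) | (packet_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
        word2 = (seq_flags << 14) | (seq_count & 0x3FFF)
        return struct.pack(">HHH", word1, word2, data_length - 1)

    # e.g. an unsegmented housekeeping telemetry packet on APID 0x042
    # carrying 32 data octets
    header = ccsds_primary_header(apid=0x042, seq_count=7, data_length=32)
    ```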

  15. Solid-solid phase change thermal storage application to space-suit battery pack

    NASA Astrophysics Data System (ADS)

    Son, Chang H.; Morehouse, Jeffrey H.

    1989-01-01

    High cell temperatures are seen as the primary safety problem in the Li-BCX space battery. The exothermic heat from the chemical reactions could raise the temperature of the lithium electrode above the melting temperature. Also, high temperature causes the cell efficiency to decrease. Solid-solid phase-change materials were used as a thermal storage medium to lower this battery cell temperature by utilizing their phase-change (latent heat storage) characteristics. Solid-solid phase-change materials focused on in this study are neopentyl glycol and pentaglycerine. Because of their favorable phase-change characteristics, these materials appear appropriate for space-suit battery pack use. The results of testing various materials are reported as thermophysical property values, and the space-suit battery operating temperature is discussed in terms of these property results.

  16. Proposal for implementation of CCSDS standards for use with spacecraft engineering/housekeeping data

    NASA Astrophysics Data System (ADS)

    Welch, Dave

    1994-11-01

    Many of today's low Earth orbiting spacecraft use the Consultative Committee for Space Data Systems (CCSDS) protocol to better optimize downlink RF bandwidth and onboard storage space. However, most of the associated housekeeping data has continued to be generated and downlinked in a synchronous, Time Division Multiplexed (TDM) fashion. The CCSDS protocol allows many economies in the use of the available bandwidth and storage space to optimize housekeeping data for operational trending and analysis work. By outputting only what is currently important or of interest, finer resolution of critical items can be obtained. This can be accomplished by better utilizing the normally allocated housekeeping downlink and storage areas rather than taking space reserved for science.

  17. First-Principles Modeling of Hydrogen Storage in Metal Hydride Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Karl Johnson

    The objective of this project is to complement experimental efforts of MHoCE partners by using state-of-the-art theory and modeling to study the structure, thermodynamics, and kinetics of hydrogen storage materials. Specific goals include prediction of the heats of formation and other thermodynamic properties of alloys from first principles methods, identification of new alloys that can be tested experimentally, calculation of surface and energetic properties of nanoparticles, and calculation of kinetics involved with hydrogenation and dehydrogenation processes. Discovery of new metal hydrides with enhanced properties compared with existing materials is a critical need for the Metal Hydride Center of Excellence. New materials discovery can be aided by the use of first principles (ab initio) computational modeling in two ways: (1) The properties, including mechanisms, of existing materials can be better elucidated through a combined modeling/experimental approach. (2) The thermodynamic properties of novel materials that have not been made can, in many cases, be quickly screened with ab initio methods. We have used state-of-the-art computational techniques to explore millions of possible reaction conditions consisting of different element spaces, compositions, and temperatures. We have identified potentially promising single- and multi-step reactions that can be explored experimentally.

  18. Recent Advances in Photonic Devices for Optical Computing and the Role of Nonlinear Optics-Part II

    NASA Technical Reports Server (NTRS)

    Abdeldayem, Hossin; Frazier, Donald O.; Witherow, William K.; Banks, Curtis E.; Paley, Mark S.

    2007-01-01

    The twentieth century was the era of semiconductor materials and electronic technology, while this millennium is expected to be the age of photonic materials and all-optical technology. Optical technology has led to countless optical devices that have become indispensable in our daily lives in storage area networks, parallel processing, optical switches, all-optical data networks, holographic storage devices, and biometric devices at airports. This chapter intends to bring some awareness of the state of the art of optical technologies that have potential for optical computing, and to demonstrate the role of nonlinear optics in many of these components. Our intent in this chapter is to present an overview of the current status of optical computing, and a brief evaluation of the recent advances and performance of the following key components necessary to build an optical computing system: all-optical logic gates, adders, optical processors, optical storage, holographic storage, optical interconnects, spatial light modulators and optical materials.

  19. Improvements to the Ionizing Radiation Risk Assessment Program for NASA Astronauts

    NASA Technical Reports Server (NTRS)

    Semones, E. J.; Bahadori, A. A.; Picco, C. E.; Shavers, M. R.; Flores-McLaughlin, J.

    2011-01-01

    To perform dosimetry and risk assessment, NASA collects astronaut ionizing radiation exposure data from space flight, medical imaging and therapy, aviation training activities and prior occupational exposure histories. Career risk of exposure-induced death (REID) from radiation is limited to 3 percent at a 95 percent confidence level. The Radiation Health Office at Johnson Space Center (JSC) is implementing a program to integrate the gathering, storage, analysis and reporting of astronaut ionizing radiation dose and risk data and records. This work has several motivations, including more efficient analyses and greater flexibility in testing and adopting new methods for evaluating risks. The foundation for these improvements is a set of software tools called the Astronaut Radiation Exposure Analysis System (AREAS). AREAS is a series of MATLAB®-based dose and risk analysis modules that interface with an enterprise-level SQL Server database by means of a secure web service. It communicates with other JSC medical and space weather databases to maintain data integrity and consistency across systems. AREAS is part of a larger NASA Space Medicine effort, the Mission Medical Integration Strategy, with the goal of collecting accurate, high-quality and detailed astronaut health data, and then presenting it securely, reliably, and in a timely manner to medical support personnel. The modular approach to the AREAS design accommodates past, current, and future sources of data from active and passive detectors, space radiation transport algorithms, computational phantoms and cancer risk models. Revisions of the cancer risk model, new radiation detection equipment and improved anthropomorphic computational phantoms can be incorporated. Notable hardware updates include the Radiation Environment Monitor (which uses Medipix technology to report real-time, on-board dosimetry measurements), an updated Tissue-Equivalent Proportional Counter, and the Southwest Research Institute Radiation Assessment Detector. Also, the University of Florida hybrid phantoms, which are flexible in morphometry and positioning, are being explored as alternatives to the current NASA computational phantoms.

  20. European development experience on energy storage wheels for space

    NASA Technical Reports Server (NTRS)

    Robinson, A. A.

    1984-01-01

    High speed fiber composite rotors suspended by contactless magnetic bearings were produced. European industry has acquired expertise in the study and fabrication of energy storage wheels and magnetic suspension systems for space. Sufficient energy density performance for space viability is being achieved on fully representative hardware. Stress cycle testing to demonstrate life capability and the development of burst containment structures remains to be done and is the next logical step.

  1. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  2. Advanced long term cryogenic storage systems

    NASA Technical Reports Server (NTRS)

    Brown, Norman S.

    1987-01-01

    Long term, cryogenic fluid storage facilities will be required to support future space programs such as the space-based Orbital Transfer Vehicle (OTV), Telescopes, and Laser Systems. An orbital liquid oxygen/liquid hydrogen storage system with an initial capacity of approximately 200,000 lb will be required. The storage facility tank design must have the capability of fluid acquisition in microgravity and limit cryogen boiloff due to environmental heating. Cryogenic boiloff management features, minimizing Earth-to-orbit transportation costs, will include advanced thick multilayer insulation/integrated vapor cooled shield concepts, low conductance support structures, and refrigeration/reliquefaction systems. Contracted study efforts are under way to develop storage system designs, technology plans, test article hardware designs, and develop plans for ground/flight testing.

  3. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage

    PubMed Central

    Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-01-01

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information available to the untrusted cloud by hiding the relevance between query keywords and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query. PMID:29652810

  4. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage.

    PubMed

    Guo, Yeting; Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-04-13

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information available to the untrusted cloud by hiding the relevance between query keywords and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query.

  5. 4. PHOTOCOPY OF DRAWING (1976 STRUCTURAL AND ELECTRICAL DRAWING BY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. PHOTOCOPY OF DRAWING (1976 STRUCTURAL AND ELECTRICAL DRAWING BY THE SPACE AND MISSILE TEST CENTER, VAFB, USAF) STRUCTURAL AND ELECTRICAL DIAGRAM FOR EQUIPMENT STORAGE BUILDING, SHEET S-26 - Vandenberg Air Force Base, Space Launch Complex 3, Storage Shed, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  6. Performance evaluation of the Engineering Analysis and Data Systems (EADS) 2

    NASA Technical Reports Server (NTRS)

    Debrunner, Linda S.

    1994-01-01

    The Engineering Analysis and Data System (EADS) II (1) was installed in March 1993 to provide high performance computing for science and engineering at Marshall Space Flight Center (MSFC). EADS II increased the computing capabilities over the existing EADS facility in the areas of throughput and mass storage. EADS II includes a Vector Processor Compute System (VPCS), a Virtual Memory Compute System, a Common File System (CFS), and a Common Output System (COS), as well as an Image Processing Station, mini-supercomputers, and intelligent workstations. These facilities are interconnected by a sophisticated network system. This work considers only the performance of the VPCS and the CFS. The VPCS is a Cray YMP. The CFS is implemented on an RS 6000 using the UniTree Mass Storage System. To better meet the science and engineering computing requirements, EADS II must be monitored, its performance analyzed, and appropriate modifications for performance improvement made. Implementing this approach requires tools to assist in performance monitoring and analysis. In Spring 1994, PerfStat 2.0 was purchased to meet these needs for the VPCS and the CFS. PerfStat (2) is a set of tools that can be used to analyze both historical and real-time performance data. Its flexible design allows significant user customization: the user identifies what data is collected, how it is classified, and how it is displayed for evaluation. Both graphical and tabular displays are supported. The capability of the PerfStat tool was evaluated, appropriate modifications to EADS II to optimize throughput and enhance productivity were suggested and implemented, and the effects of these modifications on system performance were observed. In this paper, the PerfStat tool is described, then its use with EADS II is outlined briefly. Next, the evaluation of the VPCS, as well as the modifications made to the system, are described. Finally, conclusions are drawn and recommendations for future work are outlined.

  7. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing approach based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and meeting the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images in an actual Hadoop service system.
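
    To illustrate the MapReduce pattern the abstract describes, here is a toy, in-process sketch of pyramid building over image tiles: mappers emit each downsampled tile keyed by its parent tile, and a reducer assembles the four quadrants. Tile ids and the decimation step are illustrative, and a real deployment would express this as a Hadoop job rather than plain Python:

    ```python
    from itertools import groupby

    def downsample(tile):
        return [row[::2] for row in tile[::2]]  # naive 2x decimation

    def mapper(tile_id, tile):
        """Emit (parent_tile_id, (child_id, downsampled_tile))."""
        parent = (tile_id[0] // 2, tile_id[1] // 2)
        return parent, (tile_id, downsample(tile))

    def reducer(parent, children):
        """Assemble the child quadrants under the coarser parent key."""
        return parent, dict(children)

    tiles = {(0, 0): [[1] * 8] * 8, (0, 1): [[2] * 8] * 8,
             (1, 0): [[3] * 8] * 8, (1, 1): [[4] * 8] * 8}
    mapped = sorted((mapper(tid, t) for tid, t in tiles.items()),
                    key=lambda kv: kv[0])  # shuffle/sort phase
    pyramid = [reducer(k, (v for _, v in grp))
               for k, grp in groupby(mapped, key=lambda kv: kv[0])]
    ```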

  8. NASA Tech Briefs, January 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics covered include: Flexible Skins Containing Integrated Sensors and Circuitry; Artificial Hair Cells for Sensing Flows; Video Guidance Sensor and Time-of-Flight Rangefinder; Optical Beam-Shear Sensors; Multiple-Agent Air/Ground Autonomous Exploration Systems; A 640 512-Pixel Portable Long-Wavelength Infrared Camera; An Array of Optical Receivers for Deep-Space Communications; Microstrip Antenna Arrays on Multilayer LCP Substrates; Applications for Subvocal Speech; Multiloop Rapid-Rise/Rapid Fall High-Voltage Power Supply; The PICWidget; Fusing Symbolic and Numerical Diagnostic Computations; Probabilistic Reasoning for Robustness in Automated Planning; Short-Term Forecasting of Radiation Belt and Ring Current; JMS Proxy and C/C++ Client SDK; XML Flight/Ground Data Dictionary Management; Cross-Compiler for Modeling Space-Flight Systems; Composite Elastic Skins for Shape-Changing Structures; Glass/Ceramic Composites for Sealing Solid Oxide Fuel Cells; Aligning Optical Fibers by Means of Actuated MEMS Wedges; Manufacturing Large Membrane Mirrors at Low Cost; Double-Vacuum-Bag Process for Making Resin- Matrix Composites; Surface Bacterial-Spore Assay Using Tb3+/DPA Luminescence; Simplified Microarray Technique for Identifying mRNA in Rare Samples; High-Resolution, Wide-Field-of-View Scanning Telescope; Multispectral Imager With Improved Filter Wheel and Optics; Integral Radiator and Storage Tank; Compensation for Phase Anisotropy of a Metal Reflector; Optical Characterization of Molecular Contaminant Films; Integrated Hardware and Software for No-Loss Computing; Decision-Tree Formulation With Order-1 Lateral Execution; GIS Methodology for Planning Planetary-Rover Operations; Optimal Calibration of the Spitzer Space Telescope; Automated Detection of Events of Scientific Interest; Representation-Independent Iteration of Sparse Data Arrays; Mission Operations of the Mars Exploration Rovers; and More About Software for No-Loss Computing.

  9. Compact Holographic Data Storage

    NASA Technical Reports Server (NTRS)

    Chao, T. H.; Reyes, G. F.; Zhou, H.

    2001-01-01

    NASA's future missions would require massive high-speed onboard data storage capability for Space Science missions. For Space Science, such as the Europa Lander mission, the onboard data storage requirements would be focused on maximizing the spacecraft's ability to survive fault conditions (i.e., no loss in stored science data when the spacecraft enters 'safe mode') and autonomously recover from them during NASA's long-life and deep space missions. This would require the development of non-volatile memory. In order to survive the stringent environment of space exploration missions, onboard memory requirements would also include: (1) surviving a high radiation environment (1 Mrad), (2) operating effectively and efficiently for a very long time (10 years), and (3) sustaining at least a billion write cycles. Therefore, the memory technology requirements of NASA's Earth Science and Space Science missions are large capacity, non-volatility, high transfer rate, high radiation resistance, high storage density, and high power efficiency. JPL, under current sponsorship from NASA Space Science and Earth Science Programs, is developing a high-density, non-volatile and rad-hard Compact Holographic Data Storage (CHDS) system to enable large-capacity, high-speed, low-power read/write of data in a space environment. The entire read/write operation will be controlled with an electro-optic mechanism without any moving parts. The CHDS will consist of laser diodes, a photorefractive crystal, a spatial light modulator, a photodetector array, and an I/O electronic interface. In operation, pages of information will be recorded and retrieved with random access at high speed. The non-volatile, rad-hard characteristics of the holographic memory will provide a revolutionary memory technology meeting the high-radiation challenge facing the Europa Lander mission. Additional information is contained in the original extended abstract.

  10. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    NASA Astrophysics Data System (ADS)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; Sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. EGI and EUDAT, in the context of their flagship projects, EGI-Engage and EUDAT2020, started a collaboration in March 2015 to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end users with seamless access to an integrated infrastructure offering both EGI and EUDAT services, pairing data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help to assign the right priorities to each of them. In this way, the activity has been driven by the end users from the beginning. The identified user communities are relevant European research infrastructures in the fields of Earth science (EPOS and ICOS), bioinformatics (BBMRI and ELIXIR) and space physics (EISCAT-3D). The first outcome of this activity has been the definition of a generic use case that captures the typical user scenario with respect to the integrated use of the EGI and EUDAT infrastructures. This generic use case allows a user to instantiate a set of Virtual Machine images on the EGI Federated Cloud to perform computational jobs that analyse data previously stored on EUDAT long-term storage systems. The results of such analysis can be staged back to EUDAT storage, and if needed, assigned Permanent Identifiers (PIDs) for future use. The implementation of this generic use case requires the following integration activities between EGI and EUDAT: (1) harmonising the user authentication and authorisation models, and (2) implementing interface connectors between the relevant EGI and EUDAT services, particularly EGI Cloud compute facilities and the EUDAT long-term storage and PID systems. In the presentation, the collected user requirements and the implementation status of the generic use case will be shown. Furthermore, how the generic use case is currently applied to satisfy EPOS and ICOS needs will be described.

  11. 3D hierarchical spatial representation and memory of multimodal sensory data

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for processing and remembering multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space, as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) a simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate system, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, locations of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of them from a spatial perspective (e.g., where the sensory information is coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine/robot degrees of freedom, the desired movements and actions can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with an arm/hand-like structure, and/or a robot with some or all of the above capabilities. We describe the approach and system and present preliminary results on a real robotic platform.
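
    As a toy illustration of converting between levels of such a hierarchy (a 2D simplification with a made-up pan angle and head offset, not the paper's algorithm):

      # Convert a sensed target between levels of a spatial hierarchy:
      # camera-centered -> head-centered -> body-centered coordinates.
      # 2D rotation-plus-translation only; angles and offsets are illustrative.

      import math

      def to_parent_frame(point, frame_angle, frame_origin):
          """Rotate by the child frame's orientation, then translate by its origin."""
          x, y = point
          c, s = math.cos(frame_angle), math.sin(frame_angle)
          return (c * x - s * y + frame_origin[0],
                  s * x + c * y + frame_origin[1])

      # A target seen 1 m straight ahead of a camera panned 30 degrees,
      # on a head mounted 0.2 m forward of the body origin.
      target_cam = (1.0, 0.0)
      target_head = to_parent_frame(target_cam, math.radians(30.0), (0.0, 0.0))
      target_body = to_parent_frame(target_head, 0.0, (0.2, 0.0))
      print(target_head, target_body)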

  12. USING COMPUTER MODELS TO DETERMINE THE EFFECT OF STORAGE ON WATER QUALITY

    EPA Science Inventory

    Studies have indicated that water quality is degraded as a result of long residence times in storage tanks, highlighting the importance of tank design, location, and operation. Computer models, developed to explain some of the mixing and distribution issues associated with tank...

  13. 76 FR 22682 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-22

    ...: Maintained in file folders and computer storage media. Retrievability: Retrieved by name and/or Social... folders and computer storage media.'' * * * * * System Manager(s) and address: Delete entry and replace... provide their full name, Social Security Number (SSN), any details which may assist in locating records...

  14. Energy storage and thermal control system design status. [for space station power supplies

    NASA Technical Reports Server (NTRS)

    Simons, Stephen N.; Willhoite, Bryan C.; Van Ommering, Gert

    1989-01-01

    The Space Station Freedom electric power system (EPS) will initially rely on photovoltaics for power generation and Ni/H2 batteries for electrical energy storage. The current design and development status of two major subsystems in the PV Power Module are discussed: the energy storage subsystem, comprised of high-capacity Ni/H2 batteries, and the single-phase thermal control system that rejects the excess heat generated by the batteries and other components associated with power generation and storage.

  15. Challenges Encountered Using Ophthalmic Anesthetics in Space Medicine

    NASA Technical Reports Server (NTRS)

    Bayuse, T.; Law, J.; Alexander, D.; Moynihan, S.; LeBlanc, C.; Langford, K.; Magalhaes, L.

    2015-01-01

    On orbit, ophthalmic anesthetics are used for tonometry and off-nominal corneal examinations. Proparacaine has traditionally been flown. However, the manufacturer recently changed its storage requirements from room-temperature storage to refrigerated storage to preserve stability and prolong shelf-life. Since refrigeration on orbit is not readily available and there were stability concerns about flying proparacaine unrefrigerated, tetracaine was selected as an alternative ophthalmic anesthetic in 2013. We will discuss the challenges encountered flying and using these anesthetics on the International Space Station.

  16. Space-time VMS computation of wind-turbine rotor and tower aerodynamics

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; McIntyre, Spenser; Kostov, Nikolay; Kolesar, Ryan; Habluetzel, Casey

    2014-01-01

    We present the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.
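
    The exact-circle property invoked above is a standard result of NURBS theory, stated here in generic form rather than as the paper's specific construction: a circular arc of half-angle theta is reproduced exactly by a quadratic rational Bezier segment

      C(t) = \frac{\sum_{i=0}^{2} w_i \, B_{i,2}(t) \, P_i}{\sum_{i=0}^{2} w_i \, B_{i,2}(t)},
      \qquad w_0 = w_2 = 1, \quad w_1 = \cos\theta,

    where the B_{i,2} are the quadratic Bernstein polynomials, P_0 and P_2 lie on the circle, and P_1 sits at the intersection of the end tangents. No purely polynomial basis can do this exactly, which is why temporal NURBS can represent the rotor's circular paths without error.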

  17. Space-Time VMS Computation of Wind-Turbine Rotor and Tower Aerodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Spenser W.

    This thesis is on the space{time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent ows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of ows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational exibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.

  18. Space Station thermal storage/refrigeration system research and development

    NASA Astrophysics Data System (ADS)

    Dean, W. G.; Karu, Z. S.

    1993-02-01

    Space Station thermal loading conditions represent an order of magnitude increase over current and previous spacecraft such as Skylab, Apollo, Pegasus III, the Lunar Rover Vehicle, and Lockheed TRIDENT missiles. Thermal storage units (TSU's) were successfully used on these spacecraft as well as in many ground-based solar energy storage applications. It is desirable to store thermal energy during peak loading conditions as an alternative to providing increased radiator surface area, which adds to the weight of the system. Basically, TSU's store heat by melting a phase change material (PCM) such as a paraffin. The physical property data for the PCM's used in the design of these TSU's are well defined in the literature, and design techniques for TSU's are generally well established. However, the Space Station provides a new challenge in the application of these data and techniques because of three factors: the large size of the TSU required, the integration of the TSU into the Space Station thermal management concept with its diverse opportunities for storage application, and the TSU's interface with a two-phase (liquid/vapor) thermal bus/central heat rejection system. The objective of the thermal storage research and development task was to design, fabricate, and test a demonstration unit. One test article was to be a passive thermal storage unit capable of storing frozen food at -20 F for a minimum of 90 days. A second unit was to be capable of storing frozen biological samples at -94 F, again for a minimum of 90 days. The articles developed were compatible with shuttle mission conditions, including safety and handling by astronauts. Further, storage rack concepts were presented so that these units can be integrated into Space Station logistics module storage racks. The extreme sensitivity of spacecraft radiator system design to heat-rejection temperature requirements is well known: a large radiator area penalty is incurred if low temperatures are accommodated via a single centralized radiator system. As per the scope of work of this task, the applicability of a refrigeration system tailored to meet the specialized requirements of storing food and biological samples was investigated. The issues addressed were the anticipated power consumption and feasible designs and cycles for meeting specific storage requirements. Further, development issues were assessed related to the operation of vapor compression systems in microgravity, addressing the separation of vapor and liquid phases (via capillary systems).
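
    For reference, the heat a TSU absorbs while charging from temperature T_1 through the melt point T_m to T_2 follows the textbook latent-heat relation (not taken from the report itself):

      Q = m \left[ c_{p,s} \,(T_m - T_1) + L_f + c_{p,l} \,(T_2 - T_m) \right],

    where m is the PCM mass, c_{p,s} and c_{p,l} are the solid and liquid specific heats, and L_f is the latent heat of fusion, the usually dominant term that makes a paraffin PCM compact.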

  19. Space Station thermal storage/refrigeration system research and development

    NASA Technical Reports Server (NTRS)

    Dean, W. G.; Karu, Z. S.

    1993-01-01

    Space Station thermal loading conditions represent an order of magnitude increase over current and previous spacecraft such as Skylab, Apollo, Pegasus III, the Lunar Rover Vehicle, and Lockheed TRIDENT missiles. Thermal storage units (TSU's) were successfully used on these spacecraft as well as in many ground-based solar energy storage applications. It is desirable to store thermal energy during peak loading conditions as an alternative to providing increased radiator surface area, which adds to the weight of the system. Basically, TSU's store heat by melting a phase change material (PCM) such as a paraffin. The physical property data for the PCM's used in the design of these TSU's are well defined in the literature, and design techniques for TSU's are generally well established. However, the Space Station provides a new challenge in the application of these data and techniques because of three factors: the large size of the TSU required, the integration of the TSU into the Space Station thermal management concept with its diverse opportunities for storage application, and the TSU's interface with a two-phase (liquid/vapor) thermal bus/central heat rejection system. The objective of the thermal storage research and development task was to design, fabricate, and test a demonstration unit. One test article was to be a passive thermal storage unit capable of storing frozen food at -20 F for a minimum of 90 days. A second unit was to be capable of storing frozen biological samples at -94 F, again for a minimum of 90 days. The articles developed were compatible with shuttle mission conditions, including safety and handling by astronauts. Further, storage rack concepts were presented so that these units can be integrated into Space Station logistics module storage racks. The extreme sensitivity of spacecraft radiator system design to heat-rejection temperature requirements is well known: a large radiator area penalty is incurred if low temperatures are accommodated via a single centralized radiator system. As per the scope of work of this task, the applicability of a refrigeration system tailored to meet the specialized requirements of storing food and biological samples was investigated. The issues addressed were the anticipated power consumption and feasible designs and cycles for meeting specific storage requirements. Further, development issues were assessed related to the operation of vapor compression systems in microgravity, addressing the separation of vapor and liquid phases (via capillary systems).

  20. Distributed metadata in a high performance computing environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising: receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value; determining which of the one or more burst buffers stores the requested metadata; and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
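
    A runnable sketch of the claimed lookup flow, with the hash-based placement and class names assumed for illustration:

      # Route a metadata request for a data block to whichever burst buffer
      # holds its key-value entry. Hashing scheme and names are assumptions.

      from zlib import crc32

      class BurstBuffer:
          def __init__(self, name):
              self.name = name
              self.kv_store = {}   # this buffer's share of the key-value store

          def put_meta(self, key, value):
              self.kv_store[key] = value

          def get_meta(self, key):
              return self.kv_store.get(key)

      class DistributedMetadata:
          def __init__(self, buffers):
              self.buffers = buffers

          def locate(self, block_key):
              """Determine which burst buffer stores the requested metadata."""
              return self.buffers[crc32(block_key.encode()) % len(self.buffers)]

          def request(self, block_key):
              return self.locate(block_key).get_meta(block_key)

      bbs = [BurstBuffer("bb%d" % i) for i in range(4)]
      md = DistributedMetadata(bbs)
      md.locate("block-0042").put_meta("block-0042", {"offset": 0, "len": 1 << 20})
      print(md.request("block-0042"))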

  1. Development of the method of aggregation to determine the current storage area using computer vision and radiofrequency identification

    NASA Astrophysics Data System (ADS)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for positioning and monitoring systems in industrial plants, based on aggregating computer vision and radio-frequency identification to determine the current storage area of an item. It describes the hardware design for a plant-wide positioning system for industrial products based on a radio-frequency grid, the corresponding design based on computer vision methods, and the method of aggregation that combines the two sources to determine the current storage area. Experimental studies in laboratory and production conditions have been conducted and are described in the article.
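
    A hypothetical sketch of the aggregation step (zone names, scores, and weights are all illustrative; the article's actual method may differ):

      # Combine a storage-area estimate from computer vision with one from an
      # RFID grid, weighting each source by an assumed confidence.

      def aggregate_zone(cv_scores, rfid_scores, w_cv=0.6, w_rfid=0.4):
          """Each argument maps storage-zone id -> detection confidence in [0, 1]."""
          zones = set(cv_scores) | set(rfid_scores)
          combined = {z: w_cv * cv_scores.get(z, 0.0) + w_rfid * rfid_scores.get(z, 0.0)
                      for z in zones}
          return max(combined, key=combined.get), combined

      zone, scores = aggregate_zone(
          cv_scores={"A3": 0.8, "A4": 0.2},     # vision sees the item near zone A3
          rfid_scores={"A3": 0.5, "B1": 0.3},   # RFID grid mostly agrees
      )
      print(zone, scores)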

  2. 14 CFR 27.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Storage battery design and installation. 27... Equipment § 27.1353 Storage battery design and installation. (a) Each storage battery must be designed and... result when the battery is recharged (after previous complete discharge)— (1) At maximum regulated...

  3. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  4. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  5. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  6. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  7. 14 CFR 27.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Storage battery design and installation. 27... Equipment § 27.1353 Storage battery design and installation. (a) Each storage battery must be designed and... result when the battery is recharged (after previous complete discharge)— (1) At maximum regulated...

  8. 14 CFR 27.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Storage battery design and installation. 27... Equipment § 27.1353 Storage battery design and installation. (a) Each storage battery must be designed and... result when the battery is recharged (after previous complete discharge)— (1) At maximum regulated...

  9. 14 CFR 27.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Storage battery design and installation. 27... Equipment § 27.1353 Storage battery design and installation. (a) Each storage battery must be designed and... result when the battery is recharged (after previous complete discharge)— (1) At maximum regulated...

  10. 14 CFR 27.1353 - Storage battery design and installation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Storage battery design and installation. 27... Equipment § 27.1353 Storage battery design and installation. (a) Each storage battery must be designed and... result when the battery is recharged (after previous complete discharge)— (1) At maximum regulated...

  11. Integration of end-user Cloud storage for CMS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage, named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  12. Integration of end-user Cloud storage for CMS analysis

    DOE PAGES

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...

    2017-05-19

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage, named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.
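
    As a hedged illustration of the kind of Grid-to-Cloud transfer FTS3 brokers, the sketch below posts a job to an FTS3-style REST endpoint; the host name, certificate paths, and exact JSON layout are assumptions to be checked against the FTS3 documentation:

      # Submit a third-party transfer to an FTS3-style REST endpoint.
      # Endpoint, credentials, and JSON schema below are illustrative only.

      import json
      import requests

      FTS_ENDPOINT = "https://fts3.example.org:8446"   # hypothetical server

      job = {
          "files": [{
              "sources": ["srm://grid-se.example.org/store/user/data.root"],
              "destinations": ["https://cernbox.example.org/eos/user/data.root"],
          }],
      }

      resp = requests.post(
          FTS_ENDPOINT + "/jobs",
          data=json.dumps(job),
          headers={"Content-Type": "application/json"},
          cert=("usercert.pem", "userkey.pem"),        # X.509 client credentials
          verify="/etc/grid-security/certificates",
      )
      resp.raise_for_status()
      print("job id:", resp.json().get("job_id"))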

  13. Cryogenic Selective Surface - How Cold Can We Go?

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert; Nurge, Mark

    2015-01-01

    Selective surfaces have wavelength-dependent emissivity and absorption. These surfaces can be designed to reflect solar radiation while maximizing infrared emittance, yielding a cooling effect even in sunlight. On Earth, cooling to 50 C below ambient has been achieved, but in space, outside of the atmosphere, theory using ideal materials has predicted a maximum cooling to 40 K! If this result holds up for real-world materials and conditions, then superconducting systems and cryogenic storage can be achieved in space without active cooling. Such a result would enable long-term cryogenic storage in deep space and the use of large-scale superconducting systems for such applications as galactic cosmic radiation (GCR) shielding and large-scale energy storage.

  14. Flywheel Energy Storage Technology Being Developed

    NASA Technical Reports Server (NTRS)

    Wolff, Frederick J.

    2001-01-01

    A flywheel energy storage system was spun to 60,000 rpm while levitated on magnetic bearings. This system is being developed as an energy-efficient replacement for chemical battery systems. Used in groups, the flywheels can serve two functions: providing attitude control for a spacecraft in orbit as well as storing energy. The first application for which the NASA Glenn Research Center is developing the flywheel is the International Space Station, where a two-flywheel system will replace one of the nickel-hydrogen battery strings in the space station's power system. The 60,000-rpm development rotor stores about one-eighth the energy that will be needed for the space station (0.395 versus 3.07 kWhr).
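
    As a back-of-envelope check on those figures (not a calculation from the article), the stored energy of a flywheel follows the textbook relation

      E = \tfrac{1}{2} I \omega^{2},
      \qquad \omega = 60{,}000\ \text{rpm} \approx 6.28 \times 10^{3}\ \text{rad/s},

    so holding the quoted 0.395 kWh (about 1.42 MJ) at that speed would imply a rotor moment of inertia of roughly 2E/omega^2, about 0.07 kg m^2. The quadratic dependence on speed is why high rpm matters more than rotor mass.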

  15. Techniques for shuttle trajectory optimization

    NASA Technical Reports Server (NTRS)

    Edge, E. R.; Shieh, C. J.; Powers, W. F.

    1973-01-01

    The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.

  16. Computer Storage and Retrieval of Position-Dependent Data.

    DTIC Science & Technology

    1982-06-01

    This thesis covers the design of a new digital database system to replace the merged (observation and geographic location) record, one file per cruise... "The Digital Data Library System: Library Storage and Retrieval of Digital Geophysical Data" by Robert C. Groan provided a relatively simple... position-dependent, 'geophysical' data. The system is operational on a Digital Equipment Corporation VAX-11/780 computer. Values of measured and computed...

  17. In-Storage Embedded Accelerator for Sparse Pattern Processing

    DTIC Science & Technology

    2016-09-13

    Abstract: We present a novel system architecture for sparse pattern processing (MIT Computer Science & Artificial Intelligence Laboratory)... computation. As a result, a very small processor could be used and still make full use of storage device bandwidth. When the host software sends... [reference fragment: Rean Griffith, Anthony D. Joseph, Randy Katz, Andy Konwinski, Gunho Lee et al., "A view of cloud computing," Communications of the ACM 53, no. 4 (2010)]

  18. User's manual: Computer-aided design programs for inductor-energy-storage dc-to-dc electronic power converters

    NASA Technical Reports Server (NTRS)

    Huffman, S.

    1977-01-01

    Detailed instructions on the use of two computer-aided design programs for designing the energy storage inductor for single-winding and two-winding dc-to-dc converters are provided. Step-by-step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.
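
    The sizing target behind such programs is the energy the inductor must hold each switching cycle, given by the textbook relation (not specific to these programs):

      E = \tfrac{1}{2} L I^{2},

    so, for example, a 100 uH inductor carrying a 10 A peak current stores 0.5 x 10^-4 x 10^2 = 5 mJ per cycle.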

  19. Multiwell CO2 injectivity: impact of boundary conditions and brine extraction on geologic CO2 storage efficiency and pressure buildup.

    PubMed

    Heath, Jason E; McKenna, Sean A; Dewers, Thomas A; Roach, Jesse D; Kobos, Peter H

    2014-01-21

    CO2 storage efficiency is a metric that expresses the portion of the pore space of a subsurface geologic formation that is available to store CO2. Estimates of storage efficiency for large-scale geologic CO2 storage depend on a variety of factors including geologic properties and operational design. These factors govern estimates on CO2 storage resources, the longevity of storage sites, and potential pressure buildup in storage reservoirs. This study employs numerical modeling to quantify CO2 injection well numbers, well spacing, and storage efficiency as a function of geologic formation properties, open-versus-closed boundary conditions, and injection with or without brine extraction. The set of modeling runs is important as it allows the comparison of controlling factors on CO2 storage efficiency. Brine extraction in closed domains can result in storage efficiencies that are similar to those of injection in open-boundary domains. Geomechanical constraints on downhole pressure at both injection and extraction wells lower CO2 storage efficiency as compared to the idealized scenario in which the same volumes of CO2 and brine are injected and extracted, respectively. Geomechanical constraints should be taken into account to avoid potential damage to the storage site.
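
    For reference, one common way to express the storage-efficiency metric described above (a definition widely used in the literature; the paper's exact formulation may differ) is

      E = \frac{V_{CO_2}}{V_{pore}} = \frac{V_{CO_2}}{A \, h \, \phi},

    where V_{CO_2} is the stored CO2 volume at reservoir conditions, A the formation area, h the net thickness, and phi the porosity.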

  20. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and data base structure for three-dimensional computer codes which will eliminate or improve on page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% execution time increase. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  1. The mass storage testing laboratory at GSFC

    NASA Technical Reports Server (NTRS)

    Venkataraman, Ravi; Williams, Joel; Michaud, David; Gu, Heng; Kalluri, Atri; Hariharan, P. C.; Kobler, Ben; Behnke, Jeanne; Peavey, Bernard

    1998-01-01

    Industry-wide benchmarks exist for measuring the performance of processors (SPECmarks) and of database systems (Transaction Processing Council). Despite storage having become the dominant item in computing and IT (Information Technology) budgets, no such common benchmark is available in the mass storage field. Vendors and consultants provide services and tools for capacity planning and sizing, but these do not account for the complete set of metrics needed in today's archives. The availability of automated tape libraries, high-capacity RAID systems, and high-bandwidth interconnectivity between processor and peripherals has led to demands for services which traditional file systems cannot provide. File Storage and Management Systems (FSMS), which began to be marketed in the late 80's, have helped to some extent with large tape libraries, but their use has introduced additional parameters affecting performance. The aim of the Mass Storage Test Laboratory (MSTL) at Goddard Space Flight Center is to develop a test suite that includes not only a comprehensive checklist to document a mass storage environment but also benchmark code. Benchmark code is being tested which will provide measurements both for baseline systems, i.e. applications interacting with peripherals through the operating system services, and for combinations involving an FSMS. The benchmarks are written in C and are easily portable. They are initially being aimed at the UNIX Open Systems world. Measurements are being made using a Sun Ultra 170 Sparc with 256MB memory running Solaris 2.5.1 with the following configuration: 4mm tape stacker on SCSI 2 Fast/Wide; 4GB disk device on SCSI 2 Fast/Wide; and Sony Petaserve on Fast/Wide differential SCSI 2.
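
    The MSTL benchmarks themselves are written in C; purely as an illustration of the baseline measurement described (an application writing through the operating system to a device), here is a minimal Python analogue with arbitrary block and file sizes:

      # Baseline sequential-write bandwidth: application -> OS -> device.
      # Block size and total size are arbitrary illustrative choices.

      import os
      import time

      def write_bandwidth(path, total_mb=64, block_kb=256):
          block = b"\0" * (block_kb * 1024)
          n_blocks = (total_mb * 1024) // block_kb
          start = time.perf_counter()
          with open(path, "wb") as f:
              for _ in range(n_blocks):
                  f.write(block)
              f.flush()
              os.fsync(f.fileno())       # include the time to reach the device
          elapsed = time.perf_counter() - start
          os.remove(path)
          return total_mb / elapsed      # MB/s

      print("%.1f MB/s" % write_bandwidth("bench.tmp"))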

  2. High-temperature thermal storage systems for advanced solar receivers materials selections

    NASA Astrophysics Data System (ADS)

    Wilson, D. F.; Devan, J. H.; Howell, M.

    1990-09-01

    Advanced space power systems that use solar energy and Brayton or Stirling heat engines require thermal energy storage (TES) systems to operate continuously through periods of shade. The receiver storage units, key elements in both Brayton and Stirling systems, are designed to use the latent heat of fusion of phase-change materials (PCMs). The power systems under current consideration for near-future National Aeronautics and Space Administration space missions require working fluid temperatures in the 1100 to 1400 K range. The PCMs under current investigation that have liquidus temperatures within this range are the fluoride family of salts. However, these salts have low thermal conductivity, which causes large temperature gradients in the storage systems. Improvements can be obtained, however, with the use of thermal conductivity enhancements or metallic PCMs. In fact, if suitable containment materials can be found, the use of metallic PCMs would virtually eliminate the orbit-associated temperature variations in TES systems. The high thermal conductivity and generally low volume change on melting of germanium and alloys based on silicon make them attractive for storage of thermal energy in space power systems. An approach to solving the containment problem, involving both chemical and physical compatibility, the preparation of NiSi/NiSi2, and initial results for containment of germanium and NiSi/NiSi2 are presented.

  3. High-temperature thermal storage systems for advanced solar receivers materials selections

    NASA Technical Reports Server (NTRS)

    Wilson, D. F.; Devan, J. H.; Howell, M.

    1990-01-01

    Advanced space power systems that use solar energy and Brayton or Stirling heat engines require thermal energy storage (TES) systems to operate continuously through periods of shade. The receiver storage units, key elements in both Brayton and Stirling systems, are designed to use the latent heat of fusion of phase-change materials (PCMs). The power systems under current consideration for near-future National Aeronautics and Space Administration space missions require working fluid temperatures in the 1100 to 1400 K range. The PCMs under current investigation that have liquidus temperatures within this range are the fluoride family of salts. However, these salts have low thermal conductivity, which causes large temperature gradients in the storage systems. Improvements can be obtained, however, with the use of thermal conductivity enhancements or metallic PCMs. In fact, if suitable containment materials can be found, the use of metallic PCMs would virtually eliminate the orbit-associated temperature variations in TES systems. The high thermal conductivity and generally low volume change on melting of germanium and alloys based on silicon make them attractive for storage of thermal energy in space power systems. An approach to solving the containment problem, involving both chemical and physical compatibility, the preparation of NiSi/NiSi2, and initial results for containment of germanium and NiSi/NiSi2 are presented.

  4. The future of memory

    NASA Astrophysics Data System (ADS)

    Marinella, M.

    In the not too distant future, the traditional memory and storage hierarchy may be replaced by a single Storage Class Memory (SCM) device integrated on or near the logic processor. Traditional magnetic hard drives, NAND flash, DRAM, and higher-level caches (L2 and up) will be replaced with a single high-performance memory device. The Storage Class Memory paradigm will require high speed (< 100 ns read/write), excellent endurance (> 10^12 cycles), nonvolatility (retention > 10 years), and low switching energies (< 10 pJ per switch). The International Technology Roadmap for Semiconductors (ITRS) has recently evaluated several potential candidate SCM technologies, including Resistive (or Redox) RAM, Spin Torque Transfer RAM (STT-MRAM), and phase change memory (PCM). All of these devices show potential well beyond that of current flash technologies, and research efforts are underway to improve their endurance, write speeds, and scalability to be on par with DRAM. This progress has interesting implications for space electronics: each of these emerging device technologies shows excellent resistance to the types of radiation typically found in space applications. Commercially developed, high-density systems based on storage class memory may include a memory that is physically radiation hard and suitable for space applications without major shielding efforts. This paper reviews the Storage Class Memory concept, emerging memory devices, and their possible applicability to radiation-hardened electronics for space.

  5. Alkaline water electrolysis technology for Space Station regenerative fuel cell energy storage

    NASA Technical Reports Server (NTRS)

    Schubert, F. H.; Hoberecht, M. A.; Le, M.

    1986-01-01

    The regenerative fuel cell system (RFCS), designed for application to the Space Station energy storage system, is based on state-of-the-art alkaline electrolyte technology and incorporates a dedicated fuel cell system (FCS) and water electrolysis subsystem (WES). In the present study, emphasis is placed on the WES portion of the RFCS. To ensure RFCS availability for the Space Station, the RFCS Space Station Prototype design was undertaken which included a 46-cell 0.93 cu m static feed water electrolysis module and three integrated mechanical components.

  6. Protecting Location Privacy for Outsourced Spatial Data in Cloud Storage

    PubMed Central

    Gui, Xiaolin; An, Jian; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices are fully developed, a large amount of spatial data needs to be outsourced to the cloud storage provider, so research on privacy protection for outsourced spatial data is getting increasing attention from academia and industry. As a kind of spatial transformation method, the Hilbert curve is widely used to protect the location privacy of spatial data. But sufficient security analysis for the standard Hilbert curve (SHC) has seldom been carried out. In this paper, we propose an index modification method for SHC (SHC∗) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC∗ and DSC are more secure than SHC, and DSC achieves the best index generation performance. PMID:25097865
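
    For concreteness, the underlying SHC transformation is the classical map from a grid cell to its index along the Hilbert curve; the sketch below implements that standard mapping, not the paper's SHC∗ or DSC variants:

      # Standard Hilbert-curve mapping: the index, rather than the raw (x, y)
      # coordinates, is what an SHC-style scheme would store in the cloud.

      def xy2d(n, x, y):
          """Hilbert index of (x, y) on an n-by-n grid, n a power of two."""
          d = 0
          s = n // 2
          while s > 0:
              rx = 1 if (x & s) > 0 else 0
              ry = 1 if (y & s) > 0 else 0
              d += s * s * ((3 * rx) ^ ry)
              if ry == 0:                # rotate/flip the quadrant
                  if rx == 1:
                      x, y = n - 1 - x, n - 1 - y
                  x, y = y, x
              s //= 2
          return d

      # Nearby points often (not always) get nearby indices -- the
      # distance-preserving property that SHC* and DSC deliberately weaken.
      print([xy2d(8, x, y) for x, y in [(0, 0), (0, 1), (7, 7)]])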

  7. Application and design of solar photovoltaic system

    NASA Astrophysics Data System (ADS)

    Tianze, Li; Hengwei, Lu; Chuan, Jiang; Luan, Hou; Xia, Zhang

    2011-02-01

    Solar modules, power electronic equipment (including the charge-discharge controller, the inverter, test instrumentation and computer monitoring), and the storage battery or other energy storage and auxiliary generating plant make up the photovoltaic system presented in this paper. PV system design should meet the load supply requirements, keep system cost low, give serious consideration to both software and hardware design, and carry out the general software design before the hardware design. Taking the design of a PV system as an example, the paper analyses the design of the system software and hardware, the economic benefit, and the basic ideas and steps of the installation and connection of the system. It elaborates on information acquisition, the software and hardware design of the system, and the evaluation and optimization of the system. Finally, it discusses the application prospects of photovoltaic technology in outer space, solar lamps, freeways and communications.
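
    A sketch of the sizing arithmetic such a design procedure rests on (all numbers, efficiencies, and the 48 V bus are assumptions, not values from the paper):

      # Size the array from daily load and sun-hours, and the battery from the
      # required days of autonomy. Every numeric default is illustrative.

      def size_pv_system(daily_load_wh, sun_hours, autonomy_days,
                         depth_of_discharge=0.5, system_efficiency=0.75,
                         battery_voltage=48.0):
          array_w = daily_load_wh / (sun_hours * system_efficiency)
          battery_wh = daily_load_wh * autonomy_days / depth_of_discharge
          return array_w, battery_wh / battery_voltage   # array watts, battery Ah

      array_w, battery_ah = size_pv_system(daily_load_wh=2400.0, sun_hours=4.5,
                                           autonomy_days=2)
      print("array ~%.0f Wp, battery ~%.0f Ah @ 48 V" % (array_w, battery_ah))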

  8. Protecting location privacy for outsourced spatial data in cloud storage.

    PubMed

    Tian, Feng; Gui, Xiaolin; An, Jian; Yang, Pan; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices are fully developed, a large amount of spatial data needs to be outsourced to the cloud storage provider, so research on privacy protection for outsourced spatial data is getting increasing attention from academia and industry. As a kind of spatial transformation method, the Hilbert curve is widely used to protect the location privacy of spatial data. But sufficient security analysis for the standard Hilbert curve (SHC) has seldom been carried out. In this paper, we propose an index modification method for SHC (SHC(∗)) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC(∗) and DSC are more secure than SHC, and DSC achieves the best index generation performance.

  9. A Two-Dimensional Linear Bicharacteristic FDTD Method

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2002-01-01

    The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics. The LBS has previously been extended to treat lossy materials for one-dimensional problems. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility through the ability to adopt a characteristic-based method. The use of characteristic variables allows the LBS to include the Perfectly Matched Layer boundary condition with no added storage or complexity. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, and it generalizes much more easily to nonuniform grids. It has previously been applied to two- and three-dimensional free-space electromagnetic propagation and scattering problems. This paper extends the LBS to the two-dimensional case. Results are presented for point source radiation problems, and the FDTD algorithm is chosen as a convenient reference for comparison.
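
    For orientation only, here is a minimal 1D Yee-style leapfrog update in normalized units, i.e. the baseline FDTD algorithm the LBS is compared against; the LBS itself adds upwind-biased, characteristic-based spatial derivatives, which this sketch does not implement:

      # Minimal 1D FDTD leapfrog (normalized units), with a soft Gaussian source.

      import math

      nx, nsteps = 200, 180
      courant = 1.0                    # normalized time step (1D stability limit)
      ez = [0.0] * nx                  # electric field
      hy = [0.0] * nx                  # magnetic field, staggered half a cell

      for n in range(nsteps):
          for i in range(nx - 1):      # update H from the curl of E
              hy[i] += courant * (ez[i + 1] - ez[i])
          for i in range(1, nx):       # update E from the curl of H
              ez[i] += courant * (hy[i] - hy[i - 1])
          ez[nx // 2] += math.exp(-((n - 30) ** 2) / 100.0)  # soft source

      print(max(ez), min(ez))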

  10. Pilot Project for Spaceborne Massive Optical Storage Devices

    NASA Technical Reports Server (NTRS)

    Chen, Y. J.

    1996-01-01

    A space-bound storage device has many special requirements. In addition to large storage capacity, fast read/write time, and high reliability, it also needs small volume, light weight, low power consumption, radiation hardening, the ability to operate in extreme temperature ranges, etc. Holographic optical recording technology, which has been making major advancements in recent years, is an extremely promising candidate. The goal of this pilot project is to demonstrate a laboratory bench-top holographic optical recording storage system (HORSS) based on nonlinear polymer films and/or other advanced photo-refractive materials. This system will be used as a research vehicle to study relevant optical properties of novel holographic optical materials, to explore massive optical storage technologies based on the photo-refractive effect and to evaluate the feasibility of developing a massive storage system, based on holographic optical recording technology, for a space-bound experiment in the near future.

  11. 3. PHOTOCOPY OF DRAWING (1976 CIVIL ENGINEERING DRAWING BY THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. PHOTOCOPY OF DRAWING (1976 CIVIL ENGINEERING DRAWING BY THE SPACE AND MISSILE TEST CENTER, VAFB, USAF) PARTIAL SITE PLAN, EQUIPMENT STORAGE BUILDING, PARKING AREA OVERLAY, AND NEW ROAD, SHEET C4 - Vandenberg Air Force Base, Space Launch Complex 3, Storage Shed, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  12. Compactor for Space Toilet

    NASA Technical Reports Server (NTRS)

    Autrey, David (Inventor); Morrison, Terrell Lee (Inventor); Kaufman, Cory (Inventor)

    2017-01-01

    A toilet for use on a space vehicle has a toilet bowl having a storage canister at a remote end for receiving human waste. The compactor includes a cable connected to a lever which pulls the cable in a direction forcing the compactor into the storage canister to compact the captured waste when the lever is actuated.

  13. Proceedings of the 4th International Conference and Exhibition: World Congress on Superconductivity, volume 1

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar (Editor); Burnham, Calvin (Editor)

    1995-01-01

    The papers presented at the 4th International Conference Exhibition: World Congress on Superconductivity held at the Marriott Orlando World Center, Orlando, Florida, are contained in this document and encompass the research, technology, applications, funding, political, and social aspects of superconductivity. Specifically, the areas covered included: high-temperature materials; thin films; C-60 based superconductors; persistent magnetic fields and shielding; fabrication methodology; space applications; physical applications; performance characterization; device applications; weak link effects and flux motion; accelerator technology; superconducting energy storage; future research and development directions; medical applications; granular superconductors; wire fabrication technology; computer applications; technical and commercial challenges; and power and energy applications.

  14. Advanced sensors and instrumentation

    NASA Technical Reports Server (NTRS)

    Calloway, Raymond S.; Zimmerman, Joe E.; Douglas, Kevin R.; Morrison, Rusty

    1990-01-01

    NASA is currently investigating the readiness of Advanced Sensors and Instrumentation to meet the requirements of new initiatives in space. The following technical objectives and technologies are briefly discussed: smart and nonintrusive sensors; onboard signal and data processing; high capacity and rate adaptive data acquisition systems; onboard computing; high capacity and rate onboard storage; efficient onboard data distribution; high capacity telemetry; ground and flight test support instrumentation; power distribution; and workstations, video/lighting. The requirements for high fidelity data (accuracy, frequency, quantity, spatial resolution) in hostile environments will continue to push the technology developers and users to extend the performance of their products and to develop new generations.

  15. Proceedings of the 4th International Conference and Exhibition: World Congress on Superconductivity, Volume 2

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar (Editor); Burnham, Calvin (Editor)

    1995-01-01

    This document contains papers presented at the 4th International Conference Exhibition: World Congress on Superconductivity held June 27-July 1, 1994 in Orlando, Florida. These documents encompass research, technology, applications, funding, political, and social aspects of superconductivity. The areas covered included: high-temperature materials; thin films; C-60 based superconductors; persistent magnetic fields and shielding; fabrication methodology; space applications; physical applications; performance characterization; device applications; weak link effects and flux motion; accelerator technology; superconducting energy storage; future research and development directions; medical applications; granular superconductors; wire fabrication technology; computer applications; technical and commercial challenges; and power and energy applications.

  16. A Cost-Benefit Study of Doing Astrophysics On The Cloud: Production of Image Mosaics

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Good, J. C.; Deelman, E.; Singh, G.; Livny, M.

    2009-09-01

    Utility grids such as the Amazon EC2 and Amazon S3 clouds offer computational and storage resources that can be used on demand for a fee by compute- and data-intensive applications. The cost of running an application on such a cloud depends on the compute, storage and communication resources it will provision and consume. Different execution plans of the same application may result in significantly different costs. We studied via simulation the cost-performance trade-offs of different execution and resource provisioning plans by creating, under the Amazon cloud fee structure, mosaics with the Montage image mosaic engine, a widely used data- and compute-intensive application. Specifically, we studied the cost of building mosaics of 2MASS data that have sizes of 1, 2 and 4 square degrees, and a 2MASS all-sky mosaic. These are examples of mosaics commonly generated by astronomers. We also studied these trade-offs in the context of the storage and communication fees of Amazon S3 when used for long-term application data archiving. Our results show that by provisioning the right amounts of storage and compute resources, cost can be significantly reduced with no significant impact on application performance.
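
    To make the trade-off concrete, here is a toy cost model in the spirit of the study; the rates and run parameters below are placeholders, not the 2009 Amazon fee structure or the paper's measured numbers:

      # Total cost of a run as provisioned compute plus storage plus transfer.
      # All rates are placeholders in $ per unit.

      def run_cost(cpu_hours, gb_stored, months_stored, gb_transferred_out,
                   cpu_rate=0.10, storage_rate=0.10, transfer_rate=0.15):
          """Rates per CPU-hour, per GB-month stored, per GB transferred out."""
          return (cpu_hours * cpu_rate
                  + gb_stored * months_stored * storage_rate
                  + gb_transferred_out * transfer_rate)

      # Two hypothetical plans for the same mosaic: keep intermediate data in
      # the cloud for a month vs. pull everything back out immediately.
      print(run_cost(cpu_hours=8, gb_stored=20, months_stored=1,
                     gb_transferred_out=2))
      print(run_cost(cpu_hours=8, gb_stored=0, months_stored=0,
                     gb_transferred_out=22))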

  17. Experimental evaluation of passive cooling using phase change materials (PCM) for reducing overheating in public building

    NASA Astrophysics Data System (ADS)

    Ahmed, Abdullahi; Mateo-Garcia, Monica; McGough, Danny; Caratella, Kassim; Ure, Zafer

    2018-02-01

    Indoor Environmental Quality (IEQ) is essential for the health and productivity of building users. The risk of overheating in buildings is increasing due to higher occupancy densities of people and heat-emitting equipment, rises in ambient temperature as climate change manifests, and changes in urban micro-climate. One of the solutions to building overheating is to introduce exposed thermal mass into the interior of the building. There are many different types of thermal storage materials, typically including sensible heat storage materials such as concrete, bricks, rocks, etc. It is very difficult to increase the thermal mass of existing buildings using these sensible heat storage materials. As an alternative, there are latent heat storage materials called Phase Change Materials (PCM), which have a high thermal storage capacity per unit volume, making them easy to implement within retrofit projects. The use of passive-cooling Thermal Energy Storage (TES) systems in the form of PCM PlusICE solutions has been investigated in occupied spaces to improve indoor environmental quality. The work was carried out using an experimental set-up in existing spaces monitored through the summer months. The rooms were monitored using wireless temperature and humidity sensors. There appears to be a significant improvement in indoor temperature of up to 5 K in the room with the PCM compared to the monitored control spaces. The success of PCM for passive cooling is strongly dependent on the ventilation strategy employed in the spaces. The use of night-time cooling to purge the stored thermal energy is essential for improved efficacy of the systems in reducing overheating. The investigation is carried out within the EU-funded RESEEPEE project.

  18. Optimization algorithms for large-scale multireservoir hydropower systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiew, K.L.

    Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% of one another. The highest objective value is obtained by IDP, followed closely by OCT. Computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.

  19. National Aeronautics and Space Administration Biological Specimen Repository

    NASA Technical Reports Server (NTRS)

    McMonigal, Kathleen A.; Pietrzyk, Robert A.; Johnson, Mary Anne

    2008-01-01

    The National Aeronautics and Space Administration Biological Specimen Repository (Repository) is a storage bank that is used to maintain biological specimens over extended periods of time and under well-controlled conditions. Samples from the International Space Station (ISS), including blood and urine, will be collected, processed and archived during the preflight, inflight and postflight phases of ISS missions. This investigation has been developed to archive biosamples for use as a resource for future space flight related research. The International Space Station (ISS) provides a platform to investigate the effects of microgravity on human physiology prior to lunar and exploration class missions. The storage of crewmember samples from many different ISS flights in a single repository will be a valuable resource with which researchers can study space flight related changes and investigate physiological markers. The development of the National Aeronautics and Space Administration Biological Specimen Repository will allow for the collection, processing, storage, maintenance, and ethical distribution of biosamples to meet goals of scientific and programmatic relevance to the space program. Archiving of the biosamples will provide future research opportunities including investigating patterns of physiological changes, analysis of components unknown at this time or analyses performed by new methodologies.

  20. Pigs in cyberspace

    NASA Technical Reports Server (NTRS)

    Moravec, Hans

    1993-01-01

    Exploration and colonization of the universe await, but Earth-adapted biological humans are ill-equipped to respond to the challenge. Machines have gone farther and seen more, limited though they presently are by insect-like behavioral inflexibility. As they become smarter over the coming decades, space will be theirs. Organizations of robots of ever increasing intelligence and sensory and motor ability will expand and transform what they occupy, working with matter, space and time. As they grow, a smaller and smaller fraction of their territory will be undeveloped frontier. Competitive success will depend more and more on using already available matter and space in ever more refined and useful forms. The process, analogous to the miniaturization that makes today's computers a trillion times more powerful than the mechanical calculators of the past, will gradually transform all activity from grossly physical homesteading of raw nature, to minimum-energy quantum transactions of computation. The final frontier will be urbanized, ultimately into an arena where every bit of activity is a meaningful computation: the inhabited portion of the universe will be transformed into a cyberspace. Because it will use resources more efficiently, a mature cyberspace of the distant future will be effectively much bigger than the present physical universe. While only an infinitesimal fraction of existing matter and space is doing interesting work, in a well developed cyberspace every bit will be part of a relevant computation or storing a useful datum. Over time, more compact and faster ways of using space and matter will be invented, and used to restructure the cyberspace, effectively increasing the amount of computational spacetime per unit of physical spacetime. Computational speed-ups will affect the subjective experience of entities in the cyberspace in a paradoxical way. At first glance, there is no subjective effect, because everything, inside and outside the individual, speeds up equally. But, more subtly, speed-up produces an expansion of the cyber universe, because, as thought accelerates, more subjective time passes during the fixed (probably lightspeed) physical transit time of a message between a given pair of locations - so those fixed locations seem to grow farther apart. Also, as information storage is made continually more efficient through both denser utilization of matter and more efficient encodings, there will be increasingly more cyber-stuff between any two points. The effect may somewhat resemble the continuous-creation process in the old steady-state theory of the physical universe of Hoyle, Bondi and Gold, where hydrogen atoms appear just fast enough throughout the expanding cosmos to maintain a constant density.

  1. Build It: Will They Come?

    NASA Astrophysics Data System (ADS)

    Corrie, Brian; Zimmerman, Todd

    Scientific research is fundamentally collaborative in nature, and many of today's complex scientific problems require domain expertise in a wide range of disciplines. In order to create research groups that can effectively explore such problems, research collaborations are often formed that involve colleagues at many institutions, sometimes spanning a country and often spanning the world. An increasingly common manifestation of such a collaboration is the collaboratory (Bos et al., 2007), a “…center without walls in which the nation's researchers can perform research without regard to geographical location — interacting with colleagues, accessing instrumentation, sharing data and computational resources, and accessing information from digital libraries.” In order to bring groups together on such a scale, a wide range of components need to be available to researchers, including distributed computer systems, remote instrumentation, data storage, collaboration tools, and the financial and human resources to operate and run such a system (National Research Council, 1993). Media Spaces, as both a technology and a social facilitator, have the potential to meet many of these needs. In this chapter, we focus on the use of scientific media spaces (SMS) as a tool for supporting collaboration in scientific research. In particular, we discuss the design, deployment, and use of a set of SMS environments deployed by WestGrid and one of its collaborating organizations, the Centre for Interdisciplinary Research in the Mathematical and Computational Sciences (IRMACS) over a 5-year period.

  2. Using technology to support investigations in the electronic age: tracking hackers to large scale international computer fraud

    NASA Astrophysics Data System (ADS)

    McFall, Steve

    1994-03-01

    With the increase in business automation and the widespread availability and low cost of computer systems, law enforcement agencies have seen a corresponding increase in criminal acts involving computers. The examination of computer evidence is a new field of forensic science with numerous opportunities for research and development. Research is needed to develop new software utilities to examine computer storage media, expert systems capable of finding criminal activity in large amounts of data, and to find methods of recovering data from chemically and physically damaged computer storage media. In addition, defeating encryption and password protection of computer files is also a topic requiring more research and development.

  3. Profiling of differentially expressed genes critical to storage root development in hydroponically and in-vitro grown sweetpotato for space farming

    NASA Astrophysics Data System (ADS)

    Egnin, M.; Gao, H.; He, G.; Woullard, F.; Mortley, D.; Scoffield, J.; Bey, B.; Quain, M.; Prakash, C. S.; Bonsi, C.

    Environment is known to have significant effects on the nutrient content and quality of crop plants, especially through its impact on the temporal and spatial expression of genes. Little is known about the molecular changes and harvest index in plants in response to microgravity. Sweetpotato (Ipomoea batatas L. Lam.) is one of the most important root crops and an excellent NASA crop for space farming to provide essential nutrients to sustain human life on long-term space missions. The initiation and development of storage root formation is one of the most critical processes determining yield of sweetpotato. The molecular mechanism of storage root initiation and development in sweetpotato is poorly understood. To this end, knowledge of gravity perception and the genetic and molecular nature of the induction process of storage roots will tremendously help improve the sweetpotato harvest index for space farming. cDNA-AFLP techniques were employed to investigate temporal and spatial expression, to gain molecular insights, and to identify transcripts differentially expressed during early stages of sweetpotato storage root development. Two hydroponically grown cultivars using Nutrient Film Technology (NFT) and microstorage roots were evaluated: TU-82-155, an early maturing cultivar (90 DAP) with orange flesh and tinge-red skin, and PI318846-3, a late maturing cultivar (135 DAP) with white flesh and off-yellow skin, were compared for differential gene expression during storage root development at 14, 21, 28, 35, and 45 DAP. Total RNA was isolated from

  4. Leak checker data logging system

    DOEpatents

    Gannon, J.C.; Payne, J.J.

    1996-09-03

    A portable, high speed, computer-based data logging system for field testing systems or components located some distance apart employs a plurality of spaced mass spectrometers and is particularly adapted for monitoring the vacuum integrity of a long string of superconducting magnets such as those used in high energy particle accelerators. The system provides precise tracking of a gas such as helium through the magnet string when the helium is released into the vacuum by monitoring the spaced mass spectrometers, allowing for control, display and storage of various parameters involved with leak detection and localization. A system user can observe the flow of helium through the magnet string on a real-time basis from the exact moment of opening of the helium input valve. Graph readings can be normalized between tests to compensate for magnet sections that deplete vacuum faster than other sections, permitting repetitive testing of vacuum integrity in reduced time. 18 figs.

  5. Leak checker data logging system

    DOEpatents

    Gannon, Jeffrey C.; Payne, John J.

    1996-01-01

    A portable, high speed, computer-based data logging system for field testing systems or components located some distance apart employs a plurality of spaced mass spectrometers and is particularly adapted for monitoring the vacuum integrity of a long string of superconducting magnets such as those used in high energy particle accelerators. The system provides precise tracking of a gas such as helium through the magnet string when the helium is released into the vacuum by monitoring the spaced mass spectrometers, allowing for control, display and storage of various parameters involved with leak detection and localization. A system user can observe the flow of helium through the magnet string on a real-time basis from the exact moment of opening of the helium input valve. Graph readings can be normalized between tests to compensate for magnet sections that deplete vacuum faster than other sections, permitting repetitive testing of vacuum integrity in reduced time.

  6. An Advanced Hierarchical Hybrid Environment for Reliability and Performance Modeling

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco

    2003-01-01

    The key issue we intended to address in our proposed research project was the ability to model and study logical and probabilistic aspects of large computer systems. In particular, we wanted to focus mostly on automatic solution algorithms based on state-space exploration as their first step, in addition to the more traditional discrete-event simulation approaches commonly employed in industry. One explicitly-stated goal was to extend by several orders of magnitude the size of models that can be solved exactly, using a combination of techniques: 1) Efficient exploration and storage of the state space using new data structures that require an amount of memory sublinear in the number of states; and 2) Exploitation of the existing symmetries in the matrices describing the system behavior using Kronecker operators. Not only have we been successful in achieving the above goals, but we have exceeded them in many respects.
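
    The Kronecker idea mentioned above can be sketched in a few lines. The Python snippet below (with invented rates, not the project's actual tool) builds the generator of a two-component system as Q = Q1 ⊗ I + I ⊗ Q2, so only the small per-component factors need to be stored.

      import numpy as np
      from scipy.sparse import identity, kron

      # Hypothetical 3-state and 4-state components with invented rates.
      Q1 = np.array([[-1.0,  1.0,  0.0],
                     [ 0.0, -2.0,  2.0],
                     [ 3.0,  0.0, -3.0]])
      Q2 = np.array([[-4.0,  4.0,  0.0,  0.0],
                     [ 0.0, -5.0,  5.0,  0.0],
                     [ 0.0,  0.0, -6.0,  6.0],
                     [ 7.0,  0.0,  0.0, -7.0]])

      # Generator of the combined 12-state system: Q = Q1 (x) I + I (x) Q2.
      # A production solver would never materialize Q, instead applying it
      # to vectors factor by factor; memory then grows with the components,
      # not with the product state space.
      Q = kron(Q1, identity(4)) + kron(identity(3), Q2)
      print(Q.shape)  # (12, 12)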

  7. Development of an Ontology to Model Medical Errors, Information Needs, and the Clinical Communication Space

    PubMed Central

    Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.

    2002-01-01

    Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.

  8. A new momentum management controller for the space station

    NASA Technical Reports Server (NTRS)

    Wie, B.; Byun, K. W.; Warren, V. W.

    1988-01-01

    A new approach to CMG (control moment gyro) momentum management and attitude control of the Space Station is developed. The control algorithm utilizes both the gravity-gradient and gyroscopic torques to seek a torque equilibrium attitude in the presence of secular and cyclic disturbances. Depending upon mission requirements, either pitch attitude or pitch-axis CMG momentum can be held constant; yaw attitude and roll-axis CMG momentum can be held constant, while roll attitude and yaw-axis CMG momentum cannot be held constant. As a result, the overall attitude and CMG momentum oscillations caused by cyclic aerodynamic disturbances are minimized. A state feedback controller with minimal computer storage requirements for gain scheduling is also developed. The overall closed-loop system is stable for ±30 percent inertia matrix variations and has more than ±10 dB and 45 deg stability margins in each loop.
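
    As a rough illustration of state-feedback design for pitch dynamics with a gravity-gradient term, here is a generic LQR sketch; it is not the paper's controller, and every number in it is invented.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Illustrative numbers only -- not the Space Station model of the paper.
      n_orb = 0.0011            # orbital rate, rad/s
      Iy    = 1.0e5             # pitch-axis inertia, kg m^2
      dI    = 5.0e4             # inertia difference in the gravity-gradient term

      # Linearized pitch dynamics, state x = [attitude, rate]; the sign of
      # the gravity-gradient stiffness depends on the inertia distribution.
      A = np.array([[0.0, 1.0],
                    [3.0 * n_orb**2 * dI / Iy, 0.0]])
      B = np.array([[0.0],
                    [1.0 / Iy]])        # CMG torque input

      Qw = np.diag([1.0, 1.0e4])        # state weights (attitude, rate)
      Rw = np.array([[1.0e-4]])         # control-effort weight

      P = solve_continuous_are(A, B, Qw, Rw)
      K = np.linalg.solve(Rw, B.T @ P)  # constant gain for u = -K x
      print(K)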

  9. Single image super-resolution via an iterative reproducing kernel Hilbert space method.

    PubMed

    Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu

    2016-11-01

    Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from only one low-resolution image without using a training data set. We solve the problem from an image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.
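
    To make the smooth-plus-edge decomposition concrete, here is a one-dimensional sketch using one common smooth approximation of the Heaviside step (the arctan form); the paper's exact parameterization and its RKHS machinery are not reproduced here.

      import numpy as np

      def approx_heaviside(x, eps=0.05):
          # Smooth surrogate for the step function used to model edges;
          # eps controls how sharp the transition is.
          return 0.5 + np.arctan(x / eps) / np.pi

      x = np.linspace(-1.0, 1.0, 201)
      smooth = np.sin(2.0 * x)             # stands in for the RKHS part
      edges  = approx_heaviside(x - 0.3)   # a single edge at x = 0.3
      signal = smooth + edges              # intensity = smooth + edge components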

  10. Storing files in a parallel computing system based on user or application specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.

    2016-03-29

    Techniques are provided for storing files in a parallel computing system based on a user-specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.
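
    The following Python sketch illustrates the idea of an application-supplied placement specification consumed by a daemon; every name, field, and tier in it is invented for illustration and is not the patent's actual format.

      from fnmatch import fnmatch

      # Hypothetical specification handed from the application to the
      # storage middleware: file-name patterns mapped to placement rules.
      storage_spec = {
          "checkpoint-*.dat": {"tier": "flash",   "layout": "sub-files"},
          "results-*.h5":     {"tier": "disk",    "layout": "complete"},
          "trace-*.log":      {"tier": "archive", "layout": "complete"},
      }

      def storage_rule(filename, spec):
          # Daemon-side lookup: match the file name against the
          # application-supplied patterns and return the placement rule.
          for pattern, rule in spec.items():
              if fnmatch(filename, pattern):
                  return rule
          return {"tier": "disk", "layout": "complete"}  # default placement

      print(storage_rule("checkpoint-0001.dat", storage_spec))  # flash tier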

  11. System and Method for Providing a Climate Data Persistence Service

    NASA Technical Reports Server (NTRS)

    Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)

    2018-01-01

    A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.

  12. High-performance equation solvers and their impact on finite element analysis

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.

    1990-01-01

    The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
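
    The diagonal-preconditioned conjugate gradient method mentioned above is compact enough to sketch. This dense-matrix Python version is illustrative only; the report's solvers used vectorized variable-band and sparse storage formats rather than dense arrays.

      import numpy as np

      def pcg_jacobi(A, b, tol=1e-8, max_iter=500):
          # Conjugate gradients preconditioned by the diagonal of A.
          x = np.zeros_like(b)
          r = b - A @ x
          M_inv = 1.0 / np.diag(A)      # Jacobi (diagonal) preconditioner
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      A = np.array([[4.0, 1.0],
                    [1.0, 3.0]])        # small SPD test matrix
      b = np.array([1.0, 2.0])
      print(pcg_jacobi(A, b))           # approx. [0.0909, 0.6364]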

  13. High-performance equation solvers and their impact on finite element analysis

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. D., Jr.

    1992-01-01

    The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.

  14. A review of computer aided interpretation technology for the evaluation of radiographs of aluminum welds

    NASA Technical Reports Server (NTRS)

    Lloyd, J. F., Sr.

    1987-01-01

    Industrial radiography is a well established, reliable means of providing nondestructive structural integrity information. The majority of industrial radiographs are interpreted by trained human eyes using transmitted light and various visual aids. Hundreds of miles of radiographic information are evaluated, documented and archived annually. In many instances, there are serious considerations in terms of interpreter fatigue, subjectivity and limited archival space. Quite often it is difficult to quickly retrieve radiographic information for further analysis or investigation. Methods of improving the quality and efficiency of the radiographic process are being explored, developed and incorporated whenever feasible. High resolution cameras, digital image processing, and mass digital data storage offer interesting possibilities for improving the industrial radiographic process. A review is presented of computer aided radiographic interpretation technology in terms of how it could be used to enhance the radiographic interpretation process in evaluating radiographs of aluminum welds.

  15. Study of hypervelocity meteoroid impact on orbital space stations

    NASA Technical Reports Server (NTRS)

    Leimbach, K. R.; Prozan, R. J.

    1973-01-01

    Structural damage resulting from the hypervelocity impact of a meteoroid on a spacecraft is discussed. Of particular interest is the backside spallation caused by such a collision. To treat this phenomenon, two numerical schemes were developed in the course of this study to compute the elastic-plastic flow fracture of a solid. The numerical schemes are a five-point finite difference scheme and a four-node finite element scheme. The four-node finite element scheme proved to be less sensitive to the type of boundary conditions and loadings. Although further development work is needed to improve the program versatility (generalization of the network topology, secondary storage for large systems, improvement of the coding to reduce the run time, etc.), the basic framework is provided for a utilitarian computer program which may be used in a wide variety of situations. Analytic results showing the program output are given for several test cases.
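
    For readers unfamiliar with the five-point stencil named above, here is a generic relaxation sweep for a Poisson model problem; this illustrates the discretization pattern only, not the study's elastic-plastic fracture solver.

      import numpy as np

      # Five-point-stencil Jacobi relaxation for the model problem
      # Laplacian(u) = f on the unit square with u = 0 on the boundary.
      n = 50
      h = 1.0 / (n - 1)
      u = np.zeros((n, n))      # unknown field, zero on the boundary
      f = np.ones((n, n))       # source term
      for _ in range(2000):     # plain Jacobi sweeps
          u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:] -
                                  h**2 * f[1:-1, 1:-1])
      print(u[n // 2, n // 2])  # center value after relaxation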

  16. The IBM PC at NASA Ames

    NASA Technical Reports Server (NTRS)

    Peredo, James P.

    1988-01-01

    Like many large companies, Ames relies heavily on its computing power to get work done. And, like many other large companies that find the IBM PC a reliable tool, Ames uses it for many of the same types of functions. Presentation and clarification needs demand much of graphics packages, while programming and text editing call for simpler, more powerful packages. The storage space needed by NASA's scientists and users for the monumental amounts of data that Ames must keep demands database packages that are large and easy to use. Access to the Micom Switching Network combines the power of the IBM PC with the capabilities of other computers and mainframes and allows users to communicate electronically. These four primary capabilities of the PC are vital to the needs of NASA's users and help to continue and support the vast amounts of work done by NASA employees.

  17. 3D simulation of floral oil storage in the scopa of South American insects

    NASA Astrophysics Data System (ADS)

    Ruettgers, Alexander; Griebel, Michael; Pastrik, Lars; Schmied, Heiko; Wittmann, Dieter; Scherrieble, Andreas; Dinkelmann, Albrecht; Stegmaier, Thomas; Institute for Numerical Simulation Team; Institute of Crop Science and Resource Conservation Team; Institute of Textile Technology and Process Engineering Team

    2014-11-01

    Several species of bees in South America possess structures to store and transport floral oils. By using closely spaced hairs on their hind legs, the so-called scopa, these bees can absorb and release oil droplets without loss. The high efficiency of this process is a matter of ongoing research. Based on recent X-ray microtomography scans of the scopa of these bees at the Institute of Textile Technology and Process Engineering Denkendorf, we build a three-dimensional computer model. Using NaSt3DGPF, a two-phase flow solver developed at the Institute for Numerical Simulation of the University of Bonn, we perform massively parallel flow simulations with the complex micro-CT data. In this talk, we discuss the results of our simulations and the transfer of the X-ray measurements into a computer model. This research was funded under GR 1144/18-1 by the Deutsche Forschungsgemeinschaft (DFG).

  18. In-Space Propellant Production Using Water

    NASA Technical Reports Server (NTRS)

    Notardonato, William; Johnson, Wesley; Swanger, Adam; McQuade, William

    2012-01-01

    A new era of space exploration is being planned. Manned exploration architectures under consideration require the long term storage of cryogenic propellants in space, and larger science mission directorate payloads can be delivered using cryogenic propulsion stages. Several architecture studies have shown that in-space cryogenic propulsion depots offer benefits including lower launch costs, smaller launch vehicles, and enhanced mission flexibility. NASA is currently planning a Cryogenic Propellant Storage and Transfer (CPST) technology demonstration mission that will use existing technology to demonstrate long duration storage, acquisition, mass gauging, and transfer of liquid hydrogen in low Earth orbit. This mission will demonstrate key technologies, but the CPST architecture is not designed for optimal mission operations for a true propellant depot. This paper will consider cryogenic propellant depots that are designed for operability. The operability principles considered are reusability, commonality, designing for the unique environment of space, and use of active control systems, both thermal and fluid. After considering these operability principles, a proposed depot architecture will be presented that uses water launch and on orbit electrolysis and liquefaction. This could serve as the first true space factory. Critical technologies needed for this depot architecture, including on orbit electrolysis, zero-g liquefaction and storage, rendezvous and docking, and propellant transfer, will be discussed and a developmental path forward will be presented. Finally, use of the depot to support the NASA Science Mission Directorate exploration goals will be presented.
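
    The water-to-propellant arithmetic behind such a depot is fixed by stoichiometry; the figures below are illustrative only and are not taken from the paper.

      # Stoichiometry of water electrolysis, 2 H2O -> 2 H2 + O2: a fixed
      # mass split, shown here for one metric tonne of launched water.
      M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998   # molar masses, g/mol
      water_kg = 1000.0
      h2_kg = water_kg * (2 * M_H2) / (2 * M_H2O)   # ~111.9 kg hydrogen
      o2_kg = water_kg * M_O2 / (2 * M_H2O)         # ~888.1 kg oxygen
      print(h2_kg, o2_kg, o2_kg / h2_kg)            # mixture ratio ~7.9:1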

  19. Cyclic injection, storage, and withdrawal of heated water in a sandstone aquifer at St. Paul, Minnesota--Analysis of thermal data and nonisothermal modeling of short-term test cycles

    USGS Publications Warehouse

    Miller, Robert T.; Delin, G.N.

    2002-01-01

    In May 1980, the University of Minnesota began a project to evaluate the feasibility of storing heated water (150 degrees Celsius) in the Franconia-Ironton-Galesville aquifer (183 to 245 meters below land surface) and later recovering it for space heating. The University's steam-generation facilities supplied high-temperature water for injection. The Aquifer Thermal-Energy Storage system is a doublet-well design in which the injection-withdrawal wells are spaced approximately 250 meters apart. Water was pumped from one of the wells through a heat exchanger, where heat was added or removed. This water was then injected back into the aquifer through the other well. Four short-term test cycles were completed. Each cycle consisted of approximately equal durations of injection and withdrawal ranging from 5.25 to 8.01 days. Equal rates of injection and withdrawal, ranging from 17.4 to 18.6 liters per second, were maintained for each short-term test cycle. Average injection temperatures ranged from 88.5 to 117.9 degrees Celsius. Temperature graphs for selected depths at individual observation wells indicate that the Ironton and Galesville Sandstones received and stored more thermal energy than the upper part of the Franconia Formation. Clogging of the Ironton Sandstone was possibly due to precipitation of calcium carbonate or movement of fine-grain material or both. Vertical-profile plots indicate that the effects of buoyancy flow were small within the aquifer. A three-dimensional, anisotropic, nonisothermal, ground-water-flow, and thermal-energy-transport model was constructed to simulate the four short-term test cycles. The model was used to simulate the entire short-term testing period of approximately 400 days. The only model properties varied during model calibration were longitudinal and transverse thermal dispersivities, which, for final calibration, were simulated as 3.3 and 0.33 meters, respectively. The model was calibrated by comparing model-computed results to (1) measured temperatures at selected altitudes in four observation wells, (2) measured temperatures at the production well, and (3) calculated thermal efficiencies of the aquifer. Model-computed withdrawal-water temperatures were within an average of about 3 percent of measured values and model-computed aquifer-thermal efficiencies were within an average of about 5 percent of calculated values for the short-term test cycles. These data indicate that the model accurately simulated thermal-energy storage within the Franconia-Ironton-Galesville aquifer.

  20. NASA Office of Aeronautics and Space Technology Summer Workshop. Volume 4: Power technology panel

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Technology requirements in the areas of energy sources and conversion, power processing, distribution, conversion, and transmission, and energy storage are identified for space shuttle payloads. It is concluded that the power system technology currently available is adequate to accomplish all missions in the 1973 Mission Model, but that further development is needed to support space opportunities of the future as identified by users. Space experiments are proposed in the following areas: power generation in space, advanced photovoltaic energy converters, solar and nuclear thermoelectric technology, nickel-cadmium batteries, flywheels (mechanical storage), satellite-to-ground transmission and reconversion systems, and regenerative fuel cells.

  1. Lightweight cryogenic-compatible pressure vessels for vehicular fuel storage

    DOEpatents

    Aceves, Salvador; Berry, Gene; Weisberg, Andrew H.

    2004-03-23

    A lightweight, cryogenic-compatible pressure vessel for flexibly storing cryogenic liquid fuels or compressed gas fuels at cryogenic or ambient temperatures. The pressure vessel has an inner pressure container enclosing a fuel storage volume, an outer container surrounding the inner pressure container to form an evacuated space therebetween, and a thermal insulator surrounding the inner pressure container in the evacuated space to inhibit heat transfer. Additionally, vacuum loss from fuel permeation is substantially inhibited in the evacuated space by, for example, lining the container liner with a layer of fuel-impermeable material, capturing the permeated fuel in the evacuated space, or purging the permeated fuel from the evacuated space.

  2. Methods and devices for determining quality of services of storage systems

    DOEpatents

    Seelam, Seetharami R [Yorktown Heights, NY; Teller, Patricia J [Las Cruces, NM

    2012-01-17

    Methods and systems for allowing access to computer storage systems. Multiple requests from multiple applications can be received and processed efficiently to allow traffic from multiple customers to access the storage system concurrently.

  3. Development of a phase-change thermal storage system using modified anhydrous sodium hydroxide for solar electric power generation

    NASA Technical Reports Server (NTRS)

    Cohen, B. M.; Rice, R. E.; Rowny, P. E.

    1978-01-01

    A thermal storage system for use in solar power electricity generation was investigated analytically and experimentally. The thermal storage medium is principally anhydrous NaOH with 8% NaNO3 and 0.2% MnO2. Heat is charged into storage at 584 K and discharged from storage at 582 K by Therminol-66. Physical and thermophysical properties of the storage medium were measured. A mathematical simulation and computer program describing the operation of the system were developed. A 1/10 scale model of a system capable of storing and delivering 3.1 × 10^6 kJ of heat was designed, built, and tested. Tests included steady state charging, discharging, idling, and charge-discharge conditions simulating a solar daily cycle. Experimental data and computer-predicted results are correlated. A reference design including cost estimates of the full-size system was developed.

  4. Observer Interface Analysis for Standardization to a Cloud Based Real-Time Space Situational Awareness (SSA)

    NASA Astrophysics Data System (ADS)

    Eilers, J.

    2013-09-01

    The interface analysis from an observer of space objects makes a standard necessary. This standardized dataset serves as input for a cloud based service, which is aimed at a near real-time Space Situational Awareness (SSA) system. The system has all the advantages of a cloud based solution, like redundancy, scalability and an easy way to distribute information. For the standard based on the interface analysis of the observer, the information can be separated into three parts. One part is the information about the observer, e.g. a ground station. The next part is the information about the sensors that are used by the observer. And the last part is the data from the detected object. The backbone of the SSA system is the cloud based service, which includes the consistency check for the observed objects, a database for the objects, the algorithms and analysis, as well as the visualization of the results. This paper also provides an approximation of the needed computational power and data storage, and a financial approach to deliver this service to a broad community. In this context cloud means that neither the user nor the observer has to think about the infrastructure of the calculation environment. The decision whether the IT infrastructure will be built by a conglomerate of different nations or rented on the market should be based on an efficiency analysis. Combinations are also possible, like starting on a rented cloud and then moving to a private cloud owned by the government. One of the advantages of a cloud solution is scalability. There are about 3,000 satellites in space, 900 of them active, and in total there are about 17,000 detected space objects orbiting Earth. For the computation, however, the problem is not a full N(active)-against-N(all) pairing: screening by apogee/perigee overlap reduces the candidate set, so instead of 15.3 million possible collision pairings, only approximately 2.3 million must be computed. In general, this Space Situational Awareness system can be used as a tool by satellite system owners for collision avoidance.
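
    A back-of-envelope check of the counts quoted above; the 2.3 million figure depends on the actual orbit distribution, but the screening predicate is the standard altitude-band test.

      # Naive pairing: every active satellite against every catalogued object.
      n_active, n_all = 900, 17_000
      print(n_active * n_all)        # 15,300,000 candidate pairings

      def bands_overlap(perigee_a, apogee_a, perigee_b, apogee_b):
          # Two objects can only approach each other if their altitude
          # bands intersect, so pairs failing this cheap test are
          # discarded before any expensive conjunction computation.
          return perigee_a <= apogee_b and perigee_b <= apogee_a

      print(bands_overlap(400, 450, 700, 800))   # False -> pair screened out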

  5. A computer system for the storage and retrieval of gravity data, Kingdom of Saudi Arabia

    USGS Publications Warehouse

    Godson, Richard H.; Andreasen, Gordon H.

    1974-01-01

    A computer system has been developed for the systematic storage and retrieval of gravity data. All pertinent facts relating to gravity station measurements and computed Bouguer values may be retrieved either by project name or by geographical coordinates. Features of the system include visual display in the form of printer listings of gravity data and printer plots of station locations. The retrieved data format interfaces with the format of GEOPAC, a system of computer programs designed for the analysis of geophysical data.

  6. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    NASA Astrophysics Data System (ADS)

    Read, A.; Taga, A.; O-Saada, F.; Pajchel, K.; Samset, B. H.; Cameron, D.

    2008-07-01

    Computing and storage resources connected by the Nordugrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise is described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP-servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to the ATLAS production with an efficiency above 95% during long periods of stable operation.

  7. Quality Parameters of Six Cultivars of Blueberry Using Computer Vision

    PubMed Central

    Celis Cofré, Daniela; Silva, Patricia; Enrione, Javier; Osorio, Fernando

    2013-01-01

    Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: “Duke,” “Brigitta”, “Elliott”, “Centurion”, “Star,” and “Jewel”, measuring quality parameters such as °Brix, pH, and moisture content using standard techniques, and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0–21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P < 0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters that changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision; these changes were used to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative approach to currently used subjective measurements. PMID:26904598

  8. Mass Storage and Retrieval at Rome Laboratory

    NASA Technical Reports Server (NTRS)

    Kann, Joshua L.; Canfield, Brady W.; Jamberdino, Albert A.; Clarke, Bernard J.; Daniszewski, Ed; Sunada, Gary

    1996-01-01

    As the speed and power of modern digital computers continues to advance, the demands on secondary mass storage systems grow. In many cases, the limitations of existing mass storage reduce the overall effectiveness of the computing system. Image storage and retrieval is one important area where improved storage technologies are required. Three dimensional optical memories offer the advantage of large data density, on the order of 1 Tb/cm³, and faster transfer rates because of the parallel nature of optical recording. Such a system allows for the storage of multiple-Gbit sized images, which can be recorded and accessed at reasonable rates. Rome Laboratory is currently investigating several techniques to perform three-dimensional optical storage including holographic recording, two-photon recording, persistent spectral-hole burning, multi-wavelength DNA recording, and the use of bacteriorhodopsin as a recording material. In this paper, the current status of each of these on-going efforts is discussed. In particular, the potential payoffs as well as possible limitations are addressed.

  9. Effective Use of Existing Space in Libraries.

    ERIC Educational Resources Information Center

    Brown, Nancy A.

    1981-01-01

    Discusses the effective use of stack space through weeding, storage, microfilm, and selection; study space based on student population; and service space by reorganization of staff, collections, and study space. Three references are noted. (CHC)

  10. Densities of some molten fluoride salt mixtures suitable for heat storage in space power applications

    NASA Technical Reports Server (NTRS)

    Misra, Ajay K.

    1988-01-01

    Liquid densities were determined for a number of fluoride salt mixtures suitable for heat storage in space power applications, using a procedure that consisted of measuring the loss of weight of an inert bob in the melt. The density apparatus was calibrated with pure LiF and NaF at different temperatures. Density data for selected binary and ternary fluoride salt eutectics and congruently melting intermediate compounds are presented. In addition, a comparison was made between the volumetric heat storage capacity of different salt mixtures.
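
    The weight-loss-of-a-bob procedure reduces to Archimedes' principle; the numbers below are placeholders for illustration, not the paper's data.

      # Working equation of the weight-loss (Archimedes) method: the
      # apparent mass lost by the immersed bob, divided by the bob volume
      # at temperature, gives the melt density.
      m_air  = 0.10000      # bob mass weighed in air, kg (placeholder)
      m_melt = 0.08250      # apparent mass with bob immersed in melt, kg
      V_bob  = 8.6e-6       # bob volume at melt temperature, m^3
      rho = (m_air - m_melt) / V_bob
      print(rho)            # ~2035 kg/m^3 for these illustrative values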

  11. Energy Storage: Batteries and Fuel Cells for Exploration

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle A.; Miller, Thomas B.; Hoberecht, Mark A.; Baumann, Eric D.

    2007-01-01

    NASA's Vision for Exploration requires safe, human-rated, energy storage technologies with high energy density, high specific energy and the ability to perform in a variety of unique environments. The Exploration Technology Development Program is currently supporting the development of battery and fuel cell systems that address these critical technology areas. Specific technology efforts that advance these systems and optimize their operation in various space environments are addressed in this overview of the Energy Storage Technology Development Project. These technologies will support a new generation of more affordable, more reliable, and more effective space systems.

  12. Long-term cryogenic space storage system

    NASA Technical Reports Server (NTRS)

    Hopkins, R. A.; Chronic, W. L.

    1973-01-01

    Discussion of the design, fabrication, and testing of a 225-cu ft spherical cryogenic storage system for long-term storage of subcritical cryogens for space vehicle propulsion systems under zero-g conditions. The insulation system design, the analytical methods used, and the correlation between the performance test results and analytical predictions are described. The best available multilayer insulation materials and state-of-the-art thermal protection concepts were applied in the design, providing a boiloff rate of 0.152 lb/hr, or 0.032% per day, and an overall heat flux of 0.066 Btu/sq ft hr based on a 200 sq ft surface area. This system provides six to eighteen months of cryogenic storage for space applications.

  13. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, “particle filtering,” that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
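
    A minimal bootstrap particle filter conveys the propagate-weight-resample idea described above; all constants here are invented, and this is the generic technique rather than the authors' vestibular model.

      import numpy as np

      rng = np.random.default_rng(0)

      # Many noisy copies of a leaky internal model are propagated,
      # weighted against a noisy afferent measurement, and resampled.
      n_particles, n_steps, dt = 1000, 200, 0.01
      tau  = 1.0         # assumed internal-model decay time constant, s
      q, r = 0.05, 0.2   # process and afferent (measurement) noise levels

      true_w = 1.0       # constant true angular velocity, rad/s
      particles = rng.normal(0.0, 1.0, n_particles)
      for _ in range(n_steps):
          # Propagate each particle through the noisy internal model.
          particles += dt * (-particles / tau) \
                       + q * np.sqrt(dt) * rng.normal(size=n_particles)
          # Weight by agreement with the afferent measurement, then resample.
          z = true_w + r * rng.normal()
          w = np.exp(-0.5 * ((z - particles) / r) ** 2)
          w /= w.sum()
          particles = rng.choice(particles, size=n_particles, p=w)

      # The particle spread is what such a filter uses to set its gain.
      print(particles.mean(), particles.std())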

  14. Computational Design of Non-natural Sugar Alcohols to Increase Thermal Storage Density: Beyond Existing Organic Phase Change Materials.

    PubMed

    Inagaki, Taichi; Ishida, Toyokazu

    2016-09-14

    Thermal storage, a technology that enables us to control thermal energy, makes it possible to reuse a huge amount of waste heat, and materials with the ability to treat larger thermal energy are in high demand for energy-saving societies. Sugar alcohols are now one promising candidate for phase change materials (PCMs) because of their large thermal storage density. In this study, we computationally design experimentally unknown non-natural sugar alcohols and predict their thermal storage density as a basic step toward the development of new high performance PCMs. The non-natural sugar alcohol molecules are constructed in silico in accordance with the previously suggested molecular design guidelines: linear elongation of a carbon backbone, separated distribution of OH groups, and even numbers of carbon atoms. Their crystal structures are then predicted using the random search method and first-principles calculations. Our molecular simulation results clearly demonstrate that the non-natural sugar alcohols have potential ability to have thermal storage density up to ∼450-500 kJ/kg, which is significantly larger than the maximum thermal storage density of the present known organic PCMs (∼350 kJ/kg). This computational study suggests that, even in the case of H-bonded molecular crystals where the electrostatic energy contributes mainly to thermal storage density, the molecular distortion and van der Waals energies are also important factors to increase thermal storage density. In addition, the comparison between the three eight-carbon non-natural sugar alcohol isomers indicates that the selection of preferable isomers is also essential for large thermal storage density.

  15. Mass storage: The key to success in high performance computing

    NASA Technical Reports Server (NTRS)

    Lee, Richard R.

    1993-01-01

    There are numerous High Performance Computing & Communications initiatives in the world today. All are determined to help solve some 'Grand Challenge' type of problem, but each appears to be dominated by the pursuit of higher and higher levels of CPU performance and interconnection bandwidth as the approach to success, without regard to the impact of Mass Storage. My colleagues and I at Data Storage Technologies believe that every initiative will ultimately have its performance against its goals measured by its ability to efficiently store and retrieve the 'deluge of data' created by the end-users solving scientific Grand Challenge problems on these systems, and that Mass Storage will then become the determinant of success or failure. In today's world of High Performance Computing and Communications (HPCC), the critical path to success in solving problems can only be traveled by designing and implementing Mass Storage Systems capable of storing and manipulating the truly 'massive' amounts of data associated with solving these challenges. Within my presentation I will explore this critical issue and hypothesize solutions to this problem.

  16. Optimization of a Brayton cryocooler for ZBO liquid hydrogen storage in space

    NASA Astrophysics Data System (ADS)

    Deserranno, D.; Zagarola, M.; Li, X.; Mustafi, S.

    2014-11-01

    NASA is evaluating and developing technology for long-term storage of cryogenic propellant in space. A key technology is a cryogenic refrigerator which intercepts heat loads to the storage tank, resulting in a reduced- or zero-boil-off condition. Turbo-Brayton cryocoolers are particularly well suited for cryogen storage applications because the technology scales well to high capacities and low temperatures. In addition, the continuous-flow nature of the cycle allows direct cooling of the cryogen storage tank without mass and power penalties associated with a cryogenic heat transport system. To quantify the benefits and mature the cryocooler technology, Creare Inc. performed a design study and technology demonstration effort for NASA on a 20 W, 20 K cryocooler for liquid hydrogen storage. During the design study, we optimized these key components: three centrifugal compressors, a modular high-capacity plate-fin recuperator, and a single-stage turboalternator. The optimization of the compressors and turboalternator were supported by component testing. The optimized cryocooler has an overall flight mass of 88 kg and a specific power of 61 W/W. The coefficient of performance of the cryocooler is 23% of the Carnot cycle. This is significantly better performance than any 20 K space cryocooler existing or under development.
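
    The quoted efficiency can be sanity-checked from the specific power; the roughly 300 K heat-rejection temperature below is an assumption for the check, not a figure from the paper.

      # Consistency check of the quoted cryocooler figures.
      T_cold, T_hot = 20.0, 300.0             # K (T_hot is assumed)
      cop_carnot = T_cold / (T_hot - T_cold)  # ~0.0714
      cop_actual = 1.0 / 61.0                 # from 61 W input per W of lift
      print(cop_actual / cop_carnot)          # ~0.23, i.e. 23% of Carnot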

  17. A study of the applicability/compatibility of inertial energy storage systems to future space missions

    NASA Technical Reports Server (NTRS)

    Weldon, W. F.

    1980-01-01

    The applicability/compatibility of inertial energy storage systems like the homopolar generator (HPG) and the compensated pulsed alternator (CPA) to future space missions is explored. Areas of CPA and HPG design requiring development for space applications are identified. The manner in which acceptance parameters of the CPA and HPG scale with operating parameters of the machines are explored and the types of electrical loads which are compatible with the CPA and HPG are examined. Potential applications including the magnetoplasmadynamic (MPD) thruster, pulsed data transmission, laser ranging, welding and electromagnetic space launch are discussed.

  18. POSIX and Object Distributed Storage Systems Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing infrastructure has become an intensive dynamic system used for first-hand data collection and analysis, resulting in a dense collection of data output. As we have transitioned to our current state, inefficient, limited storage systems have become an impediment to fast feedback to online shift crews. Motivation for a centrally accessible, scalable and redundant distributed storage system had become a necessity in this environment. OpenStack Swift Object Storage and Ceph Object Storage are two eye-opening technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests, emulating real world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX compliant file system mounted similarly to an NFS share, was of particular interest as it aligned with our requirements and was retained as our solution. I/O performance tests were run against the Ceph POSIX file system and have presented surprising results indicating true potential for fast I/O and reliability. STAR's online compute farm has historically been used for job submission and first-hand data analysis. Reusing the online compute farm to maintain a storage cluster alongside job submission will be an efficient use of the current infrastructure.
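
    A generic parallel-write test of the kind described can be run against any POSIX mount; in the sketch below, the mount point, file sizes, and thread count are placeholders, not the STAR configuration or its results.

      import os
      import time
      from concurrent.futures import ThreadPoolExecutor

      MOUNT = "/mnt/cephfs"            # hypothetical CephFS POSIX mount point
      BLOCK = 4 * 1024 * 1024          # 4 MiB per write
      N_WRITERS, N_BLOCKS = 8, 64      # 8 files x 256 MiB each

      def write_file(i):
          # Each worker streams one file of N_BLOCKS blocks to the mount.
          path = os.path.join(MOUNT, f"iotest-{i}.bin")
          buf = os.urandom(BLOCK)
          with open(path, "wb") as f:
              for _ in range(N_BLOCKS):
                  f.write(buf)
              f.flush()
              os.fsync(f.fileno())     # force data out to the storage cluster
          return N_BLOCKS * BLOCK

      t0 = time.time()
      with ThreadPoolExecutor(max_workers=N_WRITERS) as pool:
          total = sum(pool.map(write_file, range(N_WRITERS)))
      print(f"{total / (time.time() - t0) / 1e6:.1f} MB/s aggregate")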

  19. Reliable, Memory Speed Storage for Cluster Computing Frameworks

    DTIC Science & Technology

    2014-06-16

    specification API that can capture computations in many of today’s popular data-parallel computing models, e.g., MapReduce and SQL. We also ported the Hadoop ... today’s big data workloads: • Immutable data: Data is immutable once written, since dominant underlying storage systems, such as HDFS [3], only support ... network transfers, so reads can be data-local. • Program size vs. data size: In big data processing, the same operation is repeatedly applied on massive

  20. 36 CFR 1234.14 - What are the requirements for environmental controls for records storage facilities?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 What are the requirements for environmental controls for records storage facilities? 1234.14 Section 1234.14 Parks, Forests, and Public... storage space that is designed to preserve them for their full retention period. New records storage...
