Sample records for data storage

  1. Archive Storage Media Alternatives.

    ERIC Educational Resources Information Center

    Ranade, Sanjay

    1990-01-01

    Reviews requirements for a data archive system and describes storage media alternatives that are currently available. Topics discussed include data storage; data distribution; hierarchical storage architecture, including inline storage, online storage, nearline storage, and offline storage; magnetic disks; optical disks; conventional magnetic…

  2. Stand-alone digital data storage control system including user control interface

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D. (Inventor); Gray, David L. (Inventor)

    1994-01-01

    A storage control system includes an apparatus and method for user control of a storage interface to operate a storage medium to store data obtained by a real-time data acquisition system. Digital data received in serial format from the data acquisition system is first converted to a parallel format and then provided to the storage interface. The operation of the storage interface is controlled in accordance with instructions based on user control input from a user. Also, a user status output is displayed in accordance with storage data obtained from the storage interface. By allowing the user to control and monitor the operation of the storage interface, a stand-alone, user-controllable data storage system is provided for storing the digital data obtained by a real-time data acquisition system.
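
    The core data path this patent record describes, serial input framed into parallel words before being handed to the storage interface, can be illustrated with a short sketch. This is a hypothetical illustration, not the patented implementation; the 16-bit word size and MSB-first bit order are assumptions borrowed from the related High Density Digital Data Storage System record below.

```python
def serial_to_parallel(bits, word_size=16):
    """Pack a serial bit stream (assumed MSB first) into parallel words.

    Illustrative only: the patent describes converting serial data from a
    real-time acquisition system into a parallel format for the storage
    interface; the word size and bit order here are assumptions.
    """
    words = []
    for i in range(0, len(bits) - len(bits) % word_size, word_size):
        word = 0
        for bit in bits[i:i + word_size]:
            word = (word << 1) | (bit & 1)
        words.append(word)
    return words

# Example: 32 serial bits become two 16-bit parallel words.
stream = [1, 0] * 16
print([hex(w) for w in serial_to_parallel(stream)])  # ['0xaaaa', '0xaaaa']
```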

  3. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure cross-platform data sharing using current network storage models such as direct attached storage, network attached storage, and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path, and the management path, which solves the metadata bottleneck of traditional storage models, and offers parallel data access, cross-platform data sharing, intelligent storage devices, and secure data access. We apply object-based storage to the storage management of remote sensing images and construct an object-based storage model for distributed remote sensing images. In this model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give test results comparing the write performance of a traditional network storage model and the object-based storage model.

  4. Minimally buffered data transfers between nodes in a data communications network

    DOEpatents

    Miller, Douglas R.

    2015-06-23

    Methods, apparatus, and products for minimally buffered data transfers between nodes in a data communications network are disclosed that include: receiving, by a messaging module on an origin node, a storage identifier, an origin data type, and a target data type, the storage identifier specifying application storage containing data, the origin data type describing a data subset contained in the origin application storage, the target data type describing an arrangement of the data subset in application storage on a target node; creating, by the messaging module, origin metadata describing the origin data type; selecting, by the messaging module from the origin application storage in dependence upon the origin metadata and the storage identifier, the data subset; and transmitting, by the messaging module to the target node, the selected data subset for storing in the target application storage in dependence upon the target data type without temporarily buffering the data subset.
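
    The mechanism in this record, describing a strided data subset with an origin data type and scattering it on the target according to a target data type without an intermediate copy, resembles MPI derived datatypes. A minimal sketch of the idea follows, with hypothetical names and a simple (offset, length) run-list standing in for the data types; run boundaries are assumed to align for brevity.

```python
# Sketch: datatype-driven transfer without temporary buffering.
# An "origin data type" and "target data type" are modeled as lists of
# (offset, length) runs over application storage; all names are hypothetical.

def transfer(origin_storage, origin_type, target_storage, target_type):
    """Copy each selected run directly from origin layout to target layout."""
    assert sum(l for _, l in origin_type) == sum(l for _, l in target_type)
    src = ((off, ln) for off, ln in origin_type)
    for t_off, t_len in target_type:
        done = 0
        while done < t_len:
            s_off, s_len = next(src)  # next run of the origin data subset
            target_storage[t_off + done:t_off + done + s_len] = \
                origin_storage[s_off:s_off + s_len]
            done += s_len

origin = bytearray(b"AAxxBBxxCCxx")          # strided data subset: AA BB CC
target = bytearray(6)
transfer(origin, [(0, 2), (4, 2), (8, 2)],   # origin data type (strided)
         target, [(0, 6)])                   # target data type (contiguous)
print(target)                                # bytearray(b'AABBCC')
```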

  5. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
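
    The write pattern described here, multiple MPI processes writing a shared checkpoint file in a strided fashion to flash followed by asynchronous sequential migration to disk, can be sketched as below. File names, the block size, the process count, and the use of plain files to stand in for the two storage tiers are illustrative assumptions, not the patented design.

```python
import os

BLOCK = 4096            # assumed checkpoint block size per process per round
NPROCS = 4              # stand-in for the MPI processes of the job

def strided_write(path, rank, rounds):
    """Each process writes its blocks at rank-strided offsets (flash tier)."""
    with open(path, "r+b") as f:
        for r in range(rounds):
            f.seek((r * NPROCS + rank) * BLOCK)
            f.write(bytes([rank]) * BLOCK)

def migrate(src, dst):
    """Copy the shared file to the disk tier in one sequential pass, as the
    solid-state storage node would do asynchronously after the checkpoint."""
    with open(src, "rb") as s, open(dst, "wb") as d:
        while chunk := s.read(1 << 20):
            d.write(chunk)

rounds = 2
with open("ckpt.flash", "wb") as f:            # pre-size the shared file
    f.truncate(rounds * NPROCS * BLOCK)
for rank in range(NPROCS):                     # serial stand-in for MPI ranks
    strided_write("ckpt.flash", rank, rounds)
migrate("ckpt.flash", "ckpt.disk")
print(os.path.getsize("ckpt.disk"))            # 32768
```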

  6. Globally distributed software defined storage (proposal)

    NASA Astrophysics Data System (ADS)

    Shevel, A.; Khoruzhnikov, S.; Grudinin, V.; Sadov, O.; Kairkanov, A.

    2017-10-01

    The volume of the coming data in HEP is growing. The volume of the data to be held for a long time is growing as well. Large volume of data - big data - is distributed around the planet. The methods, approaches how to organize and manage the globally distributed data storage are required. The distributed storage has several examples for personal needs like own-cloud.org, pydio.com, seafile.com, sparkleshare.org. For enterprise-level there is a number of systems: SWIFT - distributed storage systems (part of Openstack), CEPH and the like which are mostly object storage. When several data center’s resources are integrated, the organization of data links becomes very important issue especially if several parallel data links between data centers are used. The situation in data centers and in data links may vary each hour. All that means each part of distributed data storage has to be able to rearrange usage of data links and storage servers in each data center. In addition, for each customer of distributed storage different requirements could appear. The above topics are planned to be discussed in data storage proposal.

  7. Towards rewritable multilevel optical data storage in single nanocrystals.

    PubMed

    Riesen, Nicolas; Pan, Xuanzhao; Badek, Kate; Ruan, Yinlan; Monro, Tanya M; Zhao, Jiangbo; Ebendorff-Heidepriem, Heike; Riesen, Hans

    2018-04-30

    Novel approaches for digital data storage are imperative, as storage capacities are drastically being outpaced by the exponential growth in data generation. Optical data storage represents the most promising alternative to traditional magnetic and solid-state data storage. In this paper, a novel and energy-efficient approach to optical data storage using rare-earth-ion-doped inorganic insulators is demonstrated. In particular, the nanocrystalline alkaline earth halide BaFCl:Sm is shown to provide great potential for multilevel optical data storage. Proof-of-concept demonstrations reveal for the first time that these phosphors could be used for rewritable, multilevel optical data storage on the physical dimensions of a single nanocrystal. Multilevel information storage is based on the very efficient and reversible conversion of Sm3+ to Sm2+ ions upon exposure to UV-C light. The stored information is then read out using confocal optics by employing the photoluminescence of the Sm2+ ions in the nanocrystals, with the signal strength depending on the UV-C fluence used during the write step. The latter serves as the mechanism for multilevel data storage in the individual nanocrystals, as demonstrated in this paper. This data storage platform has the potential to be extended to 2D and 3D memory for storage densities that could potentially approach petabyte/cm3 levels.
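
    Multilevel storage as described here, where the write fluence sets one of several photoluminescence levels per nanocrystal and readout thresholds the signal, is in coding terms an m-ary symbol channel. A toy sketch of the encode/decode logic; the number of levels, the thresholds, and the noise model are assumptions for illustration only.

```python
import random

LEVELS = [0.0, 0.33, 0.66, 1.0]   # assumed normalized PL levels (2 bits/crystal)

def write(symbols):
    """'Expose' each crystal with a fluence yielding the target PL level."""
    return [LEVELS[s] + random.gauss(0, 0.04) for s in symbols]  # readout noise

def read(signals):
    """Threshold the confocal readout back to the nearest stored level."""
    return [min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - x))
            for x in signals]

data = [random.randrange(4) for _ in range(1000)]
recovered = read(write(data))
print(sum(a == b for a, b in zip(data, recovered)) / len(data))  # ≈ 1.0
```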

  8. High Density Digital Data Storage System

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D., II; Gray, David L.; Rowland, Wayne D.

    1991-01-01

    The High Density Digital Data Storage System was designed to provide a cost-effective means for storing real-time data from the field-deployable digital acoustic measurement system. However, the high density data storage system is a stand-alone system that could provide a storage solution for many other real-time data acquisition applications. The storage system has inputs for up to 20 channels of 16-bit digital data. The high density tape recorders presently being used in the storage system are capable of storing over 5 gigabytes of data at overall transfer rates of 500 kilobytes per second. However, through the use of data compression techniques the system storage capacity and transfer rate can be doubled. Two tape recorders have been incorporated into the storage system to produce a backup tape of data in real time. An analog output is provided for each data channel as a means of monitoring the data as it is being recorded.
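
    The quoted figures imply the aggregate sample rate and recording time directly; a quick arithmetic check under the stated numbers (20 channels of 16-bit data, 500 kilobytes per second overall, 5-gigabyte cartridges):

```python
channels, bytes_per_sample = 20, 2
rate = 500e3                         # bytes/s, overall transfer rate
tape = 5e9                           # bytes per cartridge

samples_per_channel = rate / (channels * bytes_per_sample)
hours_per_tape = tape / rate / 3600
print(f"{samples_per_channel:.0f} samples/s per channel")  # 12500
print(f"{hours_per_tape:.1f} h per 5 GB tape")  # 2.8 (≈5.6 h with 2:1 compression)
```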

  9. From the surface to volume: concepts for the next generation of optical-holographic data-storage materials.

    PubMed

    Bruder, Friedrich-Karl; Hagen, Rainer; Rölle, Thomas; Weiser, Marc-Stephan; Fäcke, Thomas

    2011-05-09

    Optical data storage has had a major impact on daily life since its introduction to the market in 1982. Compact discs (CDs), digital versatile discs (DVDs), and Blu-ray discs (BDs) are universal data-storage formats with the advantage that the reading and writing of the digital data does not require contact and is therefore wear-free. These formats allow convenient and fast data access, high transfer rates, and electricity-free data storage with low overall archiving costs. The driving force for development in this area is the constant need for increased data-storage capacity and transfer rate. The use of holographic principles for optical data storage is an elegant way to increase the storage capacity and the transfer rate, because by this technique the data can be stored in the volume of the storage material and, moreover, it can be optically processed in parallel. This Review describes the fundamental requirements for holographic data-storage materials and compares the general concepts for the materials used. An overview of the performance of current read-write devices shows how far holographic data storage has already been developed. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    NASA Astrophysics Data System (ADS)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

    The two major issues for cloud storage systems are data reliability and storage costs. For data reliability protection, the multi-replica replication strategy used in most current clouds incurs huge storage consumption, leading to a large storage cost for applications within the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back cloud storage consumption. PRCR ensures the data reliability of large cloud datasets with a minimum level of replication that can also serve as a cost-effective benchmark for replication. The evaluation shows that, compared to the conventional three-replica approach, PRCR can reduce storage consumption to one-third or even a small fraction of that storage, hence significantly minimizing the cloud storage cost.
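
    The idea behind PRCR, as the abstract presents it, is to keep fewer replicas and compensate by proactively checking and repairing them before data is lost. A highly simplified sketch of such a check-and-repair loop; the class name, the failure model, and the repair policy are hypothetical, not the paper's algorithm.

```python
import random

class ProactiveChecker:
    """Keep data reliable with minimum replication by periodic verification."""

    def __init__(self, min_replicas=2):
        self.min_replicas = min_replicas
        self.replicas = {}           # object -> set of node ids holding a copy

    def scan(self, nodes_alive):
        """Proactive pass: drop replicas on failed nodes and re-create copies
        from survivors, instead of always keeping three replicas."""
        for obj, nodes in self.replicas.items():
            nodes &= nodes_alive                       # discard lost replicas
            while len(nodes) < self.min_replicas and nodes:
                nodes.add(random.choice(sorted(nodes_alive - nodes)))
            self.replicas[obj] = nodes

pr = ProactiveChecker(min_replicas=2)
pr.replicas["obj1"] = {1, 2}
pr.scan(nodes_alive={2, 3, 4})       # node 1 failed; repair from surviving copy
print(pr.replicas["obj1"])           # e.g. {2, 3}
```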

  11. Interactive Educational Multimedia: Coping with the Need for Increasing Data Storage.

    ERIC Educational Resources Information Center

    Malhotra, Yogesh; Erickson, Ranel E.

    1994-01-01

    Discusses the storage requirements for data forms used in interactive multimedia education and presently available storage devices. Highlights include characteristics of educational multimedia; factors determining data storage requirements; storage devices for video and audio needs; laserdiscs and videodiscs; compact discs; magneto-optical drives;…

  12. Analysis and Research on Spatial Data Storage Model Based on Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Hu, Yong

    2017-12-01

    In this paper, the data processing and storage characteristics of cloud computing are analyzed and studied. On this basis, a cloud computing data storage model based on a BP neural network is proposed. This data storage model selects a server cluster according to the different attributes of the data, thereby completing a spatial data storage model with a load-balancing function, and it shows a certain feasibility and offers application advantages.

  13. Tenth Goddard Conference on Mass Storage Systems and Technologies in Cooperation with the Nineteenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    2002-01-01

    This document contains copies of those technical papers received in time for publication prior to the Tenth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Nineteenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center April 15-18, 2002. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the ingest, storage, and management of large volumes of data. The Conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, future of current technology, storage networking with emphasis on IP storage, performance, standards, site reports, and vendor solutions. Tutorials will be available on perpendicular magnetic recording, object based storage, storage virtualization and IP storage.

  14. Electron trapping data storage system and applications

    NASA Technical Reports Server (NTRS)

    Brower, Daniel; Earman, Allen; Chaffin, M. H.

    1993-01-01

    The advent of digital information storage and retrieval has led to explosive growth in data transmission techniques, data compression alternatives, and the need for high capacity random access data storage. Advances in data storage technologies are limiting the utilization of digitally based systems. New storage technologies will be required which can provide higher data capacities and faster transfer rates in a more compact format. Magnetic disk/tape and current optical data storage technologies do not provide these higher performance requirements for all digital data applications. A new technology developed at the Optex Corporation outperforms all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media is capable of storing as much as 14 gigabytes of uncompressed data on a single, double-sided 130 mm (5.25 inch) disk with a data transfer rate of up to 120 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out 100 percent photonically, no heating of the recording media is required. Therefore, the storage media suffers no deleterious effects from repeated Write/Read/Erase cycling.

  15. Daily GRACE storage anomaly data for characterization of dynamic storage-discharge relationships of natural drainage basins

    NASA Astrophysics Data System (ADS)

    Sharma, D.; Patnaik, S.; Reager, J. T., II; Biswal, B.

    2017-12-01

    Despite the fact that streamflow occurs mainly due to depletion of storage, our knowledge of how a drainage basin stores and releases water is very limited because of measurement limitations. As a result, storage has largely remained an elusive quantity in hydrological analysis and modelling. A window of opportunity, however, is offered by the GRACE satellite mission, which provides storage anomaly (TWSA) data for the entire globe. Many studies have used TWSA data for storage-discharge analysis, uncovering a range of potential applications of TWSA data. Here we argue that the capability of the GRACE satellite mission has not been fully explored, as most past studies have performed storage-discharge analysis using monthly TWSA data for large river basins. With such coarse data we are quite unlikely to fully understand the variation of storage and discharge in space and time. In this study, we therefore use daily TWSA data for several mid-sized catchments and perform storage-discharge analysis. The daily storage-discharge relationship is highly dynamic, which generates a large amount of scatter in storage-discharge plots. Yet a careful analysis of those scatter plots reveals interesting information on the storage-discharge relationships of basins, particularly when the relationships during individual recession events are examined. It is observed that the storage-discharge relationship is exponential in nature, contrary to the general assumption that the relationship is linear. We find that there is a strong relationship between the power-law recession coefficient and initial storage (TWSA at the beginning of a recession event). Furthermore, appreciable relationships are observed between the recession coefficient and past TWSA values, implying that storage takes time to deplete completely. Overall, the insights drawn from this study expand our knowledge of how discharge is dynamically linked to storage.
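
    The analysis described, fitting individual recession events and relating the recession coefficient to initial storage, can be sketched with a log-linear fit. Assuming, per the abstract, an exponential storage-discharge relation Q = a·exp(b·S) during a recession event, the sketch below recovers a and b from daily TWSA/discharge pairs; the data are synthetic and the variable names illustrative.

```python
import numpy as np

# Synthetic daily recession data consistent with Q = a * exp(b * S)
a_true, b_true = 0.5, 0.08
S = np.linspace(40, 5, 36)                     # daily TWSA (mm), depleting
Q = a_true * np.exp(b_true * S) * np.exp(np.random.normal(0, 0.05, S.size))

# ln Q = ln a + b * S  ->  ordinary least squares on (S, ln Q)
b_hat, ln_a_hat = np.polyfit(S, np.log(Q), 1)
print(f"b ≈ {b_hat:.3f}, a ≈ {np.exp(ln_a_hat):.3f}")   # close to 0.08 and 0.5
```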

  16. Telemetry data storage systems technology for the Space Station Freedom era

    NASA Technical Reports Server (NTRS)

    Dalton, John T.

    1989-01-01

    This paper examines the requirements and functions of the telemetry-data recording and storage systems, and the data-storage-system technology projected for the Space Station, with particular attention given to the Space Optical Disk Recorder, an on-board storage subsystem based on 160-gigabit erasable optical disk units each capable of operating at 300 megabits per second. Consideration is also given to storage systems for ground transport recording, which include systems for data capture, buffering, processing, and delivery on the ground. These can be categorized as first-in first-out storage, fast random-access storage, and slow access with staging. Based on projected mission manifests and data rates, worst-case requirements were developed for these three storage architecture functions. The results of the analysis are presented.

  17. Using semantic data modeling techniques to organize an object-oriented database for extending the mass storage model

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik

    1991-01-01

    A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.

  18. Two-stage optical recording: photoinduced birefringence and surface-mediated bits storage in bisazo-containing copolymers towards ultrahigh data memory.

    PubMed

    Hu, Yanlei; Wu, Dong; Li, Jiawen; Huang, Wenhao; Chu, Jiaru

    2016-10-03

    Ultrahigh density data storage is in high demand in the current age of big data and thus motivates many innovative storage technologies. Femtosecond laser induced multi-dimensional optical data storage is an appealing method to fulfill the demand for ultrahigh storage capacity. Here we report femtosecond laser induced two-stage optical storage in bisazobenzene copolymer films achieved by manipulating the recording energies. Different mechanisms can be selected for specified memory use: two-photon isomerization (TPI) and laser-induced surface deformation. Giant birefringence can be generated by TPI and brings about high signal-to-noise ratio (>20 dB) multi-dimensional reversible storage. Polarization-dependent surface deformation arises when the recording energy is increased, which not only facilitates multi-level storage by black bits (dots), but also enhances the bits' readout signal and storage stability. This facile bit-recording method, which enables completely different recording mechanisms in an identical storage medium, paves the way for sustainable big data storage.

  19. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  20. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  1. Electron trapping optical data storage system and applications

    NASA Technical Reports Server (NTRS)

    Brower, Daniel; Earman, Allen; Chaffin, M. H.

    1993-01-01

    A new technology developed at Optex Corporation outperforms all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media stores 14 gigabytes of uncompressed data on a single, double-sided 130 mm disk with a data transfer rate of up to 120 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out photonically, no heating of the recording media is required. Therefore, the storage media suffers no deleterious effects from repeated W/R/E cycling. This rewritable data storage technology has been developed for use as a basis for numerous data storage products. Industries that can benefit from the ETOM data storage technologies include: satellite data and information systems, broadcasting, video distribution, image processing and enhancement, and telecommunications. Products developed for these industries are well suited for the demanding store-and-forward buffer systems, data storage, and digital video systems needed for these applications.

  2. Storages Are Not Forever

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cambria, Erik; Chattopadhyay, Anupam; Linn, Eike

    Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. Here, we chose to look closely into one concern in this paper, namely the limited amount of data storage. By a simple extrapolatory analysis, it is shown that we are on the way to exhaust our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiatives in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tapping into the interplay between storage and computing to minimize storage allocation; thirdly, exploring ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness of the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.

  3. Storages Are Not Forever

    DOE PAGES

    Cambria, Erik; Chattopadhyay, Anupam; Linn, Eike; ...

    2017-05-27

    Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. Here, we chose to look closely into one concern in this paper, namely the limited amount of data storage. By a simple extrapolatory analysis, it is shown that we are on the way to exhaust our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiatives in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tapping into the interplay between storage and computing to minimize storage allocation; thirdly, exploring ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness of the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.
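
    The "simple extrapolatory analysis" this record refers to can be reproduced in outline: compare exponential data growth against a finite ceiling on manufacturable storage. The numbers below are placeholders rather than the paper's figures, chosen only to show that a steady growth rate exhausts any fixed ceiling within a couple of centuries.

```python
import math

# Placeholder assumptions (not the paper's figures):
data_now = 100e21          # bytes of data stored today
growth = 1.25              # data volume grows 25% per year
ceiling = 1e40             # bytes storable with all practically usable matter

# Solve data_now * growth**t = ceiling for t.
t = math.log(ceiling / data_now) / math.log(growth)
print(f"ceiling reached in about {t:.0f} years")   # ≈ 175 years
```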

  4. High-Density Digital Data Storage System

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D.; Gray, David L.

    1995-01-01

    High-density digital data storage system designed for cost-effective storage of large amounts of information acquired during experiments. System accepts up to 20 channels of 16-bit digital data with overall transfer rates of 500 kilobytes per second. Data recorded on 8-millimeter magnetic tape in cartridges, each capable of holding up to five gigabytes of data. Each cartridge mounted on one of two tape drives. Operator chooses to use either or both of drives. One drive used for primary storage of data while other can be used to make a duplicate record of data. Alternatively, other drive serves as backup data-storage drive when primary one fails.

  5. Public storage for the Open Science Grid

    NASA Astrophysics Data System (ADS)

    Levshina, T.; Guru, A.

    2014-06-01

    The Open Science Grid infrastructure doesn't provide efficient means to manage the public storage offered by participating sites. A Virtual Organization that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators, and VO support personnel is required to allocate or rescind storage space. One of the main requirements for the Public Storage implementation is that it should use SRM or GridFTP protocols to access the Storage Elements provided by the OSG sites and not put any additional burden on sites. By policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, then storing the output data produced by a job on a worker node for subsequent download to a local institution. When the amount of data is significant, the only means to temporarily store the data is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System developed at RENCI as a front-end service to the OSG SEs. The current architecture, state of deployment, and performance test results will be discussed. We will also provide examples of current usage of the system by beta-users.

  6. Online mass storage system detailed requirements document

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The requirements for an online high density magnetic tape data storage system that can be implemented in a multipurpose, multihost environment are set forth. The objective of the mass storage system is to provide a facility for the compact storage of large quantities of data and to make this data accessible to computer systems with minimum operator handling. The results of a market survey and analysis of candidate vendors who presently market high density tape data storage systems are included.

  7. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made storage a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and also the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works that have been carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.
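
    A bare-bones particle swarm optimization over candidate storage-node coordinates, minimizing a rate-weighted transmission-distance cost, conveys the flavor of the approach. This is a generic PSO sketch, not the paper's hybrid algorithm with fuzzy C-means clustering, and all parameters (inertia, acceleration constants, field size) are assumptions.

```python
import random

# Producers/consumers: (x, y, data_rate); cost = sum(rate * distance to node)
nodes = [(random.uniform(0, 100), random.uniform(0, 100), random.uniform(1, 5))
         for _ in range(20)]

def cost(p):
    return sum(r * ((p[0] - x) ** 2 + (p[1] - y) ** 2) ** 0.5
               for x, y, r in nodes)

# Standard PSO update: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
parts = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(30)]
vels = [[0.0, 0.0] for _ in parts]
pbest = [p[:] for p in parts]
gbest = min(pbest, key=cost)
for _ in range(200):
    for i, p in enumerate(parts):
        for d in range(2):
            vels[i][d] = (0.7 * vels[i][d]
                          + 1.5 * random.random() * (pbest[i][d] - p[d])
                          + 1.5 * random.random() * (gbest[d] - p[d]))
            p[d] += vels[i][d]
        if cost(p) < cost(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest + [gbest], key=cost)
print("best storage position:", [round(v, 1) for v in gbest])
```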

  8. Reliable data storage system design and implementation for acoustic logging while drilling

    NASA Astrophysics Data System (ADS)

    Hao, Xiaolong; Ju, Xiaodong; Wu, Xiling; Lu, Junqiang; Men, Baiyong; Yao, Yongchao; Liu, Dong

    2016-12-01

    Owing to the limitations of real-time transmission, reliable downhole data storage and fast ground readout have become key technologies in developing tools for acoustic logging while drilling (LWD). In order to improve the reliability of the downhole storage system under conditions of high temperature, intensive vibration, and periodic power supply, improvements were made in both hardware and software. In hardware, we integrated the storage system and the data acquisition control module onto one circuit board to reduce the complexity of the storage process, adopting a controller combination of a digital signal processor and a field-programmable gate array. In software, we developed a systematic management strategy for reliable storage. Multiple-backup independent storage was employed to increase data redundancy. A traditional error checking and correction (ECC) algorithm was improved, and the calculated ECC code was embedded into all management data and waveform data. A real-time storage algorithm for arbitrary-length data was designed to actively preserve the storage scene and ensure the independence of the stored data. The recovery procedure for management data was optimized to realize reliable self-recovery. A new bad-block management scheme of static block replacement and dynamic page marking was proposed to make the period of data acquisition and storage more balanced. In addition, we developed a portable ground data reading module based on a new reliable high-speed bus-to-Ethernet interface to achieve fast reading of the logging data. Experiments have shown that this system can work stably below 155 °C with a periodic power supply. The effective ground data reading rate reaches 1.375 Mbps with a 99.7% one-time success rate at room temperature. This work has significant practical value for improving the reliability and field efficiency of acoustic LWD tools.
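
    Two of the reliability measures described, multiple-backup independent storage and embedding an integrity code with every record, can be sketched together. The sketch uses a plain checksum-plus-replication scheme for brevity; the paper's improved ECC algorithm and flash layout are not given in the abstract, so everything below is an assumption.

```python
import hashlib

BACKUPS = 3   # assumed number of independent copies

def store(record: bytes):
    """Write BACKUPS copies, each with an embedded integrity code."""
    framed = hashlib.md5(record).digest() + record
    return [bytearray(framed) for _ in range(BACKUPS)]  # independent regions

def recover(copies):
    """Return the payload of the first copy whose embedded code verifies."""
    for c in copies:
        digest, payload = bytes(c[:16]), bytes(c[16:])
        if hashlib.md5(payload).digest() == digest:
            return payload
    raise IOError("all copies corrupted")

copies = store(b"waveform frame 0001")
copies[0][20] ^= 0xFF                 # simulate a flash bit error in copy 0
print(recover(copies))                # b'waveform frame 0001' from copy 1
```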

  9. Mass storage systems for data transport in the early space station era 1992-1998

    NASA Technical Reports Server (NTRS)

    Carper, Richard (Editor); Dalton, John (Editor); Healey, Mike (Editor); Kempster, Linda (Editor); Martin, John (Editor); Mccaleb, Fred (Editor); Sobieski, Stanley (Editor); Sos, John (Editor)

    1987-01-01

    NASA's Space Station Program will provide a vehicle to deploy an unprecedented number of data-producing experiments and operational devices. Peak downlink data rates are expected to be in the 500 megabit per second range and the daily data volume could reach 2.4 terabytes. Such startling requirements inspired an internal NASA study to determine if economically viable data storage solutions are likely to be available to support the Ground Data Transport segment of the NASA data system. To derive the requirements for data storage subsystems, several alternative data transport architectures were identified with different degrees of decentralization. Data storage operations at each subsystem were categorized based on access time and retrieval functions, and reduced to the following types of subsystems: first-in first-out (FIFO) storage, fast random-access storage, and slow access with staging. The study showed that industry-funded magnetic and optical storage technology has a reasonable probability of meeting these requirements. There are, however, system-level issues that need to be addressed in the near term.

  10. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  11. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  12. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  13. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  14. Data storage technology comparisons

    NASA Technical Reports Server (NTRS)

    Katti, Romney R.

    1990-01-01

    The role of data storage and data storage technology is an integral, though conceptually often underestimated, portion of data processing technology. Data storage is important in the mass storage mode in which generated data is buffered for later use. But data storage technology is also important in the data flow mode when data are manipulated and hence required to flow between databases, datasets and processors. This latter mode is commonly associated with memory hierarchies which support computation. VLSI devices can reasonably be defined as electronic circuit devices such as channel and control electronics as well as highly integrated, solid-state devices that are fabricated using thin film deposition technology. VLSI devices in both capacities play an important role in data storage technology. In addition to random access memories (RAM), read-only memories (ROM), and other silicon-based variations such as PROM's, EPROM's, and EEPROM's, integrated devices find their way into a variety of memory technologies which offer significant performance advantages. These memory technologies include magnetic tape, magnetic disk, magneto-optic disk, and vertical Bloch line memory. In this paper, some comparison between selected technologies will be made to demonstrate why more than one memory technology exists today, based for example on access time and storage density at the active bit and system levels.

  15. The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    1993-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in the solution of today's infinitely complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and magnetic media storage risk factors, data archiving standards including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed System performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.

  16. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center

    PubMed Central

    Miao, Beibei; Dou, Chao; Jin, Xuebo

    2016-01-01

    The storage volume of an internet data center is a classical time series. Predicting the storage volume of a data center is very valuable for its business value. However, the storage volume series from a data center is always "dirty": it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series for future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; then cubic spline interpolation and an averaging method are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and contributes greatly to predicting future volume values.
 PMID:28090205

  17. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center.

    PubMed

    Miao, Beibei; Dou, Chao; Jin, Xuebo

    2016-01-01

    The storage volume of an internet data center is a classical time series. Predicting the storage volume of a data center is very valuable for its business value. However, the storage volume series from a data center is always "dirty": it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series for future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; then cubic spline interpolation and an averaging method are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and contributes greatly to predicting future volume values.
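
    The pipeline both records describe, outlier removal followed by interpolation onto a trend, can be approximated with a simple one-dimensional Kalman-style recursion and SciPy's cubic spline. This is a generic reconstruction sketch under assumed noise parameters and a residual gate, not the authors' exact method.

```python
import numpy as np
from scipy.interpolate import CubicSpline

t = np.sort(np.random.uniform(0, 30, 80))          # irregular sample times
y = 50 + 0.8 * t + np.random.normal(0, 1, t.size)  # volume with noise
y[::13] += 25                                      # inject "dirty" outliers

# One-dimensional Kalman-style recursion; gate residuals to drop outliers.
est, var, q, r = y[0], 1.0, 0.5, 1.0               # assumed noise parameters
keep_t, keep_y = [t[0]], [y[0]]
for ti, yi in zip(t[1:], y[1:]):
    var += q                                   # predict step
    k = var / (var + r)                        # Kalman gain
    if abs(yi - est) < 3 * (var + r) ** 0.5:   # residual gate rejects outliers
        est += k * (yi - est)
        var *= (1 - k)
        keep_t.append(ti); keep_y.append(est)

trend = CubicSpline(keep_t, keep_y)            # reconstruct the main trend
print(trend(np.linspace(1, 29, 5)).round(1))   # trend on a regular grid
```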

  18. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro

    The increasingly growing data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.

  19. Working and Net Available Shell Storage Capacity

    EIA Publications

    2017-01-01

    Working and Net Available Shell Storage Capacity is the U.S. Energy Information Administration’s (EIA) report containing storage capacity data for crude oil, petroleum products, and selected biofuels. The report includes tables detailing working and net available shell storage capacity by type of facility, product, and Petroleum Administration for Defense District (PAD District). Net available shell storage capacity is broken down further to show the percent for exclusive use by facility operators and the percent leased to others. Crude oil storage capacity data are also provided for Cushing, Oklahoma, an important crude oil market center. Data are released twice each year near the end of May (data for March 31) and near the end of November (data for September 30).

  20. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  1. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  2. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  3. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  4. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  5. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made storage a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and also the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works that have been carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches. PMID:25734182

  6. 28 CFR 115.289 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Community Confinement Facilities Data Collection and Review § 115.289 Data storage, publication, and destruction. (a) The agency shall ensure that data collected...

  7. 28 CFR 115.289 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Community Confinement Facilities Data Collection and Review § 115.289 Data storage, publication, and destruction. (a) The agency shall ensure that data collected...

  8. 28 CFR 115.289 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Community Confinement Facilities Data Collection and Review § 115.289 Data storage, publication, and destruction. (a) The agency shall ensure that data collected...

  9. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE PAGES

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps toward realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.
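
    The central idea of this record, data objects whose physical placement differs per tier with movement across the memory/storage hierarchy, can be illustrated by a tiny tiered object store. The tier names, capacities, and LRU spill policy are illustrative assumptions, not the authors' design.

```python
from collections import OrderedDict

class TieredObjectStore:
    """Objects live in the fastest tier that fits; cold ones spill downward."""

    def __init__(self):
        # (name, capacity in objects): stand-ins for burst buffer / PFS / archive
        self.tiers = [("nvram", 2, OrderedDict()),
                      ("ssd", 4, OrderedDict()),
                      ("disk", 10 ** 6, OrderedDict())]

    def put(self, key, obj, tier=0):
        name, cap, store = self.tiers[tier]
        store[key] = obj
        if len(store) > cap:                       # evict oldest object downward
            old_key, old_obj = store.popitem(last=False)
            self.put(old_key, old_obj, tier + 1)

    def get(self, key):
        for name, _, store in self.tiers:
            if key in store:
                return store[key], name            # object and where it lived
        raise KeyError(key)

s = TieredObjectStore()
for i in range(5):
    s.put(f"obj{i}", b"data")
print(s.get("obj0")[1], s.get("obj4")[1])          # ssd nvram
```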

  10. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps toward realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.

  11. The Petascale Data Storage Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Garth; Long, Darrell; Honeyman, Peter

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.

  12. LVFS: A Big Data File Storage Bridge for the HPC Community

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.

    2015-12-01

    Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures, and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed by the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and nearly double the storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevents any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts HDF data, produced by applying machine learning algorithms to Xco2 Level 2 data from the OCO-2 satellite to produce CO2 surface fluxes, into GeoTIFF for visualization.
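
    A plugin registry of the kind this extension implies, with backends registered by scheme and selected per path so callers keep a uniform layout, can be sketched minimally. The backend names and interface are hypothetical; the abstract does not specify LVFS's real plugin API.

```python
# Hypothetical sketch of a storage-backend plugin registry (not the LVFS API).
BACKENDS = {}

def register(scheme):
    def wrap(cls):
        BACKENDS[scheme] = cls()
        return cls
    return wrap

@register("local")
class LocalBackend:
    def read(self, path):
        with open(path, "rb") as f:
            return f.read()

@register("kinetic")
class KineticBackend:          # stand-in for an object/key-value disk backend
    store = {"granule1": b"MODIS L1 bytes"}
    def read(self, path):
        return self.store[path]

def lvfs_read(url):
    """Callers see one uniform layout; the plugin chosen by scheme does I/O."""
    scheme, _, path = url.partition("://")
    return BACKENDS[scheme].read(path)

print(lvfs_read("kinetic://granule1"))   # b'MODIS L1 bytes'
```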

  13. Set processing in a network environment. [data bases and magnetic disks and tapes

    NASA Technical Reports Server (NTRS)

    Hardgrave, W. T.

    1975-01-01

    A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.

  14. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  15. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  16. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  17. 28 CFR 115.389 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Juvenile Facilities Data Collection and Review § 115.389 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115...

  18. 28 CFR 115.189 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Lockups Data Collection and Review § 115.189 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115.187 are...

  19. 28 CFR 115.389 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Juvenile Facilities Data Collection and Review § 115.389 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115...

  20. 28 CFR 115.189 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Lockups Data Collection and Review § 115.189 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115.187 are...

  1. 28 CFR 115.89 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Adult Prisons and Jails Data Collection and Review § 115.89 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to...

  2. 28 CFR 115.89 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Adult Prisons and Jails Data Collection and Review § 115.89 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to...

  3. 28 CFR 115.89 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Adult Prisons and Jails Data Collection and Review § 115.89 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to...

  4. 28 CFR 115.189 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Lockups Data Collection and Review § 115.189 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115.187 are...

  5. 28 CFR 115.389 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Juvenile Facilities Data Collection and Review § 115.389 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115...

  6. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  7. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  8. EMASS (tm): An expandable solution for NASA space data storage needs

    NASA Technical Reports Server (NTRS)

    Peterson, Anthony L.; Cardwell, P. Larry

    1992-01-01

    The data acquisition, distribution, processing, and archiving requirements of NASA and other U.S. Government data centers present significant data management challenges that must be met in the 1990's. The Earth Observing System (EOS) project alone is expected to generate daily data volumes greater than 2 Terabytes (2 x 10(exp 12) Bytes). As the scientific community makes use of this data, their work product will result in larger, increasingly complex data sets to be further exploited and managed. The challenge for data storage systems is to satisfy the initial data management requirements with cost effective solutions that provide for planned growth. This paper describes the expandable architecture of the E-Systems Modular Automated Storage System (EMASS (TM)), a mass storage system which is designed to support NASA's data capture, storage, distribution, and management requirements into the 21st century.

  9. EMASS (trademark): An expandable solution for NASA space data storage needs

    NASA Technical Reports Server (NTRS)

    Peterson, Anthony L.; Cardwell, P. Larry

    1991-01-01

    The data acquisition, distribution, processing, and archiving requirements of NASA and other U.S. Government data centers present significant data management challenges that must be met in the 1990's. The Earth Observing System (EOS) project alone is expected to generate daily data volumes greater than 2 Terabytes (2 x 10(exp 12) Bytes). As the scientific community makes use of this data, their work will result in larger, increasingly complex data sets to be further exploited and managed. The challenge for data storage systems is to satisfy the initial data management requirements with cost effective solutions that provide for planned growth. The expandable architecture of the E-Systems Modular Automated Storage System (EMASS(TM)), a mass storage system which is designed to support NASA's data capture, storage, distribution, and management requirements into the 21st century, is described.

  10. Eighth Goddard Conference on Mass Storage Systems and Technologies in Cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    2000-01-01

    This document contains copies of those technical papers received in time for publication prior to the Eighth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center March 27-30, 2000. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, future of current technology, new technology with a special emphasis on holographic storage, performance, standards, site reports, vendor solutions. Tutorials will be available on stability of optical media, disk subsystem performance evaluation, I/O and storage tuning, functionality and performance evaluation of file systems for storage area networks.

  11. Combined Acquisition/Processing For Data Reduction

    NASA Astrophysics Data System (ADS)

    Kruger, Robert A.

    1982-01-01

    Digital image processing systems necessarily consist of three components: acquisition, storage/retrieval, and processing. The acquisition component requires the greatest data handling rates. By coupling the acquisition with some online hardwired processing, data rates and capacities for short-term storage can be reduced. Long-term storage requirements can be reduced further by appropriate processing and editing of image data contained in short-term memory. The net result could be reduced performance requirements for mass storage, processing, and communication systems. Reduced amounts of data should also speed later data analysis and diagnostic decision making.

  12. High Burnup Dry Storage Cask Research and Development Project, Final Test Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2014-02-27

    EPRI is leading a project team to develop and implement the first five years of a Test Plan to collect data from a SNF dry storage system containing high burnup fuel. The Test Plan defined in this document outlines the data to be collected, and the storage system design, procedures, and licensing necessary to implement the Test Plan. The main goals of the proposed test are to provide confirmatory data for models and for future SNF dry storage cask design, and to support license renewals and new licenses for ISFSIs. To provide data that is most relevant to high burnup fuel in dry storage, the design of the test storage system must mimic real conditions that high burnup SNF experiences during all stages of dry storage: loading, cask drying, inert gas backfilling, and transfer to the ISFSI for multi-year storage. Along with other optional modeling, SETs, and SSTs, the data collected in this Test Plan can be used to evaluate the integrity of dry storage systems and the high burnup fuel contained therein over many decades. It should be noted that the Test Plan described in this document discusses essential activities that go beyond the first five years of Test Plan implementation. The first five years of the Test Plan include activities up through loading the cask, initiating the data collection, and beginning the long-term storage period at the ISFSI. The Test Plan encompasses the overall project that includes activities that may not be completed until 15 or more years from now, including continued data collection, shipment of the Research Project Cask to a Fuel Examination Facility, opening the cask at the Fuel Examination Facility, and examining the high burnup fuel after the initial storage period.

  13. Remote direct memory access

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.

    2012-12-11

    Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request to send message, the request to send message specifying data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request to send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.
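
    The rendezvous this patent describes (request to send, per-target preparation, acknowledgment, then a single direct put) can be simulated in ordinary Python. Plain objects stand in for DMA engines here; all names are illustrative, not the patent's actual embodiment.

        # Toy simulation of the rendezvous: the origin engine announces a
        # transfer, every target prepares storage and acknowledges, then
        # the origin performs one direct put per target.
        class TargetEngine:
            def __init__(self):
                self.storage = None
                self.base = 0  # base storage address assigned for the transfer

            def handle_request_to_send(self, length):
                # Prepare to store 'length' bytes at the assigned base.
                self.storage = bytearray(length)
                return "ack"

            def put(self, offset, data):
                start = self.base + offset
                self.storage[start:start + len(data)] = data

        class OriginEngine:
            def __init__(self, targets):
                self.targets = targets

            def send(self, data):
                # 1. request to send; 2. collect acks; 3. single put each.
                acks = [t.handle_request_to_send(len(data)) for t in self.targets]
                assert all(a == "ack" for a in acks)
                for t in self.targets:
                    t.put(0, data)

        targets = [TargetEngine() for _ in range(3)]
        OriginEngine(targets).send(b"field snapshot")
        assert all(bytes(t.storage) == b"field snapshot" for t in targets)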

  14. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 33 2013-07-01 2013-07-01 false Specimen and data storage facilities..., for the storage and retrieval of all raw data and specimens from completed studies. ... SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  15. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 32 2014-07-01 2014-07-01 false Specimen and data storage facilities..., for the storage and retrieval of all raw data and specimens from completed studies. ... SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  16. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 32 2011-07-01 2011-07-01 false Specimen and data storage facilities..., for the storage and retrieval of all raw data and specimens from completed studies. ... SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  17. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 33 2012-07-01 2012-07-01 false Specimen and data storage facilities..., for the storage and retrieval of all raw data and specimens from completed studies. ... SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  18. Recent Advances of Flexible Data Storage Devices Based on Organic Nanoscaled Materials.

    PubMed

    Zhou, Li; Mao, Jingyu; Ren, Yi; Han, Su-Ting; Roy, Vellaisamy A L; Zhou, Ye

    2018-03-01

    Following the trend of miniaturization as per Moore's law, and facing the strong demand for next-generation electronic devices that should be highly portable, wearable, transplantable, and lightweight, growing endeavors have been made to develop novel flexible data storage devices possessing nonvolatile ability, high-density storage, high switching speed, and reliable endurance properties. Nonvolatile organic data storage devices, including memory devices on the basis of floating-gate, charge-trapping, and ferroelectric architectures, as well as organic resistive memory, are believed to be favorable candidates for future data storage applications. In this Review, typical information on device structure, memory characteristics, device operation mechanisms, mechanical properties, challenges, and recent progress of the above categories of flexible data storage devices based on organic nanoscaled materials is summarized. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. High-performance metadata indexing and search in petascale data storage systems

    NASA Astrophysics Data System (ADS)

    Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.

    2008-07-01

    Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
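
    The core idea, harvesting file metadata into a separate index so searches avoid crawling the storage system, can be sketched as follows. SQLite is used purely for illustration; Spyglass itself uses its own partitioned index structures rather than a relational database.

        # Sketch of index-based file metadata search: crawl once (or
        # incrementally), then answer queries from the index alone.
        import os
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE files
                      (path TEXT, size INTEGER, mtime REAL, owner INTEGER)""")

        def harvest(root):
            # One-time (or incremental) metadata crawl.
            for dirpath, _, names in os.walk(root):
                for name in names:
                    p = os.path.join(dirpath, name)
                    try:
                        st = os.stat(p)
                    except OSError:
                        continue  # skip broken links / permission errors
                    db.execute("INSERT INTO files VALUES (?, ?, ?, ?)",
                               (p, st.st_size, st.st_mtime, st.st_uid))
            db.commit()

        harvest("/tmp")
        # "Which large files changed most recently?" -- answered without
        # touching the storage system again.
        rows = db.execute("""SELECT path, size FROM files
                             WHERE size > 1e6
                             ORDER BY mtime DESC LIMIT 10""").fetchall()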

  20. Parallel checksumming of data chunks of a shared data object using a log-structured file system

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-09-06

    Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
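
    A minimal sketch of the per-chunk checksumming pattern the patent describes: each client checksums its own chunk in parallel, the checksum travels with the chunk, and integrity is verified on read. The PLFS plumbing is omitted and the function names are illustrative.

        # Per-chunk checksums computed in parallel and verified on read.
        import zlib
        from concurrent.futures import ThreadPoolExecutor

        def write_chunk(chunk):
            # Client side: compute the checksum and hand both to the
            # storage node, so the checksum is stored with the chunk.
            return {"data": chunk, "crc": zlib.crc32(chunk)}

        def read_chunk(stored):
            # Read side: recompute and compare before returning the data.
            if zlib.crc32(stored["data"]) != stored["crc"]:
                raise IOError("checksum mismatch: chunk is corrupt")
            return stored["data"]

        chunks = [bytes([i]) * 4096 for i in range(8)]   # one chunk per "client"
        with ThreadPoolExecutor() as pool:               # checksum in parallel
            stored = list(pool.map(write_chunk, chunks))
        assert b"".join(read_chunk(s) for s in stored) == b"".join(chunks)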

  1. 40 CFR 60.116b - Monitoring of operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... range. (e) Available data on the storage temperature may be used to determine the maximum true vapor...: (i) Available data on the Reid vapor pressure and the maximum expected storage temperature based on... Liquid Storage Vessels (Including Petroleum Liquid Storage Vessels) for Which Construction...

  2. A privacy-preserving solution for compressed storage and selective retrieval of genomic data.

    PubMed

    Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S; Molyneaux, Adam; Xu, Zhenyu; Fellay, Jacques; Steinmetz, Lars M; Hubaux, Jean-Pierre

    2016-12-01

    In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. © 2016 Huang et al.; Published by Cold Spring Harbor Laboratory Press.

  3. A privacy-preserving solution for compressed storage and selective retrieval of genomic data

    PubMed Central

    Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S.; Molyneaux, Adam; Xu, Zhenyu; Hubaux, Jean-Pierre

    2016-01-01

    In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients’ complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. PMID:27789525
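
    The property both records above emphasize, selective retrieval from compressed and encrypted data, can be illustrated with fixed-size position blocks that are compressed and encrypted independently: one block can be fetched and decrypted without touching the rest. This toy uses AES-CTR from the third-party 'cryptography' package and is not the actual SECRAM format.

        # Position-based selective retrieval: each block is compressed and
        # encrypted on its own, so a subregion query touches one block.
        import os
        import zlib
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        KEY = os.urandom(32)
        BLOCK = 1000  # genome positions per block (illustrative)

        def enc(i, blob):
            nonce = i.to_bytes(16, "big")  # per-block counter block
            c = Cipher(algorithms.AES(KEY), modes.CTR(nonce)).encryptor()
            return c.update(zlib.compress(blob)) + c.finalize()

        def dec(i, blob):
            nonce = i.to_bytes(16, "big")
            c = Cipher(algorithms.AES(KEY), modes.CTR(nonce)).decryptor()
            return zlib.decompress(c.update(blob) + c.finalize())

        sequence = b"ACGT" * 10_000
        blocks = [enc(i, sequence[p:p + BLOCK])
                  for i, p in enumerate(range(0, len(sequence), BLOCK))]

        # Random query of a subregion: decrypt only the covering block.
        pos = 12_345
        blk = pos // BLOCK
        assert dec(blk, blocks[blk])[pos % BLOCK] == sequence[pos]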

  4. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  5. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  6. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  7. 40 CFR 792.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....190 Storage and retrieval of records and data. (a) All raw data, documentation, records, protocols... 40 Protection of Environment 33 2012-07-01 2012-07-01 false Storage and retrieval of records and data. 792.190 Section 792.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  8. 40 CFR 792.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....190 Storage and retrieval of records and data. (a) All raw data, documentation, records, protocols... 40 Protection of Environment 32 2011-07-01 2011-07-01 false Storage and retrieval of records and data. 792.190 Section 792.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  9. 40 CFR 792.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....190 Storage and retrieval of records and data. (a) All raw data, documentation, records, protocols... 40 Protection of Environment 33 2013-07-01 2013-07-01 false Storage and retrieval of records and data. 792.190 Section 792.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  10. 40 CFR 792.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ....190 Storage and retrieval of records and data. (a) All raw data, documentation, records, protocols... 40 Protection of Environment 32 2014-07-01 2014-07-01 false Storage and retrieval of records and data. 792.190 Section 792.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  11. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  12. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  13. Storage and retrieval of medical images from data warehouses

    NASA Astrophysics Data System (ADS)

    Tikekar, Rahul V.; Fotouhi, Farshad A.; Ragan, Don P.

    1995-11-01

    As our applications continue to become more sophisticated, the demand for more storage continues to rise. Hence many businesses are looking toward data warehousing technology to satisfy their storage needs. A warehouse is different from a conventional database and hence deserves a different approach to storing data that might be retrieved at a later point in time. In this paper we look at the problem of storing and retrieving medical image data from a warehouse. We regard the warehouse as a pyramid with fast storage devices at the top and slower storage devices at the bottom. Our approach is to store the most-needed abstract information at the top of the pyramid and more detailed, storage-consuming data toward the bottom of the pyramid. This information is linked for browsing purposes. Similarly, during the retrieval of data, the user is given a sample representation with a browse option for the detailed data and, as required, more and more details are made available.

  14. A Study of Practical Proxy Reencryption with a Keyword Search Scheme considering Cloud Storage Structure

    PubMed Central

    Lee, Im-Yeong

    2014-01-01

    Data outsourcing services have emerged with the increasing use of digital information. They can be used to store data from various devices via networks that are easy to access. Unlike existing removable storage systems, storage outsourcing is available to many users because it has no storage limit and does not require a local storage medium. However, the reliability of storage outsourcing has become an important topic because many users employ it to store large volumes of data. To protect against unethical administrators and attackers, a variety of cryptography systems are used, such as searchable encryption and proxy reencryption. However, existing searchable encryption technology is inconvenient for use in storage outsourcing environments where users upload their data to be shared with others as necessary. In addition, some existing schemes are vulnerable to collusion attacks and have computing cost inefficiencies. In this paper, we analyze existing proxy re-encryption with keyword search. PMID:24693240

  15. A study of practical proxy reencryption with a keyword search scheme considering cloud storage structure.

    PubMed

    Lee, Sun-Ho; Lee, Im-Yeong

    2014-01-01

    Data outsourcing services have emerged with the increasing use of digital information. They can be used to store data from various devices via networks that are easy to access. Unlike existing removable storage systems, storage outsourcing is available to many users because it has no storage limit and does not require a local storage medium. However, the reliability of storage outsourcing has become an important topic because many users employ it to store large volumes of data. To protect against unethical administrators and attackers, a variety of cryptography systems are used, such as searchable encryption and proxy reencryption. However, existing searchable encryption technology is inconvenient for use in storage outsourcing environments where users upload their data to be shared with others as necessary. In addition, some existing schemes are vulnerable to collusion attacks and have computing cost inefficiencies. In this paper, we analyze existing proxy re-encryption with keyword search.
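
    A toy sketch of the searchable-encryption half of the schemes analyzed above: documents are indexed under keyed keyword tokens, so the storage server can match a search trapdoor without learning the keyword itself. This is far simpler than the proxy re-encryption schemes the papers discuss and omits re-encryption entirely.

        # Keyed keyword tokens: the server compares opaque tokens and never
        # sees plaintext keywords.
        import hashlib
        import hmac
        import os

        SEARCH_KEY = os.urandom(32)  # held by the data owner and readers

        def trapdoor(keyword):
            return hmac.new(SEARCH_KEY, keyword.encode(), hashlib.sha256).digest()

        server_index = {}  # what the (untrusted) server stores

        def upload(doc_id, keywords):
            for kw in keywords:
                server_index.setdefault(trapdoor(kw), set()).add(doc_id)

        upload("doc1", ["storage", "cloud"])
        upload("doc2", ["storage", "genomics"])

        # A reader holding the key searches; the server only ever sees the
        # opaque token.
        assert server_index.get(trapdoor("storage")) == {"doc1", "doc2"}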

  16. Device and methods for writing and erasing analog information in small memory units via voltage pulses

    DOEpatents

    El Gabaly Marquez, Farid; Talin, Albert Alec

    2018-04-17

    Devices and methods for non-volatile analog data storage are described herein. In an exemplary embodiment, an analog memory device comprises a potential-carrier source layer, a barrier layer deposited on the source layer, and at least two storage layers deposited on the barrier layer. The memory device can be prepared to write and read data via application of a biasing voltage between the source layer and the storage layers, wherein the biasing voltage causes potential-carriers to migrate into the storage layers. After initialization, data can be written to the memory device by application of a voltage pulse between two storage layers that causes potential-carriers to migrate from one storage layer to another. A difference in concentration of potential carriers caused by migration of potential-carriers between the storage layers results in a voltage that can be measured in order to read the written data.

  17. Optimising LAN access to grid enabled storage elements

    NASA Astrophysics Data System (ADS)

    Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.

    2008-07-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.

  18. Influence of technology on magnetic tape storage device characteristics

    NASA Technical Reports Server (NTRS)

    Gniewek, John J.; Vogel, Stephen M.

    1994-01-01

    There are available today many data storage devices that serve the diverse application requirements of the consumer, professional entertainment, and computer data processing industries. Storage technologies include semiconductors, several varieties of optical disk, optical tape, magnetic disk, and many varieties of magnetic tape. In some cases, devices are developed with specific characteristics to meet specification requirements. In other cases, an existing storage device is modified and adapted to a different application. For magnetic tape storage devices, examples of the former case are the 3480/3490 and QIC device types developed for the high-end and low-end segments of the data processing industry respectively; the VHS, Beta, and 8 mm formats developed for consumer video applications; and the D-1, D-2, and D-3 formats developed for professional video applications. Examples of modified and adapted devices include 4 mm, 8 mm, 12.7 mm, and 19 mm computer data storage devices derived from consumer and professional audio and video applications. With the conversion of the consumer and professional entertainment industries from analog to digital storage and signal processing, there have been increasing references to the 'convergence' of the computer data processing and entertainment industry technologies. There is, however, as yet no evidence of convergence of data storage device types. There are several reasons for this. The diversity of application requirements results in varying degrees of importance for each of the tape storage characteristics.

  19. Holographic memory for high-density data storage and high-speed pattern recognition

    NASA Astrophysics Data System (ADS)

    Gu, Claire

    2002-09-01

    As computers and the internet become faster and faster, more and more information is transmitted, received, and stored everyday. The demand for high density and fast access time data storage is pushing scientists and engineers to explore all possible approaches including magnetic, mechanical, optical, etc. Optical data storage has already demonstrated its potential in the competition against other storage technologies. CD and DVD are showing their advantages in the computer and entertainment market. What motivated the use of optical waves to store and access information is the same as the motivation for optical communication. Light or an optical wave has an enormous capacity (or bandwidth) to carry information because of its short wavelength and parallel nature. In optical storage, there are two types of mechanism, namely localized and holographic memories. What gives the holographic data storage an advantage over localized bit storage is the natural ability to read the stored information in parallel, therefore, meeting the demand for fast access. Another unique feature that makes the holographic data storage attractive is that it is capable of performing associative recall at an incomparable speed. Therefore, volume holographic memory is particularly suitable for high-density data storage and high-speed pattern recognition. In this paper, we review previous works on volume holographic memories and discuss the challenges for this technology to become a reality.

  20. Review of ultra-high density optical storage technologies for big data center

    NASA Astrophysics Data System (ADS)

    Hao, Ruan; Liu, Jie

    2016-10-01

    In big data centers, optical storage technologies have many advantages, such as energy saving and long lifetime. However, how to improve the storage density of optical storage is still a huge challenge. Multilayer optical storage technology may be a good candidate for big data centers in the years to come. Because the number of layers is primarily limited by the transmission of each layer, the largest capacities of multilayer discs are around 1 TB/disc and 10 TB/cartridge. Holographic data storage (HDS) is a volumetric approach, but its storage capacity is also strictly limited by the diffractive nature of light. For a holographic disc with a total thickness of 1.5 mm, the potential capacities are no more than 4 TB/disc and 40 TB/cartridge. In recent years, the development of super-resolution optical storage technology has attracted more attention. Super-resolution photoinduction-inhibition nanolithography (SPIN) technology, with 9 nm feature size and 52 nm two-line resolution, was reported 3 years ago. However, turning this exciting principle into a real storage system is a huge challenge. It can be expected that, in the future, capacities of 10 TB/disc and 100 TB/cartridge can be achieved. More importantly, by breaking the diffraction limit of light, SPIN technology will open the door to steadily improving optical storage capacity to meet the needs of developing big data centers.

  1. Trade-off study of data storage technologies

    NASA Technical Reports Server (NTRS)

    Kadyszewski, R. V.

    1977-01-01

    The need to store and retrieve large quantities of data at modest cost has generated the need for an economical, compact, archival mass storage system. Very significant improvements in the state of the art of mass storage systems have been accomplished through the development of a number of magnetic, electro-optical, and other related devices. This study was conducted to perform a trade-off analysis between these data storage devices and the related technologies in order to determine an optimum approach for an archival mass data storage system, based upon a comparison of the projected capabilities and characteristics of these devices to yield operational systems in the early 1980's.

  2. Towards the Interoperability of Web, Database, and Mass Storage Technologies for Petabyte Archives

    NASA Technical Reports Server (NTRS)

    Moore, Reagan; Marciano, Richard; Wan, Michael; Sherwin, Tom; Frost, Richard

    1996-01-01

    At the San Diego Supercomputer Center, a massive data analysis system (MDAS) is being developed to support data-intensive applications that manipulate terabyte-sized data sets. The objective is to support scientific application access to data whether it is located at a Web site, stored as an object in a database, and/or stored in an archival storage system. We are developing a suite of demonstration programs which illustrate how Web, database (DBMS), and archival storage (mass storage) technologies can be integrated. An application presentation interface is being designed that integrates data access to all of these sources. We have developed a data movement interface between the Illustra object-relational database and the NSL UniTree archival storage system running in a production mode at the San Diego Supercomputer Center. With this interface, an Illustra client can transparently access data on UniTree under the control of the Illustra DBMS server. The current implementation is based on the creation of a new DBMS storage manager class, and a set of library functions that allow the manipulation and migration of data stored as Illustra 'large objects'. We have extended this interface to allow a Web client application to control data movement between its local disk, the Web server, the DBMS Illustra server, and the UniTree mass storage environment. This paper describes some of the current approaches to successfully integrating these technologies. This framework is measured against a representative sample of environmental data extracted from the San Diego Bay Environmental Data Repository. Practical lessons are drawn and critical research areas are highlighted.

  3. Unbreakable distributed storage with quantum key distribution network and password-authenticated secret sharing

    PubMed Central

    Fujiwara, M.; Waseda, A.; Nojima, R.; Moriai, S.; Ogata, W.; Sasaki, M.

    2016-01-01

    Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers and channels to link them. In such a system, a secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored in each server. To reconstruct them, the data owner should gather plural pieces. Shamir’s (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k pieces of them must be gathered for reconstruction, furnishes information theoretic security, that is, even if attackers could collect shares of less than the threshold k, they cannot get any information about the data, even with unlimited computing power. Behind this scenario, however, it is assumed that data transmission and authentication are perfectly secure, which is not trivial in practice. Here we propose a totally information theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km). PMID:27363566

  4. Unbreakable distributed storage with quantum key distribution network and password-authenticated secret sharing.

    PubMed

    Fujiwara, M; Waseda, A; Nojima, R; Moriai, S; Ogata, W; Sasaki, M

    2016-07-01

    Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers and channels to link them. In such a system, a secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored in each server. To reconstruct them, the data owner should gather plural pieces. Shamir's (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k pieces of them must be gathered for reconstruction, furnishes information theoretic security, that is, even if attackers could collect shares of less than the threshold k, they cannot get any information about the data, even with unlimited computing power. Behind this scenario, however, it is assumed that data transmission and authentication are perfectly secure, which is not trivial in practice. Here we propose a totally information theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km).
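
    Shamir's (k, n)-threshold scheme described in the two records above admits a compact worked example: the secret is the constant term of a random degree-(k-1) polynomial over a prime field, and any k shares recover it by Lagrange interpolation, while fewer reveal nothing.

        # Shamir (k, n)-threshold secret sharing over a prime field.
        import random

        P = 2**127 - 1  # a Mersenne prime; the field must exceed the secret

        def split(secret, k, n):
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            def f(x):
                return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            # Lagrange interpolation evaluated at x = 0.
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, -1, P)) % P
            return secret

        shares = split(123456789, k=3, n=5)
        assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 suffice
        assert reconstruct(shares[1:4]) == 123456789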

  5. Goddard Conference on Mass Storage Systems and Technologies, volume 2

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor)

    1993-01-01

    Papers and viewgraphs from the conference are presented. Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low end helical scan tape drives. Additional discussion topics addressed the evolution of the identifiable unit for processing (file, granule, data set, or some similar object) as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

  6. Use of Schema on Read in Earth Science Data Archives

    NASA Technical Reports Server (NTRS)

    Hegde, Mahabaleshwara; Smit, Christine; Pilone, Paul; Petrenko, Maksym; Pham, Long

    2017-01-01

    Traditionally, NASA Earth Science data archives have file-based storage using proprietary data file formats, such as HDF and HDF-EOS, which are optimized to support fast and efficient storage of spaceborne and model data as they are generated. The use of file-based storage essentially imposes an indexing strategy based on data dimensions. In most cases, NASA Earth Science data uses time as the primary index, leading to poor performance in accessing data in spatial dimensions. For example, producing a time series for a single spatial grid cell involves accessing a large number of data files. With exponential growth in data volume due to the ever-increasing spatial and temporal resolution of the data, using file-based archives poses significant performance and cost barriers to data discovery and access. Storing and disseminating data in proprietary data formats imposes an additional access barrier for users outside the mainstream research community. At the NASA Goddard Earth Sciences Data Information Services Center (GES DISC), we have evaluated applying the schema-on-read principle to data access and distribution. We used Apache Parquet to store geospatial data, and have exposed data through Amazon Web Services (AWS) Athena, AWS Simple Storage Service (S3), and Apache Spark. Using the schema-on-read approach allows customization of indexing spatially or temporally to suit the data access pattern. The storage of data in open formats such as Apache Parquet has widespread support in popular programming languages. A wide range of solutions for handling big data lowers the access barrier for all users. This presentation will discuss formats used for data storage, frameworks with support for schema-on-read used for data access, and common use cases covering data usage patterns seen in a geospatial data archive.
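
    A small demonstration of the schema-on-read pattern this record describes: once the data sit in a columnar open format, the query rather than the file layout determines the access pattern, so a single grid cell's time series no longer requires opening many time-indexed files. The example uses the third-party pyarrow package; the variable names and values are illustrative.

        # Columnar storage plus schema-on-read querying with pyarrow.
        import pyarrow as pa
        import pyarrow.compute as pc
        import pyarrow.parquet as pq

        table = pa.table({
            "time": ["2017-01-01", "2017-01-02", "2017-01-01", "2017-01-02"],
            "lat":  [10.0, 10.0, 20.0, 20.0],
            "lon":  [45.0, 45.0, 45.0, 45.0],
            "precip_mm": [1.2, 0.0, 3.4, 0.7],
        })
        pq.write_table(table, "/tmp/precip.parquet")

        # The query, not the file layout, decides the access pattern --
        # here the time series of the single cell (lat=10, lon=45).
        t = pq.read_table("/tmp/precip.parquet")
        mask = pc.and_(pc.equal(t["lat"], 10.0), pc.equal(t["lon"], 45.0))
        print(t.filter(mask).to_pydict())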

  7. Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations: Data used in Geosphere Journal Article

    DOE Data Explorer

    Thomas A. Buscheck

    2015-06-01

    This data submission is for Phase 2 of Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations, which focuses on multi-fluid (CO2 and brine) geothermal energy production and diurnal bulk energy storage in geologic settings that are suitable for geologic CO2 storage. This data submission includes all data used in the Geosphere Journal article by Buscheck et al. (2016). All assumptions are discussed in that article.

  8. Balloon-borne video cassette recorders for digital data storage

    NASA Technical Reports Server (NTRS)

    Althouse, W. E.; Cook, W. R.

    1985-01-01

    A high speed, high capacity digital data storage system was developed for a new balloon-borne gamma-ray telescope. The system incorporates economical consumer products: the portable video cassette recorder (VCR) and a relatively newer item - the digital audio processor. The in-flight recording system employs eight VCRs and will provide a continuous data storage rate of 1.4 megabits/sec throughout a 40 hour balloon flight. Data storage capacity is 25 gigabytes and power consumption is only 10 watts.

  9. Distributed trace using central performance counter memory

    DOEpatents

    Satterfield, David L; Sexton, James C

    2013-10-22

    A plurality of processing cores and a central storage unit having at least one memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores the trace data in the memory co-located with the central storage unit.

  10. Distributed trace using central performance counter memory

    DOEpatents

    Satterfield, David L.; Sexton, James C.

    2013-01-22

    A plurality of processing cores and a central storage unit having at least one memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores the trace data in the memory co-located with the central storage unit.

  11. ICI optical data storage tape: An archival mass storage media

    NASA Technical Reports Server (NTRS)

    Ruddick, Andrew J.

    1993-01-01

    At the 1991 Conference on Mass Storage Systems and Technologies, ICI Imagedata presented a paper which introduced ICI Optical Data Storage Tape. That paper placed specific emphasis on the media characteristics, and initial data was presented which illustrated the archival stability of the media. The more exhaustive analysis carried out on the chemical stability of the media is covered here. Equally important, this paper also addresses archive management issues associated with, for example, the benefits of reduced rewind requirements to accommodate tape relaxation effects, which result from careful tribology control in ICI Optical Tape media. ICI Optical Tape media was designed to meet the most demanding requirements of archival mass storage. It is envisaged that the volumetric data capacity, long-term stability, and low maintenance characteristics demonstrated will have major benefits in increasing reliability and reducing the costs associated with archival storage of large data volumes.

  12. Surface-Enhanced Raman Optical Data Storage system

    DOEpatents

    Vo-Dinh, T.

    1991-03-12

    A method and apparatus for a Surface-Enhanced Raman Optical Data Storage (SERODS) System are disclosed. A medium which exhibits the Surface-Enhanced Raman Scattering (SERS) phenomenon has data written onto its surface or microenvironment by means of a write-on procedure which disturbs the surface or microenvironment of the medium and results in the medium having a changed SERS emission when excited. The write-on procedure is controlled by a signal that corresponds to the data to be stored, so that the disturbed regions on the storage device (e.g., disk) represent the data. After the data is written onto the storage device, it is read by exciting the surface of the storage device with an appropriate radiation source and detecting changes in the SERS emission to produce a detection signal. The data is then reproduced from the detection signal. 5 figures.

  13. Surface-enhanced raman optical data storage system

    DOEpatents

    Vo-Dinh, Tuan

    1991-01-01

    A method and apparatus for a Surface-Enhanced Raman Optical Data Storage (SERODS) System are disclosed. A medium which exhibits the Surface-Enhanced Raman Scattering (SERS) phenomenon has data written onto its surface or microenvironment by means of a write-on procedure which disturbs the surface or microenvironment of the medium and results in the medium having a changed SERS emission when excited. The write-on procedure is controlled by a signal that corresponds to the data to be stored, so that the disturbed regions on the storage device (e.g., disk) represent the data. After the data is written onto the storage device, it is read by exciting the surface of the storage device with an appropriate radiation source and detecting changes in the SERS emission to produce a detection signal. The data is then reproduced from the detection signal.

  14. Research Data Storage: A Framework for Success. ECAR Working Group Paper

    ERIC Educational Resources Information Center

    Blair, Douglas; Dawson, Barbara E.; Fary, Michael; Hillegas, Curtis W.; Hopkins, Brian W.; Lyons, Yolanda; McCullough, Heather; McMullen, Donald F.; Owen, Kim; Ratliff, Mark; Williams, Harry

    2014-01-01

    The EDUCAUSE Center for Analysis and Research Data Management Working Group (ECAR-DM) has created a framework for research data storage as an aid for higher education institutions establishing and evaluating their institution's research data storage efforts. This paper describes areas for consideration and suggests graduated criteria to assist in…

  15. 76 FR 2707 - In the Matter of Certain Data Storage Products and Components Thereof; Notice of Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-14

    ... complaint filed by Data Network Storage, LLC of Newport Beach, California (``DNS''). 75 FR 71736 (Nov. 24... States after importation of certain data storage products and components thereof by reason of... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-748] In the Matter of Certain Data...

  16. Emerging Network Storage Management Standards for Intelligent Data Storage Subsystems

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    This paper discusses the need for intelligent storage devices and subsystems that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the techniques and metadata for verifying stored data on optical tapes developed by the Association for Information and Image Management (AIIM) Optical Tape Committee.

  17. 21 CFR 801.435 - User labeling for latex condoms.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... this section. If the data from tests following real time storage described in paragraph (d)(3) of this... data based upon real time storage and testing and have such storage and testing data available for... products are formed from latex films. (b) Data show that the material integrity of latex condoms degrade...

  18. 21 CFR 801.435 - User labeling for latex condoms.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... this section. If the data from tests following real time storage described in paragraph (d)(3) of this... data based upon real time storage and testing and have such storage and testing data available for... products are formed from latex films. (b) Data show that the material integrity of latex condoms degrade...

  19. 21 CFR 801.435 - User labeling for latex condoms.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... this section. If the data from tests following real time storage described in paragraph (d)(3) of this... data based upon real time storage and testing and have such storage and testing data available for... products are formed from latex films. (b) Data show that the material integrity of latex condoms degrade...

  20. 21 CFR 801.435 - User labeling for latex condoms.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... this section. If the data from tests following real time storage described in paragraph (d)(3) of this... data based upon real time storage and testing and have such storage and testing data available for... products are formed from latex films. (b) Data show that the material integrity of latex condoms degrade...

  1. Digital super-resolution holographic data storage based on Hermitian symmetry for achieving high areal density.

    PubMed

    Nobukawa, Teruyoshi; Nomura, Takanori

    2017-01-23

    Digital super-resolution holographic data storage based on Hermitian symmetry is proposed to store digital data in a tiny area of a medium. In general, reducing the recording area with an aperture improves the storage capacity of holographic data storage. Conventional holographic data storage systems, however, are limited in how far the recording area can be reduced; this limit is called the Nyquist size. Unlike conventional systems, our proposed system can overcome this limitation with the help of a digital holographic technique and digital signal processing. Experimental results show that the proposed system can record and retrieve a hologram in an area smaller than the Nyquist size on the basis of Hermitian symmetry.
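
    The Hermitian-symmetry idea at the heart of this record can be sketched numerically: for a real-valued data page, the Fourier spectrum satisfies F(-k) = conj(F(k)), so only half of the spectrum needs to be recorded and the missing half can be rebuilt in digital signal processing. A minimal one-dimensional numpy illustration (not the authors' optical implementation):

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.integers(0, 2, 64).astype(float)  # real-valued data page (1-D for brevity)
        spectrum = np.fft.fft(data)

        # Keep only half of the spectrum plus the DC and Nyquist bins, as if the
        # recording area were reduced below the Nyquist size.
        half = spectrum[: len(spectrum) // 2 + 1]

        # Rebuild the discarded half from Hermitian symmetry: F(-k) = conj(F(k)).
        rebuilt = np.concatenate([half, np.conj(half[-2:0:-1])])
        recovered = np.fft.ifft(rebuilt).real
        assert np.allclose(recovered, data)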

  2. Linear phase encoding for holographic data storage with a single phase-only spatial light modulator.

    PubMed

    Nobukawa, Teruyoshi; Nomura, Takanori

    2016-04-01

    A linear phase encoding is presented for realizing a compact and simple holographic data storage system with a single spatial light modulator (SLM). This encoding method makes it possible to modulate a complex amplitude distribution with a single phase-only SLM in a holographic storage system. In addition, undesired light due to the imperfections of an SLM can be removed by spatial frequency filtering with a Nyquist aperture. The linear phase encoding is introduced into coaxial holographic data storage. The generation of a signal beam using linear phase encoding is experimentally verified in an interferometer. In a coaxial holographic data storage system, single data recording, shift selectivity, and shift-multiplexed recording are experimentally demonstrated.

  3. Storage quality-of-service in cloud-based scientific environments: a standardization approach

    NASA Astrophysics Data System (ADS)

    Millar, Paul; Fuhrmann, Patrick; Hardt, Marcus; Ertl, Benjamin; Brzezniak, Maciej

    2017-10-01

    When preparing the Data Management Plan for larger scientific endeavors, PIs have to balance the most appropriate quality of storage space over the planned data life-cycle against its price and the available funding. Storage properties can include the media type, which implicitly determines access latency and the durability of stored data; the number and locality of replicas; and the available access protocols or authentication mechanisms. Negotiations between the scientific community and the responsible infrastructures generally happen upfront, when the amount of storage space, the media types (e.g., disk, tape, and SSD), and the foreseeable data life-cycles are negotiated. With the introduction of cloud management platforms, in both computing and storage, resources can be brokered to achieve the best price per unit of a given quality. However, in order to allow the platform orchestrator to programmatically negotiate the most appropriate resources, a standard vocabulary for the different properties of resources, and a commonly agreed protocol to communicate them, have to be available. In order to agree on a basic vocabulary for storage space properties, the storage infrastructure group in INDIGO-DataCloud, together with INDIGO-associated and external scientific groups, created a working group under the umbrella of the Research Data Alliance (RDA). As the communication protocol to query and negotiate storage qualities, the Cloud Data Management Interface (CDMI) has been selected. Necessary extensions to CDMI are defined in regular meetings between INDIGO and the Storage Networking Industry Association (SNIA). Furthermore, INDIGO is contributing to the SNIA CDMI reference implementation as the basis for interfacing the various storage systems in INDIGO to the agreed protocol and to provide an official open-source skeleton for systems not maintained by INDIGO partners.
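
    As a sketch of what such programmatic negotiation might look like, the snippet below queries a CDMI capabilities object over HTTP with Python's requests library; the endpoint URL is hypothetical, and the headers follow the published CDMI 1.1 conventions rather than any INDIGO-specific extension.

        import requests

        CDMI_URL = "https://storage.example.org/cdmi_capabilities/"  # placeholder endpoint

        response = requests.get(
            CDMI_URL,
            headers={
                "X-CDMI-Specification-Version": "1.1",
                "Accept": "application/cdmi-capability",
            },
        )
        # A CDMI capabilities object is JSON; inspect whatever storage-quality
        # properties (latency, durability, ...) the endpoint advertises.
        for name, value in response.json().get("capabilities", {}).items():
            print(name, value)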

  4. The Analysis of RDF Semantic Data Storage Optimization in Large Data Era

    NASA Astrophysics Data System (ADS)

    He, Dandan; Wang, Lijuan; Wang, Can

    2018-03-01

    With the continuous development of information technology and network technology in China, the Internet has ushered in the era of big data. To acquire information effectively in this era, it is necessary to optimize existing RDF semantic data storage and enable efficient queries over the various kinds of data. This paper discusses the storage optimization of RDF semantic data in the big data era.
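
    To make the subject matter concrete, the following minimal example stores a few RDF triples and runs a SPARQL query over them with the Python rdflib library; the namespace and triples are invented for illustration.

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.sensor1, EX.recordedAt, Literal("2018-03-01")))
        g.add((EX.sensor1, EX.value, Literal(42)))

        # Query the stored triples; real storage optimization concerns how such
        # triples are laid out and indexed at much larger scale.
        for row in g.query(
            "SELECT ?p ?o WHERE { <http://example.org/sensor1> ?p ?o }"
        ):
            print(row.p, row.o)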

  5. National assessment of geologic carbon dioxide storage resources: data

    USGS Publications Warehouse

    2013-01-01

    In 2012, the U.S. Geological Survey (USGS) completed the national assessment of geologic carbon dioxide storage resources. Its data and results are reported in three publications: the assessment data publication (this report), the assessment results publication (U.S. Geological Survey Geologic Carbon Dioxide Storage Resources Assessment Team, 2013a, USGS Circular 1386), and the assessment summary publication (U.S. Geological Survey Geologic Carbon Dioxide Storage Resources Assessment Team, 2013b, USGS Fact Sheet 2013–3020). This data publication supports the results publication and contains (1) individual storage assessment unit (SAU) input data forms with all input parameters and details on the allocation of the SAU surface land area by State and general land-ownership category; (2) figures representing the distribution of all storage classes for each SAU; (3) a table containing most input data and assessment result values for each SAU; and (4) a pairwise correlation matrix specifying geological and methodological dependencies between SAUs that are needed for aggregation of results.

  6. Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    1998-01-01

    This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies, held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center, March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, and vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.

  7. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on integrated-model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems that can describe aerospace general embedded software from multiple angles, at multiple levels, and across multiple stages. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model is the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.
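
    A minimal sketch of the persist/restore round trip described above, using Python's pickle as a stand-in binary-stream format (the abstract does not specify the actual MDDT storage model):

        import pickle

        class ModelNode:
            """Toy stand-in for an element of the in-memory object model."""
            def __init__(self, name, children=None):
                self.name = name
                self.children = children or []

        root = ModelNode("system", [ModelNode("module-a"), ModelNode("module-b")])

        blob = pickle.dumps(root)        # data model -> storage model (binary stream)
        restored = pickle.loads(blob)    # storage model -> data model in memory
        assert restored.name == "system" and len(restored.children) == 2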

  8. Low-cost high performance distributed data storage for multi-channel observations

    NASA Astrophysics Data System (ADS)

    Liu, Ying-bo; Wang, Feng; Deng, Hui; Ji, Kai-fan; Dai, Wei; Wei, Shou-lin; Liang, Bo; Zhang, Xiao-li

    2015-10-01

    The New Vacuum Solar Telescope (NVST) is a 1-m solar telescope that aims to observe the fine structures in both the photosphere and the chromosphere of the Sun. The observational data acquired simultaneously from one channel for the chromosphere and two channels for the photosphere bring great challenges to the data storage of NVST. The multi-channel instruments of NVST, including scientific cameras and multi-band spectrometers, generate at least 3 terabytes of data per day and require high access performance while storing massive short-exposure images. It is worth studying and implementing a storage system for NVST that balances data availability, access performance and the cost of development. In this paper, we build a distributed data storage system (DDSS) for NVST and then deeply evaluate the availability of real-time data storage on a distributed computing environment. The experimental results show that two factors, i.e., the number of concurrent reads/writes and the file size, are critically important for improving the performance of data access in a distributed environment. Referring to these two factors, three strategies for storing FITS files are presented and implemented to ensure the access performance of the DDSS under conditions of simultaneous multi-host writes and reads. Real applications of the DDSS prove that the system is capable of meeting the requirements of NVST real-time high-performance observational data storage. Our study on the DDSS is the first attempt for modern astronomical telescope systems to store real-time observational data on a low-cost distributed system. The research results and corresponding techniques of the DDSS provide a new option for designing real-time massive astronomical data storage systems and will be a reference for future astronomical data storage.
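
    One strategy of the kind the record alludes to, packing many short-exposure frames into fewer, larger files so that the file-size factor works in the system's favor, can be sketched with astropy; the file name and frame dimensions below are illustrative, not NVST's actual parameters.

        import numpy as np
        from astropy.io import fits

        # Pack a burst of short-exposure frames into one multi-extension FITS file.
        frames = [np.zeros((512, 512), dtype=np.int16) for _ in range(100)]
        hdus = [fits.PrimaryHDU()] + [fits.ImageHDU(frame) for frame in frames]
        fits.HDUList(hdus).writeto("packed_burst.fits", overwrite=True)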

  9. 43 CFR 3138.11 - How do I apply for a subsurface storage agreement?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... participation factor for all parties to the subsurface storage agreement; and (11) Supporting data (geologic maps showing the storage formation, reservoir data, etc.) demonstrating the capability of the reservoir... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false How do I apply for a subsurface storage...

  10. 43 CFR 3138.11 - How do I apply for a subsurface storage agreement?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... participation factor for all parties to the subsurface storage agreement; and (11) Supporting data (geologic maps showing the storage formation, reservoir data, etc.) demonstrating the capability of the reservoir... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false How do I apply for a subsurface storage...

  11. 43 CFR 3138.11 - How do I apply for a subsurface storage agreement?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... participation factor for all parties to the subsurface storage agreement; and (11) Supporting data (geologic maps showing the storage formation, reservoir data, etc.) demonstrating the capability of the reservoir... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false How do I apply for a subsurface storage...

  12. 43 CFR 3138.11 - How do I apply for a subsurface storage agreement?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... participation factor for all parties to the subsurface storage agreement; and (11) Supporting data (geologic maps showing the storage formation, reservoir data, etc.) demonstrating the capability of the reservoir... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false How do I apply for a subsurface storage...

  13. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Specimen and data storage facilities. 160.51 Section 160.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space...

  14. Optical memory system technology. Citations from the International Aerospace Abstracts data base

    NASA Technical Reports Server (NTRS)

    Zollars, G. F.

    1980-01-01

    Approximately 213 citations from the international literature concerning the development of optical data storage system technology are presented. Topics covered include holographic computer storage devices; crystal, magneto-, and electro-optics; and imaging techniques, in addition to optical data processing and storage.

  15. iSDS: a self-configurable software-defined storage system for enterprise

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen

    2018-01-01

    Storage is one of the most important aspects of IT infrastructure for enterprises. But enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost, because it is usually designed and built with customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises demand storage that can be deployed more flexibly and at lower cost. Moreover, the rise of new application fields, such as social media, big data and video streaming services, makes operational tasks more complex for administrators. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of workloads running on it. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.

  16. Goddard Conference on Mass Storage Systems and Technologies, Volume 1

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor)

    1993-01-01

    Copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in Sep. 1992 are included. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems (data ingestion rates now approach the order of terabytes per day). Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low end helical scan tape drives. Additional topics addressed the evolution of the identifiable unit for processing purposes as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

  17. 48 CFR 2452.204-70 - Preservation of, and access to, contract records (tangible and electronically stored information...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... or data storage). ESI devices and media include, but are not limited to: (1) Computers (mainframe...) Personal data assistants (PDAs); (5) External data storage devices including portable devices (e.g., flash drive); and (6) Data storage media (magnetic, e.g., tape; optical, e.g., compact disc, microfilm, etc...

  18. 48 CFR 2452.204-70 - Preservation of, and access to, contract records (tangible and electronically stored information...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... or data storage). ESI devices and media include, but are not limited to: (1) Computers (mainframe...) Personal data assistants (PDAs); (5) External data storage devices including portable devices (e.g., flash drive); and (6) Data storage media (magnetic, e.g., tape; optical, e.g., compact disc, microfilm, etc...

  19. Depth enhancement of ion sensitized data

    DOEpatents

    Lamartine, Bruce C.

    2001-01-01

    A process of fabricating a durable data storage medium is disclosed, the durable data storage medium being capable of storing digital or alphanumeric characters as well as graphical shapes or characters. Additionally, a durable data storage medium including a substrate having characters etched therein is disclosed, the substrate characterized as containing detectable residual amounts of the ions used in the preparation process.

  20. Preservation Environments

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    2004-01-01

    The long-term preservation of digital entities requires mechanisms to manage the authenticity of massive data collections that are written to archival storage systems. Preservation environments impose authenticity constraints and manage the evolution of storage system technology by building infrastructure-independent solutions. This seeming paradox (the need for large archives while avoiding dependence upon vendor-specific solutions) is resolved through the use of data grid technology. Data grids provide the storage repository abstractions that make it possible to migrate collections between vendor-specific products, while ensuring the authenticity of the archived data. Data grids provide the software infrastructure that interfaces vendor-specific storage archives to preservation environments.

  1. Damsel: A Data Model Storage Library for Exascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koziol, Quincey

    The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.

  2. Data on subsurface storage of liquid waste near Pensacola, Florida, 1963-1980

    USGS Publications Warehouse

    Hull, R.W.; Martin, J.B.

    1982-01-01

    Since 1963, when industrial waste was first injected into the subsurface in northwest Florida, considerable data have been collected relating to the geochemistry of subsurface waste storage. This report presents hydrogeologic data on two subsurface storage systems near Pensacola, Fla., which inject liquid industrial waste through deep wells into a saline aquifer. Injection sites are described, giving a history of well construction, injection, and testing; geologic data from cores and grab samples; hydrographs of injection rates, volume, pressure, and water levels; and chemical and physical data from water-quality samples collected from injection and monitor wells. (USGS)

  3. Multilevel recording of complex amplitude data pages in a holographic data storage system using digital holography.

    PubMed

    Nobukawa, Teruyoshi; Nomura, Takanori

    2016-09-05

    A holographic data storage system using digital holography is proposed to record and retrieve multilevel complex amplitude data pages. Digital holographic techniques are capable of modulating and detecting complex amplitude distributions using current electronic devices. These techniques allow the development of a simple, compact, and stable holographic storage system that consists mainly of a single phase-only spatial light modulator and an image sensor. As a proof-of-principle experiment, complex amplitude data pages with binary amplitude and four-level phase are recorded and retrieved. Experimental results show the feasibility of the proposed holographic data storage system.

  4. Surface-Enhanced Raman Optical Data Storage system

    DOEpatents

    Vo-Dinh, T.

    1994-06-28

    An improved Surface-Enhanced Raman Optical Data Storage System (SERODS) is disclosed. In the improved system, entities capable of existing in multiple reversible states are present on the storage device. Such entities produce changed Surface-Enhanced Raman Scattering (SERS) when localized state changes are effected in less than all of the entities. Therefore, by changing the state of entities in localized regions of a storage device, the SERS emissions in such regions will be changed. When a write-on device is controlled by a data signal, such localized regions of changed SERS emissions will correspond to the data written on the device. The data may be read by illuminating the surface of the storage device with electromagnetic radiation of an appropriate frequency and detecting the corresponding SERS emissions. Data may be deleted by reversing the state changes of entities in regions where the data was initially written. In application, the entities may be individual molecules, which allows data to be written at the molecular level. A read/write/delete head utilizing near-field quantum techniques can provide a write/read/delete device capable of effecting state changes in individual molecules, thus providing for the effective storage of data at the molecular level. 18 figures.

  5. Surface-enhanced raman optical data storage system

    DOEpatents

    Vo-Dinh, Tuan

    1994-01-01

    An improved Surface-Enhanced Raman Optical Data Storage System (SERODS) is disclosed. In the improved system, entities capable of existing in multiple reversible states are present on the storage device. Such entities produce changed Surface-Enhanced Raman Scattering (SERS) when localized state changes are effected in less than all of the entities. Therefore, by changing the state of entities in localized regions of a storage device, the SERS emissions in such regions will be changed. When a write-on device is controlled by a data signal, such localized regions of changed SERS emissions will correspond to the data written on the device. The data may be read by illuminating the surface of the storage device with electromagnetic radiation of an appropriate frequency and detecting the corresponding SERS emissions. Data may be deleted by reversing the state changes of entities in regions where the data was initially written. In application, the entities may be individual molecules, which allows data to be written at the molecular level. A read/write/delete head utilizing near-field quantum techniques can provide a write/read/delete device capable of effecting state changes in individual molecules, thus providing for the effective storage of data at the molecular level.

  6. Incorporating Oracle on-line space management with long-term archival technology

    NASA Technical Reports Server (NTRS)

    Moran, Steven M.; Zak, Victor J.

    1996-01-01

    The storage requirements of today's organizations are exploding. As computers continue to escalate in processing power, applications grow in complexity and data files grow in size and in number. As a result, organizations are forced to procure more and more megabytes of storage space. This paper focuses on how to expand the storage capacity of a Very Large Database (VLDB) cost-effectively within an Oracle7 data warehouse system by integrating long-term archival storage subsystems with traditional magnetic media. The Oracle architecture described in this paper was based on an actual proof of concept for a customer looking to store archived data on optical disks yet still have access to this data without user intervention. The customer had a requirement to maintain 10 years' worth of data on-line. Data less than a year old still had the potential to be updated and thus resides on conventional magnetic disks. Data older than a year is considered archived and is placed on optical disks. The ability to archive data to optical disk and still have access to that data gives the system a means to retain large amounts of readily accessible data while significantly reducing the cost of total system storage. Therefore, the cost benefits of archival storage devices can be incorporated into the Oracle storage medium and I/O subsystem without losing any of the functionality of transaction processing, while at the same time providing an organization access to all of its data.

  7. Use of HSM with Relational Databases

    NASA Technical Reports Server (NTRS)

    Breeden, Randall; Burgess, John; Higdon, Dan

    1996-01-01

    Hierarchical storage management (HSM) systems have evolved to become a critical component of large information storage operations. They are built on the concept of using a hierarchy of storage technologies to provide a balance of performance and cost. In general, they migrate data from expensive high-performance storage to inexpensive low-performance storage based on frequency of use. The predominant usage characteristic is that frequency of use declines with age, in most cases quite rapidly. The result is that HSM provides an economical means for managing and storing massive volumes of data. Inherent in HSM systems is system-managed storage, where the system performs most of the work with minimal involvement from operations personnel. This automation is generally extended to include backup and recovery, data duplexing to provide high availability, and catastrophic recovery through use of off-site storage.
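
    A toy version of the age-based migration policy described above might look like the following; the tier mount points and age threshold are assumptions, and a real HSM would also handle recall, duplexing, and backup.

        import os
        import shutil
        import time

        FAST_TIER = "/mnt/fast"      # hypothetical disk cache
        SLOW_TIER = "/mnt/archive"   # hypothetical tape/archive mount
        AGE_LIMIT = 90 * 86400       # migrate files untouched for ~90 days

        def migrate_cold_files(now=None):
            """Move least-recently-used files down the hierarchy, as an HSM would."""
            now = now or time.time()
            for name in os.listdir(FAST_TIER):
                path = os.path.join(FAST_TIER, name)
                if os.path.isfile(path) and now - os.path.getatime(path) > AGE_LIMIT:
                    shutil.move(path, os.path.join(SLOW_TIER, name))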

  8. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network-attached storage devices improve I/O performance by separating the control and data paths and eliminating host intervention during the data transfer phase. Devices are attached both to a high-speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network-attached storage devices as opposed to host-attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network-attached devices.
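
    The abstract does not reproduce the authors' queuing network, but the standard building block of such models is easy to state: for a single M/M/1 service center, the mean response time is R = S / (1 - U), where S is the mean service time and U = arrival_rate * S is the utilization. A small Python helper with invented numbers:

        def mm1_response_time(service_time, arrival_rate):
            """Mean response time of an M/M/1 queue: R = S / (1 - U), U = arrival_rate * S."""
            utilization = arrival_rate * service_time
            if utilization >= 1.0:
                raise ValueError("queue is unstable (utilization >= 1)")
            return service_time / (1.0 - utilization)

        # e.g. a tape-mount stage with an 8 s mean service time and one request every 20 s
        print(mm1_response_time(8.0, 1 / 20))  # -> ~13.3 s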

  9. ENERGY STAR Certified Data Center Storage

    EPA Pesticide Factsheets

    Certified models meet all ENERGY STAR requirements as listed in the Version 1.0 ENERGY STAR Program Requirements for Data Center Storage that are effective as of December 2, 2013. A detailed listing of key efficiency criteria is available at http://www.energystar.gov/certified-products/detail/data_center_storage

  10. Storage Optimization of Educational System Data

    ERIC Educational Resources Information Center

    Boja, Catalin

    2006-01-01

    Methods used to minimize the size of data files are described, and indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the stated problem objective: maximization or minimization of the optimum criterion that is…

  11. A new tape product for optical data storage

    NASA Technical Reports Server (NTRS)

    Larsen, T. L.; Woodard, F. E.; Pace, S. J.

    1993-01-01

    A new tape product has been developed for optical data storage. Laser data recording is based on hole or pit formation in a low melting metallic alloy system. The media structure, sputter deposition process, and media characteristics, including write sensitivity, error rates, wear resistance, and archival storage are discussed.

  12. 40 CFR 160.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... retained. (b) There shall be archives for orderly storage and expedient retrieval of all raw data... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Storage and retrieval of records and data. 160.190 Section 160.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  13. 76 FR 40749 - Agency Information Collection Activities: Records and Supporting Data: Importation, Receipt...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-11

    ...] Agency Information Collection Activities: Records and Supporting Data: Importation, Receipt, Storage, and... collection. (2) Title of the Form/Collection: Records and Supporting Data: Importation, Receipt, Storage, and... importation, manufacture, receipt, storage, and disposition of all explosive materials covered under 18 U.S.C...

  14. 40 CFR 160.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... retained. (b) There shall be archives for orderly storage and expedient retrieval of all raw data... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Storage and retrieval of records and data. 160.190 Section 160.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  15. 40 CFR 160.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... retained. (b) There shall be archives for orderly storage and expedient retrieval of all raw data... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Storage and retrieval of records and data. 160.190 Section 160.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  16. 49 CFR 242.205 - Identification of certified persons and recordkeeping.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... adequate to ensure the integrity of the electronic data storage system, including the prevention of unauthorized access to the program logic or the list; (2) The program and data storage system must be protected... system employed by the railroad for data storage permits reasonable access and retrieval of the...

  17. 40 CFR 160.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... retained. (b) There shall be archives for orderly storage and expedient retrieval of all raw data... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Storage and retrieval of records and data. 160.190 Section 160.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  18. 49 CFR 242.205 - Identification of certified persons and recordkeeping.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... adequate to ensure the integrity of the electronic data storage system, including the prevention of unauthorized access to the program logic or the list; (2) The program and data storage system must be protected... system employed by the railroad for data storage permits reasonable access and retrieval of the...

  19. 49 CFR 242.205 - Identification of certified persons and recordkeeping.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... adequate to ensure the integrity of the electronic data storage system, including the prevention of unauthorized access to the program logic or the list; (2) The program and data storage system must be protected... system employed by the railroad for data storage permits reasonable access and retrieval of the...

  20. Balloon-borne video cassette recorders for digital data storage

    NASA Technical Reports Server (NTRS)

    Althouse, W. E.; Cook, W. R.

    1985-01-01

    A high-speed, high-capacity digital data storage system has been developed for a new balloon-borne gamma-ray telescope. The system incorporates sophisticated yet easy-to-use and economical consumer products: the portable video cassette recorder (VCR) and a newer item, the digital audio processor. The in-flight recording system employs eight VCRs and will provide a continuous data storage rate of 1.4 megabits/sec throughout a 40-hour balloon flight. Data storage capacity is 25 gigabytes and power consumption is only 10 watts.
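
    The quoted figures are mutually consistent, as a quick back-of-the-envelope check shows:

        rate_bits = 1.4e6            # 1.4 megabits per second
        seconds = 40 * 3600          # 40-hour balloon flight
        gigabytes = rate_bits * seconds / 8 / 1e9
        print(round(gigabytes, 1))   # -> 25.2, matching the stated 25 gigabyte capacity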

  1. Optical Data Storage Capabilities of Bacteriorhodopsin

    NASA Technical Reports Server (NTRS)

    Gary, Charles

    1998-01-01

    We present several measurements of the data storage capability of bacteriorhodopsin films to help establish the baseline performance of this material as a medium for holographic data storage. In particular, we examine the decrease in diffraction efficiency with the density of holograms stored at one location in the film, and we also analyze the recording schedule needed to produce a set of equal-intensity holograms at a single location in the film. Using this information along with assumptions about the performance of the optical system, we can estimate potential data storage densities in bacteriorhodopsin.

  2. Carbon storage in forests and peatlands of Russia

    Treesearch

    V.A. Alexeyev; R.A. Birdsey; [Editors]

    1998-01-01

    Contains information about carbon storage in the vegetation, soils, and peatlands of Russia. Estimates of carbon storage in forests are derived from statistical data from the 1988 national forest inventory of Russia and from other sources. Methods are presented for converting data on timber stock into phytomass of tree stands, and for estimating carbon storage in...

  3. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval. The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree keeps track of all files in the system, automatically migrates lesser-used files to archive media, and stages the files when needed by the system. The user can access files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys significantly boosts system I/O performance and reduces the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with signal and image processing requirements, long-term data archiving and distribution, and image analysis and enhancement).

  4. Data Access Based on a Guide Map of the Underwater Wireless Sensor Network

    PubMed Central

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Cheng, Albert M. K.

    2017-01-01

    Underwater wireless sensor networks (UWSNs) represent an area of increasing research interest, as data storage, discovery, and query in UWSNs are always challenging issues. In this paper, a data access based on a guide map (DAGM) method is proposed for UWSNs. In DAGM, the metadata describe the abstracts of the data content and the storage location. The center ring is composed of the nodes with the shortest average data query path in the network and stores the metadata, and the data guide map organizes, diffuses, and synchronizes the metadata in the center ring, providing the most time-saving and energy-efficient data query service for the user. In this method, the data is first stored in the UWSN: the storage node is determined, the data is transmitted from the sensor node (the data-generation source) to the storage node, and metadata is generated for it. Then, the metadata is sent to the center ring node nearest to the storage node, and the data guide map organizes the metadata, diffusing and synchronizing it to the other center ring nodes. Finally, when any user node issues a query, the data guide map selects the center ring node nearest to the user to process the query sentence and, based on the shortest transmission delay and lowest energy consumption, generates the data transmission routing according to the storage location abstract in the metadata. Hence, application data transmission from the storage node to the user is completed. The simulation results demonstrate that DAGM has advantages with respect to data access time and network energy consumption. PMID:29039757

  5. Data Access Based on a Guide Map of the Underwater Wireless Sensor Network.

    PubMed

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Song, Houbing; Wang, Hongbin; Ma, Xuefei; Cheng, Albert M K

    2017-10-17

    Underwater wireless sensor networks (UWSNs) represent an area of increasing research interest, as data storage, discovery, and query in UWSNs are always challenging issues. In this paper, a data access based on a guide map (DAGM) method is proposed for UWSNs. In DAGM, the metadata describe the abstracts of the data content and the storage location. The center ring is composed of the nodes with the shortest average data query path in the network and stores the metadata, and the data guide map organizes, diffuses, and synchronizes the metadata in the center ring, providing the most time-saving and energy-efficient data query service for the user. In this method, the data is first stored in the UWSN: the storage node is determined, the data is transmitted from the sensor node (the data-generation source) to the storage node, and metadata is generated for it. Then, the metadata is sent to the center ring node nearest to the storage node, and the data guide map organizes the metadata, diffusing and synchronizing it to the other center ring nodes. Finally, when any user node issues a query, the data guide map selects the center ring node nearest to the user to process the query sentence and, based on the shortest transmission delay and lowest energy consumption, generates the data transmission routing according to the storage location abstract in the metadata. Hence, application data transmission from the storage node to the user is completed. The simulation results demonstrate that DAGM has advantages with respect to data access time and network energy consumption.
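
    A toy rendering of the query path described in both versions of this record, picking the center-ring node nearest the user and resolving the storage node from its metadata; the coordinates, node names, and metadata record are all invented.

        import math

        CENTER_RING = {"r1": (0, 0), "r2": (50, 10), "r3": (25, 40)}
        METADATA = {"salinity-2017": {"storage_node": "s7"}}  # synchronized on every ring node

        def nearest_ring_node(user_pos):
            """Pick the center-ring node closest to the querying user."""
            return min(CENTER_RING, key=lambda n: math.dist(user_pos, CENTER_RING[n]))

        def query(user_pos, key):
            ring = nearest_ring_node(user_pos)          # this ring node processes the query
            return ring, METADATA[key]["storage_node"]  # routing then targets the storage node

        print(query((45, 15), "salinity-2017"))  # -> ('r2', 's7')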

  6. Demonstration of fully enabled data center subsystem with embedded optical interconnect

    NASA Astrophysics Data System (ADS)

    Pitwon, Richard; Worrall, Alex; Stevens, Paul; Miller, Allen; Wang, Kai; Schmidtke, Katharine

    2014-03-01

    The evolution of data storage communication protocols and corresponding in-system bandwidth densities is set to impose prohibitive cost and performance constraints on future data storage system designs, fuelling proposals for hybrid electronic and optical architectures in data centers. The migration of optical interconnect into the system enclosure itself can substantially mitigate the communications bottlenecks resulting from both the increase in data rate and internal interconnect link lengths. In order to assess the viability of embedding optical links within prevailing data storage architectures, we present the design and assembly of a fully operational data storage array platform, in which all internal high speed links have been implemented optically. This required the deployment of mid-board optical transceivers, an electro-optical midplane and proprietary pluggable optical connectors for storage devices. We present the design of a high density optical layout to accommodate the midplane interconnect requirements of a data storage enclosure with support for 24 Small Form Factor (SFF) solid state or rotating disk drives and the design of a proprietary optical connector and interface cards, enabling standard drives to be plugged into an electro-optical midplane. Crucially, we have also modified the platform to accommodate longer optical interconnect lengths up to 50 meters in order to investigate future datacenter architectures based on disaggregation of modular subsystems. The optically enabled data storage system has been fully validated for both 6 Gb/s and 12 Gb/s SAS data traffic conveyed along internal optical links.

  7. Ultra-high density optical data storage in common transparent plastics.

    PubMed

    Kallepalli, Deepak L N; Alshehri, Ali M; Marquez, Daniela T; Andrzejewski, Lukasz; Scaiano, Juan C; Bhardwaj, Ravi

    2016-05-25

    The ever-increasing demand for high data storage capacity has spurred research on the development of innovative technologies and new storage materials. Conventional GByte optical discs (DVDs and Blu-ray) can be transformed into ultrahigh-capacity storage media by encoding multi-level and multiplexed information within the three-dimensional volume of a recording medium. However, in most cases the recording medium had to be photosensitive, requiring doping with photochromic molecules or nanoparticles in a multilayer stack or in the bulk material. Here, we show high-density data storage in commonly available plastics without any special material preparation. A pulsed laser was used to record data in micron-sized modified regions. Upon excitation by the read laser, each modified region emits fluorescence whose intensity represents one of 32 grey levels, corresponding to 5 bits. We demonstrate up to 20 layers of embedded data. By adjusting the read laser power and detector sensitivity, storage capacities up to 0.2 TBytes can be achieved in a standard 120 mm disc.
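
    The 32-grey-level scheme mentioned above carries log2(32) = 5 bits per recorded mark; a trivial encoder/decoder pair makes the mapping explicit (the normalized intensity scale is an assumption):

        def to_grey(value):
            """Map a 5-bit symbol (0-31) to one of 32 normalized fluorescence levels."""
            assert 0 <= value < 32
            return value / 31.0

        def from_grey(intensity):
            """Quantize a measured intensity back to the nearest 5-bit symbol."""
            return round(intensity * 31)

        assert all(from_grey(to_grey(v)) == v for v in range(32))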

  8. GRACE, GLDAS and measured groundwater data products show water storage loss in Western Jilin, China.

    PubMed

    Moiwo, Juana Paul; Lu, Wenxi; Tao, Fulu

    2012-01-01

    Water storage depletion is a worsening hydrological problem that limits agricultural production, especially in arid/semi-arid regions across the globe. Quantifying water storage dynamics is critical for developing water resources management strategies that are sustainable and protective of the environment. This study uses GRACE (Gravity Recovery and Climate Experiment), GLDAS (Global Land Data Assimilation System) and measured groundwater data products to quantify water storage in Western Jilin (a proxy for semi-arid wetland ecosystems) for the period from January 2002 to December 2009. Uncertainty/bias analysis shows that the data products have an average error <10% (p < 0.05). Comparisons of the storage variables show favorable agreements at various temporal cycles, with R² = 0.92 and RMSE = 7.43 mm at the average seasonal cycle. There is a narrowing soil moisture storage change, a widening groundwater storage loss, and an overall storage depletion of 0.85 mm/month in the region. There is possible soil-pore collapse and land subsidence due to storage depletion in the study area. Invariably, storage depletion in this semi-arid region could have negative implications for agriculture, valuable/fragile wetland ecosystems and people's livelihoods. For sustainable restoration and preservation of wetland ecosystems in the region, it is critical to develop water resources management strategies that limit the groundwater extraction rate to the recharge rate.

  9. The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity

    NASA Astrophysics Data System (ADS)

    Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo

    2015-05-01

    The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department over the last 5 years has resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, like Real Application Cluster (RAC), Automatic Storage Manager (ASM) and Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state of the art of the INFN-CNAF Tier1 Storage department infrastructures and software services, and to give a brief outlook of forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.

  10. RAID Disk Arrays for High Bandwidth Applications

    NASA Technical Reports Server (NTRS)

    Moren, Bill

    1996-01-01

    High bandwidth applications require large amounts of data transferred to/from storage devices at extremely high data rates. Further, these applications are often 'real time', in which access to the storage device must take place on the schedule of the data source, not the storage device. A good example is a satellite downlink: the volume of data is quite large and the data rates quite high (dozens of MB/sec). Further, a telemetry downlink must take place while the satellite is overhead. A storage technology which is ideally suited to these types of applications is the Redundant Array of Independent Disks (RAID). RAID storage technology, while offering differing methodologies for a variety of applications, supports the performance and redundancy required in real-time applications. Of the various RAID levels, RAID-3 is the only one which provides high data transfer rates under all operating conditions, including after a drive failure.
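
    The drive-failure behavior claimed for RAID-3 rests on byte-wise XOR parity: the dedicated parity block is the XOR of the data blocks, so any single lost block is the XOR of the survivors and the parity. A minimal sketch:

        from functools import reduce

        def parity(blocks):
            """RAID-3 style parity: byte-wise XOR across equal-sized blocks."""
            return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

        data = [b"ABCD", b"EFGH", b"IJKL"]
        p = parity(data)

        # After losing one drive, its block is the XOR of the survivors and the parity.
        rebuilt = parity([data[0], data[2], p])
        assert rebuilt == data[1]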

  11. Eternal 5D data storage by ultrafast laser writing in glass

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Čerkauskaitė, A.; Drevinskas, R.; Patel, A.; Beresna, M.; Kazansky, P. G.

    2016-03-01

    Securely storing large amounts of information over relatively short timescales of 100 years, comparable to the span of human memory, is a challenging problem. Conventional optical data storage technology used in CDs and DVDs has reached capacities of hundreds of gigabits per square inch, but its lifetime is limited to a decade. DNA-based data storage can hold hundreds of terabytes per gram, but its durability is limited. The major challenge is the lack of an appropriate combination of storage technology and medium possessing the advantages of both high capacity and long lifetime. The recording and retrieval of digital data with a nearly unlimited lifetime was implemented by femtosecond laser nanostructuring of fused quartz. The storage allows unprecedented properties, including hundreds of terabytes per disc of data capacity, thermal stability up to 1000 °C, and a virtually unlimited lifetime at room temperature, opening a new era of eternal data archiving.

  12. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the computing nodes of cluster resources and improves the efficiency of parallel data applications. This paper uses mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. It verifies the rationality, reliability, and superiority of the system design by testing the storage efficiency for different image data and multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images, through building an actual Hadoop service system.
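
    As a hedged sketch of the MapReduce style of processing the record describes (not the authors' actual pipeline), a Hadoop Streaming mapper could bucket incoming scene records by tile so that reducers assemble storage blocks and pyramid levels; the input format and tile size are invented.

        #!/usr/bin/env python3
        # Hypothetical Hadoop Streaming mapper: each input line is "scene_id x y path".
        import sys

        TILE = 4096  # assumed tile size in pixels

        for line in sys.stdin:
            scene_id, x, y, path = line.split()
            key = f"{int(x) // TILE}_{int(y) // TILE}"   # tile that this scene falls in
            print(f"{key}\t{scene_id}:{path}")           # reducers group scenes per tile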

  13. Analyzing the Impact of Storage Shortage on Data Availability in Decentralized Online Social Networks

    PubMed Central

    He, Ligang; Liao, Xiangke; Huang, Chenlin

    2014-01-01

    Maintaining data availability is one of the biggest challenges in decentralized online social networks (DOSNs). Existing work often assumes that the friends of a user can always contribute sufficient storage capacity to store all data. However, this assumption is not always true in today's online social networks (OSNs), because users now often access OSNs from smart mobile devices, and the limited storage capacity of mobile devices may jeopardize data availability. It is therefore desirable to know the relation between the storage capacity contributed by the OSN users and the level of data availability that the OSNs can achieve. This paper addresses this issue: a model of data availability over storage capacity is established, and a novel method is proposed to predict data availability on the fly. Extensive simulation experiments have been conducted to evaluate the effectiveness of the data availability model and the on-the-fly prediction. PMID:24892095

  14. Analyzing the impact of storage shortage on data availability in decentralized online social networks.

    PubMed

    Fu, Songling; He, Ligang; Liao, Xiangke; Li, Kenli; Huang, Chenlin

    2014-01-01

    Maintaining data availability is one of the biggest challenges in decentralized online social networks (DOSNs). Existing work often assumes that the friends of a user can always contribute sufficient storage capacity to store all data. However, this assumption is not always true in today's online social networks (OSNs), because users now often access OSNs from smart mobile devices, and the limited storage capacity of mobile devices may jeopardize data availability. It is therefore desirable to know the relation between the storage capacity contributed by the OSN users and the level of data availability that the OSNs can achieve. This paper addresses this issue: a model of data availability over storage capacity is established, and a novel method is proposed to predict data availability on the fly. Extensive simulation experiments have been conducted to evaluate the effectiveness of the data availability model and the on-the-fly prediction.
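
    Neither abstract spells out the availability model, but the qualitative relation both papers study can be illustrated with the textbook independent-replica approximation, in which availability grows as 1 - (1 - p)^r for r replicas, each online with probability p:

        def data_availability(per_replica_online, replicas):
            """P(at least one replica reachable), assuming independent friend nodes."""
            return 1 - (1 - per_replica_online) ** replicas

        # With mobile friends online 30% of the time, storage for 5 replicas gives ~83%.
        print(round(data_availability(0.3, 5), 3))  # -> 0.832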

  15. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    NASA Astrophysics Data System (ADS)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving China's first soft landing on the Moon. The Miyun satellite ground station first used a SAN storage network system based on Stornext sharing software in the Chang'E-3 mission, and the system's performance fully meets the data storage requirements of the Miyun ground station. The Stornext file system is a high-performance shared file system that supports multiple servers running different operating systems accessing the file system at the same time, and supports access to data over a variety of topologies, such as SAN and LAN. Stornext focuses on data protection and big data management. Quantum has announced that it has sold more than 70,000 Stornext file system licenses worldwide, and its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 satellite downlink data and the management of local data storage. The station mainly handles exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control of the data receiving equipment. The ground station applied the Stornext-based SAN storage network system to receive and manage data reliably. The computer system at the Miyun ground station is composed of operational servers, application workstations and storage equipment, so the storage system needs a shared file system that supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s; thus the required network throughput of the file system is at least 240 MB/s. At the same time, the maximum size of a single data file is up to 810 GB. As integrated, the sharing system can provide 1020 MB/s aggregate write speed. When the master storage server fails, the backup storage server takes over and continues normal service; client reads and writes are not affected, and the switchover time is less than 5 s. The design and the integrated storage system meet the users' requirements. Nevertheless, an all-fiber approach is expensive in a SAN, and SCSI hard disk transfer rates may still be the bottleneck of the entire storage system. Stornext can provide users with efficient sharing, management and automatic archiving of large numbers of files, together with hardware solutions, and it occupies a leading position in big data management. Still, Stornext has drawbacks: first, the software is expensive and licensed per site, so when the network scale is large the purchase cost is very high; second, its parameters place high demands on the skills of technical staff, and when a problem occurs it is difficult to diagnose.

  16. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    NASA Astrophysics Data System (ADS)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream due to stream hydraulic conditions and the characteristics of the storage zones. Analytical solutions of transient storage models in the literature have not covered spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously transforms the concentrations in the stream and in the storage zones using a single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations of the model. The derived semi-analytical solution is validated against field data from the literature, and good agreement between the computed and field data is obtained. Illustrative examples demonstrate applications of the present solution. It is shown that solute transport can be greatly affected by variations of the mass exchange coefficient and of the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended for calibrating the parameters.
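
    For reference, the classical single-zone transient storage model that the multiple-zone formulation generalizes can be written as follows (a standard textbook form; the paper's multi-zone, spatially non-uniform equations are not reproduced in this abstract):

      \begin{align}
      \frac{\partial C}{\partial t} &= -\frac{Q}{A}\frac{\partial C}{\partial x}
        + \frac{1}{A}\frac{\partial}{\partial x}\!\left(A D \frac{\partial C}{\partial x}\right)
        + \alpha\,(C_s - C), \\
      \frac{\partial C_s}{\partial t} &= \alpha\,\frac{A}{A_s}\,(C - C_s),
      \end{align}

    where C and C_s are the solute concentrations in the stream and in the storage zone, Q is the discharge, A and A_s are the cross-sectional areas of the stream and the storage zone, D is the dispersion coefficient, and α is the mass exchange coefficient. Spatially non-uniform storage corresponds to letting α and A_s vary with x, which is what defeats the classical closed-form solutions.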

  17. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  18. 17 CFR 232.501 - Modular submissions and segmented filings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) One or more electronic format documents may be submitted for storage in the non-public EDGAR data... data storage area at any time, not to exceed a total of one megabyte of digital information. If an...-public EDGAR data storage area for assembly as a segmented filing. (2) Segments shall be submitted no...

  19. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  20. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  1. 17 CFR 232.501 - Modular submissions and segmented filings.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...) One or more electronic format documents may be submitted for storage in the non-public EDGAR data... data storage area at any time, not to exceed a total of one megabyte of digital information. If an...-public EDGAR data storage area for assembly as a segmented filing. (2) Segments shall be submitted no...

  2. 17 CFR 232.501 - Modular submissions and segmented filings.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) One or more electronic format documents may be submitted for storage in the non-public EDGAR data... data storage area at any time, not to exceed a total of one megabyte of digital information. If an...-public EDGAR data storage area for assembly as a segmented filing. (2) Segments shall be submitted no...

  3. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  4. 17 CFR 232.501 - Modular submissions and segmented filings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...) One or more electronic format documents may be submitted for storage in the non-public EDGAR data... data storage area at any time, not to exceed a total of one megabyte of digital information. If an...-public EDGAR data storage area for assembly as a segmented filing. (2) Segments shall be submitted no...

  5. 47 CFR 73.1840 - Retention of logs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... suits upon such claims. (b) Logs may be retained on microfilm, microfiche or other data-storage systems... of logs, stored on data-storage systems, to full-size copies, is required of licensees if requested... converting to a data-storage system pursuant to the requirements of § 73.1800 (c) and (d), (§ 73.1800...

  6. 17 CFR 232.501 - Modular submissions and segmented filings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...) One or more electronic format documents may be submitted for storage in the non-public EDGAR data... data storage area at any time, not to exceed a total of one megabyte of digital information. If an...-public EDGAR data storage area for assembly as a segmented filing. (2) Segments shall be submitted no...

  7. Overview of Probe-based Storage Technologies

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Yang, Ci Hui; Wen, Jing; Gong, Si Di; Peng, Yuan Xiu

    2016-07-01

    The world is now in the age of big data, in which the total amount of global digital data is growing at an incredible rate. This necessitates a drastic enhancement of the capacity of conventional data storage devices, which, however, suffer from their respective physical drawbacks. Under these circumstances, it is essential to aggressively explore and develop promising alternative mass storage devices, which has led to probe-based storage devices. In this paper, the physical principles and current status of several different probe storage devices, including thermo-mechanical probe memory, magnetic probe memory, ferroelectric probe memory, and phase-change probe memory, are reviewed in detail, together with their respective merits and weaknesses. This paper provides an overview of the emerging probe memories as potential next-generation storage devices, so as to motivate the exploration of more innovative technologies to push forward the development of probe storage devices.

  8. Overview of Probe-based Storage Technologies.

    PubMed

    Wang, Lei; Yang, Ci Hui; Wen, Jing; Gong, Si Di; Peng, Yuan Xiu

    2016-12-01

    The world is now in the age of big data, in which the total amount of global digital data is growing at an incredible rate. This necessitates a drastic enhancement of the capacity of conventional data storage devices, which, however, suffer from their respective physical drawbacks. Under these circumstances, it is essential to aggressively explore and develop promising alternative mass storage devices, which has led to probe-based storage devices. In this paper, the physical principles and current status of several different probe storage devices, including thermo-mechanical probe memory, magnetic probe memory, ferroelectric probe memory, and phase-change probe memory, are reviewed in detail, together with their respective merits and weaknesses. This paper provides an overview of the emerging probe memories as potential next-generation storage devices, so as to motivate the exploration of more innovative technologies to push forward the development of probe storage devices.

  9. A Hybrid Multilevel Storage Architecture for Electric Power Dispatching Big Data

    NASA Astrophysics Data System (ADS)

    Yan, Hu; Huang, Bibin; Hong, Bowen; Hu, Jing

    2017-10-01

    Electric power dispatching is the center of the whole power system. Over its long run time, the power dispatching center has accumulated a large amount of data. These data are now stored in different professional power systems and form many isolated islands of information. Integrating these data and performing comprehensive analysis can greatly improve the intelligence level of power dispatching. In this paper, a hybrid multilevel storage architecture for electric power dispatching big data is proposed. It introduces a relational database and a NoSQL database to establish a power grid panoramic data center, effectively meeting the storage needs of power dispatching big data, including unified storage of structured and unstructured data, fast access to massive real-time data, data version management, and so on. It can be a solid foundation for follow-up in-depth analysis of power dispatching big data.
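
    A minimal sketch of the hybrid idea, with sqlite3 standing in for the relational database and an in-memory document log standing in for the NoSQL store (the class, table, and field names are illustrative, not the paper's implementation):

      import sqlite3, json

      class HybridStore:
          """Route structured measurements to a relational table and
          unstructured payloads to a document collection."""
          def __init__(self):
              self.sql = sqlite3.connect(":memory:")
              self.sql.execute("CREATE TABLE measurements (ts TEXT, feeder TEXT, mw REAL)")
              self.docs = []  # stand-in for a NoSQL collection

          def put_structured(self, ts, feeder, mw):
              self.sql.execute("INSERT INTO measurements VALUES (?, ?, ?)", (ts, feeder, mw))

          def put_unstructured(self, doc):
              self.docs.append(json.dumps(doc))

      store = HybridStore()
      store.put_structured("2017-10-01T00:00", "feeder-12", 41.7)
      store.put_unstructured({"type": "operator-note", "text": "switching planned"})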

  10. A Secure and Efficient Audit Mechanism for Dynamic Shared Data in Cloud Storage

    PubMed Central

    2014-01-01

    With the popularization of cloud services, multiple users easily share and update their data through cloud storage. To ensure data integrity and consistency in cloud storage, audit mechanisms have been proposed. However, existing approaches have security vulnerabilities and incur substantial computational overhead. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove resistance against several attacks and show lower computation cost and shorter auditing time compared with conventional approaches. The results show that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data. PMID:24959630

  11. An ASIC memory buffer controller for a high speed disk system

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Campbell, Steve

    1993-01-01

    The need for large capacity, high speed mass memory storage devices has become increasingly evident at NASA during the past decade. High performance mass storage systems are crucial to present and future NASA systems. Spaceborne data storage system requirements have grown in response to the increasing amounts of data generated and processed by orbiting scientific experiments. Predictions indicate increases in the volume of data by orders of magnitude during the next decade. Current predictions are for storage capacities on the order of terabits (Tb), with data rates exceeding one gigabit per second (Gbps). As part of the design effort for a state of the art mass storage system, NASA Langley has designed a 144 CMOS ASIC to support high speed data transfers. This paper discusses the system architecture, ASIC design and some of the lessons learned in the development process.

  12. A secure and efficient audit mechanism for dynamic shared data in cloud storage.

    PubMed

    Kwon, Ohmin; Koo, Dongyoung; Shin, Yongjoo; Yoon, Hyunsoo

    2014-01-01

    With the popularization of cloud services, multiple users easily share and update their data through cloud storage. To ensure data integrity and consistency in cloud storage, audit mechanisms have been proposed. However, existing approaches have security vulnerabilities and incur substantial computational overhead. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove resistance against several attacks and show lower computation cost and shorter auditing time compared with conventional approaches. The results show that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data.

  13. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ log-structured file techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
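
    A minimal sketch of the client-side idea, with zlib standing in for the compressor and a local append-only file standing in for the shared object on the storage node (names and layout are illustrative, not the patented implementation):

      import zlib

      def write_chunk(shared_object_path, chunk: bytes):
          # Client side: compress the chunk before shipping it to the
          # storage node; a log-structured layout appends a length
          # header followed by the compressed payload.
          compressed = zlib.compress(chunk)
          with open(shared_object_path, "ab") as f:
              f.write(len(compressed).to_bytes(8, "little"))
              f.write(compressed)

      def read_chunks(shared_object_path):
          # Client side: read back and decompress each appended chunk.
          with open(shared_object_path, "rb") as f:
              while header := f.read(8):
                  n = int.from_bytes(header, "little")
                  yield zlib.decompress(f.read(n))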

  14. Embedded optical interconnect technology in data storage systems

    NASA Astrophysics Data System (ADS)

    Pitwon, Richard C. A.; Hopkins, Ken; Milward, Dave; Muggeridge, Malcolm

    2010-05-01

    As both data storage interconnect speeds increase and form factors in hard disk drive technologies continue to shrink, the density of printed channels on the storage array midplane goes up. The dominant interconnect protocol on storage array midplanes is expected to increase to 12 Gb/s by 2012 thereby exacerbating the performance bottleneck in future digital data storage systems. The design challenges inherent to modern data storage systems are discussed and an embedded optical infrastructure proposed to mitigate this bottleneck. The proposed solution is based on the deployment of an electro-optical printed circuit board and active interconnect technology. The connection architecture adopted would allow for electronic line cards with active optical edge connectors to be plugged into and unplugged from a passive electro-optical midplane with embedded polymeric waveguides. A demonstration platform has been developed to assess the viability of embedded electro-optical midplane technology in dense data storage systems and successfully demonstrated at 10.3 Gb/s. Active connectors incorporate optical transceiver interfaces operating at 850 nm and are connected in an in-plane coupling configuration to the embedded waveguides in the midplane. In addition a novel method of passively aligning and assembling passive optical devices to embedded polymer waveguide arrays has also been demonstrated.

  15. Optical data storage and metallization of polymers

    NASA Technical Reports Server (NTRS)

    Roland, C. M.; Sonnenschein, M. F.

    1991-01-01

    The utilization of polymers as media for optical data storage offers many potential benefits and consequently has been widely explored. New developments in thermal imaging are described, wherein high resolution lithography is accomplished without thermal smearing. The emphasis was on the use of poly(ethylene terephthalate) film, which simultaneously serves as both the substrate and the data storage medium. Both physical and chemical changes can be induced by the application of heat and, thereby, serve as a mechanism for high resolution optical data storage in polymers. The extension of the technique to obtain high resolution selective metallization of poly(ethylene terephthalate) is also described.

  16. New Trends of Digital Data Storage in DNA

    PubMed Central

    2016-01-01

    With the exponential growth in the amount of information generated and the emerging need for data to be stored for prolonged periods of time, a storage medium is needed that offers high capacity, high storage density, and the ability to withstand extreme environmental conditions. DNA emerges as a prospective medium for data storage with its striking features. Diverse encoding models for reading and writing data onto DNA, codes for encrypting data that address issues of error generation, and approaches for developing codons and storage styles have been developed over the recent past. DNA has been identified as a potential medium for secret writing, which paves the way towards DNA cryptography and steganography. DNA utilized as an organic memory device, along with big data storage and analytics in DNA, has paved the way towards DNA computing for solving computational problems. This paper critically analyzes the various methods used for encoding and encrypting data onto DNA, identifying the advantages of each scheme and its capability to overcome previously identified drawbacks. Cryptography and steganography techniques are analyzed critically, identifying the limitations of each method. The paper also identifies the advantages and limitations of DNA as a memory device and its memory applications. PMID:27689089

  17. New Trends of Digital Data Storage in DNA.

    PubMed

    De Silva, Pavani Yashodha; Ganegoda, Gamage Upeksha

    With the exponential growth in the amount of information generated and the emerging need for data to be stored for prolonged periods of time, a storage medium is needed that offers high capacity, high storage density, and the ability to withstand extreme environmental conditions. DNA emerges as a prospective medium for data storage with its striking features. Diverse encoding models for reading and writing data onto DNA, codes for encrypting data that address issues of error generation, and approaches for developing codons and storage styles have been developed over the recent past. DNA has been identified as a potential medium for secret writing, which paves the way towards DNA cryptography and steganography. DNA utilized as an organic memory device, along with big data storage and analytics in DNA, has paved the way towards DNA computing for solving computational problems. This paper critically analyzes the various methods used for encoding and encrypting data onto DNA, identifying the advantages of each scheme and its capability to overcome previously identified drawbacks. Cryptography and steganography techniques are analyzed critically, identifying the limitations of each method. The paper also identifies the advantages and limitations of DNA as a memory device and its memory applications.
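
    A minimal sketch of the simplest encoding model mentioned above, mapping each pair of bits to one nucleotide (this 2-bit mapping is a common textbook scheme, not necessarily the one used in any specific paper reviewed):

      # Map binary data to a nucleotide sequence: 2 bits per base.
      TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
      FROM_BASE = {v: k for k, v in TO_BASE.items()}

      def encode(data: bytes) -> str:
          bits = "".join(f"{byte:08b}" for byte in data)
          return "".join(TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

      def decode(strand: str) -> bytes:
          bits = "".join(FROM_BASE[base] for base in strand)
          return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

      assert decode(encode(b"DNA storage")) == b"DNA storage"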

  18. Cost-effective data storage/archival subsystem for functional PACS

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.; Kim, Yongmin

    1993-09-01

    Not the least of the requirements of a workable PACS is the ability to store and archive vast amounts of information. A medium-size hospital will generate between 1 and 2 TBytes of data annually on a fully functional PACS. A high-speed image transmission network coupled with a comparably high-speed central data storage unit can make local memory and magnetic disks in the PACS workstations less critical and, in an extreme case, unnecessary. Under these circumstances, the capacity and performance of the central data storage subsystem and database are critical in determining the response time at the workstations, and thus significantly affect clinical acceptability. The central data storage subsystem not only needs to provide sufficient capacity to store about ten days' worth of images (five days' worth of new studies and, on average, about one comparison study for each new study), but must also supply images to the requesting workstation in a timely fashion. The database must respond quickly to users' requests for images. This paper analyzes the advantages and disadvantages of multiple parallel-transfer disks versus RAID disks for the short-term central data storage subsystem, as well as an optical disk jukebox versus a digital tape subsystem for the long-term archive. Furthermore, an example of a high-performance, cost-effective storage subsystem that integrates RAID disks and a high-speed digital tape subsystem as a PACS data storage/archival unit is presented.

  19. Earth Science Data Grid System

    NASA Astrophysics Data System (ADS)

    Chi, Y.; Yang, R.; Kafatos, M.

    2004-05-01

    The Earth Science Data Grid System (ESDGS) is a software system in support of earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, which provides users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), which manages the metadata associated with data sets, users, and resources. We have also developed earth science application metadata; geospatial, temporal, and content-based indexing; and other tools. In this paper, we describe the software architecture and components of the data grid system and use a practical example, supporting the storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM), to illustrate its functionality and features.

  20. Damsel: A Data Model Storage Library for Exascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Liao, Wei-keng

    Computational science applications have been described as having one of seven motifs (the "seven dwarfs"), each having a particular pattern of computation and communication. From a storage and I/O perspective, these applications can also be grouped into a number of data model motifs describing the way data is organized and accessed during simulation, analysis, and visualization. Major storage data models developed in the 1990s, such as the Network Common Data Format (netCDF) and Hierarchical Data Format (HDF) projects, created support for more complex data models. Development of both netCDF and HDF5 was influenced by multi-dimensional dataset storage requirements, but their access models and formats were designed with sequential storage in mind (e.g., a POSIX I/O model). Although these and other high-level I/O libraries have had a beneficial impact on large parallel applications, they do not always attain a high percentage of peak I/O performance due to fundamental design limitations, and they do not address the full range of current and future computational science data models. The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. The project consists of three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community. The product of this project, the Damsel library, is openly available for download from http://cucis.ece.northwestern.edu/projects/DAMSEL. Several case studies and an application programming interface reference are also available to help new users learn the library.

  1. Global SWOT Data Assimilation of River Hydrodynamic Model; the Twin Simulation Test of CaMa-Flood

    NASA Astrophysics Data System (ADS)

    Ikeshima, D.; Yamazaki, D.; Kanae, S.

    2016-12-01

    CaMa-Flood is a global-scale model for simulating hydrodynamics in large rivers. It simulates river hydrodynamics such as river discharge, flooded area, and water depth from the runoff input of a land surface model. Many improvements to its parameters and terrestrial data are under way to enhance its reproduction of natural phenomena, but errors remain between nature and the simulated results due to uncertainties in each model. SWOT (Surface Water and Ocean Topography), a satellite to be launched in 2021, will measure open-water surface elevation. SWOT observations can be used to calibrate hydrodynamic models for river flow forecasting and are expected to improve model accuracy. Combining observations with a model in this way is called data assimilation. In this research, we developed a data-assimilated river flow simulation system at global scale, using CaMa-Flood as the river hydrodynamics model and simulated SWOT data as the observations. In general, data assimilation combines a "model value" with an "observation value" to produce an "assimilated value". However, SWOT data will not be available until the satellite's launch in 2021, so we simulated the SWOT observations with CaMa-Flood itself: feeding a "pure input" into CaMa-Flood produces a "true water storage", and extracting the actual daily SWOT swath from this "true water storage" yields the simulated observations. For the "model value", we produced a "disturbed water storage" by feeding a noise-disturbed input to CaMa-Flood. Since both the "model value" and the "observation value" are made by the same model, we call this a twin simulation. In the twin simulation, the simulated observations of the "true water storage" are combined with the "disturbed water storage" to produce the "assimilated value". As the data assimilation method we used the ensemble Kalman filter. If the "assimilated value" is closer to the "true water storage" than the "disturbed water storage" is, the assimilation can be judged effective; by varying the disturbance of the input, the acceptable level of input uncertainty can also be examined.
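
    A minimal sketch of the ensemble Kalman filter analysis step used in such twin experiments (a generic textbook perturbed-observations form with illustrative dimensions, not the CaMa-Flood implementation):

      import numpy as np

      def enkf_update(ensemble, obs, obs_noise_std, H):
          """One EnKF analysis step.
          ensemble: (n_members, n_state) forecast states
          obs:      (n_obs,) observation vector
          H:        (n_obs, n_state) observation operator
          """
          n, _ = ensemble.shape
          X = ensemble - ensemble.mean(axis=0)          # state anomalies
          Y = X @ H.T                                   # observed-space anomalies
          R = obs_noise_std**2 * np.eye(len(obs))
          # Kalman gain from sample covariances: K = P H^T (H P H^T + R)^-1.
          K = X.T @ Y @ np.linalg.inv(Y.T @ Y + (n - 1) * R)
          # Perturbed-observations update of each ensemble member.
          perturbed = obs + obs_noise_std * np.random.randn(n, len(obs))
          return ensemble + (perturbed - ensemble @ H.T) @ K.T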

  2. Digital Photograph Security: What Plastic Surgeons Need to Know.

    PubMed

    Thomas, Virginia A; Rugeley, Patricia B; Lau, Frank H

    2015-11-01

    Sharing and storing digital patient photographs occur daily in plastic surgery. Two major risks associated with the practice, data theft and Health Insurance Portability and Accountability Act (HIPAA) violations, have been dramatically amplified by high-speed data connections and digital camera ubiquity. The authors review what plastic surgeons need to know to mitigate those risks and provide recommendations for implementing an ideal, HIPAA-compliant solution for plastic surgeons' digital photography needs: smartphones and cloud storage. Through informal discussions with plastic surgeons, the authors identified the most common photograph sharing and storage methods. For each method, a literature search was performed to identify the risks of data theft and HIPAA violations. HIPAA violation risks were confirmed by the second author (P.B.R.), a compliance liaison and privacy officer. A comprehensive review of HIPAA-compliant cloud storage services was performed. When possible, informal interviews with cloud storage services representatives were conducted. The most common sharing and storage methods are not HIPAA compliant, and several are prone to data theft. The authors' review of cloud storage services identified six HIPAA-compliant vendors that have strong to excellent security protocols and policies. These options are reasonably priced. Digital photography and technological advances offer major benefits to plastic surgeons but are not without risks. A proper understanding of data security and HIPAA regulations needs to be applied to these technologies to safely capture their benefits. Cloud storage services offer efficient photograph sharing and storage with layers of security to ensure HIPAA compliance and mitigate data theft risk.

  3. Integration of cloud-based storage in BES III computing environment

    NASA Astrophysics Data System (ADS)

    Wang, L.; Hernandez, F.; Deng, Z.

    2014-06-01

    We present ongoing work that aims to evaluate the suitability of cloud-based storage as a supplement to the Lustre file system for storing experimental data for the BES III physics experiment, and as a back end for storing files belonging to individual members of the collaboration. In particular, we discuss our findings regarding the support for cloud-based storage in the software stack of the experiment. We report on development work that improves the support of CERN's ROOT data analysis framework and allows efficient remote access to data through several cloud storage protocols. We also present our efforts to provide the experiment with efficient command-line tools for navigating and interacting with cloud-storage-based data repositories, both from interactive sessions and from grid jobs.

  4. Simulation of mass storage systems operating in a large data processing facility

    NASA Technical Reports Server (NTRS)

    Holmes, R.

    1972-01-01

    A mass storage simulation program was written to aid system designers in the design of a data processing facility. It acts as a tool for measuring the overall effect on the facility of on-line mass storage systems, and it provides the means of measuring and comparing the performance of competing mass storage systems. The performance of the simulation program is demonstrated.

  5. Hydrologic implications of GRACE satellite data in the Colorado River Basin

    USGS Publications Warehouse

    Scanlon, Bridget R.; Zhang, Zizhan; Reedy, Robert C.; Pool, Donald R.; Save, Himanshu; Long, Di; Chen, Jianli; Wolock, David M.; Conway, Brian D.; Winester, Daniel

    2015-01-01

    Use of GRACE (Gravity Recovery and Climate Experiment) satellites for assessing global water resources is rapidly expanding. Here we advance application of GRACE satellites by reconstructing long-term total water storage (TWS) changes from ground-based monitoring and modeling data. We applied the approach to the Colorado River Basin which has experienced multiyear intense droughts at decadal intervals. Estimated TWS declined by 94 km3 during 1986–1990 and by 102 km3 during 1998–2004, similar to the TWS depletion recorded by GRACE (47 km3) during 2010–2013. Our analysis indicates that TWS depletion is dominated by reductions in surface reservoir and soil moisture storage in the upper Colorado basin with additional reductions in groundwater storage in the lower basin. Groundwater storage changes are controlled mostly by natural responses to wet and dry cycles and irrigation pumping outside of Colorado River delivery zones based on ground-based water level and gravity data. Water storage changes are controlled primarily by variable water inputs in response to wet and dry cycles rather than increasing water use. Surface reservoir storage buffers supply variability with current reservoir storage representing ∼2.5 years of available water use. This study can be used as a template showing how to extend short-term GRACE TWS records and using all available data on storage components of TWS to interpret GRACE data, especially within the context of droughts.

  6. Groundwater Storage Changes: Present Status from GRACE Observations

    NASA Technical Reports Server (NTRS)

    Chen, Jianli; Famiglietti, James S.; Scanlon, Bridget R.; Rodell, Matthew

    2015-01-01

    Satellite gravity measurements from the Gravity Recovery and Climate Experiment (GRACE) provide quantitative measurement of terrestrial water storage (TWS) changes with unprecedented accuracy. Combining GRACE-observed TWS changes and independent estimates of water change in soil and snow and surface reservoirs offers a means for estimating groundwater storage change. Since its launch in March 2002, GRACE time-variable gravity data have been successfully used to quantify long-term groundwater storage changes in different regions over the world, including northwest India, the High Plains Aquifer and the Central Valley in the USA, the North China Plain, Middle East, and southern Murray-Darling Basin in Australia, where groundwater storage has been significantly depleted in recent years (or decades). It is difficult to rely on in situ groundwater measurements for accurate quantification of large, regional-scale groundwater storage changes, especially at long timescales due to inadequate spatial and temporal coverage of in situ data and uncertainties in storage coefficients. The now nearly 13 years of GRACE gravity data provide a successful and unique complementary tool for monitoring and measuring groundwater changes on a global and regional basis. Despite the successful applications of GRACE in studying global groundwater storage change, there are still some major challenges limiting the application and interpretation of GRACE data. In this paper, we present an overview of GRACE applications in groundwater studies and discuss if and how the main challenges to using GRACE data can be addressed.
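
    The groundwater estimate described above is commonly written as a simple storage balance (a standard form, stated here for reference):

      \Delta GWS = \Delta TWS_{GRACE} - \Delta SM - \Delta SWE - \Delta SW,

    where ΔTWS_GRACE is the GRACE-observed change in terrestrial water storage, and ΔSM, ΔSWE, and ΔSW are independently estimated changes in soil moisture, snow water equivalent, and surface (reservoir) water storage; the residual is attributed to groundwater.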

  7. Artificial cognitive memory—changing from density driven to functionality driven

    NASA Astrophysics Data System (ADS)

    Shi, L. P.; Yi, K. J.; Ramanathan, K.; Zhao, R.; Ning, N.; Ding, D.; Chong, T. C.

    2011-03-01

    Increasing density based on bit size reduction is currently the main driving force for the development of data storage technologies. However, all currently available storage technologies are expected to approach their physical limits in around 15 to 20 years due to miniaturization. To further advance storage technologies, a new development trend, different from the density-driven one, must be explored. One possible direction is to derive insights from biological counterparts. Unlike physical memories, which have the single function of data storage, human memory is versatile: it contributes to data storage, information processing, and, most importantly, cognitive functions such as adaptation, learning, perception, and knowledge generation. In this paper, a brief review of current data storage technologies is presented, followed by a discussion of future storage technology development trends. We expect the driving force to evolve from density to functionality, and new memory modules with additional functions beyond data storage to appear. As an initial step toward building a future-generation memory technology, we propose Artificial Cognitive Memory (ACM), a memory-based intelligent system. We also present the characteristics of ACM, new technologies that can be used to develop ACM components, such as bio-inspired element cells (silicon, memristor, phase change, etc.), and possible methodologies for constructing a biologically inspired hierarchical system.

  8. NAFFS: network attached flash file system for cloud storage on portable consumer electronics

    NASA Astrophysics Data System (ADS)

    Han, Lin; Huang, Hao; Xie, Changsheng

    Cloud storage technology has become a research hotspot in recent years, but existing cloud storage services are mainly designed for data storage needs over stable, high-speed Internet connections. Mobile Internet connections are often unstable and relatively slow; these native features of the mobile Internet limit the use of cloud storage on portable consumer electronics. The Network Attached Flash File System (NAFFS) presents the idea of using the built-in NAND flash memory of a portable device as a front-end cache of a virtualized cloud storage device. Modern portable devices with Internet connections have more than 1 GB of built-in NAND flash, which is quite enough for daily data storage. The data transfer rate of a NAND flash device is much higher than that of mobile Internet connections [1], and its non-volatile nature makes it very suitable as the cache device of Internet cloud storage on portable devices, which often have unstable power supplies and intermittent Internet connections. In the present work, NAFFS is evaluated with several benchmarks, and its performance is compared with traditional network attached file systems, such as NFS. Our evaluation results indicate that NAFFS achieves an average access speed of 3.38 MB/s, which is about 3 times faster than directly accessing cloud storage over a mobile Internet connection, and offers a more stable interface than direct use of a cloud storage API. Unstable Internet connections and sudden power-off conditions are tolerated, and no data in the cache is lost in such situations.
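
    A minimal sketch of the front-end-cache idea, assuming a hypothetical `upload` callable standing in for the cloud back end (illustrative only; not the NAFFS implementation):

      import os, queue, threading

      class FlashFrontedStore:
          """Write-back cache: writes land on local flash immediately and
          are uploaded to cloud storage in the background, so unstable
          connectivity neither blocks nor loses writes."""
          def __init__(self, cache_dir, upload):
              self.cache_dir, self.upload = cache_dir, upload
              self.pending = queue.Queue()
              threading.Thread(target=self._drain, daemon=True).start()

          def write(self, name, data: bytes):
              path = os.path.join(self.cache_dir, name)
              with open(path, "wb") as f:        # durable on local flash
                  f.write(data)
                  f.flush(); os.fsync(f.fileno())
              self.pending.put(path)             # upload later, off the hot path

          def _drain(self):
              while True:
                  path = self.pending.get()
                  with open(path, "rb") as f:
                      self.upload(os.path.basename(path), f.read())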

  9. Problems in the long-term storage of data obtained from scientific space experiments

    NASA Technical Reports Server (NTRS)

    Zlotin, G. N.; Khovanskiy, Y. D.

    1975-01-01

    It is shown that long-term data storage systems can be achieved when the system which organizes and conducts the scientific space experiments is equipped with a specialized subsystem: the information filing system. Its main functions are described along with the necessity of stage-by-stage development and compatibility with the data processing systems. The requirements for long-term data storage media are discussed.

  10. Triboelectrification-Enabled Self-Powered Data Storage.

    PubMed

    Kuang, Shuang Yang; Zhu, Guang; Wang, Zhong Lin

    2018-02-01

    Data storage by any means usually requires electric driving power for writing or reading. A novel approach for self-powered, triboelectrification-enabled data storage (TEDS) is presented. Data are incorporated into a set of metal-based surface patterns. As a probe slides across the patterned surface, triboelectrification between the scanning probe and the patterns produces an alternately varying voltage signal in a quasi-square wave. The trough and crest of the quasi-square wave are coded as the binary bits "0" and "1," respectively, while the time span of each trough or crest encodes the number of bits. The storage of letters and sentences is demonstrated with either square-shaped or disc-shaped surface patterns. Based on experimental data and numerical calculation, the theoretically predicted maximum data storage density could reach as high as 38.2 Gbit in⁻². Real-time data retrieval is demonstrated with the assistance of a software interface. For the TEDS reported in this work, the measured voltage signal is self-generated as a result of triboelectrification, without reliance on an external power source. This feature brings about not only low power consumption but also a much simpler structure. This work therefore paves a new path to high-density data storage that may have widespread applications.
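
    A minimal sketch of the read-out logic described above: threshold the self-generated voltage into low/high levels and convert the dwell time at each level into a run of identical bits (the threshold and timing parameters are illustrative assumptions, not values from the paper):

      def decode_teds(signal, dt, bit_time, threshold=0.5):
          """signal: sampled voltage trace; dt: sample period;
          bit_time: nominal duration of one bit. Troughs decode to runs
          of 0s and crests to runs of 1s, with the dwell time at each
          level setting the number of bits in the run."""
          bits, level, run = [], signal[0] > threshold, 0.0
          for v in signal:
              if (v > threshold) == level:
                  run += dt
              else:
                  bits += [int(level)] * max(1, round(run / bit_time))
                  level, run = not level, dt
          bits += [int(level)] * max(1, round(run / bit_time))
          return bits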

  11. Use of Schema on Read in Earth Science Data Archives

    NASA Astrophysics Data System (ADS)

    Petrenko, M.; Hegde, M.; Smit, C.; Pilone, P.; Pham, L.

    2017-12-01

    Traditionally, NASA Earth Science data archives have file-based storage using proprietary data file formats, such as HDF and HDF-EOS, which are optimized to support fast and efficient storage of spaceborne and model data as they are generated. The use of file-based storage essentially imposes an indexing strategy based on data dimensions. In most cases, NASA Earth Science data uses time as the primary index, leading to poor performance in accessing data in spatial dimensions. For example, producing a time series for a single spatial grid cell involves accessing a large number of data files. With exponential growth in data volume due to the ever-increasing spatial and temporal resolution of the data, using file-based archives poses significant performance and cost barriers to data discovery and access. Storing and disseminating data in proprietary data formats imposes an additional access barrier for users outside the mainstream research community. At the NASA Goddard Earth Sciences Data Information Services Center (GES DISC), we have evaluated applying the "schema-on-read" principle to data access and distribution. We used Apache Parquet to store geospatial data, and have exposed data through Amazon Web Services (AWS) Athena, AWS Simple Storage Service (S3), and Apache Spark. Using the "schema-on-read" approach allows customization of indexing—spatial or temporal—to suit the data access pattern. The storage of data in open formats such as Apache Parquet has widespread support in popular programming languages. A wide range of solutions for handling big data lowers the access barrier for all users. This presentation will discuss formats used for data storage, frameworks with support for "schema-on-read" used for data access, and common use cases covering data usage patterns seen in a geospatial data archive.
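
    As a concrete illustration of schema-on-read, the sketch below stores records in Parquet and pushes a spatial filter down to the reader, so the indexing strategy lives in the query rather than in the file layout (file and column names are illustrative; assumes the pyarrow package is installed):

      import pyarrow as pa
      import pyarrow.parquet as pq

      # Write a small geospatial table in a columnar, open format.
      table = pa.table({
          "time":  ["2017-12-01", "2017-12-01", "2017-12-02"],
          "lat":   [34.5, 60.1, 34.5],
          "lon":   [-86.6, 24.9, -86.6],
          "value": [1.2, 3.4, 5.6],
      })
      pq.write_table(table, "granule.parquet")

      # Schema-on-read: the spatial predicate is applied at read time,
      # producing a time series for one grid cell without a time-first
      # file organization.
      subset = pq.read_table(
          "granule.parquet",
          columns=["time", "value"],
          filters=[("lat", "=", 34.5), ("lon", "=", -86.6)],
      )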

  12. Towards Highly-Efficient Phototriggered Data Storage by Utilizing a Diketopyrrolopyrrole-Based Photoelectronic Small Molecule.

    PubMed

    Li, Yang; Li, Hua; He, Jinghui; Xu, Qingfeng; Li, Najun; Chen, Dongyun; Lu, Jianmei

    2016-07-20

    A cooperative photoelectrical strategy is proposed for effectively modulating the performance of a multilevel data-storage device. By taking advantage of organic photoelectronic molecules as storage media, the fabricated device exhibited enhanced working parameters under the action of both optical and electrical inputs. In cooperation with UV light, the operating voltages of the memory device were decreased, which was beneficial for low energy consumption. Moreover, the ON/OFF current ratio was more tunable and facilitated high-resolution multilevel storage. Compared with previous methods that focused on tuning the storage media, this study provides an easy approach for optimizing organic devices through multiple physical channels. More importantly, this method holds promise for integrating multiple functionalities into high-density data-storage devices. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis?

    NASA Technical Reports Server (NTRS)

    Halem, M.; Shaffer, F.; Palm, N.; Salmon, E.; Raghavan, S.; Kempster, L.

    1998-01-01

    This technology assessment of long-term, high-capacity data storage systems identifies an emerging crisis of severe proportions related to preserving important historical data in science, healthcare, manufacturing, finance, and other fields. For the last 50 years, the information revolution, which has engulfed all major institutions of modern society, has centered itself on data: their collection, storage, retrieval, transmission, analysis, and presentation. The transformation of long-term historical data records into information concepts is, according to Drucker, the next stage in this revolution towards building new information-based scientific and business foundations. For this to occur, the data survivability, reliability, and evolvability of long-term storage media and systems pose formidable technological challenges. Unlike the Y2K problem, where the clock is ticking and a crisis is set to go off at a specific time, large-capacity data storage repositories face a crisis similar to that of the social security system, in that the seriousness of the problem emerges after a decade or two. The essence of the storage crisis is as follows: if it takes a decade to migrate a petabyte of data to new media for preservation, and the life expectancy of the storage media itself is only a decade, then it may not be possible to complete the transfer before an irrecoverable data loss occurs. Over the last two decades, a number of anecdotal crises have occurred in which vital scientific and business data were lost, or would have been lost but for major expenditures of resources and funds, much like what is happening today to solve the Y2K problem. A prime example was the joint NASA/NSF/NOAA effort to rescue eight years' worth of TOVS/AVHRR data from an obsolete system, without which the valuable 20-year satellite record of global warming would not exist. Current storage system solutions to long-term data survivability rest on scalable architectures having parallel paths for data migration.

  14. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture that integrates distributed storage resources for LHC experiments and other data-intensive science applications and provides access to data from heterogeneous computing facilities. Studies include the development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement federated distributed storage for all kinds of operations, such as read/write/transfer and access via WAN, from grid centres, university clusters, supercomputers, and academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and to reformations of computing style, for instance how a bioinformatics program running on supercomputers can read and write data from the federated storage.

  15. System and Method for Providing a Climate Data Persistence Service

    NASA Technical Reports Server (NTRS)

    Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)

    2018-01-01

    A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.

  16. Earth Science Data Grid System

    NASA Astrophysics Data System (ADS)

    Chi, Y.; Yang, R.; Kafatos, M.

    2004-12-01

    The Earth Science Data Grid System (ESDGS) is a software system in support of earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, which provides users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), which manages the metadata associated with data sets, users, and resources. We are also developing additional services for 1) metadata management, 2) geospatial, temporal, and content-based indexing, and 3) near/on-site data processing, in response to the unique needs of Earth science applications. In this paper, we describe the software architecture and components of the system and use a practical example, supporting the storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM), to illustrate its functionality and features.

  17. Fast non-interferometric iterative phase retrieval for holographic data storage.

    PubMed

    Lin, Xiao; Huang, Yong; Shimura, Tsutomu; Fujimura, Ryushi; Tanaka, Yoshito; Endo, Masao; Nishimoto, Hajimu; Liu, Jinpeng; Li, Yang; Liu, Ying; Tan, Xiaodi

    2017-12-11

    Fast non-interferometric phase retrieval is a very important technique for phase-encoded holographic data storage and other phase-based applications, owing to its easy implementation, simple system setup, and robust noise tolerance. Here we present iterative non-interferometric phase retrieval for 4-level phase-encoded holographic data storage, based on an iterative Fourier transform algorithm and a known portion of the encoded data, which increases the storage code rate to twice that of an amplitude-based method. Only a single image at the Fourier plane of the beam is captured for the iterative reconstruction. Since the beam intensity at the Fourier plane is more concentrated than in the reconstructed beam itself, the required diffraction efficiency of the recording medium is reduced, which will significantly improve the effective dynamic range of the recording medium. The phase retrieval requires only 10 iterations to achieve a phase data error rate of less than 5%, which is successfully demonstrated experimentally by recording and reconstructing a test image. We believe our method will further advance holographic data storage in the era of big data.
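
    A minimal sketch of the iterative Fourier-transform idea underlying such phase retrieval, in the classic Gerchberg-Saxton form (a generic textbook algorithm, not the authors' exact 4-level scheme; the known-data constraint is simplified here to a known object-plane amplitude):

      import numpy as np

      def retrieve_phase(fourier_magnitude, object_amplitude, n_iter=10):
          """Gerchberg-Saxton: recover the object-plane phase from the
          measured Fourier-plane magnitude plus the known object amplitude."""
          rng = np.random.default_rng(0)
          phase = rng.uniform(0, 2 * np.pi, object_amplitude.shape)
          field = object_amplitude * np.exp(1j * phase)
          for _ in range(n_iter):
              F = np.fft.fft2(field)
              # Impose the measured Fourier-plane magnitude.
              F = fourier_magnitude * np.exp(1j * np.angle(F))
              field = np.fft.ifft2(F)
              # Impose the known object-plane amplitude.
              field = object_amplitude * np.exp(1j * np.angle(field))
          return np.angle(field)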

  18. Educational outreach at the NSF Engineering Research Center for Data Storage Systems

    NASA Astrophysics Data System (ADS)

    Williams, James E., Jr.

    1996-07-01

    An aspect of the National Science Foundation Engineering Research Center in Data Storage Systems (DSSC) program that is valued by our sponsors is the way we use our different educational programs to make a positive impact on the data storage industry. The most common way to teach data storage materials is in classes offered as part of the Carnegie Mellon curriculum. The DSSC also educates students through outreach programs such as the NSF Research Experiences for Undergraduates and Young Scholars programs, both of which have been very successful and place emphasis on including women, underrepresented minorities, and disabled students. The Center has also established cooperative outreach partnerships that serve both to educate students and to benefit the industry. One example is the cooperative program with the Magnetics Technology Centre at the National University of Singapore, which helps strengthen its research and educational efforts to benefit U.S. data storage companies with plants in Singapore. In addition, the Center has started a program to help train outstanding students from technical institutes, increasing their value as technicians to the data storage industry when they graduate.

  19. Basin-Scale Freshwater Storage Trends from GRACE

    NASA Astrophysics Data System (ADS)

    Famiglietti, J.; Kiel, B.; Frappart, F.; Syed, T. H.; Rodell, M.

    2006-12-01

    Four years have passed since the GRACE satellite tandem began recording variations in Earth's gravitational field. On monthly to annual timescales, variations in the gravity signal for a given location correspond primarily to changes in water storage. GRACE thus reveals, in a comprehensive, vertically-integrated manner, which areas and basins have experienced net increases or decreases in water storage. GRACE data (April 2002 to November 2005) released by the Center for Space Research at the University of Texas at Austin (RL01) is used for this study. Model-based data from GLDAS (Global Land Data Assimilation System) is integrated into this study for comparison with the CSR GRACE data. Basin-scale GLDAS storage trends are similar to those from GRACE, except in the Arctic, likely due to the GLDAS snow module. Outside of the Arctic, correlation of GRACE and GLDAS data confirms significant basin-scale storage trends across the GRACE data collection period. Sharp storage decreases are noted in the Congo, Zambezi, Mekong, Parana, and Yukon basins, among others. Significant increases are noted in the Niger, Lena, and Volga basins, and others. Current and future work involves assessment of these trends and their causes in the context of hydroclimatological variability.

  20. Network Consumption and Storage Needs when Working in a Full-Time Routine Digital Environment in a Large Nonacademic Training Hospital.

    PubMed

    Nap, Marius

    2016-01-01

    Digital pathology is indisputably connected with high demands on data traffic and storage. As a consequence, control of the logistic process and insight into the management of both traffic and storage are essential. We monitored data traffic from scanners to server and from server to workstations, and registered the storage needs for diagnostic images and additional projects. The results showed that data traffic inside the hospital network (1 Gbps) never exceeded 80 Mbps for scanner-to-server activity, and activity from the server to a workstation took at most 5 Mbps. Data storage per image increased from 300 MB to an average of 600 MB as a result of camera and software updates, and, owing to the increased scanning speed, the daily scanning time was reduced by almost 8 h. Introducing a storage policy of only 12 months for diagnostic images, with rescanning if needed, resulted in a manageable storage window of 45 TB per year. Using simple registration tools turned the transition to digital pathology into a concise package that allows planning and control. Incorporating the retrieval of such information from scanning and storage devices will reduce management's fear of losing control when introducing digital pathology into the daily routine. © 2016 S. Karger AG, Basel.

  1. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis

    NASA Technical Reports Server (NTRS)

    Halem, M.; Shaffer, F.; Palm, N.; Salmon, E.; Raghavan, S.; Kempster, L.

    1998-01-01

    The density of digital storage media in our information-intensive society increases by a factor of four every three years, while the rate at which these data can be migrated to viable long-term storage has been increasing by a factor of only four every nine years. Meanwhile, older data, stored on increasingly obsolete media, are at considerable risk. When the systems for which the media were designed are no longer serviced by their manufacturers (many of whom are out of business), the data will no longer be accessible. In some cases, older media suffer from a physical breakdown of components; tapes simply lose their magnetic properties after a long time in storage. The scale of the crisis is comparable to that facing the Social Security System. Greater financial and intellectual resources must be devoted to the development and refinement of new storage media and migration technologies in order to preserve as much data as possible.

  2. Multiferroic composites for magnetic data storage beyond the super-paramagnetic limit

    NASA Astrophysics Data System (ADS)

    Vopson, M. M.; Zemaityte, E.; Spreitzer, M.; Namvar, E.

    2014-09-01

    Ultra-high-density magnetic data storage requires magnetic grains of <5 nm diameter. The thermal stability of such small magnetic grains demands materials with very large magneto-crystalline anisotropy, which makes the data write process almost impossible, even when Heat Assisted Magnetic Recording (HAMR) technology is deployed. Here, we propose an alternative method of strengthening the thermal stability of the magnetic grains via elasto-mechanical coupling between the magnetic data storage layer and a piezo-ferroelectric substrate. Using the Stoner-Wohlfarth single-domain model, we show that correct tuning of this coupling can increase the effective magneto-crystalline anisotropy of the magnetic grains, making them stable beyond the super-paramagnetic limit. Moreover, the effective magnetic anisotropy can be lowered, or even switched off during the write process, simply by altering the voltage applied to the substrate. Based on these effects, we propose two magnetic data storage protocols, one of which could potentially replace HAMR technology, with both schemes promising unprecedented increases in data storage areal density beyond the super-paramagnetic size limit.
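
    The super-paramagnetic limit invoked above is usually stated through the thermal stability ratio (a standard rule of thumb, given here for reference):

      \Delta = \frac{K_u V}{k_B T} \gtrsim 40\text{--}60,

    where K_u is the effective uniaxial anisotropy energy density, V the grain volume, k_B Boltzmann's constant, and T the temperature. Shrinking grains below ~5 nm forces K_u up to keep Δ acceptable, which is what makes writing hard; a voltage-tunable elasto-mechanical contribution to K_u, as proposed here, relaxes that constraint during the write process.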

  3. Applications drivers for data parking on the Information Superhighway

    NASA Technical Reports Server (NTRS)

    Johnson, Clark E., Jr.; Foeller, Thomas

    1994-01-01

    As the cost of data storage continues to decline (currently about one-millionth of its cost four decades ago), entirely new application areas become economically feasible. Many of these new areas involve the extraordinarily high data rates and universal connectivity soon to be provided by the National Information Infrastructure (NII). The commonly held belief is that the main driver for the NII will be entertainment applications. We believe that entertainment applications as currently touted (multi-media, 500 video channels, video-on-demand, etc.) will play an important but far from dominant role in the development of the NII and its data storage components. The most pervasively effective drivers will be medical applications such as telemedicine and remote diagnosis, education, and environmental monitoring. These applications have a significant funding base and offer a clearly perceived opportunity to improve the nation's standard of living. The NII's wideband connectivity, both nationwide and worldwide, requires a broad spectrum of data storage devices with a wide range of performance capabilities. These storage centers will be dispersed throughout the system. Magnetic recording devices will fill the majority of these new data storage requirements for at least the rest of this century. The storage needs of various application areas and their respective market sizes will be explored. The comparative performance of various magnetic technologies and competitive alternative storage systems will be discussed.

  4. Analysis of Temperature Variability in Medication Storage Compartments in Emergency Transport Helicopters.

    PubMed

    O'Donnell, Margaret A; Whitfield, Justin

    The purpose of this study was to determine whether the temperature in medication storage compartments in air medical helicopters was within United States Pharmacopeia (USP)-defined limits for controlled room temperature. This was a prospective study using data obtained from a continuous temperature monitoring device. A total of 4 monitors were placed within 2 medication storage locations in 2 identical helicopters. The data collection period lasted 2 weeks during the summer and winter seasons. Data retrieved from the monitors were compared against USP parameters for proper medication storage. Results documented temperatures outside the acceptable range a majority of the time, with temperatures above the high limit during summer and below the low limit during winter. The study determined that compartments used for medication storage frequently fell outside the USP-defined limits for medication storage. Flight programs should monitor storage areas and take action to keep medications within the defined ranges. Copyright © 2016 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.
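
    A minimal Python sketch of the kind of check behind this study, assuming the usual USP controlled-room-temperature band of 20-25 °C with transient excursions of 15-30 °C permitted; the readings below are made-up illustrations, not study data.

        USP_LOW, USP_HIGH = 20.0, 25.0                 # controlled room temperature band
        EXCURSION_LOW, EXCURSION_HIGH = 15.0, 30.0     # permitted transient excursions

        readings_c = [19.4, 22.1, 27.8, 31.2, 24.9]    # hypothetical compartment log

        for t in readings_c:
            if not (EXCURSION_LOW <= t <= EXCURSION_HIGH):
                print(f"{t:5.1f} C  outside even the permitted excursion range")
            elif not (USP_LOW <= t <= USP_HIGH):
                print(f"{t:5.1f} C  excursion beyond controlled room temperature")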

  5. Sensitivity of GRACE-derived estimates of groundwater-level changes in southern Ontario, Canada

    NASA Astrophysics Data System (ADS)

    Hachborn, Ellen; Berg, Aaron; Levison, Jana; Ambadan, Jaison Thomas

    2017-12-01

    Amidst a changing climate, understanding the world's water resources is of increasing importance. In Ontario, Canada, low water conditions are currently assessed by the Conservation Authorities in Ontario and the Ministry of Natural Resources and Forestry using only precipitation and watershed-based stream gauges. Regional groundwater-storage changes in Ontario are not currently measured by research institutes using satellite data. In this study, contributions from the Gravity Recovery and Climate Experiment (GRACE) data are compared to a hydrogeological database covering southern Ontario from 2003 to 2013, to determine the suitability of GRACE total water storage estimates for monitoring groundwater storage in this location. Terrestrial water storage data from GRACE were used to determine monthly groundwater storage (GWS) anomaly values. GWS values were also determined by multiplying groundwater-level elevations (from the Provincial Groundwater Monitoring Network wells) by specific yield. Comparisons of GRACE-derived GWS to well-based GWS data determined that GRACE is sufficiently sensitive to obtain a meaningful signal in southern Ontario. Results show that GWS values produced by GRACE are useful for identifying regional changes in groundwater storage in areas with limited available hydrogeological characterization data. Results also indicate that GRACE may have an ability to forecast changes in groundwater storage, which will become useful when monitoring climate shifts in the near future.
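
    In symbols, the residual approach applied here, sketched in LaTeX using the customary component names (snow, soil moisture, surface water) rather than the authors' exact notation:

        % GRACE-derived groundwater storage anomaly as a residual:
        \Delta \mathrm{GWS} = \Delta \mathrm{TWS}_{\mathrm{GRACE}}
          - \left( \Delta \mathrm{SM} + \Delta \mathrm{SWE} + \Delta \mathrm{SW} \right)
        % Well-based estimate used for comparison, with S_y the specific yield
        % and \Delta h the groundwater-level anomaly:
        \Delta \mathrm{GWS}_{\mathrm{well}} = S_y \, \Delta h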

  6. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs growing by at least an order of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture that integrates distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications, and that provides access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within National Academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bioinformatics.

  7. A system for the input and storage of data in the Besm-6 digital computer

    NASA Technical Reports Server (NTRS)

    Schmidt, K.; Blenke, L.

    1975-01-01

    Computer programs used for the decoding and storage of large volumes of data on the BESM-6 computer are described. The following factors are discussed: the programming control language allows the programs to be run as part of a modular programming system used in data processing; data control is executed in a hierarchically built file on magnetic tape with sequential index storage; and the programs are not dependent on the structure of the data.

  8. Analyzing Data Remnant Remains on User Devices to Determine Probative Artifacts in Cloud Environment.

    PubMed

    Ahmed, Abdulghani Ali; Xue Li, Chua

    2018-01-01

    Cloud storage services allow users to store their data online, so that they can remotely access, maintain, manage, and back up data from anywhere via the Internet. Although helpful, this storage creates a challenge for digital forensic investigators and practitioners in collecting, identifying, acquiring, and preserving evidential data. This study proposes an investigation scheme for analyzing data remnants and determining probative artifacts in a cloud environment. Using pCloud as a case study, this research collected the data remnants available on end-user device storage following the storing, uploading, and accessing of data in the cloud storage. Data remnants are collected from several sources, including client software files, directory listings, prefetch, registry, network PCAP, browser, and memory and link files. Results demonstrate that the collected remnant data are beneficial in determining a sufficient number of artifacts about the investigated cybercrime. © 2017 American Academy of Forensic Sciences.

  9. Storage requirements for Georgia streams

    USGS Publications Warehouse

    Carter, Robert F.

    1983-01-01

    The suitability of a stream as a source of water supply or for waste disposal may be severely limited by low flow during certain periods. A water user may be forced to provide storage facilities to supplement the natural flow if the low flow is insufficient for his needs. This report provides data for evaluating the feasibility of augmenting low streamflow by means of storage facilities. It contains tabular data on storage requirements for draft rates that are as much as 60 percent of the mean annual flow at 99 continuous-record gaging stations, and draft-storage diagrams for estimating storage requirements at many additional sites. Through analyses of streamflow data, the State was divided into four regions. Draft-storage diagrams for each region provide a means of estimating storage requirements for sites on streams where data are scant, provided the drainage area, mean annual flow, and the 7-day, 10-year low flow are known or can be estimated. These data are tabulated for the 99 gaging stations used in the analyses and for 102 partial-record sites where only base-flow measurements have been made. The draft-storage diagrams are useful not only for estimating in-channel storage required for low-flow augmentation, but also can be used for estimating the volume of off-channel storage required to retain wastewater during low-flow periods for later release. In addition, these relationships can be helpful in estimating the volume of wastewater to be disposed of by spraying on land, provided that the water disposed of in this manner is only that for which streamflow dilution water is not currently available. Mean annual flow can be determined for any stream within the State by using the runoff map in this report. Low-flow indices can be estimated by several methods, including correlation of base-flow measurements with concurrent flow at nearby continuous-record gaging stations where low-flow indices have been determined.

  10. Optical storage media data integrity studies

    NASA Technical Reports Server (NTRS)

    Podio, Fernando L.

    1994-01-01

    Optical disk-based information systems are being used in private industry and many Federal Government agencies for on-line and long-term storage of large quantities of data. The storage devices that are part of these systems are designed with powerful, but not unlimited, media error correction capabilities. The integrity of data stored on optical disks does not depend only on the life expectancy specifications for the medium. Different factors, including handling and storage conditions, may cause medium errors to increase in size and frequency. Monitoring this potential data degradation is crucial, especially for long-term applications. Efforts are being made by the Association for Information and Image Management Technical Committee C21, Storage Devices and Applications, to specify methods for monitoring and reporting to the user medium errors detected by the storage device while writing, reading, or verifying the data stored on that medium. The Computer Systems Laboratory (CSL) of the National Institute of Standards and Technology (NIST) has a leadership role in the development of these standard techniques. In addition, CSL is researching other data integrity issues, including the investigation of error-resilient compression algorithms. NIST has conducted care and handling experiments on optical disk media with the objective of identifying possible causes of degradation. NIST's work in data integrity and related standards activities is described.

  11. Two-Level Verification of Data Integrity for Data Storage in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping

    Data storage in cloud computing can save capital expenditure and relieve the burden of storage management for users. Because stored files may be lost or corrupted, many researchers focus on the verification of data integrity. However, massive numbers of users often bring large numbers of verification tasks to the auditor. Moreover, users also need to pay an extra fee for these verification tasks beyond the storage fee. We therefore propose a two-level verification of data integrity to alleviate these problems. The key idea is for users to routinely verify the data integrity themselves, and for the auditor to arbitrate challenges between the user and the cloud provider according to the MACs and ϕ values. Extensive performance simulations show that the proposed scheme markedly decreases the auditor's verification tasks and the ratio of wrong arbitrations.
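
    A minimal Python sketch of the user-side routine check in a MAC-based integrity scheme of this kind: the user keeps a keyed MAC per stored block and re-verifies what the provider returns. The names and the escalation step are illustrative, not the paper's exact protocol.

        import hmac, hashlib

        def make_tag(key: bytes, block: bytes) -> bytes:
            # Keyed MAC over one stored block.
            return hmac.new(key, block, hashlib.sha256).digest()

        def user_verify(key: bytes, block_from_cloud: bytes, stored_tag: bytes) -> bool:
            # First-level check done by the user; on mismatch the dispute would be
            # escalated to the auditor for arbitration (the second level).
            return hmac.compare_digest(make_tag(key, block_from_cloud), stored_tag)

        key = b"user-secret-key"
        block = b"file chunk 0017"
        tag = make_tag(key, block)
        assert user_verify(key, block, tag)
        assert not user_verify(key, b"corrupted chunk", tag)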

  12. Possibility of the market expansion of large capacity optical cold archive

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ikuo; Sakata, Emiko

    2017-08-01

    The IoT and Big Data fields, activated by the ICT revolution, have caused a rapid increase in the data distributed by various business applications. As a result, data with low access frequency has been growing to a scale humans have never experienced before. Such data is called "cold data", and storage for cold data is called "cold storage". In this situation, the storage specifications, including access frequency, response speed and cost, are determined by the application's requirements.

  13. Soil Moisture or Groundwater?

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2017-12-01

    Partitioning the vertically integrated water storage variations estimated from GRACE satellite data into their component parts requires independent information. Land surface models, which simulate the transfer and storage of moisture and energy at the land surface, are often used to estimate the water storage variability of snow, surface water, and soil moisture. To obtain an estimate of changes in groundwater, the estimates of these storage components are removed from the GRACE data; biases in the modeled water storage components are therefore present in the residual groundwater estimate. In this study, we examine how soil moisture variability, estimated using the Community Land Model (CLM), depends on the vertical structure of the model. We then explore the implications of this uncertainty in the context of estimating groundwater variations using GRACE data.

  14. Volume Holographic Storage of Digital Data Implemented in Photorefractive Media

    NASA Astrophysics Data System (ADS)

    Heanue, John Frederick

    A holographic data storage system is fundamentally different from conventional storage devices. Information is recorded in a volume, rather than on a two-dimensional surface. Data is transferred in parallel, on a page-by -page basis, rather than serially. These properties, combined with a limited need for mechanical motion, lead to the potential for a storage system with high capacity, fast transfer rate, and short access time. The majority of previous volume holographic storage experiments have involved direct storage and retrieval of pictorial information. Success in the development of a practical holographic storage device requires an understanding of the performance capabilities of a digital system. This thesis presents a number of contributions toward this goal. A description of light diffraction from volume gratings is given. The results are used as the basis for a theoretical and numerical analysis of interpage crosstalk in both angular and wavelength multiplexed holographic storage. An analysis of photorefractive grating formation in photovoltaic media such as lithium niobate is presented along with steady-state expressions for the space-charge field in thermal fixing. Thermal fixing by room temperature recording followed by ion compensation at elevated temperatures is compared to simultaneous recording and compensation at high temperature. In particular, the tradeoff between diffraction efficiency and incomplete Bragg matching is evaluated. An experimental investigation of orthogonal phase code multiplexing is described. Two unique capabilities, the ability to perform arithmetic operations on stored data pages optically, rather than electronically, and encrypted data storage, are demonstrated. A comparison of digital signal representations, or channel codes, is carried out. The codes are compared in terms of bit-error rate performance at constant capacity. A well-known one-dimensional digital detection technique, maximum likelihood sequence estimation, is extended for use in a two-dimensional page format memory. The effectiveness of the technique in a system corrupted by intersymbol interference is investigated both experimentally and through numerical simulations. The experimental implementation of a fully-automated multiple page digital holographic storage system is described. Finally, projections of the performance limits of holographic data storage are made taking into account typical noise sources.

  15. Hydrologic implications of GRACE satellite data in the Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Scanlon, Bridget R.; Zhang, Zizhan; Reedy, Robert C.; Pool, Donald R.; Save, Himanshu; Long, Di; Chen, Jianli; Wolock, David M.; Conway, Brian D.; Winester, Daniel

    2015-12-01

    Use of GRACE (Gravity Recovery and Climate Experiment) satellites for assessing global water resources is rapidly expanding. Here we advance the application of GRACE satellites by reconstructing long-term total water storage (TWS) changes from ground-based monitoring and modeling data. We applied the approach to the Colorado River Basin, which has experienced multiyear intense droughts at decadal intervals. Estimated TWS declined by 94 km3 during 1986-1990 and by 102 km3 during 1998-2004, similar to the TWS depletion recorded by GRACE (47 km3) during 2010-2013. Our analysis indicates that TWS depletion is dominated by reductions in surface reservoir and soil moisture storage in the upper Colorado basin, with additional reductions in groundwater storage in the lower basin. Groundwater storage changes are controlled mostly by natural responses to wet and dry cycles and by irrigation pumping outside of Colorado River delivery zones, based on ground-based water level and gravity data. Water storage changes are controlled primarily by variable water inputs in response to wet and dry cycles rather than by increasing water use. Surface reservoir storage buffers supply variability, with current reservoir storage representing ~2.5 years of available water use. This study can be used as a template showing how to extend short-term GRACE TWS records and how to use all available data on the storage components of TWS to interpret GRACE data, especially within the context of droughts.

  16. Storage reliability analysis summary report. Volume 2: Electro mechanical devices

    NASA Astrophysics Data System (ADS)

    Smith, H. B., Jr.; Krulac, I. L.

    1982-09-01

    This document summarizes storage reliability data collected by the US Army Missile Command on electro-mechanical devices over a period of several years. Sources of data are detailed, major failure modes and mechanisms are listed and discussed. Non-operational failure rate prediction methodology is given, and conclusions and recommendations for enhancing the storage reliability of devices are drawn from the analysis of collected data.

  17. Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D.; Baltzer, T.

    2005-12-01

    The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP-enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term, on the order of hours to perhaps a week or two. In order to save interesting data in longer-term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence is generally based on executing a series of low-level, operating-system-specific commands with significant user involvement. The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying. In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back-end storage media and interchangeable software components. The TDR interface will provide high-level abstractions for long-term storage; controlled, fast, and reliable access; and data movement capabilities via a variety of technologies such as OPeNDAP and GridFTP. The modular structure will allow substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as the Replica Location Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR. References: IDD: http://www.unidata.ucar.edu/content/software/idd/index.html THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html LEAD: http://lead.ou.edu/ RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf
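
    For a sense of the access pattern a TDS/TDR pairing enables, a minimal Python sketch of OPeNDAP-style remote subsetting; the URL and variable name are placeholders, not a real TDR endpoint, and netCDF4 with DAP support is assumed.

        from netCDF4 import Dataset  # pip install netCDF4 (built with OPeNDAP support)

        url = "https://example.edu/thredds/dodsC/repository/sample.nc"  # hypothetical
        ds = Dataset(url)            # an OPeNDAP-enabled URL opens like a local file
        # Only the requested subset crosses the network, not the whole file:
        temps = ds.variables["temperature"][0, :10, :10]
        ds.close()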

  18. Scientific Data Storage for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Readey, J.

    2014-12-01

    Traditionally, data storage for geophysical software systems has centered on file-based systems and libraries such as NetCDF and HDF5. In contrast, cloud-based infrastructure providers such as Amazon AWS, Microsoft Azure, and the Google Cloud Platform generally provide storage technologies based on an object storage service (for large binary objects) complemented by a database service (for small objects that can be represented as key-value pairs). These systems have been shown to be highly scalable, reliable, and cost effective. We will discuss a proposed system that leverages these cloud-based storage technologies to provide an API-compatible library for traditional NetCDF and HDF5 applications. This system will enable cloud storage suitable for geophysical applications that can scale up to petabytes of data and thousands of users. We'll also cover other advantages of this system, such as enhanced metadata search.
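
    A minimal Python sketch of the storage split described here: large binary chunks go to an object store, small key-value metadata to a database service. The bucket and table names are placeholders, and boto3 against AWS S3/DynamoDB is one concrete instance of the pattern, not the proposed library itself.

        import boto3

        s3 = boto3.client("s3")
        ddb = boto3.resource("dynamodb").Table("dataset-index")  # hypothetical table

        chunk = b"\x00" * 4096                   # one chunk of a large array
        key = "model-run-42/temperature/chunk-000"

        s3.put_object(Bucket="geodata", Key=key, Body=chunk)   # large binary object
        ddb.put_item(Item={                                    # small key-value metadata
            "dataset": "model-run-42",
            "chunk": key,
            "shape": "64x8x8",
            "dtype": "float64",
        })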

  19. Data systems and computer science space data systems: Onboard memory and storage

    NASA Technical Reports Server (NTRS)

    Shull, Tom

    1991-01-01

    The topics are presented in viewgraph form and include the following: technical objectives; technology challenges; state-of-the-art assessment; mass storage comparison; SODR drive and system concepts; program description; vertical Bloch line (VBL) device concept; relationship to external programs; and backup charts for memory and storage.

  20. Optical Disks Compete with Videotape and Magnetic Storage Media: Part I.

    ERIC Educational Resources Information Center

    Urrows, Henry; Urrows, Elizabeth

    1988-01-01

    Describes the latest technology in videotape cassette systems and other magnetic storage devices and their possible effects on optical data disks. Highlights include Honeywell's Very Large Data Store (VLDS); Exabyte's tape cartridge storage system; standards for tape drives; and Masstor System's videotape cartridge system. (LRW)

  1. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    PubMed Central

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting results can be a major impediment to postulating suitable hypotheses, so an innovative storage solution that addresses limitations such as hard-disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of relational and NoSQL databases for fast and efficient storage, processing and querying of large datasets from transcript expression analysis, together with corresponding metadata as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amounts of data derived from RNAseq analysis, along with methods of interacting with the database, either via command-line data management workflows, written in Perl, with functionalities that simplify the storage and manipulation of the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361

  2. Photographic memory: The storage and retrieval of data

    NASA Technical Reports Server (NTRS)

    Horton, J.

    1984-01-01

    The concept of density-encoding digital data in a mass-storage computer peripheral is proposed. This concept requires that digital data be encoded as distinguishable density levels (DDLs) of the film used as the storage medium. These DDLs are then recorded on the film in relatively large pixels. Retrieval of the data would be accomplished by scanning the photographic record using a relatively small aperture. Multiplexing of the pixels is used to store data with a range greater than the number of DDLs supportable by the film in question. Although a cartographic application is used as an example of photographic data storage, any digital data can be stored in a like manner. When the data is inherently spatially distributed, the aptness of the proposed scheme is even more evident; in such a case, human readability is an advantage which can be added to those mentioned earlier: speed of acquisition, ease of implementation, and cost effectiveness.
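
    A minimal Python sketch of the multiplexing idea: if the film supports only N distinguishable density levels, a value with a larger range is written as several base-N "digits", one per pixel. The parameters are illustrative assumptions; the paper's encoding details are not given here.

        N_LEVELS = 8            # DDLs the film can reliably distinguish (assumed)
        PIXELS_PER_VALUE = 3    # multiplex factor: range becomes 8**3 = 512

        def encode(value: int) -> list[int]:
            # Split a value into base-N digits, one density level per pixel.
            digits = []
            for _ in range(PIXELS_PER_VALUE):
                digits.append(value % N_LEVELS)
                value //= N_LEVELS
            return digits

        def decode(digits: list[int]) -> int:
            # Recombine scanned density levels into the original value.
            value = 0
            for d in reversed(digits):
                value = value * N_LEVELS + d
            return value

        assert decode(encode(317)) == 317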

  3. Advances in Telemetry Capability as Demonstrated on an Affordable Precision Mortar

    DTIC Science & Technology

    2012-06-01

    ...recording of high-rate data and then broadcasting it over the rest of the flight test. Lastly, an on-board data storage implementation using a MicroSD card is presented.

  4. Up-to-date state of storage techniques used for large numerical data files

    NASA Technical Reports Server (NTRS)

    Chlouba, V.

    1975-01-01

    Methods for data storage and output in data banks and memory files are discussed along with a survey of equipment available for this. Topics discussed include magnetic tapes, magnetic disks, Terabit magnetic tape memory, Unicon 690 laser memory, IBM 1360 photostore, microfilm recording equipment, holographic recording, film readers, optical character readers, digital data storage techniques, and photographic recording. The individual types of equipment are summarized in tables giving the basic technical parameters.

  5. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    NASA Astrophysics Data System (ADS)

    Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.

    2014-06-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure in which 16 disks are lost. Both cloud storage systems are finally demonstrated to function as back-end storage to a filesystem, which is used to deliver high-energy physics software.

  6. Characterizing multiple timescales of stream and storage zone interaction that affect solute fate and transport in streams

    USGS Publications Warehouse

    Choi, Jungyill; Harvey, Judson W.; Conklin, Martha H.

    2000-01-01

    The fate of contaminants in streams and rivers is affected by exchange and biogeochemical transformation in slowly moving or stagnant flow zones that interact with rapid flow in the main channel. In a typical stream, there are multiple types of slowly moving flow zones in which exchange and transformation occur, such as stagnant or recirculating surface water as well as subsurface hyporheic zones. However, most investigators use transport models with just a single storage zone in their modeling studies, which assumes that the effects of multiple storage zones can be lumped together. Our study addressed the following question: Can a single‐storage zone model reliably characterize the effects of physical retention and biogeochemical reactions in multiple storage zones? We extended an existing stream transport model with a single storage zone to include a second storage zone. With the extended model we generated 500 data sets representing transport of nonreactive and reactive solutes in stream systems that have two different types of storage zones with variable hydrologic conditions. The one storage zone model was tested by optimizing the lumped storage parameters to achieve a best fit for each of the generated data sets. Multiple storage processes were categorized as possessing I, additive; II, competitive; or III, dominant storage zone characteristics. The classification was based on the goodness of fit of generated data sets, the degree of similarity in mean retention time of the two storage zones, and the relative distributions of exchange flux and storage capacity between the two storage zones. For most cases (>90%) the one storage zone model described either the effect of the sum of multiple storage processes (category I) or the dominant storage process (category III). Failure of the one storage zone model occurred mainly for category II, that is, when one of the storage zones had a much longer mean retention time (ts ratio > 5.0) and when the dominance of storage capacity and exchange flux occurred in different storage zones. We also used the one storage zone model to estimate a “single” lumped rate constant representing the net removal of a solute by biogeochemical reactions in multiple storage zones. For most cases the lumped rate constant that was optimized by one storage zone modeling estimated the flux‐weighted rate constant for multiple storage zones. Our results explain how the relative hydrologic properties of multiple storage zones (retention time, storage capacity, exchange flux, and biogeochemical reaction rate constant) affect the reliability of lumped parameters determined by a one storage zone transport model. We conclude that stream transport models with a single storage compartment will in most cases reliably characterize the dominant physical processes of solute retention and biogeochemical reactions in streams with multiple storage zones.
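
    For reference, a two-storage-zone extension of the standard transient storage equations, of the kind used to generate the synthetic data sets; this is a sketch in the usual OTIS-style notation, not necessarily the authors' exact formulation:

        % Main channel (concentration C, discharge Q, area A, dispersion D):
        \frac{\partial C}{\partial t} = -\frac{Q}{A}\frac{\partial C}{\partial x}
          + \frac{1}{A}\frac{\partial}{\partial x}\!\left(A D \frac{\partial C}{\partial x}\right)
          + \alpha_1 (C_{s1} - C) + \alpha_2 (C_{s2} - C)
        % Two storage zones with areas A_{s1}, A_{s2} and exchange rates \alpha_1, \alpha_2:
        \frac{dC_{s1}}{dt} = \alpha_1 \frac{A}{A_{s1}} (C - C_{s1}), \qquad
        \frac{dC_{s2}}{dt} = \alpha_2 \frac{A}{A_{s2}} (C - C_{s2})
        % Mean retention time of zone i: t_{s,i} = A_{s,i} / (\alpha_i A).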

  7. Storage media for computers in radiology.

    PubMed

    Dandu, Ravi Varma

    2008-11-01

    The introduction and wide acceptance of digital technology in medical imaging has resulted in an exponential increase in the amount of data produced by the radiology department. There is an insatiable need for storage space to archive this ever-growing volume of image data. Healthcare facilities should plan the type and size of the storage media they need based not just on the volume of data but also on considerations such as speed and ease of access, redundancy, security, costs, and the longevity of the archival technology. This article reviews the various digital storage media and compares their merits and demerits.

  8. Asynchronous Object Storage with QoS for Scientific and Commercial Big Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brim, Michael J; Dillow, David A; Oral, H Sarp

    2013-01-01

    This paper presents our design for an asynchronous object storage system intended for use in scientific and commercial big data workloads. Use cases from the target workload domains are used to motivate the key abstractions used in the application programming interface (API). The architecture of the Scalable Object Store (SOS), a prototype object storage system that supports the API's facilities, is presented. The SOS serves as a vehicle for future research into scalable and resilient big data object storage. We briefly review our research into providing efficient storage servers capable of providing quality of service (QoS) contracts relevant for big data use cases.

  9. Storage and network bandwidth requirements through the year 2000 for the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen

    1996-01-01

    The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.

  10. Using RFID to Enhance Security in Off-Site Data Storage

    PubMed Central

    Lopez-Carmona, Miguel A.; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R.

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system’s benefits in terms of efficiency and failure prevention. PMID:22163638

  11. A hierarchical storage management (HSM) scheme for cost-effective on-line archival using lossy compression.

    PubMed

    Avrin, D E; Andriole, K P; Yin, L; Gould, R G; Arenson, R L

    2001-03-01

    A hierarchical storage management (HSM) scheme for cost-effective on-line archival of image data using lossy compression is described. This HSM scheme also provides an off-site tape backup mechanism and disaster recovery. The full-resolution image data are viewed originally for primary diagnosis, then losslessly compressed and sent off site to a tape backup archive. In addition, the original data are wavelet lossy compressed (at approximately 25:1 for computed radiography, 10:1 for computed tomography, and 5:1 for magnetic resonance) and stored on a large RAID device for maximum cost-effective, on-line storage and immediate retrieval of images for review and comparison. This HSM scheme provides a solution to 4 problems in image archiving, namely cost-effective on-line storage, disaster recovery of data, off-site tape backup for the legal record, and maximum intermediate storage and retrieval through the use of on-site lossy compression.
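
    A minimal Python sketch of the tiering rule implied by this HSM scheme: originals go losslessly to off-site tape, while the on-line RAID copy is wavelet lossy compressed at a modality-dependent ratio. The ratios come from the abstract; the function and action names are illustrative, not the paper's implementation.

        LOSSY_RATIO = {"CR": 25, "CT": 10, "MR": 5}   # computed radiography / CT / MR

        def archive(image_id: str, modality: str) -> dict:
            # Two archival actions per study: lossless off-site backup plus a
            # lossy on-line copy for fast review and comparison.
            return {
                "offsite_tape": f"lossless({image_id})",
                "online_raid": f"wavelet_lossy({image_id}, ratio={LOSSY_RATIO[modality]}:1)",
            }

        print(archive("study-0042", "CT"))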

  12. Using RFID to enhance security in off-site data storage.

    PubMed

    Lopez-Carmona, Miguel A; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system's benefits in terms of efficiency and failure prevention.

  13. Exascale Storage Systems the SIRIUS Way

    NASA Astrophysics Data System (ADS)

    Klasky, S. A.; Abbasi, H.; Ainsworth, M.; Choi, J.; Curry, M.; Kurc, T.; Liu, Q.; Lofstead, J.; Maltzahn, C.; Parashar, M.; Podhorszki, N.; Suchyta, E.; Wang, F.; Wolf, M.; Chang, C. S.; Churchill, M.; Ethier, S.

    2016-10-01

    As the exascale computing age emerges, data-related issues are becoming critical factors that determine how and where we do computing. Popular approaches used by traditional I/O solutions and storage libraries become increasingly bottlenecked due to their assumptions about data movement, re-organization, and storage. While new technologies, such as "burst buffers", can help address some of the short-term performance issues, it is essential that we re-examine the underlying storage and I/O infrastructure to effectively support requirements and challenges at exascale and beyond. In this paper we present a new approach to the exascale Storage System and I/O (SSIO), based on allowing users to inject application knowledge into the system and leveraging this knowledge to better manage, store, and access large data volumes so as to minimize the time to scientific insight. Central to our approach is the distinction between the data, the metadata, and the knowledge contained therein, transferred from the user to the system by describing the "utility" of data as it ages.

  14. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long-running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructure, resulting in more complex tool development to encompass all possible storage architectures used for the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) are an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures a seamless integration of all the different storage architectures, from standard block-based POSIX-compliant storage disks to object-based architectures such as the S3-compliant HGST Active Archive System and the Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application responsible for using metadata to organize the data into a tree, determining the location for data storage, and providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.

  15. LEVERAGING AGING MATERIALS DATA TO SUPPORT EXTENSION OF TRANSPORTATION SHIPPING PACKAGES SERVICE LIFE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, K.; Bellamy, S.; Daugherty, W.

    Nuclear material inventories are increasingly being transferred to interim storage locations where they may reside for extended periods of time. Use of a shipping package to store nuclear materials after the transfer has become more common for a variety of reasons. Shipping packages are robust and have a qualified pedigree for performance in normal operation and accident conditions, but are only certified over an approved transportation window. The continued use of shipping packages to contain nuclear material during interim storage will result in reduced overall costs and reduced exposure to workers. However, the shipping package materials of construction must maintain integrity as specified by the safety basis of the storage facility throughout the storage period, which is typically well beyond the certified transportation window. In many ways, the certification processes required for interim storage of nuclear materials in shipping packages are similar to the life extension programs required for dry cask storage systems for commercial nuclear fuels. The storage of spent nuclear fuel in dry cask storage systems is federally regulated, and over 1500 individual dry casks have been in successful service for up to 20 years in the US. The uncertainty in final disposition will likely require extended storage of this fuel well beyond initial license periods, and perhaps multiple re-licenses may be needed. Thus, both the shipping packages and the dry cask storage systems require materials integrity assessments and assurance of continued satisfactory materials performance over times not considered in the original evaluation processes. Test programs for the shipping packages have been established to obtain aging data on materials of construction to demonstrate continued system integrity. The collective data may be coupled with similar data for the dry cask storage systems and used to support extending the service life of shipping packages in both transportation and storage.

  16. Optimizing tertiary storage organization and access for spatio-temporal datasets

    NASA Technical Reports Server (NTRS)

    Chen, Ling Tony; Rotem, Doron; Shoshani, Arie; Drach, Bob; Louis, Steve; Keating, Meridith

    1994-01-01

    We address in this paper data management techniques for efficiently retrieving requested subsets of large datasets stored on mass storage devices. This problem represents a major bottleneck that can negate the benefits of fast networks, because the time to access a subset from a large dataset stored on a mass storage system is much greater than the time to transmit that subset over a network. This paper focuses on very large spatial and temporal datasets generated by simulation programs in the area of climate modeling, but the techniques developed can be applied to other applications that deal with large multidimensional datasets. The main requirement we have addressed is the efficient access of subsets of information contained within much larger datasets, for the purpose of analysis and interactive visualization. We have developed data partitioning techniques that partition datasets into 'clusters' based on analysis of data access patterns and storage device characteristics. The goal is to minimize the number of clusters read from mass storage systems when subsets are requested. We emphasize in this paper proposed enhancements to current storage server protocols to permit control over the physical placement of data on storage devices. We also discuss in some detail the aspects of the interface between the application programs and the mass storage system, as well as a workbench to help scientists design the best reorganization of a dataset for anticipated access patterns.

  17. Integration of end-user Cloud storage for CMS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  18. Integration of end-user Cloud storage for CMS analysis

    DOE PAGES

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...

    2017-05-19

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  19. Holographic data storage crystals for LDEF (A0044)

    NASA Technical Reports Server (NTRS)

    Callen, W. R.; Gaylord, T. K.

    1984-01-01

    Electro-optic holographic recording systems were developed. The spaceworthiness of electro-optic crystals for use in ultrahigh-capacity space data storage and retrieval systems is examined. The crystals for this experiment are included with the various electro-optical components of the LDEF experiment. The effect of long-duration exposure on active optical system components is investigated. The concept of data storage in an optical-phase holographic memory is illustrated.

  20. Discrete event simulation and the resultant data storage system response in the operational mission environment of Jupiter-Saturn /Voyager/ spacecraft

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1978-01-01

    The Data Storage Subsystem Simulator (DSSSIM), which simulates (in ground software) the occurrence of discrete events in the Voyager mission, is described. Functional requirements for Data Storage Subsystem (DSS) simulation are discussed, and discrete event simulation/DSSSIM processing is covered. Four types of output associated with a typical DSSSIM run are presented, and DSSSIM limitations and constraints are outlined.
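
    A minimal Python sketch of the generic discrete-event loop a simulator like DSSSIM is built around: events sit in a time-ordered queue and are processed in order, with each handler free to schedule follow-on events. The event names are illustrative, not Voyager DSS command names.

        import heapq

        events = []  # priority queue of (time_s, sequence_no, event_name)
        heapq.heappush(events, (0.0, 0, "start_record"))
        heapq.heappush(events, (42.5, 1, "stop_record"))

        seq = 2
        while events:
            t, _, name = heapq.heappop(events)
            print(f"t={t:7.1f}s  {name}")
            if name == "start_record":
                # A handler may schedule future events, e.g. a playback pass.
                heapq.heappush(events, (t + 120.0, seq, "playback"))
                seq += 1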

  1. Designing and application of SAN extension interface based on CWDM

    NASA Astrophysics Data System (ADS)

    Qin, Leihua; Yu, Shengsheng; Zhou, Jingli

    2005-11-01

    As Fibre Channel (FC) becomes the protocol of choice within corporate data centers, enterprises are increasingly deploying SANs in their data centers. In order to mitigate the risk of losing data and improve the availability of data, more and more enterprises are adopting storage extension technologies to replicate their business-critical data to a secondary site. Transmitting this information over distance requires a carrier-grade environment with zero data loss, scalable throughput, low jitter, high security, and the ability to travel long distances. To address these business requirements, there are three basic architectures for storage extension: Storage over Internet Protocol, Storage over Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH), and Storage over Dense Wavelength Division Multiplexing (DWDM). Each approach varies in functionality, complexity, cost, scalability, security, availability, predictable behavior (bandwidth, jitter, latency), and multiple-carrier limitations. Compared with these connectivity technologies, Coarse Wavelength Division Multiplexing (CWDM) offers a simplified, low-cost, high-performance connectivity solution for enterprises to deploy their storage extension. In this paper, we design a storage extension connectivity over CWDM and test its electrical characteristics and the random read and write performance of a disk array through the CWDM connectivity; test results show that the performance of the connectivity over CWDM is acceptable. Furthermore, we propose three kinds of network architecture for SAN extension based on the CWDM interface. Finally, the credit-based flow control mechanism of FC and the relationship between credits and extension distance are analyzed.
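
    A minimal Python sketch of the credit/distance relationship analyzed here: FC flow control releases one frame per buffer-to-buffer credit, so keeping a long link busy requires enough credits to cover the round-trip time. The numbers are generic FC figures chosen for illustration, not measurements from the paper.

        rate_gbps = 2.0            # FC link rate (2G FC assumed)
        frame_bytes = 2112         # approximate maximum FC frame size
        distance_km = 80           # CWDM extension distance (illustrative)
        prop_us_per_km = 5.0       # light in fiber: ~5 microseconds per km

        frame_time_us = frame_bytes * 8 / (rate_gbps * 1000)  # serialization time
        rtt_us = 2 * distance_km * prop_us_per_km             # round-trip propagation
        credits_needed = rtt_us / frame_time_us
        print(f"~{credits_needed:.0f} buffer-to-buffer credits to keep the link full")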

  2. 36 CFR § 1236.28 - What additional requirements apply to the selection and maintenance of electronic records storage...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... What additional requirements apply to the selection and maintenance of electronic records storage media for permanent records? (a) Agencies must maintain the storage and test areas for electronic records storage media for permanent records ... Agencies must copy unscheduled data on magnetic records storage media onto tested and verified new electronic media. ...

  3. A new technique in reference based DNA sequence compression algorithm: Enabling partial decompression

    NASA Astrophysics Data System (ADS)

    Banerjee, Kakoli; Prasad, R. A.

    2014-10-01

    The whole gamut of genetic data is increasing exponentially. The human genome in its base format occupies almost thirty terabytes of data and doubles in size every two and a half years. It is well known that computational resources are limited. The most important resource genetic data requires for its collection, storage and retrieval is storage space. Storage is limited, and computational performance also depends on storage and execution time. Transmission capabilities are likewise directly dependent on the size of the data. Hence, data compression techniques become an issue of utmost importance when we are confronted with the task of handling gigantic databases like GenBank. Decompression is also an issue when such huge databases are handled. This paper is intended not only to provide genetic data compression but also to enable partial decompression of genetic sequences.
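
    A minimal Python sketch of reference-based compression with block-wise (partial) decompression: only (position, base) differences from a shared reference are stored, grouped into fixed-size blocks so that any block can be restored on its own. This is a simplification of the general technique, not the paper's algorithm.

        BLOCK = 4  # tiny block size for illustration

        def compress(seq: str, ref: str) -> dict[int, list[tuple[int, str]]]:
            # Store only the positions where the sequence differs from the reference.
            blocks: dict[int, list[tuple[int, str]]] = {}
            for i, (s, r) in enumerate(zip(seq, ref)):
                if s != r:
                    blocks.setdefault(i // BLOCK, []).append((i, s))
            return blocks

        def decompress_block(blocks, ref: str, b: int) -> str:
            # Partial decompression: restore one block without touching the others.
            chunk = list(ref[b * BLOCK:(b + 1) * BLOCK])
            for pos, base in blocks.get(b, []):
                chunk[pos - b * BLOCK] = base
            return "".join(chunk)

        ref = "ACGTACGTACGT"
        seq = "ACGAACGTACCT"
        blocks = compress(seq, ref)
        assert decompress_block(blocks, ref, 2) == "ACCT"  # only block 2 restored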

  4. Antenna data storage concept for phased array radio astronomical instruments

    NASA Astrophysics Data System (ADS)

    Gunst, André W.; Kruithof, Gert H.

    2018-04-01

    Low-frequency radio astronomy instruments like LOFAR and SKA-LOW use arrays of dipole antennas to collect radio signals from the sky. Due to the large number of antennas involved, the total data rate produced by all the antennas is enormous. Storage of the antenna data is both economically and technologically infeasible using current state-of-the-art storage technology. Therefore, real-time processing of the antenna voltage data using beamforming and correlation is applied to achieve data reduction throughout the signal chain. However, most science could equally well be performed using an archive of raw antenna voltage data coming straight from the A/D converters, instead of capturing and processing the antenna data in real time over and over again. Trends in storage and computing technology make such an approach feasible on a time scale of approximately 10 years. The benefits of such a system approach are more science output and higher flexibility in science operations. In this paper we present a radically new system concept for a radio telescope based on storage of raw antenna data. LOFAR is used as an example of such a future instrument.
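
    Order-of-magnitude arithmetic behind the "storage is infeasible today" claim, as a minimal Python sketch; every number below is an illustrative assumption, not a LOFAR or SKA-LOW specification.

        n_antennas = 100_000        # assumed array size
        sample_rate = 200e6         # samples/s per polarization (assumed)
        bits = 12                   # bits per sample (assumed)
        polarizations = 2

        rate_bps = n_antennas * sample_rate * bits * polarizations
        per_year_pb = rate_bps / 8 * 3600 * 24 * 365 / 1e15
        print(f"{rate_bps/1e12:.0f} Tbit/s  ->  {per_year_pb:,.0f} PB/year of raw voltages")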

  5. A novel anti-piracy optical disk with photochromic diarylethene

    NASA Astrophysics Data System (ADS)

    Liu, Guodong; Cao, Guoqiang; Huang, Zhen; Wang, Shenqian; Zou, Daowen

    2005-09-01

    Diarylethene is a photochromic material with many advantages and one of the most promising recording materials for high-capacity optical data storage. Diarylethene has two forms, which can be converted into each other by laser beams of different wavelengths, and the material has been researched for rewritable optical disks. Volatile data storage is one of its properties, which has long been considered an obstacle to practical use, and much research has been devoted to overcoming it. In fact, volatile data storage is very useful for anti-piracy optical data storage. Piracy is a social and economic problem. One anti-piracy technology for optical data storage is to limit readout of the recorded data by encryption software; with the development of computer technologies, however, this kind of software is more and more easily cracked. Using photochromic diarylethene as the optical recording material, the signals of the recorded data are degraded as they are read, and readout of the data is thereby limited. Because this method uses hardware to realize anti-piracy, it cannot be cracked in software. In this paper, we introduce this usage of the material. Some experiments are presented to prove its feasibility.

  6. dCache: Big Data storage for HEP communities and beyond

    NASA Astrophysics Data System (ADS)

    Millar, A. P.; Behrmann, G.; Bernardt, C.; Fuhrmann, P.; Litvintsev, D.; Mkrtchyan, T.; Petersen, A.; Rossi, A.; Schwank, K.

    2014-06-01

    With over ten years in production use, the dCache data storage system has evolved to match the ever-changing landscape of storage technologies, with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 (pNFS), adoption of CDMI and WebDAV as alternatives to SRM for managing data, and integration with alternative authentication mechanisms.

  7. Encrypted holographic data storage based on orthogonal-phase-code multiplexing.

    PubMed

    Heanue, J F; Bashaw, M C; Hesselink, L

    1995-09-10

    We describe an encrypted holographic data-storage system that combines orthogonal-phase-code multiplexing with a random-phase key. The system offers the security advantages of random-phase coding but retains the low cross-talk performance and the minimum code storage requirements typical in an orthogonal-phase-code-multiplexing system.
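
    A toy numerical sketch of the property described above, under our own simplifications (Walsh-Hadamard codes stand in for the orthogonal phase codes, and no optics are modeled): decoding with the correct code recovers one page without crosstalk, and the random phase key must be known to decode at all.

      import numpy as np

      N = 8
      H = np.array([[1.0]])
      while H.shape[0] < N:                         # Sylvester construction
          H = np.block([[H, H], [H, -H]])

      pages = np.random.rand(N, 16)                 # N data pages, 16 pixels each
      key = np.exp(2j * np.pi * np.random.rand(16)) # random phase key

      composite = key * (H.T @ pages)               # superpose encoded pages
      decoded = (H @ (composite / key)).real / N    # remove key, correlate
      assert np.allclose(decoded, pages)            # no inter-page crosstalk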

  8. Data Retention Policy | High-Performance Computing | NREL

    Science.gov Websites

    HPC Data Retention Policy. File storage areas on Peregrine and Gyrfalcon are either user-centric or project-centric. We can make special arrangements for permanent storage, if needed. The user-centric retention period is 3 months after the last project ends; during this retention period, the user may log in to reclaim storage.

  9. Calculating the ecosystem service of water storage in isolated wetlands using LiDAR in north central Florida, USA (presentation)

    EPA Science Inventory

    This study used remotely-sensed Light Detection and Ranging (LiDAR) data to estimate potential water storage capacity of isolated wetlands in north central Florida. The data were used to calculate the water storage potential of >8500 polygons identified as isolated wetlands. We f...

  10. Calculating the ecosystem service of water storage in isolated wetlands using LIDAR in north central Florida, USA

    EPA Science Inventory

    This study used remotely-sensed Light Detection and Ranging (LiDAR) data to estimate potential water storage capacity of isolated wetlands in north central Florida. The data were used to calculate the water storage potential of >8500 polygons identified as isolated wetlands. We ...

  11. POSIX and Object Distributed Storage Systems Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing infrastructure has become an intensive dynamic system used for first-hand data collection and analysis, resulting in a dense collection of data output. As we have transitioned to our current state, inefficient, limited storage systems have become an impediment to fast feedback to online shift crews. A centrally accessible, scalable and redundant distributed storage system has become a necessity in this environment. OpenStack Swift Object Storage and Ceph Object Storage are two eye-opening technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests, emulating real-world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX-compliant file system mounted similarly to an NFS share, was of particular interest as it aligned with our requirements and was retained as our solution. I/O performance tests were run against the Ceph POSIX file system and presented surprising results indicating true potential for fast I/O and reliability. The STAR online compute farm has historically been used for job submission and first-hand data analysis; reusing it to host both a storage cluster and job submission will be an efficient use of the current infrastructure.

  12. Genomic big data hitting the storage bottleneck.

    PubMed

    Papageorgiou, Louis; Eleni, Picasi; Raftopoulou, Sofia; Mantaiou, Meropi; Megalooikonomou, Vasileios; Vlachakis, Dimitrios

    2018-01-01

    During the last decades, there has been a vast data explosion in bioinformatics. Big data centres are trying to face this data crisis, reaching high storage capacity levels. Although several scientific giants are examining how to handle the enormous pile of information in their cupboards, the problem remains unsolved. On a daily basis, a massive quantity of extensive information is permanently lost due to infrastructure and storage space problems, and the motivation for sequencing has fallen behind. Sometimes, the time spent solving storage space problems is longer than that dedicated to collecting and analysing data. To bring sequencing to the foreground, scientists have to slide over such obstacles and find alternative ways to approach the issue of data volume. The scientific community is experiencing the data crisis era, in which out-of-the-box solutions may ease the typical research workflow until technological development meets the needs of bioinformatics.

  13. Cryptonite: A Secure and Performant Data Repository on Public Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor

    2012-06-29

    Cloud storage has become immensely popular for maintaining synchronized copies of files and for sharing documents with collaborators. However, there is heightened concern about the security and privacy of Cloud-hosted data due to the shared infrastructure model and an implicit trust in the service providers. Emerging needs for secure data storage and sharing in domains like Smart Power Grids, which deal with sensitive consumer data, require the persistence and availability of Cloud storage but with client-controlled security and encryption, low key-management overhead, and minimal performance costs. Cryptonite is a secure Cloud storage repository that addresses these requirements using a StrongBox model for shared key management. We describe the Cryptonite service and desktop client, discuss performance optimizations, and provide an empirical analysis of the improvements. Our experiments show that Cryptonite clients achieve a 40% improvement in file upload bandwidth over plaintext storage using the Azure Storage Client API despite the added security benefits, while our file download performance is 5 times faster than the baseline for files greater than 100 MB.
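
    The underlying pattern is that data is sealed on the client before it ever reaches the provider. A minimal sketch with a generic symmetric-encryption library (our illustration; Cryptonite's StrongBox key management and Azure integration are not reproduced here, and the upload call is hypothetical):

      from cryptography.fernet import Fernet

      key = Fernet.generate_key()        # stays with the client / key service
      box = Fernet(key)

      plaintext = b"sensitive smart-grid consumer readings"
      blob = box.encrypt(plaintext)      # only this ciphertext is uploaded

      # e.g. container_client.upload_blob("readings", blob)  # hypothetical call
      assert box.decrypt(blob) == plaintext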

  14. Fast, axis-agnostic, dynamically summarized storage and retrieval for mass spectrometry data.

    PubMed

    Handy, Kyle; Rosen, Jebediah; Gillan, André; Smith, Rob

    2017-01-01

    Mass spectrometry, a popular technique for elucidating the molecular contents of experimental samples, creates data sets comprised of millions of three-dimensional (m/z, retention time, intensity) data points that correspond to the types and quantities of analyzed molecules. Open and commercial MS data formats are arranged by retention time, creating latency when accessing data across multiple m/z. Existing MS storage and retrieval methods have been developed to overcome the limitations of retention time-based data formats, but do not provide certain features such as dynamic summarization and storage and retrieval of point meta-data (such as signal cluster membership), precluding efficient viewing applications and certain data-processing approaches. This manuscript describes MzTree, a spatial database designed to provide real-time storage and retrieval of dynamically summarized standard and augmented MS data with fast performance in both m/z and RT directions. Performance is reported on real data with comparisons against related published retrieval systems.
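
    A minimal sketch of the retrieval problem being solved, using a plain sorted index rather than MzTree's spatial tree (our simplification): keeping the same points ordered by m/z as well as by retention time makes range queries cheap along either axis.

      import bisect

      points = [(100.2, 5.1, 900.0), (100.3, 7.7, 120.0),
                (250.8, 5.2, 430.0), (251.1, 9.9, 80.0)]   # (mz, rt, intensity)

      by_mz = sorted(points)                 # mz-major order
      mz_keys = [p[0] for p in by_mz]

      def mz_slice(lo, hi):
          """All points with lo <= m/z <= hi, without scanning every RT."""
          return by_mz[bisect.bisect_left(mz_keys, lo):
                       bisect.bisect_right(mz_keys, hi)]

      print(mz_slice(100.0, 101.0))          # the two points near m/z 100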

  15. Study on parallel and distributed management of RS data based on spatial data base

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Liu, Shijin

    2006-12-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment, so a tough burden is put on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for its storage and management, and much information is lost or omitted at storage time. Facing these two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data across different resolutions, areas, bands and periods is achieved. In data storage, RS data is not divided into binary large objects stored in a conventional relational database system; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, this paper sets up a framework based on a parallel server of several commodity computers. Under this framework, the background process is divided into two parts: the common Web process and the parallel process.
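
    A minimal sketch of how such a four-dimensional solid index might look, based on our reading of the abstract (the class name, key layout and paths are assumptions):

      class SolidIndex:
          """Address tiles by (Pyramid, Block, Layer, Epoch) instead of BLOBs."""
          def __init__(self):
              self._tiles = {}                       # (p, b, l, e) -> tile path

          def put(self, pyramid, block, layer, epoch, path):
              self._tiles[(pyramid, block, layer, epoch)] = path

          def get(self, pyramid, block, layer, epoch):
              return self._tiles.get((pyramid, block, layer, epoch))

      idx = SolidIndex()
      idx.put(pyramid=2, block=(10, 14), layer="NIR", epoch="2005-07",
              path="node3:/rs/2/10_14/nir/200507.tile")
      print(idx.get(2, (10, 14), "NIR", "2005-07"))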

  16. Cavity-enhanced eigenmode and angular hybrid multiplexing in holographic data storage systems.

    PubMed

    Miller, Bo E; Takashima, Yuzuru

    2016-12-26

    Resonant optical cavities have been demonstrated to improve energy efficiencies in Holographic Data Storage Systems (HDSS). The orthogonal reference beams supported as cavity eigenmodes can provide another multiplexing degree of freedom to push storage densities toward the limit of 3D optical data storage. While keeping the increased energy efficiency of a cavity enhanced reference arm, image bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at two Bragg angles. We experimentally confirmed write rates are enhanced by an average factor of 1.1, and page crosstalk is about 2.5%. This hybrid multiplexing opens up a pathway to increase storage density while minimizing modification of current angular multiplexing HDSS.

  17. Dynamic-RAM Data Storage Unit

    NASA Technical Reports Server (NTRS)

    Sturman, J. C.

    1985-01-01

    Dynamic random-access-memory (RAM) data delay and storage unit developed to ensure that data received from a satellite is stored and not lost when the satellite is not within range of a ground station. Stores 256K of serial data, with independent read and write capability.

  18. Data on conceptual design of cryogenic energy storage system combined with liquefied natural gas regasification process.

    PubMed

    Lee, Inkyu; Park, Jinwoo; Moon, Il

    2017-12-01

    This paper describes the data for an integrated process: a cryogenic energy storage system combined with a liquefied natural gas (LNG) regasification process. The data in this paper are associated with the article entitled "Conceptual Design and Exergy Analysis of Combined Cryogenic Energy Storage and LNG Regasification Processes: Cold and Power Integration" (Lee et al., 2017) [1]. The data include the sensitivity case-study dataset for the air flow rate and the heat-exchange feasibility data given by composite curves. The data are expected to be helpful for cryogenic energy process development.

  19. Comprehensive monitoring for heterogeneous geographically distributed storage

    DOE PAGES

    Ratnikova, Natalia; Karavakis, E.; Lammel, S.; ...

    2015-12-23

    Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014, and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted as centrally managed storage space, will be increased to up to 40%. For comprehensive tracking and monitoring of storage utilization across all participating sites, CMS developed a space monitoring system which provides a central view of the geographically dispersed heterogeneous storage systems. The first prototype was deployed at pilot sites in summer 2014, and has been substantially reworked since then. In this study, we discuss the functionality and our experience of system deployment and operation at the full CMS scale.

  20. Requirements for the structured recording of surgical device data in the digital operating room.

    PubMed

    Rockstroh, Max; Franke, Stefan; Neumuth, Thomas

    2014-01-01

    Due to the increasing complexity of the surgical working environment, technical solutions are increasingly needed to help relieve the surgeon. This objective is supported by a structured storage concept for all relevant device data. In this work, we present a concept and prototype development of a storage system to address intraoperative medical data. The requirements of such a system are described, and solutions for data transfer, processing, and storage are presented. In a subsequent study, a prototype based on the presented concept is tested for correct and complete data transmission and storage, and for the ability to record a complete neurosurgical intervention with low processing latencies. In the final section, several applications for the presented data recorder are shown. The developed system based on the presented concept is able to store the generated data correctly, completely, and quickly enough, even if much more data than expected are sent during a surgical intervention. The Surgical Data Recorder supports automatic recognition of the interventional situation by providing a centralized data storage and access interface to the OR communication bus. In the future, further data acquisition technologies should be integrated; therefore, additional interfaces must be developed. The data generated by these devices and technologies should also be stored in, or referenced by, the Surgical Data Recorder to support the analysis of the OR situation.

  1. Study on parallel and distributed management of RS data based on spatial database

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment, so a tough burden is put on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for its storage and management, and much information is lost or omitted at storage time. Facing these two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data across different resolutions, areas, bands and periods is achieved. In data storage, RS data is not divided into binary large objects stored in a conventional relational database system; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, this paper sets up a framework based on a parallel server of several commodity computers. Under this framework, the background process is divided into two parts: the common Web process and the parallel process.

  2. Disk storage management for LHCb based on Data Popularity estimator

    NASA Astrophysics Data System (ADS)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
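
    A heavily simplified sketch of that pipeline (a linear trend stands in for the paper's machine-learning and time-series models, and the replica rule is our invention):

      import numpy as np

      def predict_next(access_history):
          """Extrapolate weekly access counts one step with a linear fit."""
          t = np.arange(len(access_history))
          slope, intercept = np.polyfit(t, access_history, 1)
          return max(0.0, slope * len(access_history) + intercept)

      def replicas_for(popularity, max_replicas=4):
          """More predicted accesses -> more replicas; cold data leaves disk."""
          if popularity < 1:
              return 0                      # archive copy stays on tape only
          return min(max_replicas, 1 + int(np.log2(popularity)))

      history = [120, 90, 60, 40, 25, 15]   # a cooling dataset
      p = predict_next(history)
      print(f"predicted accesses: {p:.0f} -> {replicas_for(p)} disk replicas")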

  3. FORCEnet Net Centric Architecture - A Standards View

    DTIC Science & Technology

    2006-06-01

    [Chart residue from the original briefing: a layered FORCEnet service architecture showing user-facing services; shared services; networking/communications; storage; computing platform; data interchange/integration; data management; service platform; and service framework.]

  4. Storage media for computers in radiology

    PubMed Central

    Dandu, Ravi Varma

    2008-01-01

    The introduction and wide acceptance of digital technology in medical imaging has resulted in an exponential increase in the amount of data produced by the radiology department. There is an insatiable need for storage space to archive this ever-growing volume of image data. Healthcare facilities should plan the type and size of the storage media that they need based not just on the volume of data but also on considerations such as the speed and ease of access, redundancy, security, costs, as well as the longevity of the archival technology. This article reviews the various digital storage media and compares their merits and demerits. PMID:19774182

  5. Modeling of the Assiniboine Delta Aquifer (ADA) of Manitoba using the Groundwater Storage from GRACE

    NASA Astrophysics Data System (ADS)

    Yirdaw-Zeleke, S.; Snelgrove, K.

    2007-12-01

    This paper investigates the use of GRACE (Gravity Recovery and Climate Experiment) moisture storage data for modeling the Assiniboine Delta Aquifer (ADA) of Manitoba, Canada. GRACE holds great promise for capturing regional groundwater storage that can potentially be used in modeling applications. However, it is well known that these storages are difficult to measure over the scales needed for hydrological model applications. Therefore, prior to modeling the aquifer using GRACE moisture storages, the storages need to be downscaled into regional groundwater storages using the measured groundwater head data available in the area. Previous studies in the ADA have shown that the downscaled moisture storage estimates compare favorably with the measured groundwater storage over the area. This study focuses on modeling the ADA aquifer using the downscaled GRACE moisture storages. These storages will be used to initialize, calibrate and potentially steer the hydrologic simulation. The calibrated model will then be validated independently using the measured data. These validations will hopefully provide better explanations for the underlying differences between model predictions and measurements, identify some of the key assumptions and uncertainties in predicting moisture storage, and so highlight topics for further discussion and research.

  6. Data storage systems technology for the Space Station era

    NASA Technical Reports Server (NTRS)

    Dalton, John; Mccaleb, Fred; Sos, John; Chesney, James; Howell, David

    1987-01-01

    The paper presents the results of an internal NASA study to determine if economically feasible data storage solutions are likely to be available to support the ground data transport segment of the Space Station mission. An internal NASA effort to prototype a portion of the required ground data processing system is outlined. It is concluded that the requirements for all ground data storage functions can be met with commercial disk and tape drives assuming conservative technology improvements and that, to meet Space Station data rates with commercial technology, the data will have to be distributed over multiple devices operating in parallel and in a sustained maximum throughput mode.

  7. Durable High-Density Data Storage

    NASA Technical Reports Server (NTRS)

    Lamartine, Bruce C.; Stutz, Roger A.

    1996-01-01

    The focused ion beam (FIB) micromilling process for data storage provides a new non-magnetic method for archiving large amounts of data. The process stores data on robust materials such as steel, silicon, and gold-coated silicon, and was developed to ensure the long-term storage life of data. We estimate the useful life of data written on silicon or gold-coated silicon to be on the order of a few thousand years, without the need to rewrite the data every few years. The process uses an ion beam to carve material from the surface, much as stone cutters in ancient civilizations removed material from stone. The deeper the information is carved into the media, the longer its expected life. The process can record information in three formats: (1) binary at densities of 23 Gbits/square inch, (2) alphanumeric at optical or non-optical density, and (3) graphical at optical and non-optical density. The formats can be mixed on the same media; thus, it is possible to record, in a human-viewable format, instructions that can be read using an optical microscope. These instructions provide guidance on reading the remaining higher-density information.

  8. Implementation of Organ Culture storage of donor corneas: a 3 year study of its impact on the corneal transplant wait list at the Lions New South Wales Eye Bank.

    PubMed

    Devasahayam, Raj; Georges, Pierre; Hodge, Christopher; Treloggen, Jane; Cooper, Simon; Petsoglou, Con; Sutton, Gerard; Zhu, Meidong

    2016-09-01

    Organ Culture corneal storage offers an extended storage time, an increased donor pool and more tissue assessment opportunities. In September 2011, the Lions New South Wales Eye Bank (LNSWEB) moved from hypothermic storage to Organ Culture corneal storage. This study evaluates the impact of the implementation of Organ Culture on donor eye retrieval and the corneal transplant waiting list over a 3-year period in NSW, Australia. Retrospective review of the LNSWEB data from September 2011 to August 2014. Tissue collection, waiting list and tissue utilization data were recorded. The data from September 2008 to August 2011 for Optisol-GS storage were used for comparison. The annual donor and cornea collection rates increased by 35% and 44%, respectively, with Organ Culture compared to Optisol-GS storage. The utilization rate of corneal tissue increased from 73.4% with hypothermic storage to 77.2% with Organ Culture storage. The transplant wait list decreased by 77.3% from September 2011 to August 2014, which correlated with the increased rate of corneal transplantation (r = -0.9381, p < 0.0001). No other factors impacting the wait list changed over this period. Corneas not used from either storage method were excluded because of unacceptable endothelial cell density/viability. The contamination rate of corneas stored in Organ Culture medium was low, at 1.74%. The Organ Culture storage method increases the corneal donor pool available to eye banks. The practical benefits of the extended storage time and increased donor assessment opportunities have directly led to an increase in the corneal utilization rate and a significant decrease in recipient wait list time.

  9. A system approach to archival storage

    NASA Technical Reports Server (NTRS)

    Corcoran, John W.

    1991-01-01

    The introduction and viewgraphs of a discussion on a system approach to archival storage, presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop, are included. The use of D-2 iron-particle media for archival storage is discussed, along with how acceleration factors relating short-term tests to archival lifetimes can be justified. Ampex Recording Systems is transferring D-2 video technology to data storage applications and encountering concerns about corrosion. To protect the D-2 standard, Battelle tests were done on all four tapes in the Class 2 environment. Error rates were measured before and after the test on both exposed and control groups.

  10. Storage system architectures and their characteristics

    NASA Technical Reports Server (NTRS)

    Sarandrea, Bryan M.

    1993-01-01

    Not all users' storage requirements call for 20 MB/s data transfer rates, multi-tier file or data migration schemes, or even automated retrieval of data. The number of available storage solutions reflects the broad range of user requirements. It is foolish to think that any one solution can address the complete range of requirements. For users with simple off-line storage requirements, the cost and complexity of high-end solutions would provide no advantage over a simpler solution. The correct answer is to match the requirements of a particular storage need to the various attributes of the available solutions. The goal of this paper is to introduce basic concepts of archiving and storage management in combination with the most common architectures, and to provide some insight into how these concepts and architectures address various storage problems. The intent is to provide potential consumers of storage technology with a framework within which to begin the hunt for a solution that meets their particular needs. This paper is not intended to be an exhaustive study or to address all possible solutions or new technologies; it is intended to be a more practical treatment of today's storage system alternatives. Since most commercial storage systems today are built on Open Systems concepts, the majority of these solutions are hosted on the UNIX operating system. For this reason, some of the architectural issues discussed focus on specific UNIX architectural concepts. However, most of the architectures are operating-system independent and the conclusions are applicable to such architectures on any operating system.

  11. Mash-up of techniques between data crawling/transfer, data preservation/stewardship and data processing/visualization technologies on a science cloud system designed for Earth and space science: a report of successful operation and science projects of the NICT Science Cloud

    NASA Astrophysics Data System (ADS)

    Murata, K. T.

    2014-12-01

    Data-intensive or data-centric science is the 4th paradigm, after observational and/or experimental science (1st paradigm), theoretical science (2nd paradigm) and numerical science (3rd paradigm). A science cloud is an infrastructure for this 4th methodology. The NICT Science Cloud is designed for big-data sciences of Earth, space and other fields, based on modern informatics and information technologies [1]. Data flow on the cloud passes through three techniques: (1) data crawling and transfer, (2) data preservation and stewardship, and (3) data processing and visualization. Original tools and applications for these techniques have been designed and implemented, and we mash them up on the NICT Science Cloud to build customized systems for each project. In this paper, we discuss science data processing through these three steps. For big-data science, data file deployment on a distributed storage system should be well designed in order to save storage cost and transfer time. We developed a high-bandwidth virtual remote storage system (HbVRS) together with a data crawling tool (NICTY/DLA) and a Wide-area Observation Network Monitoring (WONM) system. Data files are saved on the cloud storage system according to both the data preservation policy and the data processing plan. The storage system is built on distributed file system middleware (Gfarm: GRID datafarm). It is effective since disaster recovery (DR) and parallel data processing are carried out simultaneously, without moving these big data from storage to storage. Data files are managed via our Web application, WSDBank (World Science Data Bank). The big data on the cloud are processed via Pwrake, a workflow tool with high I/O bandwidth. There are several visualization tools on the cloud: VirtualAurora for the magnetosphere and ionosphere, VDVGE for Google Earth, STICKER for urban environment data and STARStouch for multi-disciplinary data. There are 30 projects running on the NICT Science Cloud for Earth and space science. In 2003, 56 refereed papers were published. At the end, we introduce a couple of successful Earth and space science results obtained with these three techniques on the NICT Science Cloud. [1] http://sc-web.nict.go.jp

  12. Petabyte Class Storage at Jefferson Lab (CEBAF)

    NASA Technical Reports Server (NTRS)

    Chambers, Rita; Davis, Mark

    1996-01-01

    By 1997, the Thomas Jefferson National Accelerator Facility will collect over one Terabyte of raw information per day of Accelerator operation from three concurrently operating Experimental Halls. When post-processing is included, roughly 250 TB of raw and formatted experimental data will be generated each year. By the year 2000, a total of one Petabyte will be stored on-line. Critical to the experimental program at Jefferson Lab (JLab) is the networking and computational capability to collect, store, retrieve, and reconstruct data on this scale. The design criteria include support of a raw data stream of 10-12 MB/second from Experimental Hall B, which will operate the CEBAF (Continuous Electron Beam Accelerator Facility) Large Acceptance Spectrometer (CLAS). Keeping up with this data stream implies design strategies that provide storage guarantees during accelerator operation, minimize the number of times data is buffered, allow seamless access to specific data sets for the researcher, synchronize data retrievals with the scheduling of post-processing calculations on the data reconstruction CPU farms, and support the site capability to perform data reconstruction and reduction at the same overall rate at which new data is being collected. The current implementation employs state-of-the-art StorageTek Redwood tape drives and a robotics library integrated with the Open Storage Manager (OSM) Hierarchical Storage Management software (Computer Associates, International), the use of Fibre Channel RAID disks dual-ported between Sun Microsystems SMP servers, and a network-based interface to a 10,000 SPECint92 data processing CPU farm. Issues of efficiency, scalability, and manageability will become critical to meet the year 2000 requirements for a Petabyte of near-line storage interfaced to over 30,000 SPECint92 of data processing power.
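
    The quoted rates are easy to sanity-check with a line of arithmetic (our check, using only numbers from the abstract):

      hall_b_rate = 12e6                      # bytes/s, upper CLAS estimate
      per_day = hall_b_rate * 86400
      print(f"{per_day / 1e12:.2f} TB/day")   # ~1.04 TB/day, matching the text
      print(f"{250e12 / per_day:.0f} days of running for 250 TB/year")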

  13. The cornerstone of data warehousing for government applications

    NASA Technical Reports Server (NTRS)

    Kenbeek, Doug; Rothschild, Jack

    1996-01-01

    The purpose of this paper is to discuss data warehousing storage issues and the impact of EMC open storage technology for meeting the myriad challenges government organizations face when building Decision Support/Data Warehouse systems.

  14. Effect of storage time on gene expression data acquired from unfrozen archived newborn blood spots.

    PubMed

    Ho, Nhan T; Busik, Julia V; Resau, James H; Paneth, Nigel; Khoo, Sok Kean

    2016-11-01

    Unfrozen archived newborn blood spots (NBS) have been shown to retain sufficient messenger RNA (mRNA) for gene expression profiling. However, the effect of storage time at ambient temperature for NBS samples in relation to the quality of gene expression data is relatively unknown. Here, we evaluated mRNA expression from quantitative real-time PCR (qRT-PCR) and microarray data obtained from NBS samples stored at ambient temperature to determine the effect of storage time on the quality of gene expression. These data were generated in a previous case-control study examining NBS in 53 children with cerebral palsy (CP) and 53 matched controls. The NBS sample storage period ranged from 3 to 16 years at ambient temperature. We found persistently low RNA integrity numbers (RIN = 2.3 ± 0.71) and 28S/18S rRNA ratios (~0) across NBS samples for all storage periods. In both qRT-PCR and microarray data, the expression of three common housekeeping genes - beta cytoskeletal actin (ACTB), glyceraldehyde 3-phosphate dehydrogenase (GAPDH), and peptidylprolyl isomerase A (PPIA) - decreased with increased storage time. Median values of each microarray probe intensity at log 2 scale also decreased over time. After eight years of storage, probe intensity values were largely reduced to background intensity levels. Of 21,500 genes tested, 89% significantly decreased in signal intensity, with 13,551, 10,730, and 9,925 genes detected within 5 years, >5 to <10 years, and >10 years of storage, respectively. We also examined the expression of two gender-specific genes (X inactivation-specific transcript, XIST, and lysine-specific demethylase 5D, KDM5D) and seven gene sets representing the inflammatory, hypoxic, coagulative, and thyroidal pathways hypothesized to be related to CP risk to determine the effect of storage time on the detection of these biologically relevant genes. We found the gender-specific genes and CP-related gene sets detectable in all storage periods, but they exhibited differential expression (between male vs. female or CP vs. control) only within the first six years of storage. We concluded that gene expression data quality deteriorates in unfrozen archived NBS over time, and that differential gene expression profiling and analysis is recommended only for NBS samples collected and stored within six years at ambient temperature. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Towards Regional, Error-Bounded Landscape Carbon Storage Estimates for Data-Deficient Areas of the World

    PubMed Central

    Willcock, Simon; Phillips, Oliver L.; Platts, Philip J.; Balmford, Andrew; Burgess, Neil D.; Lovett, Jon C.; Ahrends, Antje; Bayliss, Julian; Doggart, Nike; Doody, Kathryn; Fanning, Eibleis; Green, Jonathan; Hall, Jaclyn; Howell, Kim L.; Marchant, Rob; Marshall, Andrew R.; Mbilinyi, Boniface; Munishi, Pantaleon K. T.; Owen, Nisha; Swetnam, Ruth D.; Topp-Jorgensen, Elmer J.; Lewis, Simon L.

    2012-01-01

    Monitoring landscape carbon storage is critical for supporting and validating climate change mitigation policies. These may be aimed at reducing deforestation and degradation, or increasing terrestrial carbon storage at local, regional and global levels. However, due to data-deficiencies, default global carbon storage values for given land cover types such as ‘lowland tropical forest’ are often used, termed ‘Tier 1 type’ analyses by the Intergovernmental Panel on Climate Change (IPCC). Such estimates may be erroneous when used at regional scales. Furthermore uncertainty assessments are rarely provided leading to estimates of land cover change carbon fluxes of unknown precision which may undermine efforts to properly evaluate land cover policies aimed at altering land cover dynamics. Here, we present a repeatable method to estimate carbon storage values and associated 95% confidence intervals (CI) for all five IPCC carbon pools (aboveground live carbon, litter, coarse woody debris, belowground live carbon and soil carbon) for data-deficient regions, using a combination of existing inventory data and systematic literature searches, weighted to ensure the final values are regionally specific. The method meets the IPCC ‘Tier 2’ reporting standard. We use this method to estimate carbon storage over an area of 33.9 million hectares of eastern Tanzania, reporting values for 30 land cover types. We estimate that this area stored 6.33 (5.92–6.74) Pg C in the year 2000. Carbon storage estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been underestimated for this region of Africa. Our study demonstrates the importance of obtaining regionally appropriate carbon storage estimates, and shows how such values can be produced for a relatively low investment. PMID:23024764
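
    One way to obtain such error-bounded pooled estimates is to bootstrap over plot-level carbon densities; the sketch below is our methodological illustration with invented numbers, not the paper's weighting scheme:

      import random

      plots = [142.0, 156.5, 128.3, 171.9, 149.2, 133.8]   # Mg C / ha, assumed

      def bootstrap_ci(data, n_boot=10_000, seed=1):
          """Resample plot means to get a 95% confidence interval."""
          random.seed(seed)
          means = sorted(sum(random.choices(data, k=len(data))) / len(data)
                         for _ in range(n_boot))
          return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

      lo, hi = bootstrap_ci(plots)
      print(f"mean {sum(plots)/len(plots):.1f}, 95% CI ({lo:.1f}, {hi:.1f})")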

  16. Towards regional, error-bounded landscape carbon storage estimates for data-deficient areas of the world.

    PubMed

    Willcock, Simon; Phillips, Oliver L; Platts, Philip J; Balmford, Andrew; Burgess, Neil D; Lovett, Jon C; Ahrends, Antje; Bayliss, Julian; Doggart, Nike; Doody, Kathryn; Fanning, Eibleis; Green, Jonathan; Hall, Jaclyn; Howell, Kim L; Marchant, Rob; Marshall, Andrew R; Mbilinyi, Boniface; Munishi, Pantaleon K T; Owen, Nisha; Swetnam, Ruth D; Topp-Jorgensen, Elmer J; Lewis, Simon L

    2012-01-01

    Monitoring landscape carbon storage is critical for supporting and validating climate change mitigation policies. These may be aimed at reducing deforestation and degradation, or increasing terrestrial carbon storage at local, regional and global levels. However, due to data-deficiencies, default global carbon storage values for given land cover types such as 'lowland tropical forest' are often used, termed 'Tier 1 type' analyses by the Intergovernmental Panel on Climate Change (IPCC). Such estimates may be erroneous when used at regional scales. Furthermore uncertainty assessments are rarely provided leading to estimates of land cover change carbon fluxes of unknown precision which may undermine efforts to properly evaluate land cover policies aimed at altering land cover dynamics. Here, we present a repeatable method to estimate carbon storage values and associated 95% confidence intervals (CI) for all five IPCC carbon pools (aboveground live carbon, litter, coarse woody debris, belowground live carbon and soil carbon) for data-deficient regions, using a combination of existing inventory data and systematic literature searches, weighted to ensure the final values are regionally specific. The method meets the IPCC 'Tier 2' reporting standard. We use this method to estimate carbon storage over an area of 33.9 million hectares of eastern Tanzania, reporting values for 30 land cover types. We estimate that this area stored 6.33 (5.92-6.74) Pg C in the year 2000. Carbon storage estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been underestimated for this region of Africa. Our study demonstrates the importance of obtaining regionally appropriate carbon storage estimates, and shows how such values can be produced for a relatively low investment.

  17. FPGA-based prototype storage system with phase change memory

    NASA Astrophysics Data System (ADS)

    Li, Gezi; Chen, Xiaogang; Chen, Bomy; Li, Shunfen; Zhou, Mi; Han, Wenbing; Song, Zhitang

    2016-10-01

    With the ever-increasing amount of data being stored via social media, mobile telephony base stations, network devices, etc., database systems face severe bandwidth bottlenecks when moving vast amounts of data from storage to the processing nodes. At the same time, Storage Class Memory (SCM) technologies such as Phase Change Memory (PCM), with unique features like fast read access, high density, non-volatility, byte-addressability, positive response to increasing temperature, superior scalability, and zero standby leakage, have changed the landscape of modern computing and storage systems. In such a scenario, we present a storage system called FLEET which can off-load partial or whole SQL queries from the CPU to the storage engine. FLEET uses an FPGA rather than conventional CPUs to implement the off-load engine due to its highly parallel nature. We have implemented an initial prototype of FLEET with PCM-based storage. The results demonstrate that significant performance and CPU utilization gains can be achieved by pushing selected query processing components into PCM-based storage.

  18. Remote Sensing of Groundwater Storage Changes in Illinois using the Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Yeh, Pat J.-F.; Swenson, S. C.; Famiglietti, J. S.; Rodell, M.

    2007-01-01

    Regional groundwater storage changes in Illinois are estimated from monthly GRACE total water storage change (TWSC) data and in situ measurements of soil moisture for the period 2002-2005. Groundwater storage change estimates are compared to those derived from the soil moisture and available well level data. The seasonal pattern and amplitude of GRACE-estimated groundwater storage changes track those of the in situ measurements reasonably well, although substantial differences exist in month-to-month variations. The seasonal cycle of GRACE TWSC agrees well with observations (correlation coefficient = 0.83), while the seasonal cycle of GRACE-based estimates of groundwater storage changes beneath 2 m depth agrees with observations with a correlation coefficient of 0.63. We conclude that the GRACE-based method of estimating monthly to seasonal groundwater storage changes performs reasonably well at the 200,000 sq km scale of Illinois.
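
    The water budget behind these estimates reduces to a subtraction: the groundwater storage change is the GRACE total water storage change minus the observed soil-moisture change. A worked example with illustrative numbers:

      twsc = [35.0, 20.0, -5.0, -30.0]      # GRACE dTWS, mm equivalent water
      soil = [25.0, 12.0, -2.0, -18.0]      # in situ soil-moisture change, mm

      groundwater = [t - s for t, s in zip(twsc, soil)]
      print(groundwater)                    # [10.0, 8.0, -3.0, -12.0]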

  19. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
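
    A compact sketch of the pre-/post-trigger archiving logic, with buffer sizes and the change threshold assumed by us (the actual design is a VLSI state machine with fuzzy-logic detection, not software):

      from collections import deque

      PRE, POST, THRESHOLD = 8, 8, 50.0

      def archive_events(frames):
          ring = deque(maxlen=PRE)          # rolling pre-trigger history
          saved, post_left, prev = [], 0, None
          for frame in frames:
              change = abs(frame - prev) if prev is not None else 0.0
              prev = frame
              if post_left:                 # still inside a triggered window
                  saved.append(frame)
                  post_left -= 1
              elif change > THRESHOLD:      # event onset detected
                  saved.extend(ring)        # flush the pre-trigger frames
                  saved.append(frame)
                  post_left = POST
                  ring.clear()
              else:
                  ring.append(frame)        # dull frame, keep for context
          return saved

      # 100 dull frames with one step change: only a 17-frame window is kept
      print(len(archive_events([0.0] * 60 + [100.0] * 40)))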

  20. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  1. PACS storage technology update: holographic storage.

    PubMed

    Colang, John E; Johnston, James N

    2006-01-01

    This paper focuses on the emerging technology of holographic storage and its effect on picture archiving and communication systems (PACS). A review of the emerging technology is presented, which includes a high-level description of holographic drives and the associated substrate media, the laser and optical technology, and the spatial light modulator. The potential advantages and disadvantages of holographic drive and storage technology are evaluated. PACS administrators face myriad complex and expensive storage solutions, and selecting an appropriate system is time-consuming and costly. Storage technology may become obsolete quickly because of the exponential nature of the advances in digital storage media. Holographic storage may turn out to be a low-cost, high-speed, high-volume storage solution of the future; however, data are inconclusive at this early stage of the technology lifecycle. Despite the current lack of quantitative data to support the hypothesis that holographic technology will have a significant effect on PACS and standards of practice, it seems likely from the current information that holographic technology will generate significant efficiencies. This paper assumes the reader has a fundamental understanding of PACS technology.

  2. An effective XML based name mapping mechanism within StoRM

    NASA Astrophysics Data System (ADS)

    Corso, E.; Forti, A.; Ghiselli, A.; Magnoni, L.; Zappi, R.

    2008-07-01

    In a Grid environment, the naming capability allows users to refer to specific data resources in a physical storage system using a high-level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data, evaluating a set of parameters such as the desired quality of service and the VOMS attributes specified in the requests. StoRM is an SRM service developed by INFN and ICTP-EGRID to manage files and space on standard POSIX and high-performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of data archived in various storage areas with different qualities of service. The mapping mechanism proposed in StoRM is based on an XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide and the Virtual Organizations that want to use each storage area. An appropriate directory tree is realized in each storage component, reflecting the XML schema. In this scenario, StoRM is able to identify the physical location of requested data by evaluating the logical identifier and the specified attributes against the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented and the technical details of the StoRM implementation.
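
    A minimal sketch of such an XML-driven mapping, with a schema invented for illustration (StoRM's real namespace format differs): the physical path is resolved from the document and the request attributes alone, with no database query.

      import xml.etree.ElementTree as ET

      NAMESPACE_XML = """
      <namespace>
        <storage-area name="cms-hot" vo="cms" quality="disk"
                      root="/gpfs/disk/cms"/>
        <storage-area name="cms-cold" vo="cms" quality="tape"
                      root="/gpfs/tape/cms"/>
      </namespace>
      """

      def resolve(logical_name, vo, quality):
          """Map a logical name to a physical path via the XML document."""
          for sa in ET.fromstring(NAMESPACE_XML).findall("storage-area"):
              if sa.get("vo") == vo and sa.get("quality") == quality:
                  return sa.get("root") + logical_name
          raise LookupError("no storage area for this VO/quality")

      print(resolve("/2008/run42/hits.root", vo="cms", quality="disk"))
      # -> /gpfs/disk/cms/2008/run42/hits.root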

  3. DNA as a digital information storage device: hope or hype?

    PubMed

    Panda, Darshan; Molla, Kutubuddin Ali; Baig, Mirza Jainul; Swain, Alaka; Behera, Deeptirekha; Dash, Manaswini

    2018-05-01

    The total digital information today amounts to 3.52 × 10^22 bits globally and, at its consistent exponential rate of growth, is expected to reach 3 × 10^24 bits by 2040. The data storage density of silicon chips is limited, and magnetic tapes used to maintain large-scale permanent archives begin to deteriorate within 20 years. Since silicon has limited data storage ability and serious limitations, such as human health hazards and environmental pollution, researchers across the world are intently searching for an appropriate alternative. Deoxyribonucleic acid (DNA) is an appealing option for such a purpose due to its endurance, a higher degree of compaction, and similarity to the sequential code of 0's and 1's as found in a computer. This emerging field of DNA as a means of data storage has the potential to transform science fiction into reality, wherein a device that can fit in our palms can accommodate the information of the entire world, as the latest research has revealed that just four grams of DNA could store the annual global digital information. DNA has all the properties to supersede the conventional hard disk, as it is capable of retaining ten times more data, has a thousandfold storage density, and consumes 10^8 times less power to store a similar amount of data. Although DNA has enormous potential as a data storage device of the future, multiple bottlenecks such as exorbitant costs, excruciatingly slow writing and reading mechanisms, and vulnerability to mutations or errors need to be resolved. In this review, we have critically analyzed the emergence of DNA as a molecular storage device for the future, its ability to address the future digital data crunch, potential challenges in achieving this objective, various current industrial initiatives, and major breakthroughs.
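
    The density claim can be sanity-checked from textbook constants (our arithmetic; real systems add error-correction and addressing overhead on top of this theoretical minimum):

      AVOGADRO = 6.022e23
      NT_MASS_G_PER_MOL = 330.0    # average mass of one ssDNA nucleotide
      BITS_PER_NT = 2.0            # A/C/G/T encodes 2 bits before coding overhead

      bits_per_gram = BITS_PER_NT * AVOGADRO / NT_MASS_G_PER_MOL
      print(f"{bits_per_gram:.1e} bits/g")                        # ~3.6e21
      print(f"{3.52e22 / bits_per_gram:.1f} g for today's data")  # ~10 g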

  4. A parallel architecture of interpolated timing recovery for high- speed data transfer rate and wide capture-range

    NASA Astrophysics Data System (ADS)

    Higashino, Satoru; Kobayashi, Shoei; Yamagami, Tamotsu

    2007-06-01

    Higher data transfer rates have been demanded of data storage devices along with increasing storage capacity. In order to increase the transfer rate, high-speed data processing techniques are required in read-channel devices. Generally, a parallel architecture is utilized for high-speed digital processing. We have developed a new architecture for Interpolated Timing Recovery (ITR) to achieve a high data transfer rate and wide capture range in read-channel devices for information storage channels. It facilitates parallel implementation on large-scale-integration (LSI) devices.
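
    A conceptual sketch of interpolated timing recovery with a block-parallel loop standing in for the parallel datapath (our rendering with a simple linear interpolator; the paper's LSI architecture is more sophisticated):

      import numpy as np

      def itr_resample(samples, mu, step, block=8):
          """Linear interpolation at t = mu + k*step, processed block-wise."""
          out = []
          n_out = int((len(samples) - 2) / step)
          for k0 in range(0, n_out, block):          # one block per 'clock'
              k = np.arange(k0, min(k0 + block, n_out))
              t = mu + k * step
              i = t.astype(int)                      # base sample index
              frac = t - i                           # fractional interval
              out.extend(samples[i] * (1 - frac) + samples[i + 1] * frac)
          return np.array(out)

      sig = np.sin(2 * np.pi * 0.05 * np.arange(64))
      print(itr_resample(sig, mu=0.25, step=1.6)[:4])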

  5. Three-dimensional magnetic bubble memory system

    NASA Technical Reports Server (NTRS)

    Stadler, Henry L. (Inventor); Katti, Romney R. (Inventor); Wu, Jiin-Chuan (Inventor)

    1994-01-01

    A compact memory uses magnetic bubble technology for providing data storage. A three-dimensional arrangement, in the form of stacks of magnetic bubble layers, is used to achieve high volumetric storage density. Output tracks are used within each layer to allow data to be accessed uniquely and unambiguously. Storage can be achieved using either current access or field access magnetic bubble technology. Optical sensing via the Faraday effect is used to detect data. Optical sensing facilitates the accessing of data from within the three-dimensional package and lends itself to parallel operation for supporting high data rates and vector and parallel processing.

  6. Holographic optical disc

    NASA Astrophysics Data System (ADS)

    Zhou, Gan; An, Xin; Pu, Allen; Psaltis, Demetri; Mok, Fai H.

    1999-11-01

    The holographic disc is a high capacity, disk-based data storage device that can provide the performance for next generation mass data storage needs. With a projected capacity approaching 1 terabit on a single 12 cm platter, the holographic disc has the potential to become a highly efficient storage hardware for data warehousing applications. The high readout rate of holographic disc makes it especially suitable for generating multiple, high bandwidth data streams such as required for network server computers. Multimedia applications such as interactive video and HDTV can also potentially benefit from the high capacity and fast data access of holographic memory.

  7. Sirocco Storage Server v. pre-alpha 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew L.; Danielson, Geoffrey; Ward, H. Lee

    Sirocco is a parallel storage system under development, designed for write-intensive workloads on large-scale HPC platforms. It implements a key-value object store on top of a set of loosely federated storage servers that cooperate to ensure data integrity and performance. It includes support for a range of different types of storage transactions. This software release constitutes a conformant storage server, along with the client-side libraries to access the storage over a network.
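
    A tiny sketch of the key-value object-store idea with replication across loosely federated servers (our illustration; Sirocco's wire protocol, transaction types and client API are not described in this record):

      import hashlib

      class FederatedStore:
          def __init__(self, n_servers=4, copies=2):
              self.servers = [dict() for _ in range(n_servers)]
              self.copies = copies

          def _targets(self, key):
              """Pick replica servers deterministically from the key hash."""
              h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
              return [(h + i) % len(self.servers) for i in range(self.copies)]

          def put(self, key, value):
              for s in self._targets(key):
                  self.servers[s][key] = value      # replicate for integrity

          def get(self, key):
              for s in self._targets(key):
                  if key in self.servers[s]:
                      return self.servers[s][key]   # any surviving replica
              raise KeyError(key)

      store = FederatedStore()
      store.put("ckpt/rank0", b"...checkpoint bytes...")
      print(store.get("ckpt/rank0"))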

  8. Effects of water storage in the stele on measurements of the hydraulics of young roots of corn and barley.

    PubMed

    Joshi, Ankur; Knipfer, Thorsten; Steudle, Ernst

    2009-11-01

    In standard techniques (root pressure probe or high-pressure flowmeter), the hydraulic conductivity of roots is calculated from transients of root pressure using responses following step changes in volume or pressure, which may be affected by the storage of water in the stele. Storage effects were examined using both experimental data from root pressure relaxations and clamps and a physical capacity model. Young roots of corn and barley were treated as a three-compartment system, comprising a serial arrangement of xylem/probe, stele and outside medium/cortex. The hydraulic conductivities of the endodermis and of xylem vessels were derived from experimental data. The lower limit of the storage capacity of stelar tissue was set by the compressibility of water; this was subsequently increased to account for realistic storage capacities of the stele. When root water storage was varied over up to five orders of magnitude, the results of the simulations showed that storage effects could not explain the experimental data, suggesting a major contribution of effects other than water storage. It is concluded that initial water flows may be used to measure root hydraulic conductivity, provided that the volumes of water used are much larger than the volumes stored.

  9. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, and use of a scalable, high performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide area distributed disk servers operate in parallel to provide logical block level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.

  10. An Open-Source Storage Solution for Cryo-Electron Microscopy Samples.

    PubMed

    Ultee, Eveline; Schenkel, Fred; Yang, Wen; Brenzinger, Susanne; Depelteau, Jamie S; Briegel, Ariane

    2018-02-01

    Cryo-electron microscopy (cryo-EM) enables the study of biological structures in situ in great detail and the solution of protein structures at Ångstrom-level resolution. Due to recent advances in instrumentation and data processing, the field of cryo-EM is rapidly growing. Access to facilities and national centers that house the state-of-the-art microscopes is limited due to the ever-rising demand, resulting in long wait times between sample preparation and data acquisition. To improve sample storage, we have developed a cryo-storage system with efficient, high-capacity storage that enables sample storage in a highly organized manner. This system is simple to use, cost-effective and easily adaptable for any type of grid storage box and dewar and any size of cryo-EM laboratory.

  11. AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System

    NASA Astrophysics Data System (ADS)

    Wang, R.; Harris, C.; Wicenec, A.

    2016-07-01

    In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS through new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS storage manager, and ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.

  12. Optical mass memory system (AMM-13). AMM-13 system segment specification

    NASA Technical Reports Server (NTRS)

    Bailey, G. A.

    1980-01-01

    The performance, design, development, and test requirements for an optical mass data storage and retrieval system prototype (AMM-13) are established. This system interfaces to other system segments of the NASA End-to-End Data System via the Data Base Management System segment and is designed to have a storage capacity of 10¹³ bits (10¹² bits online). The major functions of the system include control, input and output, recording of ingested data, fiche processing/replication, and storage and retrieval.

  13. Land Water Storage within the Congo Basin Inferred from GRACE Satellite Gravity Data

    NASA Technical Reports Server (NTRS)

    Crowley, John W.; Mitrovica, Jerry X.; Bailey, Richard C.; Tamisiea, Mark E.; Davis, James L.

    2006-01-01

    GRACE satellite gravity data are used to estimate terrestrial (surface plus ground) water storage within the Congo Basin in Africa for the period April 2002 - May 2006. These estimates exhibit significant seasonal variations (30 ± 6 mm of equivalent water thickness) and long-term trends, the latter yielding a total loss of approximately 280 km³ of water over the 50-month span of data. We also combine GRACE and precipitation data sets (CMAP, TRMM) to explore the relative contributions of the source term to the seasonal hydrological balance within the Congo Basin. We find that the seasonal water storage tends to saturate for anomalies greater than 30-44 mm of equivalent water thickness. Furthermore, precipitation contributed roughly three times the peak water storage after anomalously rainy seasons, in early 2003 and 2005, implying an approximately 60-70% loss from runoff and evapotranspiration. Finally, a comparison of residual land water storage (monthly estimates minus best-fitting trends) in the Congo and Amazon Basins shows an anticorrelation, in agreement with the 'see-saw' variability inferred by others from runoff data.

  14. Portable and Error-Free DNA-Based Data Storage.

    PubMed

    Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica

    2017-07-10

    DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
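
    As an illustration of the addressing idea (not the authors' actual codes, which add iterative alignment and deletion error correction for nanopore readout), the sketch below maps bytes to DNA bases at two bits per base and prefixes each block with an address to support random access. All names and parameters are hypothetical.

        BASES = "ACGT"

        def encode_block(address: int, payload: bytes, addr_len: int = 8) -> str:
            """Encode an address plus payload as a DNA string (2 bits per base)."""
            addr_bytes = address.to_bytes(addr_len // 4, "big")  # 4 bases per byte
            dna = []
            for byte in addr_bytes + payload:
                for shift in (6, 4, 2, 0):          # two bits at a time, MSB first
                    dna.append(BASES[(byte >> shift) & 0b11])
            return "".join(dna)

        def decode_block(dna: str, addr_len: int = 8):
            out = bytearray()
            for i in range(0, len(dna), 4):         # 4 bases -> 1 byte
                byte = 0
                for base in dna[i:i + 4]:
                    byte = (byte << 2) | BASES.index(base)
                out.append(byte)
            address = int.from_bytes(out[: addr_len // 4], "big")
            return address, bytes(out[addr_len // 4:])

        addr, data = decode_block(encode_block(5, b"hi"))
        assert (addr, data) == (5, b"hi")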

  15. Evolving Requirements for Magnetic Tape Data Storage Systems

    NASA Technical Reports Server (NTRS)

    Gniewek, John J.

    1996-01-01

    Magnetic tape data storage systems have evolved in an environment where the major applications have been backup/restore, disaster recovery, and long-term archive. Coincident with the rapidly improving price-performance of disk storage systems, the prime requirements for tape storage systems have remained: (1) low cost per MB, and (2) a data rate balanced to the remaining system components. Little emphasis was given to configuring the technology components to optimize retrieval of the stored data. Emerging new applications, such as network attached high speed memory (HSM) and digital libraries, place additional emphasis and requirements on retrieval of the stored data. It is therefore desirable to consider the system as defined by both storage and retrieval requirements: a STorage And Retrieval System (STARS). It is possible to provide comparative performance analysis of different STARS by incorporating parameters related to (1) device characteristics and (2) application characteristics, in combination with queuing theory analysis. Results of these analyses are presented here in the form of response time as a function of system configuration for two different types of devices and for a variety of applications.
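
    For a flavor of the queuing-theory analysis, the toy model below treats a STARS device as a single M/M/1 server whose service time bundles mount, seek, and transfer; response time grows sharply as utilization approaches one. The rates and service times are hypothetical, not values from the paper.

        def mm1_response_time(arrival_rate: float, service_time: float) -> float:
            """Mean response time (wait + service) of an M/M/1 queue, in seconds."""
            utilization = arrival_rate * service_time
            if utilization >= 1.0:
                raise ValueError("queue is unstable: utilization >= 1")
            return service_time / (1.0 - utilization)

        # Hypothetical devices: tape (mount + seek + transfer ~ 100 s) vs. disk.
        for name, s in [("tape", 100.0), ("disk", 0.5)]:
            print(name, mm1_response_time(arrival_rate=1 / 300, service_time=s))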

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowen, Benjamin; Ruebel, Oliver; Fischer, Curt R.

    BASTet is an advanced software library written in Python. BASTet serves as the analysis and storage library for the OpenMSI project. BASTet is an integrated framework for: i) storage of spectral imaging data, ii) storage of derived analysis data, iii) provenance of analyses, and iv) integration and execution of analyses via complex workflows. BASTet implements the API for the HDF5 storage format used by OpenMSI. Analyses that are developed using BASTet benefit from direct integration with the storage format, automatic tracking of provenance, and direct integration with command-line and workflow execution tools. BASTet also defines interfaces to enable developers to directly integrate their analyses with OpenMSI's web-based viewing infrastructure without having to know OpenMSI. BASTet also provides numerous helper classes and tools to assist with the conversion of data files, ease parallel implementation of analysis algorithms, ease interaction with web-based functions, and describe methods for data reduction. BASTet also includes detailed developer documentation, user tutorials, IPython notebooks, and other supporting documents.
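
    The following is not BASTet's actual API, but a minimal h5py sketch of the general pattern it implements: raw spectral imaging data, derived analysis results, and provenance attributes stored together in one HDF5 file. File name, group layout, and attribute names are hypothetical.

        import numpy as np
        import h5py

        with h5py.File("msi_experiment.h5", "w") as f:
            # Raw spectral image cube: x, y, m/z channels.
            f.create_dataset("raw/spectra", data=np.random.rand(64, 64, 1000),
                             compression="gzip")
            # Derived analysis data, stored alongside the raw data.
            peaks = f.create_group("analysis/peak_finding")
            peaks.create_dataset("result", data=np.random.rand(64, 64, 25))
            # Provenance: which input the analysis consumed, and its settings.
            peaks.attrs["source"] = "/raw/spectra"
            peaks.attrs["parameters"] = "height=0.1"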

  17. Semantics-based distributed I/O with the ParaMEDIC framework.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaji, P.; Feng, W.; Lin, H.

    2008-01-01

    Many large-scale applications simultaneously rely on multiple resources for efficient execution. For example, such applications may require both large compute and storage resources; however, very few supercomputing centers can provide large quantities of both. Thus, data generated at the compute site oftentimes has to be moved to a remote storage site for either storage or visualization and analysis. Clearly, this is not an efficient model, especially when the two sites are distributed over a wide-area network. Thus, we present a framework called 'ParaMEDIC: Parallel Metadata Environment for Distributed I/O and Computing' which uses application-specific semantic information to convert the generated data to orders-of-magnitude smaller metadata at the compute site, transfer the metadata to the storage site, and re-process the metadata at the storage site to regenerate the output. Specifically, ParaMEDIC trades a small amount of additional computation (in the form of data post-processing) for a potentially significant reduction in data that needs to be transferred in distributed environments.

  18. Multibit data storage states formed in plasma-treated MoS₂ transistors.

    PubMed

    Chen, Mikai; Nam, Hongsuk; Wi, Sungjin; Priessnitz, Greg; Gunawan, Ivan Manuel; Liang, Xiaogan

    2014-04-22

    New multibit memory devices are desirable for improving data storage density and computing speed. Here, we report that multilayer MoS2 transistors, when treated with plasmas, can serve as low-cost, nonvolatile, highly durable memories with binary and multibit data storage capability. We have demonstrated binary and 2-bit/transistor (or 4-level) data states suitable for year-scale data storage applications as well as 3-bit/transistor (or 8-level) data states for day-scale data storage. This multibit memory capability is hypothesized to arise from plasma-induced doping and rippling of the top MoS2 layers in a transistor, which could form an ambipolar charge-trapping layer interfacing the underlying MoS2 channel. This structure could enable the nonvolatile retention of charged carriers as well as the reversible modulation of the polarity and amount of the trapped charge, ultimately resulting in multilevel data states in memory transistors. Our Kelvin force microscopy results strongly support this hypothesis. In addition, our research suggests that the programming speed of such memories can be improved by using nanoscale-area plasma treatment. We anticipate that this work will provide important scientific insights for leveraging the unique structural properties of atomically layered two-dimensional materials in nanoelectronic applications.

  19. Multi-views storage model and access methods of conversation history in converged IP messaging system

    NASA Astrophysics Data System (ADS)

    Lu, Meilian; Yang, Dong; Zhou, Xing

    2013-03-01

    Based on an analysis of the requirements for conversation history storage in a CPM (Converged IP Messaging) system, a multi-view storage model and access methods for conversation history are proposed. The storage model separates logical views from physical storage and divides the storage into a system-managed region and a user-managed region. It simultaneously supports a conversation view, a system pre-defined view, and a user-defined view of storage. The rationality and feasibility of the multi-view presentation, the physical storage model, and the access methods are validated through an implemented prototype, which shows that the proposal has good scalability and will help to optimize the physical data storage structure and improve storage performance.

  20. 77 FR 13151 - Agency Information Collection Activities: Proposed Collection; Comments Requested; Records and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-05

    ... Data: Daily Summaries, Records of Production, Storage, and Disposition, and Supporting Data by Licensed... approved collection. (2) Title of the Form/Collection: Records and Supporting Data: Daily Summaries, Records of Production, Storage and Disposition and Supporting Data by Explosives Manufacturers. (3) Agency...

  1. 76 FR 81967 - Agency Information Collection Activities: Proposed Collection; Comments Requested: Records and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-29

    ... Data: Daily Summaries, Records of Production, Storage, and Disposition, and Supporting Data by Licensed... Form/Collection: Records and Supporting Data: Daily Summaries, Records of Production, Storage and Disposition and Supporting Data by Explosives Manufacturers. (3) Agency form number, if any, and the...

  2. Extending DIRAC File Management with Erasure-Coding for efficient storage.

    NASA Astrophysics Data System (ADS)

    Cadellin Skipsey, Samuel; Todev, Paulin; Britton, David; Crooks, David; Roy, Gareth

    2015-12-01

    The state of the art in Grid-style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP[1], extending the DIRAC File Catalogue and file management interface to allow the placement of erasure-coded files: each file distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file still reconstructed. The tools developed are transparent to the user and, as well as allowing uploading and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. We expect this approach to be of most interest to smaller VOs, who have tighter bounds on the storage available to them, but larger (WLCG) VOs may be interested as their total data increases during Run 2. We provide an analysis of the costs and benefits of the approach, along with future development and implementation plans in this area. In general, overheads for multiple file transfers pose the largest issue for the competitiveness of this approach at present.
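
    A minimal sketch of the erasure-coding idea for the special case M = 1: stripe the data into N equal chunks plus one XOR parity chunk, so any single lost chunk can be rebuilt from the survivors. The DIRAC work uses general erasure codes tolerating M > 1 losses; this only illustrates the principle (note the sketch does not track the original length, so padding survives reconstruction).

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def stripe_with_parity(data: bytes, n: int):
            """Split data into n zero-padded chunks plus one XOR parity chunk."""
            size = -(-len(data) // n)                  # ceiling division
            chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0")
                      for i in range(n)]
            parity = chunks[0]
            for c in chunks[1:]:
                parity = xor_bytes(parity, c)
            return chunks, parity

        def rebuild(chunks, parity, lost_index):
            """Recover the chunk at lost_index from surviving chunks + parity."""
            recovered = parity
            for i, c in enumerate(chunks):
                if i != lost_index:
                    recovered = xor_bytes(recovered, c)
            return recovered

        chunks, parity = stripe_with_parity(b"erasure-coded grid storage", n=4)
        assert rebuild(chunks, parity, 2) == chunks[2]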

  3. Leveraging Available Data to Support Extension of Transportation Packages Service Life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, K.; Abramczyk, G.; Bellamy, S.

    Data obtained from testing shipping package materials have been leveraged to support extending the service life of select shipping packages while in nuclear materials transportation. Increasingly, nuclear material inventories are being transferred to an interim storage location where they will reside for extended periods of time. Use of a shipping package to store nuclear materials in an interim storage location has become more attractive for a variety of reasons. Shipping packages are robust and have a qualified pedigree for their performance in normal operation and accident conditions within the approved shipment period, and storing nuclear material within a shipping package results in reduced operations for the storage facility. However, the shipping package materials of construction must maintain a level of integrity as specified by the safety basis of the storage facility through the duration of the storage period, which is typically well beyond the one-year transportation window. Test programs have been established to obtain aging data on materials of construction that are the most sensitive/susceptible to aging in certain shipping package designs. The collective data are being used to support extending the service life of shipping packages in both transportation and storage.

  4. A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory

    NASA Astrophysics Data System (ADS)

    Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin

    2015-09-01

    Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND-flash memories have become popular for space-borne high performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single-bit erasure coding and RAID-based concepts are insufficient. What is needed is a fully run-time configurable, high performance, dependable storage concept requiring only a minimal set of logic or software. The solution presented is based on composite erasure coding and can be adjusted for altered mission durations or changing environmental conditions.

  5. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
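
    A simplified sketch of the first two steps: build an access correlation matrix by counting how often two image files are requested together in the historical log, then greedily place strongly correlated files on different nodes so they can be read in parallel. The function names and the greedy rule are illustrative, not the paper's exact algorithm.

        from collections import defaultdict
        from itertools import combinations

        def correlation_matrix(sessions):
            """sessions: iterable of sets of file ids accessed together."""
            corr = defaultdict(int)
            for files in sessions:
                for a, b in combinations(sorted(files), 2):
                    corr[(a, b)] += 1
            return corr

        def place_files(files, corr, n_nodes):
            """Greedy heuristic: put each file on the node it is least correlated with."""
            placement = {}
            for f in files:
                scores = [sum(corr.get(tuple(sorted((f, g))), 0)
                              for g, node in placement.items() if node == k)
                          for k in range(n_nodes)]
                placement[f] = scores.index(min(scores))
            return placement

        sessions = [{"t1", "t2"}, {"t1", "t2", "t3"}, {"t3", "t4"}]
        print(place_files(["t1", "t2", "t3", "t4"],
                          correlation_matrix(sessions), n_nodes=2))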

  6. A 3-D seismic investigation of the Ray gas storage reef, Macomb County, Michigan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, S.F.; Dixon, R.A.

    1994-08-01

    A 4.2 mi² 3-D seismic survey was acquired over the Ray Niagaran reef gas storage field in southeast Michigan as part of a program to maximize storage capacity and gas deliverability of the storage reservoir. Goals of the survey were to (1) determine if additional storage capacity could be found either as extensions to the Ray reef or as undiscovered satellite reefs, (2) investigate the relationship between the main reef body and a low-relief gas well east of the reef, and (3) determine if seismic data can be used to quantify reservoir parameters to maximize the productive capacity of infill wells. Interpretation of the 3-D seismic data resulted in a detailed image of the reef, using several interpretive techniques. A seismic reflection within the reef was correlated with a known porosity zone, and a possible relationship between porosity and seismic amplitude was investigated. A potential connection between the main reef and the low-relief gas well was identified. This project illustrates the economic value of investigating an existing storage reef with 3-D seismic data, and underscores the necessity of such a survey prior to developing a new storage reservoir.

  7. 7 CFR 1767.70 - Record storage media.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 12 2012-01-01 2012-01-01 false Record storage media. 1767.70 Section 1767.70... Record storage media. The media used to capture and store the data will play an important part of each Rural Development borrower. Each borrower has the flexibility to select its own storage media. The...

  8. 7 CFR 1767.70 - Record storage media.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true Record storage media. 1767.70 Section 1767.70... Record storage media. The media used to capture and store the data will play an important part of each Rural Development borrower. Each borrower has the flexibility to select its own storage media. The...

  9. 7 CFR 1767.70 - Record storage media.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 12 2013-01-01 2013-01-01 false Record storage media. 1767.70 Section 1767.70... Record storage media. The media used to capture and store the data will play an important part of each Rural Development borrower. Each borrower has the flexibility to select its own storage media. The...

  10. 7 CFR 1767.70 - Record storage media.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 12 2011-01-01 2011-01-01 false Record storage media. 1767.70 Section 1767.70... Record storage media. The media used to capture and store the data will play an important part of each Rural Development borrower. Each borrower has the flexibility to select its own storage media. The...

  11. 12 CFR 749.2 - Vital records preservation program.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... compliance for the storage of those records if the service agreement specifies the data processor safeguards... records preservation, a schedule for the storage and destruction of records, and a records preservation log detailing for each record stored, its name, storage location, storage date, and name of the person...

  12. 12 CFR 749.2 - Vital records preservation program.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... compliance for the storage of those records if the service agreement specifies the data processor safeguards... records preservation, a schedule for the storage and destruction of records, and a records preservation log detailing for each record stored, its name, storage location, storage date, and name of the person...

  13. 12 CFR 749.2 - Vital records preservation program.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... compliance for the storage of those records if the service agreement specifies the data processor safeguards... records preservation, a schedule for the storage and destruction of records, and a records preservation log detailing for each record stored, its name, storage location, storage date, and name of the person...

  14. 12 CFR 749.2 - Vital records preservation program.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... compliance for the storage of those records if the service agreement specifies the data processor safeguards... records preservation, a schedule for the storage and destruction of records, and a records preservation log detailing for each record stored, its name, storage location, storage date, and name of the person...

  15. 12 CFR 749.2 - Vital records preservation program.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... compliance for the storage of those records if the service agreement specifies the data processor safeguards... records preservation, a schedule for the storage and destruction of records, and a records preservation log detailing for each record stored, its name, storage location, storage date, and name of the person...

  16. Leveraging Cloud Computing to Improve Storage Durability, Availability, and Cost for MER Maestro

    NASA Technical Reports Server (NTRS)

    Chang, George W.; Powell, Mark W.; Callas, John L.; Torres, Recaredo J.; Shams, Khawaja S.

    2012-01-01

    The Maestro for MER (Mars Exploration Rover) software is the premier operation and activity planning software for the Mars rovers, and it is required to deliver all of the processed image products to scientists on demand. These data span multiple storage arrays sized at 2 TB, and a backup scheme ensures data is not lost. In a catastrophe, these data would currently recover at 20 GB/hour, taking several days for a restoration. A seamless solution provides access to highly durable, highly available, scalable, and cost-effective storage capabilities. This approach also employs a novel technique that enables storage of the majority of data on the cloud and some data locally. This feature is used to store the most recent data locally in order to guarantee utmost reliability in case of an outage or disconnect from the Internet. This also obviates any changes to the software that generates the most recent data set, as it still has the same interface to the file system as it did before updates.
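
    A minimal sketch of such a tiering policy, assuming boto3 and an S3-compatible object store: files newer than a cutoff stay on local disk, older ones are moved to the cloud. The bucket name and paths are placeholders, not MER's actual configuration.

        import time
        from pathlib import Path

        import boto3

        def tier_out(local_root: Path, bucket: str, keep_days: int = 30):
            """Upload files older than keep_days to S3 and remove the local copy."""
            s3 = boto3.client("s3")
            cutoff = time.time() - keep_days * 86400
            for f in local_root.rglob("*"):
                if f.is_file() and f.stat().st_mtime < cutoff:
                    s3.upload_file(str(f), bucket, str(f.relative_to(local_root)))
                    f.unlink()                     # keep only recent data locally

        # tier_out(Path("/data/mer"), bucket="mer-image-archive")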

  17. An Object-Relational Ifc Storage Model Based on Oracle Database

    NASA Astrophysics Data System (ADS)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, collaboration across professions attracts more attention in the architecture, engineering and construction (AEC) industry. In order to adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish the mapping rules between data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract IFC data. Finally, we store the IFC data into the corresponding tables in the IFC database. In our experiments, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics shows that IFC data are lossless during data exchange.

  18. On-Chip Fluorescence Switching System for Constructing a Rewritable Random Access Data Storage Device.

    PubMed

    Nguyen, Hoang Hiep; Park, Jeho; Hwang, Seungwoo; Kwon, Oh Seok; Lee, Chang-Soo; Shin, Yong-Beom; Ha, Tai Hwan; Kim, Moonil

    2018-01-10

    We report the development of an on-chip fluorescence switching system based on DNA strand displacement and DNA hybridization for the construction of a rewritable and randomly accessible data storage device. In this study, the feasibility and potential effectiveness of our proposed system were evaluated with a series of wet experiments involving 40 bits (5 bytes) of data encoding a 5-character text (KRIBB). Also, a flexible data rewriting function was achieved by converting fluorescence signals between "ON" and "OFF" through DNA strand displacement and hybridization events. In addition, the proposed system was successfully validated on a microfluidic chip, which could further facilitate the encoding and decoding process of data. To the best of our knowledge, this is the first report on the use of DNA hybridization and DNA strand displacement in the field of data storage devices. Taken together, our results demonstrate that DNA-based fluorescence switching could be applied to construct a rewritable and randomly accessible data storage device through controllable DNA manipulations.

  19. Development of a system for off-peak electrical energy use by air conditioners and heat pumps

    NASA Astrophysics Data System (ADS)

    Russell, L. D.

    1980-05-01

    Investigation and evaluation of several alternatives for load management for the TVA system are described. Specific data for the TVA system load characteristics were studied to determine the typical peak and off-peak periods for the system. The alternative systems investigated for load management included gaseous energy storage, phase change material energy storage, zeolite energy storage, variable speed controllers for compressors, and weather sensitive controllers. After investigating these alternatives, system design criteria were established; then, the gaseous and PCM energy storage systems were analyzed. The system design criteria include economic assessment of all alternatives. Handbook data were developed for economic assessment. A liquid/PCM energy storage system was judged feasible.

  20. Study of Basin Recession Characteristics and Groundwater Storage Properties

    NASA Astrophysics Data System (ADS)

    Yen-Bo, Chen; Cheng-Haw, Lee

    2017-04-01

    Stream flow and groundwater storage are freshwater resources that humans rely on. In this study, we discuss the recession characteristics of basins in southern Taiwan and groundwater storage in the Kao-Ping River basin, with the aim of providing a reference for Taiwan's water resource management. The first part of this study concerns recession characteristics. We apply the low-flow analysis model of Brutsaert (2008) to establish two models for sifting recession data: a low-flow steady-period model and a normal-condition model. Through individual event analysis, group event analysis, and recession assessment of southern-area basins, stream flow and base flow recession characteristics are parameterized. The second part of this study concerns groundwater storage. Among the main basins in southern Taiwan, the Kao-Ping River basin has sufficient stream flow and precipitation gaging station data, extensive drainage data, and data reflecting the different hydrological characteristics of its upstream and downstream areas. Therefore, this study focuses on the Kao-Ping River basin and assesses its groundwater storage properties. Taking the residual groundwater volume in the dry season into consideration, we use base flow hydrographs to assess the periodic properties of groundwater storage and establish a hydrological period conceptual model. With groundwater storage and cumulative precipitation linearity quantified by this model, their periodic changes and trends in each drainage area of the Kao-Ping River basin are estimated. The results show that the recession time of stream flow is related to the initial flow rate of the recession events. The recession time index is lower for stream flow than for base flow, and higher in the low-flow steady period than under normal recession conditions. By applying the hydrological period conceptual model, groundwater storage can be explicitly analyzed and compared with precipitation using only stream flow data. Keywords: stream flow, base flow, recession characteristics, groundwater storage
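
    As a concrete example of recession analysis in this tradition, the sketch below fits the classic power-law recession relation -dQ/dt = aQ^b to a daily discharge series by linear regression in log space. The streamflow here is synthetic, and the study's data-sifting rules are not reproduced.

        import numpy as np

        def fit_recession(q):
            """q: daily discharge during a recession; returns (a, b)."""
            dqdt = -np.diff(q)                         # daily decline in flow
            qmid = (q[:-1] + q[1:]) / 2.0              # flow at interval midpoints
            keep = dqdt > 0                            # recession days only
            b, log_a = np.polyfit(np.log(qmid[keep]), np.log(dqdt[keep]), 1)
            return np.exp(log_a), b

        t = np.arange(30.0)
        q = 10.0 * np.exp(-t / 12.0)                   # synthetic exponential recession
        print(fit_recession(q))                        # b near 1 for exponential decay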

  1. Development and evaluation of a low-cost and high-capacity DICOM image data storage system for research.

    PubMed

    Yakami, Masahiro; Ishizu, Koichi; Kubo, Takeshi; Okada, Tomohisa; Togashi, Kaori

    2011-04-01

    Thin-slice CT data, useful for clinical diagnosis and research, are now widely available but are typically discarded in many institutions after a short period of time due to data storage capacity limitations. We designed and built a low-cost, high-capacity Digital Imaging and COmmunication in Medicine (DICOM) storage system able to store thin-slice image data for years, using off-the-shelf consumer hardware components, such as a Macintosh computer, a Windows PC, and network-attached storage units. "Ordinary" hierarchical file systems, instead of a centralized data management system such as a relational database, were adopted to manage patient DICOM files by arranging them in directories, enabling quick and easy access to the DICOM files of each study by following the directory trees with Windows Explorer via study date and patient ID. Software used for this system was the open-source OsiriX and additional programs we developed ourselves, both of which were freely available via the Internet. The initial cost of this system was about $3,600 with an incremental storage cost of about $900 per terabyte (TB). This system has been running since 7 February 2008, with the data stored increasing at a rate of about 1.3 TB per month. Total data stored was 21.3 TB on 23 June 2009. The maintenance workload was found to be about 30 to 60 min once every 2 weeks. In conclusion, this newly developed DICOM storage system is useful for research due to its cost-effectiveness, enormous capacity, high scalability, sufficient reliability, and easy data access.
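
    A minimal sketch of the directory layout described above, assuming pydicom is installed: each incoming file is filed under study date and patient ID so studies can be browsed with an ordinary file explorer. The paths are illustrative.

        import shutil
        from pathlib import Path

        import pydicom

        def file_dicom(src: Path, root: Path) -> Path:
            """Copy a DICOM file into root/<StudyDate>/<PatientID>/."""
            ds = pydicom.dcmread(src, stop_before_pixels=True)
            dest = (root / str(ds.get("StudyDate", "nodate"))
                         / str(ds.get("PatientID", "noid")) / src.name)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
            return dest

        # for f in Path("incoming").glob("*.dcm"):
        #     file_dicom(f, Path("/archive/dicom"))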

  2. A Rich Metadata Filesystem for Scientific Data

    ERIC Educational Resources Information Center

    Bui, Hoang

    2012-01-01

    As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…

  3. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
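
    A minimal sketch of the wavelet step using PyWavelets (the patent's own encoding details are not reproduced): a raster height field is decomposed into a coarse approximation plus detail bands, and zeroing the detail bands yields a low level-of-detail reconstruction, coarse to fine.

        import numpy as np
        import pywt

        height_field = np.random.rand(256, 256)        # stand-in for real terrain

        # Multi-level 2-D wavelet decomposition of the height field.
        coeffs = pywt.wavedec2(height_field, wavelet="haar", level=4)

        # Coarse-only reconstruction: zero the detail bands for a low LOD mesh.
        coarse = [coeffs[0]] + [tuple(np.zeros_like(d) for d in band)
                                for band in coeffs[1:]]
        low_lod = pywt.waverec2(coarse, wavelet="haar")
        print(height_field.shape, low_lod.shape)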

  4. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor)

    2010-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.

  5. Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis

    NASA Astrophysics Data System (ADS)

    Yang, W.; Hanushevsky, A. B.; Mount, R. P.; Atlas Collaboration

    2014-06-01

    User data analysis in high energy physics presents a challenge to spinning-disk based storage systems. The analysis is data-intensive, yet reads are small, sparse, and cover a large volume of data files. It is also unpredictable due to users' response to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file-level cache in front of the spinning-disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses several days of data access records to make caching decisions. It can also use information from other sources, such as a work-flow management system. We evaluate the performance of the system both in terms of caching and in terms of its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.
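
    An illustrative cache-admission rule in the spirit described above: admit a file to the SSD cache only if it appears in the recent access records on several distinct days. The log format and threshold are hypothetical, not SLAC's actual policy.

        from collections import defaultdict

        def admit_set(access_log, min_days=3):
            """access_log: iterable of (day, filename); returns files to cache."""
            days_seen = defaultdict(set)
            for day, fname in access_log:
                days_seen[fname].add(day)
            return {f for f, days in days_seen.items() if len(days) >= min_days}

        log = [(1, "a.root"), (2, "a.root"), (3, "a.root"), (3, "b.root")]
        print(admit_set(log))                          # {'a.root'}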

  6. Design and evaluation of a hybrid storage system in HEP environment

    NASA Astrophysics Data System (ADS)

    Xu, Qi; Cheng, Yaodong; Chen, Gang

    2017-10-01

    Nowadays, High Energy Physics experiments produce large amounts of data. These data are stored in mass storage systems, which must balance cost, performance, and manageability. In this paper, a hybrid storage system including SSDs (Solid State Drives) and HDDs (Hard Disk Drives) is designed to accelerate data analysis while maintaining a low cost. The performance of accessing files is a decisive factor for an HEP computing system. A new deployment model of a hybrid storage system in High Energy Physics is proposed, which is shown to have higher I/O performance. The detailed evaluation methods and the evaluations of the SSD/HDD ratio and the size of the logical block are also given. In all evaluations, sequential read, sequential write, random read, and random write are tested to obtain comprehensive results. The results show that the hybrid storage system performs well in areas such as accessing big files in HEP.

  7. Development of DKB ETL module in case of data conversion

    NASA Astrophysics Data System (ADS)

    Kaida, A. Y.; Golosova, M. V.; Grigorieva, M. A.; Gubin, M. Y.

    2018-05-01

    Modern scientific experiments involve producing huge volumes of data, which requires new approaches to data processing and storage. These data, as well as their processing and storage, are accompanied by a valuable amount of additional information, called metadata, distributed over multiple information systems and repositories and having a complicated, heterogeneous structure. Gathering these metadata for experiments in the field of high energy nuclear physics (HENP) is a complex issue requiring solutions beyond the usual approaches. One of the tasks is to integrate metadata from different repositories into a central storage. During the integration process, metadata taken from the original source repositories go through several processing steps: aggregation, transformation according to the current data model, and loading into the general storage in a standardized form. The Data Knowledge Base, an R&D project of the ATLAS experiment at the LHC, aims to provide fast and easy access to significant information about LHC experiments for the scientific community. The data integration subsystem being developed for the DKB project can be represented as a number of pipelines arranging data flow from data sources to the main DKB storage. The data transformation process, represented by a single pipeline, can be considered as a number of successive data transformation steps, where each step is implemented as an individual program module. This article outlines the specifics of the program modules used in the dataflow and describes one of the modules developed and integrated into the data integration subsystem of the DKB.
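
    A minimal sketch of the pipeline idea: each transformation step is an independent module (here a plain function) and a pipeline is just their composition. The step names are hypothetical, not the DKB module names.

        from functools import reduce

        def aggregate(records):          # gather raw metadata, drop empty entries
            return [r for r in records if r]

        def transform(records):          # map to the target data model
            return [{"id": r["guid"], "name": r.get("title", "")} for r in records]

        def load(records):               # hand off to the main storage
            print(f"loading {len(records)} records")
            return records

        def pipeline(*steps):
            """Compose steps left to right into a single callable."""
            return lambda data: reduce(lambda acc, step: step(acc), steps, data)

        run = pipeline(aggregate, transform, load)
        run([{"guid": 1, "title": "dataset A"}, None, {"guid": 2}])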

  8. A data driven model for the impact of IFT and density variations on CO2 storage capacity in geologic formations

    NASA Astrophysics Data System (ADS)

    Nomeli, Mohammad A.; Riaz, Amir

    2017-09-01

    Carbon dioxide (CO2) storage in depleted hydrocarbon reservoirs and deep saline aquifers is one of the most promising solutions for decreasing CO2 concentration in the atmosphere. One of the important issues for CO2 storage in subsurface environments is the sealing efficiency of low-permeability cap-rocks overlying potential CO2 storage reservoirs. Though we focus on the effect of IFT in this study as a factor influencing sealing efficiency or storage capacity, other factors such as interfacial interactions, wettability, pore radius and interfacial mass transfer also affect the mobility and storage capacity of the CO2 phase in the pore space. The study of IFT variation is important because the pressure needed to penetrate a pore depends on both the pore size and the interfacial tension; hence small variations in IFT can affect flow across a large population of pores. A novel model is proposed to find the IFT of the ternary systems (CO2/brine-salt) in a range of temperatures (300-373 K), pressures (50-250 bar), and up to 6 molal salinity, applicable to CO2 storage in geological formations, through a multivariate non-linear regression of experimental data. The method uses a general empirical model for the quaternary system CO2/brine-salts that can be made to coincide with experimental data for a variety of solutions. We introduce correction parameters into the model, which compensate for uncertainties and enforce agreement with experimental data. The results for IFT show a strong dependence on temperature, pressure, and salinity. The model has been found to describe the experimental data in the appropriate parameter space with reasonable precision. Finally, we use the new model to evaluate the effects of formation depth on the actual efficiency of CO2 storage. The results indicate that, in the case of CO2 storage in deep subsurface environments as a global-warming mitigation strategy, CO2 storage capacity increases with reservoir depth.
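
    For illustration of the regression step, the sketch below fits a simple linear form for IFT(T, P, m) to a handful of made-up data points with SciPy; the paper's actual empirical model, correction parameters, and fitted coefficients are not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def ift_model(X, a0, aT, aP, am):
            """Toy linear surrogate for IFT as a function of T (K), P (bar), m (molal)."""
            T, P, m = X
            return a0 + aT * T + aP * P + am * m

        T = np.array([300.0, 330.0, 360.0, 373.0])
        P = np.array([50.0, 100.0, 200.0, 250.0])
        m = np.array([0.0, 1.0, 3.0, 6.0])
        ift = np.array([35.0, 32.0, 30.0, 29.0])       # made-up IFT values, mN/m

        params, _ = curve_fit(ift_model, (T, P, m), ift)
        print(dict(zip(["a0", "aT", "aP", "am"], params)))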

  9. Geodesy - the key for constraining rates of magma supply, storage, and eruption

    NASA Astrophysics Data System (ADS)

    Poland, Michael; Anderson, Kyle

    2016-04-01

    Volcanology is an inherently interdisciplinary science that requires joint analysis of diverse physical and chemical datasets to infer subsurface processes from surface observations. Among the diversity of data that can be collected, however, geodetic data are critical for elucidating the main elements of a magmatic plumbing system because of their sensitivity to subsurface changes in volume and mass. In particular, geodesy plays a key role in determining rates of magma supply, storage, and eruption. For example, surface displacements are critical for estimating the volume changes and locations of subsurface magma storage zones, and remotely sensed radar data make it possible to place significant bounds on eruptive volumes. Combining these measurements with geochemical indicators of magma composition and volatile content enables modeling of magma fluxes throughout a volcano's plumbing system, from source to surface. We combined geodetic data (particularly InSAR) with prior geochemical constraints and measured gas emissions from Kīlauea Volcano, Hawai`i, to develop a probabilistic model that relates magma supply, storage, and eruption over time. We found that the magma supply rate to Kīlauea during 2006 was 35-100% greater than during 2000-2001, with coincident increased rates of subsurface magma storage and eruption at the surface. By 2012, this surge in supply had ended, and supply rates were below those of 2000-2001; magma storage and eruption rates were similarly reduced. These results demonstrate the connection between magma supply, storage, and eruption, and the overall importance of magma supply with respect to volcanic hazards at Kīlauea and similar volcanoes. Our model also confirms the importance of geodetic data in modeling these parameters - rates of storage and eruption are, in some cases, almost uniquely constrained by geodesy. Future modeling efforts along these lines should also seek to incorporate gravity data, to better determine magma compressibility and subsurface mass change.

  10. Battery Energy Storage Market: Commercial Scale, Lithium-ion Projects in the U.S.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLaren, Joyce; Gagnon, Pieter; Anderson, Kate

    2016-10-01

    This slide deck presents current market data on the commercial scale li-ion battery storage projects in the U.S. It includes existing project locations, cost data and project cost breakdown, a map of demand charges across the U.S. and information about how the ITC and MACRS apply to energy storage projects that are paired with solar PV technology.

  11. Comparing groundwater recharge and storage variability from GRACE satellite observations with observed water levels and recharge model simulations

    NASA Astrophysics Data System (ADS)

    Allen, D. M.; Henry, C.; Demon, H.; Kirste, D. M.; Huang, J.

    2011-12-01

    Sustainable management of groundwater resources, particularly in water stressed regions, requires estimates of groundwater recharge. This study in southern Mali, Africa compares approaches for estimating groundwater recharge and understanding recharge processes using a variety of methods encompassing groundwater level-climate data analysis, GRACE satellite data analysis, and recharge modelling for current and future climate conditions. Time series data for GRACE (2002-2006) and observed groundwater level data (1982-2001) do not overlap. To overcome this problem, GRACE time series data were appended to the observed historical time series data, and the records compared. Terrestrial water storage anomalies from GRACE were corrected for soil moisture (SM) using the Global Land Data Assimilation System (GLDAS) to obtain monthly groundwater storage anomalies (GRACE-SM), and monthly recharge estimates. Historical groundwater storage anomalies and recharge were determined using the water table fluctuation method using observation data from 15 wells. Historical annual recharge averaged 145.0 mm (or 15.9% of annual rainfall) and compared favourably with the GRACE-SM estimate of 149.7 mm (or 14.8% of annual rainfall). Both records show lows and peaks in May and September, respectively; however, the peak for the GRACE-SM data is shifted later in the year to November, suggesting that the GLDAS may poorly predict the timing of soil water storage in this region. Recharge simulation results show good agreement between the timing and magnitude of the mean monthly simulated recharge and the regional mean monthly storage anomaly hydrograph generated from all monitoring wells. Under future climate conditions, annual recharge is projected to decrease by 8% for areas with luvisols and by 11% for areas with nitosols. Given this potential reduction in groundwater recharge, there may be added stress placed on an already stressed resource.
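
    The water table fluctuation method amounts to one line of arithmetic: recharge equals specific yield times the cumulative episodic water-level rise. The sketch below uses made-up numbers chosen to match the 145 mm/yr scale reported above; the study's actual specific yield and hydrographs are not given here.

        def recharge_wtf(specific_yield: float, rises_mm) -> float:
            """Annual recharge (mm) from the sum of episodic water-table rises."""
            return specific_yield * sum(rises_mm)

        # e.g. Sy = 0.05 and 2.9 m of cumulative rise -> 145 mm/yr (illustrative)
        print(recharge_wtf(0.05, [1200, 900, 800]))    # 145.0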

  12. Comparison of Decadal Water Storage Trends from Global Hydrological Models and GRACE Satellite Data

    NASA Astrophysics Data System (ADS)

    Scanlon, B. R.; Zhang, Z. Z.; Save, H.; Sun, A. Y.; Mueller Schmied, H.; Van Beek, L. P.; Wiese, D. N.; Wada, Y.; Long, D.; Reedy, R. C.; Doll, P. M.; Longuevergne, L.

    2017-12-01

    Global hydrology is increasingly being evaluated using models; however, the reliability of these global models is not well known. In this study we compared decadal trends (2002-2014) in land water storage from 7 global models (WGHM, PCR-GLOBWB, and GLDAS: NOAH, MOSAIC, VIC, CLM, and CLSM) to storage trends from new GRACE satellite mascon solutions (CSR-M and JPL-M). The analysis was conducted over 186 river basins, representing about 60% of the global land area. Modeled total water storage trends agree with those from GRACE-derived trends that are within ±0.5 km³/yr but greatly underestimate large declining and rising trends outside this range. Large declining trends are found mostly in intensively irrigated basins and in some basins in northern latitudes. Rising trends are found in basins with little or no irrigation and are generally related to increasing trends in precipitation. The largest decline is found in the Ganges (-12 km³/yr) and the largest rise in the Amazon (43 km³/yr). Differences between models and GRACE are greatest in large basins (>0.5×10⁶ km²), mostly in humid regions. There is very little agreement in storage trends between models and GRACE and among the models, with values of r² mostly <0.1. Various factors can contribute to discrepancies in water storage trends between models and GRACE, including uncertainties in precipitation, model calibration, storage capacity, and water use in models and uncertainties in GRACE data related to processing, glacier leakage, and glacial isostatic adjustment. The GRACE data indicate that land has a large capacity to store water over decadal timescales that is underrepresented by the models. The storage capacity in the modeled soil and groundwater compartments may be insufficient to accommodate the range in water storage variations shown by GRACE data. The inability of the models to capture the large storage trends indicates that model projections of climate and human-induced changes in water storage may be mostly underestimated. Future GRACE and model studies should try to reduce the various sources of uncertainty in water storage trends and should consider expanding the modeled storage capacity of the soil profiles and their interaction with groundwater.
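
    The basic comparison can be reproduced in a few lines: fit a linear trend (km³/yr) to each monthly storage series and correlate the two series. The data below are synthetic stand-ins for GRACE and model output, not values from the study.

        import numpy as np

        def trend(series, t_years):
            """Linear trend (units per year) of a time series."""
            return np.polyfit(t_years, series, 1)[0]

        t = np.arange(0, 12, 1 / 12.0)                 # 12 years, monthly samples
        grace = -12.0 * t + 5 * np.sin(2 * np.pi * t) + np.random.randn(t.size)
        model = -4.0 * t + 5 * np.sin(2 * np.pi * t)   # model underestimates the trend

        r2 = np.corrcoef(grace, model)[0, 1] ** 2
        print(trend(grace, t), trend(model, t), r2)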

  13. Effects of Storage Time on Glycolysis in Donated Human Blood Units

    PubMed Central

    Qi, Zhen; Roback, John D.; Voit, Eberhard O.

    2017-01-01

    Background: Donated blood is typically stored before transfusions. During storage, the metabolism of red blood cells changes, possibly causing storage lesions. The changes are storage time dependent and exhibit donor-specific variations. It is necessary to uncover and characterize the responsible molecular mechanisms accounting for such biochemical changes, qualitatively and quantitatively; Study Design and Methods: Based on the integration of metabolic time series data, kinetic models, and a stoichiometric model of the glycolytic pathway, a customized inference method was developed and used to quantify the dynamic changes in glycolytic fluxes during the storage of donated blood units. The method provides a proof of principle for the feasibility of inferences regarding flux characteristics from metabolomics data; Results: Several glycolytic reaction steps change substantially during storage time and vary among different fluxes and donors. The quantification of these storage time effects, which are possibly irreversible, allows for predictions of the transfusion outcome of individual blood units; Conclusion: The improved mechanistic understanding of blood storage, obtained from this computational study, may aid the identification of blood units that age quickly or more slowly during storage, and may ultimately improve transfusion management in clinics. PMID:28353627

  14. Effects of Storage Time on Glycolysis in Donated Human Blood Units.

    PubMed

    Qi, Zhen; Roback, John D; Voit, Eberhard O

    2017-03-29

    Background: Donated blood is typically stored before transfusions. During storage, the metabolism of red blood cells changes, possibly causing storage lesions. The changes are storage time dependent and exhibit donor-specific variations. It is necessary to uncover and characterize the responsible molecular mechanisms accounting for such biochemical changes, qualitatively and quantitatively. Study Design and Methods: Based on the integration of metabolic time series data, kinetic models, and a stoichiometric model of the glycolytic pathway, a customized inference method was developed and used to quantify the dynamic changes in glycolytic fluxes during the storage of donated blood units. The method provides a proof of principle for the feasibility of inferences regarding flux characteristics from metabolomics data. Results: Several glycolytic reaction steps change substantially during storage time and vary among different fluxes and donors. The quantification of these storage time effects, which are possibly irreversible, allows for predictions of the transfusion outcome of individual blood units. Conclusion: The improved mechanistic understanding of blood storage, obtained from this computational study, may aid the identification of blood units that age quickly or more slowly during storage, and may ultimately improve transfusion management in clinics.

  15. 19 CFR 163.5 - Methods for storage of records.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... standard business practice for storage of records include, but are not limited to, machine readable data... 19 Customs Duties 2 2012-04-01 2012-04-01 false Methods for storage of records. 163.5 Section 163... THE TREASURY (CONTINUED) RECORDKEEPING § 163.5 Methods for storage of records. (a) Original records...

  16. 19 CFR 163.5 - Methods for storage of records.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... standard business practice for storage of records include, but are not limited to, machine readable data... 19 Customs Duties 2 2011-04-01 2011-04-01 false Methods for storage of records. 163.5 Section 163... THE TREASURY (CONTINUED) RECORDKEEPING § 163.5 Methods for storage of records. (a) Original records...

  17. The amino acid's backup bone - storage solutions for proteomics facilities.

    PubMed

    Meckel, Hagen; Stephan, Christian; Bunse, Christian; Krafzik, Michael; Reher, Christopher; Kohl, Michael; Meyer, Helmut Erich; Eisenacher, Martin

    2014-01-01

    Proteomics methods, especially high-throughput mass spectrometry analysis, have been continually developed and improved over the years. The analysis of complex biological samples produces large volumes of raw data. Data storage and recovery management pose substantial challenges to biomedical or proteomic facilities regarding backup and archiving concepts as well as hardware requirements. In this article we describe differences between the terms backup and archive with regard to manual and automatic approaches. We also introduce different storage concepts and technologies, from transportable media to professional solutions such as redundant array of independent disks (RAID) systems, network-attached storage (NAS) and storage area networks (SAN). Moreover, we present a software solution, which we developed for the purpose of long-term preservation of large mass spectrometry raw data files on an object storage device (OSD) archiving system. Finally, advantages, disadvantages, and experiences from routine operations of the presented concepts and technologies are evaluated and discussed. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013. Published by Elsevier B.V.

  18. A simple model for constant storage modulus of poly (lactic acid)/poly (ethylene oxide)/carbon nanotubes nanocomposites at low frequencies assuming the properties of interphase regions and networks.

    PubMed

    Zare, Yasser; Rhim, Sungsoo; Garmabi, Hamid; Rhee, Kyong Yop

    2018-04-01

    The networks of nanoparticles in nanocomposites cause solid-like behavior demonstrating a constant storage modulus at low frequencies. This study examines the storage modulus of poly (lactic acid)/poly (ethylene oxide)/carbon nanotubes (CNT) nanocomposites. The experimental data of the storage modulus in the plateau regions are obtained by a frequency sweep test. In addition, a simple model is developed to predict the constant storage modulus assuming the properties of the interphase regions and the CNT networks. The model calculations are compared with the experimental results, and the parametric analyses are applied to validate the predictability of the developed model. The calculations properly agree with the experimental data at all polymer and CNT concentrations. Moreover, all parameters acceptably modulate the constant storage modulus. The percentage of the networked CNT, the modulus of networks, and the thickness and modulus of the interphase regions directly govern the storage modulus of nanocomposites. The outputs reveal the important roles of the interphase properties in the storage modulus. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. A Rewritable, Random-Access DNA-Based Storage System.

    PubMed

    Yazdi, S M Hossein Tabatabaei; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica

    2015-09-18

    We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.
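
    A minimal Python illustration of the random-access idea: each stored block carries a unique address prefix, so one block can be selected (for example, by primers matching the address) and rewritten without decoding the whole file. This is a toy 2-bits-per-base mapping, not the authors' constrained coding scheme; all names and sizes are invented, and error correction is omitted.

        BASES = "ACGT"

        def bytes_to_dna(data: bytes) -> str:
            """Map each byte to four bases (2 bits per base)."""
            return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

        def dna_to_bytes(seq: str) -> bytes:
            out = bytearray()
            for i in range(0, len(seq), 4):
                b = 0
                for ch in seq[i:i + 4]:
                    b = (b << 2) | BASES.index(ch)
                out.append(b)
            return bytes(out)

        def encode_block(address: int, payload: bytes) -> str:
            """Prefix an 8-base (2-byte) address, then the payload."""
            return bytes_to_dna(address.to_bytes(2, "big")) + bytes_to_dna(payload)

        def decode_block(oligo: str) -> tuple:
            addr = int.from_bytes(dna_to_bytes(oligo[:8]), "big")
            return addr, dna_to_bytes(oligo[8:])

        pool = [encode_block(i, t.encode()) for i, t in enumerate(["UIUC", "MIT "])]
        print(decode_block(pool[1]))  # (1, b'MIT ') -- selected by address alone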

  20. A Rewritable, Random-Access DNA-Based Storage System

    NASA Astrophysics Data System (ADS)

    Tabatabaei Yazdi, S. M. Hossein; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica

    2015-09-01

    We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.

  1. Segmentation, dynamic storage, and variable loading on CDC equipment

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.

    1980-01-01

    Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in core data storage for an interactive run.

  2. Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc

    NASA Astrophysics Data System (ADS)

    Becker, Peter; Plesea, Lucian; Maurer, Thomas

    2016-06-01

    The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process, and disseminate the imagery and the information extracted from it. Cloud-based object storage promises significantly lower cost and elastic storage for this imagery, but it also adds some disadvantages in terms of greater latency for data access and lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000, and NITF can be downloaded from such object storage, their structure and available compression are not optimal, and access performance suffers. This paper provides details on a solution that utilizes new open image formats for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression method usable with MRF that provides very good lossless and controlled lossy compression.
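
    A sketch of the access pattern this enables, assuming an object store that honors HTTP Range requests and a separately fetched tile index; the URL, offsets, and index layout below are placeholders, not MRF's actual on-disk format:

        import requests

        def read_range(url: str, offset: int, size: int) -> bytes:
            """Fetch only `size` bytes starting at `offset` from an object store."""
            headers = {"Range": f"bytes={offset}-{offset + size - 1}"}
            resp = requests.get(url, headers=headers, timeout=30)
            resp.raise_for_status()          # expect 206 Partial Content
            return resp.content

        # Suppose an index (fetched once) maps tile (row, col) -> (offset, size).
        index = {(0, 0): (0, 65536), (0, 1): (65536, 60211)}   # hypothetical values
        url = "https://example-bucket.s3.amazonaws.com/scene.mrf_data"  # placeholder

        offset, size = index[(0, 1)]
        tile_bytes = read_range(url, offset, size)   # decompress (e.g., LERC) next

    Only the bytes of the requested tile cross the network, which is what makes whole-scene files in object storage usable without downloading them.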

  3. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carns, Philip; Harms, Kevin; Jenkins, John

    Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
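
    For intuition, a minimal placement sketch using rendezvous (highest-random-weight) hashing, which, like CRUSH, maps each object to a pseudo-random server set; the real CRUSH algorithm is hierarchical and weight-aware, so this is only an analogy. The fan-out count at the end hints at why placement policy bounds rebuild parallelism:

        import hashlib

        def replica_servers(obj_id: str, servers: list, r: int = 3) -> list:
            """Rank servers by a keyed hash; the top r hold the replicas."""
            def score(server: str) -> int:
                h = hashlib.sha256(f"{obj_id}:{server}".encode()).digest()
                return int.from_bytes(h[:8], "big")
            return sorted(servers, key=score, reverse=True)[:r]

        servers = [f"osd{i:03d}" for i in range(128)]

        # Rebuild fan-out for one failed server on a toy workload: how many
        # distinct peers hold a replica of something the failed server held?
        failed, peers = "osd007", set()
        for i in range(2000):
            placement = replica_servers(f"obj-{i}", servers)
            if failed in placement:
                peers.update(s for s in placement if s != failed)
        print(f"{len(peers)} peers can participate in rebuilding {failed}")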

  4. Motivation and Design of the Sirocco Storage System Version 1.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew Leon; Ward, H. Lee; Danielson, Geoffrey Charles

    Sirocco is a massively parallel, high performance storage system for the exascale era. It emphasizes client-to-client coordination, low server-side coupling, and free data movement to improve resilience and performance. Its architecture is inspired by peer-to-peer and victim-cache architectures. By leveraging these ideas, Sirocco natively supports several media types, including RAM, flash, disk, and archival storage, with automatic migration between levels. Sirocco also includes storage interfaces and support that are more advanced than typical block storage. Sirocco enables clients to efficiently use key-value storage or block-based storage with the same interface. It also provides several levels of transactional data updates within a single storage command, including full ACID-compliant updates. This transaction support extends to updating several objects within a single transaction. Further support is provided for concurrency control, enabling greater performance for workloads while providing safe concurrent modification. By pioneering these and other technologies and techniques in the storage system, Sirocco is poised to fulfill a need for a massively scalable, write-optimized storage system for exascale systems. This is version 1.0 of a document reflecting the current and planned state of Sirocco. Further versions of this document will be accessible at http://www.cs.sandia.gov/Scalable_IO/sirocco.

  5. DataForge: Modular platform for data storage and analysis

    NASA Astrophysics Data System (ADS)

    Nozik, Alexander

    2018-04-01

    DataForge is a framework for automated data acquisition, storage, and analysis built on modern applied-programming practice. It aims to automate standard tasks such as parallel data processing, logging, output sorting, and distributed computing. The framework also makes extensive use of declarative programming principles through its metadata concept, which allows a certain degree of metaprogramming and improves the reproducibility of results.

  6. LOGISTIC MANAGEMENT INFORMATION SYSTEM - MANUAL DATA STORAGE AND RETRIEVAL SYSTEM.

    DTIC Science & Technology

    Logistics Management Information System. The procedures are applicable to manual storage and retrieval of all data used in the Logistics Management Information System (LMIS) and include the following: (1) Action Officer data source file. (2) Action Officer presentation format file. (3) LMI Coordination

  7. Managing security and privacy concerns over data storage in healthcare research.

    PubMed

    Mackenzie, Isla S; Mantay, Brian J; McDonnell, Patrick G; Wei, Li; MacDonald, Thomas M

    2011-08-01

    Issues surrounding data security and privacy are of great importance when handling sensitive health-related data for research. The emphasis in the past has been on balancing the risks to individuals with the benefit to society of the use of databases for research. However, a new way of looking at such issues is that by optimising procedures and policies regarding security and privacy of data to the extent that there is no appreciable risk to the privacy of individuals, we can create a 'win-win' situation in which everyone benefits, and pharmacoepidemiological research can flourish with public support. We discuss holistic measures, involving both information technology and people, taken to improve the security and privacy of data storage. After an internal review, we commissioned an external audit by an independent consultant with a view to optimising our data storage and handling procedures. Improvements to our policies and procedures were implemented as a result of the audit. By optimising our storage of data, we hope to inspire public confidence and hence cooperation with the use of health care data in research. Copyright © 2011 John Wiley & Sons, Ltd.

  8. User and group storage management the CMS CERN T2 centre

    NASA Astrophysics Data System (ADS)

    Cerminara, G.; Franzoni, G.; Pfeiffer, A.

    2015-12-01

    A wide range of detector commissioning, calibration, and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionality of the EOS disk-only storage technology, optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access patterns, and long-term tape archival. Resource management has been organised around the definition of working groups and the delegation of each group's composition to an identified responsible person. In this paper we illustrate the user/group storage management and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.

  9. Classification of CO2 Geologic Storage: Resource and Capacity

    USGS Publications Warehouse

    Frailey, S.M.; Finley, R.J.

    2009-01-01

    The use of the term capacity to describe possible geologic storage implies a realistic or likely volume of CO2 to be sequestered. Poor data quantity and quality may lead to very high uncertainty in the storage estimate. Use of the term "storage resource" alleviates the implied certainty of the term "storage capacity". This is especially important to non-scientists (e.g. policy makers) because "capacity" is commonly used to describe the very specific and more certain quantities such as volume of a gas tank or a hotel's overnight guest limit. Resource is a term used in the classification of oil and gas accumulations to infer lesser certainty in the commercial production of oil and gas. Likewise for CO2 sequestration, a suspected porous and permeable zone can be classified as a resource, but capacity can only be estimated after a well is drilled into the formation and a relatively higher degree of economic and regulatory certainty is established. Storage capacity estimates are lower risk or higher certainty compared to storage resource estimates. In the oil and gas industry, prospective resource and contingent resource are used for estimates with less data and certainty. Oil and gas reserves are classified as Proved and Unproved, and by analogy, capacity can be classified similarly. The highest degree of certainty for an oil or gas accumulation is Proved, Developed Producing (PDP) Reserves. For CO2 sequestration this could be Proved Developed Injecting (PDI) Capacity. A geologic sequestration storage classification system is developed by analogy to that used by the oil and gas industry. When a CO2 sequestration industry emerges, storage resource and capacity estimates will be considered a company asset and consequently regulated by the Securities and Exchange Commission. Additionally, storage accounting and auditing protocols will be required to confirm projected storage estimates and assignment of credits from actual injection. An example illustrates the use of these terms and how storage classification changes as new data become available. © 2009 Elsevier Ltd. All rights reserved.

  10. High-speed asynchronous data multiplexer/demultiplexer for high-density digital recorders

    NASA Astrophysics Data System (ADS)

    Berdugo, Albert; Small, Martin B.

    1996-11-01

    Modern High Density Digital Recorders are ideal devices for the storage of large amounts of digital and/or wideband analog data. Ruggedized versions of these recorders are currently available and are supporting many military and commercial flight test applications. However, in certain cases, the storage format becomes very critical, e.g., when a large number of data types are involved, or when channel-to-channel correlation is critical, or when the original data source must be accurately recreated during post mission analysis. A properly designed storage format will not only preserve data quality, but will yield the maximum storage capacity and record time for any given recorder family or data type. This paper describes a multiplex/demultiplex technique that formats multiple high speed data sources into a single, common format for recording. The method is compatible with many popular commercial recorder standards such as DCRsi, VLDS, and DLT. Types of input data typically include PCM, wideband analog data, video, aircraft data buses, avionics, voice, time code, and many others. The described method preserves tight data correlation with minimal data overhead. The described technique supports full reconstruction of the original input signals during data playback. Output data correlation across channels is preserved for all types of data inputs. Simultaneous real-time data recording and reconstruction are also supported.
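
    As a rough sketch of the multiplexing idea (field sizes and layout are assumptions, not the DCRsi/VLDS/DLT record format): each source packet is wrapped with a channel ID and timestamp, so asynchronous sources can share one recorded stream and be demultiplexed later with their timing correlation intact.

        import struct

        # channel id (1 byte), timestamp in ns (8 bytes), payload length (4 bytes)
        HEADER = struct.Struct(">BQI")

        def mux(channel: int, timestamp_ns: int, payload: bytes) -> bytes:
            return HEADER.pack(channel, timestamp_ns, len(payload)) + payload

        def demux(stream: bytes):
            pos = 0
            while pos < len(stream):
                ch, ts, n = HEADER.unpack_from(stream, pos)
                pos += HEADER.size
                yield ch, ts, stream[pos:pos + n]
                pos += n

        record = mux(1, 1_000_000, b"PCM frame") + mux(2, 1_000_250, b"video slice")
        for ch, ts, payload in demux(record):
            print(ch, ts, payload)   # each source recovered with its timestamp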

  11. Implementation of system intelligence in a 3-tier telemedicine/PACS hierarchical storage management system

    NASA Astrophysics Data System (ADS)

    Chao, Woodrew; Ho, Bruce K. T.; Chao, John T.; Sadri, Reza M.; Huang, Lu J.; Taira, Ricky K.

    1995-05-01

    Our tele-medicine/PACS archive system is based on a three-tier distributed hierarchical architecture, including magnetic disk farms, optical jukebox, and tape jukebox sub-systems. The hierarchical storage management (HSM) architecture, built around a low cost high performance platform [personal computers (PC) and Microsoft Windows NT], presents a very scaleable and distributed solution ideal for meeting the needs of client/server environments such as tele-medicine, tele-radiology, and PACS. These image based systems typically require storage capacities mirroring those of film based technology (multi-terabyte with 10+ years storage) and patient data retrieval times at near on-line performance as demanded by radiologists. With the scaleable architecture, storage requirements can be easily configured to meet the needs of the small clinic (multi-gigabyte) to those of a major hospital (multi-terabyte). The patient data retrieval performance requirement was achieved by employing system intelligence to manage migration and caching of archived data. Relevant information from HIS/RIS triggers prefetching of data whenever possible based on simple rules. System intelligence embedded in the migration manager allows the clustering of patient data onto a single tape during data migration from optical to tape medium. Clustering of patient data on the same tape eliminates multiple tape loading and the associated seek time during patient data retrieval. Optimal tape performance can then be achieved by utilizing the tape drive's high-performance data streaming capabilities, thereby reducing typical data retrieval delays associated with streaming tape devices.
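
    A toy sketch of the rule-based prefetching described above, where an HIS/RIS event triggers migration of a patient's archived studies from tape to disk cache before they are requested; all class and field names here are invented for illustration.

        class Hsm:
            def __init__(self):
                # tape tier: archived studies per patient (toy data)
                self.tape = {"pat1": ["ct_2013", "mr_2014"], "pat2": ["cr_2015"]}
                self.disk_cache = {}

            def prefetch(self, patient_id: str) -> None:
                """Migrate a patient's archived studies tape -> disk ahead of need."""
                studies = self.tape.get(patient_id, [])
                self.disk_cache.setdefault(patient_id, []).extend(studies)

        def on_ris_event(hsm: Hsm, event: dict) -> None:
            # Simple rule: a scheduled exam triggers prefetch of prior studies.
            if event["type"] == "exam_scheduled":
                hsm.prefetch(event["patient_id"])

        hsm = Hsm()
        on_ris_event(hsm, {"type": "exam_scheduled", "patient_id": "pat1"})
        print(hsm.disk_cache)   # {'pat1': ['ct_2013', 'mr_2014']}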

  12. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marschman, Steven C.; Warmann, Stephan A.; Rusch, Chris

    The U.S. Department of Energy Office of Nuclear Energy (DOE-NE), Office of Fuel Cycle Technology, has established the Used Fuel Disposition Campaign (UFDC) to conduct the research and development activities related to storage, transportation, and disposal of used nuclear fuel and high-level radioactive waste. The mission of the UFDC is to identify alternatives and conduct scientific research and technology development to enable storage, transportation and disposal of used nuclear fuel (UNF) and wastes generated by existing and future nuclear fuel cycles. The UFDC Storage and Transportation staffs are responsible for addressing issues regarding the extended or long-term storage of UNF and its subsequent transportation. The near-term objectives of the Storage and Transportation task are to use a science-based approach to develop the technical bases to support the continued safe and secure storage of UNF for extended periods, subsequent retrieval, and transportation. While low burnup fuel [characterized as having a burnup of less than 45 gigawatt days per metric tonne uranium (GWD/MTU)] has been stored for nearly three decades, the storage of high burnup used fuels is more recent. The DOE has funded a demonstration project to confirm the behavior of used high burnup fuel under prototypic conditions. The Electric Power Research Institute (EPRI) is leading a project team to develop and implement the Test Plan to collect this data from a UNF dry storage system containing high burnup fuel. The Draft Test Plan for the demonstration outlines the data to be collected; the high burnup fuel to be included; the technical data gaps the data will address; and the storage system design, procedures, and licensing necessary to implement the Test Plan. To provide data that is most relevant to high burnup fuel in dry storage, the design of the test storage system must closely mimic the real conditions that high burnup SNF experiences during all stages of dry storage: loading, cask drying, inert gas backfilling, and transfer to an Independent Spent Fuel Storage Installation (ISFSI) for multi-year storage. To document the initial condition of the used fuel prior to emplacement in a storage system, "sister" fuel rods will be harvested and sent to a national laboratory for characterization and archival purposes. This report supports the demonstration by describing how sister rods will be shipped and received at a national laboratory, and recommending basic nondestructive and destructive analyses to assure the fuel rods are adequately characterized for UFDC work. For this report, a hub-and-spoke model is proposed, with one location serving as the hub for fuel rod receipt and characterization. In this model, fuel and/or clad would be sent to other locations when capabilities at the hub were inadequate or nonexistent. This model has been proposed to reduce DOE-NE's obligation for waste cleanup and decontamination of equipment.

  14. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Barberis, D.

    2016-09-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.

  15. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage

    PubMed Central

    Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-01-01

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query. PMID:29652810
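
    One simple way to hide query keywords from the cloud, sketched below, is a keyed-token index: the client derives a deterministic HMAC token per keyword, and the server matches tokens without ever seeing plaintext. This is a generic illustration of searchable encryption, not ENSURE's actual construction; key management and result-pattern hiding are omitted.

        import hmac, hashlib

        KEY = b"client-secret-key"   # held only by the mobile device

        def token(keyword: str) -> bytes:
            return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).digest()

        # Indexing (client side): documents are encrypted separately; the index
        # maps keyword tokens to document ids, so it leaks no keyword text.
        index = {}
        for doc_id, words in {"doc1": ["sensor", "edge"], "doc2": ["edge", "cloud"]}.items():
            for w in words:
                index.setdefault(token(w), set()).add(doc_id)

        # Query (server/edge side): match on the opaque token alone.
        print(index.get(token("edge"), set()))   # {'doc1', 'doc2'}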

  16. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage.

    PubMed

    Guo, Yeting; Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-04-13

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query.

  17. Inventory and review of aquifer storage and recovery in southern Florida

    USGS Publications Warehouse

    Reese, Ronald S.

    2002-01-01

    Aquifer storage and recovery in southern Florida has been proposed on an unprecedented scale as part of the Comprehensive Everglades Restoration Plan. Aquifer storage and recovery wells were constructed or are under construction at 27 sites in southern Florida, mostly by local municipalities or counties located in coastal areas. The Upper Floridan aquifer, the principal storage zone of interest to the restoration plan, is the aquifer being used at 22 of the sites. The aquifer is brackish to saline in southern Florida, which can greatly affect the recovery of the freshwater recharged and stored. Well data were inventoried and compiled for all wells at most of the 27 sites. Construction and testing data were compiled into four main categories: (1) well identification, location, and construction data; (2) hydraulic test data; (3) ambient formation water-quality data; and (4) cycle testing data. Each cycle during testing or operation includes periods of recharge of freshwater, storage, and recovery that each last days or months. Cycle testing data include calculations of recovery efficiency, which is the percentage of the total amount of potable water recharged for each cycle that is recovered. Calculated cycle test data include potable water recovery efficiencies for 16 of the 27 sites. However, the number of cycles at most sites was limited; except for two sites, the highest number of cycles was five. Only nine sites had a recovery efficiency above 10 percent for the first cycle, and 10 sites achieved a recovery efficiency above 30 percent during at least one cycle. The highest recovery efficiency achieved per cycle was 84 percent for cycle 16 at the Boynton Beach site. Factors that could affect recovery of freshwater varied widely between sites. The thickness of the open storage zone at all sites ranged from 45 to 452 feet. For sites with the storage zone in the Upper Floridan aquifer, transmissivity based on tests of the storage zones ranged from 800 to 108,000 feet squared per day, leakance values indicated that confinement is not good in some areas, and the chloride concentration of ambient water ranged from 500 to 11,000 milligrams per liter. Based on review of four case studies and data from other sites, several hydrogeologic and design factors appear to be important to the performance of aquifer storage and recovery in the Floridan aquifer system. Performance is maximized when the storage zone is thin and located at the top of the Upper Floridan aquifer, and transmissivity and salinity of the storage zone are moderate (less than 30,000 feet squared per day and 3,000 milligrams per liter of chloride concentration, respectively). The structural setting at a site could also be important because of the potential for updip migration of a recharged freshwater bubble due to density contrast or loss of overlying confinement due to deformation.
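
    The recovery-efficiency calculation used throughout the report reduces to a single ratio; a minimal sketch:

        def recovery_efficiency(recovered_potable: float, recharged: float) -> float:
            """Percent of the potable water recharged in a cycle that is recovered."""
            return 100.0 * recovered_potable / recharged

        # e.g., a cycle recharging 100 Mgal and recovering 84 Mgal potable water
        # gives the 84 percent reported for cycle 16 at the Boynton Beach site.
        print(recovery_efficiency(84.0, 100.0))   # 84.0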

  18. Seneca Compressed Air Energy Storage (CAES) Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-11-30

    This document provides specifications for the process air compressor for a compressed air storage project, requests a budgetary quote, and provides supporting information, including compressor data, site specific data, water analysis, and Seneca CAES value drivers.

  19. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 31 2010-07-01 2010-07-01 true Specimen and data storage facilities. 792.51 Section 792.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  20. 31. Perimeter acquisition radar building room #318, data storage "racks"; ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    31. Perimeter acquisition radar building room #318, data storage "racks"; sign read: M&D controller, logic control buffer, data transmission controller - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND

  1. 49 CFR 242.203 - Retaining information supporting determinations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... integrity of the electronic data storage system, including the prevention of unauthorized access to the program logic or individual records; (2) The program and data storage system must be protected by a... making the determinations. (b) A railroad shall retain the following information: (1) Relevant data from...

  2. 49 CFR 242.203 - Retaining information supporting determinations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... integrity of the electronic data storage system, including the prevention of unauthorized access to the program logic or individual records; (2) The program and data storage system must be protected by a... making the determinations. (b) A railroad shall retain the following information: (1) Relevant data from...

  3. 49 CFR 242.203 - Retaining information supporting determinations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... integrity of the electronic data storage system, including the prevention of unauthorized access to the program logic or individual records; (2) The program and data storage system must be protected by a... making the determinations. (b) A railroad shall retain the following information: (1) Relevant data from...

  4. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state-of-health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems are accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem architectures using PxFS and QFS were found to be incompatible with our software architecture, so sharing of data between systems is accomplished via traditional NFS. Linux was found to be limited in terms of deployment flexibility and consistency between versions. Despite the experimentation with various technologies, our current virtualized architecture is stable to the point of an average daily real-time data return rate of 92.34% over the entire lifetime of the project to date.
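
    A back-of-envelope check of the quoted data volume (the per-station channel count and any compression are not stated in the abstract, so this only shows what ~1.7 TB/year implies):

        stations, sample_rate = 400, 40          # from the abstract
        seconds_per_year = 365 * 86400
        total_bytes = 1.7e12                     # ~1.7 TB/year, as quoted

        per_station_bps = total_bytes / (stations * seconds_per_year)
        print(f"{per_station_bps:.0f} bytes/s per station")         # ~135 B/s
        print(f"{per_station_bps / sample_rate:.2f} bytes/sample")  # ~3.4 B
        # Roughly one uncompressed 24-bit stream per station; multi-channel
        # data at this volume implies compression, which is common for
        # seismic data formats.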

  5. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.

  6. Assessing Drought Impacts on Water Storage using GRACE Satellites and Regional Groundwater Modeling in the Central Valley of California

    NASA Astrophysics Data System (ADS)

    Scanlon, B. R.; Zhang, Z.; Save, H.; Faunt, C. C.; Dettinger, M. D.

    2015-12-01

    Increasing concerns about drought impacts on water resources in California underscore the need to better understand the effects of drought on water storage and coping strategies. Here we use a new GRACE mascons solution with high spatial resolution (1 degree), developed at the Univ. of Texas Center for Space Research (CSR), and output from the most recent regional groundwater model developed by the U.S. Geological Survey to evaluate changes in water storage in response to recent droughts. We also extend the analysis of drought impacts on water storage back to the 1980s using modeling and monitoring data. The drought has been intensifying since 2012, with almost 50% of the state and 100% of the Central Valley under exceptional drought in 2015. Total water storage from GRACE data declined sharply during the current drought, similar to the rate of depletion during the previous drought in 2007-2009. However, only 45% average recovery between the two droughts results in a much greater cumulative impact of both droughts. The CSR GRACE mascons data offer unprecedented spatial resolution with no leakage to the oceans and no requirement for signal restoration. Snow and reservoir storage declines contribute to the total water storage depletion estimated by GRACE, with the residuals attributed to groundwater storage. Rates of groundwater storage depletion are consistent with the results of regional groundwater modeling in the Central Valley. Traditional approaches to coping with these climate extremes have focused on surface water reservoir storage; however, conjunctive use of surface water and groundwater, including storage of excess water from wet periods in depleted aquifers, is increasing in the Central Valley.
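
    The attribution step described above is a simple residual: the groundwater change is what remains of the GRACE total water storage change after removing the snow and reservoir components. A minimal sketch with hypothetical numbers:

        def groundwater_change(d_tws: float, d_snow: float, d_reservoir: float) -> float:
            """dGW = dTWS - dSnow - dReservoir (all in the same units, e.g. km^3)."""
            return d_tws - d_snow - d_reservoir

        # Hypothetical illustration only: a 40 km^3 total decline with 6 km^3
        # from snow and 10 km^3 from reservoirs leaves 24 km^3 from groundwater.
        print(groundwater_change(-40.0, -6.0, -10.0))   # -24.0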

  7. A study of mass data storage technology for rocket engine data

    NASA Technical Reports Server (NTRS)

    Ready, John F.; Benser, Earl T.; Fritz, Bernard S.; Nelson, Scott A.; Stauffer, Donald R.; Volna, William M.

    1990-01-01

    The results of a nine month study program on mass data storage technology for rocket engine (especially the Space Shuttle Main Engine) health monitoring and control are summarized. The program had the objective of recommending a candidate mass data storage technology development for rocket engine health monitoring and control and of formulating a project plan and specification for that technology development. The work was divided into three major technical tasks: (1) development of requirements; (2) survey of mass data storage technologies; and (3) definition of a project plan and specification for technology development. The first of these tasks reviewed current data storage technology and developed a prioritized set of requirements for the health monitoring and control applications. The second task included a survey of state-of-the-art and newly developing technologies and a matrix-based ranking of the technologies. It culminated in a recommendation of optical disk technology as the best candidate for technology development. The final task defined a proof-of-concept demonstration, including tasks required to develop, test, analyze, and demonstrate the technology advancement, plus an estimate of the level of effort required. The recommended demonstration emphasizes development of an optical disk system which incorporates an order-of-magnitude increase in writing speed above the current state of the art.

  8. Combined statistical analyses for long-term stability data with multiple storage conditions: a simulation study.

    PubMed

    Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R

    2014-01-01

    Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and higher numbers of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis does estimate the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
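
    A sketch of the contrast on simulated stability data, assuming straight-line degradation per condition (numpy only; the slopes, noise level, and time points are invented). The point is that the combined analysis pools residual degrees of freedom across conditions, which is where its more precise shelf-life estimates come from:

        import numpy as np

        rng = np.random.default_rng(1)
        months = np.array([0.0, 3, 6, 9, 12, 18, 24])
        true_slopes = {"25C/60%RH": -0.05, "30C/65%RH": -0.09}   # % assay/month

        X = np.column_stack([np.ones_like(months), months])
        sse, dof = 0.0, 0
        for name, slope in true_slopes.items():
            y = 100.0 + slope * months + rng.normal(0, 0.3, months.size)
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            # Separate analysis: sigma^2 estimated per condition on n-2 = 5 dof.
            print(f"{name}: slope {beta[1]:.3f}, sigma2 {resid @ resid / 5:.4f}")
            sse += float(resid @ resid)
            dof += months.size - 2
        # Combined analysis: one pooled sigma^2 on 10 dof -> tighter intervals.
        print(f"pooled sigma2: {sse / dof:.4f} on {dof} dof")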

  9. What CFOs should know before venturing into the cloud.

    PubMed

    Rajendran, Janakan

    2013-05-01

    There are three major trends in the use of cloud-based services for healthcare IT: Cloud computing involves the hosting of health IT applications in a service provider cloud. Cloud storage is a data storage service that can involve, for example, long-term storage and archival of information such as clinical data, medical images, and scanned documents. Data center colocation involves rental of secure space in the cloud from a vendor, an approach that allows a hospital to share power capacity and proven security protocols, reducing costs.

  10. ACCELERATORS: Preliminary application of turn-by-turn data analysis to the SSRF storage ring

    NASA Astrophysics Data System (ADS)

    Chen, Jian-Hui; Zhao, Zhen-Tang

    2009-07-01

    There is growing interest in utilizing the beam position monitor turn-by-turn (TBT) data to debug accelerators. TBT data can be used to determine the linear optics, coupled optics and nonlinear behaviors of the storage ring lattice. This is not only a useful complement to other methods of determining the linear optics such as LOCO, but also provides a possibility to uncover more hidden phenomena. In this paper, a preliminary application of a β function measurement to the SSRF storage ring is presented.

  11. Evolving Storage and Cyber Infrastructure at the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen; Duffy, Daniel; Spear, Carrie; Sinno, Scott; Vaughan, Garrison; Bowen, Michael

    2018-01-01

    This talk will describe recent developments at the NASA Center for Climate Simulation (NCCS), which is funded by NASA's Science Mission Directorate and supports the specialized data storage and computational needs of weather, ocean, and climate researchers, as well as astrophysicists, heliophysicists, and planetary scientists. To meet requirements for higher-resolution, higher-fidelity simulations, the NCCS is augmenting its High Performance Computing (HPC) and storage/retrieval environment. As the petabytes of model and observational data grow, the NCCS is broadening its data services offerings and deploying and expanding virtualization resources for high-performance analytics.

  12. Biophotopol: A Sustainable Photopolymer for Holographic Data Storage Applications.

    PubMed

    Ortuño, Manuel; Gallego, Sergi; Márquez, Andrés; Neipp, Cristian; Pascual, Inmaculada; Beléndez, Augusto

    2012-05-02

    Photopolymers have proved to be useful for different holographic applications such as holographic data storage or holographic optical elements. However, most photopolymers have certain undesirable features, such as the toxicity of some of their components or their low environmental compatibility. For this reason, the Holography and Optical Processing Group at the University of Alicante developed a new dry photopolymer with low toxicity and high thickness called biophotopol, which is very adequate for holographic data storage applications. In this paper we describe our recent studies on biophotopol and the main characteristics of this material.

  13. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets, MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options to retrieve training image data: (1) PNG-formatted image files on local file system; (2) pushing pixel arrays from image files into a single HDF5 file on local file system; (3) in-memory arrays to hold the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge tree based key-value storage; and (5) loading the training data into LMDB, a B+tree based key-value storage. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value storage based storage systems. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements on training time, this study provides in-depth analysis on the cause of performance advantages/disadvantages of each back-end to train deep neural networks. We envision the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
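
    A sketch of the fastest back-end option in the study, writing image arrays into LMDB via the py-lmdb package (Caffe wraps each value in a Datum protobuf; raw bytes are used here to keep the sketch self-contained):

        import lmdb                      # pip install lmdb
        import numpy as np

        env = lmdb.open("train_db", map_size=1 << 30)   # fixed 1 GiB address map

        # Write: many small records in one write transaction, not many files.
        with env.begin(write=True) as txn:
            for i in range(1000):
                img = np.zeros((3, 32, 32), dtype=np.uint8)   # stand-in pixels
                txn.put(f"{i:08d}".encode(), img.tobytes())

        # Read: a single key lookup replaces a per-image file open/read/close,
        # which is the overhead that penalizes the image-file back-end at scale.
        with env.begin() as txn:
            raw = txn.get(b"00000042")
            img = np.frombuffer(raw, dtype=np.uint8).reshape(3, 32, 32)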

  14. The challenge of a data storage hierarchy

    NASA Technical Reports Server (NTRS)

    Ruderman, Michael

    1992-01-01

    A discussion of Mesa Archival Systems' data archiving system is presented. This data archiving system is strictly a software system that is implemented on a mainframe and manages data in permanent file storage. Emphasis is placed on the fact that any kind of client system on the network can be connected through the Unix interface of the data archiving system.

  15. Data Processing Center of Radioastron Project: 3 years of operation.

    NASA Astrophysics Data System (ADS)

    Shatskaya, Marina

    The ASC Data Processing Center (DPC) of the Radioastron Project is a fail-safe, centralized complex of interconnected software and hardware components together with organizational procedures. The tasks facing the scientific data processing center are the organization of service information exchange, the collection of scientific data, the storage of all scientific data, and science-oriented data processing. The DPC takes part in the information exchange with two tracking stations in Pushchino (Russia) and Green Bank (USA), about 30 ground telescopes, the ballistic center, the tracking headquarters, and the session scheduling center. Enormous flows of information go to the Astro Space Center. To handle these data volumes, we developed a specialized network infrastructure, Internet channels, and storage. The computer complex has been designed at the Astro Space Center (ASC) of the Lebedev Physical Institute and includes 800 TB of on-line storage, a 2000 TB hard drive archive, a backup system on magnetic tapes (2000 TB), 24 TB of redundant storage at the Pushchino Radio Astronomy Observatory, Web and FTP servers, and DPC management and data transmission networks. The structure and functions of the ASC Data Processing Center are fully adequate to the data processing requirements of the Radioastron Mission, as was successfully confirmed during the Fringe Search, the Early Science Program, and the first year of the Key Science Program.

  16. Environmental Data Store (EDS): A multi-node Data Storage Facility for diverse sets of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Piasecki, M.; Ji, P.

    2014-12-01

    Geoscience data comes in many flavors determined by data type: continuous data on a grid or mesh, or discrete data collected at points, either as one-time samples or as streams coming off sensors; it can also encompass digital files of any type, such as text files, WORD or EXCEL documents, or audio and video files. We present a storage facility comprised of six nodes, each specialized to host a certain data type: grid-based data (netCDF on a THREDDS server), GIS data (shapefiles using GeoServer), point time series data (CUAHSI ODM), sample data (EDBS), and any digital data (RAMADDA), plus a server for remote sensing data and its products. While there is overlap in data type storage capabilities (rasters can go into several of these nodes), we prefer to use dedicated storage facilities that are (a) freeware, (b) have a good degree of maturity, and (c) have shown their utility for storing a certain type. This arrangement also allows us to place these commonly used software stacks and storage solutions side by side to develop interoperability strategies. We have used a DRUPAL-based system to handle user registration and authentication, and also use that system for data submission and data search. In support of this system we developed an extensive controlled vocabulary that is an amalgamation of various CVs used in the geoscience community, in order to achieve as high a degree of recognition as possible: the CF conventions, CUAHSI CVs, NASA (GCMD), EPA and USGS taxonomies, and GEMET, in addition to ontological representations such as SWEET.

  17. Evaluation of relational and NoSQL database architectures to manage genomic annotations.

    PubMed

    Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard

    2016-12-01

    While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
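
    To make the comparison concrete, a runnable relational sketch using stdlib sqlite3 as a stand-in (the study used MySQL and PostgreSQL; the schema below is a simplified, hypothetical slice of a dbSNP annotation record), with the document-oriented equivalent shown in comments:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE snp (
            rsid TEXT PRIMARY KEY, chrom TEXT, pos INTEGER, ref TEXT, alt TEXT)""")
        con.execute("CREATE INDEX idx_locus ON snp (chrom, pos)")
        con.execute("INSERT INTO snp VALUES ('rs6025', '1', 169519049, 'C', 'T')")

        row = con.execute(
            "SELECT rsid, ref, alt FROM snp WHERE chrom = ? AND pos = ?",
            ("1", 169519049)).fetchone()
        print(row)   # ('rs6025', 'C', 'T')

        # Document-oriented equivalent (MongoDB via pymongo), for comparison:
        #   db.snp.insert_one({"rsid": "rs6025", "chrom": "1", "pos": 169519049,
        #                      "ref": "C", "alt": "T"})
        #   db.snp.find_one({"chrom": "1", "pos": 169519049})
        # The document model stores each annotation whole, avoiding joins across
        # side tables, one plausible reason for the speed advantages the paper
        # measured for its NoSQL back-ends.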

  18. An Information Storage and Retrieval System for Biological and Geological Data. Interim Report.

    ERIC Educational Resources Information Center

    Squires, Donald F.

    A project is being conducted to test the feasibility of an information storage and retrieval system for museum specimen data, particularly for natural history museums. A pilot data processing system has been developed, with the specimen records from the national collections of birds, marine crustaceans, and rocks used as sample data. The research…

  19. Means of storage and automated monitoring of versions of text technical documentation

    NASA Astrophysics Data System (ADS)

    Leonovets, S. A.; Shukalov, A. V.; Zharinov, I. O.

    2018-03-01

    The paper presents automation of the preparation, storage, and version monitoring of textual design and program documentation by means of specialized software. Automation of documentation preparation is based on processing the engineering data contained in the specifications and technical documentation. Data handling assumes the existence of strictly structured electronic documents prepared in widespread formats according to templates based on industry standards, and the automated generation of the program or design text document from them. The subsequent life cycle of the document, and of the engineering data it contains, is then controlled, with archival data storage carried out at each stage of the life cycle. Performance studies of different widespread document formats under automated monitoring and storage are given. The newly developed software and the workbenches available to the developer of instrument equipment are described.

  20. Desiderata for Healthcare Integrated Data Repositories Based on Architectural Comparison of Three Public Repositories

    PubMed Central

    Huser, Vojtech; Cimino, James J.

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network’s Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management. PMID:24551366

  1. Desiderata for healthcare integrated data repositories based on architectural comparison of three public repositories.

    PubMed

    Huser, Vojtech; Cimino, James J

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network's Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management.
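
    One of the storage-model classes compared across these repositories is the narrow entity-attribute-value (EAV) fact table, of which i2b2's observation_fact is the best-known example. A minimal sqlite3 sketch (column names simplified, not the real i2b2 DDL):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE observation_fact (
            patient_num INTEGER, concept_cd TEXT, start_date TEXT, nval_num REAL)""")

        # Sample facts of different kinds all land in the same narrow table;
        # the concept code carries the terminology binding.
        facts = [
            (1, "LOINC:2345-7", "2013-01-05", 98.0),    # glucose result
            (1, "ICD9:250.00", "2013-01-05", None),     # diabetes diagnosis
            (2, "RXNORM:860975", "2013-02-11", None),   # metformin prescription
        ]
        con.executemany("INSERT INTO observation_fact VALUES (?,?,?,?)", facts)

        # Cohort-style query: patients with a diabetes code.
        print(con.execute("""SELECT DISTINCT patient_num FROM observation_fact
                             WHERE concept_cd LIKE 'ICD9:250%'""").fetchall())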

  2. RALPH: An online computer program for acquisition and reduction of pulse height data

    NASA Technical Reports Server (NTRS)

    Davies, R. C.; Clark, R. S.; Keith, J. E.

    1973-01-01

    A background/foreground data acquisition and analysis system incorporating a high level control language was developed for acquiring both singles and dual parameter coincidence data from scintillation detectors at the Radiation Counting Laboratory at the NASA Manned Spacecraft Center in Houston, Texas. The system supports acquisition of gamma ray spectra in a 256 x 256 coincidence matrix (utilizing disk storage) and simultaneous operation of any of several background support and data analysis functions. In addition to special instruments and interfaces, the hardware consists of a PDP-9 with 24K core memory, 256K words of disk storage, and Dectape and Magtape bulk storage.
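
    As an illustration of the dual-parameter acquisition described above, the sketch below accumulates coincidence events into a 256 x 256 pulse-height matrix and projects out a singles spectrum. It is a modern Python stand-in for the original PDP-9 code; the function names and event format are assumptions.

      import numpy as np

      N_CHANNELS = 256  # channels per pulse-height axis, per the abstract

      def accumulate(events, matrix=None):
          # events: iterable of (channel_a, channel_b) coincidence pairs
          if matrix is None:
              matrix = np.zeros((N_CHANNELS, N_CHANNELS), dtype=np.uint32)
          for a, b in events:
              if 0 <= a < N_CHANNELS and 0 <= b < N_CHANNELS:
                  matrix[a, b] += 1
          return matrix

      def singles_spectrum(matrix, axis=0):
          # a singles spectrum is the projection of the matrix onto one axis
          return matrix.sum(axis=axis)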

  3. Long-Term Outcomes of Laser Prostatectomy for Storage Symptoms: Comparison of Serial 5-Year Followup Data between High Performance System Photoselective Vaporization and Holmium Laser Enucleation of the Prostate.

    PubMed

    Cho, Min Chul; Song, Won Hoon; Park, Juhyun; Cho, Sung Yong; Jeong, Hyeon; Oh, Seung-June; Paick, Jae-Seung; Son, Hwancheol

    2018-06-01

    We compared long-term storage symptom outcomes between photoselective laser vaporization of the prostate with a 120 W high performance system and holmium laser enucleation of the prostate. We also determined factors influencing postoperative improvement of storage symptoms in the long term. Included in our study were 266 men, including 165 treated with prostate photoselective laser vaporization using a 120 W high performance system and 101 treated with holmium laser enucleation of the prostate, on whom 60-month followup data were available. Outcomes were assessed serially 6, 12, 24, 36, 48 and 60 months postoperatively using the International Prostate Symptom Score, uroflowmetry and the serum prostate specific antigen level. Postoperative improvement in storage symptoms was defined as a 50% or greater reduction in the subtotal storage symptom score at each followup visit after surgery compared to baseline. Improvements in frequency, urgency, nocturia, subtotal storage symptom scores and the quality of life index were maintained up to 60 months after photoselective laser vaporization or holmium laser enucleation of the prostate. There was no difference in the degree of improvement in storage symptoms or the percent of patients with postoperative improvement in storage symptoms between the 2 groups throughout the long-term followup. However, the holmium laser group showed greater improvement in voiding symptoms and quality of life than the laser vaporization group. On logistic regression analysis a higher baseline subtotal storage symptom score and a higher BOOI (Bladder Outlet Obstruction Index) were the factors influencing the improvement in storage symptoms 5 years after prostate photoselective laser vaporization or holmium laser enucleation. Our serial followup data suggest that storage symptom improvement was maintained throughout the long-term postoperative period for prostate photoselective laser vaporization with a 120 W high performance system and holmium laser enucleation without any difference between the 2 surgeries. Also, more severe storage symptoms at baseline and a more severe BOOI predicted improved storage symptoms in the long term after each surgery. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  4. Remotely-sensed near real-time monitoring of reservoir storage in India

    NASA Astrophysics Data System (ADS)

    Tiwari, A. D.; Mishra, V.

    2017-12-01

    Real-time reservoir storage information at a high temporal resolution is crucial for mitigating the influence of extreme events such as floods and droughts. Despite the large implications of near real-time reservoir monitoring in India for water resources and irrigation, remotely sensed monitoring systems have been lacking. Here we develop a remotely sensed real-time monitoring system for 91 large reservoirs in India for the period 2000 to 2017. For the reservoir storage estimation, we combined the Moderate Resolution Imaging Spectroradiometer (MODIS) 8-day 250 m Enhanced Vegetation Index (EVI) with elevation data from the Geoscience Laser Altimeter System (GLAS) onboard the Ice, Cloud, and land Elevation Satellite (ICESat). The highest temporal resolution at which MODIS vegetation data are available is 16 days; to reach an 8-day resolution, we developed an 8-day composite of near-infrared, red, and blue band surface reflectance. Because the 8-day L3 global 250 m surface reflectance product provides only the NIR and red bands, the blue band was taken from the 8-day L3 global 500 m product and regridded to 250 m spatial resolution. An area-elevation relationship was derived using areas from an unsupervised classification of the MODIS imagery, followed by image enhancement, together with elevation data from ICESat/GLAS. A trial-and-error method was used to obtain the area-elevation relationship for those reservoirs for which ICESat/GLAS data are not available. The reservoir storage estimates were compared with the gauge storage data from 2002 to 2009 (training period) and then evaluated for the period 2010 to 2016. Our storage estimates were highly correlated with observations (R2 = 0.6 to 0.96), and the normalized root mean square error (NRMSE) ranged between 10% and 50%. We also developed a relationship between precipitation and reservoir storage that can be used to predict storage during the dry season.
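
    A minimal sketch of the area-elevation (A-H) approach just described: fit an A-H curve from ICESat/GLAS elevations and MODIS-derived areas, then integrate area over elevation to obtain storage. The power-law form of the curve and all function names are assumptions for illustration, not the authors' exact formulation.

      import numpy as np
      from scipy.optimize import curve_fit

      def area_model(h, k, h0, b):
          # assumed power-law A-H relationship: A = k * (H - H0)**b
          return k * np.clip(h - h0, 0.0, None) ** b

      def fit_area_elevation(elev_icesat, area_modis):
          # fit the A-H curve from ICESat/GLAS elevations and MODIS areas
          popt, _ = curve_fit(area_model, elev_icesat, area_modis,
                              p0=(1.0, float(np.min(elev_icesat)), 1.5),
                              maxfev=10000)
          return popt

      def storage_from_area(area_obs, popt, h_min, h_max, n=2000):
          # invert A(H) numerically, then integrate A dH up to the water level
          h = np.linspace(h_min, h_max, n)
          a = area_model(h, *popt)
          h_obs = np.interp(area_obs, a, h)   # invert A -> H (a is increasing)
          mask = h <= h_obs
          return np.trapz(a[mask], h[mask])   # storage as the integral of A dH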

  5. Model-independent and fast determination of optical functions in storage rings via multiturn and closed-orbit data

    NASA Astrophysics Data System (ADS)

    Riemann, Bernard; Grete, Patrick; Weis, Thomas

    2011-06-01

    Multiturn (or turn-by-turn) data acquisition has proven to be a new source of direct measurements of Twiss parameters in storage rings. Closed-orbit measurements, on the other hand, are a long-known tool for analyzing closed-orbit perturbations with conventional beam position monitor (BPM) systems and are necessarily available at every storage ring. This paper aims at combining the advantages of multiturn measurements and closed-orbit data. We show that only two multiturn BPMs and four correctors in one localized drift space of the storage ring (the diagnostic drift) are sufficient for model-independent, absolute measurement of the β and φ functions at all BPMs, including the conventional ones, instead of requiring that every BPM be equipped with multiturn electronics.
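
    The sketch below shows the generic harmonic-analysis step behind such measurements: in a linear lattice the turn-by-turn reading at a BPM is x_n ≈ sqrt(2Jβ) cos(2πνn + φ), so the amplitude of the betatron spectral line carries β (up to the invariant 2J) and its phase carries φ. This is a textbook illustration, not the authors' algorithm.

      import numpy as np

      def betatron_line(x):
          # return (amplitude, phase, tune) of the dominant betatron line
          x = x - x.mean()
          spec = np.fft.rfft(x)
          k = np.argmax(np.abs(spec[1:])) + 1    # skip the DC bin
          tune = k / len(x)
          amp = 2.0 * np.abs(spec[k]) / len(x)
          phase = np.angle(spec[k])
          return amp, phase, tune

      # beta at BPM i is proportional to amp_i**2; once 2J is fixed from the
      # diagnostic drift, the absolute value follows as beta_i = amp_i**2 / (2J)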

  6. A novel algorithm for monitoring reservoirs under all-weather conditions at a high temporal resolution through passive microwave remote sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Gao, Huilin

    2016-08-01

    Flood mitigation in developing countries has been hindered by a lack of near real-time reservoir storage information at high temporal resolution. By leveraging satellite passive microwave observations over a reservoir and its vicinity, we present a globally applicable new algorithm to estimate reservoir storage under all-weather conditions at a 4 day time step. A weighted horizontal ratio (WHR) based on the brightness temperatures at 36.5 GHz is introduced, with its coefficients calibrated against an area training data set over each reservoir. Using a predetermined area-elevation (A-H) relationship, these coefficients are then applied to the microwave data to calculate the storage. Validation results over four reservoirs in South Asia indicate that the microwave-based storage estimations (after noise reduction) perform well (with coefficients of determination ranging from 0.41 to 0.74). This is the first time that passive microwave observations are fused with other satellite data for quantifying the storage of individual reservoirs.
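
    A hedged sketch of the pipeline the abstract outlines: form a weighted ratio from 36.5 GHz brightness temperatures over the reservoir and nearby land, map it to surface area with calibrated coefficients, and convert area to storage through the predetermined A-H curve. The specific ratio formula and the linear area mapping below are stand-ins for the paper's calibrated relationships.

      import numpy as np

      def weighted_horizontal_ratio(tb_water, tb_land, weights):
          # tb_water: 36.5 GHz brightness temps over reservoir pixels;
          # tb_land: temps over nearby land; open water depresses brightness temp
          ratio = (tb_land - tb_water) / (tb_land + tb_water)
          return np.average(ratio, weights=weights)

      def storage_from_whr(whr, area_coeffs, areas, heights):
          # areas/heights: the predetermined A-H curve, sorted by height
          a0, a1 = area_coeffs               # calibrated on the area training set
          area = a0 + a1 * whr               # assumed linear ratio-to-area mapping
          h = np.interp(area, areas, heights)
          mask = heights <= h
          return np.trapz(areas[mask], heights[mask])   # integral of A dH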

  7. Decibel: The Relational Dataset Branching System

    PubMed Central

    Maddox, Michael; Goehring, David; Elmore, Aaron J.; Madden, Samuel; Parameswaran, Aditya; Deshpande, Amol

    2017-01-01

    As scientific endeavors and data analysis become increasingly collaborative, there is a need for data management systems that natively support the versioning or branching of datasets to enable concurrent analysis, cleaning, integration, manipulation, or curation of data across teams of individuals. Common practice for sharing and collaborating on datasets involves creating or storing multiple copies of the dataset, one for each stage of analysis, with no provenance information tracking the relationships between these datasets. This results not only in wasted storage, but also makes it challenging to track and integrate modifications made by different users to the same dataset. In this paper, we introduce the Relational Dataset Branching System, Decibel, a new relational storage system with built-in version control designed to address these shortcomings. We present our initial design for Decibel and provide a thorough evaluation of three versioned storage engine designs that focus on efficient query processing with minimal storage overhead. We also develop an exhaustive benchmark to enable the rigorous testing of these and future versioned storage engine designs. PMID:28149668

  8. MARC ES: a computer program for estimating medical information storage requirements.

    PubMed

    Konoske, P J; Dobbins, R W; Gauker, E D

    1998-01-01

    During combat, documentation of medical treatment information is critical for maintaining continuity of patient care. However, knowledge of prior status and treatment of patients is limited to the information noted on a paper field medical card. The Multi-technology Automated Reader Card (MARC), a smart card, has been identified as a potential storage mechanism for casualty medical information. Focusing on data capture and storage technology, this effort developed a Windows program, MARC ES, to estimate storage requirements for the MARC. The program calculates storage requirements for a variety of scenarios using medical documentation requirements, casualty rates, and casualty flows and provides the user with a tool to estimate the space required to store medical data at each echelon of care for selected operational theaters. The program can also be used to identify the point at which data must be uploaded from the MARC if size constraints are imposed. Furthermore, this model can be readily extended to other systems that store or transmit medical information.
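
    The arithmetic at the core of such an estimator is simple; the sketch below shows one assumed form of it (record sizes, echelon names, and rates are placeholders, not values taken from MARC ES).

      RECORD_BYTES = {              # assumed per-record documentation sizes (bytes)
          "echelon_1": 256,
          "echelon_2": 512,
          "echelon_3": 2048,
      }

      def storage_per_casualty(records_per_echelon):
          # records_per_echelon: e.g. {"echelon_1": 3, "echelon_2": 2, "echelon_3": 1}
          return sum(RECORD_BYTES[e] * n for e, n in records_per_echelon.items())

      def scenario_requirement(casualties_per_day, days, records_per_echelon):
          # total bytes for a scenario: casualties/day * days * bytes per casualty
          return casualties_per_day * days * storage_per_casualty(records_per_echelon)

      # if the result exceeds the card capacity, the shortfall indicates the
      # echelon at which data must be uploaded off the MARC, as the abstract notes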

  9. Integrity Verification for Multiple Data Copies in Cloud Storage Based on Spatiotemporal Chaos

    NASA Astrophysics Data System (ADS)

    Long, Min; Li, You; Peng, Fei

    Aiming to strike a balance between the security, efficiency, and availability of data verification in cloud storage, a novel integrity verification scheme based on spatiotemporal chaos is proposed for multiple data copies. Spatiotemporal chaos is used for the node calculations of a binary tree, and the location of the data in the cloud is verified; dynamic operations on the data are also supported. Furthermore, blinding information is used to prevent a third-party auditor (TPA) from leaking users' private data during the public auditing process. Performance analysis and discussion indicate that the scheme is secure and efficient, and that it supports dynamic operations and integrity verification of multiple copies of data. It has great potential to be implemented in cloud storage services.
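
    For orientation, the sketch below shows a conventional binary-hash-tree (Merkle) integrity check over data blocks; the proposed scheme replaces the hash with a spatiotemporal-chaos node calculation and adds blinding for the TPA, neither of which is reproduced here.

      import hashlib

      def h(data: bytes) -> bytes:
          return hashlib.sha256(data).digest()

      def tree_root(blocks):
          # fold the leaf hashes pairwise up to a single root value
          level = [h(b) for b in blocks]
          while len(level) > 1:
              if len(level) % 2:
                  level.append(level[-1])           # duplicate an odd last leaf
              level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
          return level[0]

      def verify_copy(blocks, expected_root):
          # a verifier holding only the root can check a retrieved copy
          return tree_root(blocks) == expected_root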

  10. Groundwater Change in Storage Estimation by Using Monitoring Wells Data

    NASA Astrophysics Data System (ADS)

    Flores, C. I.

    2016-12-01

    Considerable attention is now being given to models and data in hydrology and to their role in meeting water-management requirements and enabling well-informed decisions. Water management under the Sustainable Groundwater Management Act (SGMA) is currently challenging because it requires groundwater sustainability agencies (GSAs) to formulate groundwater sustainability plans (GSPs) that comply with new regulations and to manage California's groundwater resources responsibly, particularly under drought and climate-change conditions. In this setting, water budgets and estimates of change in groundwater storage are key inputs for decision makers, but their computation is often difficult, lengthy, and uncertain. This work therefore presents an approach that integrates hydrologic modeling and available groundwater data into a single simplified tool, a proxy function, that estimates change in storage in real time from monitoring-well data. A hydrologic model, the Yolo County IWFM, was developed and calibrated for water years 1970 to 2015 and applied to generate the proxy as a case study: simulated change in storage was regressed against change in head for the area of the cities of Davis and Woodland, yielding a linear function of head variations over time. The proxy was then applied to actual groundwater data in this region to predict the change in storage. The resulting proxy functions approximate change in storage from monitoring data at daily, monthly, and yearly time steps and are easily transferred to any spreadsheet or database, enabling simple yet crucial real-time computations for sustainable groundwater management.
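
    The proxy itself reduces to a linear fit, as in this sketch (variable names are illustrative): regress model-simulated storage changes on head changes, then apply the fitted line to monitoring-well observations.

      import numpy as np

      def fit_proxy(delta_head_sim, delta_storage_sim):
          # linear proxy dS = a * dh + b fitted on IWFM-simulated pairs
          a, b = np.polyfit(delta_head_sim, delta_storage_sim, deg=1)
          return a, b

      def change_in_storage(delta_head_obs, a, b):
          # apply the proxy to observed head changes from monitoring wells
          return a * np.asarray(delta_head_obs) + b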

  11. SSeCloud: Using secret sharing scheme to secure keys

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Huang, Yang; Yang, Disheng; Zhang, Yuzhen; Liu, Hengchang

    2017-08-01

    With the use of cloud storage services, one concern is how to protect sensitive data securely and privately. While users enjoy the convenience of data storage provided by semi-trusted cloud storage providers, they are confronted with all kinds of risks at the same time. In this paper we present SSeCloud, a secure cloud storage system that improves security and usability by applying a secret sharing scheme to encryption keys. The system encrypts files on the client side before upload and splits each encryption key into three shares, held respectively by the user, the cloud storage provider, and an alternative trusted third party. Any two of the parties can reconstruct the key. Evaluation results of the prototype system show that SSeCloud provides high security without much performance penalty.
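
    A minimal sketch of the 2-of-3 key splitting the abstract describes, using a textbook Shamir scheme over a prime field (the paper's exact construction may differ):

      import secrets

      P = 2**127 - 1   # Mersenne prime; the key must be smaller than P

      def split_2_of_3(key_int):
          # degree-1 polynomial f(x) = key + a1*x; any two points recover it
          a1 = secrets.randbelow(P)
          return [(x, (key_int + a1 * x) % P) for x in (1, 2, 3)]

      def reconstruct(share_a, share_b):
          # Lagrange interpolation at x = 0 for a degree-1 polynomial
          (x1, y1), (x2, y2) = share_a, share_b
          a1 = ((y2 - y1) * pow(x2 - x1, -1, P)) % P
          return (y1 - a1 * x1) % P

      shares = split_2_of_3(0x1234)   # user, provider, third party hold one each
      assert reconstruct(shares[0], shares[2]) == 0x1234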

  12. Data Service: Distributed Data Capture and Replication

    NASA Astrophysics Data System (ADS)

    Warner, P. B.; Pietrowicz, S. R.

    2007-10-01

    Data Service is a critical component of the NOAO Data Management and Science Support (DMaSS) Solutions Platform, which is based on a service-oriented architecture, and is to replace the current NOAO Data Transport System. Its responsibilities include capturing data from NOAO and partner telescopes and instruments and replicating the data across multiple (currently six) storage sites. Java 5 was chosen as the implementation language, and Java EE as the underlying enterprise framework. Application metadata persistence is performed using EJB and Hibernate on the JBoss Application Server, with PostgreSQL as the persistence back-end. Although potentially any underlying mass storage system may be used as the Data Service file persistence technology, DTS deployments and Data Service test deployments currently use the Storage Resource Broker from SDSC. This paper presents an overview and high-level design of the Data Service, including aspects of deployment, e.g., for the LSST Data Challenge at the NCSA computing facilities.

  13. Data storage for managing the health enterprise and achieving business continuity.

    PubMed

    Hinegardner, Sam

    2003-01-01

    As organizations move away from a silo mentality to a vision of enterprise-level information, more healthcare IT departments are rejecting the idea of information storage as an isolated, system-by-system solution. IT executives want storage solutions that act as a strategic element of an IT infrastructure, centralizing storage management activities to effectively reduce operational overhead and costs. This article focuses on three areas of enterprise storage: tape, disk, and disaster avoidance.

  14. Quality Detection of Litchi Stored in Different Environments Using an Electronic Nose

    PubMed Central

    Xu, Sai; Lü, Enli; Lu, Huazhong; Zhou, Zhiyan; Wang, Yu; Yang, Jing; Wang, Yajuan

    2016-01-01

    The purpose of this paper was to explore the utility of an electronic nose for detecting the quality of litchi fruit stored in different environments. In this study, a PEN3 electronic nose was used to test the storage time and hardness of litchi stored in three environments (room temperature, refrigerator, and controlled atmosphere). After acquiring the hardness and electronic-nose data, linear discriminant analysis (LDA), canonical correlation analysis (CCA), a BP neural network (BPNN), and BP neural network-partial least squares regression (BPNN-PLSR) were employed for data processing. The experimental results showed that the hardness of litchi fruit decreased during storage in all three environments; litchi stored at room temperature softened fastest, followed by those stored in the refrigerator and under the controlled atmosphere. LDA classified storage time poorly for all three environments. BPNN effectively recognized the storage time of litchi stored in the refrigerator and controlled-atmosphere environments, but its classification for room-temperature storage was poor. CCA results show a significant correlation between electronic-nose data and hardness data under room-temperature storage, and the correlation is stronger under the refrigerator and controlled-atmosphere environments. BPNN-PLSR effectively predicted the hardness of litchi under refrigerator and controlled-atmosphere storage, but its predictions for room-temperature storage and for the data pooled across all environments were poor. The experiment thus showed that an electronic nose can detect the quality of litchi under refrigerated and controlled-atmosphere storage. These results provide a useful reference for future studies on nondestructive, intelligent monitoring of fruit quality. PMID:27338391

  15. Technology for organization of the onboard system for processing and storage of ERS data for ultrasmall spacecraft

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.

    2017-10-01

    Processing and analyzing Earth remote sensing data on board an ultra-small spacecraft is a topical task, given the significant energy cost of data transfer and the low performance of onboard computers. This raises the issue of effective and reliable storage, in a specialized database, of the general information flow obtained from onboard data-collection systems, including Earth remote sensing data. The paper considers the peculiarities of database management system operation with a multilevel memory structure. For data storage, a format has been developed that describes the physical structure of the database and contains the parameters required for loading information; this structure reduces the memory occupied by the database because key values need not be stored separately. The paper presents the architecture of a relational database management system designed for embedding into the onboard software of an ultra-small spacecraft. Databases for storing various information, including Earth remote sensing data, can be built with this system for subsequent processing. The suggested architecture makes low demands on the computing power and memory resources available on board, and data integrity is ensured during input and modification of the structured information.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montgomery, Rose; Scaglione, John M; Bevard, Bruce Balkcom

    The High Burnup Spent Fuel Data project pulled 25 sister rods (9 from the project assemblies and 16 from similar HBU assemblies) for characterization. The 25 sister rods are all high burnup and cover the range of modern domestic cladding alloys. The 25 sister rods were shipped to Oak Ridge National Laboratory (ORNL) in early 2016 for detailed non-destructive and destructive examination. Examinations are intended to provide baseline data on the initial physical state of the cladding and fuel prior to the loading, drying, and long-term dry storage process. Further examinations are focused on determining the effects of temperatures encountered during and following drying. Similar tests will be performed on rods taken from the project assemblies at the end of their long-term storage in a TN-32 dry storage cask (the "cask rods") to identify any significant changes in the fuel rods that may have occurred during the dry storage period. Additionally, some of the sister rods will be used for separate effects testing to expand the applicability of the project data to the fleet, and to address some of the data-related gaps associated with extended storage and subsequent transportation of high burnup fuel. A draft test plan is being developed that describes the experimental work to be conducted on the sister rods. This paper summarizes the draft test plan and necessary coordination activities for the multi-year experimental program to supply data relevant to the assessment of the safety of long-term storage followed by transportation of high burnup spent fuel.

  17. Fair-share scheduling algorithm for a tertiary storage system

    NASA Astrophysics Data System (ADS)

    Jakl, Pavel; Lauret, Jérôme; Šumbera, Michal

    2010-04-01

    Any experiment facing petabyte-scale problems needs a highly scalable mass storage system (MSS) to keep a permanent copy of its valuable data. But beyond the permanent storage aspect, the sheer amount of data makes complete data-set availability on live storage (centralized or aggregated space such as that provided by Scalla/Xrootd) cost prohibitive, implying that dynamic population from the MSS to faster storage is needed. One of the most challenging aspects of dealing with an MSS is the robotic tape component. If a robotic system is used as the primary storage solution, the intrinsically long access times (latencies) can dramatically affect overall performance. To speed the retrieval of such data, one could organize the requests according to criteria aimed at delivering maximal data throughput. However, such approaches are often orthogonal to fair resource allocation, and a trade-off between quality of service, responsiveness, and throughput is necessary to achieve an optimal and practical implementation of a truly fair-share-oriented file restore policy. Starting from an explanation of the key criteria of such a policy, we present evaluations and comparisons of three different MSS file restoration algorithms which meet fair-share requirements, and discuss their respective merits. We quantify their impact on a typical file restoration cycle for the RHIC/STAR experimental setup, within a development, analysis and production environment relying on a shared MSS service [1].
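
    To make the trade-off concrete, here is a toy restore policy in the fair-share spirit (not one of the three algorithms evaluated in the paper): serve the least-served user next, but within that user's requests prefer the currently mounted tape to limit mount latency.

      from collections import defaultdict

      def next_request(pending, served_bytes, mounted_tape):
          # pending: list of dicts {"user": ..., "tape": ..., "bytes": ...}
          if not pending:
              return None
          # fair-share step: pick the user with the least volume restored so far
          user = min({r["user"] for r in pending}, key=lambda u: served_bytes[u])
          mine = [r for r in pending if r["user"] == user]
          # throughput step: prefer files on the tape already mounted
          on_mounted = [r for r in mine if r["tape"] == mounted_tape]
          req = (on_mounted or mine)[0]
          pending.remove(req)
          served_bytes[req["user"]] += req["bytes"]
          return req

      served = defaultdict(int)   # accumulated service per user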

  18. STOrage and RETrieval and Water Quality eXchange | Water ...

    EPA Pesticide Factsheets

    2016-04-07

    The STORET (short for STOrage and RETrieval) Data Warehouse is a repository for water quality, biological, and physical data and is used by state environmental agencies, EPA and other federal agencies, universities, private citizens, and many others.

  19. Multiplexed Holographic Optical Data Storage In Thick Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Ozcan, Meric; Smithey, Daniel T.; Crew, Marshall

    1998-01-01

    The optical data storage capacity of photochromic bacteriorhodopsin films is investigated by means of theoretical calculations, numerical simulations, and experimental measurements on sequential recording of angularly multiplexed diffraction gratings inside a thick D85N BR film.

  20. STOrage and RETrieval and Water Quality eXchange | Water ...

    EPA Pesticide Factsheets

    2015-11-02

    The STORET (short for STOrage and RETrieval) Data Warehouse is a repository for water quality, biological, and physical data and is used by state environmental agencies, EPA and other federal agencies, universities, private citizens, and many others.

  1. Effective grouping for energy and performance: Construction of adaptive, sustainable, and maintainable data storage

    NASA Astrophysics Data System (ADS)

    Essary, David S.

    The performance gap between processors and storage systems has been increasingly critical over the years. Yet the performance disparity remains, and further, storage energy consumption is rapidly becoming a new critical problem. While smarter caching and predictive techniques do much to alleviate this disparity, the problem persists, and data storage remains a growing contributor to latency and energy consumption. Attempts have been made at data layout maintenance, or intelligent physical placement of data, yet in practice, basic heuristics remain predominant. Problems that early studies sought to solve via layout strategies were proven to be NP-Hard, and data layout maintenance today remains more art than science. With unknown potential and a domain inherently full of uncertainty, layout maintenance persists as an area largely untapped by modern systems. But uncertainty in workloads does not imply randomness; access patterns have exhibited repeatable, stable behavior. Predictive information can be gathered, analyzed, and exploited to improve data layouts. Our goal is a dynamic, robust, sustainable predictive engine, aimed at improving existing layouts by replicating data at the storage device level. We present a comprehensive discussion of the design and construction of such a predictive engine, including workload evaluation, where we present and evaluate classical workloads as well as our own highly detailed traces collected over an extended period. We demonstrate significant gains through an initial static grouping mechanism, and compare against an optimal grouping method of our own construction, and further show significant improvement over competing techniques. We also explore and illustrate the challenges faced when moving from static to dynamic (i.e. online) grouping, and provide motivation and solutions for addressing these challenges. These challenges include metadata storage, appropriate predictive collocation, online performance, and physical placement. We reduced the metadata needed by several orders of magnitude, reducing the required volume from more than 14% of total storage down to less than 1/2%. We also demonstrate how our collocation strategies outperform competing techniques. Finally, we present our complete model and evaluate a prototype implementation against real hardware. This model was demonstrated to be capable of reducing device-level accesses by up to 65%. Keywords: computer systems, collocation, data management, file systems, grouping, metadata, modeling and prediction, operating systems, performance, power, secondary storage.
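
    The grouping idea can be illustrated in a few lines: mine a block-access trace for co-access counts within a short window, then greedily collocate strongly correlated blocks. This is a generic illustration of the approach, not the dissertation's engine.

      from collections import Counter
      from itertools import combinations

      def co_access_counts(trace, window=8):
          # count how often two blocks appear within `window` accesses of each other
          counts = Counter()
          for i in range(len(trace)):
              for a, b in combinations(sorted(set(trace[i:i + window])), 2):
                  counts[(a, b)] += 1
          return counts

      def greedy_groups(counts, max_group=16):
          # collocate the strongest pairs first, capping the group size
          group_of = {}
          groups = []
          for (a, b), _ in counts.most_common():
              ga, gb = group_of.get(a), group_of.get(b)
              if ga is None and gb is None:
                  groups.append({a, b})
                  group_of[a] = group_of[b] = groups[-1]
              elif ga is not None and gb is None and len(ga) < max_group:
                  ga.add(b)
                  group_of[b] = ga
              elif gb is not None and ga is None and len(gb) < max_group:
                  gb.add(a)
                  group_of[a] = gb
          return groups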

  2. 18 CFR 11.4 - Use of government dams for pumped storage projects, and use of tribal lands.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... energy used for pumped storage pumping. (2) A licensee who has filed these data under another section of... for pumped storage projects, and use of tribal lands. 11.4 Section 11.4 Conservation of Power and... for pumped storage projects, and use of tribal lands. (a) General Rule. The Commission will determine...

  3. 18 CFR 11.4 - Use of government dams for pumped storage projects, and use of tribal lands.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... energy used for pumped storage pumping. (2) A licensee who has filed these data under another section of... for pumped storage projects, and use of tribal lands. 11.4 Section 11.4 Conservation of Power and... for pumped storage projects, and use of tribal lands. (a) General Rule. The Commission will determine...

  4. 18 CFR 11.4 - Use of government dams for pumped storage projects, and use of tribal lands.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... energy used for pumped storage pumping. (2) A licensee who has filed these data under another section of... for pumped storage projects, and use of tribal lands. 11.4 Section 11.4 Conservation of Power and... for pumped storage projects, and use of tribal lands. (a) General Rule. The Commission will determine...

  5. A new standardized data collection system for interdisciplinary thyroid cancer management: Thyroid COBRA.

    PubMed

    Tagliaferri, Luca; Gobitti, Carlo; Colloca, Giuseppe Ferdinando; Boldrini, Luca; Farina, Eleonora; Furlan, Carlo; Paiar, Fabiola; Vianello, Federica; Basso, Michela; Cerizza, Lorenzo; Monari, Fabio; Simontacchi, Gabriele; Gambacorta, Maria Antonietta; Lenkowicz, Jacopo; Dinapoli, Nicola; Lanzotti, Vito; Mazzarotto, Renzo; Russi, Elvio; Mangoni, Monica

    2018-07-01

    The big data approach offers a powerful alternative to evidence-based medicine: it could guide cancer management by applying machine learning to large-scale data. The aim of the Thyroid CoBRA (Consortium for Brachytherapy Data Analysis) project is to develop a standardized web data collection system focused on thyroid cancer. The Metabolic Radiotherapy Working Group of the Italian Association of Radiation Oncology (AIRO) endorsed the implementation of a consortium directed at thyroid cancer management and data collection. The agreement conditions, the ontology of the collected data, and the related software services were defined by a multicentre ad hoc working group (WG). Six Italian cancer centres initially started the project and defined and signed the Thyroid COBRA consortium agreement. Three data-set tiers were identified: Registry, Procedures, and Research. The COBRA Storage System (C-SS) proved not to be time-consuming and to respect privacy, as data can be extracted directly from each centre's storage platforms through a secured connection that ensures reliable encryption of sensitive data. Automatic data archiving can be performed directly from the hospital image storage system or the radiotherapy treatment planning systems. The C-SS architecture will allow "cloud storage" or "distributed learning" approaches for predictive-model definition and the further development of clinical decision support tools. The development of the Thyroid COBRA data storage system C-SS through a multicentre consortium approach appeared to be a feasible way to set up a complex, privacy-preserving data-sharing system oriented to the management of thyroid cancer and, in the near future, of every cancer type. Copyright © 2018 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  6. RAIN: A Bio-Inspired Communication and Data Storage Infrastructure.

    PubMed

    Monti, Matteo; Rasmussen, Steen

    2017-01-01

    We summarize the results and perspectives from a companion article, where we presented and evaluated an alternative architecture for data storage in distributed networks. We name the bio-inspired architecture RAIN, and it offers file storage service that, in contrast with current centralized cloud storage, has privacy by design, is open source, is more secure, is scalable, is more sustainable, has community ownership, is inexpensive, and is potentially faster, more efficient, and more reliable. We propose that a RAIN-style architecture could form the backbone of the Internet of Things that likely will integrate multiple current and future infrastructures ranging from online services and cryptocurrency to parts of government administration.

  7. ESGF and WDCC: The Double Structure of the Digital Data Storage at DKRZ

    NASA Astrophysics Data System (ADS)

    Toussaint, F.; Höck, H.

    2016-12-01

    For several years now, digital repositories in climate science have faced new challenges: international projects are global collaborations, and data storage has in parallel moved to federated, distributed storage systems like ESGF. For long-term archival (LTA) storage, on the other hand, communities, funders, and data users place stronger demands on data and metadata quality to facilitate data use and reuse. At DKRZ this situation has led to a twofold data dissemination system, a situation which influences the administration, workflows, and sustainability of the data. The ESGF system is focused on the needs of users as partners in global projects; it includes replication tools, detailed global project standards, and efficient search for the data to download. In contrast, DKRZ's classical CERA LTA storage aims at long-term data holding and data curation as well as data reuse, requiring high metadata quality standards. In addition, a Digital Object Identifier publication service for the direct integration of research data in scientific publications has been implemented for LTA data. The editorial process at DKRZ-LTA ensures the quality of metadata and research data; the DOI and a citation code are provided and then registered under DataCite's (datacite.org) regulations. In the overall data life cycle, continuous reliability of data and metadata quality is essential to allow data handling at the petabyte level, long-term usability of the data, and adequate publication of the results. These considerations lead to the question "What is quality?" with respect to the data, the repository itself, the publisher, and the user. Global consensus is needed for these assessments, as the phases of the end-to-end workflow mesh with one another: for data and metadata, checks need to go hand in hand with the processes of production and storage. The results can be judged following a Quality Maturity Matrix (QMM), and repositories can be certified according to their trustworthiness. For the publication of scientific conclusions, the scientific community, funders, media, and policy makers ask about the publisher's impact in terms of readership, reach, and presentation quality. The paper describes this data life cycle, with emphasis on the different levels of quality assessment that ensure data and metadata quality at DKRZ.

  8. Perceptions of firearms and suicide: The role of misinformation in storage practices and openness to means safety measures.

    PubMed

    Anestis, Michael D; Butterworth, Sarah E; Houtsma, Claire

    2018-02-01

    Firearm ownership and unsafe storage increase risk for suicide. Little is known regarding factors that influence storage practices and willingness to engage in means safety. Utilizing Amazon's Mechanical Turk program, we recruited an online sample of 300 adults living in the US who own at least one firearm. Firearm storage practices and openness to means safety measures were assessed using items designed for this study. Data were collected and analyzed in 2017. Firearms stored in non-secure locations and without a locking device were associated with lower beliefs in the relationship between firearm storage and suicide risk. Fearlessness about death moderated the association between current secure versus non-secure storage and beliefs regarding firearm storage and suicide risk, in that storage practices and beliefs were more strongly related at higher levels of fearlessness about death. For both secure and locked storage of a firearm, there was a significant indirect effect of current storage practices on willingness to engage in means safety in the future through current beliefs regarding the relationship between firearm storage and suicide risk. Unsafe storage practices were largely associated with an unwillingness to store firearms more safely or to allow a trusted peer to temporarily store the firearm outside the home in order to prevent their own or someone else's suicide. Self-report and cross-sectional data were used. Results may not generalize to non-firearm owners. Firearm owners are prone to inaccurate beliefs about the relationship between firearms and suicide. These beliefs may influence both current firearm storage practices and the willingness to engage in means safety. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Notes on a storage manager for the Clouds kernel

    NASA Technical Reports Server (NTRS)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment within which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that stored data survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  10. 76 FR 50883 - U.S. Customs and Border Protection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ... broker that filed the entry grants the importer such access. Given data storage limitations, at this time... and the previous four CBP fiscal years because of data storage limitations. ABI filers may run an ABI... (``ACE'') Secure Data Portal Account can monitor the liquidation of their entries by using the reporting...

  11. Tribology of magnetic storage systems

    NASA Technical Reports Server (NTRS)

    Bhushan, Bharat

    1992-01-01

    The construction and the materials used in different magnetic storage devices are defined. The theories of friction and adhesion, interface temperatures, wear, and solid-liquid lubrication relevant to magnetic storage systems are presented. Experimental data are presented wherever possible to support the relevant theories advanced.

  12. Commercial applications for optical data storage

    NASA Astrophysics Data System (ADS)

    Tas, Jeroen

    1991-03-01

    Optical data storage has spurred the market for document imaging systems. These systems are increasingly being used to electronically manage the processing, storage and retrieval of documents. Applications range from straightforward archives to sophisticated workflow management systems. The technology is developing rapidly and within a few years optical imaging facilities will be incorporated in most of the office information systems. This paper gives an overview of the status of the market, the applications and the trends of optical imaging systems.

  13. State DOT use of web-based data storage.

    DOT National Transportation Integrated Search

    2013-01-01

    This study explores the experiences of state departments of transportation (DOT) in the use of web or : cloud-based data storage and related practices. The study provides results of a survey of State DOTs : and presents best practices of state govern...

  14. Biophotopol: A Sustainable Photopolymer for Holographic Data Storage Applications

    PubMed Central

    Ortuño, Manuel; Gallego, Sergi; Márquez, Andrés; Neipp, Cristian; Pascual, Inmaculada; Beléndez, Augusto

    2012-01-01

    Photopolymers have proved to be useful for different holographic applications such as holographic data storage or holographic optical elements. However, most photopolymers have certain undesirable features, such as the toxicity of some of their components or their low environmental compatibility. For this reason, the Holography and Optical Processing Group at the University of Alicante developed a new low-toxicity, high-thickness dry photopolymer called biophotopol, which is well suited to holographic data storage applications. In this paper we describe our recent studies of biophotopol and the main characteristics of this material. PMID:28817008

  15. Multiplexed Holographic Data Storage in Bacteriorhodopsin

    NASA Technical Reports Server (NTRS)

    Mehrl, David J.; Krile, Thomas F.

    1997-01-01

    High density optical data storage, driven by the information revolution, remains at the forefront of current research areas. Much of the current research has focused on photorefractive materials (SBN and LiNbO3) and polymers, despite various problems with expense, durability, response time and retention periods. Photon echo techniques, though promising, are questionable due to the need for cryogenic conditions. Bacteriorhodopsin (BR) films are an attractive alternative recording medium. Great strides have been made in refining BR, and materials with storage lifetimes as long as 100 days have recently become available. The ability to deposit this robust polycrystalline material as high quality optical films suggests the use of BR as a recording medium for commercial optical disks. Our own recent research has demonstrated the suitability of BR films for real time spatial filtering and holography. We propose to fully investigate the feasibility of performing holographic mass data storage in BR. Important aspects of the problem to be investigated include various data multiplexing techniques (e.g. angle- amplitude- and phase-encoded multiplexing, and in particular shift-multiplexing), multilayer recording techniques, SLM selection and data readout using crossed polarizers for noise rejection. Systems evaluations of storage parameters, including access times, memory refresh constraints, erasure, signal-to-noise ratios and bit error rates, will be included in our investigations.

  16. Data model and relational database design for the New Jersey Water-Transfer Data System (NJWaTr)

    USGS Publications Warehouse

    Tessler, Steven

    2003-01-01

    The New Jersey Water-Transfer Data System (NJWaTr) is a database design for the storage and retrieval of water-use data. NJWaTr can manage data encompassing many facets of water use, including (1) the tracking of various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the storage of descriptions, classifications and locations of places and organizations involved in water-use activities; (3) the storage of details about measured or estimated volumes of water associated with water-use activities; and (4) the storage of information about data sources and water resources associated with water use. In NJWaTr, each water transfer occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer/volume, location, and owner. Other important entities include water resource (used for withdrawals and returns), data source, permit, and alias. Multiple water-exchange estimates based on different methods or data sources can be stored for individual transfers. Storage of user-defined details is accommodated for several of the main entities. Many tables contain classification terms to facilitate the detailed description of data items and can be used for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined combinations to serve visualization and analytical applications. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
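
    A condensed sketch of the core entities as relational tables (column choices are illustrative; the published design is far more detailed):

      import sqlite3

      ddl = """
      CREATE TABLE site      (site_id INTEGER PRIMARY KEY, name TEXT,
                              location_id INTEGER, owner_id INTEGER);
      CREATE TABLE conveyance(conv_id INTEGER PRIMARY KEY,
                              from_site INTEGER REFERENCES site(site_id),
                              to_site   INTEGER REFERENCES site(site_id));
      -- each transfer occurs unidirectionally between two sites via a conveyance
      CREATE TABLE transfer  (transfer_id INTEGER PRIMARY KEY,
                              conv_id INTEGER REFERENCES conveyance(conv_id),
                              activity TEXT,     -- withdrawal, return, distribution...
                              volume_mgd REAL,   -- measured or estimated volume
                              method TEXT,       -- supports multiple estimates
                              data_source TEXT);
      """
      con = sqlite3.connect(":memory:")
      con.executescript(ddl)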

  17. Name It! Store It! Protect It!: A Systems Approach to Managing Data in Research Core Facilities.

    PubMed

    DeVries, Matthew; Fenchel, Matthew; Fogarty, R E; Kim, Byong-Do; Timmons, Daniel; White, A Nicole

    2017-12-01

    As the capabilities of technology increase, so does the production of data and the need for data management. The need for data storage at many academic institutions is increasing exponentially. Technology is expanding rapidly, and institutions are recognizing the need to incorporate data management that can support future data sharing as a critical component of institutional services. Establishing a process to manage the surge in data storage is complex and often hindered by the lack of a plan. Simple file naming (nomenclature) is also becoming ever more important for leaving an established understanding of a file's contents, especially as research projects and personnel turn over. Consistent indexing of files likewise helps to identify past work. Finally, protecting the contents of data is becoming increasingly challenging. As the genomic field expands and medicine becomes more personalized, methods to protect the contents of data in both short- and long-term storage need to be established so as not to risk revealing identifiable information. This is often something we do not consider in a nonclinical research environment. Establishing basic guidelines for institutions is critical, as individual research laboratories are unable to handle the scope of data storage required for their own research. In addition to the immediate need to establish guidelines on data storage, file naming, and information protection, specialized support for data management serving research cores and laboratories at academic institutions is becoming a critical component of institutional services. Here, we outline some case studies and methods that you may be able to adopt at your own institution.

  18. Planning for optical disk technology with digital cartography.

    USGS Publications Warehouse

    Light, D.L.

    1986-01-01

    A major shortfall that still exists in digital systems is the need for very large mass storage capacity. The decade of the 1980s has introduced laser optical disk storage technology, which may be the breakthrough needed for mass storage. This paper addresses system concepts for digital cartography during the transition period. Emphasis will be placed on determining USGS mass storage requirements and introducing laser optical disk technology for handling storage problems for digital data in this decade.-from Author

  19. PIMS Data Storage, Access, and Neural Network Processing

    NASA Technical Reports Server (NTRS)

    McPherson, Kevin M.; Moskowitz, Milton E.

    1998-01-01

    The Principal Investigator Microgravity Services (PIMS) project at NASA's Lewis Research Center has supported microgravity science Principal Investigators (PIs) by processing, analyzing, and storing the acceleration environment data recorded on the NASA Space Shuttles and the Russian Mir space station. The acceleration data recorded in support of the microgravity science investigated on these platforms has been generated in discrete blocks totaling approximately 48 gigabytes for the Orbiter missions and 50 gigabytes for the Mir increments. Based on the anticipated volume of acceleration data resulting from continuous or nearly continuous operations, the International Space Station (ISS) presents a unique set of challenges regarding the storage of and access to microgravity acceleration environment data. This paper presents potential microgravity environment data storage, access, and analysis concepts for the ISS era.

  20. DNA MemoChip: Long-Term and High Capacity Information Storage and Select Retrieval.

    PubMed

    Stefano, George B; Wang, Fuzhou; Kream, Richard M

    2018-02-26

    Over the course of history, human beings have never stopped seeking effective methods for information storage. From rocks to paper, and through the past several decades of using computer disks, USB sticks, and on to the thin silicon "chips" and "cloud" storage of today, it would seem that we have reached an era of efficiency for managing innumerable and ever-expanding data. Astonishingly, when tracing this technological path, one realizes that our ancient methods of informational storage far outlast paper (10,000 vs. 1,000 years, respectively), let alone the computer-based memory devices that only last, on average, 5 to 25 years. During this time of fast-paced information generation, it becomes increasingly difficult for current storage methods to retain such massive amounts of data, and to maintain appropriate speeds with which to retrieve it, especially when in demand by a large number of users. Others have proposed that DNA-based information storage provides a way forward for information retention as a result of its temporal stability. It is now evident that DNA represents a potentially economical and sustainable mechanism for storing information, as demonstrated by its decoding from a 700,000 year-old horse genome. The fact that the human genome is present in a cell, containing also the varied mitochondrial genome, indicates DNA's great potential for large data storage in a 'smaller' space.

  1. DNA MemoChip: Long-Term and High Capacity Information Storage and Select Retrieval

    PubMed Central

    Wang, Fuzhou; Kream, Richard M.

    2018-01-01

    Over the course of history, human beings have never stopped seeking effective methods for information storage. From rocks to paper, and through the past several decades of using computer disks, USB sticks, and on to the thin silicon “chips” and “cloud” storage of today, it would seem that we have reached an era of efficiency for managing innumerable and ever-expanding data. Astonishingly, when tracing this technological path, one realizes that our ancient methods of informational storage far outlast paper (10,000 vs. 1,000 years, respectively), let alone the computer-based memory devices that only last, on average, 5 to 25 years. During this time of fast-paced information generation, it becomes increasingly difficult for current storage methods to retain such massive amounts of data, and to maintain appropriate speeds with which to retrieve it, especially when in demand by a large number of users. Others have proposed that DNA-based information storage provides a way forward for information retention as a result of its temporal stability. It is now evident that DNA represents a potentially economical and sustainable mechanism for storing information, as demonstrated by its decoding from a 700,000 year-old horse genome. The fact that the human genome is present in a cell, containing also the varied mitochondrial genome, indicates DNA’s great potential for large data storage in a ‘smaller’ space. PMID:29481548

  2. A user-defined data type for the storage of time series data allowing efficient similarity screening.

    PubMed

    Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor

    2012-07-16

    The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
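
    The SAX transform at the heart of the access method is compact enough to sketch: z-normalize the series, reduce it by piecewise aggregate approximation (PAA), then discretize the segment means against equiprobable Gaussian breakpoints to obtain a short word. Parameter choices below are illustrative.

      import numpy as np
      from scipy.stats import norm

      def sax(series, n_segments=8, alphabet=4):
          x = np.asarray(series, dtype=float)
          x = (x - x.mean()) / (x.std() or 1.0)          # z-normalize
          segs = np.array_split(x, n_segments)
          paa = np.array([s.mean() for s in segs])       # PAA reduction
          # breakpoints that cut the standard normal into equiprobable bins
          cuts = norm.ppf(np.linspace(0, 1, alphabet + 1)[1:-1])
          return "".join(chr(ord("a") + np.searchsorted(cuts, v)) for v in paa)

      # series with similar SAX words are candidate matches, so an index on the
      # words turns similarity screening into cheap string lookups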

  3. Using expert systems to implement a semantic data model of a large mass storage system

    NASA Technical Reports Server (NTRS)

    Roelofs, Larry H.; Campbell, William J.

    1990-01-01

    The successful development of large-volume data storage systems will depend not only on the ability of the designers to store data, but on the ability to manage such data once it is in the system. The hypothesis is that mass storage data management can only be implemented successfully on the basis of highly intelligent metadata management services. There now exists a mass storage system standard proposed by the IEEE that addresses many of the issues related to the storage of large volumes of data; however, the model does not consider a major technical issue, namely the high-level management of stored data. If the model were expanded to include the semantics and pragmatics of the data domain using a Semantic Data Model (SDM) concept, the result would be data that is expressive of the Intelligent Information Fusion (IIF) concept and also organized and classified in the context of its use and purpose. The results of a demonstration prototype SDM implemented with the expert system development tool NEXPERT OBJECT are presented. In the prototype, a simple instance of an SDM was created to support a hypothetical application for the Earth Observing System Data and Information System (EOSDIS). The massive amounts of data that EOSDIS will manage require the definition and design of a powerful information management system to support even the most basic needs of the project. The application domain is characterized by a semantic-like network that represents the data content and the relationships between the data, based on user views and the more generalized domain architectural view of the information world. The data in the domain are represented by objects that define classes, types, and instances of the data, and data properties are selectively inherited between parent and daughter relationships in the domain. Based on the SDM, a simple information system design is developed, from the low-level data storage media, through record management and metadata management, to the user interface.
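
    A toy fragment showing the class/type/instance organization with selective property inheritance that the prototype models (NEXPERT OBJECT itself is a rule-based shell; this Python stand-in only mirrors the idea):

      class Node:
          def __init__(self, name, parent=None, inherit=(), **props):
              self.name, self.parent = name, parent
              self.inherit = set(inherit)   # which properties to pull from the parent
              self.props = dict(props)

          def get(self, key):
              if key in self.props:
                  return self.props[key]
              if self.parent is not None and key in self.inherit:
                  return self.parent.get(key)   # selective inheritance
              raise KeyError(key)

      dataset = Node("EOSDIS-dataset", sensor="MODIS", archive="mass-store")
      granule = Node("granule-42", parent=dataset, inherit={"sensor"}, size_mb=180)
      print(granule.get("sensor"))   # "MODIS", inherited from the parent class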

  4. First Experiences with CMS Data Storage on the GEMSS System at the INFN-CNAF Tier-1

    NASA Astrophysics Data System (ADS)

    Andreotti, D.; Bonacorsi, D.; Cavalli, A.; Pra, S. Dal; Dell'Agnello, L.; Forti, Alberto; Grandi, C.; Gregori, D.; Gioi, L. Li; Martelli, B.; Prosperini, A.; Ricci, P. P.; Ronchieri, Elisabetta; Sapunenko, V.; Sartirana, A.; Vagnoni, V.; Zappi, Riccardo

    A brand new Mass Storage System solution called the "Grid-Enabled Mass Storage System" (GEMSS), based on the Storage Resource Manager (StoRM) developed by INFN, on the General Parallel File System by IBM, and on the Tivoli Storage Manager by IBM, has been tested and deployed at the INFN-CNAF Tier-1 Computing Centre in Italy. After a successful stress test phase, the solution is now being used in production for the data custodiality of the CMS experiment at CNAF. All data previously recorded on the CASTOR system have been transferred to GEMSS. As a final validation of the GEMSS system, some of the computing tests done in the context of the WLCG "Scale Test for the Experiment Program" (STEP'09) challenge were repeated in September-October 2009 and compared with the results previously obtained with CASTOR in June 2009. In this paper, the GEMSS system basics, the stress test activity and the deployment phase, as well as the reliability and performance of the system, are overviewed. The experiences in the use of GEMSS at CNAF in preparing for the first months of data taking of the CMS experiment at the Large Hadron Collider are also presented.

  5. Storage requirements for Arkansas streams

    USGS Publications Warehouse

    Patterson, James Lee

    1968-01-01

    The supply of good-quality surface water in Arkansas is abundant. Owing to seasonal and annual variability of streamflow, however, storage must be provided to ensure dependable year-round supplies in most of the State. Storage requirements for draft rates that are as much as 60 percent of the mean annual flow at 49 continuous-record gaging stations can be obtained from tabular data in this report. Through regional analyses of streamflow data, the State was divided into three regions. Draft-storage diagrams for each region provide a means of estimating storage requirements for sites on streams where data are scant, provided the drainage area, the mean annual flow, and the low-flow index are known. These data are tabulated for 53 gaging stations used in the analyses and for 132 partial-record sites where only base-flow measurements have been made. Mean annual flow can be determined for any stream whose drainage lies within the State by using the runoff map in this report. Low-flow indices can be estimated by correlating base flows, determined from several discharge measurements, with concurrent flows at nearby continuous-record gaging stations whose low-flow indices have been determined.
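
    Draft-storage relations of the kind tabulated in such reports are classically computed with a sequent-peak (mass-curve) analysis. The sketch below is a simple single-pass illustration under assumed monthly flows, not the report's actual data or procedure.

      # Single-pass sequent-peak sketch: size the storage needed to sustain a
      # constant draft from a record of inflows. Illustrative only.
      def required_storage(monthly_flows, draft):
          """Return the storage needed to sustain `draft` given `monthly_flows`
          (same volume units per month)."""
          deficit, max_deficit = 0.0, 0.0
          for inflow in monthly_flows:
              deficit = max(0.0, deficit + draft - inflow)  # cumulative shortfall
              max_deficit = max(max_deficit, deficit)
          return max_deficit

      flows = [120, 90, 40, 10, 5, 8, 15, 30, 60, 100, 140, 160]  # hypothetical
      mean_annual = sum(flows) / len(flows)
      print(required_storage(flows, draft=0.6 * mean_annual))  # 60% of mean flow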

  6. Reliable, Memory Speed Storage for Cluster Computing Frameworks

    DTIC Science & Technology

    2014-06-16

    specification API that can capture computations in many of today's popular data-parallel computing models, e.g., MapReduce and SQL. We also ported the Hadoop ... today's big data workloads: • Immutable data: Data is immutable once written, since dominant underlying storage systems, such as HDFS [3], only support ... network transfers, so reads can be data-local. • Program size vs. data size: In big data processing, the same operation is repeatedly applied on massive

  7. Travel guidance system for vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takanabe, K.; Yamamoto, M.; Ito, K.

    1987-02-24

    A travel guidance system is described for vehicles including: a heading sensor for detecting a direction of movement of a vehicle; a distance sensor for detecting a distance traveled by the vehicle; a map data storage medium preliminarily storing map data; a control unit for receiving a heading signal from the heading sensor and a distance signal from the distance sensor to successively compute a present position of the vehicle and for generating video signals corresponding to display data including map data from the map data storage medium and data of the present position; and a display having first and second display portions and responsive to the video signals from the control unit to display on the first display portion a map and a present position mark, in which: the map data storage medium comprises means for preliminarily storing administrative division name data and landmark data; and the control unit comprises: landmark display means for: (1) determining a landmark closest to the present position, (2) causing a position of the landmark to be displayed on the map, and (3) retrieving a landmark message concerning the landmark from the storage medium to cause the display to display the landmark message on the second display portion; division name display means for retrieving the name of an administrative division to which the present position belongs from the storage medium and causing the display to display a division name message on the second display portion; and selection means for selectively actuating at least one of the landmark display means and the division name display means.

  8. Storage capacity of the Fena Valley Reservoir, Guam, Mariana Islands, 2014

    USGS Publications Warehouse

    Marineau, Mathieu D.; Wright, Scott A.

    2015-01-01

    Analyses of the bathymetric data indicate that the reservoir currently has 6,915 acre-feet of storage capacity. The engineering drawings of record show that the total reservoir capacity in 1951 was estimated to be 8,365 acre-feet. Thus, between 1951 and 2014, the total storage capacity decreased by 1,450 acre-feet (a loss of 17 percent of the original total storage capacity). The remaining live-storage capacity, or the volume of storage above the lowest-level reservoir outlet elevation, was calculated to be 5,511 acre-feet in 2014, indicating a decrease of 372 acre-feet (or 6 percent) of the original 5,883 acre-feet of live-storage capacity. The remaining dead-storage capacity, or volume of storage below the lowest-level outlet, was 1,404 acre-feet in 2014, indicating a decrease of 1,078 acre-feet (or 43 percent) of the original 2,482 acre-feet of dead-storage capacity.

  9. Eigenmode multiplexing with SLM for volume holographic data storage

    NASA Astrophysics Data System (ADS)

    Chen, Guanghao; Miller, Bo E.; Takashima, Yuzuru

    2017-08-01

    The cavity supports the orthogonal reference beam families as its eigenmodes while enhancing the reference beam power. Such orthogonal eigenmodes are used as an additional degree of freedom to multiplex data pages, consequently increasing storage densities for volume Holographic Data Storage Systems (HDSS) when the maximum number of multiplexed data pages is limited by geometrical factors. Image-bearing holograms are multiplexed by orthogonal phase-code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at multiple Bragg angles, using Liquid Crystal on Silicon (LCOS) spatial light modulators (SLMs) in the reference arms. A total of nine holograms is recorded, with three angular positions and three eigenmodes.

  10. Mass storage technology in networks

    NASA Astrophysics Data System (ADS)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates, and large storage capacities, coupled with high functionality, fault tolerance, and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has improved the performance of optical disk drives as on-line external memories, which now compete with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access files storing multimedia data that require large capacity, such as archive use and information distribution by ROM disks. Finally, image-coded document file servers for local area network use, employing 130 mm rewritable magneto-optical disk subsystems, are demonstrated.

  11. Optimal read/write memory system components

    NASA Technical Reports Server (NTRS)

    Kozma, A.; Vander Lugt, A.; Klinger, D.

    1972-01-01

    Two holographic data storage and display systems, a voltage-gradient ionization system, and a linear strain manipulation system are discussed in terms of creating a fast, high-bit-density storage device. Components described include: a novel mounting fixture for photoplastic arrays; a corona discharge device; and a block data composer.

  12. The Microcomputer as an Administrative/Educational Tool in Education of the Hearing Impaired.

    ERIC Educational Resources Information Center

    Graham, Richard

    1982-01-01

    Administrative and instructional uses of microcomputers with hearing impaired students (infants to junior high level) are described. Uses include data storage and retrieval, maintenance of student history files, storage of test data, and vocabulary reinforcement for students. (CL)

  13. Research on an IP disaster recovery storage system

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Wang, Yusheng; Zhu, Jianfeng

    2008-12-01

    Based on both the Fibre Channel (FC) Storage Area Network (SAN) switch and the Fabric Application Interface Standard (FAIS) mechanism, an iSCSI storage controller is put forward. Built upon it, an internet Small Computer System Interface (iSCSI) SAN construction strategy for disaster recovery (DR) is proposed, and several multiple-site replication models and a closed-queue performance analysis method are also discussed in this paper. The iSCSI storage controller lies at the fabric level of the networked storage infrastructure; it can be used to connect both hybrid storage applications and storage subsystems, and it can provide a virtualized storage environment and support logical volume access control. By cooperating with its remote peers, a disaster recovery storage system can be built on the basis of the data replication, block-level snapshot, and Internet Protocol (IP) take-over functions.

  14. Self-aligning and compressed autosophy video databases

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains 'self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of 'learning' and a new 'information theory' which permits the growing of self-assembling data networks in a computer memory, similar to the growing of 'data crystals' or 'data trees,' without data processing or programming. Autosophy databases are educated very much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni-dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni-dimensional information storage will result in enormous data compression because each pattern fragment is only stored once. Pattern recognition in the text or image files is greatly simplified by the peculiar omni-dimensional storage method. Video databases will absorb input images from a TV camera and associate them with textual information. The 'black box' operations are totally self-aligning, where the input data determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  15. Confidential storage and transmission of medical image data.

    PubMed

    Norcen, R; Podesser, M; Pommer, A; Schmidt, H-P; Uhl, A

    2003-05-01

    We discuss computationally efficient techniques for confidential storage and transmission of medical image data. Two types of partial encryption techniques based on AES are proposed. The first encrypts a subset of bitplanes of plain image data whereas the second encrypts parts of the JPEG2000 bitstream. We find that encrypting between 20% and 50% of the visual data is sufficient to provide high confidentiality.
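
    As a rough illustration of the first technique, the sketch below AES-encrypts only the most significant bitplanes of an 8-bit grayscale image; encrypting 2 to 4 of the 8 planes is in the spirit of the 20%-50% figure above. It assumes the pycryptodome and NumPy libraries, uses CTR mode for the keystream, and is not the paper's exact scheme.

      # Bitplane-subset encryption sketch (hypothetical parameters, not the
      # paper's scheme). CTR decryption regenerates the same keystream, so the
      # protected planes can be recovered from the key and nonce.
      import numpy as np
      from Crypto.Cipher import AES
      from Crypto.Random import get_random_bytes

      def encrypt_bitplanes(image, key, nonce, n_planes=2):
          """Encrypt the n_planes most significant bitplanes of `image` (uint8)."""
          mask = np.uint8(((1 << n_planes) - 1) << (8 - n_planes))
          planes = image & mask                      # bits to protect
          cipher = AES.new(key, AES.MODE_CTR, nonce=nonce)
          enc = np.frombuffer(cipher.encrypt(planes.tobytes()), dtype=np.uint8)
          enc = enc.reshape(image.shape) & mask      # keep only the masked bits
          return (image & ~mask) | enc               # splice encrypted planes back

      key, nonce = get_random_bytes(16), get_random_bytes(8)
      img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      protected = encrypt_bitplanes(img, key, nonce, n_planes=2)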

  16. Move It or Lose It: Cloud-Based Data Storage

    ERIC Educational Resources Information Center

    Waters, John K.

    2010-01-01

    There was a time when school districts showed little interest in storing or backing up their data to remote servers. Nothing seemed less secure than handing off data to someone else. But in the last few years the buzz around cloud storage has grown louder, and the idea that data backup could be provided as a service has begun to gain traction in…

  17. Using dCache in Archiving Systems oriented to Earth Observation

    NASA Astrophysics Data System (ADS)

    Garcia Gil, I.; Perez Moreno, R.; Perez Navarro, O.; Platania, V.; Ozerov, D.; Leone, R.

    2012-04-01

    The object of the LAST activity (Long term data Archive Study on new Technologies) is to perform an independent study on best practices and an assessment of different archiving technologies mature for operation in the short and mid-term time frame, or available in the long term, with emphasis on technologies better suited to satisfy the requirements of ESA, LTDP, and other European and Canadian EO partners in terms of digital information preservation and data accessibility and exploitation. During the last phase of the project, several archiving solutions were tested in order to evaluate their suitability. In particular, dCache aims to provide a file system tree view of the data repository, exchanging data with backend (tertiary) storage systems and providing space management, pool attraction, dataset replication, hot spot determination, and recovery from disk or node failures. Connected to a tertiary storage system, dCache simulates unlimited direct-access storage space; data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. dCache was created to meet the requirements of big computer centres and universities with large amounts of data, which put their efforts together and founded EMI (European Middleware Initiative). At present, dCache is mature enough to be implemented, being used by several research centres of relevance (e.g. the LHC, storing up to 50 TB/day). This solution has not been used so far in Earth Observation, and the results of the study are summarized in this article, focusing on the capacities, over a simulated environment, to get in line with the ESA requirements for geographically distributed storage. The challenge of a geographically distributed storage system can be summarized as how to provide maximum quality for storage and dissemination services at minimum cost.

  18. Improved methods for estimating local terrestrial water dynamics from GRACE in the Northern High Plains

    NASA Astrophysics Data System (ADS)

    Seyoum, Wondwosen M.; Milewski, Adam M.

    2017-12-01

    Investigating terrestrial water cycle dynamics is vital for understanding recent climatic variability and human impacts on the hydrologic cycle. In this study, a downscaling approach was developed and tested to improve the applicability of terrestrial water storage (TWS) anomaly data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission for understanding local terrestrial water cycle dynamics in the Northern High Plains region. A non-parametric, artificial neural network (ANN)-based model was utilized to downscale GRACE data by integrating it with hydrological variables (e.g. soil moisture) derived from satellite and land surface model data. The downscaling model, constructed through calibration and sensitivity analysis, was used to estimate the TWS anomaly for watersheds ranging from 5000 to 20,000 km2 in the study area. The downscaled water storage anomaly data were evaluated using water storage data derived from (1) an integrated hydrologic model, (2) a land surface model (e.g. Noah), and (3) storage anomalies calculated from in-situ groundwater level measurements. Results demonstrate that the ANN predicts the monthly TWS anomaly within the uncertainty (conservative error estimate = 34 mm) for most of the watersheds. The seasonal groundwater storage anomaly (GWSA) derived from the ANN correlated well (r = ~0.85) with GWSAs calculated from in-situ groundwater level measurements for watersheds as small as 6000 km2. The ANN-downscaled TWSA matches Noah-based TWSA more closely than standard GRACE-extracted TWSA at the local scale. Moreover, the ANN-downscaled change in TWS replicated the water storage variability resulting from the combined effect of climatic and human impacts (e.g. abstraction). The implications of utilizing finer-resolution GRACE data for improving local and regional water resources management decisions and applications are clear, particularly in areas lacking in-situ hydrologic monitoring networks.
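
    A minimal sketch of the downscaling idea follows, substituting scikit-learn's MLPRegressor for the paper's ANN; the predictors and synthetic data are hypothetical stand-ins for the satellite- and model-derived variables (e.g. soil moisture) described above.

      # Sketch of ANN-based downscaling: learn watershed-scale TWS anomaly from
      # local hydrologic predictors. Synthetic data; illustrative only.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # Hypothetical predictors: soil moisture, precipitation, ET anomalies
      X = rng.normal(size=(240, 3))            # 20 years of monthly records
      y = X @ np.array([0.6, 0.3, -0.2]) + rng.normal(scale=0.1, size=240)

      scaler = StandardScaler().fit(X)
      model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                           random_state=0).fit(scaler.transform(X), y)

      # Predict watershed-scale TWS anomaly from local variables, effectively
      # refining the coarse GRACE signal to the watershed of interest.
      tws_anomaly = model.predict(scaler.transform(X[:12]))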

  19. The Design of Data Disaster Recovery of National Fundamental Geographic Information System

    NASA Astrophysics Data System (ADS)

    Zhai, Y.; Chen, J.; Liu, L.; Liu, J.

    2014-04-01

    With the development of information technology, the data security of information systems is facing more and more challenges. The geographic information produced by surveying and mapping is a fundamental and strategic resource, applied in all areas of national economic, defence, and social development. It is especially vital to national and social interests when such classified geographic information directly concerns Chinese sovereignty. Several urgent problems that need to be resolved for surveying and mapping are how to handle mass data storage and backup, how to establish and improve the disaster backup system (especially after a sudden natural calamity or accident), and how to ensure that all sectors of the information system can be rapidly restored to correct operation. To overcome various disaster risks, protect the security of data, and reduce the impact of disasters, the effective way is doubtless to analyse the features of storage management and the security requirements, and to ensure that the design of the data disaster recovery system is suitable for surveying and mapping. This article analyses the features of fundamental geographic information data and the requirements of storage management, and presents a three-site disaster recovery plan for the DBMS based on widely used network, storage and backup, data replication, and remote application-switching technologies. In the LAN, data are synchronously replicated between the database management servers and the local storage of the backup management systems; simultaneously, data are asynchronously replicated between the local storage backup management systems and the remote database management servers. The core of the system is resolving a local disaster at the remote site, ensuring data security and business continuity for the local site. This article focuses on the following points: the background, the necessity of a disaster recovery system, the analysis of the data products, and the data disaster recovery plan. A feature of this program is the use of hardware-based hot data backup and remote online disaster recovery support for the Oracle database system. The achievement of this paper is in summarizing and analysing the common disaster recovery requirements of surveying and mapping business systems and, based on the actual situation of the industry, designing basic GIS disaster recovery solutions; we also give conclusions about the key technologies of RTO and RPO.

  20. Grid data access on widely distributed worker nodes using scalla and SRM

    NASA Astrophysics Data System (ADS)

    Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.

    2008-07-01

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on cheap disks attached to processing nodes, as such a model is far more economical than expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities, such as dynamic space allocation (lifetime of spaces), file management on shared storage (lifetime of files, file pinning), storage policies, or uniform access to heterogeneous storage solutions, is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing 350 TB of Storage Elements, and the experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and the approach to making access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare the solution with the standard Scalla approach in use in STAR for the past two years. Integration details, future plans, and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  1. A SCR Model Calibration Approach with Spatially Resolved Measurements and NH 3 Storage Distributions

    DOE PAGES

    Song, Xiaobo; Parker, Gordon G.; Johnson, John H.; ...

    2014-11-27

    Selective catalytic reduction (SCR) is a technology used for reducing NOx emissions in heavy-duty diesel (HDD) engine exhaust. In this study, the spatially resolved capillary inlet infrared spectroscopy (Spaci-IR) technique was used to study the gas concentration and NH3 storage distributions in an SCR catalyst, and to provide data for developing an SCR model to analyze the axial gaseous concentrations and axial distributions of NH3 storage. A two-site SCR model is described for simulating the reaction mechanisms. The model equations and a calculation method were developed using the Spaci-IR measurements to determine the NH3 storage capacity and the relationships between certain kinetic parameters of the model. Moreover, a calibration approach was then applied for tuning the kinetic parameters using the spatial gaseous measurements and the calculated NH3 storage as a function of axial position, instead of inlet and outlet gaseous concentrations of NO, NO2, and NH3. The equations and the approach for determining the NH3 storage capacity of the catalyst, and a method of dividing the NH3 storage capacity between the two storage sites, are presented. It was determined that the kinetic parameters of the adsorption and desorption reactions have to follow certain relationships for the model to simulate the experimental data. Finally, the modeling results served as a basis for developing full model calibrations to SCR lab reactor and engine data and for state estimator development, as described in the references (Song et al. 2013a, b; Surenahalli et al. 2013).
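
    For orientation, typical two-site adsorption/desorption rate expressions of the kind such SCR models use are, in LaTeX notation (a standard formulation, not necessarily the paper's exact equations):

      r_{\mathrm{ads},i} = k_{\mathrm{ads},i} \, C_{\mathrm{NH_3}} \, \Omega_i \, (1 - \theta_i)
      r_{\mathrm{des},i} = k_{\mathrm{des},i} \, \Omega_i \, \theta_i , \qquad i \in \{1, 2\}
      k_j = A_j \exp\left( -E_j / (R T) \right)

    where \theta_i is the NH3 surface coverage on site i, \Omega_i is the storage capacity of that site, C_{\mathrm{NH_3}} is the local gas-phase NH3 concentration, and the Arrhenius parameters A_j and E_j are among the kinetic parameters being calibrated; the "certain relationships" noted above constrain the adsorption and desorption parameters jointly.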

  2. Experimental investigation of a page-oriented Lippmann holographic data storage system

    NASA Astrophysics Data System (ADS)

    Pauliat, Gilles; Contreras, Kevin

    2010-06-01

    Lippmann photography is a more-than-a-century-old interferometric process invented for recording colored images in thick black and white photographic emulsions. After a comparison between this photographic process and Denisyuk holography, we outline how this technique can be applied to high-density data storage by wavelength multiplexing in a page-oriented approach in thick media. For the first time, we experimentally investigate this approach. We anticipate that this storage architecture should allow capacities as large as those of conventional holography.

  3. Digitally programmable signal generator and method

    DOEpatents

    Priatko, G.J.; Kaskey, J.A.

    1989-11-14

    Disclosed is a digitally programmable waveform generator for generating completely arbitrary digital or analog waveforms from very low frequencies to frequencies in the gigasample per second range. A memory array with multiple parallel outputs is addressed; then the parallel output data is latched into buffer storage from which it is serially multiplexed out at a data rate many times faster than the access time of the memory array itself. While data is being multiplexed out serially, the memory array is accessed with the next required address and presents its data to the buffer storage before the serial multiplexing of the last group of data is completed, allowing this new data to then be latched into the buffer storage for smooth continuous serial data output. In a preferred implementation, a plurality of these serial data outputs are paralleled to form the input to a digital to analog converter, providing a programmable analog output. 6 figs.

  4. Digitally programmable signal generator and method

    DOEpatents

    Priatko, Gordon J.; Kaskey, Jeffrey A.

    1989-01-01

    A digitally programmable waveform generator for generating completely arbitrary digital or analog waveforms from very low frequencies to frequencies in the gigasample per second range. A memory array with multiple parallel outputs is addressed; then the parallel output data is latched into buffer storage from which it is serially multiplexed out at a data rate many times faster than the access time of the memory array itself. While data is being multiplexed out serially, the memory array is accessed with the next required address and presents its data to the buffer storage before the serial multiplexing of the last group of data is completed, allowing this new data to then be latched into the buffer storage for smooth continuous serial data output. In a preferred implementation, a plurality of these serial data outputs are paralleled to form the input to a digital to analog converter, providing a programmable analog output.
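
    The double-buffered readout that both versions of this patent describe can be illustrated with a short simulation: while one buffer is being shifted out serially, the next memory word is latched into the idle buffer, so the bit stream never stalls waiting on the slower memory access. The Python below is a toy model of that scheme, not the patented circuit.

      # Toy model of double-buffered (ping-pong) serialization: latch word i+1
      # into the idle buffer while word i is shifted out of the other buffer.
      def serialize(memory, width=8):
          buffers = [None, None]
          out = []
          for i, word in enumerate(memory):
              buffers[i % 2] = word                 # latch into the idle buffer
              drain = buffers[(i - 1) % 2]          # meanwhile, shift out the other
              if drain is not None:
                  out.extend((drain >> b) & 1 for b in reversed(range(width)))
          # flush the last latched word
          out.extend((memory[-1] >> b) & 1 for b in reversed(range(width)))
          return out

      bits = serialize([0b10110010, 0b01011100])    # continuous serial output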

  5. Bioinformatics and Microarray Data Analysis on the Cloud.

    PubMed

    Calabrese, Barbara; Cannataro, Mario

    2016-01-01

    High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that needs large data storage and computing power. Cloud computing offers massively scalable computing and storage, data sharing, and on-demand, anytime-and-anywhere access to resources and applications, and thus it may represent the key technology for facing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services, both in academia and in industry. Despite this, cloud computing presents several issues regarding the security and privacy of data, which are particularly important when analyzing patient data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis solutions, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patient data.

  6. Digital Holographic Memories

    NASA Astrophysics Data System (ADS)

    Hesselink, Lambertus; Orlov, Sergei S.

    Optical data storage is a phenomenal success story. Since its introduction in the early 1980s, optical data storage devices have evolved from being focused primarily on music distribution, to becoming the prevailing data distribution and recording medium. Each year, billions of optical recordable and prerecorded disks are sold worldwide. Almost every computer today is shipped with a CD or DVD drive installed.

  7. High Density Data Storage, the SONY Data DiscMan Electronic Book, and the Unfolding Multi-Media Revolution.

    ERIC Educational Resources Information Center

    Kountz, John

    1991-01-01

    Description of high density data storage (HDDS) devices focuses on CD-ROMs and explores their impact on libraries, publishing, education, and library communications. Highlights include costs; technical standards; reading devices; authoring systems; robotics; the influence of new technology on the role of libraries; and royalty and copyright issues…

  8. Generation system impacts of storage heating and storage water heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gellings, C.W.; Quade, A.W.; Stovall, J.P.

    Thermal energy storage systems offer the electric utility a means to change customer energy use patterns. At present, however, the costs and benefits to both the customers and the utility are uncertain. As part of a nationwide demonstration program, Public Service Electric and Gas Company (PSE&G) installed storage space heating and water heating appliances in residential homes. Both the test homes and similar homes using conventional space and water heating appliances were monitored, allowing for detailed comparisons between the two systems. The purpose of this paper is to detail the methodology used and the results of studies completed on the generation system impacts of storage space and water heating systems. Other electric system impacts, involving service entrance size, metering, secondary distribution, and primary distribution, were detailed in two previous IEEE papers. This paper is organized into three main sections. The first gives background data on PSE&G and its experience in a nationwide thermal storage demonstration project. The second section details results of the demonstration project and studies that have been performed on the impacts of thermal storage equipment. The last section reports the conclusions reached concerning the impacts of thermal storage on generation. The study was conducted in early 1982 using the data available at that time; while PSE&G system plans have changed since then, the conclusions remain pertinent and valuable to those contemplating the impacts of thermal energy storage.

  9. Preliminary Results from Powell Research Group on Integrating GRACE Satellite and Ground-based Estimates of Groundwater Storage Changes

    NASA Astrophysics Data System (ADS)

    Scanlon, B. R.; Zhang, Z.; Reitz, M.; Rodell, M.; Sanford, W. E.; Save, H.; Wiese, D. N.; Croteau, M. J.; McGuire, V. L.; Pool, D. R.; Faunt, C. C.; Zell, W.

    2017-12-01

    Groundwater storage depletion is a critical issue for many of the major aquifers in the U.S., particularly during intense droughts. GRACE (Gravity Recovery and Climate Experiment) satellite-based estimates of groundwater storage changes have attracted considerable media attention in the U.S. and globally, and interest in GRACE products continues to increase. For this reason, a Powell Research Group was formed to: (1) assess variations in groundwater storage using a variety of GRACE products and other storage components (snow, surface water, and soil moisture) for major aquifers in the U.S., (2) quantify long-term trends in groundwater storage from ground-based monitoring and regional and national modeling, and (3) use ground-based monitoring and modeling to interpret GRACE water storage changes within the context of extreme droughts and over-exploitation of groundwater. The group now has preliminary estimates of long-term trends and seasonal fluctuations in water storage using different GRACE solutions, including CSR, JPL, and GSFC. Approaches to quantifying uncertainties in GRACE data are included. This work also shows how GRACE sees groundwater depletion in unconfined versus confined aquifers; future work will link GRACE data to regional groundwater models. The wealth of ground-based observations for the U.S. provides a unique opportunity to assess the reliability of GRACE-based estimates of groundwater storage changes.

  10. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  11. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  12. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  13. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  14. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  15. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  16. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  17. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  18. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  19. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  20. CO2 Storage related Groundwater Impacts and Protection

    NASA Astrophysics Data System (ADS)

    Fischer, Sebastian; Knopf, Stefan; May, Franz; Rebscher, Dorothee

    2016-03-01

    Injection of CO2 into the deep subsurface will affect physical and chemical conditions in the storage environment. Hence, geological CO2 storage can have potential impacts on groundwater resources. Shallow freshwater can only be affected if leakage pathways facilitate the ascent of CO2 or saline formation water. Leakage associated with CO2 storage cannot be excluded, but potential environmental impacts could be reduced by selecting suitable storage locations. In the framework of risk assessment, testing of models and scenarios against operational data has to be performed repeatedly in order to predict the long-term fate of CO2. Monitoring of a storage site should reveal any deviations from expected storage performance, so that corrective measures can be taken. Comprehensive R & D activities and experience from several storage projects will enhance the state of knowledge on geological CO2 storage, thus enabling safe storage operations at well-characterised and carefully selected storage sites while meeting the requirements of groundwater protection.

  1. Cost and performance of thermal storage concepts in solar thermal systems, Phase 2-liquid metal receivers

    NASA Astrophysics Data System (ADS)

    McKenzie, A. W.

    Cost and performance of various thermal storage concepts in a liquid metal receiver solar thermal power system application have been evaluated. The objectives of this study are to provide consistently calculated cost and performance data for thermal storage concepts integrated into solar thermal systems. Five alternative storage concepts are evaluated for a 100-MW(e) liquid-metal-cooled receiver solar thermal power system for 1, 6, and 15 hours of storage: sodium 2-tank (reference system), molten draw salt 2-tank, sand moving bed, air/rock, and latent heat (phase change) with tube-intensive heat exchange (HX). The results indicate that the all-sodium 2-tank thermal storage concept is not cost-effective for storage in excess of 3 or 4 hours; the molten draw salt 2-tank storage concept provides significant cost savings over the reference sodium 2-tank concept; and the air/rock storage concept with pressurized sodium buffer tanks provides the lowest evaluated cost of all storage concepts considered above 6 hours of storage.

  2. Focused ion beam micromilling and articles therefrom

    DOEpatents

    Lamartine, Bruce C.; Stutz, Roger A.

    1998-01-01

    An ultrahigh vacuum focused ion beam micromilling apparatus and process are disclosed. Additionally, a durable data storage medium using the micromilling process is disclosed, the durable data storage medium capable of storing, e.g., digital or alphanumeric characters as well as graphical shapes or characters.

  3. Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure

    DOE PAGES

    Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R.; ...

    2016-04-05

    While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink; for instance, the data storage infrastructure at both the source and the sink, and its interplay with the wide-area network, is increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy and operating-system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems by buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth and data transfer rates across high-speed networks. We also investigate the performance degradation problems of LADS due to I/O contention on the parallel file system (PFS) when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams while sharing the PFS, to minimize the I/O contention on the PFS. Finally, with our evaluations, we observe that LADS with meta-scheduling can further improve the performance by up to 14 percent relative to LADS without meta-scheduling.

  4. Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R.

    While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink; for instance, the data storage infrastructure at both the source and the sink, and its interplay with the wide-area network, is increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy and operating-system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems by buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth and data transfer rates across high-speed networks. We also investigate the performance degradation problems of LADS due to I/O contention on the parallel file system (PFS) when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams while sharing the PFS, to minimize the I/O contention on the PFS. Finally, with our evaluations, we observe that LADS with meta-scheduling can further improve the performance by up to 14 percent relative to LADS without meta-scheduling.

  5. Partial Storage Optimization and Load Control Strategy of Cloud Data Centers

    PubMed Central

    2015-01-01

    We present a novel approach to solving cloud storage issues and provide a fast load-balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner. PMID:25973444

  6. Partial storage optimization and load control strategy of cloud data centers.

    PubMed

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach to solving cloud storage issues and provide a fast load-balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage by providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers will collaborate to provide the data to the cloud clients in a faster manner.
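
    The core of the dual-direction download described in both versions of this abstract can be sketched as two workers consuming one file range from opposite ends, each served by a different replica, until they meet. The following Python is an illustrative toy (in-memory byte buffers stand in for cloud nodes), not the authors' implementation.

      # Toy dual-direction fetch: one worker reads forward from the start,
      # another reads backward from the end; they meet in the middle.
      import threading

      def dual_direction_fetch(replica_a, replica_b, chunk=4):
          n = len(replica_a)
          result = bytearray(n)
          state = {"lo": 0, "hi": n}
          lock = threading.Lock()

          def forward():
              while True:
                  with lock:
                      if state["lo"] >= state["hi"]:
                          return
                      start = state["lo"]
                      state["lo"] = min(start + chunk, state["hi"])
                      end = state["lo"]
                  result[start:end] = replica_a[start:end]   # node A serves the head

          def backward():
              while True:
                  with lock:
                      if state["lo"] >= state["hi"]:
                          return
                      end = state["hi"]
                      state["hi"] = max(end - chunk, state["lo"])
                      start = state["hi"]
                  result[start:end] = replica_b[start:end]   # node B serves the tail

          workers = [threading.Thread(target=forward), threading.Thread(target=backward)]
          for t in workers: t.start()
          for t in workers: t.join()
          return bytes(result)

      data = bytes(range(32))
      assert dual_direction_fetch(data, data) == data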

  7. Optimal micro-mirror tilt angle and sync mark design for digital micro-mirror device based collinear holographic data storage system.

    PubMed

    Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Liu, Jinyan; Huang, Yong; Tan, Xiaodi

    2017-06-01

    The collinear holographic data storage system (CHDSS) is a very promising storage system due to its large storage capacity and high transfer rates in the era of big data. The digital micro-mirror device (DMD), as a spatial light modulator, is the key device of the CHDSS due to its high speed, high precision, and broadband working range. To improve system stability and performance, an optimal micro-mirror tilt angle was theoretically calculated and experimentally confirmed by analyzing the relationship between the tilt angle of the micro-mirrors on the DMD and the power profiles of the diffraction patterns of the DMD at the Fourier plane. In addition, we propose a novel chessboard sync mark design for the data page to reduce the system bit error rate in circumstances of reduced aperture (required to decrease noise) and median exposure amount. It will provide practical guidance for future DMD-based CHDSS development.

  8. Optical mass memory investigation

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The MASTER 1 optical mass storage system advanced working model (AWM) was designed to demonstrate recording and playback of imagery data and to enable quantitative data to be derived on the statistical distribution of raw errors experienced through the system. The AWM consists of two subsystems, the recorder and storage and retrieval. The recorder subsystem utilizes key technologies such as an acoustic travelling wave lens to achieve recording of digital data on fiche at a rate of 30 Mbits/sec, whereas the storage and retrieval reproducer subsystem utilizes a less complex optical system that employs an acousto-optical beam deflector to achieve data readout at a 5 Mbits/sec rate. The system has a built-in capability for detecting and collecting error statistics. The recorder and storage and retrieval subsystems operate independently of one another and are each constructed in modular form, with each module performing independent functions. The operation of each module and its interface to other modules is controlled by one controller for both subsystems.

  9. Hydrologic considerations for estimation of storage-capacity requirements of impounding and side-channel reservoirs for water supply in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2001-01-01

    This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the storage-requirement estimates. The effects of an instream-flow requirement equal to the 80-percent-duration flow are also incorporated into the storage-requirement estimates.

  10. A new data collaboration service based on cloud computing security

    NASA Astrophysics Data System (ADS)

    Ying, Ren; Li, Hua-Wei; Wang, Li na

    2017-09-01

    With the rapid development of cloud computing, the storage and usage of data have undergone revolutionary changes. Data owners can store data in the cloud. While bringing convenience, this also brings many new challenges to cloud data security. A key issue is how to provide a secure data collaboration service that supports access to and updates of cloud data. This paper proposes a secure, efficient, and extensible data collaboration service, which prevents data leaks in cloud storage, supports one-to-many encryption mechanisms, and also enables cloud data writing and fine-grained access control.
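
    One common way to realize the one-to-many encryption such a service needs is envelope encryption: the data is encrypted once under a symmetric key, and that key is wrapped separately for each collaborator. The sketch below assumes the pycryptodome library and shows the generic pattern, not the scheme proposed in the paper.

      # Envelope (one-to-many) encryption sketch: AES for the data, RSA-OAEP to
      # wrap the data key once per recipient. Generic pattern, illustrative only.
      from Crypto.PublicKey import RSA
      from Crypto.Cipher import AES, PKCS1_OAEP
      from Crypto.Random import get_random_bytes

      def share(data, recipient_public_keys):
          data_key = get_random_bytes(16)
          cipher = AES.new(data_key, AES.MODE_EAX)
          ciphertext, tag = cipher.encrypt_and_digest(data)
          wrapped = [PKCS1_OAEP.new(pk).encrypt(data_key) for pk in recipient_public_keys]
          return (cipher.nonce, tag, ciphertext), wrapped

      def open_share(blob, wrapped_key, private_key):
          nonce, tag, ciphertext = blob
          data_key = PKCS1_OAEP.new(private_key).decrypt(wrapped_key)
          return AES.new(data_key, AES.MODE_EAX, nonce=nonce).decrypt_and_verify(ciphertext, tag)

      alice, bob = RSA.generate(2048), RSA.generate(2048)
      blob, keys = share(b"shared cloud document", [alice.publickey(), bob.publickey()])
      assert open_share(blob, keys[1], bob) == b"shared cloud document"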

  11. Mass storage: The key to success in high performance computing

    NASA Technical Reports Server (NTRS)

    Lee, Richard R.

    1993-01-01

    There are numerous High Performance Computing and Communications initiatives in the world today. All are determined to help solve some 'Grand Challenge' type of problem, but each appears to be dominated by the pursuit of higher and higher levels of CPU performance and interconnection bandwidth as the approach to success, without any regard to the impact of mass storage. My colleagues and I at Data Storage Technologies believe that each will ultimately have its performance against its goals measured by its ability to efficiently store and retrieve the 'deluge of data' created by end users who will be using these systems to solve scientific Grand Challenge problems, and that the issue of mass storage will then become the determinant of success or failure in achieving each project's goals. In today's world of High Performance Computing and Communications (HPCC), the critical path to success in solving problems can only be traveled by designing and implementing mass storage systems capable of storing and manipulating the truly 'massive' amounts of data associated with solving these challenges. Within my presentation I will explore this critical issue and hypothesize solutions to this problem.

  12. Forensic Investigation of Cooperative Storage Cloud Service: Symform as a Case Study.

    PubMed

    Teing, Yee-Yang; Dehghantanha, Ali; Choo, Kim-Kwang Raymond; Dargahi, Tooska; Conti, Mauro

    2017-05-01

    Researchers envisioned Storage as a Service (StaaS) as an effective solution to the distributed management of digital data. Cooperative storage cloud forensics is a relatively new and under-explored area of research. Using Symform as a case study, we seek to determine the data remnants from the use of cooperative cloud storage services. In particular, we consider both mobile devices and personal computers running various popular operating systems, namely Windows 8.1, Mac OS X Mavericks 10.9.5, Ubuntu 14.04.1 LTS, iOS 7.1.2, and Android KitKat 4.4.4. Potential artifacts recovered during the research include data relating to the installation and uninstallation of the cloud applications, log-in to and log-out from the Symform account using the client application, and file synchronization, as well as their timestamp information. This research contributes to an in-depth understanding of the types of terrestrial artifacts that are likely to remain after the use of a cooperative storage cloud on client devices. © 2016 American Academy of Forensic Sciences.

  13. Novel carbazole derivatives with quinoline ring: synthesis, electronic transition, and two-photon absorption three-dimensional optical data storage.

    PubMed

    Li, Liang; Wang, Ping; Hu, Yanlei; Lin, Geng; Wu, Yiqun; Huang, Wenhao; Zhao, Quanzhong

    2015-03-15

    We designed carbazole units with extended π conjugation by employing the Vilsmeier formylation reaction and Knoevenagel condensation to attach quinoline functional groups at the 3- or 3,6-positions of carbazole. Films of the two compounds doped into poly(methyl methacrylate) (PMMA) were prepared. To explore the electronic transition properties of these compounds, one-photon absorption properties were experimentally measured and theoretically calculated using time-dependent density functional theory. We characterized these films using an 800 nm, 120 fs Ti:sapphire laser, measuring two-photon absorption (TPA) fluorescence emission properties and TPA coefficients to obtain the TPA cross sections. A three-dimensional optical data storage experiment was conducted using a TPA photoreaction with an 800 nm fs laser on the film, achieving seven-layer optical data storage. The experiment proves that these carbazole derivatives are well suited for two-photon 3D optical storage, thus laying the foundation for research on multilayer high-density and ultra-high-density optical information storage materials. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.

    2016-12-01

    Cloud based infrastructure may offer several key benefits of scalability, built in redundancy and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided through all the leading public (Amazon Web Service, Microsoft Azure, Google Cloud, etc.) and private (Open Stack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file system based storage to vend earth science data. The system described is not only cost effective, but shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using clouds services running on Amazon Web Services.
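
    A minimal sketch of reading such data from object storage follows, assuming boto3 against an S3-compatible service and h5py over an in-memory buffer; the bucket, key, and dataset names are hypothetical. (The client libraries the abstract mentions instead expose an HDF5/NetCDF4-compatible API directly against the object store, avoiding the full-object download shown here.)

      # Fetch an HDF5 granule from object storage and read a dataset.
      # Bucket, key, and dataset names are hypothetical; illustrative only.
      import io
      import boto3
      import h5py

      s3 = boto3.client("s3")
      obj = s3.get_object(Bucket="example-earth-science", Key="granules/scene.h5")
      with h5py.File(io.BytesIO(obj["Body"].read()), "r") as f:
          temperature = f["surface_temperature"][:]   # hypothetical dataset name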

  15. Approach to Privacy-Preserve Data in Two-Tiered Wireless Sensor Network Based on Linear System and Histogram

    NASA Astrophysics Data System (ADS)

    Dang, Van H.; Wohlgemuth, Sven; Yoshiura, Hiroshi; Nguyen, Thuc D.; Echizen, Isao

    Wireless sensor networks (WSNs) have been one of the key technologies for the future, with broad applications from the military to everyday life [1,2,3,4,5]. There are two kinds of WSN models: models with sensors for sensing data and a sink for receiving and processing queries from users; and models with special additional nodes capable of storing large amounts of data from sensors and processing queries from the sink. Among the latter type, a two-tiered model [6,7] has been widely adopted because of its storage and energy-saving benefits for weak sensors, as proved by the advent of commercial storage node products such as Stargate [8] and RISE. However, by concentrating storage in certain nodes, this model becomes more vulnerable to attack. Our novel technique, called zip-histogram, contributes to solving the problems of previous studies [6,7] by protecting the stored data's confidentiality and integrity (including data from the sensors and queries from the sink) against attackers who might target storage nodes in two-tiered WSNs.

  16. Proposal for implementation of CCSDS standards for use with spacecraft engineering/housekeeping data

    NASA Technical Reports Server (NTRS)

    Welch, Dave

    1994-01-01

    Many of today's low Earth orbiting spacecraft use the Consultative Committee for Space Data Systems (CCSDS) protocol to better optimize downlink RF bandwidth and onboard storage space. However, most of the associated housekeeping data has continued to be generated and downlinked in a synchronous, Time Division Multiplexed (TDM) fashion. The CCSDS protocol permits many economies that better utilize the available bandwidth and storage space, optimizing the housekeeping data for use in operational trending and analysis work. By outputting only what is currently important or of interest, finer resolution of critical items can be obtained. This can be accomplished by better utilizing the normally allocated housekeeping downlink and storage areas rather than taking space reserved for science.

  17. Proposal for implementation of CCSDS standards for use with spacecraft engineering/housekeeping data

    NASA Astrophysics Data System (ADS)

    Welch, Dave

    1994-11-01

    Many of today's low Earth orbiting spacecraft use the Consultative Committee for Space Data Systems (CCSDS) protocol to better optimize downlink RF bandwidth and onboard storage space. However, most of the associated housekeeping data has continued to be generated and downlinked in a synchronous, Time Division Multiplexed (TDM) fashion. The CCSDS protocol permits many economies that better utilize the available bandwidth and storage space, optimizing the housekeeping data for use in operational trending and analysis work. By outputting only what is currently important or of interest, finer resolution of critical items can be obtained. This can be accomplished by better utilizing the normally allocated housekeeping downlink and storage areas rather than taking space reserved for science.
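
    The packetized alternative the proposal points toward is the standard CCSDS Space Packet, whose six-octet primary header lets housekeeping values be sent asynchronously, only when of interest. The header packing below follows the published CCSDS format; the APID and the two housekeeping values are illustrative assumptions.

    ```python
    # Sketch: build a CCSDS Space Packet carrying a small housekeeping report.
    # Primary header (6 octets): version (3 bits) | type (1) | sec. hdr flag (1)
    # | APID (11) || sequence flags (2) | sequence count (14) || data length (16).
    import struct

    def ccsds_primary_header(apid: int, seq_count: int, data: bytes,
                             packet_type: int = 0, sec_hdr: int = 0) -> bytes:
        """Pack the 6-octet CCSDS primary header (version 000)."""
        word1 = (packet_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
        word2 = (0b11 << 14) | (seq_count & 0x3FFF)  # 0b11 = unsegmented packet
        word3 = len(data) - 1                        # data length = octets minus one
        return struct.pack(">HHH", word1, word2, word3)

    # Only the values currently of interest go in the data field, giving the
    # finer resolution for critical items that the abstract describes.
    hk_values = struct.pack(">Hh", 0x0042, -17)      # e.g. one voltage, one temperature
    packet = ccsds_primary_header(apid=0x123, seq_count=5, data=hk_values) + hk_values
    print(packet.hex())
    ```

    Because each packet is self-describing via its APID and length, the ground system can route and trend individual housekeeping channels without a fixed TDM schedule.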

  18. A 3D seismic investigation of the Ray Gas Storage Reef in Macomb County, Michigan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, S.F.; Dixon, R.A.

    1995-09-01

    A 4.2 square mile 3D seismic survey was acquired over the Ray Niagaran Reef Gas Storage Field in southeast Michigan as part of a program to maximize storage capacity and gas deliverability of the field. Goals of the survey were: (1) to determine if additional storage capacity could be found, either as extensions to the main reef or as undiscovered satellite reefs; (2) to determine if 3D seismic data can be utilized to quantify reservoir parameters in order to maximize the productive capacity of infill wells; and (3) to investigate the relationship between the main reef body and a low relief/flow volume gas well east of the reef. Interpretation of the 3D seismic data resulted in a detailed image of the reef, using several interpretive techniques. A seismic reflection within the reef was correlated with a known porosity zone, and the relationship between porosity and seismic amplitude was investigated. A possible connection between the main reef and the low relief gas well was identified. This project illustrates the economic value of investigating an existing storage reef with 3D seismic data, and underscores the necessity of acquiring such a survey prior to developing a new storage reservoir.

  19. Efficient Storage Scheme of Covariance Matrix during Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Mao, D.; Yeh, T. J.

    2013-12-01

    During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update cost too much memory and computational resources. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is used to store the covariance matrix, and users can choose how much data to store based on correlation scales, since entries beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated; the off-diagonal terms are recalculated from shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to experiment. The new scheme is first tested with 1D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is used to validate the new scheme.
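
    A small sketch of the truncation idea follows: store only entries within a few correlation scales in CSC format, and shrink the scale each iteration. The exponential model and the 0.95 factor come from the abstract; the grid size, variance, and three-scale cutoff are illustrative assumptions.

    ```python
    # Sketch: truncated exponential covariance on a 1D grid, stored in CSC.
    # Entries beyond `cutoff` correlation scales are simply never created.
    import numpy as np
    from scipy.sparse import csc_matrix

    n, dx = 200, 1.0            # 1D grid: 200 nodes, unit spacing (assumed)
    sigma2, scale = 1.0, 10.0   # variance and initial correlation scale (assumed)
    cutoff = 3                  # keep entries within 3 correlation scales (assumed)

    def truncated_covariance(scale):
        rows, cols, vals = [], [], []
        max_lag = int(cutoff * scale / dx)
        for j in range(n):
            for i in range(max(0, j - max_lag), min(n, j + max_lag + 1)):
                rows.append(i)
                cols.append(j)
                vals.append(sigma2 * np.exp(-abs(i - j) * dx / scale))
        return csc_matrix((vals, (rows, cols)), shape=(n, n))

    R = truncated_covariance(scale)
    print("nonzeros:", R.nnz, "vs dense:", n * n)

    # Per iteration: the diagonal would be updated from the estimation, while
    # off-diagonals are rebuilt from the exponential model with the
    # correlation scale shortened by the 0.95 coefficient.
    for _ in range(3):
        scale *= 0.95
        R = truncated_covariance(scale)
        print("scale %.2f -> nonzeros %d" % (scale, R.nnz))
    ```

    As the scale shrinks, the sparsity band narrows, so both memory and update cost fall together with the posterior uncertainty.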

  20. Mahanaxar: quality of service guarantees in high-bandwidth, real-time streaming data storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bigelow, David; Bent, John; Chen, Hsing-Bung

    2010-04-05

    Large radio telescopes, cyber-security systems monitoring real-time network traffic, and other applications have specialized data storage needs: guaranteed capture of an ultra-high-bandwidth data stream, retention of the data long enough to determine what is 'interesting,' retention of interesting data indefinitely, and concurrent read/write access to determine what data is interesting, all without interrupting the ongoing capture of incoming data. Mahanaxar addresses this problem. It guarantees streaming real-time data capture at (nearly) the full rate of the raw device, allows concurrent read and write access to the device on a best-effort basis without interrupting the data capture, and retains data as long as possible given the available storage. It has built-in mechanisms for reliability and indexing, can scale to meet arbitrary bandwidth requirements, and handles both small and large data elements equally well. Results from our prototype implementation show that Mahanaxar provides both better guarantees and better performance than traditional file systems.
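
    The retention policy described above can be illustrated with a toy fixed-capacity ring buffer in which capture always succeeds, reads are best-effort, and blocks marked "interesting" are pinned so that only uninteresting data is reclaimed. Class and method names are illustrative assumptions, not Mahanaxar's actual API.

    ```python
    # Toy sketch of guaranteed-capture storage with pinned retention.
    from collections import deque

    class StreamStore:
        def __init__(self, capacity_blocks: int):
            self.capacity = capacity_blocks
            self.blocks = deque()              # entries: (block_id, data, pinned)
            self.next_id = 0

        def capture(self, data: bytes):
            """Write path: never blocks on readers; evicts oldest unpinned block."""
            if len(self.blocks) >= self.capacity:
                for i, (bid, d, pinned) in enumerate(self.blocks):
                    if not pinned:
                        del self.blocks[i]     # reclaim oldest uninteresting block
                        break
                else:
                    raise RuntimeError("store full of pinned data")
            self.blocks.append((self.next_id, data, False))
            self.next_id += 1

        def mark_interesting(self, block_id: int):
            """A reader decided this block matters: retain it indefinitely."""
            for i, (bid, d, p) in enumerate(self.blocks):
                if bid == block_id:
                    self.blocks[i] = (bid, d, True)

    store = StreamStore(capacity_blocks=4)
    for t in range(6):
        store.capture(b"sample-%d" % t)        # capture is never interrupted
        if t == 2:
            store.mark_interesting(2)          # concurrent reader flags block 2
    print([(bid, pinned) for bid, _, pinned in store.blocks])
    ```

    A production system would additionally reserve raw-device bandwidth for the write path and throttle readers, which is where the quality-of-service guarantees in the title come from.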
