Sample records for data storage

  1. Data storage technology comparisons

    NASA Technical Reports Server (NTRS)

    Katti, Romney R.

    1990-01-01

    The role of data storage and data storage technology is an integral, though conceptually often underestimated, part of data processing technology. Data storage is important in the mass storage mode, in which generated data are buffered for later use. But data storage technology is also important in the data flow mode, when data are manipulated and hence required to flow between databases, datasets, and processors. This latter mode is commonly associated with memory hierarchies which support computation. VLSI devices can reasonably be defined as electronic circuit devices such as channel and control electronics as well as highly integrated, solid-state devices that are fabricated using thin film deposition technology. VLSI devices in both capacities play an important role in data storage technology. In addition to random access memories (RAM), read-only memories (ROM), and other silicon-based variations such as PROMs, EPROMs, and EEPROMs, integrated devices find their way into a variety of memory technologies which offer significant performance advantages. These memory technologies include magnetic tape, magnetic disk, magneto-optic disk, and vertical Bloch line memory. In this paper, some comparisons between selected technologies are made to demonstrate why more than one memory technology exists today, based for example on access time and storage density at the active bit and system levels.

  2. High Density Digital Data Storage System

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D., II; Gray, David L.; Rowland, Wayne D.

    1991-01-01

    The High Density Digital Data Storage System was designed to provide a cost-effective means for storing real-time data from the field-deployable digital acoustic measurement system. However, the high density data storage system is a standalone system that could provide a storage solution for many other real-time data acquisition applications. The storage system has inputs for up to 20 channels of 16-bit digital data. The high density tape recorders presently being used in the storage system are capable of storing over 5 gigabytes of data at overall transfer rates of 500 kilobytes per second. However, through the use of data compression techniques the system storage capacity and transfer rate can be doubled. Two tape recorders have been incorporated into the storage system to produce a backup tape of data in real time. An analog output is provided for each data channel as a means of monitoring the data as it is being recorded.
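
    As an illustrative back-of-envelope check (our arithmetic, not from the original report; the variable names below are hypothetical), the figures quoted above imply a per-channel sample rate of roughly 12.5 kHz and a recording time of roughly 2.8 hours per 5-gigabyte tape:

      # Rough arithmetic based on the figures quoted in the abstract.
      channels = 20                # 16-bit input channels
      bytes_per_sample = 2         # 16 bits = 2 bytes
      transfer_rate = 500_000      # overall transfer rate in bytes/s (500 kilobytes/s)
      capacity = 5_000_000_000     # tape capacity in bytes (over 5 gigabytes)

      per_channel_rate = transfer_rate / (channels * bytes_per_sample)  # ~12,500 samples/s
      recording_hours = capacity / transfer_rate / 3600                 # ~2.8 hours per tape
      print(per_channel_rate, recording_hours)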

  3. The Petascale Data Storage Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Garth; Long, Darrell; Honeyman, Peter

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.

  4. High volume data storage architecture analysis

    NASA Technical Reports Server (NTRS)

    Malik, James M.

    1990-01-01

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.

  5. High-Density Digital Data Storage System

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D.; Gray, David L.

    1995-01-01

    High-density digital data storage system designed for cost-effective storage of large amounts of information acquired during experiments. System accepts up to 20 channels of 16-bit digital data with overall transfer rates of 500 kilobytes per second. Data recorded on 8-millimeter magnetic tape in cartridges, each capable of holding up to five gigabytes of data. Each cartridge mounted on one of two tape drives. Operator chooses to use either or both of drives. One drive used for primary storage of data while other can be used to make a duplicate record of data. Alternatively, other drive serves as backup data-storage drive when primary one fails.

  6. Electron trapping data storage system and applications

    NASA Technical Reports Server (NTRS)

    Brower, Daniel; Earman, Allen; Chaffin, M. H.

    1993-01-01

    The advent of digital information storage and retrieval has led to explosive growth in data transmission techniques, data compression alternatives, and the need for high capacity random access data storage. Limitations of current data storage technologies are constraining the utilization of digitally based systems. New storage technologies will be required which can provide higher data capacities and faster transfer rates in a more compact format. Magnetic disk/tape and current optical data storage technologies do not provide these higher performance requirements for all digital data applications. A new technology developed at the Optex Corporation out-performs all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media is capable of storing as much as 14 gigabytes of uncompressed data on a single, double-sided 5.25-inch (130 mm) disk with a data transfer rate of up to 12 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out 100 percent photonically, no heating of the recording media is required. Therefore, the storage media suffers no deleterious effects from repeated Write/Read/Erase cycling.

  7. ICI optical data storage tape: An archival mass storage media

    NASA Technical Reports Server (NTRS)

    Ruddick, Andrew J.

    1993-01-01

    At the 1991 Conference on Mass Storage Systems and Technologies, ICI Imagedata presented a paper which introduced ICI Optical Data Storage Tape. That paper placed specific emphasis on the media characteristics, and initial data were presented illustrating the archival stability of the media. The present paper covers the more exhaustive analysis that has since been carried out on the chemical stability of the media. Equally important, it also addresses archive management issues, for example the benefits of the reduced rewind requirements (needed to accommodate tape relaxation effects) that result from careful tribology control in ICI Optical Tape media. ICI Optical Tape media was designed to meet the most demanding requirements of archival mass storage. It is envisaged that the volumetric data capacity, long term stability and low maintenance characteristics demonstrated will have major benefits in increasing reliability and reducing the costs associated with archival storage of large data volumes.

  8. Durable High-Density Data Storage

    NASA Technical Reports Server (NTRS)

    Lamartine, Bruce C.; Stutz, Roger A.

    1996-01-01

    The focused ion beam (FIB) micromilling process for data storage provides a new non-magnetic storage method for archiving large amounts of data. The process stores data on robust materials such as steel, silicon, and gold-coated silicon. The storage process was developed to provide a method to ensure the long-term storage life of data. We estimate the useful life of data written on silicon or gold-coated silicon to be on the order of a few thousand years, without the need to rewrite the data every few years. The process uses an ion beam to carve material from the surface, much like stone cutters in ancient civilizations removed material from stone. The deeper the information is carved into the media, the longer the expected life of the information. The process can record information in three formats: (1) binary at densities of 23 Gbits/square inch, (2) alphanumeric at optical or non-optical density, and (3) graphical at optical and non-optical density. The formats can be mixed on the same media; and thus, it is possible to record, in a human-viewable format, instructions that can be read using an optical microscope. These instructions provide guidance on reading the remaining higher density information.

  9. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data-driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate that storage needs will grow by orders of magnitude; this will require new approaches in data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to use federated data storage efficiently, experiment-wide, within national academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bioinformatics.

  10. Optical Data Storage Capabilities of Bacteriorhodopsin

    NASA Technical Reports Server (NTRS)

    Gary, Charles

    1998-01-01

    We present several measurements of the data storage capability of bacteriorhodopsin films to help establish the baseline performance of this material as a medium for holographic data storage. In particular, we examine the decrease in diffraction efficiency with the density of holograms stored at one location in the film, and we also analyze the recording schedule needed to produce a set of equal intensity holograms at a single location in the film. Using this information along with the assumptions about the performance of the optical system, we can estimate potential data storage densities in bacteriorhodopsin.
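
    For context, a standard scaling from the holographic-storage literature (not a result stated in this abstract): when M equal-strength holograms are multiplexed at one location, the diffraction efficiency of each typically falls off as the inverse square of M,

      \eta \approx \left( \frac{M/\#}{M} \right)^2 ,

    where the system metric M/# is set by the dynamic range of the recording material and the optics. Fitting such a curve to measured efficiencies is one way measurements like those described above can be turned into an estimate of achievable storage density.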

  11. Triboelectrification-Enabled Self-Powered Data Storage.

    PubMed

    Kuang, Shuang Yang; Zhu, Guang; Wang, Zhong Lin

    2018-02-01

    Data storage by any means usually requires an electric driving power for writing or reading. A novel approach for self-powered, triboelectrification-enabled data storage (TEDS) is presented. Data are incorporated into a set of metal-based surface patterns. As a probe slides across the patterned surface, triboelectrification between the scanning probe and the patterns produces an alternately varying voltage signal in the form of a quasi-square wave. The trough and crest of the quasi-square wave signal are coded as binary bits of "0" and "1," respectively, while the time span of the trough and the crest is associated with the number of bits. The storage of letters and sentences is demonstrated through either square-shaped or disc-shaped surface patterns. Based on experimental data and numerical calculation, the theoretically predicted maximum data storage density could reach as high as 38.2 Gbit in⁻². Demonstration of real-time data retrieval is realized with the assistance of a software interface. For the TEDS reported in this work, the measured voltage signal is self-generated as a result of triboelectrification, without reliance on an external power source. This feature brings about not only low power consumption but also a much more simplified structure. Therefore, this work paves a new path to a unique approach to high-density data storage that may have widespread applications.
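
    A minimal decoding sketch of the scheme described above (illustrative only; the threshold, sampling rate, and bit period are hypothetical and not taken from the paper): troughs map to 0, crests map to 1, and the duration of each level determines how many consecutive bits it represents.

      # Hypothetical decoder for a TEDS-style quasi-square voltage trace.
      def decode_trace(samples, threshold, samples_per_bit):
          bits = []
          level = None          # current logic level (0 or 1)
          run = 0               # length of the current run, in samples
          for v in samples:
              bit = 1 if v > threshold else 0
              if bit == level:
                  run += 1
              else:
                  if level is not None:
                      bits.extend([level] * round(run / samples_per_bit))
                  level, run = bit, 1
          if level is not None:
              bits.extend([level] * round(run / samples_per_bit))
          return bits

      # A crest lasting two bit periods followed by a trough of one period.
      trace = [0.8] * 20 + [0.1] * 10
      print(decode_trace(trace, threshold=0.5, samples_per_bit=10))  # -> [1, 1, 0]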

  12. Compact Holographic Data Storage

    NASA Technical Reports Server (NTRS)

    Chao, T. H.; Reyes, G. F.; Zhou, H.

    2001-01-01

    NASA's future missions would require massive high-speed onboard data storage capability to support Space Science missions. For Space Science, such as the Europa Lander mission, the onboard data storage requirements would be focused on maximizing the spacecraft's ability to survive fault conditions (i.e., no loss in stored science data when the spacecraft enters 'safe mode') and autonomously recover from them during NASA's long-life and deep space missions. This would require the development of non-volatile memory. In order to survive the stringent environment of space exploration missions, the onboard memory would also have to: (1) survive a high-radiation environment (1 Mrad), (2) operate effectively and efficiently for a very long time (10 years), and (3) sustain at least a billion write cycles. Therefore, the memory technology requirements of NASA's Earth Science and Space Science missions are large capacity, non-volatility, high transfer rate, high radiation resistance, high storage density, and high power efficiency. JPL, under current sponsorship from NASA Space Science and Earth Science Programs, is developing a high-density, nonvolatile and rad-hard Compact Holographic Data Storage (CHDS) system to enable large-capacity, high-speed, low-power-consumption read/write of data in a space environment. The entire read/write operation will be controlled with an electro-optic mechanism without any moving parts. This CHDS will consist of laser diodes, a photorefractive crystal, a spatial light modulator, a photodetector array, and an I/O electronic interface. In operation, pages of information would be recorded and retrieved with random access and at high speed. The nonvolatile, rad-hard characteristics of the holographic memory will provide a revolutionary memory technology meeting the high radiation challenge facing the Europa Lander mission. Additional information is contained in the original extended abstract.

  13. Optical storage media data integrity studies

    NASA Technical Reports Server (NTRS)

    Podio, Fernando L.

    1994-01-01

    Optical disk-based information systems are being used in private industry and many Federal Government agencies for on-line and long-term storage of large quantities of data. The storage devices that are part of these systems are designed with powerful, but not unlimited, media error correction capacities. The integrity of data stored on optical disks depends not only on the life expectancy specifications for the medium. Different factors, including handling and storage conditions, may result in an increase in the size and frequency of medium errors. Monitoring the potential data degradation is crucial, especially for long-term applications. Efforts are being made by the Association for Information and Image Management Technical Committee C21, Storage Devices and Applications, to specify methods for monitoring and reporting to the user the medium errors detected by the storage device while writing, reading, or verifying the data stored in that medium. The Computer Systems Laboratory (CSL) of the National Institute of Standards and Technology (NIST) has a leadership role in the development of these standard techniques. In addition, CSL is researching other data integrity issues, including the investigation of error-resilient compression algorithms. NIST has conducted care and handling experiments on optical disk media with the objective of identifying possible causes of degradation. NIST work in data integrity and related standards activities is described.

  14. Electron trapping optical data storage system and applications

    NASA Technical Reports Server (NTRS)

    Brower, Daniel; Earman, Allen; Chaffin, M. H.

    1993-01-01

    A new technology developed at Optex Corporation out-performs all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media stores 14 gigabytes of uncompressed data on a single, double-sided 130 mm disk with a data transfer rate of up to 120 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out photonically, no heating of the recording media is required. Therefore, the storage media suffers no deleterious effects from repeated W/R/E cycling. This rewritable data storage technology has been developed for use as a basis for numerous data storage products. Industries that can benefit from the ETOM data storage technologies include: satellite data and information systems, broadcasting, video distribution, image processing and enhancement, and telecommunications. Products developed for these industries are well suited for the demanding store-and-forward buffer systems, data storage, and digital video systems needed for these applications.

  15. ENERGY STAR Certified Data Center Storage

    EPA Pesticide Factsheets

    Certified models meet all ENERGY STAR requirements as listed in the Version 1.0 ENERGY STAR Program Requirements for Data Center Storage that are effective as of December 2, 2013. A detailed listing of key efficiency criteria is available at http://www.energystar.gov/certified-products/detail/data_center_storage

  16. Surface-Enhanced Raman Optical Data Storage system

    DOEpatents

    Vo-Dinh, T.

    1991-03-12

    A method and apparatus for a Surface-Enhanced Raman Optical Data Storage (SERODS) System are disclosed. A medium which exhibits the Surface Enhanced Raman Scattering (SERS) phenomenon has data written onto its surface or microenvironment by means of a write-on procedure which disturbs the surface or microenvironment of the medium and results in the medium having a changed SERS emission when excited. The write-on procedure is controlled by a signal that corresponds to the data to be stored so that the disturbed regions on the storage device (e.g., disk) represent the data. After the data is written onto the storage device it is read by exciting the surface of the storage device with an appropriate radiation source and detecting changes in the SERS emission to produce a detection signal. The data is then reproduced from the detection signal. 5 figures.

  17. Surface-enhanced raman optical data storage system

    DOEpatents

    Vo-Dinh, Tuan

    1991-01-01

    A method and apparatus for a Surface-Enhanced Raman Optical Data Storage (SERODS) System are disclosed. A medium which exhibits the Surface Enhanced Raman Scattering (SERS) phenomenon has data written onto its surface or microenvironment by means of a write-on procedure which disturbs the surface or microenvironment of the medium and results in the medium having a changed SERS emission when excited. The write-on procedure is controlled by a signal that corresponds to the data to be stored so that the disturbed regions on the storage device (e.g., disk) represent the data. After the data is written onto the storage device it is read by exciting the surface of the storage device with an appropriate radiation source and detecting changes in the SERS emission to produce a detection signal. The data is then reproduced from the detection signal.

  18. Towards rewritable multilevel optical data storage in single nanocrystals.

    PubMed

    Riesen, Nicolas; Pan, Xuanzhao; Badek, Kate; Ruan, Yinlan; Monro, Tanya M; Zhao, Jiangbo; Ebendorff-Heidepriem, Heike; Riesen, Hans

    2018-04-30

    Novel approaches for digital data storage are imperative, as storage capacities are drastically being outpaced by the exponential growth in data generation. Optical data storage represents the most promising alternative to traditional magnetic and solid-state data storage. In this paper, a novel and energy-efficient approach to optical data storage using rare-earth ion doped inorganic insulators is demonstrated. In particular, the nanocrystalline alkaline earth halide BaFCl:Sm is shown to provide great potential for multilevel optical data storage. Proof-of-concept demonstrations reveal for the first time that these phosphors could be used for rewritable, multilevel optical data storage on the physical dimensions of a single nanocrystal. Multilevel information storage is based on the very efficient and reversible conversion of Sm³⁺ to Sm²⁺ ions upon exposure to UV-C light. The stored information is then read out using confocal optics by employing the photoluminescence of the Sm²⁺ ions in the nanocrystals, with the signal strength depending on the UV-C fluence used during the write step. The latter serves as the mechanism for multilevel data storage in the individual nanocrystals, as demonstrated in this paper. This data storage platform has the potential to be extended to 2D and 3D memory for storage densities that could potentially approach petabyte/cm³ levels.
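
    A toy illustration of the multilevel idea (hypothetical numbers, not the authors' calibration): if the read-out photoluminescence amplitude can be resolved into, say, four distinguishable levels set by the UV-C write fluence, each recorded spot carries two bits instead of one.

      # Hypothetical 4-level (2 bits per spot) quantization of a normalized read-out signal.
      import bisect

      THRESHOLDS = [0.165, 0.495, 0.825]   # midpoints between four assumed signal levels

      def symbol_from_signal(signal):
          """Map a normalized photoluminescence reading to a 2-bit symbol (0-3)."""
          return bisect.bisect(THRESHOLDS, signal)

      print([symbol_from_signal(s) for s in (0.05, 0.30, 0.70, 0.95)])  # -> [0, 1, 2, 3]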

  19. Holographic Optical Data Storage

    NASA Technical Reports Server (NTRS)

    Timucin, Dogan A.; Downie, John D.; Norvig, Peter (Technical Monitor)

    2000-01-01

    Although the basic idea may be traced back to the earlier X-ray diffraction studies of Sir W. L. Bragg, the holographic method as we know it was invented by D. Gabor in 1948 as a two-step lensless imaging technique to enhance the resolution of electron microscopy, for which he received the 1971 Nobel Prize in physics. The distinctive feature of holography is the recording of the object phase variations that carry the depth information, which is lost in conventional photography where only the intensity (= squared amplitude) distribution of an object is captured. Since all photosensitive media necessarily respond to the intensity incident upon them, an ingenious way had to be found to convert object phase into intensity variations, and Gabor achieved this by introducing a coherent reference wave along with the object wave during exposure. Gabor's in-line recording scheme, however, required the object in question to be largely transmissive, and could provide only marginal image quality due to unwanted terms simultaneously reconstructed along with the desired wavefront. Further handicapped by the lack of a strong coherent light source, optical holography thus seemed fated to remain just another scientific curiosity, until the field was revolutionized in the early 1960s by some major breakthroughs: the proposition and demonstration of the laser principle, the introduction of off-axis holography, and the invention of volume holography. Consequently, the remainder of that decade saw an exponential growth in research on theory, practice, and applications of holography. Today, holography not only boasts a wide variety of scientific and technical applications (e.g., holographic interferometry for strain, vibration, and flow analysis, microscopy and high-resolution imagery, imaging through distorting media, optical interconnects, holographic optical elements, optical neural networks, three-dimensional displays, data storage, etc.), but has become a prominent…

  20. Storage Optimization of Educational System Data

    ERIC Educational Resources Information Center

    Boja, Catalin

    2006-01-01

    Methods used to minimize data file size are described, and indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the proposed problem objective: maximization or minimization of the optimum criterion that is…

  1. Trade-off study of data storage technologies

    NASA Technical Reports Server (NTRS)

    Kadyszewski, R. V.

    1977-01-01

    The need to store and retrieve large quantities of data at modest cost has generated the need for an economical, compact, archival mass storage system. Very significant improvements in the state of the art of mass storage systems have been accomplished through the development of a number of magnetic, electro-optical, and other related devices. This study was conducted to perform a trade-off between these data storage devices and the related technologies and to determine an optimum approach for an archival mass data storage system, based upon a comparison of the projected capabilities and characteristics of these devices to yield operational systems in the early 1980s.

  2. Scientific Data Storage for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Readey, J.

    2014-12-01

    Traditionally, data storage used for geophysical software systems has centered on file-based systems and libraries such as NetCDF and HDF5. In contrast, cloud-based infrastructure providers such as Amazon AWS, Microsoft Azure, and the Google Cloud Platform generally provide storage technologies based on an object-based storage service (for large binary objects) complemented by a database service (for small objects that can be represented as key-value pairs). These systems have been shown to be highly scalable, reliable, and cost-effective. We will discuss a proposed system that leverages these cloud-based storage technologies to provide an API-compatible library for traditional NetCDF and HDF5 applications. This system will enable cloud storage suitable for geophysical applications that can scale up to petabytes of data and thousands of users. We'll also cover other advantages of this system such as enhanced metadata search.
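
    A minimal sketch of the general pattern described above (not the authors' actual library; the bucket name, keys, and the use of boto3/S3 plus a plain dict standing in for a key-value service are illustrative assumptions): array chunks go into an object store while small metadata records go into a key-value store.

      # Illustrative pattern: array chunks in an object store, metadata as key-value records.
      # Running the final call requires AWS credentials and an existing bucket.
      import json
      import boto3
      import numpy as np

      s3 = boto3.client("s3")
      metadata_store = {}   # stand-in for a key-value database service

      def put_chunk(dataset, chunk_index, array):
          key = f"{dataset}/chunk-{chunk_index:06d}"
          s3.put_object(Bucket="example-science-data", Key=key, Body=array.tobytes())
          metadata_store[key] = json.dumps({"dtype": str(array.dtype), "shape": array.shape})
          return key

      put_chunk("sea_surface_temp", 0, np.zeros((180, 360), dtype="float32"))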

  3. Surface-Enhanced Raman Optical Data Storage system

    DOEpatents

    Vo-Dinh, T.

    1994-06-28

    An improved Surface-Enhanced Raman Optical Data Storage System (SERODS) is disclosed. In the improved system, entities capable of existing in multiple reversible states are present on the storage device. Such entities result in changed Surface-Enhanced Raman Scattering (SERS) when localized state changes are effected in less than all of the entities. Therefore, by changing the state of entities in localized regions of a storage device, the SERS emissions in such regions will be changed. When a write-on device is controlled by a data signal, such localized regions of changed SERS emissions will correspond to the data written on the device. The data may be read by illuminating the surface of the storage device with electromagnetic radiation of an appropriate frequency and detecting the corresponding SERS emissions. Data may be deleted by reversing the state changes of entities in regions where the data was initially written. In application, entities may be individual molecules, which allows for the writing of data at the molecular level. A read/write/delete head utilizing near-field quantum techniques can provide for a write/read/delete device capable of effecting state changes in individual molecules, thus providing for the effective storage of data at the molecular level. 18 figures.

  4. Surface-enhanced raman optical data storage system

    DOEpatents

    Vo-Dinh, Tuan

    1994-01-01

    An improved Surface-Enhanced Raman Optical Data Storage System (SERODS) is disclosed. In the improved system, entities capable of existing in multiple reversible states are present on the storage device. Such entities result in changed Surface-Enhanced Raman Scattering (SERS) when localized state changes are effected in less than all of the entities. Therefore, by changing the state of entities in localized regions of a storage device, the SERS emissions in such regions will be changed. When a write-on device is controlled by a data signal, such localized regions of changed SERS emissions will correspond to the data written on the device. The data may be read by illuminating the surface of the storage device with electromagnetic radiation of an appropriate frequency and detecting the corresponding SERS emissions. Data may be deleted by reversing the state changes of entities in regions where the data was initially written. In application, entities may be individual molecules, which allows for the writing of data at the molecular level. A read/write/delete head utilizing near-field quantum techniques can provide for a write/read/delete device capable of effecting state changes in individual molecules, thus providing for the effective storage of data at the molecular level.

  5. ICI optical data storage tape

    NASA Technical Reports Server (NTRS)

    Mclean, Robert A.; Duffy, Joseph F.

    1991-01-01

    Optical data storage tape is now a commercial reality. The world's first successful development of a digital optical tape system is complete. This is based on the Creo 1003 optical tape recorder with ICI 1012 write-once optical tape media. Several other optical tape drive development programs are underway, including one using the IBM 3480 style cartridge at LaserTape Systems. In order to understand the significance and potential of this step change in recording technology, it is useful to review the historical progress of optical storage. Optical storage has been slow to encroach on magnetic storage, and has not made any serious dent in the world's mountains of paper and microfilm. Some of the reasons for this are that applications developers, systems integrators, and end users need a long time to take advantage of the potential storage capacity; that access time and data transfer rate have traditionally been too slow for high-performance applications; and that optical disk media has been expensive compared with magnetic tape. ICI's strategy in response to these concerns was to concentrate its efforts on flexible optical media, in particular optical tape. The manufacturing achievements, media characteristics, and media lifetime of optical media are discussed.

  6. Dynamic-RAM Data Storage Unit

    NASA Technical Reports Server (NTRS)

    Sturman, J. C.

    1985-01-01

    Dynamic random-access-memory (RAM) data delay and storage unit developed to ensure data received from satellite is stored and not lost when satellite is not within range of ground station. Stores 256K of serial data, with independent read and write capability.

  7. Daily GRACE storage anomaly data for characterization of dynamic storage-discharge relationships of natural drainage basins

    NASA Astrophysics Data System (ADS)

    Sharma, D.; Patnaik, S.; Reager, J. T., II; Biswal, B.

    2017-12-01

    Despite the fact that streamflow occurs mainly due to depletion of storage, our knowledge of how a drainage basin stores and releases water is very limited because of measurement limitations. As a result, storage has largely remained an elusive entity in hydrological analysis and modelling. A window of opportunity, however, is given to us by the GRACE satellite mission, which provides storage anomaly (TWSA) data for the entire globe. Many studies have used TWSA data for storage-discharge analysis, uncovering a range of potential applications of TWSA data. Here we argue that the capability of the GRACE satellite mission has not been fully explored, as most of the studies in the past have performed storage-discharge analysis using monthly TWSA data for large river basins. With such coarse data we are quite unlikely to fully understand the variation of storage and discharge in space and time. In this study, we therefore use daily TWSA data for several mid-sized catchments and perform storage-discharge analysis. The daily storage-discharge relationship is highly dynamic, which generates a large amount of scatter in storage-discharge plots. Yet a careful analysis of those scatter plots reveals interesting information on the storage-discharge relationships of basins, particularly by looking at the relationships during individual recession events. It is observed that the storage-discharge relationship is exponential in nature, contrary to the general assumption that the relationship is linear. We find that there is a strong relationship between the power-law recession coefficient and initial storage (TWSA at the beginning of a recession event). Furthermore, appreciable relationships are observed between the recession coefficient and past TWSA values, implying that storage takes time to deplete completely. Overall, insights drawn from this study expand our knowledge of how discharge is dynamically linked to storage.
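
    One way to write down the relationships the abstract describes (the notation is ours, introduced only for illustration): an exponential storage-discharge law and a power-law recession whose coefficient depends on the storage at the start of the event,

      Q = a\,e^{bS}, \qquad -\frac{dQ}{dt} = k(S_0)\,Q^{\alpha},

    where Q is discharge, S is the GRACE-derived storage anomaly, S_0 is the anomaly at the beginning of the recession event, and a, b, k, and \alpha are fitted parameters.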

  8. Optical data storage and metallization of polymers

    NASA Technical Reports Server (NTRS)

    Roland, C. M.; Sonnenschein, M. F.

    1991-01-01

    The utilization of polymers as media for optical data storage offers many potential benefits and consequently has been widely explored. New developments in thermal imaging are described, wherein high resolution lithography is accomplished without thermal smearing. The emphasis was on the use of poly(ethylene terephthalate) film, which simultaneously serves as both the substrate and the data storage medium. Both physical and chemical changes can be induced by the application of heat and, thereby, serve as a mechanism for high resolution optical data storage in polymers. The extension of the technique to obtain high resolution selective metallization of poly(ethylene terephthalate) is also described.

  9. Stand-alone digital data storage control system including user control interface

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D. (Inventor); Gray, David L. (Inventor)

    1994-01-01

    A storage control system includes an apparatus and method for user control of a storage interface to operate a storage medium to store data obtained by a real-time data acquisition system. Digital data received in serial format from the data acquisition system is first converted to a parallel format and then provided to the storage interface. The operation of the storage interface is controlled in accordance with instructions based on user control input from a user. Also, a user status output is displayed in accordance with storage data obtained from the storage interface. By allowing the user to control and monitor the operation of the storage interface, a stand-alone, user-controllable data storage system is provided for storing the digital data obtained by a real-time data acquisition system.

  10. Multiplexed Holographic Data Storage in Bacteriorhodopsin

    NASA Technical Reports Server (NTRS)

    Mehrl, David J.; Krile, Thomas F.

    1997-01-01

    High density optical data storage, driven by the information revolution, remains at the forefront of current research areas. Much of the current research has focused on photorefractive materials (SBN and LiNbO3) and polymers, despite various problems with expense, durability, response time and retention periods. Photon echo techniques, though promising, are questionable due to the need for cryogenic conditions. Bacteriorhodopsin (BR) films are an attractive alternative recording medium. Great strides have been made in refining BR, and materials with storage lifetimes as long as 100 days have recently become available. The ability to deposit this robust polycrystalline material as high quality optical films suggests the use of BR as a recording medium for commercial optical disks. Our own recent research has demonstrated the suitability of BR films for real-time spatial filtering and holography. We propose to fully investigate the feasibility of performing holographic mass data storage in BR. Important aspects of the problem to be investigated include various data multiplexing techniques (e.g., angle-, amplitude-, and phase-encoded multiplexing, and in particular shift multiplexing), multilayer recording techniques, SLM selection, and data readout using crossed polarizers for noise rejection. Systems evaluations of storage parameters, including access times, memory refresh constraints, erasure, signal-to-noise ratios, and bit error rates, will be included in our investigations.

  11. Genomic big data hitting the storage bottleneck.

    PubMed

    Papageorgiou, Louis; Eleni, Picasi; Raftopoulou, Sofia; Mantaiou, Meropi; Megalooikonomou, Vasileios; Vlachakis, Dimitrios

    2018-01-01

    During the last decades, there has been a vast data explosion in bioinformatics. Big data centres are trying to face this data crisis, reaching high storage capacity levels. Although several scientific giants examine how to handle the enormous pile of information in their cupboards, the problem remains unsolved. On a daily basis, a massive quantity of information is permanently lost due to infrastructure and storage space problems. The motivation for sequencing has fallen behind. Sometimes, the time that is spent to solve storage space problems is longer than that dedicated to collecting and analysing data. To bring sequencing to the foreground, scientists have to overcome such obstacles and find alternative ways to approach the issue of data volume. The scientific community is experiencing a data crisis era, in which out-of-the-box solutions may ease the typical research workflow until technological development meets the needs of bioinformatics.

  12. The Analysis of RDF Semantic Data Storage Optimization in Large Data Era

    NASA Astrophysics Data System (ADS)

    He, Dandan; Wang, Lijuan; Wang, Can

    2018-03-01

    With the continuous development of information technology and network technology in China, the Internet has ushered in the era of large data. To acquire information effectively in the era of large data, it is necessary to optimize the existing RDF semantic data storage and enable effective querying of various kinds of data. This paper discusses the storage optimization of RDF semantic data under large data.

  13. Embedded optical interconnect technology in data storage systems

    NASA Astrophysics Data System (ADS)

    Pitwon, Richard C. A.; Hopkins, Ken; Milward, Dave; Muggeridge, Malcolm

    2010-05-01

    As both data storage interconnect speeds increase and form factors in hard disk drive technologies continue to shrink, the density of printed channels on the storage array midplane goes up. The dominant interconnect protocol on storage array midplanes is expected to increase to 12 Gb/s by 2012 thereby exacerbating the performance bottleneck in future digital data storage systems. The design challenges inherent to modern data storage systems are discussed and an embedded optical infrastructure proposed to mitigate this bottleneck. The proposed solution is based on the deployment of an electro-optical printed circuit board and active interconnect technology. The connection architecture adopted would allow for electronic line cards with active optical edge connectors to be plugged into and unplugged from a passive electro-optical midplane with embedded polymeric waveguides. A demonstration platform has been developed to assess the viability of embedded electro-optical midplane technology in dense data storage systems and successfully demonstrated at 10.3 Gb/s. Active connectors incorporate optical transceiver interfaces operating at 850 nm and are connected in an in-plane coupling configuration to the embedded waveguides in the midplane. In addition a novel method of passively aligning and assembling passive optical devices to embedded polymer waveguide arrays has also been demonstrated.

  14. New Trends of Digital Data Storage in DNA

    PubMed Central

    2016-01-01

    With the exponential growth in the amount of information generated and the emerging need for data to be stored for prolonged periods of time, there is a need for a storage medium with high capacity, high storage density, and the ability to withstand extreme environmental conditions. DNA emerges as a prospective medium for data storage with its striking features. Diverse encoding models for reading and writing data onto DNA, codes for encrypting data that address issues of error generation, and approaches for developing codons and storage styles have been developed over the recent past. DNA has been identified as a potential medium for secret writing, which paves the way towards DNA cryptography and steganography. DNA utilized as an organic memory device, along with big data storage and analytics in DNA, has paved the way towards DNA computing for solving computational problems. This paper critically analyzes the various methods used for encoding and encrypting data onto DNA while identifying the advantages and capability of every scheme to overcome the drawbacks identified previously. Cryptography and steganography techniques have been analyzed critically while identifying the limitations of each method. This paper also identifies the advantages and limitations of DNA as a memory device and memory applications. PMID:27689089

  15. New Trends of Digital Data Storage in DNA.

    PubMed

    De Silva, Pavani Yashodha; Ganegoda, Gamage Upeksha

    With the exponential growth in the amount of information generated and the emerging need for data to be stored for prolonged periods of time, there is a need for a storage medium with high capacity, high storage density, and the ability to withstand extreme environmental conditions. DNA emerges as a prospective medium for data storage with its striking features. Diverse encoding models for reading and writing data onto DNA, codes for encrypting data that address issues of error generation, and approaches for developing codons and storage styles have been developed over the recent past. DNA has been identified as a potential medium for secret writing, which paves the way towards DNA cryptography and steganography. DNA utilized as an organic memory device, along with big data storage and analytics in DNA, has paved the way towards DNA computing for solving computational problems. This paper critically analyzes the various methods used for encoding and encrypting data onto DNA while identifying the advantages and capability of every scheme to overcome the drawbacks identified previously. Cryptography and steganography techniques have been analyzed critically while identifying the limitations of each method. This paper also identifies the advantages and limitations of DNA as a memory device and memory applications.
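
    A deliberately simplistic illustration of one encoding idea mentioned above (a fixed two-bits-per-nucleotide map; the schemes analyzed in the paper add constraints such as homopolymer-run limits and error-correcting codes, which this sketch ignores):

      # Toy binary-to-DNA mapping: two bits per nucleotide.
      ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
      DECODE = {v: k for k, v in ENCODE.items()}

      def to_dna(data: bytes) -> str:
          bits = "".join(f"{byte:08b}" for byte in data)
          return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

      def from_dna(seq: str) -> bytes:
          bits = "".join(DECODE[base] for base in seq)
          return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

      assert from_dna(to_dna(b"data")) == b"data"
      print(to_dna(b"Hi"))  # -> CAGACGGC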

  16. Storage and retrieval of medical images from data warehouses

    NASA Astrophysics Data System (ADS)

    Tikekar, Rahul V.; Fotouhi, Farshad A.; Ragan, Don P.

    1995-11-01

    As our applications continue to become more sophisticated, the demand for more storage continues to rise. Hence many businesses are looking toward data warehousing technology to satisfy their storage needs. A warehouse is different from a conventional database and hence deserves a different approach when storing data that might be retrieved at a later point in time. In this paper we look at the problem of storing and retrieving medical image data from a warehouse. We regard the warehouse as a pyramid with fast storage devices at the top and slower storage devices at the bottom. Our approach is to store the most needed information abstract at the top of the pyramid and more detailed, storage-consuming data toward the bottom of the pyramid. This information is linked for browsing purposes. Similarly, during the retrieval of data, the user is given a sample representation of the detailed data with a browse option and, as required, more and more details are made available.

  17. Emerging Network Storage Management Standards for Intelligent Data Storage Subsystems

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    This paper discusses the need for intelligent storage devices and subsystems that can provide data integrity metadata; the content of the existing data integrity standard for optical disks; and techniques and metadata to verify stored data on optical tapes, developed by the Association for Information and Image Management (AIIM) Optical Tape Committee.

  18. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  19. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  20. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  1. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Specimen and data storage facilities... PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space shall be provided for archives, limited to access by authorized personnel only, for the storage and...

  2. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities’ clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.

  3. National assessment of geologic carbon dioxide storage resources: data

    USGS Publications Warehouse

    ,

    2013-01-01

    In 2012, the U.S. Geological Survey (USGS) completed the national assessment of geologic carbon dioxide storage resources. Its data and results are reported in three publications: the assessment data publication (this report), the assessment results publication (U.S. Geological Survey Geologic Carbon Dioxide Storage Resources Assessment Team, 2013a, USGS Circular 1386), and the assessment summary publication (U.S. Geological Survey Geologic Carbon Dioxide Storage Resources Assessment Team, 2013b, USGS Fact Sheet 2013–3020). This data publication supports the results publication and contains (1) individual storage assessment unit (SAU) input data forms with all input parameters and details on the allocation of the SAU surface land area by State and general land-ownership category; (2) figures representing the distribution of all storage classes for each SAU; (3) a table containing most input data and assessment result values for each SAU; and (4) a pairwise correlation matrix specifying geological and methodological dependencies between SAUs that are needed for aggregation of results.

  4. 28 CFR 115.289 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Community Confinement Facilities Data Collection and Review § 115.289 Data storage, publication, and destruction. (a) The agency shall ensure that data collected...

  5. 28 CFR 115.289 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Community Confinement Facilities Data Collection and Review § 115.289 Data storage, publication, and destruction. (a) The agency shall ensure that data collected...

  6. 28 CFR 115.289 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Community Confinement Facilities Data Collection and Review § 115.289 Data storage, publication, and destruction. (a) The agency shall ensure that data collected...

  7. A new tape product for optical data storage

    NASA Technical Reports Server (NTRS)

    Larsen, T. L.; Woodard, F. E.; Pace, S. J.

    1993-01-01

    A new tape product has been developed for optical data storage. Laser data recording is based on hole or pit formation in a low melting metallic alloy system. The media structure, sputter deposition process, and media characteristics, including write sensitivity, error rates, wear resistance, and archival storage are discussed.

  8. 28 CFR 115.389 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Juvenile Facilities Data Collection and Review § 115.389 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115...

  9. 28 CFR 115.389 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Juvenile Facilities Data Collection and Review § 115.389 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115...

  10. 28 CFR 115.89 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Adult Prisons and Jails Data Collection and Review § 115.89 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to...

  11. 28 CFR 115.89 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Adult Prisons and Jails Data Collection and Review § 115.89 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to...

  12. 28 CFR 115.89 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Adult Prisons and Jails Data Collection and Review § 115.89 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to...

  13. 28 CFR 115.389 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Juvenile Facilities Data Collection and Review § 115.389 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115...

  14. 28 CFR 115.189 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Lockups Data Collection and Review § 115.189 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115.187 are...

  15. 28 CFR 115.189 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Lockups Data Collection and Review § 115.189 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115.187 are...

  16. 28 CFR 115.189 - Data storage, publication, and destruction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Data storage, publication, and... ELIMINATION ACT NATIONAL STANDARDS Standards for Lockups Data Collection and Review § 115.189 Data storage, publication, and destruction. (a) The agency shall ensure that data collected pursuant to § 115.187 are...

  17. A study of mass data storage technology for rocket engine data

    NASA Technical Reports Server (NTRS)

    Ready, John F.; Benser, Earl T.; Fritz, Bernard S.; Nelson, Scott A.; Stauffer, Donald R.; Volna, William M.

    1990-01-01

    The results of a nine month study program on mass data storage technology for rocket engine (especially the Space Shuttle Main Engine) health monitoring and control are summarized. The program had the objective of recommending a candidate mass data storage technology development for rocket engine health monitoring and control and of formulating a project plan and specification for that technology development. The work was divided into three major technical tasks: (1) development of requirements; (2) survey of mass data storage technologies; and (3) definition of a project plan and specification for technology development. The first of these tasks reviewed current data storage technology and developed a prioritized set of requirements for the health monitoring and control applications. The second task included a survey of state-of-the-art and newly developing technologies and a matrix-based ranking of the technologies. It culminated in a recommendation of optical disk technology as the best candidate for technology development. The final task defined a proof-of-concept demonstration, including tasks required to develop, test, analyze, and demonstrate the technology advancement, plus an estimate of the level of effort required. The recommended demonstration emphasizes development of an optical disk system which incorporates an order-of-magnitude increase in writing speed above the current state of the art.

  18. Balloon-borne video cassette recorders for digital data storage

    NASA Technical Reports Server (NTRS)

    Althouse, W. E.; Cook, W. R.

    1985-01-01

    A high speed, high capacity digital data storage system was developed for a new balloon-borne gamma-ray telescope. The system incorporates economical consumer products: the portable video cassette recorder (VCR) and a relatively newer item - the digital audio processor. The in-flight recording system employs eight VCRs and will provide a continuous data storage rate of 1.4 megabits/sec throughout a 40 hour balloon flight. Data storage capacity is 25 gigabytes and power consumption is only 10 watts.

  19. Damsel: A Data Model Storage Library for Exascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Liao, Wei-keng

Computational science applications have been described as having one of seven motifs (the “seven dwarfs”), each having a particular pattern of computation and communication. From a storage and I/O perspective, these applications can also be grouped into a number of data model motifs describing the way data is organized and accessed during simulation, analysis, and visualization. Major storage data models developed in the 1990s, such as the Network Common Data Format (netCDF) and Hierarchical Data Format (HDF) projects, created support for more complex data models. Development of both netCDF and HDF5 was influenced by multi-dimensional dataset storage requirements, but their access models and formats were designed with sequential storage in mind (e.g., a POSIX I/O model). Although these and other high-level I/O libraries have had a beneficial impact on large parallel applications, they do not always attain a high percentage of peak I/O performance due to fundamental design limitations, and they do not address the full range of current and future computational science data models. The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. The project consists of three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community. The product of this project, the Damsel library, is openly available for download from http://cucis.ece.northwestern.edu/projects/DAMSEL. Several case studies and application programming

  20. Evolving Requirements for Magnetic Tape Data Storage Systems

    NASA Technical Reports Server (NTRS)

    Gniewek, John J.

    1996-01-01

Magnetic tape data storage systems have evolved in an environment where the major applications have been back-up/restore, disaster recovery, and long-term archive. Coincident with the rapidly improving price-performance of disk storage systems, the prime requirements for tape storage systems have remained: (1) low cost per MB, and (2) a data rate balanced to the remaining system components. Little emphasis was given to configuring the technology components to optimize retrieval of the stored data. Emerging new applications, such as network attached high speed memory (HSM) and digital libraries, place additional emphasis and requirements on the retrieval of the stored data. It is therefore desirable to define the system by both storage and retrieval requirements, i.e., as a STorage And Retrieval System (STARS). It is possible to provide a comparative performance analysis of different STARS by incorporating parameters related to (1) device characteristics and (2) application characteristics, in combination with queuing theory analysis. Results of these analyses are presented here in the form of response time as a function of system configuration for two different types of devices and for a variety of applications.

  1. Two-Level Verification of Data Integrity for Data Storage in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping

Data storage in cloud computing can save capital expenditure and relieve the burden of storage management for users. Because stored files may be lost or corrupted, many researchers focus on the verification of data integrity. However, massive numbers of users often bring large numbers of verification tasks to the auditor. Moreover, users also need to pay extra fees for these verification tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is to have users routinely verify data integrity themselves and to have the auditor arbitrate challenges between the user and the cloud provider according to the MACs and ϕ values. Extensive performance simulations show that the proposed scheme significantly decreases the auditor's verification tasks and the ratio of wrong arbitrations.
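A minimal sketch of the first (user-side) level of such MAC-based integrity checking, assuming HMAC-SHA256 as the tag function and hypothetical block contents; the paper's specific MAC construction and ϕ values are not reproduced here:

```python
import hmac
import hashlib

def make_tags(blocks: list[bytes], key: bytes) -> list[bytes]:
    """Compute one MAC tag per data block before uploading the blocks to the cloud."""
    return [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def user_level_check(blocks: list[bytes], tags: list[bytes], key: bytes) -> list[int]:
    """First-level (user-side) check: return indices of blocks whose MACs no longer match."""
    return [i for i, (b, t) in enumerate(zip(blocks, tags))
            if not hmac.compare_digest(hmac.new(key, b, hashlib.sha256).digest(), t)]

# Example: the user keeps the tags locally and periodically re-fetches blocks.
key = b"user-secret-key"
stored = [b"block-0 contents", b"block-1 contents"]
tags = make_tags(stored, key)
retrieved = [b"block-0 contents", b"block-1 CORRUPTED"]
suspect = user_level_check(retrieved, tags, key)
print(suspect)  # [1] -> only these blocks are escalated to the auditor for arbitration
```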

  2. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  3. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 33 2013-07-01 2013-07-01 false Specimen and data storage facilities..., for the storage and retrieval of all raw data and specimens from completed studies. ... SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  4. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 32 2014-07-01 2014-07-01 false Specimen and data storage facilities..., for the storage and retrieval of all raw data and specimens from completed studies. ... SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  5. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  6. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  7. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 32 2011-07-01 2011-07-01 false Specimen and data storage facilities..., for the storage and retrieval of all raw data and specimens from completed studies. ... SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  8. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  9. 21 CFR 58.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Specimen and data storage facilities. 58.51..., for the storage and retrieval of all raw data and specimens from completed studies. ... GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Facilities § 58.51 Specimen and data...

  10. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 33 2012-07-01 2012-07-01 false Specimen and data storage facilities..., for the storage and retrieval of all raw data and specimens from completed studies. ... SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  11. DataForge: Modular platform for data storage and analysis

    NASA Astrophysics Data System (ADS)

    Nozik, Alexander

    2018-04-01

DataForge is a framework for automated data acquisition, storage and analysis built on modern applied-programming practices. The aim of DataForge is to automate standard tasks such as parallel data processing, logging, output sorting and distributed computing. The framework also makes extensive use of declarative programming principles via a metadata concept, which allows a certain degree of metaprogramming and improves the reproducibility of results.

  12. Photographic memory: The storage and retrieval of data

    NASA Technical Reports Server (NTRS)

    Horton, J.

    1984-01-01

The concept of density encoding digital data in a mass-storage computer peripheral is proposed. This concept requires that digital data be encoded as distinguishable density levels (DDLs) of the film to be used as the storage medium. These DDLs are then recorded on the film in relatively large pixels. Retrieval of the data would be accomplished by scanning the photographic record using a relatively small aperture. Multiplexing of the pixels is used to store data of a range greater than the number of DDLs supportable by the film in question. Although a cartographic application is used as an example for the photographic storage of data, any digital data can be stored in a like manner. When the data is inherently spatially distributed, the aptness of the proposed scheme is even more evident. In such a case, human readability is an advantage which can be added to those mentioned earlier: speed of acquisition, ease of implementation, and cost effectiveness.
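A toy sketch of the pixel-multiplexing idea described above: a value whose range exceeds the number of DDLs is spread across several pixels, each carrying one base-N density digit. The specific numbers are illustrative assumptions, not figures from the record:

```python
def to_ddl_pixels(value: int, n_levels: int, n_pixels: int) -> list[int]:
    """Multiplex one large value across several pixels, each holding one base-n_levels digit."""
    digits = []
    for _ in range(n_pixels):
        digits.append(value % n_levels)
        value //= n_levels
    return digits[::-1]

def from_ddl_pixels(digits: list[int], n_levels: int) -> int:
    """Recombine the per-pixel density digits into the original value."""
    value = 0
    for d in digits:
        value = value * n_levels + d
    return value

# e.g., a film supporting 16 distinguishable density levels could hold 12-bit
# elevation values by multiplexing 3 pixels per value (16**3 = 4096 levels).
print(to_ddl_pixels(2748, 16, 3))          # [10, 11, 12]
print(from_ddl_pixels([10, 11, 12], 16))   # 2748
```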

  13. Proposal for massively parallel data storage system

    NASA Technical Reports Server (NTRS)

    Mansuripur, M.

    1992-01-01

    An architecture for integrating large numbers of data storage units (drives) to form a distributed mass storage system is proposed. The network of interconnected units consists of nodes and links. At each node there resides a controller board, a data storage unit and, possibly, a local/remote user-terminal. The links (twisted-pair wires, coax cables, or fiber-optic channels) provide the communications backbone of the network. There is no central controller for the system as a whole; all decisions regarding allocation of resources, routing of messages and data-blocks, creation and distribution of redundant data-blocks throughout the system (for protection against possible failures), frequency of backup operations, etc., are made locally at individual nodes. The system can handle as many user-terminals as there are nodes in the network. Various users compete for resources by sending their requests to the local controller-board and receiving allocations of time and storage space. In principle, each user can have access to the entire system, and all drives can be running in parallel to service the requests for one or more users. The system is expandable up to a maximum number of nodes, determined by the number of routing-buffers built into the controller boards. Additional drives, controller-boards, user-terminals, and links can be simply plugged into an existing system in order to expand its capacity.

  14. Balloon-borne video cassette recorders for digital data storage

    NASA Technical Reports Server (NTRS)

    Althouse, W. E.; Cook, W. R.

    1985-01-01

    A high-speed, high-capacity digital data storage system has been developed for a new balloon-borne gamma-ray telescope. The system incorporates sophisticated, yet easy to use and economical consumer products: the portable video cassette recorder (VCR) and a relatively newer item - the digital audio processor. The in-flight recording system employs eight VCRs and will provide a continuous data storage rate of 1.4 megabits/sec throughout a 40 hour balloon flight. Data storage capacity is 25 gigabytes and power consumption is only 10 watts.

  15. Damsel: A Data Model Storage Library for Exascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koziol, Quincey

    The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.

  16. Telemetry data storage systems technology for the Space Station Freedom era

    NASA Technical Reports Server (NTRS)

    Dalton, John T.

    1989-01-01

This paper examines the requirements and functions of the telemetry-data recording and storage systems, and the data-storage-system technology projected for the Space Station, with particular attention given to the Space Optical Disk Recorder, an on-board storage subsystem based on 160-gigabit erasable optical disk units, each capable of operating at 300 Mbits per second. Consideration is also given to storage systems for ground transport recording, which include systems for data capture, buffering, processing, and delivery on the ground. These can be categorized as first-in first-out storage, fast random-access storage, and slow access with staging. Based on projected mission manifests and data rates, worst-case requirements were developed for these three storage architecture functions. The results of the analysis are presented.
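A back-of-the-envelope check, using only the capacity and data-rate figures quoted in this record and assuming sustained writing at the full rate:

```python
# How long one 160-gigabit erasable optical disk unit lasts at a sustained 300 Mbit/s.
capacity_bits = 160e9
rate_bps = 300e6
seconds = capacity_bits / rate_bps
print(f"{seconds:.0f} s (~{seconds / 60:.1f} min) per disk unit at full rate")
# ~533 s, i.e. roughly 9 minutes of recording at the quoted 300 Mbit/s rate.
```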

  17. Cost-effective data storage/archival subsystem for functional PACS

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.; Kim, Yongmin

    1993-09-01

Not the least of the requirements of a workable PACS is the ability to store and archive vast amounts of information. A medium-size hospital will generate between 1 and 2 TBytes of data annually on a fully functional PACS. A high-speed image transmission network coupled with a comparably high-speed central data storage unit can make local memory and magnetic disks in the PACS workstations less critical and, in an extreme case, unnecessary. Under these circumstances, the capacity and performance of the central data storage subsystem and database are critical in determining the response time at the workstations, thus significantly affecting clinical acceptability. The central data storage subsystem not only needs to provide sufficient capacity to store about ten days' worth of images (five days' worth of new studies and, on average, about one comparison study for each new study), but must also supply images to the requesting workstation in a timely fashion. The database must provide fast retrieval upon users' requests for images. This paper analyzes the advantages and disadvantages of multiple parallel transfer disks versus RAID disks for the short-term central data storage subsystem, as well as an optical disk jukebox versus a digital tape recorder subsystem for long-term archive. Furthermore, an example high-performance, cost-effective storage subsystem which integrates RAID disks and a high-speed digital tape subsystem as a PACS data storage/archival unit is presented.

  18. Interactive Educational Multimedia: Coping with the Need for Increasing Data Storage.

    ERIC Educational Resources Information Center

    Malhotra, Yogesh; Erickson, Ranel E.

    1994-01-01

    Discusses the storage requirements for data forms used in interactive multimedia education and presently available storage devices. Highlights include characteristics of educational multimedia; factors determining data storage requirements; storage devices for video and audio needs; laserdiscs and videodiscs; compact discs; magneto-optical drives;…

  19. Portable and Error-Free DNA-Based Data Storage.

    PubMed

    Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica

    2017-07-10

    DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
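For illustration only, a toy sketch of address-prefixed DNA encoding that maps each 2-bit pair to one of the four bases; the run-length and GC-balance constraints and the deletion-correcting codes that the actual pipeline relies on are omitted, and the mapping itself is an assumption, not the authors' scheme:

```python
# Hypothetical 2-bits-per-base mapping; real systems add sequence constraints
# and error-correcting codes, which are omitted here.
BASE_FOR = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR = {v: k for k, v in BASE_FOR.items()}

def encode(payload: bytes, address: int, addr_bytes: int = 2) -> str:
    """Prefix an address (enabling random access) and map each 2-bit pair to a base."""
    data = address.to_bytes(addr_bytes, "big") + payload
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str, addr_bytes: int = 2) -> tuple[int, bytes]:
    """Recover the address and payload from an error-free strand."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR[base]
        out.append(byte)
    return int.from_bytes(out[:addr_bytes], "big"), bytes(out[addr_bytes:])

strand = encode(b"Hi", address=7)
print(strand)          # 16-base strand: 2 address bytes + 2 payload bytes
print(decode(strand))  # (7, b'Hi')
```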

  20. Commercial applications for optical data storage

    NASA Astrophysics Data System (ADS)

    Tas, Jeroen

    1991-03-01

    Optical data storage has spurred the market for document imaging systems. These systems are increasingly being used to electronically manage the processing, storage and retrieval of documents. Applications range from straightforward archives to sophisticated workflow management systems. The technology is developing rapidly and within a few years optical imaging facilities will be incorporated in most of the office information systems. This paper gives an overview of the status of the market, the applications and the trends of optical imaging systems.

  1. 40 CFR 160.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Specimen and data storage facilities. 160.51 Section 160.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS GOOD LABORATORY PRACTICE STANDARDS Facilities § 160.51 Specimen and data storage facilities. Space...

  2. Multiplexed Holographic Data Storage in Bacteriorhodopsin

    NASA Technical Reports Server (NTRS)

    Mehrl, David J.; Krile, Thomas F.

    1999-01-01

    Biochrome photosensitive films in particular Bacteriorhodopsin exhibit features which make these materials an attractive recording medium for optical data storage and processing. Bacteriorhodopsin films find numerous applications in a wide range of optical data processing applications; however the short-term memory characteristics of BR limits their applications for holographic data storage. The life-time of the BR can be extended using cryogenic temperatures [1], although this method makes the system overly complicated and unstable. Longer life-times can be provided in one modification of BR - the "blue" membrane BR [2], however currently available films are characterized by both low diffraction efficiency and difficulties in providing photoreversible recording. In addition, as a dynamic recording material, the BR requires different wavelengths for recording and reconstructing of optical data in order to prevent the information erasure during its readout. This fact also put constraints on a BR-based Optical Memory, due to information loss in holographic memory systems employing the two-lambda technique for reading-writing thick multiplexed holograms.

  3. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro

The increasingly large data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
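A minimal in-memory sketch of the kind of flat blob interface the record contrasts with POSIX file systems: objects are addressed by key only, with no hierarchy or permissions. The names and semantics here are assumptions, not any specific blob system's API:

```python
class BlobStore:
    """Minimal in-memory sketch of a flat blob (object) interface: no directories,
    no permissions, just put/get/delete keyed by an object ID."""

    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = bytes(data)

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def delete(self, key: str) -> None:
        self._objects.pop(key, None)

store = BlobStore()
store.put("sim/checkpoint/0001", b"raw particle state ...")
print(len(store.get("sim/checkpoint/0001")))
```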

  4. Holographic data storage crystals for LDEF (A0044)

    NASA Technical Reports Server (NTRS)

    Callen, W. R.; Gaylord, T. K.

    1984-01-01

Electro-optic holographic recording systems were developed. The spaceworthiness of electro-optic crystals for use in ultrahigh-capacity space data storage and retrieval systems is examined. The crystals for this experiment are included with the various electro-optical components of the LDEF experiment. The effects of long-duration exposure on active optical system components are investigated. The concept of data storage in an optical-phase holographic memory is illustrated.

  5. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
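A rough sketch, under assumed log and node structures, of the two steps the record describes: building an access correlation matrix from co-access counts in historical sessions, then heuristically spreading strongly co-accessed tiles across nodes to raise the chance of parallel access. The paper's actual algorithms are more elaborate:

```python
import itertools
from collections import defaultdict

def access_correlation(sessions: list[list[str]]) -> dict[tuple[str, str], int]:
    """Count how often two image tiles are requested in the same session (co-access)."""
    corr = defaultdict(int)
    for tiles in sessions:
        for a, b in itertools.combinations(sorted(set(tiles)), 2):
            corr[(a, b)] += 1
    return corr

def greedy_placement(tiles: list[str], corr, n_nodes: int) -> dict[str, int]:
    """Heuristic: place each tile on the node where it collides least with tiles
    it is frequently co-accessed with, so co-accessed tiles can be read in parallel."""
    placement, load = {}, [0] * n_nodes
    for t in tiles:
        def cost(node):
            co = sum(c for (a, b), c in corr.items()
                     if t in (a, b) and node in (placement.get(a), placement.get(b)))
            return load[node] + co
        node = min(range(n_nodes), key=cost)
        placement[t] = node
        load[node] += 1
    return placement

sessions = [["t1", "t2"], ["t1", "t2", "t3"], ["t3", "t4"]]
corr = access_correlation(sessions)
print(greedy_placement(["t1", "t2", "t3", "t4"], corr, n_nodes=2))
```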

  6. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  7. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  8. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  9. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  10. 10 CFR 1016.21 - Protection of Restricted Data in storage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Protection of Restricted Data in storage. 1016.21 Section 1016.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Physical Security § 1016.21 Protection of Restricted Data in storage. (a) Persons who possess Restricted Data...

  11. Ultra-high density optical data storage in common transparent plastics.

    PubMed

    Kallepalli, Deepak L N; Alshehri, Ali M; Marquez, Daniela T; Andrzejewski, Lukasz; Scaiano, Juan C; Bhardwaj, Ravi

    2016-05-25

The ever-increasing demand for high data storage capacity has spurred research on the development of innovative technologies and new storage materials. Conventional GByte optical discs (DVDs and Blu-ray) can be transformed into ultrahigh-capacity storage media by encoding multi-level and multiplexed information within the three-dimensional volume of a recording medium. However, in most cases the recording medium had to be photosensitive, requiring doping with photochromic molecules or nanoparticles in a multilayer stack or in the bulk material. Here, we show high-density data storage in commonly available plastics without any special material preparation. A pulsed laser was used to record data in micron-sized modified regions. Upon excitation by the read laser, each modified region emits fluorescence whose intensity represents 32 grey levels, corresponding to 5 bits. We demonstrate up to 20 layers of embedded data. By adjusting the read laser power and detector sensitivity, storage capacities up to 0.2 TBytes can be achieved in a standard 120 mm disc.
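As a purely illustrative sketch of 5-bit-per-spot encoding (32 distinguishable fluorescence levels), the following packs a byte stream into grey-level symbols and back; thresholding of measured intensities and layer addressing are not modeled:

```python
def pack_to_grey_levels(data: bytes, bits_per_spot: int = 5) -> list[int]:
    """Pack a byte stream into 5-bit symbols, one per laser-modified spot (32 grey levels)."""
    bitstring = "".join(f"{b:08b}" for b in data)
    bitstring += "0" * (-len(bitstring) % bits_per_spot)  # pad to a multiple of 5 bits
    return [int(bitstring[i:i + bits_per_spot], 2)
            for i in range(0, len(bitstring), bits_per_spot)]

def unpack_from_grey_levels(levels: list[int], n_bytes: int, bits_per_spot: int = 5) -> bytes:
    """Reassemble bytes from the recovered grey levels (assumes error-free readout)."""
    bitstring = "".join(f"{lvl:0{bits_per_spot}b}" for lvl in levels)
    return bytes(int(bitstring[i:i + 8], 2) for i in range(0, 8 * n_bytes, 8))

levels = pack_to_grey_levels(b"laser")
print(levels)                              # one value in 0..31 per recorded spot
print(unpack_from_grey_levels(levels, 5))  # b'laser'
```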

  12. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis?

    NASA Technical Reports Server (NTRS)

    Halem, M.; Shaffer, F.; Palm, N.; Salmon, E.; Raghavan, S.; Kempster, L.

    1998-01-01

This technology assessment of long-term high capacity data storage systems identifies an emerging crisis of severe proportions related to preserving important historical data in science, healthcare, manufacturing, finance and other fields. For the last 50 years, the information revolution, which has engulfed all major institutions of modern society, has centered on data - their collection, storage, retrieval, transmission, analysis and presentation. The transformation of long-term historical data records into information concepts, according to Drucker, is the next stage in this revolution towards building the new information-based scientific and business foundations. For this to occur, data survivability, reliability and evolvability of long-term storage media and systems pose formidable technological challenges. Unlike the Y2K problem, where the clock is ticking and a crisis is set to go off at a specific time, large capacity data storage repositories face a crisis similar to that of the social security system, in that the seriousness of the problem emerges only after a decade or two. The essence of the storage crisis is as follows: since it could take a decade to migrate a petabyte of data to new media for preservation, and the life expectancy of the storage media itself is only a decade, it may not be possible to complete the transfer before an irrecoverable data loss occurs. Over the last two decades, a number of anecdotal crises have occurred where vital scientific and business data were lost, or would have been lost if not for major expenditures of resources and funds to save the data, much like what is happening today to solve the Y2K problem. A prime example was the joint NASA/NSF/NOAA effort to rescue eight years' worth of TOVS/AVHRR data from an obsolete system, without which the valuable 20-year satellite record of global warming would not exist. Current storage system solutions to long-term data survivability rest on scalable architectures

  13. LVFS: A Big Data File Storage Bridge for the HPC Community

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.

    2015-12-01

Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and to accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures, and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture that are needed for the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and a near doubling of storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevents any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts HDF data, produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite to produce CO2 surface fluxes, into GeoTIFF for visualization.

  14. High-performance metadata indexing and search in petascale data storage systems

    NASA Astrophysics Data System (ADS)

    Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.

    2008-07-01

    Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.

  15. EMASS (tm): An expandable solution for NASA space data storage needs

    NASA Technical Reports Server (NTRS)

    Peterson, Anthony L.; Cardwell, P. Larry

    1992-01-01

The data acquisition, distribution, processing, and archiving requirements of NASA and other U.S. Government data centers present significant data management challenges that must be met in the 1990's. The Earth Observing System (EOS) project alone is expected to generate daily data volumes greater than 2 Terabytes (2 x 10(exp 12) Bytes). As the scientific community makes use of this data, their work product will result in larger, increasingly complex data sets to be further exploited and managed. The challenge for data storage systems is to satisfy the initial data management requirements with cost effective solutions that provide for planned growth. This paper describes the expandable architecture of the E-Systems Modular Automated Storage System (EMASS (TM)), a mass storage system which is designed to support NASA's data capture, storage, distribution, and management requirements into the 21st century.

  16. EMASS (trademark): An expandable solution for NASA space data storage needs

    NASA Technical Reports Server (NTRS)

    Peterson, Anthony L.; Cardwell, P. Larry

    1991-01-01

The data acquisition, distribution, processing, and archiving requirements of NASA and other U.S. Government data centers present significant data management challenges that must be met in the 1990's. The Earth Observing System (EOS) project alone is expected to generate daily data volumes greater than 2 Terabytes (2 x 10(exp 12) Bytes). As the scientific community makes use of this data, their work will result in larger, increasingly complex data sets to be further exploited and managed. The challenge for data storage systems is to satisfy the initial data management requirements with cost effective solutions that provide for planned growth. The expandable architecture of the E-Systems Modular Automated Storage System (EMASS(TM)), a mass storage system which is designed to support NASA's data capture, storage, distribution, and management requirements into the 21st century, is described.

  17. Environmental Data Store (EDS): A multi-node Data Storage Facility for diverse sets of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Piasecki, M.; Ji, P.

    2014-12-01

Geoscience data comes in many flavors determined by data type: continuous data on a grid or mesh, discrete data collected at points (either one-time samples or streams coming off sensors), and digital files of any type, such as text files, WORD or EXCEL documents, or audio and video files. We present a storage facility comprised of six nodes, each specialized to host a certain data type: grid-based data (netCDF on a THREDDS server), GIS data (shapefiles using GeoServer), point time series data (CUAHSI ODM), sample data (EDBS), and any digital data (RAMADDA), plus a server for remote sensing data and its products. While there is overlap in data type storage capabilities (rasters can go into several of these nodes), we prefer dedicated storage facilities that (a) are freeware, (b) have a good degree of maturity, and (c) have shown their utility for storing a certain type. In addition, this arrangement places these commonly used software stacks and storage solutions side by side so that interoperability strategies can be developed. We have used a DRUPAL-based system to handle user registration and authentication, and also use that system for data submission and data search. In support of this system we developed an extensive controlled vocabulary that is an amalgamation of various CVs used in the geoscience community, in order to achieve as high a degree of recognition as possible: the CF conventions, the CUAHSI CVs, NASA (GCMD), EPA and USGS taxonomies, and GEMET, in addition to ontological representations such as SWEET.

  18. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  19. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  20. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  1. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  2. 21 CFR 58.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Storage and retrieval of records and data. 58.190...) There shall be archives for orderly storage and expedient retrieval of all raw data, documentation... GENERAL GOOD LABORATORY PRACTICE FOR NONCLINICAL LABORATORY STUDIES Records and Reports § 58.190 Storage...

  3. Reliable data storage system design and implementation for acoustic logging while drilling

    NASA Astrophysics Data System (ADS)

    Hao, Xiaolong; Ju, Xiaodong; Wu, Xiling; Lu, Junqiang; Men, Baiyong; Yao, Yongchao; Liu, Dong

    2016-12-01

Owing to the limitations of real-time transmission, reliable downhole data storage and fast ground reading have become key technologies in developing tools for acoustic logging while drilling (LWD). In order to improve the reliability of the downhole storage system under conditions of high temperature, intense vibration and periodic power supply, improvements were made in both hardware and software. In hardware, we integrated the storage system and the data acquisition control module onto one circuit board, reducing the complexity of the storage process, by adopting a controller combination of a digital signal processor and a field programmable gate array. In software, we developed a systematic management strategy for reliable storage. Multiple-backup independent storage was employed to increase data redundancy. A traditional error checking and correction (ECC) algorithm was improved, and we embedded the calculated ECC code into all management data and waveform data. A real-time storage algorithm for arbitrary-length data was designed to actively preserve the storage scene and ensure the independence of the stored data. The recovery procedure for management data was optimized to realize reliable self-recovery. A new bad-block management approach of static block replacement and dynamic page marking was proposed to make the period of data acquisition and storage more balanced. In addition, we developed a portable ground data reading module based on a new reliable high-speed bus to Ethernet interface to achieve fast reading of the logging data. Experiments have shown that this system can work stably below 155 °C with a periodic power supply. The effective ground data reading rate reaches 1.375 Mbps with a 99.7% one-time success rate at room temperature. This work is of high practical significance for improving the reliability and field efficiency of acoustic LWD tools.
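The storage strategy above combines an improved ECC code with multiple-backup independent storage. The sketch below illustrates only the redundancy idea, with a bitwise majority vote over three stored copies; it is not the tool's actual ECC algorithm:

```python
def majority_vote(copies: list[bytes]) -> bytes:
    """Recover data from three independently stored copies by bitwise majority vote."""
    a, b, c = copies
    return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

original = b"\x5a\x3c\xff"
copy1 = b"\x5a\x3c\xff"
copy2 = b"\x5b\x3c\xff"   # one bit flipped in the first byte
copy3 = b"\x5a\x3c\xfe"   # one bit flipped in the last byte
print(majority_vote([copy1, copy2, copy3]) == original)  # True
```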

  4. Volume Holographic Storage of Digital Data Implemented in Photorefractive Media

    NASA Astrophysics Data System (ADS)

    Heanue, John Frederick

    A holographic data storage system is fundamentally different from conventional storage devices. Information is recorded in a volume, rather than on a two-dimensional surface. Data is transferred in parallel, on a page-by -page basis, rather than serially. These properties, combined with a limited need for mechanical motion, lead to the potential for a storage system with high capacity, fast transfer rate, and short access time. The majority of previous volume holographic storage experiments have involved direct storage and retrieval of pictorial information. Success in the development of a practical holographic storage device requires an understanding of the performance capabilities of a digital system. This thesis presents a number of contributions toward this goal. A description of light diffraction from volume gratings is given. The results are used as the basis for a theoretical and numerical analysis of interpage crosstalk in both angular and wavelength multiplexed holographic storage. An analysis of photorefractive grating formation in photovoltaic media such as lithium niobate is presented along with steady-state expressions for the space-charge field in thermal fixing. Thermal fixing by room temperature recording followed by ion compensation at elevated temperatures is compared to simultaneous recording and compensation at high temperature. In particular, the tradeoff between diffraction efficiency and incomplete Bragg matching is evaluated. An experimental investigation of orthogonal phase code multiplexing is described. Two unique capabilities, the ability to perform arithmetic operations on stored data pages optically, rather than electronically, and encrypted data storage, are demonstrated. A comparison of digital signal representations, or channel codes, is carried out. The codes are compared in terms of bit-error rate performance at constant capacity. A well-known one-dimensional digital detection technique, maximum likelihood sequence estimation, is

  5. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE PAGES

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    2018-04-17

Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to a 7x I/O performance improvement for scientific data.

  6. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to a 7x I/O performance improvement for scientific data.

  7. Low-cost high performance distributed data storage for multi-channel observations

    NASA Astrophysics Data System (ADS)

    Liu, Ying-bo; Wang, Feng; Deng, Hui; Ji, Kai-fan; Dai, Wei; Wei, Shou-lin; Liang, Bo; Zhang, Xiao-li

    2015-10-01

The New Vacuum Solar Telescope (NVST) is a 1-m solar telescope that aims to observe the fine structures in both the photosphere and the chromosphere of the Sun. The observational data acquired simultaneously from one channel for the chromosphere and two channels for the photosphere bring great challenges to the data storage of NVST. The multi-channel instruments of NVST, including scientific cameras and multi-band spectrometers, generate at least 3 terabytes of data per day and require high access performance while storing massive short-exposure images. It is worth studying and implementing a storage system for NVST which balances data availability, access performance and the cost of development. In this paper, we build a distributed data storage system (DDSS) for NVST and evaluate in depth the availability of real-time data storage on a distributed computing environment. The experimental results show that two factors, the number of concurrent reads/writes and the file size, are critically important for improving the performance of data access in a distributed environment. Referring to these two factors, three strategies for storing FITS files are presented and implemented to ensure the access performance of the DDSS under conditions of simultaneous multi-host writes and reads. Real applications of the DDSS prove that the system is capable of meeting the requirements of NVST real-time high performance observational data storage. Our study of the DDSS is the first attempt for modern astronomical telescope systems to store real-time observational data on a low-cost distributed system. The research results and corresponding techniques of the DDSS provide a new option for designing real-time massive astronomical data storage systems and will be a reference for future astronomical data storage.

  8. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
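A minimal sketch of the hash-comparison idea, assuming SHA-256 and a list-of-records stand-in for the dynamic database portion; the patent does not prescribe a specific hash function:

```python
import hashlib

def snapshot_hash(records: list[bytes]) -> str:
    """Hash a portion of a dynamic database (here, an ordered list of records)."""
    h = hashlib.sha256()
    for r in records:
        h.update(len(r).to_bytes(4, "big"))  # length-prefix so record boundaries matter
        h.update(r)
    return h.hexdigest()

# First hash at an initial moment in time...
portion = [b"row-1", b"row-2", b"row-3"]
first = snapshot_hash(portion)

# ...second hash at a subsequent moment; comparing the two verifies the portion is unchanged.
second = snapshot_hash(portion)
print("unchanged" if first == second else "modified")
```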

  9. Nanophase change for data storage applications.

    PubMed

    Shi, L P; Chong, T C

    2007-01-01

Phase change materials are widely used for data storage. The most widespread and important applications are rewritable optical discs and Phase Change Random Access Memory (PCRAM), which utilize light-induced and electrically induced phase change, respectively. For decades, miniaturization has been the major driving force behind increasing density. Now the working unit area of current data storage media is on the nano-scale. At the nano-scale, extreme dimensional and nano-structural constraints and the large proportion of interfaces cause the phase change behavior to deviate from that of the bulk. Hence an in-depth understanding of nanophase change and the related issues has become more and more important. Nanophase change can be defined as phase change at scales within the nano range of 100 nm, which is size-dependent, interface-dominated and dependent on the surrounding materials. Nanophase change can be classified into two groups, thin-film related and structure related. Film thickness and capping materials are key factors for the thin-film type, while structure shape, size and surrounding materials are critical parameters for the structure type. In this paper, the recent development of nanophase change is reviewed, including crystallization of small elements at nano size, thickness dependence of crystallization, the effect of the capping layer on the phase change of phase change thin films, and so on. The applications of nanophase change technology to data storage are introduced, including optical recording (superlattice-like optical discs, initialization-free discs, near-field and super-RENS recording, dual layer, multi-level, and probe storage) and PCRAM (superlattice-like, side edge, and line type structures). Future key research issues of nanophase change are also discussed.

  10. Antenna data storage concept for phased array radio astronomical instruments

    NASA Astrophysics Data System (ADS)

    Gunst, André W.; Kruithof, Gert H.

    2018-04-01

    Low frequency Radio Astronomy instruments like LOFAR and SKA-LOW use arrays of dipole antennas for the collection of radio signals from the sky. Due to the large number of antennas involved, the total data rate produced by all the antennas is enormous. Storage of the antenna data is both economically and technologically infeasible using the current state of the art storage technology. Therefore, real-time processing of the antenna voltage data using beam forming and correlation is applied to achieve a data reduction throughout the signal chain. However, most science could equally well be performed using an archive of raw antenna voltage data coming straight from the A/D converters instead of capturing and processing the antenna data in real time over and over again. Trends on storage and computing technology make such an approach feasible on a time scale of approximately 10 years. The benefits of such a system approach are more science output and a higher flexibility with respect to the science operations. In this paper we present a radically new system concept for a radio telescope based on storage of raw antenna data. LOFAR is used as an example for such a future instrument.

  11. Fast non-interferometric iterative phase retrieval for holographic data storage.

    PubMed

    Lin, Xiao; Huang, Yong; Shimura, Tsutomu; Fujimura, Ryushi; Tanaka, Yoshito; Endo, Masao; Nishimoto, Hajimu; Liu, Jinpeng; Li, Yang; Liu, Ying; Tan, Xiaodi

    2017-12-11

Fast non-interferometric phase retrieval is a very important technique for phase-encoded holographic data storage and other phase-based applications due to its easy implementation, simple system setup, and robust noise tolerance. Here we present an iterative non-interferometric phase retrieval method for 4-level phase-encoded holographic data storage based on an iterative Fourier transform algorithm and a known portion of the encoded data, which increases the storage code rate to twice that of an amplitude-based method. Only a single image at the Fourier plane of the beam is captured for the iterative reconstruction. Since the beam intensity at the Fourier plane of the reconstructed beam is more concentrated than in the reconstructed beam itself, the required diffraction efficiency of the recording medium is reduced, which significantly improves the effective dynamic range of the recording medium. The phase retrieval requires only 10 iterations to achieve a phase data error rate of less than 5%, which is successfully demonstrated by recording and reconstructing a test image experimentally. We believe our method will further advance the holographic data storage technique in the era of big data.
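A generic Gerchberg-Saxton-style sketch of iterative phase retrieval from a single Fourier-plane intensity image, with the known portion of the page enforced each iteration; this is an assumed, simplified stand-in for the authors' algorithm (their embedded-data layout and error-rate criteria are not modeled):

```python
import numpy as np

def retrieve_phase(fourier_intensity, known_mask, known_page, n_iter=10, seed=None):
    """Iterate between the measured Fourier magnitude and the object-plane constraints
    (unit amplitude everywhere, known complex values where the page is already known)."""
    rng = np.random.default_rng(seed)
    target_mag = np.sqrt(fourier_intensity)
    page = np.exp(1j * rng.uniform(0, 2 * np.pi, known_page.shape))  # random initial phase
    for _ in range(n_iter):
        spectrum = np.fft.fft2(page)
        spectrum = target_mag * np.exp(1j * np.angle(spectrum))  # enforce measured magnitude
        page = np.fft.ifft2(spectrum)
        page = np.exp(1j * np.angle(page))                       # enforce unit amplitude
        page[known_mask] = known_page[known_mask]                # enforce the known portion
    return np.angle(page)

# Toy example with a 4-level phase page (0, pi/2, pi, 3pi/2).
levels = np.pi / 2 * np.random.default_rng(0).integers(0, 4, (32, 32))
true_page = np.exp(1j * levels)
measured = np.abs(np.fft.fft2(true_page)) ** 2   # single Fourier-plane intensity image
mask = np.zeros((32, 32), bool)
mask[:4, :] = True                               # the "known portion" of the encoded data
estimated_phase = retrieve_phase(measured, mask, true_page, n_iter=10)
```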

  12. Analysis and Research on Spatial Data Storage Model Based on Cloud Computing Platform

    NASA Astrophysics Data System (ADS)

    Hu, Yong

    2017-12-01

In this paper, the data processing and storage characteristics of cloud computing are analyzed and studied. On this basis, a cloud computing data storage model based on a BP neural network is proposed. In this data storage model, the server cluster is selected according to the attributes of the data, yielding a spatial data storage model with load balancing that is feasible and offers practical application advantages.

  13. The challenge of a data storage hierarchy

    NASA Technical Reports Server (NTRS)

    Ruderman, Michael

    1992-01-01

    A discussion of Mesa Archival Systems' data archiving system is presented. This data archiving system is strictly a software system that is implemented on a mainframe and manages the data into permanent file storage. Emphasis is placed on the fact that any kind of client system on the network can be connected through the Unix interface of the data archiving system.

  14. Data Acquisition and Mass Storage

    NASA Astrophysics Data System (ADS)

    Vande Vyvre, P.

    2004-08-01

    The experiments performed at supercolliders will constitute a new challenge in several disciplines of High Energy Physics and Information Technology. This will definitely be the case for data acquisition and mass storage. The microelectronics, communication, and computing industries are maintaining an exponential increase of the performance of their products. The market of commodity products remains the largest and the most competitive market of technology products. This constitutes a strong incentive to use these commodity products extensively as components to build the data acquisition and computing infrastructures of the future generation of experiments. The present generation of experiments in Europe and in the US already constitutes an important step in this direction. The experience acquired in the design and the construction of the present experiments has to be complemented by a large R&D effort executed with good awareness of industry developments. The future experiments will also be expected to follow major trends of our present world: deliver physics results faster and become more and more visible and accessible. The present evolution of the technologies and the burgeoning of GRID projects indicate that these trends will be made possible. This paper includes a brief overview of the technologies currently used for the different tasks of the experimental data chain: data acquisition, selection, storage, processing, and analysis. The major trends of the computing and networking technologies are then indicated with particular attention paid to their influence on the future experiments. Finally, the vision of future data acquisition and processing systems and their promise for future supercolliders is presented.

  15. Data storage systems technology for the Space Station era

    NASA Technical Reports Server (NTRS)

    Dalton, John; Mccaleb, Fred; Sos, John; Chesney, James; Howell, David

    1987-01-01

    The paper presents the results of an internal NASA study to determine if economically feasible data storage solutions are likely to be available to support the ground data transport segment of the Space Station mission. An internal NASA effort to prototype a portion of the required ground data processing system is outlined. It is concluded that the requirements for all ground data storage functions can be met with commercial disk and tape drives assuming conservative technology improvements and that, to meet Space Station data rates with commercial technology, the data will have to be distributed over multiple devices operating in parallel and in a sustained maximum throughput mode.

  16. PIMS Data Storage, Access, and Neural Network Processing

    NASA Technical Reports Server (NTRS)

    McPherson, Kevin M.; Moskowitz, Milton E.

    1998-01-01

    The Principal Investigator Microgravity Services (PIMS) project at NASA's Lewis Research Center has supported microgravity science Principal Investigators (PIs) by processing, analyzing, and storing the acceleration environment data recorded on the NASA Space Shuttles and the Russian Mir space station. The acceleration data recorded in support of the microgravity science investigations on these platforms have been generated in discrete blocks totaling approximately 48 gigabytes for the Orbiter missions and 50 gigabytes for the Mir increments. Based on the anticipated volume of acceleration data resulting from continuous or nearly continuous operations, the International Space Station (ISS) presents a unique set of challenges regarding the storage of and access to microgravity acceleration environment data. This paper presents potential microgravity environment data storage, access, and analysis concepts for the ISS era.

  17. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis

    NASA Technical Reports Server (NTRS)

    Halem, M.; Shaffer, F.; Palm, N.; Salmon, E.; Raghavan, S.; Kempster, L.

    1998-01-01

    The density of digital storage media in our information-intensive society increases by a factor of four every three years, while the rate at which this data can be migrated to viable long-term storage has been increasing by a factor of only four every nine years. Meanwhile, older data stored on increasingly obsolete media are at considerable risk. When the systems for which the media were designed are no longer serviced by their manufacturers (many of whom are out of business), the data will no longer be accessible. In some cases, older media suffer from a physical breakdown of components: tapes simply lose their magnetic properties after a long time in storage. The scale of the crisis is comparable to that facing the Social Security System. Greater financial and intellectual resources must be devoted to the development and refinement of new storage media and migration technologies in order to preserve as much data as possible.

  18. Multilevel recording of complex amplitude data pages in a holographic data storage system using digital holography.

    PubMed

    Nobukawa, Teruyoshi; Nomura, Takanori

    2016-09-05

    A holographic data storage system using digital holography is proposed to record and retrieve multilevel complex amplitude data pages. Digital holographic techniques are capable of modulating and detecting complex amplitude distribution using current electronic devices. These techniques allow the development of a simple, compact, and stable holographic storage system that mainly consists of a single phase-only spatial light modulator and an image sensor. As a proof-of-principle experiment, complex amplitude data pages with binary amplitude and four-level phase are recorded and retrieved. Experimental results show the feasibility of the proposed holographic data storage system.

  19. 40 CFR 792.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....190 Storage and retrieval of records and data. (a) All raw data, documentation, records, protocols... 40 Protection of Environment 33 2012-07-01 2012-07-01 false Storage and retrieval of records and data. 792.190 Section 792.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  20. 40 CFR 792.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....190 Storage and retrieval of records and data. (a) All raw data, documentation, records, protocols... 40 Protection of Environment 32 2011-07-01 2011-07-01 false Storage and retrieval of records and data. 792.190 Section 792.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  1. 40 CFR 792.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....190 Storage and retrieval of records and data. (a) All raw data, documentation, records, protocols... 40 Protection of Environment 33 2013-07-01 2013-07-01 false Storage and retrieval of records and data. 792.190 Section 792.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  2. 40 CFR 792.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ....190 Storage and retrieval of records and data. (a) All raw data, documentation, records, protocols... 40 Protection of Environment 32 2014-07-01 2014-07-01 false Storage and retrieval of records and data. 792.190 Section 792.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  3. Disk storage management for LHCb based on Data Popularity estimator

    NASA Astrophysics Data System (ADS)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
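
    The paper's exact features and estimators are not reproduced in this record; the sketch below only illustrates the pattern it describes, namely fitting a regression on past usage to predict future popularity and turning the prediction into a disk-replica recommendation. The usage matrix, thresholds, and replica rule are all hypothetical.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical usage history: one row per dataset, accesses in the last four quarters.
    usage = np.array([[120, 80, 40, 10],
                      [5, 7, 6, 8],
                      [300, 280, 350, 400],
                      [0, 0, 1, 0]])
    next_quarter = np.array([4, 9, 420, 0])  # accesses observed one quarter later

    # Predict future popularity from past usage.
    model = GradientBoostingRegressor(random_state=0).fit(usage, next_quarter)
    predicted = model.predict(usage)

    # Toy recommendation rule: unpopular datasets stay on tape only,
    # popular ones get more disk replicas.
    for pred in predicted:
        replicas = 0 if pred < 1 else min(4, 1 + int(np.log10(pred + 1)))
        print(f"predicted accesses {pred:7.1f} -> {replicas} disk replica(s)")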

  4. Holographic memory for high-density data storage and high-speed pattern recognition

    NASA Astrophysics Data System (ADS)

    Gu, Claire

    2002-09-01

    As computers and the internet become faster and faster, more and more information is transmitted, received, and stored every day. The demand for high-density, fast-access data storage is pushing scientists and engineers to explore all possible approaches, including magnetic, mechanical, and optical. Optical data storage has already demonstrated its potential in the competition against other storage technologies. CD and DVD are showing their advantages in the computer and entertainment market. What motivated the use of optical waves to store and access information is the same as the motivation for optical communication. Light, or an optical wave, has an enormous capacity (or bandwidth) to carry information because of its short wavelength and parallel nature. In optical storage, there are two types of mechanism, namely localized and holographic memories. What gives holographic data storage an advantage over localized bit storage is the natural ability to read the stored information in parallel, therefore meeting the demand for fast access. Another unique feature that makes holographic data storage attractive is that it is capable of performing associative recall at an incomparable speed. Therefore, volume holographic memory is particularly suitable for high-density data storage and high-speed pattern recognition. In this paper, we review previous works on volume holographic memories and discuss the challenges for this technology to become a reality.

  5. Recent Advances of Flexible Data Storage Devices Based on Organic Nanoscaled Materials.

    PubMed

    Zhou, Li; Mao, Jingyu; Ren, Yi; Han, Su-Ting; Roy, Vellaisamy A L; Zhou, Ye

    2018-03-01

    Following the trend of miniaturization as per Moore's law, and facing the strong demand of next-generation electronic devices that should be highly portable, wearable, transplantable, and lightweight, growing endeavors have been made to develop novel flexible data storage devices possessing nonvolatile ability, high-density storage, high-switching speed, and reliable endurance properties. Nonvolatile organic data storage devices including memory devices on the basis of floating-gate, charge-trapping, and ferroelectric architectures, as well as organic resistive memory are believed to be favorable candidates for future data storage applications. In this Review, typical information on device structure, memory characteristics, device operation mechanisms, mechanical properties, challenges, and recent progress of the above categories of flexible data storage devices based on organic nanoscaled materials is summarized. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Eternal 5D data storage by ultrafast laser writing in glass

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Čerkauskaitė, A.; Drevinskas, R.; Patel, A.; Beresna, M.; Kazansky, P. G.

    2016-03-01

    Securely storing large amounts of information over relatively short timescales of 100 years, comparable to the span of the human memory, is a challenging problem. Conventional optical data storage technology used in CDs and DVDs has reached capacities of hundreds of gigabits per square inch, but its lifetime is limited to a decade. DNA based data storage can hold hundreds of terabytes per gram, but the durability is limited. The major challenge is the lack of appropriate combination of storage technology and medium possessing the advantages of both high capacity and long lifetime. The recording and retrieval of the digital data with a nearly unlimited lifetime was implemented by femtosecond laser nanostructuring of fused quartz. The storage allows unprecedented properties including hundreds of terabytes per disc data capacity, thermal stability up to 1000 °C, and virtually unlimited lifetime at room temperature opening a new era of eternal data archiving.

  7. Achievements in optical data storage and retrieval

    NASA Technical Reports Server (NTRS)

    Nelson, R. H.; Shuman, C. A.

    1977-01-01

    The present paper deals with the current achievements in two technology efforts, one of which is a wideband holographic recorder which uses multichannel recording of data in the form of holograms on roll film for storage and retrieval of large unit records at hundreds of megabits per second. The second effort involves a system (termed DIGIMEN) which uses binary spot recording on photographic film in the form of microfiche to provide a mass storage capability with automatic computer-controlled random access to stored records. Some potential design improvements are noted.

  8. Managing high-bandwidth real-time data storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bigelow, David D.; Brandt, Scott A; Bent, John M

    2009-09-23

    There exist certain systems which generate real-time data at high bandwidth, but do not necessarily require the long-term retention of that data in normal conditions. In some cases, the data may not actually be useful, and in others, there may be too much data to permanently retain in long-term storage whether it is useful or not. However, certain portions of the data may be identified as being vitally important from time to time, and must therefore be retained for further analysis or permanent storage without interrupting the ongoing collection of new data. We have developed a system, Mahanaxar, intended to address this problem. It provides quality of service guarantees for incoming real-time data streams and simultaneous access to already-recorded data on a best-effort basis utilizing any spare bandwidth. It has built in mechanisms for reliability and indexing, can scale upwards to meet increasing bandwidth requirements, and handles both small and large data elements equally well. We will show that a prototype version of this system provides better performance than a flat file (traditional filesystem) based version, particularly with regard to quality of service guarantees and hard real-time requirements.

  9. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    NASA Astrophysics Data System (ADS)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

    The two major issues for cloud storage systems are data reliability and storage costs. For data reliability protection, the multi-replica replication strategy used in most current clouds incurs huge storage consumption, leading to a large storage cost for applications in the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back cloud storage consumption. PRCR ensures the data reliability of large cloud datasets with minimum replication, which can also serve as a cost-effective benchmark for replication. The evaluation shows that, compared with the standard three-replica approach, PRCR can cut cloud storage consumption to one third or less, thereby considerably reducing the cloud storage cost.

  10. Research Data Storage: A Framework for Success. ECAR Working Group Paper

    ERIC Educational Resources Information Center

    Blair, Douglas; Dawson, Barbara E.; Fary, Michael; Hillegas, Curtis W.; Hopkins, Brian W.; Lyons, Yolanda; McCullough, Heather; McMullen, Donald F.; Owen, Kim; Ratliff, Mark; Williams, Harry

    2014-01-01

    The EDUCAUSE Center for Analysis and Research Data Management Working Group (ECAR-DM) has created a framework for research data storage as an aid for higher education institutions establishing and evaluating their institution's research data storage efforts. This paper describes areas for consideration and suggests graduated criteria to assist in…

  11. Using RFID to Enhance Security in Off-Site Data Storage

    PubMed Central

    Lopez-Carmona, Miguel A.; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R.

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system’s benefits in terms of efficiency and failure prevention. PMID:22163638

  12. Using RFID to enhance security in off-site data storage.

    PubMed

    Lopez-Carmona, Miguel A; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system's benefits in terms of efficiency and failure prevention.

  13. A Hybrid Multilevel Storage Architecture for Electric Power Dispatching Big Data

    NASA Astrophysics Data System (ADS)

    Yan, Hu; Huang, Bibin; Hong, Bowen; Hu, Jing

    2017-10-01

    Electric power dispatching is the center of the whole power system. Over a long period of operation, the power dispatching center has accumulated a large amount of data. These data are now stored in different specialized power systems and form many isolated islands of information. Integrating these data and performing comprehensive analysis can greatly improve the intelligence of power dispatching. In this paper, a hybrid multilevel storage architecture for electric power dispatching big data is proposed. It introduces a relational database and a NoSQL database to establish a power grid panoramic data center, effectively meeting the storage needs of power dispatching big data, including the unified storage of structured and unstructured data, fast access to massive real-time data, data version management, and so on. It can serve as a solid foundation for subsequent in-depth analysis of power dispatching big data.

  14. Towards Efficient Scientific Data Management Using Cloud Storage

    NASA Technical Reports Server (NTRS)

    He, Qiming

    2013-01-01

    A software prototype allows users to back up and restore data to/from both public and private cloud storage such as Amazon's S3 and NASA's Nebula. Unlike other off-the-shelf tools, this software ensures user data security in the cloud (through encryption) and minimizes users' operating costs by using space- and bandwidth-efficient compression and incremental backup. Parallel data processing utilities have also been developed by using massively scalable cloud computing in conjunction with cloud storage. One of the innovations in this software is using modified open-source components to work with a private cloud like NASA Nebula. Another innovation is porting the complex backup-to-cloud software to embedded Linux, running on home networking devices, in order to benefit more users.
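
    The prototype's code is not quoted in this record, so the sketch below only shows, under stated assumptions, how incremental backup to an S3-compatible store can combine content hashing (to skip unchanged files), compression, and client-side encryption. The bucket name, key layout, and the use of boto3 and Fernet are illustrative choices, not the tool's actual design.

    import hashlib, json, os, zlib
    import boto3
    from cryptography.fernet import Fernet

    BUCKET = "my-backup-bucket"             # hypothetical bucket name
    MANIFEST = "backup_manifest.json"       # local record of previously uploaded content

    s3 = boto3.client("s3")                 # private clouds can be reached via endpoint_url
    fernet = Fernet(Fernet.generate_key())  # in practice the key would be stored securely

    def backup(paths):
        manifest = json.load(open(MANIFEST)) if os.path.exists(MANIFEST) else {}
        for path in paths:
            data = open(path, "rb").read()
            digest = hashlib.sha256(data).hexdigest()
            if manifest.get(path) == digest:
                continue                                # incremental: unchanged file, skip
            blob = fernet.encrypt(zlib.compress(data))  # compress, then encrypt client-side
            s3.put_object(Bucket=BUCKET, Key="backup/" + digest, Body=blob)
            manifest[path] = digest
        json.dump(manifest, open(MANIFEST, "w"))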

  15. Development of climate data storage and processing model

    NASA Astrophysics Data System (ADS)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes on local and regional scales. The model is based on a «shared nothing» distributed computing architecture and assumes a computing network in which each node is independent and self-sufficient. Each node runs dedicated software for the processing and visualization of geospatial data and provides programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data are represented by collections of netCDF files stored in a hierarchy of directories within a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed according to the data storage and processing model. It contains descriptions of the space-time features of the datasets available for processing, their locations, as well as descriptions and run options of the software components for data analysis and visualization. Together, the model and the metadata database will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.
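
    The paper describes the metadata database only in general terms (space-time features, locations, and run options per dataset); the minimal SQLite sketch below is one possible layout of such a catalogue, with table and column names assumed for illustration.

    import sqlite3

    conn = sqlite3.connect("climate_metadata.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS datasets (
        id         INTEGER PRIMARY KEY,
        name       TEXT,            -- dataset collection name
        variable   TEXT,            -- physical variable stored in the netCDF files
        lon_min REAL, lon_max REAL, -- spatial coverage
        lat_min REAL, lat_max REAL,
        time_start TEXT, time_end TEXT,
        node_url   TEXT             -- computing node that holds the files
    );
    """)

    # Find every dataset covering a point of interest in a given period.
    rows = conn.execute(
        "SELECT name, node_url FROM datasets "
        "WHERE lon_min <= ? AND lon_max >= ? AND lat_min <= ? AND lat_max >= ? "
        "AND time_start <= ? AND time_end >= ?",
        (85.0, 85.0, 56.5, 56.5, "2000-01-01", "2010-12-31")).fetchall()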

  16. Random access in large-scale DNA data storage.

    PubMed

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.

  17. Review of ultra-high density optical storage technologies for big data center

    NASA Astrophysics Data System (ADS)

    Hao, Ruan; Liu, Jie

    2016-10-01

    In big data centers, optical storage technologies have many advantages, such as energy saving and a long lifetime. However, improving the storage density of optical storage remains a huge challenge. Multilayer optical storage technology may be a good candidate for big data centers in the years to come. Because the number of layers is primarily limited by the transmission of each layer, the largest capacities of multilayer discs are around 1 TB/disc and 10 TB/cartridge. Holographic data storage (HDS) is a volumetric approach, but its storage capacity is also strictly limited by the diffractive nature of light. For a holographic disc with a total thickness of 1.5 mm, the potential capacities are not more than 4 TB/disc and 40 TB/cartridge. In recent years, the development of super-resolution optical storage technology has attracted increasing attention. Super-resolution photoinduction-inhibition nanolithography (SPIN) technology with a 9 nm feature size and 52 nm two-line resolution was reported 3 years ago. However, turning this exciting principle into a real storage system is a huge challenge. It can be expected that, in the future, capacities of 10 TB/disc and 100 TB/cartridge can be achieved. More importantly, by breaking the diffraction limit of light, SPIN technology will open the door to steady improvements in optical storage capacity to meet the needs of the developing big data center.

  18. Using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.

    2016-12-01

    Cloud-based infrastructure may offer several key benefits, including scalability, built-in redundancy, and reduced total cost of ownership compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not designed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided by all the leading public (Amazon Web Services, Microsoft Azure, Google Cloud, etc.) and private (OpenStack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective, but shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.

  19. dCache: Big Data storage for HEP communities and beyond

    NASA Astrophysics Data System (ADS)

    Millar, A. P.; Behrmann, G.; Bernardt, C.; Fuhrmann, P.; Litvintsev, D.; Mkrtchyan, T.; Petersen, A.; Rossi, A.; Schwank, K.

    2014-06-01

    With over ten years in production use, the dCache data storage system has evolved to match the continually evolving landscape of storage technologies with new solutions to both existing problems and new challenges. In this paper, we present three areas of innovation in dCache: providing efficient access to data with NFS v4.1 pNFS, adoption of CDMI and WebDAV as an alternative to SRM for managing data, and integration with alternative authentication mechanisms.

  20. Groundwater Change in Storage Estimation by Using Monitoring Wells Data

    NASA Astrophysics Data System (ADS)

    Flores, C. I.

    2016-12-01

    In recent times, remarkable attention has been given to models and data in hydrology, regarding their role in meeting water management requirements and enabling well-informed decisions. Water management under the Sustainable Groundwater Management Act (SGMA) is currently challenging because it requires groundwater sustainability agencies (GSAs) to formulate groundwater sustainability plans (GSPs) that comply with new regulations and manage California's groundwater resources responsibly, particularly under drought and climate change conditions. In this scenario, water budgets and estimates of change in groundwater storage are key inputs for decision makers, but their computation is often difficult, lengthy, and uncertain. Therefore, this work presents an innovative approach to integrate hydrologic modeling and available groundwater data into a single simplified tool, a proxy function, that estimates the change in storage in real time from monitoring-well data. A hydrologic model, the Yolo County IWFM, was developed and calibrated for water years 1970 to 2015 and applied to generate the proxy as a case study by regressing simulated change in storage against change in head for the cities of Davis and Woodland area, obtaining a linear function of head variations over time. The proxy was then applied to actual groundwater data in this region to predict the change in storage. Results from this work provide proxy functions that approximate change in storage from monitoring data at daily, monthly, and yearly timescales, and that are easily transferable to any spreadsheet or database to perform simple yet crucial computations in real time for sustainable groundwater management.
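
    A proxy of this kind is essentially a regression of simulated change in storage on change in head. The short numpy sketch below illustrates that idea with synthetic numbers, which are not Yolo County data.

    import numpy as np

    # Hypothetical monthly series from a calibrated model run: simulated change in
    # storage (acre-feet) and average change in head (feet) at the monitoring wells.
    delta_head = np.array([-1.2, -0.8, 0.3, 1.1, 0.6, -0.4])
    delta_storage = np.array([-5200.0, -3400.0, 1300.0, 4800.0, 2500.0, -1700.0])

    # Fit the proxy: delta_storage ~ slope * delta_head + intercept.
    slope, intercept = np.polyfit(delta_head, delta_storage, 1)

    # Apply the proxy to newly observed monitoring-well data in real time.
    observed_delta_head = 0.45
    estimate = slope * observed_delta_head + intercept
    print(f"estimated change in storage: {estimate:.0f} acre-feet")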

  1. 40 CFR 160.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... retained. (b) There shall be archives for orderly storage and expedient retrieval of all raw data... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Storage and retrieval of records and data. 160.190 Section 160.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  2. 40 CFR 160.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... retained. (b) There shall be archives for orderly storage and expedient retrieval of all raw data... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Storage and retrieval of records and data. 160.190 Section 160.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  3. 40 CFR 160.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... retained. (b) There shall be archives for orderly storage and expedient retrieval of all raw data... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Storage and retrieval of records and data. 160.190 Section 160.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  4. 40 CFR 160.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... retained. (b) There shall be archives for orderly storage and expedient retrieval of all raw data... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Storage and retrieval of records and data. 160.190 Section 160.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED...

  5. Digital Library Storage using iRODS Data Grids

    NASA Astrophysics Data System (ADS)

    Hedges, Mark; Blanke, Tobias; Hasan, Adil

    Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g., audio-visual material). The Rule Engine built into iRODS is used to integrate complex data-level workflows that need not be visible to users, e.g., digital preservation functionality.

  6. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.

  7. Long-term data storage in diamond

    PubMed Central

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A.

    2016-01-01

    The negatively charged nitrogen vacancy (NV−) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV− optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging on the singular dynamics of NV− ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center’s charge state and the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV− ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed present technologies. PMID:27819045

  8. Long-term data storage in diamond.

    PubMed

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A

    2016-10-01

    The negatively charged nitrogen vacancy (NV−) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV− optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging on the singular dynamics of NV− ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center's charge state and the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV− ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed present technologies.

  9. Multiferroic composites for magnetic data storage beyond the super-paramagnetic limit

    NASA Astrophysics Data System (ADS)

    Vopson, M. M.; Zemaityte, E.; Spreitzer, M.; Namvar, E.

    2014-09-01

    Ultra-high-density magnetic data storage requires magnetic grains of <5 nm diameter. The thermal stability of such small magnetic grains demands materials with very large magneto-crystalline anisotropy, which makes the data write process almost impossible, even when Heat Assisted Magnetic Recording (HAMR) technology is deployed. Here, we propose an alternative method of strengthening the thermal stability of the magnetic grains via elasto-mechanical coupling between the magnetic data storage layer and a piezo-ferroelectric substrate. Using a Stoner-Wohlfarth single-domain model, we show that correct tuning of this coupling can increase the effective magneto-crystalline anisotropy of the magnetic grains, making them stable beyond the super-paramagnetic limit. However, the effective magnetic anisotropy can also be lowered or even switched off during the write process by simply altering the voltage applied to the substrate. Based on these effects, we propose two magnetic data storage protocols, one of which could potentially replace HAMR technology, with both schemes promising unprecedented increases in data storage areal density beyond the super-paramagnetic size limit.
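
    The argument rests on the standard thermal-stability criterion K_u·V/(k_B·T) of roughly 60 for decade-scale retention; the short calculation below (not taken from the paper) shows why grains below 5 nm demand anisotropies of several MJ/m^3, which conventional write fields cannot switch.

    import math

    k_B = 1.380649e-23          # Boltzmann constant, J/K
    T = 350.0                   # K, assumed drive operating temperature
    stability_factor = 60.0     # K_u * V / (k_B * T) needed for ~10-year retention

    d = 5e-9                            # grain diameter, m
    V = math.pi / 6 * d ** 3            # volume of a spherical grain
    K_u_required = stability_factor * k_B * T / V
    print(f"required anisotropy: {K_u_required / 1e6:.1f} MJ/m^3")  # roughly 4-5 MJ/m^3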

  10. A privacy-preserving solution for compressed storage and selective retrieval of genomic data

    PubMed Central

    Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S.; Molyneaux, Adam; Xu, Zhenyu; Hubaux, Jean-Pierre

    2016-01-01

    In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients’ complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. PMID:27789525

  11. Collection, storage, retrieval, and publication of water-resources data

    USGS Publications Warehouse

    Showen, C. R.

    1978-01-01

    This publication represents a series of papers devoted to the subject of collection, storage, retrieval, and publication of hydrologic data. The papers were presented by members of the U.S. Geological Survey at the International Seminar on Organization and Operation of Hydrologic Services, Ottawa, Canada, July 15-16, 1976, sponsored by the World Meteorological Organization. The first paper, ' Standardization of Hydrologic Measurements, ' by George F. Smoot discusses the need for standardization of the methods and instruments used in measuring hydrologic data. The second paper, ' Use of Earth Satellites for Automation of Hydrologic Data Collection, ' by Richard W. Paulson discusses the use of inexpensive battery-operated radios to transmit realtime hydrologic data to earth satellites and back to ground receiving stations for computer processing. The third paper, ' Operation Hydrometeorological Data-Collection System for the Columbia River, ' by Nicholas A. Kallio discusses the operation of a complex water-management system for a large river basin utilizing the latest automatic telemetry and processing devices. The fourth paper, ' Storage and Retrieval of Water-Resources Data, ' by Charles R. Showen discusses the U.S. Geological Survey 's National Water Data Storage and Retrieval System (WATSTORE) and its use in processing water resources data. The final paper, ' Publication of Water Resources Data, ' by S. M. Lang and C. B. Ham discusses the requirement for publication of water-resources data to meet the needs of a widespread audience and for archival purposes. (See W78-09324 thru W78-09328) (Woodard-USGS)

  12. Biophotopol: A Sustainable Photopolymer for Holographic Data Storage Applications

    PubMed Central

    Ortuño, Manuel; Gallego, Sergi; Márquez, Andrés; Neipp, Cristian; Pascual, Inmaculada; Beléndez, Augusto

    2012-01-01

    Photopolymers have proved to be useful for different holographic applications such as holographic data storage or holographic optical elements. However, most photopolymers have certain undesirable features, such as the toxicity of some of their components or their low environmental compatibility. For this reason, the Holography and Optical Processing Group at the University of Alicante developed a new dry photopolymer with low toxicity and high thickness called biophotopol, which is very adequate for holographic data storage applications. In this paper we describe our recent studies on biophotopol and the main characteristics of this material. PMID:28817008

  13. Biophotopol: A Sustainable Photopolymer for Holographic Data Storage Applications.

    PubMed

    Ortuño, Manuel; Gallego, Sergi; Márquez, Andrés; Neipp, Cristian; Pascual, Inmaculada; Beléndez, Augusto

    2012-05-02

    Photopolymers have proved to be useful for different holographic applications such as holographic data storage or holographic optical elements. However, most photopolymers have certain undesirable features, such as the toxicity of some of their components or their low environmental compatibility. For this reason, the Holography and Optical Processing Group at the University of Alicante developed a new dry photopolymer with low toxicity and high thickness called biophotopol, which is very adequate for holographic data storage applications. In this paper we describe our recent studies on biophotopol and the main characteristics of this material.

  14. Mass storage systems for data transport in the early space station era 1992-1998

    NASA Technical Reports Server (NTRS)

    Carper, Richard (Editor); Dalton, John (Editor); Healey, Mike (Editor); Kempster, Linda (Editor); Martin, John (Editor); Mccaleb, Fred (Editor); Sobieski, Stanley (Editor); Sos, John (Editor)

    1987-01-01

    NASA's Space Station Program will provide a vehicle to deploy an unprecedented number of data producing experiments and operational devices. Peak downlink data rates are expected to be in the 500 megabit per second range and the daily data volume could reach 2.4 terabytes. Such startling requirements inspired an internal NASA study to determine whether economically viable data storage solutions are likely to be available to support the Ground Data Transport segment of the NASA data system. To derive the requirements for data storage subsystems, several alternative data transport architectures were identified with different degrees of decentralization. Data storage operations at each subsystem were categorized based on access time and retrieval functions, and reduced to the following types of subsystems: first-in first-out (FIFO) storage, fast random access storage, and slow access with staging. The study showed that industry-funded magnetic and optical storage technology has a reasonable probability of meeting these requirements. There are, however, system-level issues that need to be addressed in the near term.

  15. Decision feedback equalizer for holographic data storage.

    PubMed

    Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo

    2018-05-20

    Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
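
    The paper's reliability factor is not reproduced in this record; the sketch below only shows the generic structure of a decision feedback equalizer applied to a 2D page: subtract the interference estimated from already-decided neighbouring pixels, then threshold. The feedback coefficients and threshold are hypothetical.

    import numpy as np

    def dfe_detect(page, feedback=(0.1, 0.1), threshold=0.5):
        """Detect a binary data page corrupted by 2D inter-symbol interference.

        page      : received intensity image (2D array)
        feedback  : assumed ISI contribution of the left and upper neighbours
        threshold : decision threshold for the equalized sample
        """
        rows, cols = page.shape
        decisions = np.zeros((rows, cols), dtype=int)
        f_left, f_up = feedback
        for r in range(rows):
            for c in range(cols):
                est = page[r, c]
                # Cancel interference already explained by past (decided) pixels.
                if c > 0:
                    est -= f_left * decisions[r, c - 1]
                if r > 0:
                    est -= f_up * decisions[r - 1, c]
                decisions[r, c] = 1 if est > threshold else 0
        return decisions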

  16. An open-source data storage and visualization back end for experimental data.

    PubMed

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert; Nielsen, Jane H; Chorkendorff, Ib

    2014-04-01

    In this article, a flexible, free, and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. A presentation component for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. The logging of data from independent clients leads to high resilience to equipment failure, whereas the central storage of data dramatically eases backup and data exchange. The visualization front end allows direct monitoring of acquired data to see live progress of long-duration experiments. This enables the user to alter experimental conditions based on these data and to interfere with the experiment if needed. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status of long-duration experiments, and implementation of instant alarms in the event of failure.
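
    The article's own code base is not quoted here; the generic sketch below shows the pattern it describes, an independent logging client that periodically samples an instrument and pushes timestamped readings to a central store. A local SQLite file stands in for the central data server, and the instrument driver is a placeholder.

    import random, sqlite3, time

    def read_pressure_gauge():
        # Placeholder for a real instrument driver; returns a pressure in mbar.
        return 1e-6 * (1 + 0.05 * random.random())

    conn = sqlite3.connect("central_store.db")   # stands in for the central data server
    conn.execute("CREATE TABLE IF NOT EXISTS measurements "
                 "(channel TEXT, timestamp REAL, value REAL)")

    # Independent client loop: failure of one client does not affect the others.
    for _ in range(10):
        conn.execute("INSERT INTO measurements VALUES (?, ?, ?)",
                     ("chamber_pressure", time.time(), read_pressure_gauge()))
        conn.commit()
        time.sleep(1)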

  17. Data on subsurface storage of liquid waste near Pensacola, Florida, 1963-1980

    USGS Publications Warehouse

    Hull, R.W.; Martin, J.B.

    1982-01-01

    Since 1963, when industrial waste was first injected into the subsurface in northwest Florida, considerable data have been collected relating to the geochemistry of subsurface waste storage. This report presents hydrogeologic data on two subsurface storage systems near Pensacola, Fla., which inject liquid industrial waste through deep wells into a saline aquifer. Injection sites are described giving a history of well construction, injection, and testing; geologic data from cores and grab samples; hydrographs of injection rates, volume, pressure, and water levels; and chemical and physical data from water-quality samples collected from injection and monitor wells. (USGS)

  18. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage.

    PubMed

    Guo, Yeting; Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-04-13

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query.
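
    ENSURE's actual protocol is more involved (work is offloaded to an edge server), but the toy sketch below shows the underlying idea of searching without revealing plaintext keywords: documents are encrypted client-side, and the index maps keyed hashes of keywords to document identifiers, so whoever holds the index matches opaque tokens rather than words. Key handling and index placement are simplified assumptions.

    import hmac, hashlib
    from collections import defaultdict
    from cryptography.fernet import Fernet

    index_key = b"client-secret-index-key"     # known only to the mobile client
    fernet = Fernet(Fernet.generate_key())     # data-encryption key, also client-side

    def token(word):
        # Keyed hash of a keyword: the index holder never sees the word itself.
        return hmac.new(index_key, word.encode(), hashlib.sha256).hexdigest()

    documents = {"doc1": "sensor log alpha", "doc2": "maintenance log beta"}

    # Client side: encrypt documents and build an index of keyword tokens.
    encrypted_store = {d: fernet.encrypt(t.encode()) for d, t in documents.items()}
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.split():
            index[token(word)].add(doc_id)

    # Query: only the token is revealed, never the keyword "log".
    hits = index[token("log")]
    print(sorted(fernet.decrypt(encrypted_store[d]).decode() for d in hits))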

  19. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage

    PubMed Central

    Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-01-01

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist the risk. However, deploying a high complexity encryption and decryption algorithm on mobile devices will greatly increase the burden of terminal operation and the difficulty to implement the necessary privacy protection algorithm. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload the computation intensive task onto the edge server to achieve a high efficiency. Besides, to protect data security, it reduces the information acquisition of untrusted cloud by hiding the relevance between query keyword and search results from the cloud. Experiments on a real data set show that ENSURE reduces the computation time by 15% to 49% and saves the energy consumption by 38% to 69% per query. PMID:29652810

  20. Multibit data storage states formed in plasma-treated MoS₂ transistors.

    PubMed

    Chen, Mikai; Nam, Hongsuk; Wi, Sungjin; Priessnitz, Greg; Gunawan, Ivan Manuel; Liang, Xiaogan

    2014-04-22

    New multibit memory devices are desirable for improving data storage density and computing speed. Here, we report that multilayer MoS2 transistors, when treated with plasmas, can dramatically serve as low-cost, nonvolatile, highly durable memories with binary and multibit data storage capability. We have demonstrated binary and 2-bit/transistor (or 4-level) data states suitable for year-scale data storage applications as well as 3-bit/transistor (or 8-level) data states for day-scale data storage. This multibit memory capability is hypothesized to be attributed to plasma-induced doping and ripple of the top MoS2 layers in a transistor, which could form an ambipolar charge-trapping layer interfacing the underlying MoS2 channel. This structure could enable the nonvolatile retention of charged carriers as well as the reversible modulation of polarity and amount of the trapped charge, ultimately resulting in multilevel data states in memory transistors. Our Kelvin force microscopy results strongly support this hypothesis. In addition, our research suggests that the programming speed of such memories can be improved by using nanoscale-area plasma treatment. We anticipate that this work would provide important scientific insights for leveraging the unique structural property of atomically layered two-dimensional materials in nanoelectronic applications.

  1. A privacy-preserving solution for compressed storage and selective retrieval of genomic data.

    PubMed

    Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S; Molyneaux, Adam; Xu, Zhenyu; Fellay, Jacques; Steinmetz, Lars M; Hubaux, Jean-Pierre

    2016-12-01

    In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. © 2016 Huang et al.; Published by Cold Spring Harbor Laboratory Press.

  2. Educational outreach at the NSF Engineering Research Center for Data Storage Systems

    NASA Astrophysics Data System (ADS)

    Williams, James E., Jr.

    1996-07-01

    An aspect of the National Science Foundation Engineering Research Center in Data Storage Systems (DSSC) program that is valued by our sponsors is the way we use our different educational programs to have a positive impact on the data storage industry. The most common way to teach data storage materials is in classes offered as part of the Carnegie Mellon curriculum. Another way the DSSC educates students is through outreach programs such as the NSF Research Experiences for Undergraduates and Young Scholars programs, both of which have been very successful and place emphasis on including women, underrepresented minorities, and disabled students. The Center has also established cooperative outreach partnerships which serve both to educate students and to benefit the industry. One example is the cooperative program we have had with the Magnetics Technology Centre at the National University of Singapore to help strengthen their research and educational efforts to benefit U.S. data storage companies with plants in Singapore. In addition, the Center has started a program to help train outstanding students from technical institutes to increase their value as technicians to the data storage industry when they graduate.

  3. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    NASA Astrophysics Data System (ADS)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon by a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on StorNext shared-file-system software in the Chang'E-3 mission, and the system's performance fully meets the data storage requirements of the Miyun ground station. The StorNext file system is a high-performance shared file system that allows multiple servers running different operating systems to access the file system at the same time, and it supports access to data over a variety of topologies, such as SAN and LAN. StorNext focuses on data protection and big data management; Quantum has announced that more than 70,000 licenses of the StorNext file system have been sold worldwide, and its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 satellite downlink data and the management of local data storage. The station mainly carries out exploration mission management and the receiving and management of observation data, and it provides comprehensive, centralized monitoring and control of the data receiving equipment. The ground station applied the SAN storage network system based on StorNext shared software to receive and manage data reliably. The computer system at the Miyun ground station is composed of operational servers, application workstations, and storage equipment, so the storage system needs a shared file system that supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s; thus the network throughput of the file system must be no less than 240 MB/s. At the same time, the maximum size of each data file is up to 810 GB. The storage system as planned requires that 10 nodes simultaneously write data to the file system through 16

  4. Eigenmode multiplexing with SLM for volume holographic data storage

    NASA Astrophysics Data System (ADS)

    Chen, Guanghao; Miller, Bo E.; Takashima, Yuzuru

    2017-08-01

    The cavity supports orthogonal reference beam families as its eigenmodes while enhancing the reference beam power. These orthogonal eigenmodes are used as an additional degree of freedom to multiplex data pages and consequently increase storage densities for volume Holographic Data Storage Systems (HDSS) when the maximum number of multiplexed data pages is limited by geometrical factors. Image-bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at multiple Bragg angles, using Liquid Crystal on Silicon (LCOS) spatial light modulators (SLMs) in the reference arms. A total of nine holograms are recorded, with three angular positions and three eigenmodes.

  5. Bacteriorhodopsin films for optical signal processing and data storage

    NASA Technical Reports Server (NTRS)

    Walkup, John F. (Principal Investigator); Mehrl, David J. (Principal Investigator)

    1996-01-01

    This report summarizes the research results obtained under NASA Ames Grant NAG 2-878, entitled 'Investigations of Bacteriorhodopsin Films for Optical Signal Processing and Data Storage.' Specifically, we performed research at Texas Tech University on applications of bacteriorhodopsin films to both (1) dynamic spatial filtering and (2) holographic data storage. In addition, measurements of the noise properties of an acousto-optical matrix-vector multiplier built for NASA Ames by Photonic Systems Inc. were performed at NASA Ames' Photonics Laboratory. This research resulted in two papers presented at major optical data processing conferences and a journal paper to appear in APPLIED OPTICS. A new proposal for additional BR research has recently been submitted to NASA Ames Research Center.

  6. RAIN: A Bio-Inspired Communication and Data Storage Infrastructure.

    PubMed

    Monti, Matteo; Rasmussen, Steen

    2017-01-01

    We summarize the results and perspectives from a companion article, where we presented and evaluated an alternative architecture for data storage in distributed networks. We name the bio-inspired architecture RAIN, and it offers a file storage service that, in contrast with current centralized cloud storage, has privacy by design, is open source, is more secure, is scalable, is more sustainable, has community ownership, is inexpensive, and is potentially faster, more efficient, and more reliable. We propose that a RAIN-style architecture could form the backbone of the Internet of Things, which will likely integrate multiple current and future infrastructures ranging from online services and cryptocurrency to parts of government administration.

  7. From the surface to volume: concepts for the next generation of optical-holographic data-storage materials.

    PubMed

    Bruder, Friedrich-Karl; Hagen, Rainer; Rölle, Thomas; Weiser, Marc-Stephan; Fäcke, Thomas

    2011-05-09

    Optical data storage has had a major impact on daily life since its introduction to the market in 1982. Compact discs (CDs), digital versatile discs (DVDs), and Blu-ray discs (BDs) are universal data-storage formats with the advantage that the reading and writing of the digital data does not require contact and is therefore wear-free. These formats allow convenient and fast data access, high transfer rates, and electricity-free data storage with low overall archiving costs. The driving force for development in this area is the constant need for increased data-storage capacity and transfer rate. The use of holographic principles for optical data storage is an elegant way to increase the storage capacity and the transfer rate, because by this technique the data can be stored in the volume of the storage material and, moreover, it can be optically processed in parallel. This Review describes the fundamental requirements for holographic data-storage materials and compares the general concepts for the materials used. An overview of the performance of current read-write devices shows how far holographic data storage has already been developed. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Encrypted holographic data storage based on orthogonal-phase-code multiplexing.

    PubMed

    Heanue, J F; Bashaw, M C; Hesselink, L

    1995-09-10

    We describe an encrypted holographic data-storage system that combines orthogonal-phase-code multiplexing with a random-phase key. The system offers the security advantages of random-phase coding but retains the low cross-talk performance and the minimum code storage requirements typical in an orthogonal-phase-code-multiplexing system.

  9. Low latency and persistent data storage

    DOEpatents

    Fitch, Blake G; Franceschini, Michele M; Jagmohan, Ashish; Takken, Todd E

    2014-02-18

    Persistent data storage is provided by a method that includes receiving a low latency store command that includes write data. The write data is written to a first memory device that is implemented by a nonvolatile solid-state memory technology characterized by a first access speed. It is acknowledged that the write data has been successfully written to the first memory device. The write data is written to a second memory device that is implemented by a volatile memory technology. At least a portion of the data in the first memory device is written to a third memory device when a predetermined amount of data has been accumulated in the first memory device. The third memory device is implemented by a nonvolatile solid-state memory technology characterized by a second access speed that is slower than the first access speed.
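
    The tiered write path described in the claim can be summarized in a few lines of Python. The sketch below is an illustration only; the Device and TieredStore classes and the threshold value are hypothetical stand-ins for the three memory devices, not code from the patent.

      # Minimal sketch of the tiered write path described above.
      # Device names, classes, and the threshold are hypothetical placeholders.
      class Device:
          def __init__(self, name):
              self.name = name
              self.blocks = []

          def write(self, data):
              self.blocks.append(data)

      class TieredStore:
          def __init__(self, destage_threshold):
              self.fast_nvm = Device("fast nonvolatile memory")   # first memory device
              self.dram = Device("volatile memory")                # second memory device
              self.slow_nvm = Device("slow nonvolatile memory")    # third memory device
              self.destage_threshold = destage_threshold

          def low_latency_store(self, write_data):
              # 1. Write to the fast nonvolatile device and acknowledge immediately.
              self.fast_nvm.write(write_data)
              ack = True
              # 2. Also stage the data in volatile memory.
              self.dram.write(write_data)
              # 3. Destage accumulated data to the slower nonvolatile device
              #    once a predetermined amount has built up.
              if len(self.fast_nvm.blocks) >= self.destage_threshold:
                  self.slow_nvm.blocks.extend(self.fast_nvm.blocks)
                  self.fast_nvm.blocks.clear()
              return ack

      store = TieredStore(destage_threshold=4)
      for i in range(10):
          store.low_latency_store(f"record-{i}")
      print(len(store.slow_nvm.blocks), "blocks destaged")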

  10. Cavity enhanced eigenmode multiplexing for volume holographic data storage

    NASA Astrophysics Data System (ADS)

    Miller, Bo E.; Takashima, Yuzuru

    2017-08-01

    Previously, we proposed and experimentally demonstrated enhanced recording speeds by using a resonant optical cavity to semi-passively increase the reference beam power while recording image bearing holograms. In addition to enhancing the reference beam power the cavity supports the orthogonal reference beam families of its eigenmodes, which can be used as a degree of freedom to multiplex data pages and increase storage densities for volume Holographic Data Storage Systems (HDSS). While keeping the increased recording speed of a cavity enhanced reference arm, image bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at two Bragg angles for expedited recording of four multiplexed holograms. We experimentally confirmed write rates are enhanced by an average factor of 1.1, and page crosstalk is about 2.5%. This hybrid multiplexing opens up a pathway to increase storage density while minimizing modifications to current angular multiplexing HDSS.

  11. Confidential storage and transmission of medical image data.

    PubMed

    Norcen, R; Podesser, M; Pommer, A; Schmidt, H-P; Uhl, A

    2003-05-01

    We discuss computationally efficient techniques for confidential storage and transmission of medical image data. Two types of partial encryption techniques based on AES are proposed. The first encrypts a subset of bitplanes of plain image data whereas the second encrypts parts of the JPEG2000 bitstream. We find that encrypting between 20% and 50% of the visual data is sufficient to provide high confidentiality.
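
    As an illustration of the first technique (bitplane-selective encryption), the sketch below encrypts only the most significant bitplanes of an 8-bit image with AES in CTR mode and leaves the remaining planes in the clear. It assumes a NumPy image array and the Python "cryptography" package and is not the authors' implementation; protecting the two to four top planes corresponds roughly to the 20-50% figure quoted above.

      # Sketch, assuming an 8-bit grayscale image as a NumPy array and the
      # "cryptography" package for AES-CTR; not the paper's implementation.
      import os
      import numpy as np
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      def encrypt_bitplanes(image, planes, key, nonce):
          """Encrypt only the selected bitplanes (e.g. [7, 6] = two MSB planes)."""
          mask = np.uint8(sum(1 << p for p in planes))
          selected = image & mask                      # bits to protect
          remainder = image & ~mask                    # bits left in plain form
          cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))
          enc = cipher.encryptor()
          encrypted = np.frombuffer(
              enc.update(selected.tobytes()) + enc.finalize(), dtype=np.uint8
          ).reshape(image.shape)
          # Keep only the selected planes of the ciphertext, recombine with the rest.
          return (encrypted & mask) | remainder

      key, nonce = os.urandom(32), os.urandom(16)
      img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      protected = encrypt_bitplanes(img, planes=[7, 6], key=key, nonce=nonce)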

  12. Vector and Raster Data Storage Based on Morton Code

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.

    2018-05-01

    Even though geomatics is highly developed nowadays, the integration of spatial data in vector and raster formats is still a tricky problem in geographic information system environments, and there is still no satisfactory way to solve it. This article proposes a method for jointly interpreting vector and raster data. We saved image data and building vector data of Guilin University of Technology to an Oracle database, used the ADO interface to connect the database to Visual C++, and converted the row and column numbers of the raster data and the X-Y coordinates of the vector data to Morton codes in the Visual C++ environment. This method stores vector and raster data in an Oracle database and uses Morton codes, instead of row/column numbers and X-Y coordinates, to mark the position information of both vector and raster data. Using Morton codes to index geographic information makes fuller use of storage space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful in situations that require analysing or displaying vector and raster data at the same time.
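
    The core of the scheme is the Morton (Z-order) code, which interleaves the bits of a column/row or X/Y pair into a single one-dimensional key. A minimal sketch of that interleaving is shown below; it is not the paper's Visual C++/ADO implementation.

      # Sketch: interleaving the bits of an (x, y) pair into a Morton (Z-order)
      # code, the indexing scheme the paper stores in the Oracle database.
      def morton_encode(x, y, bits=16):
          code = 0
          for i in range(bits):
              code |= ((x >> i) & 1) << (2 * i)       # even bit positions take x
              code |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions take y
          return code

      def morton_decode(code, bits=16):
          x = y = 0
          for i in range(bits):
              x |= ((code >> (2 * i)) & 1) << i
              y |= ((code >> (2 * i + 1)) & 1) << i
          return x, y

      # A raster cell (row 3, col 5) and a vector vertex quantized to the same
      # grid map to the same one-dimensional key space:
      print(morton_encode(5, 3))        # 27
      print(morton_decode(27))          # (5, 3)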

  13. Efficient and secure outsourcing of genomic data storage.

    PubMed

    Sousa, João Sá; Lefebvre, Cédric; Huang, Zhicong; Raisaro, Jean Louis; Aguilar-Melchor, Carlos; Killijian, Marc-Olivier; Hubaux, Jean-Pierre

    2017-07-26

    Cloud computing is becoming the preferred solution for efficiently dealing with the increasing amount of genomic data. Yet, outsourcing storage and processing sensitive information, such as genomic data, comes with important concerns related to privacy and security. This calls for new sophisticated techniques that ensure data protection from untrusted cloud providers and that still enable researchers to obtain useful information. We present a novel privacy-preserving algorithm for fully outsourcing the storage of large genomic data files to a public cloud and enabling researchers to efficiently search for variants of interest. In order to protect data and query confidentiality from possible leakage, our solution exploits optimal encoding for genomic variants and combines it with homomorphic encryption and private information retrieval. Our proposed algorithm is implemented in C++ and was evaluated on real data as part of the 2016 iDash Genome Privacy-Protection Challenge. Results show that our solution outperforms the state-of-the-art solutions and enables researchers to search over millions of encrypted variants in a few seconds. As opposed to prior beliefs that sophisticated privacy-enhancing technologies (PETs) are impractical for real operational settings, our solution demonstrates that, in the case of genomic data, PETs are very efficient enablers.

  14. Low latency and persistent data storage

    DOEpatents

    Fitch, Blake G; Franceschini, Michele M; Jagmohan, Ashish; Takken, Todd

    2014-11-04

    Persistent data storage is provided by a computer program product that includes computer program code configured for receiving a low latency store command that includes write data. The write data is written to a first memory device that is implemented by a nonvolatile solid-state memory technology characterized by a first access speed. It is acknowledged that the write data has been successfully written to the first memory device. The write data is written to a second memory device that is implemented by a volatile memory technology. At least a portion of the data in the first memory device is written to a third memory device when a predetermined amount of data has been accumulated in the first memory device. The third memory device is implemented by a nonvolatile solid-state memory technology characterized by a second access speed that is slower than the first access speed.

  15. High-speed data duplication/data distribution: An adjunct to the mass storage equation

    NASA Technical Reports Server (NTRS)

    Howard, Kevin

    1993-01-01

    The term 'mass storage' evokes the image of large on-site disk and tape farms which contain huge quantities of low- to medium-access data. Although the cost of such bulk storage is recognized, the cost of the bulk distribution of this data is rarely given much attention. Mass data distribution becomes an even more acute problem if the bulk data is part of a national or international system. If the bulk data is to travel from one large data center to another large data center, then fiber-optic cables or satellite channels are feasible. However, if the distribution must be disseminated from a central site to a number of much smaller, and perhaps varying, sites, then cost prohibits the use of fiber-optic cable or satellite communication. Given these cost constraints, much of the bulk distribution of data will continue to be disseminated via inexpensive magnetic tape using the various next-day postal service options. For non-transmitted bulk data, our working hypotheses are that the desired duplication efficiency of the total bulk data should be established before selecting any particular data duplication system, and that the data duplication algorithm should be determined before any bulk data duplication method is selected.

  16. A Secure and Efficient Audit Mechanism for Dynamic Shared Data in Cloud Storage

    PubMed Central

    2014-01-01

    With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data. PMID:24959630

  17. A secure and efficient audit mechanism for dynamic shared data in cloud storage.

    PubMed

    Kwon, Ohmin; Koo, Dongyoung; Shin, Yongjoo; Yoon, Hyunsoo

    2014-01-01

    With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data.

  18. Data storage technology: Hardware and software, Appendix B

    NASA Technical Reports Server (NTRS)

    Sable, J. D.

    1972-01-01

    This project involves the development of more economical ways of integrating and interfacing new storage devices and data processing programs into a computer system. It involves developing interface standards and a software/hardware architecture which will make it possible to develop machine-independent devices and programs. These will interface with the machine-dependent operating systems of particular computers. The development project will not be to develop the software which would ordinarily be the responsibility of the manufacturer to supply, but to develop the standards with which that software is expected to conform in providing an interface with the user or storage system.

  19. Computer Storage and Retrieval of Position - Dependent Data.

    DTIC Science & Technology

    1982-06-01

    This thesis covers the design of a new digital database system to replace the merged (observation and geographic location) record, one file per cruise...68 "The Digital Data Library System: Library Storage and Retrieval of Digital Geophysical Data" by Robert C. Groan) provided a relatively simple...dependent, ’geophysical’ data. The system is operational on a Digital Equipment Corporation VAX-11/780 computer. Values of measured and computed

  20. Fast, axis-agnostic, dynamically summarized storage and retrieval for mass spectrometry data.

    PubMed

    Handy, Kyle; Rosen, Jebediah; Gillan, André; Smith, Rob

    2017-01-01

    Mass spectrometry, a popular technique for elucidating the molecular contents of experimental samples, creates data sets comprising millions of three-dimensional (m/z, retention time, intensity) data points that correspond to the types and quantities of analyzed molecules. Open and commercial MS data formats are arranged by retention time, creating latency when accessing data across multiple m/z values. Existing MS storage and retrieval methods have been developed to overcome the limitations of retention-time-based data formats, but they do not provide certain features, such as dynamic summarization and storage and retrieval of point metadata (such as signal cluster membership), precluding efficient viewing applications and certain data-processing approaches. This manuscript describes MzTree, a spatial database designed to provide real-time storage and retrieval of dynamically summarized standard and augmented MS data with fast performance in both the m/z and RT directions. Performance is reported on real data with comparisons against related published retrieval systems.

  1. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate over data storage and its growth has made it a strategic task in the world of networking. Storage mainly depends on the sensor nodes, called producers, on base stations, and on consumers (users and sensor nodes) that retrieve and use the data. The main concern addressed here is finding optimal data storage positions in wireless sensor networks. Earlier work did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for storage nodes. Thus, a hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches. PMID:25734182

  2. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate over data storage and its growth has made it a strategic task in the world of networking. Storage mainly depends on the sensor nodes, called producers, on base stations, and on consumers (users and sensor nodes) that retrieve and use the data. The main concern addressed here is finding optimal data storage positions in wireless sensor networks. Earlier work did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for storage nodes. Thus, a hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.
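
    A minimal, generic particle swarm optimization for placing a single storage node is sketched below to illustrate the idea. The node positions, data rates, cost function, and PSO constants are assumptions; the paper's hybrid PSO combined with fuzzy C-means clustering is not reproduced here.

      # Illustrative PSO placing one storage node so that the rate-weighted
      # squared distance to producers/consumers (a stand-in for transmission
      # energy) is minimized. All numbers are assumed for the sketch.
      import numpy as np

      rng = np.random.default_rng(0)
      nodes = rng.uniform(0, 100, size=(20, 2))     # producer/consumer positions
      rates = rng.uniform(1, 5, size=20)            # their data rates

      def energy_cost(pos):
          return np.sum(rates * np.sum((nodes - pos) ** 2, axis=1))

      n_particles, w, c1, c2 = 30, 0.7, 1.5, 1.5
      pos = rng.uniform(0, 100, size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_cost = pos.copy(), np.array([energy_cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_cost)]

      for _ in range(100):
          r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos += vel
          cost = np.array([energy_cost(p) for p in pos])
          improved = cost < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
          gbest = pbest[np.argmin(pbest_cost)]

      print("chosen storage position:", gbest)   # near the rate-weighted centroid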

  3. 40 CFR 792.51 - Specimen and data storage facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 31 2010-07-01 2010-07-01 true Specimen and data storage facilities. 792.51 Section 792.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT (CONTINUED) GOOD LABORATORY PRACTICE STANDARDS Facilities § 792.51 Specimen and data...

  4. State DOT use of web-based data storage.

    DOT National Transportation Integrated Search

    2013-01-01

    This study explores the experiences of state departments of transportation (DOT) in the use of web or : cloud-based data storage and related practices. The study provides results of a survey of State DOTs : and presents best practices of state govern...

  5. Data storage and retrieval system

    NASA Technical Reports Server (NTRS)

    Nakamoto, Glen

    1991-01-01

    The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9 track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to shrink the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needs to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from archivable media on a file server to a workstation in less than 5 minutes. Equivalent size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.

  6. Data storage and retrieval system

    NASA Technical Reports Server (NTRS)

    Nakamoto, Glen

    1992-01-01

    The Data Storage and Retrieval System (DSRS) consists of off-the-shelf system components integrated as a file server supporting very large files. These files are on the order of one gigabyte of data per file, although smaller files on the order of one megabyte can be accommodated as well. For instance, one gigabyte of data occupies approximately six 9-track tape reels (recorded at 6250 bpi). Due to this large volume of media, it was desirable to 'shrink' the size of the proposed media to a single portable cassette. In addition to large size, a key requirement was that the data needs to be transferred to a (VME based) workstation at very high data rates. One gigabyte (GB) of data needed to be transferred from archivable media on a file server to a workstation in less than 5 minutes. Equivalent size, on-line data needed to be transferred in less than 3 minutes. These requirements imply effective transfer rates on the order of four to eight megabytes per second (4-8 MB/s). The DSRS also needed to be able to send and receive data from a variety of other sources accessible from an Ethernet local area network.
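
    As a rough check of the quoted figures (taking 1 GB ≈ 1024 MB), the raw payload rates implied by the two time limits are

      \[
      \frac{1024\ \mathrm{MB}}{5 \times 60\ \mathrm{s}} \approx 3.4\ \mathrm{MB/s},
      \qquad
      \frac{1024\ \mathrm{MB}}{3 \times 60\ \mathrm{s}} \approx 5.7\ \mathrm{MB/s},
      \]

    and once protocol, file-system, and media-handling overhead are allowed for, the sustained device rate plausibly lands in the stated 4-8 MB/s range; this arithmetic is our reading, not part of the original report.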

  7. Managing security and privacy concerns over data storage in healthcare research.

    PubMed

    Mackenzie, Isla S; Mantay, Brian J; McDonnell, Patrick G; Wei, Li; MacDonald, Thomas M

    2011-08-01

    Issues surrounding data security and privacy are of great importance when handling sensitive health-related data for research. The emphasis in the past has been on balancing the risks to individuals with the benefit to society of the use of databases for research. However, a new way of looking at such issues is that by optimising procedures and policies regarding security and privacy of data to the extent that there is no appreciable risk to the privacy of individuals, we can create a 'win-win' situation in which everyone benefits, and pharmacoepidemiological research can flourish with public support. We discuss holistic measures, involving both information technology and people, taken to improve the security and privacy of data storage. After an internal review, we commissioned an external audit by an independent consultant with a view to optimising our data storage and handling procedures. Improvements to our policies and procedures were implemented as a result of the audit. By optimising our storage of data, we hope to inspire public confidence and hence cooperation with the use of health care data in research. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Asynchronous Object Storage with QoS for Scientific and Commercial Big Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brim, Michael J; Dillow, David A; Oral, H Sarp

    2013-01-01

    This paper presents our design for an asynchronous object storage system intended for use in scientific and commercial big data workloads. Use cases from the target workload domains are used to motivate the key abstractions used in the application programming interface (API). The architecture of the Scalable Object Store (SOS), a prototype object storage system that supports the API's facilities, is presented. The SOS serves as a vehicle for future research into scalable and resilient big data object storage. We briefly review our research into providing efficient storage servers capable of providing quality of service (QoS) contracts relevant for big data use cases.

  9. Storage, retrieval, and analysis of ST data

    NASA Technical Reports Server (NTRS)

    Albrecht, R.

    1984-01-01

    Space Telescope can generate multidimensional image data, very similar in nature to data produced with microdensitometers. An overview is presented of the ST science ground system, from the execution of observations to the interactive analysis of preprocessed data. The ground system elements used in data archival and retrieval are described, and operational procedures are discussed. Emphasis is given to aspects of the ground system that are relevant to the science user and to general principles of system software development in a production environment. While the system being developed uses relatively conservative concepts for the launch baseline, concepts were developed to enhance the ground system, including networking, remote access, and the utilization of alternate data storage technologies.

  10. Laser Card For Compact Optical Data Storage Systems

    NASA Astrophysics Data System (ADS)

    Drexler, Jerome

    1982-05-01

    The principal thrust of the optical data storage industry to date has been the 10-billion-bit optical disc system, with mass memory as the primary objective. Another objective that is beginning to demand recognition is compact memory of 1 million to 40 million bits on a wallet-size, laser-recordable card. Drexler Technology has addressed this opportunity and has succeeded in demonstrating laser writing and readback using a 16 mm by 85 mm recording stripe mounted on a card. The write/read apparatus was developed by SRI International. With this unit, 5 micron holes have been recorded using a 10 milliwatt, 830 nanometer semiconductor-diode laser. Data are entered on an Apple II keyboard using the ASCII code. The recorded reflective surface is scanned with the same laser at lower power to generate a reflected bit stream, which is converted into alphanumerics that appear on the monitor. We are pleased to report that the combination of the DREXON(TM) laser-recordable card ('Laser Card'), the semiconductor-diode laser, arrays of large recorded holes, and human-interactive data rates are all mutually compatible and point the way toward economically feasible, compact data-storage systems.

  11. BRISK--research-oriented storage kit for biology-related data.

    PubMed

    Tan, Alan; Tripp, Ben; Daley, Denise

    2011-09-01

    In genetic science, large-scale international research collaborations represent a growing trend. These collaborations have demanding and challenging database, storage, retrieval and communication needs. These studies typically involve demographic and clinical data, in addition to the results from numerous genomic studies (omics studies) such as gene expression, eQTL, genome-wide association and methylation studies, which present numerous challenges, thus the need for data integration platforms that can handle these complex data structures. Inefficient methods of data transfer and access control still plague research collaboration. As science becomes more and more collaborative in nature, the need for a system that adequately manages data sharing becomes paramount. Biology-Related Information Storage Kit (BRISK) is a package of several web-based data management tools that provide a cohesive data integration and management platform. It was specifically designed to provide the architecture necessary to promote collaboration and expedite data sharing between scientists. The software, documentation, Java source code and demo are available at http://genapha.icapture.ubc.ca/brisk/index.jsp. BRISK was developed in Java, and tested on an Apache Tomcat 6 server with a MySQL database. denise.daley@hli.ubc.ca.

  12. Simulation of mass storage systems operating in a large data processing facility

    NASA Technical Reports Server (NTRS)

    Holmes, R.

    1972-01-01

    A mass storage simulation program was written to aid system designers in the design of a data processing facility. It acts as a tool for measuring the overall effect on the facility of on-line mass storage systems, and it provides the means of measuring and comparing the performance of competing mass storage systems. The performance of the simulation program is demonstrated.

  13. A protect solution for data security in mobile cloud storage

    NASA Astrophysics Data System (ADS)

    Yu, Xiaojun; Wen, Qiaoyan

    2013-03-01

    It is popular to access cloud storage from mobile devices. However, this application suffers from data security risks, especially data leakage and privacy violations. These risks exist not only in the cloud storage system but also on the mobile client platform. To reduce the security risk, this paper proposes a new security solution that makes full use of searchable encryption and trusted computing technology. Given the performance limits of mobile devices, it proposes a trusted-proxy-based protection architecture. The basic design idea, deployment model, and key flows are detailed. Analysis of security and performance shows the advantages of the solution.

  14. Cavity-enhanced eigenmode and angular hybrid multiplexing in holographic data storage systems.

    PubMed

    Miller, Bo E; Takashima, Yuzuru

    2016-12-26

    Resonant optical cavities have been demonstrated to improve energy efficiencies in Holographic Data Storage Systems (HDSS). The orthogonal reference beams supported as cavity eigenmodes can provide another multiplexing degree of freedom to push storage densities toward the limit of 3D optical data storage. While keeping the increased energy efficiency of a cavity enhanced reference arm, image bearing holograms are multiplexed by orthogonal phase code multiplexing via Hermite-Gaussian eigenmodes in a Fe:LiNbO3 medium with a 532 nm laser at two Bragg angles. We experimentally confirmed write rates are enhanced by an average factor of 1.1, and page crosstalk is about 2.5%. This hybrid multiplexing opens up a pathway to increase storage density while minimizing modification of current angular multiplexing HDSS.

  15. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carns, Philip; Harms, Kevin; Jenkins, John

    Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.

  16. High bit rate mass data storage device

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The HDDR-II mass data storage system consists of a Leach MTR 7114 recorder reproducer, a wire wrapped, integrated circuit flat plane and necessary power supplies for the flat plane. These units, with interconnecting cables and control panel are enclosed in a common housing mounted on casters. The electronics used in the HDDR-II double density decoding and encoding techniques are described.

  17. ICI optical data storage tape

    NASA Technical Reports Server (NTRS)

    Mclean, Robert A.; Duffy, Joseph F.

    1992-01-01

    Optical data storage tape is now a commercial reality. The world's first successful development of a digital optical tape system is complete. This is based on the Creo 1003 optical tape recorder with ICI 1012 write-once optical tape media. Flexible optical media offers many benefits in terms of manufacture; for a given capital investment, continuous, web-coating techniques produce more square meters of media than batch coating. The coated layers consist of a backcoat on the non-active side; on the active side there is a subbing layer, then reflector, dye/polymer, and transparent protective overcoat. All these layers have been tailored for ease of manufacture and specific functional characteristics.

  18. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    NASA Astrophysics Data System (ADS)

    Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.

    2014-06-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure involving the loss of 16 disks. Both cloud storage systems are finally demonstrated to function as back-end storage for a filesystem, which is used to deliver high-energy physics software.

  19. Move It or Lose It: Cloud-Based Data Storage

    ERIC Educational Resources Information Center

    Waters, John K.

    2010-01-01

    There was a time when school districts showed little interest in storing or backing up their data to remote servers. Nothing seemed less secure than handing off data to someone else. But in the last few years the buzz around cloud storage has grown louder, and the idea that data backup could be provided as a service has begun to gain traction in…

  20. LVFS: A Scalable Petabyte/Exabyte Data Storage System

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.

    2013-12-01

    Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. First, is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset, and, second, minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system

  1. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept for data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls on many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability, and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images through an actual Hadoop service system.

  2. Analyzing the Impact of Storage Shortage on Data Availability in Decentralized Online Social Networks

    PubMed Central

    He, Ligang; Liao, Xiangke; Huang, Chenlin

    2014-01-01

    Maintaining data availability is one of the biggest challenges in decentralized online social networks (DOSNs). Existing work often assumes that the friends of a user can always contribute sufficient storage capacity to store all data. However, this assumption is not always true in today's online social networks (OSNs), because users now often access OSNs from smart mobile devices. The limited storage capacity of mobile devices may jeopardize data availability. Therefore, it is desirable to know the relation between the storage capacity contributed by OSN users and the level of data availability that the OSNs can achieve. This paper addresses this issue. The data availability model over storage capacity is established. Further, a novel method is proposed to predict the data availability on the fly. Extensive simulation experiments have been conducted to evaluate the effectiveness of the data availability model and the on-the-fly prediction. PMID:24892095

  3. Analyzing the impact of storage shortage on data availability in decentralized online social networks.

    PubMed

    Fu, Songling; He, Ligang; Liao, Xiangke; Li, Kenli; Huang, Chenlin

    2014-01-01

    Maintaining data availability is one of the biggest challenges in decentralized online social networks (DOSNs). Existing work often assumes that the friends of a user can always contribute sufficient storage capacity to store all data. However, this assumption is not always true in today's online social networks (OSNs), because users now often access OSNs from smart mobile devices. The limited storage capacity of mobile devices may jeopardize data availability. Therefore, it is desirable to know the relation between the storage capacity contributed by OSN users and the level of data availability that the OSNs can achieve. This paper addresses this issue. The data availability model over storage capacity is established. Further, a novel method is proposed to predict the data availability on the fly. Extensive simulation experiments have been conducted to evaluate the effectiveness of the data availability model and the on-the-fly prediction.

  4. Global root zone storage capacity from satellite-based evaporation data

    NASA Astrophysics Data System (ADS)

    Wang-Erlandsson, Lan; Bastiaanssen, Wim; Gao, Hongkai; Jägermeyr, Jonas; Senay, Gabriel; van Dijk, Albert; Guerschman, Juan; Keys, Patrick; Gordon, Line; Savenije, Hubert

    2016-04-01

    We present an "earth observation-based" method for estimating root zone storage capacity - a critical, yet uncertain parameter in hydrological and land surface modelling. By assuming that vegetation optimises its root zone storage capacity to bridge critical dry periods, we were able to use state-of-the-art satellite-based evaporation data computed with independent energy balance equations to derive gridded root zone storage capacity at global scale. This approach does not require soil or vegetation information, is model independent, and is in principle scale-independent. In contrast to traditional look-up table approaches, our method captures the variability in root zone storage capacity within land cover type, including in rainforests where direct measurements of root depth otherwise are scarce. Implementing the estimated root zone storage capacity in the global hydrological model STEAM improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. We find that evergreen forests are able to create a large storage to buffer for extreme droughts (with a return period of up to 60 years), in contrast to short vegetation and crops (which seem to adapt to a drought return period of about 2 years). The presented method to estimate root zone storage capacity eliminates the need for soils and rooting depth information, which could be a game-changer in global land surface modelling.
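
    The central assumption, that the root zone is sized to bridge the largest cumulative dry-period deficit, can be illustrated with a short sketch. The precipitation and evaporation series below are synthetic placeholders, and the return-period (drought-frequency) scaling described in the abstract is omitted.

      # Sketch of the "bridge the critical dry period" idea: the root zone
      # storage capacity is taken as the largest cumulative amount by which
      # evaporation exceeds precipitation. The daily P and E series are
      # synthetic placeholders, not satellite data.
      import numpy as np

      rng = np.random.default_rng(1)
      precip = rng.gamma(shape=0.5, scale=6.0, size=365)   # mm/day, assumed
      evap = np.full(365, 3.0)                             # mm/day, assumed

      def root_zone_storage_capacity(p, e):
          deficit, max_deficit = 0.0, 0.0
          for p_t, e_t in zip(p, e):
              deficit = max(0.0, deficit + e_t - p_t)   # running moisture deficit
              max_deficit = max(max_deficit, deficit)
          return max_deficit                            # mm of storage needed

      capacity = root_zone_storage_capacity(precip, evap)
      print(f"estimated root zone storage capacity: {capacity:.1f} mm")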

  5. Partial storage optimization and load control strategy of cloud data centers.

    PubMed

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach to solving cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which is a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage while providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed helps reduce the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers collaborate to provide the data to cloud clients more quickly.
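
    The dual-direction idea can be sketched as two downloaders working toward each other over the same list of partitions, one per replica. The sketch below is illustrative only; fetch_partition() is a placeholder for a real network request, and the paper's partitioning, replication, and scheduling details are not reproduced.

      # Sketch: two replicas serve the same file's partitions; one thread fetches
      # from the front, the other from the back, and they stop when they meet.
      from threading import Thread, Lock

      partitions = [f"block-{i}" for i in range(10)]   # placeholder partitions
      result = [None] * len(partitions)
      lock, next_front, next_back = Lock(), 0, len(partitions) - 1

      def fetch_partition(replica, index):
          return partitions[index]          # stand-in for an HTTP range request

      def worker(replica, direction):
          global next_front, next_back
          while True:
              with lock:
                  if next_front > next_back:
                      return                          # downloaders have met
                  index = next_front if direction == "forward" else next_back
                  if direction == "forward":
                      next_front += 1
                  else:
                      next_back -= 1
              result[index] = fetch_partition(replica, index)

      threads = [Thread(target=worker, args=("replica-A", "forward")),
                 Thread(target=worker, args=("replica-B", "backward"))]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      assert result == partitions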

  6. Partial Storage Optimization and Load Control Strategy of Cloud Data Centers

    PubMed Central

    2015-01-01

    We present a novel approach to solving cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of files from multiple cloud nodes. Partitions of the files, rather than the full files, are saved on the cloud, which is a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve performance and optimize storage usage while providing DaaS on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed helps reduce the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers collaborate to provide the data to cloud clients more quickly. PMID:25973444

  7. Dual-Wavelength Sensitized Photopolymer for Holographic Data Storage

    NASA Astrophysics Data System (ADS)

    Tao, Shiquan; Zhao, Yuxia; Wan, Yuhong; Zhai, Qianli; Liu, Pengfei; Wang, Dayong; Wu, Feipeng

    2010-08-01

    Novel photopolymers for holographic storage were investigated by combining acrylate monomers and/or vinyl monomers as recording media with liquid epoxy resins plus an amine hardener as the binder. In order to improve the holographic performance of the material in the blue-green wavelength band, two novel dyes were used as sensitizers. The methods for evaluating the holographic performance of the material, including its shrinkage and noise characteristics, are described in detail. Preliminary experiments show that samples with an optimized composition have good holographic performance, and it is possible to record dual-wavelength holograms simultaneously in this photopolymer by sharing the same optical system; thus the storage density and data rate can be doubled.

  8. An intelligent data model for the storage of structured grids

    NASA Astrophysics Data System (ADS)

    Clyne, John; Norton, Alan

    2013-04-01

    With support from the U.S. National Science Foundation we have developed, and currently maintain, VAPOR: a geosciences-focused, open source visual data analysis package. VAPOR enables highly interactive exploration, as well as qualitative and quantitative analysis, of high-resolution simulation outputs using only a commodity desktop computer. The enabling technology behind VAPOR's ability to interact with a data set whose size would overwhelm all but the largest analysis computing resources is a progressive data access file format called the VAPOR Data Collection (VDC). The VDC is based on the discrete wavelet transform and its information compaction properties. Prior to analysis, raw data undergo a wavelet transform, concentrating the information content into a fraction of the coefficients. The coefficients are then sorted by their information content (magnitude) into a small number of bins. Data are reconstructed by applying an inverse wavelet transform. If all of the coefficient bins are used during reconstruction, the process is lossless (up to floating point round-off). If only a subset of the bins is used, an approximation of the original data is produced. A crucial point here is that the principal benefit of reconstruction from a subset of wavelet coefficients is a reduction in I/O. Further, if the smaller coefficients are simply discarded, or perhaps stored on more capacious tertiary storage, secondary storage requirements (e.g. disk) can be reduced as well. In practice, these reductions in I/O or storage can be on the order of tens or even hundreds. This talk will briefly describe the VAPOR Data Collection and present real-world success stories from the geosciences that illustrate how progressive data access enables highly interactive exploration of Big Data.
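
    The coefficient-prioritization idea can be illustrated with a short PyWavelets sketch: transform a signal, keep only the largest-magnitude coefficients, and reconstruct an approximation. This is a 1-D toy example under assumed parameters, not the VDC format itself, which operates on multi-dimensional grids and groups coefficients into bins rather than thresholding a single array.

      # Sketch of progressive, coefficient-prioritized reconstruction using
      # PyWavelets; signal, wavelet, and the 10% retention level are assumptions.
      import numpy as np
      import pywt

      signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * np.random.randn(1024)

      coeffs = pywt.wavedec(signal, "db4", level=5)
      flat, slices = pywt.coeffs_to_array(coeffs)

      # Keep the 10% largest-magnitude coefficients (roughly a 10:1 reduction
      # in the data that must be read back).
      keep = int(0.1 * flat.size)
      threshold = np.sort(np.abs(flat))[-keep]
      flat_trunc = np.where(np.abs(flat) >= threshold, flat, 0.0)

      approx = pywt.waverec(
          pywt.array_to_coeffs(flat_trunc, slices, output_format="wavedec"), "db4"
      )
      print("relative error:",
            np.linalg.norm(approx[:1024] - signal) / np.linalg.norm(signal))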

  9. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud based infrastructure may offer several key benefits of scalability, built-in redundancy, security mechanisms, and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided by all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), and these can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective but also shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
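
    The following is not the HSDS or its client API, merely a sketch of the object-store access pattern such a service builds on: retrieving one chunk of a large array with an HTTP range request against S3 via boto3. The bucket, key, dtype, and offsets are hypothetical placeholders.

      # Sketch of chunked, object-storage-backed array access using a ranged GET.
      # Bucket, key, and offsets are hypothetical; requires AWS credentials.
      import boto3
      import numpy as np

      s3 = boto3.client("s3")

      def read_chunk(bucket, key, offset, nbytes, dtype=np.float32):
          """Fetch nbytes starting at offset and view them as a typed array."""
          resp = s3.get_object(Bucket=bucket, Key=key,
                               Range=f"bytes={offset}-{offset + nbytes - 1}")
          return np.frombuffer(resp["Body"].read(), dtype=dtype)

      # e.g. chunk 42 of a flat float32 array stored as one object
      chunk = read_chunk("example-earthdata-bucket", "temperature.bin",
                         offset=42 * 4096, nbytes=4096)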

  10. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    PubMed

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter applied images, which cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.

  11. Problems in the long-term storage of data obtained from scientific space experiments

    NASA Technical Reports Server (NTRS)

    Zlotin, G. N.; Khovanskiy, Y. D.

    1975-01-01

    It is shown that long-term data storage systems can be achieved when the system which organizes and conducts the scientific space experiments is equipped with a specialized subsystem: the information filing system. Its main functions are described along with the necessity of stage-by-stage development and compatibility with the data processing systems. The requirements for long-term data storage media are discussed.

  12. In-network Coding for Resilient Sensor Data Storage and Efficient Data Mule Collection

    NASA Astrophysics Data System (ADS)

    Albano, Michele; Gao, Jie

    In a sensor network of n nodes in which k of them have sensed interesting data, we perform in-network erasure coding such that each node stores a linear combination of all the network data with random coefficients. This scheme greatly improves data resilience to node failures: as long as there are k nodes that survive an attack, all the data produced in the sensor network can be recovered with high probability. The in-network coding storage scheme also improves data collection rate by mobile mules and allows for easy scheduling of data mules.
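
    A minimal sketch of the coding scheme follows: each node stores one random linear combination of the k source blocks, and any k surviving nodes whose coefficient matrix is invertible can recover the originals. Real deployments work over a finite field such as GF(2^8); floating-point arithmetic is used here only to keep the example short.

      # Sketch of in-network random linear coding and recovery from k survivors.
      import numpy as np

      rng = np.random.default_rng(7)
      k, n = 4, 10
      data = rng.integers(0, 256, size=(k, 8)).astype(float)   # k source items

      coeffs = rng.random((n, k))              # each node draws random coefficients
      stored = coeffs @ data                   # node i stores one combined block

      survivors = [1, 4, 6, 9]                 # any k nodes that survive a failure
      A = coeffs[survivors]
      recovered = np.linalg.solve(A, stored[survivors])
      assert np.allclose(recovered, data)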

  13. Using semantic data modeling techniques to organize an object-oriented database for extending the mass storage model

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik

    1991-01-01

    A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.

  14. A novel data storage logic in the cloud

    PubMed Central

    Mátyás, Bence; Szarka, Máté; Járvás, Gábor; Kusper, Gábor; Argay, István; Fialowski, Alice

    2016-01-01

    Databases that store and manage long-term scientific information related to the life sciences hold huge amounts of quantitative attributes. Introducing a new entity attribute requires modification of the existing data tables and of the programs that use them. The solution is to increase the number of virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT), which provides universal data storage for cloud-based databases. This means all types of input data can be interpreted as an entity and an attribute at the same time, in the same data table. PMID:29026521

  15. A novel data storage logic in the cloud.

    PubMed

    Mátyás, Bence; Szarka, Máté; Járvás, Gábor; Kusper, Gábor; Argay, István; Fialowski, Alice

    2016-01-01

    Databases which store and manage long-term scientific information related to life science are used to store huge amounts of quantitative attributes. Introducing a new entity attribute requires modification of the existing data tables and of the programs that use them. The proposed solution increases the number of virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT), which provides universal data storage for cloud-based databases. This means that all types of input data can be interpreted as both an entity and an attribute at the same time, in the same data table.

  16. Digital super-resolution holographic data storage based on Hermitian symmetry for achieving high areal density.

    PubMed

    Nobukawa, Teruyoshi; Nomura, Takanori

    2017-01-23

    Digital super-resolution holographic data storage based on Hermitian symmetry is proposed to store digital data in a tiny area of a medium. In general, reducing the recording area with an aperture improves the storage capacity of holographic data storage. Conventional holographic data storage systems, however, have a limit on how far the recording area can be reduced; this limit is called the Nyquist size. Unlike conventional systems, the proposed system can overcome this limit with the help of a digital holographic technique and digital signal processing. Experimental results show that the proposed system can record and retrieve a hologram in an area smaller than the Nyquist size on the basis of Hermitian symmetry.

  17. Linear phase encoding for holographic data storage with a single phase-only spatial light modulator.

    PubMed

    Nobukawa, Teruyoshi; Nomura, Takanori

    2016-04-01

    A linear phase encoding is presented for realizing a compact and simple holographic data storage system with a single spatial light modulator (SLM). This encoding method makes it possible to modulate a complex amplitude distribution with a single phase-only SLM in a holographic storage system. In addition, undesired light due to imperfections of the SLM can be removed by spatial frequency filtering with a Nyquist aperture. The linear phase encoding is introduced to coaxial holographic data storage. The generation of a signal beam using linear phase encoding is experimentally verified in an interferometer. In a coaxial holographic data storage system, single data recording, shift selectivity, and shift-multiplexed recording are experimentally demonstrated.

  18. Micro-optic lens for data storage

    NASA Technical Reports Server (NTRS)

    Milster, T. D.; Trusty, R. M.; Wang, M. S.; Froehlich, F. F.; Erwin, J. Kevin

    1991-01-01

    A new type of microlens for data storage applications that has improved off-axis performance is described. The lens consists of a micro Fresnel pattern on a curved substrate. The radius of the substrate is equal to the focal length of the lens. If the pattern and substrate are thin, the combination satisfies the Abbe sine condition. Therefore, the lens is free of coma. We analyze a 0.5 numerical aperture, 0.50 mm focal length lens in detail. A 0.16 numerical aperture lens was fabricated holographically, and results are presented.

  19. Multiplexed Holographic Optical Data Storage In Thick Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Ozcan, Meric; Smithey, Daniel T.; Crew, Marshall

    1998-01-01

    The optical data storage capacity of photochromic bacteriorhodopsin films is investigated by means of theoretical calculations, numerical simulations, and experimental measurements on sequential recording of angularly multiplexed diffraction gratings inside a thick D85N BR film.

  20. LOGISTIC MANAGEMENT INFORMATION SYSTEM - MANUAL DATA STORAGE AND RETRIEVAL SYSTEM.

    DTIC Science & Technology

    Logistics Management Information System. The procedures are applicable to manual storage and retrieval of all data used in the Logistics Management Information System (LMIS) and include the following: (1) Action Officer data source file. (2) Action Officer presentation format file. (3) LMI Coordination

  1. Land Water Storage within the Congo Basin Inferred from GRACE Satellite Gravity Data

    NASA Technical Reports Server (NTRS)

    Crowley, John W.; Mitrovica, Jerry X.; Bailey, Richard C.; Tamisiea, Mark E.; Davis, James L.

    2006-01-01

    GRACE satellite gravity data is used to estimate terrestrial (surface plus ground) water storage within the Congo Basin in Africa for the period April 2002 - May 2006. These estimates exhibit significant seasonal (30 +/- 6 mm of equivalent water thickness) and long-term trends, the latter yielding a total loss of approximately 280 km(exp 3) of water over the 50-month span of data. We also combine GRACE and precipitation data sets (CMAP, TRMM) to explore the relative contributions of the source term to the seasonal hydrological balance within the Congo Basin. We find that the seasonal water storage tends to saturate for anomalies greater than 30-44 mm of equivalent water thickness. Furthermore, precipitation contributed roughly three times the peak water storage after the anomalously rainy seasons in early 2003 and 2005, implying an approximately 60-70% loss from runoff and evapotranspiration. Finally, a comparison of residual land water storage (monthly estimates minus best-fitting trends) in the Congo and Amazon Basins shows an anticorrelation, in agreement with the 'see-saw' variability inferred by others from runoff data.

  2. A user-defined data type for the storage of time series data allowing efficient similarity screening.

    PubMed

    Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor

    2012-07-16

    The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
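
    The access method described above is built on SAX. The small, self-contained sketch below shows only the SAX symbolization step (z-normalization, piecewise aggregate approximation, then mapping segment means to letters via Gaussian breakpoints); it is independent of any PostgreSQL integration, and the segment count and alphabet size are arbitrary choices.

      # Minimal SAX (Symbolic Aggregate approXimation) sketch: only the symbolization
      # that the paper's index relies on, not the PostgreSQL data type itself.
      import numpy as np
      from scipy.stats import norm

      def sax(series, n_segments=8, alphabet="abcd"):
          x = np.asarray(series, dtype=float)
          x = (x - x.mean()) / (x.std() + 1e-12)                  # z-normalize
          paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
          # Breakpoints split the standard normal into equiprobable regions.
          breakpoints = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])
          return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))

      t = np.linspace(0, 4 * np.pi, 128)
      print(sax(np.sin(t)))    # a short string capturing the coarse shape of the series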

  3. Quasi-light storage for optical data packets.

    PubMed

    Schneider, Thomas; Preußler, Stefan

    2014-02-06

    Today's telecommunication is based on optical packets which transmit information in optical fiber networks around the world. Currently, the processing of the signals is done in the electrical domain. Direct storage in the optical domain would avoid the transfer of the packets to the electrical and back to the optical domain in every network node and, therefore, increase the speed and possibly reduce the energy consumption of telecommunications. However, light consists of photons which propagate at the speed of light in vacuum, so the storage of light is a big challenge. There exist some methods to slow down the speed of light, or to store it in excitations of a medium, but these methods cannot be used for the storage of optical data packets in telecommunications networks. Here we show how the time-frequency coherence, which holds for every signal and therefore for optical packets as well, can be exploited to build an optical memory. We review the background and show in detail, and through examples, how a frequency comb can be used to copy an optical packet that enters the memory. One of these time-domain copies is then extracted from the memory by a time-domain switch. We show this method for intensity-modulated as well as phase-modulated signals.

  4. 31. Perimeter acquisition radar building room #318, data storage "racks"; ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    31. Perimeter acquisition radar building room #318, data storage "racks"; sign read: M&D controller, logic control buffer, data transmission controller - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND

  5. Towards Highly-Efficient Phototriggered Data Storage by Utilizing a Diketopyrrolopyrrole-Based Photoelectronic Small Molecule.

    PubMed

    Li, Yang; Li, Hua; He, Jinghui; Xu, Qingfeng; Li, Najun; Chen, Dongyun; Lu, Jianmei

    2016-07-20

    A cooperative photoelectrical strategy is proposed for effectively modulating the performance of a multilevel data-storage device. By taking advantage of organic photoelectronic molecules as storage media, the fabricated device exhibited enhanced working parameters under the action of both optical and electrical inputs. In cooperation with UV light, the operating voltages of the memory device were decreased, which was beneficial for low energy consumption. Moreover, the ON/OFF current ratio was more tunable and facilitated high-resolution multilevel storage. Compared with previous methods that focused on tuning the storage media, this study provides an easy approach for optimizing organic devices through multiple physical channels. More importantly, this method holds promise for integrating multiple functionalities into high-density data-storage devices. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Up-to-date state of storage techniques used for large numerical data files

    NASA Technical Reports Server (NTRS)

    Chlouba, V.

    1975-01-01

    Methods for data storage and output in data banks and memory files are discussed along with a survey of equipment available for this. Topics discussed include magnetic tapes, magnetic disks, Terabit magnetic tape memory, Unicon 690 laser memory, IBM 1360 photostore, microfilm recording equipment, holographic recording, film readers, optical character readers, digital data storage techniques, and photographic recording. The individual types of equipment are summarized in tables giving the basic technical parameters.

  7. ESGF and WDCC: The Double Structure of the Digital Data Storage at DKRZ

    NASA Astrophysics Data System (ADS)

    Toussaint, F.; Höck, H.

    2016-12-01

    For several years now, digital repositories in climate science have faced new challenges: international projects are global collaborations, and data storage has in parallel moved to federated, distributed storage systems like ESGF. For long-term archival (LTA) storage, on the other hand, communities, funders, and data users make stronger demands on data and metadata quality to facilitate data use and reuse. At DKRZ, this situation led to a twofold data dissemination system, which influences the administration, workflows, and sustainability of the data. The ESGF system is focused on the needs of users as partners in global projects. It includes replication tools, detailed global project standards, and efficient search for the data to download. In contrast, DKRZ's classical CERA LTA storage aims for long-term data holding and data curation as well as for data reuse, requiring high metadata quality standards. In addition, for LTA data a Digital Object Identifier publication service for the direct integration of research data in scientific publications has been implemented. The editorial process at DKRZ-LTA ensures the quality of metadata and research data. The DOI and a citation code are provided and afterwards registered under DataCite's (datacite.org) regulations. In the overall data life cycle, continuous reliability of data and metadata quality is essential to allow for data handling at the petabyte level, long-term usability of the data, and adequate publication of the results. These considerations lead to the question "What is quality?" - with respect to the data, the repository itself, the publisher, and the user. Global consensus is needed for these assessments, as the phases of the end-to-end workflow interlock: for data and metadata, checks need to go hand in hand with the processes of production and storage. The results can be judged following a Quality Maturity Matrix (QMM). Repositories can be certified according to their trustworthiness

  8. GRACE, GLDAS and measured groundwater data products show water storage loss in Western Jilin, China.

    PubMed

    Moiwo, Juana Paul; Lu, Wenxi; Tao, Fulu

    2012-01-01

    Water storage depletion is a worsening hydrological problem that limits agricultural production, especially in arid/semi-arid regions across the globe. Quantifying water storage dynamics is critical for developing water resources management strategies that are sustainable and protective of the environment. This study uses GRACE (Gravity Recovery and Climate Experiment), GLDAS (Global Land Data Assimilation System) and measured groundwater data products to quantify water storage in Western Jilin (a proxy for semi-arid wetland ecosystems) for the period from January 2002 to December 2009. Uncertainty/bias analysis shows that the data products have an average error <10% (p < 0.05). Comparisons of the storage variables show favorable agreements at various temporal cycles, with R(2) = 0.92 and RMSE = 7.43 mm at the average seasonal cycle. There is a narrowing soil moisture storage change, a widening groundwater storage loss, and an overall storage depletion of 0.85 mm/month in the region. There is possible soil-pore collapse and land subsidence due to storage depletion in the study area. Invariably, storage depletion in this semi-arid region could have negative implications for agriculture, valuable/fragile wetland ecosystems and people's livelihoods. For sustainable restoration and preservation of wetland ecosystems in the region, it is critical to develop water resources management strategies that limit groundwater extraction rate to that of recharge rate.
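
    For readers unfamiliar with the agreement metrics quoted above, the short snippet below shows how R(2) and RMSE between two storage-anomaly series are computed; the numbers are invented for illustration and are not the study's data.

      # Illustrative computation of R^2 and RMSE between two water-storage series.
      import numpy as np

      grace = np.array([12.0, 35.0, 20.0, -8.0, -30.0, -15.0, 5.0, 25.0])   # mm
      gldas = np.array([10.0, 33.0, 24.0, -5.0, -28.0, -18.0, 3.0, 27.0])   # mm

      rmse = np.sqrt(np.mean((grace - gldas) ** 2))
      r2 = 1.0 - np.sum((grace - gldas) ** 2) / np.sum((grace - grace.mean()) ** 2)
      print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f} mm")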

  9. Analysis Report for Exascale Storage Requirements for Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, Thomas M.

    Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale computing, resulting in data storage, networking, and infrastructure requirements increasing by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how they will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report is focused on the most concerning scaling issues with data storage systems as they relate to High Performance Computing, and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.

  10. Hierarchical storage of large volume of multidetector CT data using distributed servers

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners have the ability to generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of generating secondary processed images and 3D rendered images as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing without the need for long-term storage in a PACS archive. With the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple computers with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software called OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a new technology called "Bonjour". This architecture offers a seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  11. SERODS optical data storage with parallel signal transfer

    DOEpatents

    Vo-Dinh, Tuan

    2003-09-02

    Surface-enhanced Raman optical data storage (SERODS) systems having increased reading and writing speeds, that is, increased data transfer rates, are disclosed. In the various SERODS read and write systems, the surface-enhanced Raman scattering (SERS) data is written and read using a two-dimensional process called parallel signal transfer (PST). The various embodiments utilize laser light beam excitation of the SERODS medium, optical filtering, beam imaging, and two-dimensional light detection. Two- and three-dimensional SERODS media are utilized. The SERODS write systems employ either a different laser or a different level of laser power.

  12. SERODS optical data storage with parallel signal transfer

    DOEpatents

    Vo-Dinh, Tuan

    2003-06-24

    Surface-enhanced Raman optical data storage (SERODS) systems having increased reading and writing speeds, that is, increased data transfer rates, are disclosed. In the various SERODS read and write systems, the surface-enhanced Raman scattering (SERS) data is written and read using a two-dimensional process called parallel signal transfer (PST). The various embodiments utilize laser light beam excitation of the SERODS medium, optical filtering, beam imaging, and two-dimensional light detection. Two- and three-dimensional SERODS media are utilized. The SERODS write systems employ either a different laser or a different level of laser power.

  13. Converged photonic data storage and switch platform for exascale disaggregated data centers

    NASA Astrophysics Data System (ADS)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  14. ElVisML: an open data format for the exchange and storage of electrophysiological data in ophthalmology.

    PubMed

    Strasser, Torsten; Peters, Tobias; Jägle, Herbert; Zrenner, Eberhart

    2018-02-01

    The ISCEV standards and recommendations for electrophysiological recordings in ophthalmology define a set of protocols with stimulus parameters, acquisition settings, and recording conditions, to unify the data and enable comparability of results across centers. Up to now, however, there are no standards to define the storage and exchange of such electrophysiological recordings. The aim of this study was to develop an open standard data format for the exchange and storage of visual electrophysiological data (ElVisML). We first surveyed existing data formats for biomedical signals and examined their suitability for electrophysiological data in ophthalmology. We then compared the suitability of text-based and binary formats, as well as encoding in Extensible Markup Language (XML) and character/comma-separated values. The results of the methodological consideration led to the development of ElVisML with an XML-encoded text-based format. This allows referential integrity, extensibility, the storing of accompanying units, as well as ensuring confidentiality and integrity of the data. A visualization of ElVisML documents (ElVisWeb) has additionally been developed, which facilitates the exchange of recordings on mailing lists and allows open access to data along with published articles. The open data format ElVisML ensures the quality, validity, and integrity of electrophysiological data transmission and storage as well as providing manufacturer-independent access and long-term archiving in a future-proof format. Standardization of the format of such neurophysiology data would promote the development of new techniques and open software for the use of neurophysiological data in both clinic and research.

  15. Comparison of Decadal Water Storage Trends from Global Hydrological Models and GRACE Satellite Data

    NASA Astrophysics Data System (ADS)

    Scanlon, B. R.; Zhang, Z. Z.; Save, H.; Sun, A. Y.; Mueller Schmied, H.; Van Beek, L. P.; Wiese, D. N.; Wada, Y.; Long, D.; Reedy, R. C.; Doll, P. M.; Longuevergne, L.

    2017-12-01

    Global hydrology is increasingly being evaluated using models; however, the reliability of these global models is not well known. In this study we compared decadal trends (2002-2014) in land water storage from 7 global models (WGHM, PCR-GLOBWB, and GLDAS: NOAH, MOSAIC, VIC, CLM, and CLSM) to storage trends from new GRACE satellite mascon solutions (CSR-M and JPL-M). The analysis was conducted over 186 river basins, representing about 60% of the global land area. Modeled total water storage trends agree with GRACE-derived trends that are within ±0.5 km3/yr but greatly underestimate large declining and rising trends outside this range. Large declining trends are found mostly in intensively irrigated basins and in some basins in northern latitudes. Rising trends are found in basins with little or no irrigation and are generally related to increasing trends in precipitation. The largest decline is found in the Ganges (-12 km3/yr) and the largest rise in the Amazon (43 km3/yr). Differences between models and GRACE are greatest in large basins (>0.5x106 km2) mostly in humid regions. There is very little agreement in storage trends between models and GRACE and among the models, with values of r2 mostly <0.1. Various factors can contribute to discrepancies in water storage trends between models and GRACE, including uncertainties in precipitation, model calibration, storage capacity, and water use in models and uncertainties in GRACE data related to processing, glacier leakage, and glacial isostatic adjustment. The GRACE data indicate that land has a large capacity to store water over decadal timescales that is underrepresented by the models. The storage capacity in the modeled soil and groundwater compartments may be insufficient to accommodate the range in water storage variations shown by GRACE data. The inability of the models to capture the large storage trends indicates that model projections of climate and human-induced changes in water storage may be

  16. Phase-image-based content-addressable holographic data storage

    NASA Astrophysics Data System (ADS)

    John, Renu; Joseph, Joby; Singh, Kehar

    2004-03-01

    We propose and demonstrate the use of phase images for content-addressable holographic data storage. Use of binary phase-based data pages with 0 and π phase changes produces a uniform spectral distribution at the Fourier plane. The absence of a strong DC component at the Fourier plane and the higher intensity of higher-order spatial frequencies facilitate better recording of higher spatial frequencies and improve the discrimination capability of the content-addressable memory. This improves the results of associative recall in a holographic memory system and can give a low number of false hits even for small search arguments. The phase-modulated pixels also provide an opportunity for subtraction among data pixels, leading to better discrimination between similar data pages.

  17. Archive Storage Media Alternatives.

    ERIC Educational Resources Information Center

    Ranade, Sanjay

    1990-01-01

    Reviews requirements for a data archive system and describes storage media alternatives that are currently available. Topics discussed include data storage; data distribution; hierarchical storage architecture, including inline storage, online storage, nearline storage, and offline storage; magnetic disks; optical disks; conventional magnetic…

  18. Effective grouping for energy and performance: Construction of adaptive, sustainable, and maintainable data storage

    NASA Astrophysics Data System (ADS)

    Essary, David S.

    The performance gap between processors and storage systems has been increasingly critical over the years. Yet the performance disparity remains, and further, storage energy consumption is rapidly becoming a new critical problem. While smarter caching and predictive techniques do much to alleviate this disparity, the problem persists, and data storage remains a growing contributor to latency and energy consumption. Attempts have been made at data layout maintenance, or intelligent physical placement of data, yet in practice, basic heuristics remain predominant. Problems that early studies sought to solve via layout strategies were proven to be NP-Hard, and data layout maintenance today remains more art than science. With unknown potential and a domain inherently full of uncertainty, layout maintenance persists as an area largely untapped by modern systems. But uncertainty in workloads does not imply randomness; access patterns have exhibited repeatable, stable behavior. Predictive information can be gathered, analyzed, and exploited to improve data layouts. Our goal is a dynamic, robust, sustainable predictive engine, aimed at improving existing layouts by replicating data at the storage device level. We present a comprehensive discussion of the design and construction of such a predictive engine, including workload evaluation, where we present and evaluate classical workloads as well as our own highly detailed traces collected over an extended period. We demonstrate significant gains through an initial static grouping mechanism, and compare against an optimal grouping method of our own construction, and further show significant improvement over competing techniques. We also explore and illustrate the challenges faced when moving from static to dynamic (i.e. online) grouping, and provide motivation and solutions for addressing these challenges. These challenges include metadata storage, appropriate predictive collocation, online performance, and physical placement

  19. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  20. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center

    PubMed Central

    Dou, Chao

    2016-01-01

    The storage volume of an internet data center is a classical time series, and predicting it is of considerable business value. However, the storage volume series from a data center is always “dirty”: it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series before any future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the “dirty” data; cubic spline interpolation and an averaging method are then used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experiment results show that the developed method can estimate the main trend of the storage volume series accurately and makes a great contribution to predicting future volume values.
 PMID:28090205
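
    The pipeline described in this record combines Kalman filtering of the "dirty" samples with cubic spline reconstruction of the trend. A rough sketch of that pipeline follows, assuming a simple one-dimensional random-walk state model with illustrative noise parameters; the paper's exact filter design and averaging step are not reproduced here.

      # Rough sketch of main-trend extraction: a 1-D random-walk Kalman filter smooths
      # noisy, irregularly sampled storage-volume readings, then a cubic spline
      # reconstructs the trend on a regular grid. Parameter values are illustrative.
      import numpy as np
      from scipy.interpolate import CubicSpline

      def kalman_smooth(t, y, q=0.05, r=25.0):
          x, p = y[0], 1.0
          out = [x]
          for k in range(1, len(y)):
              p = p + q * (t[k] - t[k - 1])      # predict (random-walk model)
              gain = p / (p + r)                 # update with measurement y[k]
              x = x + gain * (y[k] - x)
              p = (1.0 - gain) * p
              out.append(x)
          return np.array(out)

      rng = np.random.default_rng(1)
      t = np.sort(rng.uniform(0, 90, 60))        # irregular sampling times (days)
      y = 100 + 0.8 * t + rng.normal(0, 5, t.size)
      y[::11] += 60                              # a few gross outliers ("dirty" data)

      smoothed = kalman_smooth(t, y)
      grid = np.linspace(t.min(), t.max(), 200)
      main_trend = CubicSpline(t, smoothed)(grid)   # regular, gap-free trend estimate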

  1. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center.

    PubMed

    Miao, Beibei; Dou, Chao; Jin, Xuebo

    2016-01-01

    The storage volume of an internet data center is a classical time series, and predicting it is of considerable business value. However, the storage volume series from a data center is always "dirty": it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series before any future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; cubic spline interpolation and an averaging method are then used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experiment results show that the developed method can estimate the main trend of the storage volume series accurately and makes a great contribution to predicting future volume values.

  2. On-Chip Fluorescence Switching System for Constructing a Rewritable Random Access Data Storage Device.

    PubMed

    Nguyen, Hoang Hiep; Park, Jeho; Hwang, Seungwoo; Kwon, Oh Seok; Lee, Chang-Soo; Shin, Yong-Beom; Ha, Tai Hwan; Kim, Moonil

    2018-01-10

    We report the development of an on-chip fluorescence switching system based on DNA strand displacement and DNA hybridization for the construction of a rewritable and randomly accessible data storage device. In this study, the feasibility and potential effectiveness of our proposed system were evaluated with a series of wet experiments involving 40 bits (5 bytes) of data encoding a five-character text (KRIBB). Also, a flexible data rewriting function was achieved by converting fluorescence signals between "ON" and "OFF" through DNA strand displacement and hybridization events. In addition, the proposed system was successfully validated on a microfluidic chip, which could further facilitate the encoding and decoding of data. To the best of our knowledge, this is the first report on the use of DNA hybridization and DNA strand displacement in the field of data storage devices. Taken together, our results demonstrate that DNA-based fluorescence switching could be applicable to constructing a rewritable and randomly accessible data storage device through controllable DNA manipulations.

  3. Towards regional, error-bounded landscape carbon storage estimates for data-deficient areas of the world.

    PubMed

    Willcock, Simon; Phillips, Oliver L; Platts, Philip J; Balmford, Andrew; Burgess, Neil D; Lovett, Jon C; Ahrends, Antje; Bayliss, Julian; Doggart, Nike; Doody, Kathryn; Fanning, Eibleis; Green, Jonathan; Hall, Jaclyn; Howell, Kim L; Marchant, Rob; Marshall, Andrew R; Mbilinyi, Boniface; Munishi, Pantaleon K T; Owen, Nisha; Swetnam, Ruth D; Topp-Jorgensen, Elmer J; Lewis, Simon L

    2012-01-01

    Monitoring landscape carbon storage is critical for supporting and validating climate change mitigation policies. These may be aimed at reducing deforestation and degradation, or increasing terrestrial carbon storage at local, regional and global levels. However, due to data-deficiencies, default global carbon storage values for given land cover types such as 'lowland tropical forest' are often used, termed 'Tier 1 type' analyses by the Intergovernmental Panel on Climate Change (IPCC). Such estimates may be erroneous when used at regional scales. Furthermore uncertainty assessments are rarely provided leading to estimates of land cover change carbon fluxes of unknown precision which may undermine efforts to properly evaluate land cover policies aimed at altering land cover dynamics. Here, we present a repeatable method to estimate carbon storage values and associated 95% confidence intervals (CI) for all five IPCC carbon pools (aboveground live carbon, litter, coarse woody debris, belowground live carbon and soil carbon) for data-deficient regions, using a combination of existing inventory data and systematic literature searches, weighted to ensure the final values are regionally specific. The method meets the IPCC 'Tier 2' reporting standard. We use this method to estimate carbon storage over an area of 33.9 million hectares of eastern Tanzania, reporting values for 30 land cover types. We estimate that this area stored 6.33 (5.92-6.74) Pg C in the year 2000. Carbon storage estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been underestimated for this region of Africa. Our study demonstrates the importance of obtaining regionally appropriate carbon storage estimates, and shows how such values can be produced for a relatively

  4. Two-stage optical recording: photoinduced birefringence and surface-mediated bits storage in bisazo-containing copolymers towards ultrahigh data memory.

    PubMed

    Hu, Yanlei; Wu, Dong; Li, Jiawen; Huang, Wenhao; Chu, Jiaru

    2016-10-03

    Ultrahigh density data storage is in high demand in the current age of big data and thus motivates many innovative storage technologies. Femtosecond laser induced multi-dimensional optical data storage is an appealing method to fulfill the demand for ultrahigh storage capacity. Here we report femtosecond laser induced two-stage optical storage in bisazobenzene copolymer films, achieved by manipulating the recording energies. Different mechanisms can be selected for specified memory use: two-photon isomerization (TPI) and laser induced surface deformation. Giant birefringence can be generated by TPI and brings about high signal-to-noise ratio (>20 dB) multi-dimensional reversible storage. Polarization-dependent surface deformation arises when increasing the recording energy, which not only facilitates multi-level storage by black bits (dots), but also enhances the bits' readout signal and storage stability. This facile bit-recording method, which enables completely different recording mechanisms in an identical storage medium, paves the way for sustainable big data storage.

  5. Experimental investigation of a page-oriented Lippmann holographic data storage system

    NASA Astrophysics Data System (ADS)

    Pauliat, Gilles; Contreras, Kevin

    2010-06-01

    Lippmann photography is a more than one-century-old interferometric process invented for recording colored images in thick black-and-white photographic emulsions. After a comparison between this photographic process and Denisyuk holography, we give some hints on applying this technique to high-density data storage by wavelength multiplexing in a page-oriented approach in thick media. For the first time, we experimentally investigate this approach. We anticipate that this storage architecture should allow capacities as large as those of conventional holography.

  6. Data storage and retrieval system abstract

    NASA Technical Reports Server (NTRS)

    Matheson, Barbara

    1992-01-01

    The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team led by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.

  7. Data storage and retrieval system abstract

    NASA Astrophysics Data System (ADS)

    Matheson, Barbara

    1992-09-01

    The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team led by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.

  8. Integrity Verification for Multiple Data Copies in Cloud Storage Based on Spatiotemporal Chaos

    NASA Astrophysics Data System (ADS)

    Long, Min; Li, You; Peng, Fei

    Aiming to strike a balance between the security, efficiency, and availability of data verification in cloud storage, a novel integrity verification scheme based on spatiotemporal chaos is proposed for multiple data copies. Spatiotemporal chaos is implemented for node calculation of the binary tree, and the location of the data in the cloud is verified. Meanwhile, dynamic operations can be made to the data. Furthermore, blinded information is used to prevent a third-party auditor (TPA) from leaking the users' data privacy in a public auditing process. Performance analysis and discussion indicate that the scheme is secure and efficient, and that it supports dynamic operation and the integrity verification of multiple copies of data. It has a great potential to be implemented in cloud storage services.
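
    The abstract does not specify the spatiotemporal-chaos map used for the node calculations, so the sketch below substitutes a standard cryptographic hash purely to show the binary-tree style of integrity checking over blocks of a stored copy; it is a generic Merkle-style check, not the published scheme.

      # Generic binary-tree (Merkle-style) integrity check over data blocks.
      # SHA-256 stands in for the paper's chaos-based node function, which is not
      # given in the abstract; only the tree structure of the check is illustrated.
      import hashlib

      def node_hash(b: bytes) -> bytes:
          return hashlib.sha256(b).digest()

      def tree_root(blocks):
          level = [node_hash(b) for b in blocks]
          while len(level) > 1:
              if len(level) % 2:                  # duplicate the last node if odd
                  level.append(level[-1])
              level = [node_hash(level[i] + level[i + 1])
                       for i in range(0, len(level), 2)]
          return level[0]

      copy_a = [b"block-0", b"block-1", b"block-2", b"block-3"]
      copy_b = [b"block-0", b"block-1", b"tampered", b"block-3"]
      print(tree_root(copy_a) == tree_root(copy_a))   # True: copy intact
      print(tree_root(copy_a) == tree_root(copy_b))   # False: copy was altered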

  9. Petaminer: Using ROOT for efficient data storage in MySQL database

    NASA Astrophysics Data System (ADS)

    Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.

    2010-04-01

    High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to PetaBytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.

  10. Hybrid data storage system in an HPC exascale environment

    DOEpatents

    Bent, John M.; Faibish, Sorin; Gupta, Uday K.; Tzelnic, Percy; Ting, Dennis P. J.

    2015-08-18

    A computer-executable method, system, and computer program product for managing I/O requests from a compute node in communication with a data storage system, including a first burst buffer node and a second burst buffer node, the computer-executable method, system, and computer program product comprising striping data on the first burst buffer node and the second burst buffer node, wherein a first portion of the data is communicated to the first burst buffer node and a second portion of the data is communicated to the second burst buffer node, processing the first portion of the data at the first burst buffer node, and processing the second portion of the data at the second burst buffer node.
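
    As a toy illustration only of the striping idea in the claim above (real burst buffer software involves far more than this), the sketch below splits one write into fixed-size chunks and alternates them between two in-memory stand-ins for burst buffer nodes; the names and chunk size are invented.

      # Toy striping of one write across two burst buffer nodes: even-indexed chunks
      # go to the first node, odd-indexed chunks to the second.
      CHUNK = 4   # bytes per stripe unit, kept tiny for readability

      class BurstBufferNode:
          def __init__(self, name):
              self.name, self.chunks = name, {}
          def write(self, offset, data):
              self.chunks[offset] = data          # stands in for real buffering/I/O

      def striped_write(data, node_a, node_b):
          for i in range(0, len(data), CHUNK):
              target = node_a if (i // CHUNK) % 2 == 0 else node_b
              target.write(i, data[i:i + CHUNK])

      def striped_read(length, node_a, node_b):
          out = bytearray()
          for i in range(0, length, CHUNK):
              source = node_a if (i // CHUNK) % 2 == 0 else node_b
              out += source.chunks[i]
          return bytes(out)

      a, b = BurstBufferNode("bb0"), BurstBufferNode("bb1")
      payload = b"checkpoint data from a compute node"
      striped_write(payload, a, b)
      assert striped_read(len(payload), a, b) == payload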

  11. Effect of storage time on gene expression data acquired from unfrozen archived newborn blood spots.

    PubMed

    Ho, Nhan T; Busik, Julia V; Resau, James H; Paneth, Nigel; Khoo, Sok Kean

    2016-11-01

    Unfrozen archived newborn blood spots (NBS) have been shown to retain sufficient messenger RNA (mRNA) for gene expression profiling. However, the effect of storage time at ambient temperature for NBS samples in relation to the quality of gene expression data is relatively unknown. Here, we evaluated mRNA expression from quantitative real-time PCR (qRT-PCR) and microarray data obtained from NBS samples stored at ambient temperature to determine the effect of storage time on the quality of gene expression. These data were generated in a previous case-control study examining NBS in 53 children with cerebral palsy (CP) and 53 matched controls. NBS sample storage period ranged from 3 to 16 years at ambient temperature. We found persistently low RNA integrity numbers (RIN=2.3±0.71) and 28S/18S rRNA ratios (~0) across NBS samples for all storage periods. In both qRT-PCR and microarray data, the expression of three common housekeeping genes (beta cytoskeletal actin (ACTB), glyceraldehyde 3-phosphate dehydrogenase (GAPDH), and peptidylprolyl isomerase A (PPIA)) decreased with increased storage time. Median values of each microarray probe intensity at log 2 scale also decreased over time. After eight years of storage, probe intensity values were largely reduced to background intensity levels. Of 21,500 genes tested, 89% significantly decreased in signal intensity, with 13,551, 10,730, and 9925 genes detected within 5 years, >5 to <10 years, and >10 years of storage, respectively. We also examined the expression of two gender-specific genes (X inactivation-specific transcript, XIST, and lysine-specific demethylase 5D, KDM5D) and seven gene sets representing the inflammatory, hypoxic, coagulative, and thyroidal pathways hypothesized to be related to CP risk to determine the effect of storage time on the detection of these biologically relevant genes. We found the gender-specific genes and CP-related gene sets detectable in all storage periods, but exhibited differential expression

  12. Technology Assessment of High Capacity Data Storage Systems: Can We Avoid a Data Survivability Crisis?

    NASA Technical Reports Server (NTRS)

    Halem, Milton

    1999-01-01

    In a recent address at the California Science Center in Los Angeles, Vice President Al Gore articulated a Digital Earth Vision. That vision spoke of developing a multi-resolution, three-dimensional visual representation of the planet in which we can roam and zoom into vast quantities of embedded geo-referenced data. The vision was not limited to moving through space, but also allowed travel along a time-line, which can be set for days, years, centuries, or even geological epochs. A working group of Federal agencies, charged with developing a coordinated program to implement the Vice President's vision, defined the Digital Earth as a visual representation of our planet that enables a person to explore and interact with the vast amounts of natural and cultural geo-referenced information gathered about the Earth. One of the challenges identified by the agencies was whether technology would be available to permanently store and deliver all the digital data that enterprises might want to save for decades and centuries. Satellite digital data is growing at a Moore's Law pace, as is computer-generated data. Similarly, the density of digital storage media in our information-intensive society is also increasing by a factor of four every three years. The technological bottleneck is that the bandwidth for transferring data is only growing by a factor of four every nine years. This implies that the migration of data to viable long-term storage is growing more slowly. The implication is that older data stored on increasingly obsolete media are at considerable risk if they cannot be continuously migrated to media with longer lifetimes. Another problem occurs when the software and hardware systems for which the media were designed are no longer serviced by their manufacturers. Many instances exist where support for these systems is phased out after mergers or when manufacturers go out of business. In addition, survivability of older media can suffer from

  13. A system for the input and storage of data in the Besm-6 digital computer

    NASA Technical Reports Server (NTRS)

    Schmidt, K.; Blenke, L.

    1975-01-01

    Computer programs used for the decoding and storage of large volumes of data on the BESM-6 computer are described. The following factors are discussed: the programming control language allows the programs to be run as part of a modular programming system used in data processing; data control is executed in a hierarchically built file on magnetic tape with sequential index storage; and the programs are not dependent on the structure of the data.

  14. Benefits and Pitfalls of GRACE Terrestrial Water Storage Data Assimilation

    NASA Technical Reports Server (NTRS)

    Girotto, Manuela

    2018-01-01

    Satellite observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) mission have a coarse resolution in time (monthly) and space (roughly 150,000 sq km at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Nonetheless, data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This presentation illustrates some of the benefits and drawbacks of assimilating TWS observations from GRACE into a land surface model over the continental United States and India. The assimilation scheme yields improved skill metrics for groundwater compared to the no-assimilation simulations. A smaller impact is seen for surface and root-zone soil moisture. Further, GRACE observes TWS depletion associated with anthropogenic groundwater extraction. Results from the assimilation emphasize the importance of representing anthropogenic processes in land surface modeling and data assimilation systems.

  15. A 1985-2015 data-driven global reconstruction of GRACE total water storage

    NASA Astrophysics Data System (ADS)

    Humphrey, Vincent; Gudmundsson, Lukas; Isabelle Seneviratne, Sonia

    2016-04-01

    After thirteen years of measurements, the Gravity Recovery and Climate Experiment (GRACE) mission has enabled an unprecedented view of total water storage (TWS) variability. However, the relatively short record length, irregular time steps and multiple data gaps since 2011 still represent important limitations to a wider use of this dataset within the hydrological and climatological community, especially for applications such as model evaluation or assimilation of GRACE in land surface models. To address this issue, we make use of the available GRACE record (2002-2015) to infer local statistical relationships between detrended monthly TWS anomalies and the main controlling atmospheric drivers (e.g. daily precipitation and temperature) at 1 degree resolution (Humphrey et al., in revision). Long-term and homogeneous monthly time series of detrended anomalies in total water storage are then reconstructed for the period 1985-2015. The quality of this reconstruction is evaluated in two different ways. First, we perform a cross-validation experiment to assess the performance and robustness of the statistical model. Second, we compare with independent basin-scale estimates of TWS anomalies derived by means of combined atmospheric and terrestrial water balance using atmospheric water vapor flux convergence and change in atmospheric water vapor content (Mueller et al. 2011). The reconstructed time series are shown to provide robust data-driven estimates of global variations in water storage over large regions of the world. Example applications are provided for illustration, including an analysis of some selected major drought events which occurred before the GRACE era. References: Humphrey V, Gudmundsson L, Seneviratne SI (in revision) Assessing global water storage variability from GRACE: trends, seasonal cycle, sub-seasonal anomalies and extremes. Surv Geophys. Mueller B, Hirschi M, Seneviratne SI (2011) New diagnostic estimates of variations in terrestrial water storage

  16. mz5: Space- and Time-efficient Storage of Mass Spectrometry Data Sets*

    PubMed Central

    Wilhelm, Mathias; Kirchner, Marc; Steen, Judith A. J.; Steen, Hanno

    2012-01-01

    Across a host of MS-driven-omics fields, researchers witness the acquisition of ever increasing amounts of high throughput MS data and face the need for their compact yet efficiently accessible storage. Addressing the need for an open data exchange format, the Proteomics Standards Initiative and the Seattle Proteome Center at the Institute for Systems Biology independently developed the mzData and mzXML formats, respectively. In a subsequent joint effort, they defined an ontology and associated controlled vocabulary that specifies the contents of MS data files, implemented as the newer mzML format. All three formats are based on XML and are thus not particularly efficient in either storage space requirements or read/write speed. This contribution introduces mz5, a complete reimplementation of the mzML ontology that is based on the efficient, industrial strength storage backend HDF5. Compared with the current mzML standard, this strategy yields an average file size reduction to ∼54% and increases linear read and write speeds ∼3–4-fold. The format is implemented as part of the ProteoWizard project and is available under a permissive Apache license. Additional information and download links are available from http://software.steenlab.org/mz5. PMID:21960719
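
    The sketch below is not the mz5 schema; it is a minimal illustration, using h5py, of why an HDF5 backend with per-dataset compression and random access to individual spectra is attractive compared with parsing a monolithic XML file. Group and dataset names are hypothetical.

      # Minimal HDF5 illustration: store m/z and intensity arrays with compression,
      # then read one spectrum back without touching the rest of the file.
      import h5py
      import numpy as np

      rng = np.random.default_rng(0)
      with h5py.File("spectra_demo.h5", "w") as f:
          for i in range(3):                          # three fake spectra
              grp = f.create_group(f"spectrum_{i}")
              grp.create_dataset("mz", data=np.sort(rng.uniform(100, 2000, 5000)),
                                 compression="gzip")
              grp.create_dataset("intensity", data=rng.random(5000),
                                 compression="gzip")

      with h5py.File("spectra_demo.h5", "r") as f:
          mz = f["spectrum_1/mz"][:]                  # random access to one spectrum
          print(mz.shape, float(mz.min()), float(mz.max()))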

  17. mz5: space- and time-efficient storage of mass spectrometry data sets.

    PubMed

    Wilhelm, Mathias; Kirchner, Marc; Steen, Judith A J; Steen, Hanno

    2012-01-01

    Across a host of MS-driven-omics fields, researchers witness the acquisition of ever increasing amounts of high throughput MS data and face the need for their compact yet efficiently accessible storage. Addressing the need for an open data exchange format, the Proteomics Standards Initiative and the Seattle Proteome Center at the Institute for Systems Biology independently developed the mzData and mzXML formats, respectively. In a subsequent joint effort, they defined an ontology and associated controlled vocabulary that specifies the contents of MS data files, implemented as the newer mzML format. All three formats are based on XML and are thus not particularly efficient in either storage space requirements or read/write speed. This contribution introduces mz5, a complete reimplementation of the mzML ontology that is based on the efficient, industrial strength storage backend HDF5. Compared with the current mzML standard, this strategy yields an average file size reduction to ∼54% and increases linear read and write speeds ∼3-4-fold. The format is implemented as part of the ProteoWizard project and is available under a permissive Apache license. Additional information and download links are available from http://software.steenlab.org/mz5.

  18. Hierarchical programming for data storage and visualization

    USGS Publications Warehouse

    Donovan, John M.; Smith, Peter E.; ,

    2001-01-01

    Graphics software is an essential tool for interpreting, analyzing, and presenting data from multidimensional hydrodynamic models used in estuarine and coastal ocean studies. The post-processing of time-varying three-dimensional model output presents unique requirements for data visualization because of the large volume of data that can be generated and the multitude of time scales that must be examined. Such data can relate to estuarine or coastal ocean environments and come from numerical models or field instruments. One useful software tool for the display, editing, visualization, and printing of graphical data is the Gr application, written by the first author for use in the U.S. Geological Survey San Francisco Bay Program. The Gr application has been made available to the public via the Internet since the year 2000. The Gr application is written in the Java (Sun Microsystems, Nov. 29, 2001) programming language and uses the Extensible Markup Language standard for hierarchical data storage. Gr presents a hierarchy of objects to the user that can be edited using a common interface. Java's object-oriented capabilities allow Gr to treat data, graphics, and tools equally and to save them all to a single XML file.

  19. Mariner 9 data storage subsystem flight performance summary

    NASA Technical Reports Server (NTRS)

    Thomas, N. E.; Larman, B. T.

    1973-01-01

    The performance of the Mariner 9 Data Storage Subsystem (DSS) throughout the primary and extended missions is summarized. The information presented is limited to reporting of anomalies that occurred during the playback sequences. Tables and figures describe the anomalies (dropouts and missing and added bits in the imaging data) as a function of time (accumulated tape passes). The results indicate that the performance of the DSS was satisfactory and within specification throughout the mission. The data presented are taken from the Spacecraft Team Incident/Surprise Anomaly Log recorded during the mission. Pertinent statistics concerning the tape transport performance are given. Also presented is a brief description of DSS operation, particularly that related to the recorded anomalies. This covers the video data encoding and how it is interpreted/decoded by ground data processing, and the functional operation of the DSS in abnormal conditions such as loss of lock on the playback signal.

  20. Towards Regional, Error-Bounded Landscape Carbon Storage Estimates for Data-Deficient Areas of the World

    PubMed Central

    Willcock, Simon; Phillips, Oliver L.; Platts, Philip J.; Balmford, Andrew; Burgess, Neil D.; Lovett, Jon C.; Ahrends, Antje; Bayliss, Julian; Doggart, Nike; Doody, Kathryn; Fanning, Eibleis; Green, Jonathan; Hall, Jaclyn; Howell, Kim L.; Marchant, Rob; Marshall, Andrew R.; Mbilinyi, Boniface; Munishi, Pantaleon K. T.; Owen, Nisha; Swetnam, Ruth D.; Topp-Jorgensen, Elmer J.; Lewis, Simon L.

    2012-01-01

    Monitoring landscape carbon storage is critical for supporting and validating climate change mitigation policies. These may be aimed at reducing deforestation and degradation, or increasing terrestrial carbon storage at local, regional and global levels. However, due to data-deficiencies, default global carbon storage values for given land cover types such as ‘lowland tropical forest' are often used, termed ‘Tier 1 type' analyses by the Intergovernmental Panel on Climate Change (IPCC). Such estimates may be erroneous when used at regional scales. Furthermore, uncertainty assessments are rarely provided, leading to estimates of land cover change carbon fluxes of unknown precision which may undermine efforts to properly evaluate land cover policies aimed at altering land cover dynamics. Here, we present a repeatable method to estimate carbon storage values and associated 95% confidence intervals (CI) for all five IPCC carbon pools (aboveground live carbon, litter, coarse woody debris, belowground live carbon and soil carbon) for data-deficient regions, using a combination of existing inventory data and systematic literature searches, weighted to ensure the final values are regionally specific. The method meets the IPCC ‘Tier 2' reporting standard. We use this method to estimate carbon storage over an area of 33.9 million hectares of eastern Tanzania, reporting values for 30 land cover types. We estimate that this area stored 6.33 (5.92–6.74) Pg C in the year 2000. Carbon storage estimates for the same study area extracted from five published Africa-wide or global studies show a mean carbon storage value of ∼50% of that reported using our regional values, with four of the five studies reporting lower carbon storage values. This suggests that carbon storage may have been underestimated for this region of Africa. Our study demonstrates the importance of obtaining regionally appropriate carbon storage estimates, and shows how such values can be produced for
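
    The toy sketch below illustrates the basic step of turning plot-level inventory values into a land-cover-specific carbon density with a 95% confidence interval. The plot values, area, and simple t-interval are assumptions; the published method additionally weights literature data regionally and combines all five carbon pools.

```python
# Toy example: mean carbon density with a 95% CI for one land cover class,
# scaled to an assumed mapped area. Numbers are invented for illustration.
import numpy as np
from scipy import stats

plots_mg_c_per_ha = np.array([112.0, 98.5, 130.2, 121.7, 105.3, 140.1, 95.8])  # hypothetical plots

mean = plots_mg_c_per_ha.mean()
sem = stats.sem(plots_mg_c_per_ha)
low, high = stats.t.interval(0.95, df=len(plots_mg_c_per_ha) - 1, loc=mean, scale=sem)

area_ha = 2.5e6                       # hypothetical area of this land cover type
total_pg_c = mean * area_ha * 1e-9    # Mg C (= t C) -> Pg C
print(f"{mean:.1f} Mg C/ha (95% CI {low:.1f}-{high:.1f}); ~{total_pg_c:.3f} Pg C over the class")
```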

  1. Research Studies on Advanced Optical Module/Head Designs for Optical Data Storage

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Preprints are presented from the recent 1992 Optical Data Storage meeting in San Jose. The papers are divided into the following topical areas: Magneto-optical media (Modeling/design and fabrication/characterization/testing); Optical heads (holographic optical elements); and Optical heads (integrated optics). Some representative titles are as follows: Diffraction analysis and evaluation of several focus and track error detection schemes for magneto-optical disk systems; Proposal for massively parallel data storage system; Transfer function characteristics of super resolving systems; Modeling and measurement of a micro-optic beam deflector; Oxidation processes in magneto-optic and related materials; and A modal analysis of lamellar diffraction gratings in conical mountings.

  2. The TDR: A Repository for Long Term Storage of Geophysical Data and Metadata

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Baltzer, T.; Caron, J.

    2006-12-01

    For many years Unidata has provided easy, low cost data access to universities and research labs. Historically Unidata technology provided access to data in near real time. In recent years Unidata has additionally turned to providing middleware to serve longer term data and associated metadata via its THREDDS technology, the most recent offering being the THREDDS Data Server (TDS). The TDS provides middleware for metadata access and management, OPeNDAP data access, and integration with the Unidata Integrated Data Viewer (IDV), among other benefits. The TDS was designed to support rolling archives of data, that is, data that exist only for a relatively short, predefined time window. Now we are creating an addition to the TDS, called the THREDDS Data Repository (TDR), which allows users to store and retrieve data and other objects for an arbitrarily long time period. Data in the TDR can also be served by the TDS. The TDR performs important functions of locating storage for the data, moving the data to and from the repository, assigning unique identifiers, and generating metadata. The TDR framework supports pluggable components that allow tailoring an implementation for a particular application. The Linked Environments for Atmospheric Discovery (LEAD) project provides an excellent use case for the TDR. LEAD is a multi-institutional Large Information Technology Research project funded by the National Science Foundation (NSF). The goal of LEAD is to create a framework based on Grid and Web Services to support mesoscale meteorology research and education. This includes capabilities such as launching forecast models, mining data for meteorological phenomena, and dynamic workflows that are automatically reconfigurable in response to changing weather. LEAD presents unique challenges in managing and storing large data volumes from real-time observational systems as well as data that are dynamically created during the execution of adaptive workflows. For example, in order to

  3. Study report on laser storage and retrieval of image data

    NASA Technical Reports Server (NTRS)

    Becker, C. H.

    1976-01-01

    The theoretical foundation is presented for a system of real-time nonphotographic and nonmagnetic digital laser storage and retrieval of image data. The system utilizes diffraction-limited laser focusing upon thin metal films, melting elementary holes in the metal films in laser focus. The metal films are encapsulated in rotating flexible mylar discs which act as the permanent storage carriers. Equal-sized holes encompass two-dimensional digital ensembles of information bits which are time-sequentially (bit by bit) stored and retrieved. The bits possess the smallest possible size, defined by the Rayleigh criterion of coherent physical optics. Space- and time-invariant reflective read-out of laser discs with a small laser provides access to the stored digital information. By eliminating photographic and magnetic data processing, which characterize the previous state of the art, photographic grain, diffusion, and gamma-distortion do not exist. Similarly, magnetic domain structures, magnetic gaps, and magnetic read-out are absent with a digital laser disc system.

  4. PACS storage technology update: holographic storage.

    PubMed

    Colang, John E; Johnston, James N

    2006-01-01

    This paper focuses on the emerging technology of holographic storage and its effect on picture archiving and communication systems (PACS). A review of the emerging technology is presented, which includes a high level description of holographic drives and the associated substrate media, the laser and optical technology, and the spatial light modulator. The potential advantages and disadvantages of holographic drive and storage technology are evaluated. PACS administrators face myriad complex and expensive storage solutions and selecting an appropriate system is time-consuming and costly. Storage technology may become obsolete quickly because of the exponential nature of the advances in digital storage media. Holographic storage may turn out to be a low cost, high speed, high volume storage solution of the future; however, data is inconclusive at this early stage of the technology lifecycle. Despite the current lack of quantitative data to support the hypothesis that holographic technology will have a significant effect on PACS and standards of practice, it seems likely from the current information that holographic technology will generate significant efficiencies. This paper assumes the reader has a fundamental understanding of PACS technology.

  5. 76 FR 2707 - In the Matter of Certain Data Storage Products and Components Thereof; Notice of Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-14

    ... complaint filed by Data Network Storage, LLC of Newport Beach, California (``DNS''). 75 FR 71736 (Nov. 24... States after importation of certain data storage products and components thereof by reason of... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-748] In the Matter of Certain Data...

  6. The Challenges Facing Science Data Archiving on Current Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Peavey, Bernard; Behnke, Jeanne (Editor)

    1996-01-01

    This paper discusses the desired characteristics of a tape-based petabyte science data archive and retrieval system required to store and distribute several terabytes (TB) of data per day over an extended period of time, probably more than 115 years, in support of programs such as the Earth Observing System Data and Information System (EOSDIS). These characteristics take into consideration not only cost effective and affordable storage capacity, but also rapid access to selected files, and reading rates that are needed to satisfy thousands of retrieval transactions per day. It seems that where rapid random access to files is not crucial, the tape medium, magnetic or optical, continues to offer cost effective data storage and retrieval solutions, and is likely to do so for many years to come. However, in environments like EOS these tape based archive solutions provide less than full user satisfaction. Therefore, the objective of this paper is to describe the performance and operational enhancements that need to be made to the current tape based archival systems in order to achieve greater acceptance by the EOS and similar user communities.

  7. Mahanaxar: quality of service guarantees in high-bandwidth, real-time streaming data storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bigelow, David; Bent, John; Chen, Hsing-Bung

    2010-04-05

    Large radio telescopes, cyber-security systems monitoring real-time network traffic, and others have specialized data storage needs: guaranteed capture of an ultra-high-bandwidth data stream, retention of the data long enough to determine what is 'interesting,' retention of interesting data indefinitely, and concurrent read/write access to determine what data is interesting, without interrupting the ongoing capture of incoming data. Mahanaxar addresses this problem. Mahanaxar guarantees streaming real-time data capture at (nearly) the full rate of the raw device, allows concurrent read and write access to the device on a best-effort basis without interrupting the data capture, and retains data as long as possible given the available storage. It has built-in mechanisms for reliability and indexing, can scale to meet arbitrary bandwidth requirements, and handles both small and large data elements equally well. Results from our prototype implementation show that Mahanaxar provides both better guarantees and better performance than traditional file systems.

  8. Full-field digital mammography image data storage reduction using a crop tool.

    PubMed

    Kang, Bong Joo; Kim, Sung Hun; An, Yeong Yi; Choi, Byung Gil

    2015-05-01

    The storage requirements for full-field digital mammography (FFDM) in a picture archiving and communication system are significant, so methods to reduce the data set size are needed. An FFDM crop tool for this purpose was designed, implemented, and tested. A total of 1,651 screening mammography cases with bilateral FFDMs were included in this study. The images were cropped using a DICOM editor while maintaining image quality. The cases were evaluated according to the breast volume (1/4, 2/4, 3/4, and 4/4) in the craniocaudal view. The image sizes between the cropped image group and the uncropped image group were compared. The overall image quality and reader's preference were independently evaluated by the consensus of two radiologists. Digital storage requirements for sets of four uncropped to cropped FFDM images were reduced by 3.8 to 82.9%. The mean reduction rates according to the 1/4-4/4 breast volumes were 74.7, 61.1, 38, and 24%, indicating that the lower the breast volume, the smaller the size of the cropped data set. The total image data set size was reduced from 87 to 36.7 GB, or a 57.7% reduction. The overall image quality and the reader's preference for the cropped images were higher than those of the uncropped images. FFDM data storage requirements can be significantly reduced using a crop tool.
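
    A hedged sketch of the cropping idea with pydicom is shown below. The file names, the background-thresholding heuristic used to find the breast region, and the metadata handling are assumptions; a clinical tool would need to guarantee that no diagnostically relevant tissue is removed and that all affected DICOM attributes are updated consistently.

```python
# Sketch: crop a mammogram's pixel data to its non-background bounding box to
# reduce stored size. Input/output file names and the heuristic are hypothetical.
import numpy as np
import pydicom

ds = pydicom.dcmread("ffdm_cc_view.dcm")       # hypothetical input file
img = ds.pixel_array

# Find a bounding box around non-background pixels (toy heuristic).
mask = img > img.min()
rows = np.any(mask, axis=1)
cols = np.any(mask, axis=0)
r0, r1 = np.where(rows)[0][[0, -1]]
c0, c1 = np.where(cols)[0][[0, -1]]

cropped = img[r0:r1 + 1, c0:c1 + 1]

# Write the cropped pixel data back and fix the dimension tags.
ds.Rows, ds.Columns = cropped.shape
ds.PixelData = cropped.tobytes()
ds.save_as("ffdm_cc_view_cropped.dcm")
```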

  9. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004, the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  10. Large-scale electrophysiology: acquisition, compression, encryption, and storage of big data.

    PubMed

    Brinkmann, Benjamin H; Bower, Mark R; Stengel, Keith A; Worrell, Gregory A; Stead, Matt

    2009-05-30

    The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32kHz per channel with 18-bits of A/D resolution are capable of resolving extracellular voltages spanning single-neuron action potentials, high frequency oscillations, and high amplitude ultra-slow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information.

  11. Large-scale Electrophysiology: Acquisition, Compression, Encryption, and Storage of Big Data

    PubMed Central

    Brinkmann, Benjamin H.; Bower, Mark R.; Stengel, Keith A.; Worrell, Gregory A.; Stead, Matt

    2009-01-01

    The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18-bits of A/D resolution are capable of resolving extracellular voltages spanning single neuron action potentials, high frequency oscillations, and high amplitude ultraslow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information. PMID:19427545
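
    The sketch below illustrates, in simplified form, the storage ideas named in these abstracts: difference encoding of an integer-sampled signal, lossless compression, and a CRC-32 integrity check. zlib is used here as a stand-in for the range encoder described in the papers, the encryption step is omitted, and the signal is synthetic.

```python
# Simplified sketch of delta encoding + lossless compression + CRC-32, not the
# published file format. ~1 second of a synthetic 32 kHz int32 channel.
import numpy as np
import zlib

signal = np.cumsum(np.random.randint(-50, 50, size=32_000)).astype(np.int32)

# Delta-encode: successive differences are small and compress well.
deltas = np.diff(signal, prepend=0).astype(np.int32)
raw = deltas.tobytes()
compressed = zlib.compress(raw, 9)
checksum = zlib.crc32(raw)

print(f"raw {len(raw)} B -> compressed {len(compressed)} B, CRC32 {checksum:#010x}")

# Decompression restores the original samples exactly (lossless) and verifies integrity.
decompressed = zlib.decompress(compressed)
restored = np.cumsum(np.frombuffer(decompressed, dtype=np.int32)).astype(np.int32)
assert np.array_equal(restored, signal)
assert zlib.crc32(decompressed) == checksum
```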

  12. Monitoring of large-scale federated data storage: XRootD and beyond

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Diguez Arias, D.; Giordano, D.; Oleynik, D.; Petrosyan, A.; Saiz, P.; Tadel, M.; Tuckett, D.; Vukotic, I.

    2014-06-01

    The computing models of the LHC experiments are gradually moving from hierarchical data models with centrally managed data pre-placement towards federated storage which provides seamless access to data files independently of their location and dramatically improves recovery due to fail-over mechanisms. Construction of the data federations and understanding the impact of the new approach to data management on user analysis requires complete and detailed monitoring. Monitoring functionality should cover the status of all components of the federated storage, measuring data traffic and data access performance, as well as being able to detect any kind of inefficiencies and to provide hints for resource optimization and effective data distribution policy. Data mining of the collected monitoring data provides a deep insight into new usage patterns. In the WLCG context, there are several federations currently based on the XRootD technology. This paper will focus on monitoring for the ATLAS and CMS XRootD federations implemented in the Experiment Dashboard monitoring framework. Both federations consist of many dozens of sites accessed by many hundreds of clients and they continue to grow in size. Handling of the monitoring flow generated by these systems has to be well optimized in order to achieve the required performance. Furthermore, this paper demonstrates that the XRootD monitoring architecture is sufficiently generic to be easily adapted for other technologies, such as HTTP/WebDAV dynamic federations.

  13. Optically Addressed Nanostructures for High Density Data Storage

    DTIC Science & Technology

    2005-10-14

    beam to sub-wavelength resolutions. X. Refereed Journal Publications I. M. D. Stenner, D. J. Gauthier, and M. A. Neifeld, "The speed of information in a...profiles for high-density optical data storage," Optics Communications, Vol. 253, pp. 56-69, 2005. 5. M. D. Stenner, D. J. Gauthier, and M. A. Neifeld, "Fast...causal information transmission in a medium with a slow group velocity," Physical Review Letters, Vol. 94, February 2005. 6. M. D. Stenner, M. A

  14. Challenges for data storage in medical imaging research.

    PubMed

    Langer, Steve G

    2011-04-01

    Researchers in medical imaging have multiple challenges for storing, indexing, maintaining viability, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served with an outsourcing strategy for some management aspects. This paper outlines an approach to manage the main objectives faced by medical imaging scientists whose work includes processing and data mining on non-standard file formats, and relating those files to their DICOM standard descendants. The capacity of the approach scales as the researcher's need grows by leveraging the on-demand provisioning ability of cloud computing.

  15. A Columnar Storage Strategy with Spatiotemporal Index for Big Climate Data

    NASA Astrophysics Data System (ADS)

    Hu, F.; Bowen, M. K.; Li, Z.; Schnase, J. L.; Duffy, D.; Lee, T. J.; Yang, C. P.

    2015-12-01

    Large collections of observational, reanalysis, and climate model output data may grow to as large as 100 PB in the coming years, so climate data are firmly in the Big Data domain, and various distributed computing frameworks have been utilized to address the challenges of big climate data analysis. However, due to the binary data formats (NetCDF, HDF) with high spatial and temporal dimensions, the computing frameworks in the Apache Hadoop ecosystem are not originally suited for big climate data. In order to make the computing frameworks in the Hadoop ecosystem directly support big climate data, we propose a columnar storage format with a spatiotemporal index to store climate data, which will support any project in the Apache Hadoop ecosystem (e.g. MapReduce, Spark, Hive, Impala). With this approach, the climate data will be transformed into the binary Parquet data format, a columnar storage format, and a spatial and temporal index will be built and appended to the end of the Parquet files to enable real-time data queries. Then such climate data in the Parquet data format can be consumed by any computing framework in the Hadoop ecosystem. The proposed approach is evaluated using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. Experimental results show that this approach can efficiently close the gap between big climate data and the distributed computing frameworks, and that the spatiotemporal index can significantly accelerate data querying and processing.
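
    A minimal sketch of the columnar idea is given below: gridded values are flattened into a time/lat/lon table, written as Parquet with pyarrow, and read back with column pruning and row filtering. The variable, the bounding-box metadata used as a crude spatiotemporal index, and the file layout are assumptions rather than the system described in the abstract.

```python
# Sketch: store a small synthetic climate grid in Parquet and query it selectively.
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

times = pd.date_range("2015-07-01", periods=24, freq="H")
lats = np.arange(-10.0, 10.0, 1.0)
lons = np.arange(100.0, 120.0, 1.0)
grid = pd.MultiIndex.from_product([times, lats, lons], names=["time", "lat", "lon"]).to_frame(index=False)
grid["t2m"] = 290 + 5 * np.random.rand(len(grid))   # synthetic 2 m temperature field

table = pa.Table.from_pandas(grid, preserve_index=False)
# Attach a crude spatiotemporal "index" (bounding box and time range) as file metadata.
meta = {b"lat_min": b"-10", b"lat_max": b"9", b"lon_min": b"100", b"lon_max": b"119",
        b"time_min": str(times[0]).encode(), b"time_max": str(times[-1]).encode()}
table = table.replace_schema_metadata({**(table.schema.metadata or {}), **meta})
pq.write_table(table, "climate_subset.parquet", row_group_size=len(lats) * len(lons))

# Column pruning + predicate pushdown: read only what a query needs.
subset = pq.read_table("climate_subset.parquet", columns=["time", "lat", "lon", "t2m"],
                       filters=[("lat", ">=", 0.0), ("lon", "<", 105.0)])
print(subset.num_rows)
```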

  16. Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations: Data used in Geosphere Journal Article

    DOE Data Explorer

    Thomas A. Buscheck

    2015-06-01

    This data submission is for Phase 2 of Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations, which focuses on multi-fluid (CO2 and brine) geothermal energy production and diurnal bulk energy storage in geologic settings that are suitable for geologic CO2 storage. This data submission includes all data used in the Geosphere Journal article by Buscheck et al (2016). All assumptions are discussed in that article.

  17. Space environment data storage and access: lessons learned and recommendations for the future

    NASA Astrophysics Data System (ADS)

    Evans, Hugh; Heynderickx, Daniel

    2012-07-01

    With the ever increasing volume of space environment data available at present and planned for the near future, the demands on data storage and access methods are increasing as well. In addition, continued access to historical, archived data remains crucial. On the basis of many years of experience, the authors identify the following issues as important for continued and efficient handling of datasets now and in the future: The huge data volumes currently or very soon available from a number of space missions will limit direct Internet download access to even relatively short epoch ranges of data. Therefore, data providers should establish or extend standardised data (post-) processing services so that only data query results need to be downloaded. Although a single standardised data format will in all likelihood remain a utopia, data providers should at least include extensive metadata with their data products, according to established standards and practices (e.g. ISTP, SPASE). Standardisation of (sets of) metadata greatly facilitates data mining and querying. The use of SQL database storage should be considered instead of, or in parallel with, classic storage of data files. The use of SQL does away with having to handle file parsing and processing, while at the same time standard access protocols can be used to (remotely) connect to such data repositories. Many data holdings are still lacking in extensive descriptions of data provenance (e.g. instrument description), content and format. Unfortunately, detailed data information is usually rejected by scientific and technical journals. Re-processing of historical archived datasets into modern formats, making them easily available and usable, is urgently required, as knowledge is being lost. A global data directory has still not been achieved; policy makers should enforce stricter rules for "broadcasting" dataset information.
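
    The recommendation to keep queryable SQL storage alongside (or instead of) plain data files can be illustrated with a small SQLite catalogue, as sketched below. The table layout, field names, and the example record are assumptions loosely modelled on ISTP/SPASE-style descriptive metadata.

```python
# Sketch: register dataset files with standardised metadata in SQLite, then answer
# queries against the catalogue without parsing any data files.
import sqlite3

conn = sqlite3.connect("space_env_catalogue.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS datasets (
        id INTEGER PRIMARY KEY,
        mission TEXT, instrument TEXT, parameter TEXT,
        start_utc TEXT, stop_utc TEXT, file_uri TEXT
    )""")
conn.execute(
    "INSERT INTO datasets (mission, instrument, parameter, start_utc, stop_utc, file_uri) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("GOES-15", "EPS", "proton_flux", "2012-03-07T00:00", "2012-03-08T00:00",
     "file:///archive/goes15/eps/2012/067.cdf"))   # hypothetical entry
conn.commit()

# A metadata query returns only matching file references, not the data itself.
rows = conn.execute(
    "SELECT file_uri FROM datasets WHERE parameter = ? AND start_utc >= ?",
    ("proton_flux", "2012-03-01")).fetchall()
print(rows)
```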

  18. Using Object Storage Technology vs Vendor Neutral Archives for an Image Data Repository Infrastructure.

    PubMed

    Bialecki, Brian; Park, James; Tilkin, Mike

    2016-08-01

    The intent of this project was to use object storage and its database, which has the ability to add custom extensible metadata to an imaging object being stored within the system, to harness the power of its search capabilities, and to close the technology gap that healthcare faces. This creates a non-disruptive tool that can be used natively by both legacy systems and the healthcare systems of today which leverage more advanced storage technologies. The base infrastructure can be populated alongside current workflows without any interruption to the delivery of services. In certain use cases, this technology can be seen as a true alternative to the VNA (Vendor Neutral Archive) systems implemented by healthcare today. The scalability, security, and ability to process complex objects makes this more than just storage for image data and a commodity to be consumed by PACS (Picture Archiving and Communication System) and workstations. Object storage is a smart technology that can be leveraged to create vendor independence, standards compliance, and a data repository that can be mined for truly relevant content by adding additional context to search capabilities. This functionality can lead to efficiencies in workflow and a wealth of minable data to improve outcomes into the future.

  19. Combined statistical analyses for long-term stability data with multiple storage conditions: a simulation study.

    PubMed

    Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R

    2014-01-01

    Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and higher numbers of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis does estimate the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
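
    The contrast between the two approaches can be sketched as below: a separate ordinary least-squares fit per storage condition versus one combined model whose residual variance (and degrees of freedom) are pooled across conditions. The data, the model form, and the use of statsmodels are illustrative assumptions; real ICH Q1E shelf-life estimation additionally handles batch poolability and one-sided confidence bounds on the mean.

```python
# Sketch: separate regression per storage condition vs one combined regression
# with a shared residual variance, on invented assay data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.tile([0, 3, 6, 9, 12, 18], 2)
condition = np.repeat(["25C/60%RH", "30C/65%RH"], 6)
assay = 100 - np.where(condition == "25C/60%RH", 0.20, 0.35) * months + rng.normal(0, 0.4, 12)
df = pd.DataFrame({"months": months, "condition": condition, "assay": assay})

# Separate analysis: one fit (and one variance estimate) per storage condition.
for cond, sub in df.groupby("condition"):
    fit = smf.ols("assay ~ months", data=sub).fit()
    print(cond, "slope:", round(fit.params["months"], 3), "df_resid:", int(fit.df_resid))

# Combined analysis: one model across conditions, pooling the residual variance.
combined = smf.ols("assay ~ months * condition", data=df).fit()
print("combined df_resid:", int(combined.df_resid))
```

    The extra residual degrees of freedom of the combined fit are what the simulation study above weighs against the differing treatment of intercepts and batch poolability.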

  20. ACCELERATORS: Preliminary application of turn-by-turn data analysis to the SSRF storage ring

    NASA Astrophysics Data System (ADS)

    Chen, Jian-Hui; Zhao, Zhen-Tang

    2009-07-01

    There is growing interest in utilizing the beam position monitor turn-by-turn (TBT) data to debug accelerators. TBT data can be used to determine the linear optics, coupled optics and nonlinear behaviors of the storage ring lattice. This is not only a useful complement to other methods of determining the linear optics such as LOCO, but also provides a possibility to uncover more hidden phenomena. In this paper, a preliminary application of a β function measurement to the SSRF storage ring is presented.

  1. Online data handling and storage at the CMS experiment

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
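
    A back-of-the-envelope check (not taken from the paper itself) shows how the quoted ~2 GB/s output rate and the 250 TB buffer relate:

```python
# Quick consistency check of the throughput and capacity figures quoted above.
rate_gb_per_s = 2                     # aggregate HLT output rate
seconds_per_day = 24 * 3600
days_of_buffering = 250e3 / (rate_gb_per_s * seconds_per_day)  # 250 TB expressed in GB
print(f"{days_of_buffering:.1f} days of continuous 2 GB/s running fit in 250 TB")
# -> roughly 1.4 days at full rate; with the LHC duty cycle this stretches to several days.
```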

  2. Data on conceptual design of cryogenic energy storage system combined with liquefied natural gas regasification process.

    PubMed

    Lee, Inkyu; Park, Jinwoo; Moon, Il

    2017-12-01

    This paper describes data for an integrated process: a cryogenic energy storage system combined with a liquefied natural gas (LNG) regasification process. The data in this paper is associated with the article entitled "Conceptual Design and Exergy Analysis of Combined Cryogenic Energy Storage and LNG Regasification Processes: Cold and Power Integration" (Lee et al., 2017) [1]. The data includes the sensitivity case study dataset for the air flow rate and the heat exchange feasibility data from composite curves. The data is expected to be helpful for cryogenic energy process development.

  3. Using expert systems to implement a semantic data model of a large mass storage system

    NASA Technical Reports Server (NTRS)

    Roelofs, Larry H.; Campbell, William J.

    1990-01-01

    The successful development of large volume data storage systems will depend not only on the ability of the designers to store data, but on the ability to manage such data once it is in the system. The hypothesis is that mass storage data management can only be implemented successfully based on highly intelligent meta data management services. There now exists a mass store system standard proposed by the IEEE that addresses many of the issues related to the storage of large volumes of data; however, the model does not consider a major technical issue, namely the high-level management of stored data. If the model were expanded to include the semantics and pragmatics of the data domain using a Semantic Data Model (SDM) concept, however, the result would be data that is expressive of the Intelligent Information Fusion (IIF) concept and also organized and classified in the context of its use and purpose. The results of a demonstration prototype SDM, implemented using the expert system development tool NEXPERT OBJECT, are presented. In the prototype, a simple instance of an SDM was created to support a hypothetical application for the Earth Observing System Data and Information System (EOSDIS). The massive amounts of data that EOSDIS will manage require the definition and design of a powerful information management system in order to support even the most basic needs of the project. The application domain is characterized by a semantic-like network that represents the data content and the relationships between the data based on user views and the more generalized domain architectural view of the information world. The data in the domain are represented by objects that define classes, types and instances of the data. In addition, data properties are selectively inherited between parent and daughter relationships in the domain. Based on the SDM, a simple information system design is developed, from the low-level data storage media through record management and meta data

  4. Storages Are Not Forever

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cambria, Erik; Chattopadhyay, Anupam; Linn, Eike

    Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. Here, we chose to look closely into one concern in this paper, namely the limited amount of data storage. By a simple extrapolatory analysis, it is shown that we are on the way to exhaust our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiative in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tap onto the interplay between storage and computing to minimize storage allocation; thirdly, explore ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness on the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.

  5. Storages Are Not Forever

    DOE PAGES

    Cambria, Erik; Chattopadhyay, Anupam; Linn, Eike; ...

    2017-05-27

    Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. Here, we chose to look closely into one concern in this paper, namely the limited amount of data storage. By a simple extrapolatory analysis, it is shown that we are on the way to exhaust our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiative in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tap onto the interplay between storage and computing to minimize storage allocation; thirdly, explore ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness on the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.
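
    The extrapolatory argument can be sketched with a toy calculation that compares exponentially growing retained data against a growing manufacturable storage capacity. All starting values and growth rates below are invented for illustration and differ from the paper's own figures and assumptions.

```python
# Toy extrapolation: find the year when cumulative retained data would exceed
# installable capacity, given assumed growth rates (illustrative numbers only).
capacity_zb = 10.0        # assumed installable storage today, zettabytes
demand_zb = 2.0           # assumed data retained per year today, zettabytes
capacity_growth = 1.20    # +20% manufacturable capacity per year (assumption)
demand_growth = 1.30      # +30% retained data per year (assumption)

stored_total = 0.0
for year in range(2025, 2225):
    stored_total += demand_zb
    if stored_total > capacity_zb:
        print(f"Cumulative demand exceeds capacity around {year}")
        break
    capacity_zb *= capacity_growth
    demand_zb *= demand_growth
else:
    print("Capacity keeps up for the whole horizon")
```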

  6. Spatially coupled low-density parity-check error correction for holographic data storage

    NASA Astrophysics Data System (ADS)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. The error-free point is near 2.8 dB, and error rates of over 10^-1 can be corrected in simulation. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, it works effectively and shows good error correctability.

  7. 40 CFR 792.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 31 2010-07-01 2010-07-01 true Storage and retrieval of records and data. 792.190 Section 792.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... mutagenicity tests, specimens of soil, water, and plants, and wet specimens of blood, urine, feces, and...

  8. Dynamic mechanical analysis and organization/storage of data for polymeric materials

    NASA Technical Reports Server (NTRS)

    Rosenberg, M.; Buckley, W.

    1982-01-01

    Dynamic mechanical analysis was performed on a variety of temperature-resistant polymers and composite resin matrices. Data on glass transition temperatures and the degree of cure attained were derived. In addition, a laboratory-based computer system was installed and a data base was set up to allow entry of composite data. The laboratory computer, termed TYCHO, is based on a DEC PDP 11/44 CPU with a Datatrieve relational data base. The function of TYCHO is the integration of chemical laboratory analytical instrumentation and the storage of chemical structures for modeling of new polymeric structures and compounds.

  9. Increased Water Storage in the Qaidam Basin, the North Tibet Plateau from GRACE Gravity Data

    PubMed Central

    Jiao, Jiu Jimmy; Zhang, Xiaotao; Liu, Yi; Kuang, Xingxing

    2015-01-01

    Groundwater plays a key role in maintaining the ecology and environment in the hyperarid Qaidam Basin (QB). Indirect evidence and data from sparse observation wells suggest that groundwater in the QB is increasing but there has been no regional assessment of the groundwater conditions in the entire basin because of its remoteness and the severity of the arid environment. Here we report changes in the spatial and temporal distribution of terrestrial water storage (TWS) in the northern Tibetan Plateau (NTP) using Gravity Recovery and Climate Experiment (GRACE) data. Our study confirms long-term (2003–2012) TWS increases in the NTP. Between 2003 and 2012 the TWS increased by 88.4 and 20.6 km3 in the NTP and the QB, respectively, which is 225% and 52% of the capacity of the Three Gorges Reservoir, respectively. Soil and water changes from the Global Land Data Assimilation System (GLDAS) were also used to identify groundwater storage in the TWS and to demonstrate a long-term increase in groundwater storage in the QB. We demonstrate that increases in groundwater, not lake water, are dominant in the QB, as observed by groundwater levels. Our study suggests that the TWS increase was likely caused by a regional increase in precipitation and a decrease in evaporation. Degradation of the permafrost increases the thickness of the active layers providing increased storage for infiltrated precipitation and snow and ice melt water, which may also contribute to the increased TWS. The huge increase of water storage in the NTP will have profound effects, not only on local ecology and environment, but also on global water storage and sea level changes. PMID:26506230

  10. Increased Water Storage in the Qaidam Basin, the North Tibet Plateau from GRACE Gravity Data.

    PubMed

    Jiao, Jiu Jimmy; Zhang, Xiaotao; Liu, Yi; Kuang, Xingxing

    2015-01-01

    Groundwater plays a key role in maintaining the ecology and environment in the hyperarid Qaidam Basin (QB). Indirect evidence and data from sparse observation wells suggest that groundwater in the QB is increasing but there has been no regional assessment of the groundwater conditions in the entire basin because of its remoteness and the severity of the arid environment. Here we report changes in the spatial and temporal distribution of terrestrial water storage (TWS) in the northern Tibetan Plateau (NTP) using Gravity Recovery and Climate Experiment (GRACE) data. Our study confirms long-term (2003-2012) TWS increases in the NTP. Between 2003 and 2012 the TWS increased by 88.4 and 20.6 km3 in the NTP and the QB, respectively, which is 225% and 52% of the capacity of the Three Gorges Reservoir, respectively. Soil and water changes from the Global Land Data Assimilation System (GLDAS) were also used to identify groundwater storage in the TWS and to demonstrate a long-term increase in groundwater storage in the QB. We demonstrate that increases in groundwater, not lake water, are dominant in the QB, as observed by groundwater levels. Our study suggests that the TWS increase was likely caused by a regional increase in precipitation and a decrease in evaporation. Degradation of the permafrost increases the thickness of the active layers providing increased storage for infiltrated precipitation and snow and ice melt water, which may also contribute to the increased TWS. The huge increase of water storage in the NTP will have profound effects, not only on local ecology and environment, but also on global water storage and sea level changes.

  11. Holographic data storage crystals for the LDEF. [long duration exposure facility

    NASA Technical Reports Server (NTRS)

    Callen, W. Russell; Gaylord, Thomas K.

    1992-01-01

    Lithium niobate is a significant electro-optic material, with potential applications in ultra-high-capacity storage and processing systems. Lithium niobate is the material of choice for many integrated optical devices and holographic mass memory systems. Four crystals of lithium niobate were passively exposed to the space environment of the Long Duration Exposure Facility (LDEF). Three of these crystals contained volume holograms. Although the crystals suffered the surface damage characteristic of most of the other optical components on the Georgia Tech tray, the crystals were recovered intact. The holograms were severely degraded because of the lengthy exposure, but the bulk properties are being investigated to determine their spaceworthiness for space data storage and retrieval systems.

  12. An Information Storage and Retrieval System for Biological and Geological Data. Interim Report.

    ERIC Educational Resources Information Center

    Squires, Donald F.

    A project is being conducted to test the feasibility of an information storage and retrieval system for museum specimen data, particularly for natural history museums. A pilot data processing system has been developed, with the specimen records from the national collections of birds, marine crustaceans, and rocks used as sample data. The research…

  13. Geological investigation for CO2 storage: from seismic and well data to storage design

    NASA Astrophysics Data System (ADS)

    Chapuis, Flavie; Bauer, Hugues; Grataloup, Sandrine; Leynet, Aurélien; Bourgine, Bernard; Castagnac, Claire; Fillacier, Simon; Lecomte, Antony; Le Gallo, Yann; Bonijoly, Didier

    2010-05-01

    The main purpose of this study is to evaluate the techno-economic potential of storing 200 000 tCO2 per year produced by a sugar beet distillery. To reach this goal, an accurate hydro-geological characterisation of a CO2 injection site is of primary importance because it will strongly influence the site selection, the storage design and the risk management. Geological investigation for CO2 storage is usually set in the center or deepest part of sedimentary basins. However, CO2 producers are not always located in such settings, and so other geological configurations have to be studied. This is the aim of this project, which is located near the South-West border of the Paris Basin, in the Orléans region. Special geometries such as onlaps and pinch-outs of formations against the basement are likely to be observed and so have to be taken into account. Two deep saline aquifers are potentially good candidates for CO2 storage: the Triassic continental deposits capped by the Upper Triassic/Lower Jurassic continental shales, and the Dogger carbonate deposits capped by the Callovian and Oxfordian shales. First, a data review was undertaken to provide the palaeogeographical settings and ideas about the facies, thicknesses and depth of the targeted formations. It was followed by a seismic interpretation. Three hundred kilometres of seismic lines were reprocessed and interpreted to characterize the geometry of the studied area. The main structure identified is the Étampes fault that affects all the formations. Apart from the vicinity of the fault where drag

  14. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip.

    PubMed

    Shulaker, Max M; Hills, Gage; Park, Rebecca S; Howe, Roger T; Saraswat, Krishna; Wong, H-S Philip; Mitra, Subhasish

    2017-07-05

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors (promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage) fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce 'highly processed' information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  15. Performance data for a desuperheater integrated to a thermal energy storage system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, A.H.W.; Jones, J.W.

    1995-11-01

    Desuperheaters are heat exchangers that recover heat from the compressor discharge gas to heat domestic hot water. The objective of this project was to conduct performance tests for a desuperheater in the cooling and heating modes of a thermal energy storage system so as to form a data base on the steady state performance of a residential desuperheater unit. The desuperheater, integrated with a thermal energy storage system, was installed in the Dual-Air Loop Test Facility at The Center for Energy Studies, the University of Texas at Austin. The major components of the system consist of the refrigerant compressor, domestic hot water (DHW) desuperheater, thermal storage tank with evaporator/condenser coil, outdoor air coil, DHW storage tank, DHW circulating pump, space conditioning water circulation pump, and indoor heat exchanger. Although measurements were made to quantify space heating, space cooling, and domestic water heating, this paper only emphasizes the desuperheater performance of the unit. Experiments were conducted to study the effects of various outdoor temperatures and entering water temperatures on the performance of the desuperheater/TES system. In the cooling and heating modes, the desuperheater captured 5 to 18 percent and 8 to 17 percent, respectively, of the heat that would be normally rejected through the air coil condenser. At higher outdoor temperatures, the desuperheater captured more heat. It was also noted that the heating and cooling COPs decreased with entering water temperature. The information generated in the experimental efforts could be used to form a data base on the steady state performance of a residential desuperheater unit.

  16. 40 CFR 160.190 - Storage and retrieval of records and data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Storage and retrieval of records and data. 160.190 Section 160.190 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... soil, water, and plants, and wet specimens of blood, urine, feces, and biological fluids, do not need...

  17. Data systems and computer science space data systems: Onboard memory and storage

    NASA Technical Reports Server (NTRS)

    Shull, Tom

    1991-01-01

    The topics are presented in viewgraph form and include the following: technical objectives; technology challenges; state-of-the-art assessment; mass storage comparison; SODR drive and system concepts; program description; vertical Bloch line (VBL) device concept; relationship to external programs; and backup charts for memory and storage.

  18. [Carbon storage of forest stands in Shandong Province estimated by forestry inventory data].

    PubMed

    Li, Shi-Mei; Yang, Chuan-Qiang; Wang, Hong-Nian; Ge, Li-Qiang

    2014-08-01

    Based on the 7th forestry inventory data of Shandong Province, this paper estimated the carbon storage and carbon density of forest stands, and analyzed their distribution characteristics according to dominant tree species, age groups and forest category using the volume-derived biomass method and average-biomass method. In 2007, the total carbon storage of the forest stands was 25.27 Tg, of which the coniferous forests, mixed conifer broad-leaved forests, and broad-leaved forests accounted for 8.6%, 2.0% and 89.4%, respectively. The carbon storage of forest age groups followed the sequence of young forests > middle-aged forests > mature forests > near-mature forests > over-mature forests. The carbon storage of young forests and middle-aged forests accounted for 69.3% of the total carbon storage. Timber forests, non-timber product forests and protection forests accounted for 37.1%, 36.3% and 24.8% of the total carbon storage, respectively. The average carbon density of forest stands in Shandong Province was 10.59 t x hm(-2), which was lower than the national average level. This phenomenon was attributed to the imperfect structure of forest types and age groups, i.e., the notably higher percentage of timber and non-timber product forests and the excessively high percentage of young and middle-aged forests relative to mature forests.
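
    The volume-derived biomass method named above converts inventoried stand volume to biomass and then to carbon. The sketch below uses generic example coefficients (wood density, biomass expansion factor, root-shoot ratio, carbon fraction), not the values actually used for Shandong Province.

```python
# Sketch of the volume-derived biomass approach: stand volume -> biomass -> carbon.
# All coefficients are generic illustrative values, not those of the cited study.
stand_volume_m3_per_ha = 85.0      # growing stock from the inventory (hypothetical)
wood_density_t_per_m3 = 0.45       # species-specific basic wood density (assumed)
biomass_expansion_factor = 1.6     # stem volume -> total aboveground biomass (assumed)
root_shoot_ratio = 0.25            # adds belowground biomass (assumed)
carbon_fraction = 0.5              # IPCC default carbon fraction of dry biomass

aboveground_biomass = stand_volume_m3_per_ha * wood_density_t_per_m3 * biomass_expansion_factor
total_biomass = aboveground_biomass * (1 + root_shoot_ratio)
carbon_density_t_per_ha = total_biomass * carbon_fraction
print(f"{carbon_density_t_per_ha:.1f} t C/ha")   # ~38 t C/ha with these example numbers
```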

  19. Development and evaluation of a low-cost and high-capacity DICOM image data storage system for research.

    PubMed

    Yakami, Masahiro; Ishizu, Koichi; Kubo, Takeshi; Okada, Tomohisa; Togashi, Kaori

    2011-04-01

    Thin-slice CT data, useful for clinical diagnosis and research, is now widely available but is typically discarded in many institutions after a short period of time due to data storage capacity limitations. We designed and built a low-cost, high-capacity Digital Imaging and COmmunication in Medicine (DICOM) storage system able to store thin-slice image data for years, using off-the-shelf consumer hardware components, such as a Macintosh computer, a Windows PC, and network-attached storage units. "Ordinary" hierarchical file systems, instead of a centralized data management system such as a relational database, were adopted to manage patient DICOM files by arranging them in directories, enabling quick and easy access to the DICOM files of each study by following the directory trees with Windows Explorer via study date and patient ID. Software used for this system was the open-source OsiriX and additional programs we developed ourselves, both of which were freely available via the Internet. The initial cost of this system was about $3,600, with an incremental storage cost of about $900 per terabyte (TB). This system has been running since 7th Feb 2008, with the stored data increasing at a rate of about 1.3 TB per month. Total data stored was 21.3 TB on 23rd June 2009. The maintenance workload was found to be about 30 to 60 min once every 2 weeks. In conclusion, this newly developed DICOM storage system is useful for research due to its cost-effectiveness, enormous capacity, high scalability, sufficient reliability, and easy data access.

  20. Online Data Handling and Storage at the CMS Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J. M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.

  1. Monitoring groundwater storage change in Mekong Delta using Gravity Recovery and Climate Experiment (GRACE) data

    NASA Astrophysics Data System (ADS)

    Aierken, A.; Lee, H.; Hossain, F.; Bui, D. D.; Nguyen, L. D.

    2016-12-01

    The Mekong Delta, home to almost 20 million inhabitants, is considered one of the most important regions for Vietnam as it is the agricultural and industrial production base of the nation. However, in recent decades, the region has been seriously threatened by a variety of environmental hazards, such as floods, saline water intrusion, arsenic contamination, and land subsidence, which raise its vulnerability to sea level rise due to global climate change. All these hazards are related to groundwater depletion, which is the result of dramatically increased over-exploitation. Therefore, monitoring groundwater is critical to sustainable development and, most importantly, to people's lives in the region. In most countries, groundwater is monitored using well observations. However, because of spatial and temporal gaps and cost, it is typically difficult to obtain large-scale, continuous observations. Since 2002, the Gravity Recovery and Climate Experiment (GRACE) satellite gravimetry mission has delivered freely available Earth gravity variation data, which can be used to obtain terrestrial water storage (TWS) changes. In this study, the TWS anomalies over the Mekong Delta, which are the integrated sum of anomalies of soil moisture storage (SMS), surface water storage (SWS), canopy water storage (CWS), and groundwater storage (GWS), have been obtained using GRACE CSR RL05 data. The leakage error introduced by GRACE signal processing has been corrected using several different approaches. The groundwater storage anomalies were then derived from TWS anomalies by removing SMS and CWS anomalies simulated by the four land surface models (NOAH, CLM, VIC and MOSAIC) in the Global Land Data Assimilation System (GLDAS), as well as SWS anomalies estimated using ENVISAT satellite altimetry and MODIS imagery. Then, the optimal GRACE signal restoration method for the Mekong Delta is determined with available in-situ well data. The estimated GWS anomalies revealed continuously decreasing

  2. Certification of ICI 1012 optical data storage tape

    NASA Technical Reports Server (NTRS)

    Howell, J. M.

    1993-01-01

    ICI has developed a unique and novel method of certifying a Terabyte optical tape. The tape quality is guaranteed as a statistical upper limit on the probability of uncorrectable errors, called the Corrected Byte Error Rate or CBER. We developed this probabilistic method because the error rate cannot be measured directly, for two reasons. Firstly, written data is indelible, so one cannot employ write/read tests such as those used for magnetic tape. Secondly, the anticipated error rates would need impractically large samples to measure accurately. For example, a rate of 1E-12 implies only one byte in error per tape. The archivability of ICI 1012 Data Storage Tape in general is well characterized and understood. Nevertheless, customers expect performance guarantees to be supported by test results on individual tapes. In particular, they need assurance that data is retrievable after decades in archive. This paper describes the mathematical basis, measurement apparatus and applicability of the certification method.
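
    For orientation, a standard way to turn an error-free read-back sample into a statistical upper limit of the kind described above (a generic construction, not necessarily ICI's exact statistical model) is the one-sided binomial bound: if n bytes are read back with zero uncorrectable errors, then at confidence level 1 - α the corrected byte error rate satisfies
    \[
    \hat{p}_{\mathrm{upper}} \;=\; 1-\alpha^{1/n}\;\approx\;\frac{-\ln\alpha}{n},
    \]
    so that, for example, roughly 3E9 error-free bytes bound the CBER below about 1E-9 at 95% confidence, while pushing the bound down to the 1E-12 level would require terabyte-scale error-free samples, which is exactly the impracticality noted above.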

  3. Nosql for Storage and Retrieval of Large LIDAR Data Collections

    NASA Astrophysics Data System (ADS)

    Boehm, J.; Liu, K.

    2015-08-01

    Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information. This in turn has led to an enormous growth in data volume. The central idea for a file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline. It therefore makes sense to preserve this split. A document-oriented NoSQL database can easily emulate this data partitioning by representing each tile (file) in a separate document. The document stores the metadata of the tile. The actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows for queries on the attributes of the document. Notably, MongoDB also supports spatial queries, so we can perform spatial queries on the bounding boxes of the LiDAR tiles. Inserting and retrieving files on the cloud-based database is compared to native file system and cloud storage transfer speeds.
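
    The short sketch below illustrates the tile-per-document pattern described above; it is an illustrative example rather than the authors' code, and the database name, field names, and the use of GridFS as the emulated file store are assumptions. Each LiDAR tile file goes into GridFS, its metadata and GeoJSON bounding box go into one document, and a 2dsphere index supports spatial queries on the tile footprints.

    ```python
    # Illustrative sketch of the tile-per-document pattern (not the authors' code).
    # Assumptions: a local MongoDB instance, a "lidar" database, GridFS as the
    # emulated file store, and GeoJSON polygons for the tile bounding boxes.
    from pymongo import MongoClient, GEOSPHERE
    import gridfs

    client = MongoClient("mongodb://localhost:27017")
    db = client["lidar"]
    fs = gridfs.GridFS(db)                            # file contents live in GridFS

    db.tiles.create_index([("bounds", GEOSPHERE)])    # enables spatial queries

    def ingest_tile(path, bbox_ring, point_count):
        """Store one LiDAR tile file plus a metadata document for it."""
        with open(path, "rb") as f:
            file_id = fs.put(f, filename=path)
        db.tiles.insert_one({
            "filename": path,
            "gridfs_id": file_id,
            "points": point_count,
            "bounds": {"type": "Polygon", "coordinates": [bbox_ring]},  # [lng, lat]
        })

    # Spatial query: which tiles intersect this area of interest?
    aoi = {"type": "Polygon",
           "coordinates": [[[-0.14, 51.50], [-0.12, 51.50], [-0.12, 51.52],
                            [-0.14, 51.52], [-0.14, 51.50]]]}
    for doc in db.tiles.find({"bounds": {"$geoIntersects": {"$geometry": aoi}}}):
        print(doc["filename"], doc["points"])
    ```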

  4. Data Storage and sharing for the long tail of science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B.; Pouchard, L.; Smith, P. M.

    Research data infrastructure such as storage must now accommodate new requirements resulting from trends in research data management that require researchers to store their data for the long term and make it available to other researchers. We propose Data Depot, a system and service that provides capabilities for shared space within a group, shared applications, flexible access patterns and ease of transfer at Purdue University. We evaluate Depot as a solution for storing and sharing multiterabytes of data produced in the long tail of science with a use case in soundscape ecology studies from the Human- Environment Modeling and Analysismore » Laboratory. We observe that with the capabilities enabled by Data Depot, researchers can easily deploy fine-grained data access control, manage data transfer and sharing, as well as integrate their workflows into a High Performance Computing environment.« less

  5. A Split-Path Schema-Based RFID Data Storage Model in Supply Chain Management

    PubMed Central

    Fan, Hua; Wu, Quanyuan; Lin, Yisong; Zhang, Jianfeng

    2013-01-01

    In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both the data expression and path-oriented RFID data query performance. PMID:23645112
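
    As a hedged illustration of the relational side of such a storage model (the table and column names below are hypothetical, not the schema defined in the paper), the split path segments and the tag timing information can live in separate tables, with path-oriented queries answered by ordinary SQL joins:

    ```python
    # Hypothetical schema sketch only; it illustrates separating split path
    # information from tag timing information, not the paper's actual design.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE path_segment (          -- one row per split path segment
        segment_id INTEGER PRIMARY KEY,
        parent_id  INTEGER,              -- tree-structured path splitting
        location   TEXT NOT NULL
    );
    CREATE TABLE tag_observation (       -- time information of tags
        tag_id     TEXT NOT NULL,
        segment_id INTEGER NOT NULL REFERENCES path_segment(segment_id),
        t_in       TEXT NOT NULL,
        t_out      TEXT NOT NULL
    );
    """)

    conn.executemany("INSERT INTO path_segment VALUES (?,?,?)",
                     [(1, None, "factory"), (2, 1, "warehouse"), (3, 2, "store")])
    conn.executemany("INSERT INTO tag_observation VALUES (?,?,?,?)",
                     [("EPC-001", 1, "2013-01-02", "2013-01-03"),
                      ("EPC-001", 2, "2013-01-04", "2013-01-06"),
                      ("EPC-001", 3, "2013-01-07", "2013-01-07")])

    # Typical path-oriented query template: where was tag EPC-001, and when?
    for row in conn.execute("""
        SELECT p.location, o.t_in, o.t_out
        FROM tag_observation o JOIN path_segment p USING (segment_id)
        WHERE o.tag_id = ? ORDER BY o.t_in""", ("EPC-001",)):
        print(row)
    ```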

  6. Damming the genomic data flood using a comprehensive analysis and storage data structure

    PubMed Central

    Bouffard, Marc; Phillips, Michael S.; Brown, Andrew M.K.; Marsh, Sharon; Tardif, Jean-Claude; van Rooij, Tibor

    2010-01-01

    Data generation, driven by rapid advances in genomic technologies, is fast outpacing our analysis capabilities. Faced with this flood of data, more hardware and software resources are added to accommodate data sets whose structure has not specifically been designed for analysis. This leads to unnecessarily lengthy processing times and excessive data handling and storage costs. Current efforts to address this have centered on developing new indexing schemas and analysis algorithms, whereas the root of the problem lies in the format of the data itself. We have developed a new data structure for storing and analyzing genotype and phenotype data. By leveraging data normalization techniques, database management system capabilities and the use of a novel multi-table, multidimensional database structure we have eliminated the following: (i) unnecessarily large data set size due to high levels of redundancy, (ii) sequential access to these data sets and (iii) common bottlenecks in analysis times. The resulting novel data structure horizontally divides the data to circumvent traditional problems associated with the use of databases for very large genomic data sets. The resulting data set required 86% less disk space and performed analytical calculations 6248 times faster compared to a standard approach without any loss of information. Database URL: http://castor.pharmacogenomics.ca PMID:21159730

  7. GraphStore: A Distributed Graph Storage System for Big Data Networks

    ERIC Educational Resources Information Center

    Martha, VenkataSwamy

    2013-01-01

    Networks, such as social networks, are a universal solution for modeling complex problems in real time, especially in the Big Data community. While previous studies have attempted to enhance network processing algorithms, none have paved a path for the development of a persistent storage system. The proposed solution, GraphStore, provides an…

  8. Data on the no-load performance analysis of a tomato postharvest storage system.

    PubMed

    Ayomide, Orhewere B; Ajayi, Oluseyi O; Banjo, Solomon O; Ajayi, Adesola A

    2017-08-01

    In the present investigation, original and detailed empirical data on the transfer of heat in a tomato postharvest storage system were presented. No-load tests were performed for a period of 96 h. The heat distribution at different locations, namely the top, middle and bottom of the system, was acquired at a time interval of 30 min over the test period. The humidity inside the system was taken into consideration. Thus, no-load tests with and without the introduction of humidity were carried out, and data showing the effect of a rise in humidity level on temperature distribution were acquired. The temperatures at the external mechanical cooling components were acquired and could be used for the performance analysis of the storage system.

  9. nmrML: A Community Supported Open Data Standard for the Description, Storage, and Exchange of NMR Data.

    PubMed

    Schober, Daniel; Jacob, Daniel; Wilson, Michael; Cruz, Joseph A; Marcu, Ana; Grant, Jason R; Moing, Annick; Deborde, Catherine; de Figueiredo, Luis F; Haug, Kenneth; Rocca-Serra, Philippe; Easton, John; Ebbels, Timothy M D; Hao, Jie; Ludwig, Christian; Günther, Ulrich L; Rosato, Antonio; Klein, Matthias S; Lewis, Ian A; Luchinat, Claudio; Jones, Andrew R; Grauslys, Arturas; Larralde, Martin; Yokochi, Masashi; Kobayashi, Naohiro; Porzel, Andrea; Griffin, Julian L; Viant, Mark R; Wishart, David S; Steinbeck, Christoph; Salek, Reza M; Neumann, Steffen

    2018-01-02

    NMR is a widely used analytical technique with a growing number of repositories available. As a result, demands for a vendor-agnostic, open data format for long-term archiving of NMR data have emerged with the aim to ease and encourage sharing, comparison, and reuse of NMR data. Here we present nmrML, an open XML-based exchange and storage format for NMR spectral data. The nmrML format is intended to be fully compatible with existing NMR data for chemical, biochemical, and metabolomics experiments. nmrML can capture raw NMR data, spectral data acquisition parameters, and where available spectral metadata, such as chemical structures associated with spectral assignments. The nmrML format is compatible with pure-compound NMR data for reference spectral libraries as well as NMR data from complex biomixtures, i.e., metabolomics experiments. To facilitate format conversions, we provide nmrML converters for Bruker, JEOL and Agilent/Varian vendor formats. In addition, easy-to-use Web-based spectral viewing, processing, and spectral assignment tools that read and write nmrML have been developed. Software libraries and Web services for data validation are available for tool developers and end-users. The nmrML format has already been adopted for capturing and disseminating NMR data for small molecules by several open source data processing tools and metabolomics reference spectral libraries, e.g., serving as storage format for the MetaboLights data repository. The nmrML open access data standard has been endorsed by the Metabolomics Standards Initiative (MSI), and we here encourage user participation and feedback to increase usability and make it a successful standard.

  10. Spectroscopic Feedback for High Density Data Storage and Micromachining

    DOEpatents

    Carr, Christopher W.; Demos, Stavros; Feit, Michael D.; Rubenchik, Alexander M.

    2008-09-16

    Optical breakdown by predetermined laser pulses in transparent dielectrics produces an ionized region of dense plasma confined within the bulk of the material. Such an ionized region is responsible for broadband radiation that accompanies a desired breakdown process. Spectroscopic monitoring of the accompanying light in real-time is utilized to ascertain the morphology of the radiated interaction volume. Such a method and apparatus as presented herein, provides commercial realization of rapid prototyping of optoelectronic devices, optical three-dimensional data storage devices, and waveguide writing.

  11. Estimating continental water storage variations in Central Asia area using GRACE data

    NASA Astrophysics Data System (ADS)

    Dapeng, Mu; Zhongchang, Sun; Jinyun, Guo

    2014-03-01

    The goal of the GRACE satellite mission is to determine time variations of the Earth's gravity field, and particularly the effects of fluid mass redistributions at the surface of the Earth. This paper uses GRACE Level-2 RL05 data provided by CSR to estimate water storage variations of four river basins in Central Asia for the period from 2003 to 2011. We apply a two-step filtering method to reduce the errors in GRACE data, which combines a Gaussian averaging function and an empirical de-correlation method. We use the GLDAS hydrology model to validate the results from GRACE. A special averaging approach is performed to reduce the errors in GLDAS. The results for the first three basins from GRACE are consistent with the GLDAS hydrology model. In the Tarim River basin, there is a larger discrepancy between GRACE and GLDAS. Precipitation data from weather stations suggest that the results of GRACE are more plausible. We use spectral analysis to obtain the main periods of the GRACE and GLDAS time series and then use least squares adjustment to determine the amplitude and phase. The results show that water storage in Central Asia is decreasing.
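
    For reference, the Gaussian averaging step mentioned above is commonly applied in the spectral domain with Jekeli-type weights; the form below is the widely used textbook version and may differ in detail from the authors' implementation. The smoothed surface mass density anomaly is
    \[
    \Delta\sigma(\theta,\phi)=\frac{a\,\rho_{\mathrm{ave}}}{3}
    \sum_{l=0}^{l_{\max}}\sum_{m=0}^{l}\frac{2l+1}{1+k_l}\,W_l\,
    \tilde{P}_{lm}(\cos\theta)\left[\Delta C_{lm}\cos m\phi+\Delta S_{lm}\sin m\phi\right],
    \]
    with the Gaussian weights generated recursively from
    \[
    b=\frac{\ln 2}{1-\cos(r/a)},\qquad W_0=1,\qquad
    W_1=\frac{1+e^{-2b}}{1-e^{-2b}}-\frac{1}{b},\qquad
    W_{l+1}=-\frac{2l+1}{b}\,W_l+W_{l-1},
    \]
    where a is the Earth's mean radius, \(\rho_{\mathrm{ave}}\) its average density, \(k_l\) the load Love numbers, \(\Delta C_{lm}\) and \(\Delta S_{lm}\) the Stokes coefficient anomalies, \(\tilde{P}_{lm}\) the normalized associated Legendre functions, and r the chosen averaging radius.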

  12. First Experiences with CMS Data Storage on the GEMSS System at the INFN-CNAF Tier-1

    NASA Astrophysics Data System (ADS)

    Andreotti, D.; Bonacorsi, D.; Cavalli, A.; Pra, S. Dal; Dell'Agnello, L.; Forti, Alberto; Grandi, C.; Gregori, D.; Gioi, L. Li; Martelli, B.; Prosperini, A.; Ricci, P. P.; Ronchieri, Elisabetta; Sapunenko, V.; Sartirana, A.; Vagnoni, V.; Zappi, Riccardo

    A brand new Mass Storage System solution called "Grid-Enabled Mass Storage System" (GEMSS), based on the Storage Resource Manager (StoRM) developed by INFN, on the General Parallel File System by IBM, and on the Tivoli Storage Manager by IBM, has been tested and deployed at the INFN-CNAF Tier-1 Computing Centre in Italy. After a successful stress test phase, the solution is now being used in production for the data custodiality of the CMS experiment at CNAF. All data previously recorded on the CASTOR system have been transferred to GEMSS. As a final validation of the GEMSS system, some of the computing tests done in the context of the WLCG "Scale Test for the Experiment Program" (STEP'09) challenge were repeated in September-October 2009 and compared with the results previously obtained with CASTOR in June 2009. In this paper, the GEMSS system basics, the stress test activity and the deployment phase, as well as the reliability and performance of the system, are overviewed. The experiences in the use of GEMSS at CNAF in preparing for the first months of data taking of the CMS experiment at the Large Hadron Collider are also presented.

  13. Ensuring long-term reliability of the data storage on optical disc

    NASA Astrophysics Data System (ADS)

    Chen, Ken; Pan, Longfa; Xu, Bin; Liu, Wei

    2008-12-01

    "Quality requirements and handling regulation of archival optical disc for electronic records filing" is released by The State Archives Administration of the People's Republic of China (SAAC) on its network in March 2007. This document established a complete operative managing process for optical disc data storage in archives departments. The quality requirements of the optical disc used in archives departments are stipulated. Quality check of the recorded disc before filing is considered to be necessary and the threshold of the parameter of the qualified filing disc is set down. The handling regulations for the staffs in the archives departments are described. Recommended environment conditions of the disc preservation, recording, accessing and testing are presented. The block error rate of the disc is selected as main monitoring parameter of the lifetime of the filing disc and three classes pre-alarm lines are created for marking of different quality check intervals. The strategy of monitoring the variation of the error rate curve of the filing discs and moving the data to a new disc or a new media when the error rate of the disc reaches the third class pre-alarm line will effectively guarantee the data migration before permanent loss. Only when every step of the procedure is strictly implemented, it is believed that long-term reliability of the data storage on optical disc for archives departments can be effectively ensured.

  14. High Density Data Storage, the SONY Data DiscMan Electronic Book, and the Unfolding Multi-Media Revolution.

    ERIC Educational Resources Information Center

    Kountz, John

    1991-01-01

    Description of high density data storage (HDDS) devices focuses on CD-ROMs and explores their impact on libraries, publishing, education, and library communications. Highlights include costs; technical standards; reading devices; authoring systems; robotics; the influence of new technology on the role of libraries; and royalty and copyright issues…

  15. Evaluating Water Storage Variations in the MENA region using GRACE Satellite Data

    NASA Astrophysics Data System (ADS)

    Lopez, O.; Houborg, R.; McCabe, M. F.

    2013-12-01

    Terrestrial water storage (TWS) variations over large river basins can be derived from temporal gravity field variations observed by the Gravity Recovery and Climate Experiment (GRACE) satellites. These signals are useful for determining accurate estimates of water storage and fluxes over areas covering a minimum of 150,000 km2 (length scales of a few hundred kilometers) and thus prove to be a valuable tool for regional water resources management, particularly for areas with a lack of in-situ data availability or inconsistent monitoring, such as the Middle East and North Africa (MENA) region. This already stressed arid region is particularly vulnerable to climate change and overdraft of its non-renewable freshwater sources, so guidance in managing its resources is valuable. An inter-comparison of different GRACE-derived TWS products was performed in order to provide a quantitative assessment of their uncertainty and their utility for diagnosing spatio-temporal variability in water storage over the MENA region. Different processing approaches for the inter-satellite tracking data from the GRACE mission have resulted in the development of TWS products with resolutions in time from 10 days to 1 month and in space from 0.5 to 1 degree on global grids, while some of them use input from land surface models in order to restore the original signal amplitudes. These processing differences and the difficulties in recovering the mass change signals over arid regions will be addressed. Output from the different products will be evaluated and compared over basins inside the MENA region, and compared to output from land surface models.

  16. Digital Holographic Data Storage with Fast Access

    NASA Astrophysics Data System (ADS)

    Ma, J.; Chang, T.; Choi, S.; Hong, J.

    Recent investigations in holographic mass memory systems have produced proof of concept demonstrations that have highlighted their potential for providing unprecedented capacity, data transfer rates and fast random access performance [1-4]. The exploratory nature of most such investigations has been largely confined to benchtop experiments in which the practical constraints of packaging and environmental concerns have been ignored. We have embarked on an effort to demonstrate the holographic mass memory concept by developing a compact prototype system geared for avionics and similar applications, which demand the following features (mostly interdependent factors): (1) solid-state design (no moving parts), (2) fast data-seek time, (3) robustness with respect to environmental factors (temperature, vibration, shock). In this chapter, we report on the development and demonstration of two systems, one with 100 Mbytes and the other with more than 1 Gbyte of storage capacity. Both systems feature solid-state design with the addressing mechanism realized with acousto-optic deflectors that are capable of better than 50 µs data seek time. Since the basic designs for the two systems are similar, we describe only the larger system in detail. The operation of the smaller system has been demonstrated in various environments, including hand-held operation and thermal/mechanical shock, and a photograph of the smaller system is provided as well as actual digital data retrieved from the same system.

  17. Globally distributed software defined storage (proposal)

    NASA Astrophysics Data System (ADS)

    Shevel, A.; Khoruzhnikov, S.; Grudinin, V.; Sadov, O.; Kairkanov, A.

    2017-10-01

    The volume of incoming data in HEP is growing. The volume of data to be held for a long time is growing as well. Large volumes of data - big data - are distributed around the planet. Methods and approaches for organizing and managing globally distributed data storage are required. Several distributed storage systems exist for personal needs, such as own-cloud.org, pydio.com, seafile.com, and sparkleshare.org. At the enterprise level there are a number of systems, such as SWIFT (a distributed storage system that is part of OpenStack) and CEPH, which are mostly object storage. When the resources of several data centers are integrated, the organization of data links becomes a very important issue, especially if several parallel data links between data centers are used. The situation in data centers and in data links may vary each hour. That means each part of the distributed data storage has to be able to rearrange its usage of data links and storage servers in each data center. In addition, different requirements could appear for each customer of the distributed storage. The above topics are planned to be discussed in the data storage proposal.

  18. Magnetic domain wall shift registers for data storage applications

    NASA Astrophysics Data System (ADS)

    Read, Dan; O'Brien, L.; Zeng, H. T.; Lewis, E. R.; Petit, D.; Sampaio, J.; Thevenard, L.; Cowburn, R. P.

    2009-03-01

    Data storage devices based on magnetic domain walls (DWs) propagating through permalloy (Py) nanowires have been proposed [Allwood et al. Science 309, 1688 (2005), S. S. Parkin, US Patent 6,834,005 (2004)] and have attracted a great deal of attention. We experimentally demonstrate such a device using shift registers constructed from magnetic NOT gates used in combination with a globally applied rotating magnetic field. We have demonstrated data writing, propagation, and readout in individually addressable Py nanowires 90 nm wide and 10 nm thick. Electrical data writing is achieved using the Oersted field due to current pulses in gold stripes (4 μm wide, 150 nm thick), patterned on top of and perpendicular to the nanowires. The conduit-like properties of the nanowires allow the propagation of data sequences over distances greater than 100 μm. Using spatially resolved magneto-optical Kerr effect (MOKE) measurements we can directly detect the propagation of single DWs in individual nanostructures without requiring data averaging. Electrical readout was demonstrated by detecting the presence of DWs at deliberately introduced pinning sites in the wire.

  19. "Recent experiences and future expectations in data storage technology"

    NASA Astrophysics Data System (ADS)

    Pfister, Jack

    1990-08-01

    For more than 10 years the conventional medium for High Energy Physics has been 9-track magnetic tape in various densities. More recently, especially in Europe, the IBM 3480 technology has been adopted, while in the United States, especially at Fermilab, 8 mm is being used by the largest experiments as a primary recording medium, and where possible they are using 8 mm for the production, analysis and distribution of data summary tapes. VHS and Digital Audio tape have recurrently appeared but seem to serve primarily as back-up storage media. The reasons for what appears to be a radical departure are many. Economics (media and controllers are inexpensive), form factor (two gigabytes per shirt pocket), and convenience (fewer mounts/dismounts per minute) are dominant among the reasons. The traditional data media suppliers seem to have been content to evolve the traditional media at their own pace with only modest enhancements, primarily in "value engineering" of extant products. Meanwhile, start-up companies providing small systems and workstations sought other media, both to reduce the price of their offerings and to respond to the real need for lower cost back-up for lower cost systems. This is happening in a market context where traditional computer systems vendors are leaving the tape market altogether or shifting to "3480" technology, which has certainly created a climate for reconsideration and change. The newest data storage products, in most cases, are not coming from technologies developed by the computing industry but from the audio and video industry. Just where these flopticals, opticals, 19 mm tape and the new underlying technologies, such as "digital paper", may fit in the HEP computing requirement picture will be reviewed. What these technologies do for and to HEP will be discussed, along with some suggestions for a methodology for tracking and evaluating extant and emerging technologies.

  20. Technology for organization of the onboard system for processing and storage of ERS data for ultrasmall spacecraft

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.

    2017-10-01

    The task of processing and analyzing Earth remote sensing data on board an ultra-small spacecraft is relevant, considering the significant energy expenditure of data transfer and the low processing power of onboard computers. This raises the issue of effective and reliable storage of the overall information flow obtained from onboard data collection systems, including Earth remote sensing data, in a specialized database. The paper considers the peculiarities of database management system operation with a multilevel memory structure. A format has been developed for storing data in the database that describes the physical structure of the database and contains the parameters required for loading information. Such a structure reduces the memory occupied by the database because key values need not be stored separately. The paper presents the architecture of a relational database management system designed to be embedded in the onboard software of an ultra-small spacecraft. A database for storing various kinds of information, including Earth remote sensing data, can be built with this database management system for subsequent processing. The proposed database management system architecture places low demands on the computing power and memory resources available on board an ultra-small spacecraft. Data integrity is ensured during input and modification of the structured information.

  1. Assimilation of GRACE Terrestrial Water Storage Data into a Land Surface Model

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; Zaitchik, Benjamin F.; Rodell, Matt

    2008-01-01

    The NASA Gravity Recovery and Climate Experiment (GRACE) system of satellites provides observations of large-scale, monthly terrestrial water storage (TWS) changes. In this presentation we describe a land data assimilation system that ingests GRACE observations and show that the assimilation improves estimates of water storage and fluxes, as evaluated against independent measurements. The ensemble-based land data assimilation system uses a Kalman smoother approach along with the NASA Catchment Land Surface Model (CLSM). We assimilated GRACE-derived TWS anomalies for each of the four major sub-basins of the Mississippi into CLSM. Compared with the open-loop (no assimilation) CLSM simulation, assimilation estimates of groundwater variability exhibited enhanced skill with respect to measured groundwater. Assimilation also significantly increased the correlation between simulated TWS and gauged river flow for all four sub-basins and for the Mississippi River basin itself. In addition, model performance was evaluated for watersheds smaller than the scale of GRACE observations; in the majority of cases, GRACE assimilation led to increased correlation between TWS estimates and gauged river flow, indicating that data assimilation has considerable potential to downscale GRACE data for hydrological applications. We will also describe how the output from the GRACE land data assimilation system is now being prepared for use in the North American Drought Monitor.

  2. Handling the data management needs of high-throughput sequencing data: SpeedGene, a compression algorithm for the efficient storage of genetic data

    PubMed Central

    2012-01-01

    Background As Next-Generation Sequencing data becomes available, existing hardware environments do not provide sufficient storage space and computational power to store and process the data due to their enormous size. This is and will be a frequent problem encountered every day by researchers who are working on genetic data. There are some options available for compressing and storing such data, such as general-purpose compression software, PBAT/PLINK binary format, etc. However, these currently available methods either do not offer sufficient compression rates, or require a great amount of CPU time for decompression and loading every time the data is accessed. Results Here, we propose a novel and simple algorithm for storing such sequencing data. We show that the compression factor of the algorithm ranges from 16 to several hundred, which potentially allows SNP data of hundreds of gigabytes to be stored in hundreds of megabytes. We provide a C++ implementation of the algorithm, which supports direct loading and parallel loading of the compressed format without requiring extra time for decompression. By applying the algorithm to simulated and real datasets, we show that the algorithm gives a greater compression rate than commonly used compression methods, and the data-loading process takes less time. Also, the C++ library provides direct data-retrieving functions, which allows the compressed information to be easily accessed by other C++ programs. Conclusions The SpeedGene algorithm enables the storage and analysis of next generation sequencing data in the current hardware environment, making system upgrades unnecessary. PMID:22591016
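
    To make the order of magnitude concrete, the sketch below shows the simplest ingredient of such genotype compression, packing biallelic SNP calls into 2 bits each; it only illustrates the idea and is not the SpeedGene format or algorithm:

    ```python
    # Illustrative sketch only: SpeedGene's actual format is not reproduced here.
    # Biallelic genotypes (0, 1, 2 minor-allele copies, or missing) are packed
    # into 2 bits each, an ~4x reduction versus one byte per genotype before
    # any further encoding.
    CODES = {0: 0b00, 1: 0b01, 2: 0b10, None: 0b11}   # genotype -> 2-bit code
    DECODE = {v: k for k, v in CODES.items()}

    def pack(genotypes):
        """Pack a sequence of genotypes into bytes, 4 genotypes per byte."""
        out = bytearray((len(genotypes) + 3) // 4)
        for i, g in enumerate(genotypes):
            out[i // 4] |= CODES[g] << (2 * (i % 4))
        return bytes(out)

    def unpack(packed, n):
        """Recover the first n genotypes from a packed byte string."""
        return [DECODE[(packed[i // 4] >> (2 * (i % 4))) & 0b11] for i in range(n)]

    genos = [0, 1, 2, None, 2, 2, 0, 1, 1]
    packed = pack(genos)
    assert unpack(packed, len(genos)) == genos
    print(f"{len(genos)} genotypes stored in {len(packed)} bytes")
    ```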

  3. Pulse Code Modulation (PCM) data storage and analysis using a microcomputer

    NASA Technical Reports Server (NTRS)

    Massey, D. E.

    1986-01-01

    A PCM storage device/data analyzer is described. This instrument is a peripheral plug-in board especially built to enable a personal computer to store and analyze data from a PCM source. This board and custom-written software turn a computer into a snapshot PCM decommutator. This instrument will take in and store many hundreds or thousands of PCM telemetry data frames, then sift through them over and over again. The data can be converted to any number base and displayed, examined for any bit dropouts or changes in particular words or frames, graphically plotted, or statistically analyzed. This device was designed and built for use on the NASA Sounding Rocket Program for PCM encoder configuration and testing.

  4. Bookshelf: a simple curation system for the storage of biomolecular simulation data.

    PubMed

    Vohra, Shabana; Hall, Benjamin A; Holdbrook, Daniel A; Khalid, Syma; Biggin, Philip C

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that it can be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about meta-data is difficult and thus the success of any storage system will ultimately depend on how well used by end-users the system is. In this respect we suggest that even a minimal amount of metadata if stored in a sensible fashion is useful, if only at the level of individual research groups. We discuss here, a simple database system which we call 'Bookshelf', that uses python in conjunction with a mysql database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to the common problem amongst biomolecular simulation laboratories; the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/

  5. Bookshelf: a simple curation system for the storage of biomolecular simulation data

    PubMed Central

    Vohra, Shabana; Hall, Benjamin A.; Holdbrook, Daniel A.; Khalid, Syma; Biggin, Philip C.

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that it can be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about meta-data is difficult and thus the success of any storage system will ultimately depend on how well used by end-users the system is. In this respect we suggest that even a minimal amount of metadata if stored in a sensible fashion is useful, if only at the level of individual research groups. We discuss here, a simple database system which we call ‘Bookshelf’, that uses python in conjunction with a mysql database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to the common problem amongst biomolecular simulation laboratories; the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/ PMID:21169341
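
    A minimal sketch of the curation pattern described in the two records above follows; it is not the Bookshelf code itself, and sqlite3 stands in for the MySQL backend so that the example is self-contained. Each simulation contributes one small metadata row that points at the trajectory files left in place on disk:

    ```python
    # Minimal curation-layer sketch (not Bookshelf; sqlite3 stands in for MySQL).
    # The metadata burden per deposited simulation is deliberately tiny.
    import os
    import sqlite3
    import time

    conn = sqlite3.connect("bookshelf.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS simulation (
        sim_id    INTEGER PRIMARY KEY,
        system    TEXT NOT NULL,     -- e.g. protein/lipid system name
        engine    TEXT,              -- MD engine used
        length_ns REAL,
        path      TEXT NOT NULL,     -- where the trajectory lives on disk
        deposited TEXT
    )""")

    def deposit(system, engine, length_ns, path):
        """Log one simulation run in the curation database."""
        conn.execute("INSERT INTO simulation (system, engine, length_ns, path, deposited)"
                     " VALUES (?,?,?,?,?)",
                     (system, engine, length_ns, os.path.abspath(path),
                      time.strftime("%Y-%m-%d")))
        conn.commit()

    deposit("OmpA in POPC", "GROMACS", 100.0, "./runs/ompa_popc_run1")

    # Later retrieval: find every stored simulation of a given system
    for row in conn.execute("SELECT sim_id, path, length_ns FROM simulation"
                            " WHERE system LIKE ?", ("%OmpA%",)):
        print(row)
    ```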

  6. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
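
    As a toy illustration of the queuing-network idea (with made-up service times and visit ratios, and simple M/M/1 stations rather than the paper's validated model), the mean response time of an I/O request can be estimated by summing per-station residence times along the control and data paths:

    ```python
    # Toy open queueing-network illustration (not the paper's validated model).
    # Each resource on the I/O path is an M/M/1 queue; residence time per station
    # is R_i = V_i * S_i / (1 - lambda * V_i * S_i).  Numbers are illustrative only.
    def residence_time(arrival_rate, service_time, visits=1.0):
        util = arrival_rate * visits * service_time
        if util >= 1.0:
            raise ValueError("station saturated")
        return visits * service_time / (1.0 - util)

    def io_response_time(arrival_rate, stations):
        """stations: list of (name, service_time_s, visit_ratio)."""
        return sum(residence_time(arrival_rate, s, v) for _, s, v in stations)

    # Host-attached storage: bulk data flows through the host as well as the device.
    host_attached = [("host CPU", 0.004, 2.0), ("disk/tape", 0.020, 1.0)]
    # Network-attached storage: control via host, bulk data direct over a fast network.
    network_attached = [("host CPU", 0.002, 1.0), ("fast net", 0.003, 1.0),
                        ("disk/tape", 0.020, 1.0)]

    for rate in (5, 15, 25):   # requests per second
        print(rate,
              round(io_response_time(rate, host_attached), 4),
              round(io_response_time(rate, network_attached), 4))
    ```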

  7. A price and performance comparison of three different storage architectures for data in cloud-based systems

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Jelenak, A.; Potter, N.; Fulker, D. W.; Habermann, T.

    2017-12-01

    Providing data services based on cloud computing technology that are equivalent to those developed for traditional computing and storage systems is critical for successful migration to cloud-based architectures for data production, scientific analysis and storage. OPeNDAP Web-service capabilities (comprising the Data Access Protocol (DAP) specification plus open-source software for realizing DAP in servers and clients) are among the most widely deployed means for achieving data-as-service functionality in the Earth sciences. OPeNDAP services are especially common in traditional data center environments where servers offer access to datasets stored in (very large) file systems, and a preponderance of the source data for these services is stored in the Hierarchical Data Format Version 5 (HDF5). Three candidate architectures for serving NASA satellite Earth Science HDF5 data via Hyrax running on Amazon Web Services (AWS) were developed and their performance examined for a set of representative use cases. Performance was assessed in terms of both runtime and incurred cost. The three architectures differ in how HDF5 files are stored in the Amazon Simple Storage Service (S3) and how the Hyrax server (as an EC2 instance) retrieves their data. Results for both serial and parallel access to HDF5 data in S3 will be presented. While the study focused on HDF5 data, OPeNDAP and the Hyrax data server, the architectures are generic and the analysis can be extrapolated to many different data formats, web APIs, and data servers.

  8. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
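
    A highly simplified sketch of the file-to-object conversion idea follows; it is not the patented PLFS middleware, and the bucket name and directory layout are placeholders. Each checkpoint file becomes one cloud object, and a small manifest object makes the archive self-describing:

    ```python
    # Simplified sketch of archiving checkpoint files as cloud objects
    # (not the patented middleware; bucket and paths are placeholders).
    import json
    import os
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "hpc-archive-example"          # assumed, pre-existing bucket

    def archive_checkpoints(checkpoint_dir, job_id):
        """Convert a directory of checkpoint files into cloud objects."""
        manifest = []
        for name in sorted(os.listdir(checkpoint_dir)):
            path = os.path.join(checkpoint_dir, name)
            key = f"{job_id}/{name}"
            s3.upload_file(path, BUCKET, key)           # one object per file
            manifest.append({"key": key, "bytes": os.path.getsize(path)})
        # store the manifest itself so the archive can be restored later
        s3.put_object(Bucket=BUCKET, Key=f"{job_id}/manifest.json",
                      Body=json.dumps(manifest).encode())
        return manifest

    # archive_checkpoints("/scratch/job1234/ckpt", "job1234")
    ```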

  9. Threshold response using modulated continuous wave illumination for multilayer 3D optical data storage

    NASA Astrophysics Data System (ADS)

    Saini, A.; Christenson, C. W.; Khattab, T. A.; Wang, R.; Twieg, R. J.; Singer, K. D.

    2017-01-01

    In order to achieve a high capacity 3D optical data storage medium, a nonlinear or threshold writing process is necessary to localize data in the axial dimension. To this end, commercial multilayer discs use thermal ablation of metal films or phase change materials to realize such a threshold process. This paper addresses a threshold writing mechanism relevant to recently reported fluorescence-based data storage in dye-doped co-extruded multilayer films. To gain understanding of the essential physics, single layer spun coat films were used so that the data is easily accessible by analytical techniques. Data were written by attenuating the fluorescence using nanosecond-range exposure times from a 488 nm continuous wave laser overlapping with the single photon absorption spectrum. The threshold writing process was studied over a range of exposure times and intensities, and with different fluorescent dyes. It was found that all of the dyes have a common temperature threshold where fluorescence begins to attenuate, and the physical nature of the thermal process was investigated.

  10. Random-access technique for modular bathymetry data storage in a continental shelf wave refraction program

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1974-01-01

    A study was conducted of an alternate method for storage and use of bathymetry data in the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave-refraction computer program. The regional bathymetry array was divided into 105 indexed modules which can be read individually into memory in a nonsequential manner from a peripheral file using special random-access subroutines. In running a sample refraction case, a 75-percent decrease in program field length was achieved by using the random-access storage method in comparison with the conventional method of total regional array storage. This field-length decrease was accompanied by a comparative 5-percent increase in central processing time and a 477-percent increase in the number of operating-system calls. A comparative Langley Research Center computer system cost savings of 68 percent was achieved by using the random-access storage method.
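
    A schematic example of the module-based random access described above is shown below; the module size, layout, and file format are hypothetical, chosen only to illustrate how a single module can be read without loading the whole regional array:

    ```python
    # Schematic illustration of module-indexed random access (hypothetical layout):
    # the regional depth grid is split into fixed-size modules stored back to back
    # in one binary file, and only the module covering the point of interest is read.
    import numpy as np

    MODULE_ROWS, MODULE_COLS = 64, 64          # grid points per module (assumed)
    MODULES_PER_ROW = 15                       # modules across the region (assumed)
    BYTES_PER_MODULE = MODULE_ROWS * MODULE_COLS * 4   # float32 depths

    def read_module(f, module_index):
        """Seek directly to one bathymetry module and return it as a 2-D array."""
        f.seek(module_index * BYTES_PER_MODULE)
        buf = f.read(BYTES_PER_MODULE)
        return np.frombuffer(buf, dtype=np.float32).reshape(MODULE_ROWS, MODULE_COLS)

    def depth_at(f, row, col):
        """Look up a depth value, loading only the module that contains it."""
        m_index = (row // MODULE_ROWS) * MODULES_PER_ROW + (col // MODULE_COLS)
        module = read_module(f, m_index)
        return module[row % MODULE_ROWS, col % MODULE_COLS]

    # with open("bathymetry_modules.bin", "rb") as f:
    #     print(depth_at(f, row=500, col=321))
    ```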

  11. Optical response of photopolymer materials for holographic data storage applications.

    PubMed

    Sheridan, J T; Gleeson, M R; Close, C E; Kelly, J V

    2007-01-01

    We briefly review the application of photopolymer recording materials in the area of holographic data storage. In particular we discuss the recent development of the Non-local Polymerisation Driven Diffusion model. Applying this model we develop simple first-order analytic expressions describing the spatial frequency response of photopolymer materials. The assumptions made in the derivation of these formulae are described and their ranges of validity are examined. The effects of particular physical parameters of a photopolymer on the material response are discussed.

  12. Model-independent and fast determination of optical functions in storage rings via multiturn and closed-orbit data

    NASA Astrophysics Data System (ADS)

    Riemann, Bernard; Grete, Patrick; Weis, Thomas

    2011-06-01

    Multiturn (or turn-by-turn) data acquisition has proven to be a new source of direct measurements for Twiss parameters in storage rings. On the other hand, closed-orbit measurements are a long-known tool for analyzing closed-orbit perturbations with conventional beam position monitor (BPM) systems and are necessarily available at every storage ring. This paper aims at combining the advantages of multiturn measurements and closed-orbit data. We show that only two multiturn BPMs and four correctors in one localized drift space in the storage ring (diagnostic drift) are sufficient for model-independent and absolute measuring of β and φ functions at all BPMs, including the conventional ones, instead of requiring all BPMs being equipped with multiturn electronics.

  13. A study of Bangladesh's sub-surface water storages using satellite products and data assimilation scheme.

    PubMed

    Khaki, M; Forootan, E; Kuhn, M; Awange, J; Papa, F; Shum, C K

    2018-06-01

    Climate change can significantly influence terrestrial water changes around the world, particularly in places that have proven to be more vulnerable, such as Bangladesh. In the past few decades, climate impacts, together with those of excessive human water use, have changed the country's water availability structure. In this study, we use multi-mission remotely sensed measurements along with a hydrological model to separately analyze groundwater and soil moisture variations for the period 2003-2013, and their interactions with rainfall in Bangladesh. To improve the model's estimates of water storages, terrestrial water storage (TWS) data obtained from the Gravity Recovery And Climate Experiment (GRACE) satellite mission are assimilated into the World-Wide Water Resources Assessment (W3RA) model using the ensemble-based sequential technique of the Square Root Analysis (SQRA) filter. We investigate the capability of the data assimilation approach to use a non-regional hydrological model for a regional case study. Based on these estimates, we investigate relationships between the model-derived sub-surface water storage changes and remotely sensed precipitation, as well as altimetry-derived river level variations in Bangladesh, by applying the empirical mode decomposition (EMD) method. A larger correlation is found between river level heights and rainfall (78% on average) than between groundwater storage variations and rainfall (57% on average). The results indicate a significant decline in groundwater storage (∼32% reduction) for Bangladesh between 2003 and 2013, which is equivalent to an average rate of 8.73 ± 2.45 mm/year. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Pulse Code Modulation (PCM) data storage and analysis using a microcomputer

    NASA Technical Reports Server (NTRS)

    Massey, D. E.

    1986-01-01

    The current widespread use of microcomputers has led to the creation of some very low-cost instrumentation. A Pulse Code Modulation (PCM) storage device/data analyzer -- a peripheral plug-in board especially constructed to enable a personal computer to store and analyze data from a PCM source -- was designed and built for use on the NASA Sounding Rocket Program for PCM encoder configuration and testing. This board and custom-written software turn a computer into a snapshot PCM decommutator which will accept and store many hundreds or thousands of PCM telemetry data frames, then sift through them repeatedly. These data can be converted to any number base and displayed, examined for any bit dropouts or changes (in particular words or frames), graphically plotted, or statistically analyzed.

  15. Ensuring Data Storage Security in Tree cast Routing Architecture for Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kumar, K. E. Naresh; Sagar, U. Vidya; Waheed, Mohd. Abdul

    2010-10-01

    This paper presents recent advances in technology that have made possible low-cost, low-power wireless sensors with efficient energy consumption. A network of such nodes can coordinate among themselves for distributed sensing and processing of certain data. We propose an architecture that provides a stateless solution for efficient routing in wireless sensor networks. This type of architecture is known as Tree Cast. We propose a unique method of address allocation, building up multiple disjoint trees which are geographically inter-twined and rooted at the data sink. Using these trees, routing messages to and from the sink node without maintaining any routing state in the sensor nodes is possible. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, this routing architecture moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this paper, we focus on data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in this architecture, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server

  16. Data storage for managing the health enterprise and achieving business continuity.

    PubMed

    Hinegardner, Sam

    2003-01-01

    As organizations move away from a silo mentality to a vision of enterprise-level information, more healthcare IT departments are rejecting the idea of information storage as an isolated, system-by-system solution. IT executives want storage solutions that act as a strategic element of an IT infrastructure, centralizing storage management activities to effectively reduce operational overhead and costs. This article focuses on three areas of enterprise storage: tape, disk, and disaster avoidance.

  17. Hydrological storage variations in a lake water balance, observed from multi-sensor satellite data and hydrological models.

    NASA Astrophysics Data System (ADS)

    Singh, Alka; Seitz, Florian; Schwatke, Christian; Guentner, Andreas

    2013-04-01

    Freshwater lakes and reservoirs account for 74.5% of continental water storage in surface water bodies, while only 1.8% resides in rivers. Lakes and reservoirs are a key component of the continental hydrological cycle, but in-situ monitoring networks are very limited either because of the sparse spatial distribution of gauges or because of national data policy. Monitoring and predicting extreme events is very challenging in that case. In this study we demonstrate the use of optical remote sensing, satellite altimetry and the GRACE gravity field mission to monitor the lake water storage variations in the Aral Sea. The Aral Sea is one of the most unfortunate examples of a large anthropogenic catastrophe. The fourth largest lake of the 1960s has been desertified over more than 75% of its area due to the diversion of its primary rivers for irrigation purposes. Our study is focused on the time frame of the GRACE mission; therefore we consider changes from 2002 onwards. Continuous monthly time series of water masks from Landsat satellite data and water levels from altimetry missions were derived. Monthly volumetric variations of the lake water storage were computed by intersecting a digital elevation model of the lake with the respective water mask and altimetry water level. With this approach we obtained the volume from two independent remote sensing methods and reduced the error in the estimated volume through a least-squares adjustment. The resultant variations were then compared with the mass variability observed by GRACE. In addition, GRACE estimates of water storage variations were compared with simulation results of the Water Gap Hydrology Model (WGHM). The different observations from all missions agree that the lake reached an absolute minimum in autumn 2009. A marked reversal of the negative trend occurred in 2010 but water storage in the lake decreased again afterwards. The results reveal that water storage variations in the Aral Sea are indeed the principal, but not the only contributor to the GRACE signal of
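
    The volume computation described above can be illustrated with the following simplified sketch (not the authors' processing chain): the stored volume is the water depth, i.e. the altimetric water level minus the DEM elevation, summed over wetted cells and multiplied by the cell area, optionally restricted to the optically derived water mask:

    ```python
    # Simplified illustration of volume from a DEM plus an altimetric water level
    # (not the authors' processing chain).
    import numpy as np

    def lake_volume(dem, water_level, cell_area_m2, water_mask=None):
        """Volume (m^3) of water standing above the DEM at the given level."""
        depth = water_level - dem                    # per-cell water depth
        depth = np.where(depth > 0.0, depth, 0.0)    # dry cells contribute nothing
        if water_mask is not None:
            depth = np.where(water_mask, depth, 0.0) # honour the optical water mask
        return float(depth.sum() * cell_area_m2)

    # Tiny synthetic example: a 3x3 bowl-shaped bed, 30 m resolution cells
    dem = np.array([[52.0, 51.0, 52.0],
                    [51.0, 50.0, 51.0],
                    [52.0, 51.0, 52.0]])
    print(lake_volume(dem, water_level=51.5, cell_area_m2=30.0 * 30.0))
    ```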

  18. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization.

    PubMed

    Morrell, William C; Birkel, Garrett W; Forrer, Mark; Lopez, Teresa; Backman, Tyler W H; Dussault, Michael; Petzold, Christopher J; Baidoo, Edward E K; Costello, Zak; Ando, David; Alonso-Gutierrez, Jorge; George, Kevin W; Mukhopadhyay, Aindrila; Vaino, Ian; Keasling, Jay D; Adams, Paul D; Hillson, Nathan J; Garcia Martin, Hector

    2017-12-15

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high-quality data to be parametrized and tested, which are not generally available. Here, we present the Experiment Data Depot (EDD), an online tool designed as a repository of experimental data and metadata. EDD provides a convenient way to upload a variety of data types, visualize these data, and export them in a standardized fashion for use with predictive algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  19. A data driven model for the impact of IFT and density variations on CO2 storage capacity in geologic formations

    NASA Astrophysics Data System (ADS)

    Nomeli, Mohammad A.; Riaz, Amir

    2017-09-01

    Carbon dioxide (CO2) storage in depleted hydrocarbon reservoirs and deep saline aquifers is one of the most promising solutions for decreasing CO2 concentration in the atmosphere. One of the important issues for CO2 storage in subsurface environments is the sealing efficiency of low-permeable cap-rocks overlying potential CO2 storage reservoirs. Though we focus on the effect of IFT in this study as a factor influencing sealing efficiency or storage capacity, other factors such as interfacial interactions, wettability, pore radius and interfacial mass transfer also affect the mobility and storage capacity of the CO2 phase in the pore space. The study of the variation of IFT is however important because the pressure needed to penetrate a pore depends on both the pore size and the interfacial tension. Hence small variations in IFT can affect flow across a large population of pores. A novel model is proposed to find the IFT of the ternary systems (CO2/brine-salt) in a range of temperatures (300-373 K), pressures (50-250 bar), and up to 6 molal salinity applicable to CO2 storage in geological formations through a multivariate non-linear regression of experimental data. The method uses a general empirical model for the quaternary system CO2/brine-salts that can be made to coincide with experimental data for a variety of solutions. We introduce correction parameters into the model, which compensate for uncertainties and enforce agreement with experimental data. The results for IFT show a strong dependence on temperature, pressure, and salinity. The model has been found to describe the experimental data in the appropriate parameter space with reasonable precision. Finally, we use the new model to evaluate the effects of formation depth on the actual efficiency of CO2 storage. The results indicate that, in the case of CO2 storage in deep subsurface environments as a global-warming mitigation strategy, CO2 storage capacity increases with reservoir depth.
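
    The abstract does not give the functional form of the empirical IFT correlation, so the snippet below only illustrates the fitting step it describes: a multivariate non-linear regression of IFT measurements against temperature, pressure and salinity, using a purely hypothetical model form and made-up data points.

      import numpy as np
      from scipy.optimize import curve_fit

      def ift_model(X, a0, a1, a2, a3):
          """Hypothetical correlation: IFT (mN/m) vs T (K), P (bar), molality m."""
          T, P, m = X
          return a0 + a1 * T + a2 * np.log(P) + a3 * m

      # hypothetical (T, P, molality) -> IFT measurements
      T = np.array([300.0, 323.0, 348.0, 373.0, 323.0, 348.0])
      P = np.array([50.0, 100.0, 150.0, 250.0, 200.0, 100.0])
      m = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 3.0])
      ift = np.array([38.0, 33.5, 31.0, 29.5, 34.0, 32.0])

      params, cov = curve_fit(ift_model, (T, P, m), ift)
      print(params)   # fitted coefficients of the assumed correlation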

  20. Packaged digital holographic data storage with fast access

    NASA Astrophysics Data System (ADS)

    Ma, Jian; Chang, Tallis Y.; Choi, Sung; Hong, John H.

    1998-11-01

    Recent investigations in holographic mass memory systems have produced proof of concept demonstrations that have highlighted their potential for providing unprecedented capacity, data transfer rates and fast random access performance. Most such investigations have been exploratory in nature and largely confined to benchtop experiments in which the practical constraints of packaging and environmental concerns have been ignored. We have embarked on an effort to demonstrate the holographic mass memory concept by developing a compact prototype system geared for avionics and similar applications which demand the following features (mostly interdependent factors): (1) solid state design (no moving parts), (2) fast data seek time, (3) robustness with respect to environmental factors (temperature, vibration, shock). In this paper, we report on the development and demonstration of two systems, one with 100 Mbytes and the other with more than 1 Gbyte of storage capacity. Both systems feature solid state design with the addressing mechanism realized with acousto-optic deflectors that are capable of better than 50 microseconds data seek time. Since the basic designs for the two systems are similar, we describe only the larger system in detail. The operation of the smaller system has been demonstrated in various environments including hand-held operation and thermal/mechanical shock, and a photograph of the smaller system is provided as well as actual digital data retrieved from the same system.

  1. Holographic storage of three-dimensional image and data using photopolymer and polymer dispersed liquid crystal films

    NASA Astrophysics Data System (ADS)

    Gao, Hong-Yue; Liu, Pan; Zeng, Chao; Yao, Qiu-Xiang; Zheng, Zhiqiang; Liu, Jicheng; Zheng, Huadong; Yu, Ying-Jie; Zeng, Zhen-Xiang; Sun, Tao

    2016-09-01

    We present holographic storage of three-dimensional (3D) images and data in a photopolymer film without any applied electric field. Its absorption and diffraction efficiency are measured, and a reflective analog hologram of a real object and an image of digital information are recorded in the films. The photopolymer is compared with polymer-dispersed liquid crystals as holographic materials. Although the holographic diffraction efficiency of the former is slightly lower than that of the latter, this work demonstrates that the photopolymer is more suitable for analog holograms and permanent big-data storage because of its high definition and because it requires no high-voltage electric field. Therefore, our study proposes a potential holographic storage material for application in large-size static 3D holographic displays, including analog hologram displays, digital hologram prints, and holographic disks. Project supported by the National Natural Science Foundation of China (Grant Nos. 11474194, 11004037, and 61101176) and the Natural Science Foundation of Shanghai, China (Grant No. 14ZR1415500).

  2. Apparatus And Method For Reconstructing Data Using Cross-Parity Stripes On Storage Media

    DOEpatents

    Hughes, James Prescott

    2003-06-17

    An apparatus and method for reconstructing missing data using cross-parity stripes on a storage medium is provided. The apparatus and method may operate on data symbols having sizes greater than a data bit. The apparatus and method makes use of a plurality of parity stripes for reconstructing missing data stripes. The parity symbol values in the parity stripes are used as a basis for determining the value of the missing data symbol in a data stripe. A correction matrix is shifted along the data stripes, correcting missing data symbols as it is shifted. The correction is performed from the outside data stripes towards the inner data stripes to thereby use previously reconstructed data symbols to reconstruct other missing data symbols.
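.
    The patent abstract does not give the exact parity arithmetic, so the following is only a minimal illustration of the general idea of recovering a lost stripe from the surviving stripes plus a parity stripe, using byte-wise XOR parity over hypothetical data; the patented method operates on multi-bit symbols and uses several cross-parity stripes shifted across the data.

      def xor_stripe(stripes):
          """Byte-wise XOR parity over a list of equal-length byte stripes."""
          parity = bytearray(len(stripes[0]))
          for stripe in stripes:
              for i, b in enumerate(stripe):
                  parity[i] ^= b
          return bytes(parity)

      # hypothetical data stripes and their parity stripe
      data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
      parity = xor_stripe(data)

      # reconstruct a lost stripe (index 1) from the survivors plus the parity stripe
      lost = 1
      survivors = [s for i, s in enumerate(data) if i != lost]
      recovered = xor_stripe(survivors + [parity])
      assert recovered == data[lost]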

  3. A Comparison of Groundwater Storage Using GRACE Data, Groundwater Levels, and a Hydrological Model in California's Central Valley

    NASA Technical Reports Server (NTRS)

    Kuss, Amber; Brandt, William; Randall, Joshua; Floyd, Bridget; Bourai, Abdelwahab; Newcomer, Michelle; Skiles, Joseph; Schmidt, Cindy

    2011-01-01

    The Gravity Recovery and Climate Experiment (GRACE) measures changes in total water storage (TWS) remotely, and may provide additional insight to the use of well-based data in California's agriculturally productive Central Valley region. Under current California law, well owners are not required to report groundwater extraction rates, making estimation of total groundwater extraction difficult. As a result, other groundwater change detection techniques may prove useful. From October 2002 to September 2009, GRACE was used to map changes in TWS for the three hydrological regions (the Sacramento River Basin, the San Joaquin River Basin, and the Tulare Lake Basin) encompassing the Central Valley aquifer. Net groundwater storage changes were calculated from the changes in TWS for each of the three hydrological regions and by incorporating estimates for additional components of the hydrological budget including precipitation, evapotranspiration, soil moisture, snow pack, and surface water storage. The calculated changes in groundwater storage were then compared to simulated values from the California Department of Water Resources' Central Valley Groundwater-Surface Water Simulation Model (C2VSIM) and their Water Data Library (WDL) Geographic Information System (GIS) change in storage tool. The results from the three methods were compared. Downscaling GRACE data into the 21 smaller Central Valley sub-regions included in C2VSIM was also evaluated. This work has the potential to improve California's groundwater resource management and use of existing hydrological models for the Central Valley.
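
    A sketch of the water-budget bookkeeping implied above: the groundwater storage change is obtained as the residual after subtracting the other budget components from the GRACE total water storage change. Component names and the monthly anomaly values are assumptions for illustration only.

      def groundwater_change(d_tws_mm, d_soil_mm, d_snow_mm, d_surface_mm):
          """Residual groundwater storage change (mm of equivalent water height)."""
          return d_tws_mm - (d_soil_mm + d_snow_mm + d_surface_mm)

      # hypothetical monthly anomalies for one hydrological region
      print(groundwater_change(d_tws_mm=-35.0, d_soil_mm=-10.0,
                               d_snow_mm=-5.0, d_surface_mm=-2.0))   # -> -18.0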

  4. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrell, William C.; Birkel, Garrett W.; Forrer, Mark

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high-quality data to be parametrized and tested, which are not generally available. Here, we present the Experiment Data Depot (EDD), an online tool designed as a repository of experimental data and metadata. EDD provides a convenient way to upload a variety of data types, visualize these data, and export them in a standardized fashion for use with predictive algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  5. The Experiment Data Depot: A Web-Based Software Tool for Biological Experimental Data Storage, Sharing, and Visualization

    DOE PAGES

    Morrell, William C.; Birkel, Garrett W.; Forrer, Mark; ...

    2017-08-21

    Although recent advances in synthetic biology allow us to produce biological designs more efficiently than ever, our ability to predict the end result of these designs is still nascent. Predictive models require large amounts of high-quality data to be parametrized and tested, which are not generally available. Here, we present the Experiment Data Depot (EDD), an online tool designed as a repository of experimental data and metadata. EDD provides a convenient way to upload a variety of data types, visualize these data, and export them in a standardized fashion for use with predictive algorithms. In this paper, we describe EDD and showcase its utility for three different use cases: storage of characterized synthetic biology parts, leveraging proteomics data to improve biofuel yield, and the use of extracellular metabolite concentrations to predict intracellular metabolic fluxes.

  6. Discrete event simulation and the resultant data storage system response in the operational mission environment of Jupiter-Saturn /Voyager/ spacecraft

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1978-01-01

    The Data Storage Subsystem Simulator (DSSSIM) simulating (by ground software) occurrence of discrete events in the Voyager mission is described. Functional requirements for Data Storage Subsystems (DSS) simulation are discussed, and discrete event simulation/DSSSIM processing is covered. Four types of outputs associated with a typical DSSSIM run are presented, and DSSSIM limitations and constraints are outlined.

  7. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    NASA Astrophysics Data System (ADS)

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream due to stream hydraulic conditions and the characteristics of the storage zones. Analytical solutions of transient storage models in the literature do not cover spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously applies integral transforms to the concentrations in the stream and in the storage zones, using a single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the multiple-zone transient storage model. The derived semi-analytical solution is validated against field data from the literature. Good agreement between the computed data and the field data is obtained. Some illustrative examples are formulated to demonstrate the applications of the present solution. It is shown that solute transport can be greatly affected by the variation of the mass exchange coefficient and the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended to calibrate the parameters.
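
    For orientation, the single-zone transient storage model that this family of models builds on is commonly written as below (conventional notation; the paper generalizes it to several storage zones with spatially varying exchange coefficients and storage cross-sections, and solves it by a generalized integral transform):

      \frac{\partial C}{\partial t} = -\frac{Q}{A}\frac{\partial C}{\partial x}
        + \frac{1}{A}\frac{\partial}{\partial x}\!\left( A D \frac{\partial C}{\partial x} \right)
        + \alpha \left( C_s - C \right), \qquad
      \frac{\partial C_s}{\partial t} = \alpha \frac{A}{A_s} \left( C - C_s \right),

    where C and C_s are the solute concentrations in the stream and in the storage zone, A and A_s the corresponding cross-sectional areas, Q the discharge, D the dispersion coefficient, and alpha the mass exchange coefficient (allowed to vary with x in the spatially non-uniform case).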

  8. Tenth Goddard Conference on Mass Storage Systems and Technologies in Cooperation with the Nineteenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    2002-01-01

    This document contains copies of those technical papers received in time for publication prior to the Tenth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Nineteenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center April 15-18, 2002. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the ingest, storage, and management of large volumes of data. The Conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, future of current technology, storage networking with emphasis on IP storage, performance, standards, site reports, and vendor solutions. Tutorials will be available on perpendicular magnetic recording, object based storage, storage virtualization and IP storage.

  9. Online & Offline data storage and data processing at the European XFEL facility

    NASA Astrophysics Data System (ADS)

    Gasthuber, Martin; Dietrich, Stefan; Malka, Janusz; Kuhn, Manuela; Ensslin, Uwe; Wrona, Krzysztof; Szuba, Janusz

    2017-10-01

    For the upcoming experiments at the European XFEL light source facility, a new online and offline data processing and storage infrastructure is currently being built and verified. Based on the experience of the system being developed for the Petra III light source at DESY, presented at the last CHEP conference, we further develop the system to cope with the much higher volumes and rates (~50 GB/sec) together with a more complex data analysis and infrastructure conditions (i.e. long range InfiniBand connections). This work is being carried out in a collaboration of DESY/IT and European XFEL, with technology support from IBM Research. This presentation will briefly summarize the experience from one year of operation of the Petra III ([3]) system, continue with a short description of the challenges for the European XFEL ([2]) experiments, and then present the main section, showing the proposed system for online and offline use with initial results from the real implementation (HW & SW). This will cover the selected cluster filesystem GPFS ([5]) including Quality of Service (QoS), extensive use of flash-based subsystems and other new and unique features this architecture will benefit from.

  10. Public storage for the Open Science Grid

    NASA Astrophysics Data System (ADS)

    Levshina, T.; Guru, A.

    2014-06-01

    The Open Science Grid infrastructure does not provide efficient means to manage public storage offered by participating sites. A Virtual Organization that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators and VO support personnel is required to allocate or rescind storage space. One of the main requirements for Public Storage implementation is that it should use SRM or GridFTP protocols to access the Storage Elements provided by the OSG Sites and not put any additional burden on sites. By policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, then storing the output data produced by the job on a worker node for subsequent download to the user's local institution. When the amount of data is significant, the only means to temporarily store the data is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System developed at RENCI as a front-end service to the OSG SEs. The current architecture, state of deployment and performance test results will be discussed. We will also provide examples of current usage of the system by beta-users.

  11. Generation system impacts of storage heating and storage water heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gellings, C.W.; Quade, A.W.; Stovall, J.P.

    Thermal energy storage systems offer the electric utility a means to change customer energy use patterns. At present, however, the costs and benefits to both the customers and utility are uncertain. As part of a nationwide demonstration program Public Service Electric and Gas Company installed storage space heating and water heating appliances in residential homes. Both the test homes and similar homes using conventional space and water heating appliances were monitored, allowing for detailed comparisons between the two systems. The purpose of this paper is to detail the methodology used and the results of studies completed on the generation system impacts of storage space and water heating systems. Other electric system impacts involving service entrance size, metering, secondary distribution and primary distribution were detailed in two previous IEEE papers. This paper is organized into three main sections. The first gives background data on PSE&G and their experience in a nationwide thermal storage demonstration project. The second section details results of the demonstration project and studies that have been performed on the impacts of thermal storage equipment. The last section reports on the conclusions arrived at concerning the impacts of thermal storage on generation. The study was conducted in early 1982 using data available at that time; while PSE&G system plans have changed since then, the conclusions are pertinent and valuable to those contemplating the impacts of thermal energy storage.

  12. The adaptive approach for storage assignment by mining data of warehouse management system for distribution centres

    NASA Astrophysics Data System (ADS)

    Ming-Huang Chiang, David; Lin, Chia-Ping; Chen, Mu-Chen

    2011-05-01

    Among distribution centre operations, order picking has been reported to be the most labour-intensive activity. Sophisticated storage assignment policies adopted to reduce the travel distance of order picking have been explored in the literature. Unfortunately, previous research has been devoted to locating entire products from scratch. Instead, this study intends to propose an adaptive approach, a Data Mining-based Storage Assignment approach (DMSA), to find the optimal storage assignment for newly delivered products that need to be put away when there is vacant shelf space in a distribution centre. In the DMSA, a new association index (AIX) is developed to evaluate the fitness between the put-away products and the unassigned storage locations by applying association rule mining. With AIX, the storage location assignment problem (SLAP) can be formulated and solved as a binary integer programming problem. To evaluate the performance of DMSA, a real-world order database of a distribution centre is obtained and used to compare the results from DMSA with a random assignment approach. It turns out that DMSA outperforms random assignment as the number of put-away products and the proportion of put-away products with high turnover rates increase.
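
    A toy sketch of the assignment step: given a (hypothetical) association index AIX between each put-away product and each vacant location, pick a one-to-one assignment with a large total index. The paper formulates and solves this as a binary integer program; the greedy routine below only illustrates the data flow, not the paper's solver.

      import numpy as np

      def greedy_assign(aix):
          """aix[p, l]: association index between put-away product p and vacant
          location l; returns a one-to-one {product: location} assignment chosen
          greedily by descending index (illustration only, not the BIP solver)."""
          assignment, used = {}, set()
          for idx in np.argsort(aix, axis=None)[::-1]:
              p, l = np.unravel_index(idx, aix.shape)
              if int(p) not in assignment and int(l) not in used:
                  assignment[int(p)] = int(l)
                  used.add(int(l))
          return assignment

      aix = np.array([[0.9, 0.2, 0.4],       # hypothetical association indices
                      [0.1, 0.8, 0.3],
                      [0.5, 0.6, 0.7]])
      print(greedy_assign(aix))              # {0: 0, 1: 1, 2: 2}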

  13. Novel carbazole derivatives with quinoline ring: synthesis, electronic transition, and two-photon absorption three-dimensional optical data storage.

    PubMed

    Li, Liang; Wang, Ping; Hu, Yanlei; Lin, Geng; Wu, Yiqun; Huang, Wenhao; Zhao, Quanzhong

    2015-03-15

    We designed carbazole units with extended π conjugation by employing the Vilsmeier formylation reaction and Knoevenagel condensation to introduce quinoline functional groups at the 3- or 3,6-positions of carbazole. Films of poly(methyl methacrylate) (PMMA) doped with the two compounds were prepared. To explore the electronic transition properties of these compounds, one-photon absorption properties were experimentally measured and theoretically calculated using time-dependent density functional theory. We characterized these films using an 800 nm, 120 fs Ti:sapphire laser, measuring two-photon absorption (TPA) fluorescence emission properties and TPA coefficients to obtain the TPA cross sections. A three-dimensional optical data storage experiment was conducted by using a TPA photoreaction with the 800 nm femtosecond laser on the film to obtain seven-layer optical data storage. The experiment proves that these carbazole derivatives are well suited for two-photon 3D optical storage, thus laying the foundation for research into multilayer high-density and ultra-high-density optical information storage materials. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Protecting Location Privacy for Outsourced Spatial Data in Cloud Storage

    PubMed Central

    Gui, Xiaolin; An, Jian; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices are fully developed, a large amount of spatial data needs to be outsourced to the cloud storage provider, so research on privacy protection for outsourced spatial data is receiving increasing attention from academia and industry. As a kind of spatial transformation method, the Hilbert curve is widely used to protect the location privacy of spatial data. However, sufficient security analysis of the standard Hilbert curve (SHC) has seldom been performed. In this paper, we propose an index modification method for SHC (SHC∗) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC∗ and DSC are more secure than SHC, and DSC achieves the best index generation performance. PMID:25097865
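
    For readers unfamiliar with the transformation being hardened here, the routine below computes the standard 2-D Hilbert-curve index of a point on an n x n grid (n a power of two); SHC-based schemes store such indices in place of raw coordinates. This is the textbook mapping, not the authors' modified SHC∗ or DSC curves, and the example coordinates are hypothetical.

      def _rot(n, x, y, rx, ry):
          """Rotate/flip a quadrant as required by the Hilbert-curve recursion."""
          if ry == 0:
              if rx == 1:
                  x, y = n - 1 - x, n - 1 - y
              x, y = y, x
          return x, y

      def xy2d(n, x, y):
          """Standard Hilbert-curve index of cell (x, y) on an n x n grid (n = 2^k)."""
          d, s = 0, n // 2
          while s > 0:
              rx = 1 if (x & s) else 0
              ry = 1 if (y & s) else 0
              d += s * s * ((3 * rx) ^ ry)
              x, y = _rot(n, x, y, rx, ry)
              s //= 2
          return d

      print(xy2d(16, 5, 11))   # index stored in place of the raw (5, 11) location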

  15. Protecting location privacy for outsourced spatial data in cloud storage.

    PubMed

    Tian, Feng; Gui, Xiaolin; An, Jian; Yang, Pan; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices are fully developed, a large amount of spatial data needs to be outsourced to the cloud storage provider, so research on privacy protection for outsourced spatial data is receiving increasing attention from academia and industry. As a kind of spatial transformation method, the Hilbert curve is widely used to protect the location privacy of spatial data. However, sufficient security analysis of the standard Hilbert curve (SHC) has seldom been performed. In this paper, we propose an index modification method for SHC (SHC(∗)) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC(∗) and DSC are more secure than SHC, and DSC achieves the best index generation performance.

  16. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip

    NASA Astrophysics Data System (ADS)

    Shulaker, Max M.; Hills, Gage; Park, Rebecca S.; Howe, Roger T.; Saraswat, Krishna; Wong, H.-S. Philip; Mitra, Subhasish

    2017-07-01

    The computing demands of future data-intensive applications will greatly exceed the capabilities of current electronics, and are unlikely to be met by isolated improvements in transistors, data storage technologies or integrated circuit architectures alone. Instead, transformative nanosystems, which use new nanotechnologies to simultaneously realize improved devices and new integrated circuit architectures, are required. Here we present a prototype of such a transformative nanosystem. It consists of more than one million resistive random-access memory cells and more than two million carbon-nanotube field-effect transistors—promising new nanotechnologies for use in energy-efficient digital logic circuits and for dense data storage—fabricated on vertically stacked layers in a single chip. Unlike conventional integrated circuit architectures, the layered fabrication realizes a three-dimensional integrated circuit architecture with fine-grained and dense vertical connectivity between layers of computing, data storage, and input and output (in this instance, sensing). As a result, our nanosystem can capture massive amounts of data every second, store it directly on-chip, perform in situ processing of the captured data, and produce ‘highly processed’ information. As a working prototype, our nanosystem senses and classifies ambient gases. Furthermore, because the layers are fabricated on top of silicon logic circuitry, our nanosystem is compatible with existing infrastructure for silicon-based technologies. Such complex nano-electronic systems will be essential for future high-performance and highly energy-efficient electronic systems.

  17. Quantifying the impacts of vegetation changes on catchment storage-discharge dynamics using paired-catchment data

    NASA Astrophysics Data System (ADS)

    Cheng, Lei; Zhang, Lu; Chiew, Francis H. S.; Canadell, Josep G.; Zhao, Fangfang; Wang, Ying-Ping; Hu, Xianqun; Lin, Kairong

    2017-07-01

    It is widely recognized that vegetation changes can significantly affect local water availability. Methods have been developed to predict the effects of vegetation change on water yield or total streamflow. However, it is still a challenge to predict changes in base flow following vegetation change due to limited understanding of catchment storage-discharge dynamics. In this study, the power law relationship for describing catchment storage-discharge dynamics is reformulated to quantify the changes in the storage-discharge relationship resulting from vegetation changes, using streamflow data from six paired-catchment experiments, of which two are deforestation catchments and four are afforestation catchments. Streamflow observations from the paired-catchment experiments clearly demonstrate that vegetation changes have led to significant changes in catchment storage-discharge relationships, accounting for about 83-128% of the changes in groundwater discharge in the treated catchments. Deforestation has led to increases in groundwater discharge (or base flow) but afforestation has resulted in decreases in groundwater discharge. Further analysis shows that the contribution of changes in groundwater discharge to the total changes in streamflow varies greatly among experimental catchments, ranging from 12% to 80% with a mean of 38 ± 22% (μ ± σ). This study proposes a new method to quantify the effects of vegetation changes on groundwater discharge from catchment storage and will improve our ability to predict the impacts of vegetation changes on catchment water yields.
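
    The storage-discharge relationship referred to above is conventionally written as a power law; assuming that form (the paper works with a reformulated version of it), a vegetation-induced shift can be expressed as

      Q = a S^{b}, \qquad \Delta Q(S) = a_{2} S^{b_{2}} - a_{1} S^{b_{1}},

    where Q is groundwater discharge, S is catchment storage, and (a_1, b_1), (a_2, b_2) are the parameters fitted to the pre- and post-treatment periods of a paired-catchment record.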

  18. Managing the On-Board Data Storage, Acknowledgment and Retransmission System for Spitzer

    NASA Technical Reports Server (NTRS)

    Sarrel, Marc A.; Carrion, Carlos; Hunt, Joseph C., Jr.

    2006-01-01

    The Spitzer Space Telescope has a two-phase downlink system. Data are transmitted during one telecom session. Then commands are sent during the next session to delete those data that were received and to retransmit those data that were missed. We must build sequences that are as efficient as possible to make the best use of our finite supply of liquid helium. One way to improve efficiency is to use only the minimum time needed during telecom sessions to transmit the predicted volume of data. But we must also not fill the onboard storage and must allow enough time margin to retransmit missed data. We describe tools and procedures that allow us to build observatory sequences that are single-fault tolerant in this regard and that allow us to recover quickly and safely from anomalies that affect the receipt or acknowledgment of data.
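
    A toy sketch of the bookkeeping the abstract describes: recorded data are deleted only after the ground acknowledges them, unacknowledged data are queued for retransmission, and the on-board store must never overflow. All names, capacities and sizes are hypothetical, not actual Spitzer parameters.

      class OnboardStore:
          def __init__(self, capacity_gbit):
              self.capacity = capacity_gbit
              self.pending = {}            # frame id -> size, awaiting acknowledgment

          def used(self):
              return sum(self.pending.values())

          def record(self, frame_id, size_gbit):
              if self.used() + size_gbit > self.capacity:
                  raise RuntimeError("on-board storage would overflow")
              self.pending[frame_id] = size_gbit

          def acknowledge(self, received_ids):
              """Ground confirms receipt: delete those frames, keep the rest for retransmission."""
              for frame_id in received_ids:
                  self.pending.pop(frame_id, None)
              return list(self.pending)    # frames still on board, to retransmit next pass

      store = OnboardStore(capacity_gbit=8.0)
      for i in range(4):
          store.record(i, 1.5)
      print(store.acknowledge({0, 2}))     # -> [1, 3]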

  19. Sensitivity analysis of conservative and reactive stream transient storage models applied to field data from multiple-reach experiments

    USGS Publications Warehouse

    Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.

    2005-01-01

    The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.

  20. Storage requirements for Georgia streams

    USGS Publications Warehouse

    Carter, Robert F.

    1983-01-01

    The suitability of a stream as a source of water supply or for waste disposal may be severely limited by low flow during certain periods. A water user may be forced to provide storage facilities to supplement the natural flow if the low flow is insufficient for his needs. This report provides data for evaluating the feasibility of augmenting low streamflow by means of storage facilities. It contains tabular data on storage requirements for draft rates that are as much as 60 percent of the mean annual flow at 99 continuous-record gaging stations, and draft-storage diagrams for estimating storage requirements at many additional sites. Through analyses of streamflow data, the State was divided into four regions. Draft-storage diagrams for each region provide a means of estimating storage requirements for sites on streams where data are scant, provided the drainage area, mean annual flow, and the 7-day, 10-year low flow are known or can be estimated. These data are tabulated for the 99 gaging stations used in the analyses and for 102 partial-record sites where only base-flow measurements have been made. The draft-storage diagrams are useful not only for estimating in-channel storage required for low-flow augmentation, but also can be used for estimating the volume of off-channel storage required to retain wastewater during low-flow periods for later release. In addition, these relationships can be helpful in estimating the volume of wastewater to be disposed of by spraying on land, provided that the water disposed of in this manner is only that for which streamflow dilution water is not currently available. Mean annual flow can be determined for any stream within the State by using the runoff map in this report. Low-flow indices can be estimated by several methods, including correlation of base-flow measurements with concurrent flow at nearby continuous-record gaging stations where low-flow indices have been determined.
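
    The storage needed to sustain a given draft can be illustrated with a simple mass-balance (sequent-peak style) calculation on a monthly flow series; the report itself derives its tables and draft-storage diagrams from frequency analysis of long gaging-station records, so the routine below, with made-up numbers, is only a sketch of the underlying idea.

      def required_storage(inflows, draft):
          """Largest cumulative deficit of inflow below a constant draft rate
          (all values in the same volume units per time step)."""
          deficit, worst = 0.0, 0.0
          for q in inflows:
              deficit = max(0.0, deficit + draft - q)
              worst = max(worst, deficit)
          return worst

      monthly_flow = [120, 80, 40, 15, 10, 8, 12, 30, 60, 90, 110, 130]   # hypothetical
      print(required_storage(monthly_flow, draft=50))   # storage needed to sustain a draft of 50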

  1. Online mass storage system detailed requirements document

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The requirements for an online high density magnetic tape data storage system that can be implemented in a multipurpose, multihost environment are set forth. The objective of the mass storage system is to provide a facility for the compact storage of large quantities of data and to make this data accessible to computer systems with minimum operator handling. The results of a market survey and analysis of candidate vendors who presently market high density tape data storage systems are included.

  2. Data Resilience in the dCache Storage System

    DOE PAGES

    Rossi, A. L.; Adeyemi, F.; Ashish, A.; ...

    2017-11-23

    In this study we discuss design, implementation considerations, and performance of a new Resilience Service in the dCache storage system responsible for file availability and durability functionality.

  3. Eternal 5D optical data storage in glass (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kazansky, Peter G.; Cerkauskaite, Ausra; Drevinskas, Rokas; Zhang, Jingyu

    2016-09-01

    A decade ago it was discovered that, during femtosecond laser writing, self-organized subwavelength structures with record-small features of 20 nm could be created in the volume of silica glass. On the macroscopic scale the self-assembled nanostructure behaves as a uniaxial optical crystal with negative birefringence. The optical anisotropy, which results from the alignment of nano-platelets, referred to as form birefringence, is of the same order of magnitude as positive birefringence in crystalline quartz. The two independent parameters describing birefringence, the slow axis orientation (4th dimension) and the strength of retardance (5th dimension), are explored for the optical encoding of information in addition to three spatial coordinates. The slow axis orientation and the retardance are independently manipulated by the polarization and intensity of the femtosecond laser beam. The data optically encoded into five dimensions is successfully retrieved by quantitative birefringence measurements. The storage allows unprecedented parameters including hundreds of terabytes per disc data capacity and thermal stability up to 1000 °C. Even at elevated temperatures of 160 °C, the extrapolated decay time of nanogratings is comparable with the age of the Universe - 13.8 billion years. The recording of digital documents, which will survive the human race, including eternal copies of the Universal Declaration of Human Rights, Newton's Opticks, the King James Bible and Magna Carta, is a vital step towards an eternal archive. Additionally, a number of projects (such as Time Capsule to Mars, MoonMail, and the Google Lunar XPRIZE) could benefit from the technique's extreme durability, which fulfills a crucial requirement for storage on the Moon or Mars.

  4. Working and Net Available Shell Storage Capacity

    EIA Publications

    2017-01-01

    Working and Net Available Shell Storage Capacity is the U.S. Energy Information Administration’s (EIA) report containing storage capacity data for crude oil, petroleum products, and selected biofuels. The report includes tables detailing working and net available shell storage capacity by type of facility, product, and Petroleum Administration for Defense District (PAD District). Net available shell storage capacity is broken down further to show the percent for exclusive use by facility operators and the percent leased to others. Crude oil storage capacity data are also provided for Cushing, Oklahoma, an important crude oil market center. Data are released twice each year near the end of May (data for March 31) and near the end of November (data for September 30).

  5. Littrow-type external-cavity blue laser for holographic data storage.

    PubMed

    Tanaka, Tomiji; Takahashi, Kazuo; Sako, Kageyasu; Kasegawa, Ryo; Toishi, Mitsuru; Watanabe, Kenjiro; Samuels, David; Takeya, Motonobu

    2007-06-10

    An external-cavity laser with a wavelength of 405 nm and an output of 80 mW has been developed for holographic data storage. The laser has three states: the first is a perfect single mode, whose coherent length is 14 m; the second is a three-mode state with a coherent length of 3 mm; and the third is a six-mode state with a coherent length of 0.3 mm. The first and second states are available for angular-multiplexing recording; all states are available for coaxial multiplexing recording. Due to its short wavelength, the recording density is higher than that of a 532 nm laser.

  6. Managing the On-Board Data Storage, Acknowledgement and Retransmission System for Spitzer

    NASA Technical Reports Server (NTRS)

    Sarrel, Marc A.; Carrion, Carlos; Hunt, Joseph C., Jr.

    2006-01-01

    The Spitzer Space Telescope has a two-phase downlink system. Recorded data are transmitted during one telecom session. Then commands are sent during the next session to delete those data that were received on the ground and to retransmit those data that were missed. We must build science sequences that are as efficient as possible to make the best use of our supply of liquid helium. One way to improve efficiency is to use only the minimum time needed during telecom sessions to transmit the predicted volume of data. But, we must also not fill the on-board storage and must allow enough time margin to retransmit missed data. We describe tools and procedures that allow us to build science sequences that are single-fault tolerant in this regard and that allow us to recover quickly and safely from anomalies that affect the receipt or acknowledgment (i.e. deletion) of data.

  7. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path and the management path, which solves the metadata bottleneck problem of traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligence of storage devices and security of data access. We apply object-based storage to the storage management of remote sensing images and construct an object-based storage model for distributed remote sensing images. In the storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give some test results comparing the write performance of the traditional network storage model and the object-based storage model.

  8. Phase modulated high density collinear holographic data storage system with phase-retrieval reference beam locking and orthogonal reference encoding.

    PubMed

    Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Huang, Yong; Tan, Xiaodi

    2018-02-19

    A novel phase modulation method for holographic data storage with phase-retrieval reference beam locking is proposed and incorporated into an amplitude-encoding collinear holographic storage system. Unlike the conventional phase retrieval method, the proposed method locks the data page and the corresponding phase-retrieval interference beam together at the same location with a sequential recording process, which eliminates piezoelectric elements, phase shift arrays and extra interference beams, making the system more compact and phase retrieval easier. To evaluate our proposed phase modulation method, we recorded and then recovered data pages with multilevel phase modulation using two spatial light modulators experimentally. For 4-level, 8-level, and 16-level phase modulation, we achieved the bit error rate (BER) of 0.3%, 1.5% and 6.6% respectively. To further improve data storage density, an orthogonal reference encoding multiplexing method at the same position of medium is also proposed and validated experimentally. We increased the code rate of pure 3/16 amplitude encoding method from 0.5 up to 1.0 and 1.5 using 4-level and 8-level phase modulation respectively.

  9. A computer system for the storage and retrieval of gravity data, Kingdom of Saudi Arabia

    USGS Publications Warehouse

    Godson, Richard H.; Andreasen, Gordon H.

    1974-01-01

    A computer system has been developed for the systematic storage and retrieval of gravity data. All pertinent facts relating to gravity station measurements and computed Bouguer values may be retrieved either by project name or by geographical coordinates. Features of the system include visual display in the form of printer listings of gravity data and printer plots of station locations. The retrieved data format interfaces with the format of GEOPAC, a system of computer programs designed for the analysis of geophysical data.

  10. Determining water storage depletion within Iran by assimilating GRACE data into the W3RA hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Forootan, E.; Kuhn, M.; Awange, J.; van Dijk, A. I. J. M.; Schumacher, M.; Sharifi, M. A.

    2018-04-01

    Groundwater depletion, due to both unsustainable water use and a decrease in precipitation, has been reported in many parts of Iran. In order to analyze these changes during the recent decade, in this study, we assimilate Terrestrial Water Storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) into the World-Wide Water Resources Assessment (W3RA) model. This assimilation improves model-derived water storage simulations by introducing missing trends and correcting the amplitude and phase of seasonal water storage variations. The Ensemble Square-Root Filter (EnSRF) technique is applied, which showed stable performance in propagating errors during the assimilation period (2002-2012). Our focus is on sub-surface water storage changes including groundwater and soil moisture variations within six major drainage divisions covering the whole of Iran, including its eastern part (East), the Caspian Sea, Centre, Sarakhs, Persian Gulf and Oman Sea, and Lake Urmia. Results indicate an average of -8.9 mm/year groundwater reduction within Iran during the period 2002 to 2012. A similar decrease is also observed in soil moisture storage, especially after 2005. We further apply the canonical correlation analysis (CCA) technique to relate sub-surface water storage changes to climate (e.g., precipitation) and anthropogenic (e.g., farming) impacts. Results indicate an average correlation of 0.81 between rainfall and groundwater variations and also a large impact of anthropogenic activities (mainly for irrigation) on Iran's water storage depletions.
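
    A minimal sketch of an ensemble update of modelled storage states toward a GRACE TWS observation. For simplicity this uses the basic perturbed-observation ensemble Kalman filter update rather than the ensemble square-root filter (EnSRF) actually used in the study; state components, observation values and error variances are hypothetical.

      import numpy as np

      def enkf_update(ensemble, obs, obs_var, h):
          """ensemble: (n_members, n_state); obs: scalar TWS observation;
          h: (n_state,) observation operator mapping state to TWS."""
          n, _ = ensemble.shape
          hx = ensemble @ h                                  # predicted TWS per member
          x_pert = ensemble - ensemble.mean(0)
          hx_pert = hx - hx.mean()
          pht = x_pert.T @ hx_pert / (n - 1)                 # state-observation covariance
          hph = hx_pert @ hx_pert / (n - 1)                  # predicted observation variance
          gain = pht / (hph + obs_var)
          rng = np.random.default_rng(0)
          perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n)
          return ensemble + np.outer(perturbed - hx, gain)

      # 20-member ensemble of [soil moisture, groundwater] anomalies (mm), hypothetical
      ens = np.random.default_rng(1).normal([10.0, -30.0], [5.0, 8.0], (20, 2))
      h = np.array([1.0, 1.0])                               # TWS = soil moisture + groundwater
      print(enkf_update(ens, obs=-35.0, obs_var=9.0, h=h).mean(0))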

  11. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems

    PubMed Central

    Liang, Junbin; Liu, Renping; Ni, Wei; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-01-01

    In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm with data processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems. PMID:29439442

  12. A Survey on Data Storage and Information Discovery in the WSANs-Based Edge Computing Systems.

    PubMed

    Ma, Xingpo; Liang, Junbin; Liu, Renping; Ni, Wei; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-02-10

    In the post-Cloud era, the proliferation of Internet of Things (IoT) has pushed the horizon of Edge computing, which is a new computing paradigm with data processed at the edge of the network. As the important systems of Edge computing, wireless sensor and actuator networks (WSANs) play an important role in collecting and processing the sensing data from the surrounding environment as well as taking actions on the events happening in the environment. In WSANs, in-network data storage and information discovery schemes with high energy efficiency, high load balance and low latency are needed because of the limited resources of the sensor nodes and the real-time requirement of some specific applications, such as putting out a big fire in a forest. In this article, the existing schemes of WSANs on data storage and information discovery are surveyed with detailed analysis on their advancements and shortcomings, and possible solutions are proposed on how to achieve high efficiency, good load balance, and perfect real-time performances at the same time, hoping that it can provide a good reference for the future research of the WSANs-based Edge computing systems.

  13. 10 CFR 95.25 - Protection of National Security Information and Restricted Data in storage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Protection of National Security Information and Restricted Data in storage. 95.25 Section 95.25 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) FACILITY SECURITY... protection during non-working hours; or (2) Any steel file cabinet that has four sides and a top and bottom...

  14. 10 CFR 95.25 - Protection of National Security Information and Restricted Data in storage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Protection of National Security Information and Restricted Data in storage. 95.25 Section 95.25 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) FACILITY SECURITY... protection during non-working hours; or (2) Any steel file cabinet that has four sides and a top and bottom...

  15. 10 CFR 95.25 - Protection of National Security Information and Restricted Data in storage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Protection of National Security Information and Restricted Data in storage. 95.25 Section 95.25 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) FACILITY SECURITY... protection during non-working hours; or (2) Any steel file cabinet that has four sides and a top and bottom...

  16. 10 CFR 95.25 - Protection of National Security Information and Restricted Data in storage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Protection of National Security Information and Restricted Data in storage. 95.25 Section 95.25 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) FACILITY SECURITY... protection during non-working hours; or (2) Any steel file cabinet that has four sides and a top and bottom...

  17. 10 CFR 95.25 - Protection of National Security Information and Restricted Data in storage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Protection of National Security Information and Restricted Data in storage. 95.25 Section 95.25 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) FACILITY SECURITY... protection during non-working hours; or (2) Any steel file cabinet that has four sides and a top and bottom...

  18. Data collection and storage in long-term ecological and evolutionary studies: The Mongoose 2000 system.

    PubMed

    Marshall, Harry H; Griffiths, David J; Mwanguhya, Francis; Businge, Robert; Griffiths, Amber G F; Kyabulima, Solomon; Mwesige, Kenneth; Sanderson, Jennifer L; Thompson, Faye J; Vitikainen, Emma I K; Cant, Michael A

    2018-01-01

    Studying ecological and evolutionary processes in the natural world often requires research projects to follow multiple individuals in the wild over many years. These projects have provided significant advances but may also be hampered by needing to accurately and efficiently collect and store multiple streams of the data from multiple individuals concurrently. The increase in the availability and sophistication of portable computers (smartphones and tablets) and the applications that run on them has the potential to address many of these data collection and storage issues. In this paper we describe the challenges faced by one such long-term, individual-based research project: the Banded Mongoose Research Project in Uganda. We describe a system we have developed called Mongoose 2000 that utilises the potential of apps and portable computers to meet these challenges. We discuss the benefits and limitations of employing such a system in a long-term research project. The app and source code for the Mongoose 2000 system are freely available and we detail how it might be used to aid data collection and storage in other long-term individual-based projects.

  19. Data security in genomics: A review of Australian privacy requirements and their relation to cryptography in data storage.

    PubMed

    Schlosberg, Arran

    2016-01-01

    The advent of next-generation sequencing (NGS) brings with it a need to manage large volumes of patient data in a manner that is compliant with both privacy laws and long-term archival needs. Outside of the realm of genomics there is a need in the broader medical community to store data, and although radiology aside the volume may be less than that of NGS, the concepts discussed herein are similarly relevant. The relation of so-called "privacy principles" to data protection and cryptographic techniques is explored with regards to the archival and backup storage of health data in Australia, and an example implementation of secure management of genomic archives is proposed with regards to this relation. Readers are presented with sufficient detail to have informed discussions - when implementing laboratory data protocols - with experts in the fields.
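
    As one concrete illustration of cryptography applied to data at rest, the snippet below encrypts an archive file with a symmetric key using the Python cryptography package (Fernet, an authenticated AES-CBC/HMAC scheme). This is a generic example of encryption at rest, not the specific scheme or key-management arrangement discussed in the review; the file names are hypothetical and a real deployment must store the key separately from the archive.

      from cryptography.fernet import Fernet

      key = Fernet.generate_key()          # keep this key separate from the archive
      fernet = Fernet(key)

      with open("variants.vcf.gz", "rb") as fh:             # hypothetical genomic archive
          ciphertext = fernet.encrypt(fh.read())

      with open("variants.vcf.gz.enc", "wb") as fh:
          fh.write(ciphertext)

      # later, during restore
      with open("variants.vcf.gz.enc", "rb") as fh:
          plaintext = fernet.decrypt(fh.read())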

  20. Data security in genomics: A review of Australian privacy requirements and their relation to cryptography in data storage

    PubMed Central

    Schlosberg, Arran

    2016-01-01

    The advent of next-generation sequencing (NGS) brings with it a need to manage large volumes of patient data in a manner that is compliant with both privacy laws and long-term archival needs. Outside of the realm of genomics there is a need in the broader medical community to store data, and although radiology aside the volume may be less than that of NGS, the concepts discussed herein are similarly relevant. The relation of so-called “privacy principles” to data protection and cryptographic techniques is explored with regards to the archival and backup storage of health data in Australia, and an example implementation of secure management of genomic archives is proposed with regards to this relation. Readers are presented with sufficient detail to have informed discussions – when implementing laboratory data protocols – with experts in the fields. PMID:26955504

  1. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.
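
    A minimal sketch of the kind of kriging step used to densify the data, assuming an exponential variogram whose parameters were already fitted by the variogram analysis mentioned above; this is plain ordinary kriging, not the Modified Bayesian Kriging or the NNI machinery of the paper, and all coordinates and values are hypothetical.

      import numpy as np

      def variogram(h, sill=1.0, corr_range=50.0):
          """Exponential variogram model (parameters assumed fitted beforehand)."""
          return sill * (1.0 - np.exp(-3.0 * h / corr_range))

      def ordinary_krige(xy, values, target):
          """Ordinary kriging estimate and variance at one target point."""
          n = len(xy)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          a = np.zeros((n + 1, n + 1))
          a[:n, :n] = variogram(d)
          a[:n, n] = a[n, :n] = 1.0                     # unbiasedness constraint
          b = np.append(variogram(np.linalg.norm(xy - target, axis=1)), 1.0)
          sol = np.linalg.solve(a, b)
          w, mu = sol[:n], sol[n]
          return w @ values, b[:n] @ w + mu             # estimate, kriging variance

      pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [12.0, 9.0]])   # hypothetical
      vals = np.array([1.2, 1.5, 0.9, 1.8])
      print(ordinary_krige(pts, vals, np.array([5.0, 5.0])))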

  2. Overview of Probe-based Storage Technologies

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Yang, Ci Hui; Wen, Jing; Gong, Si Di; Peng, Yuan Xiu

    2016-07-01

    The current world is in the age of big data, where the total amount of global digital data is growing at an incredible rate. This indeed necessitates a drastic enhancement of the capacity of conventional data storage devices that are, however, suffering from their respective physical drawbacks. Under this circumstance, it is essential to aggressively explore and develop alternative promising mass storage devices, leading to the presence of probe-based storage devices. In this paper, the physical principles and the current status of several different probe storage devices, including thermo-mechanical probe memory, magnetic probe memory, ferroelectric probe memory, and phase-change probe memory, are reviewed in detail, as well as their respective merits and weaknesses. This paper provides an overview of the emerging probe memories as potential next-generation storage devices, so as to motivate the exploration of more innovative technologies to push forward the development of probe storage devices.

  3. Overview of Probe-based Storage Technologies.

    PubMed

    Wang, Lei; Yang, Ci Hui; Wen, Jing; Gong, Si Di; Peng, Yuan Xiu

    2016-12-01

    The current world is in the age of big data, where the total amount of global digital data is growing at an incredible rate. This indeed necessitates a drastic enhancement of the capacity of conventional data storage devices that are, however, suffering from their respective physical drawbacks. Under this circumstance, it is essential to aggressively explore and develop alternative promising mass storage devices, leading to the presence of probe-based storage devices. In this paper, the physical principles and the current status of several different probe storage devices, including thermo-mechanical probe memory, magnetic probe memory, ferroelectric probe memory, and phase-change probe memory, are reviewed in detail, as well as their respective merits and weaknesses. This paper provides an overview of the emerging probe memories potentially for next-generation storage devices so as to motivate the exploration of more innovative technologies to push forward the development of probe storage devices.

  4. Clinical Data Systems to Support Public Health Practice: A National Survey of Software and Storage Systems Among Local Health Departments.

    PubMed

    McCullough, J Mac; Goodin, Kate

    2016-01-01

    Numerous software and data storage systems are employed by local health departments (LHDs) to manage clinical and nonclinical data needs. Leveraging electronic systems may yield improvements in public health practice. However, information is lacking regarding current usage patterns among LHDs. The objective of this study was to analyze the clinical and nonclinical data storage and software types used by LHDs. Data came from the 2015 Informatics Capacity and Needs Assessment Survey, conducted by Georgia Southern University in collaboration with the National Association of County and City Health Officials. A total of 324 LHDs from all 50 states completed the survey (response rate: 50%). Outcome measures included the LHD's primary clinical service data system, nonclinical data system(s) used, and plans to adopt an electronic clinical data system (if not already in use). Predictors of interest included jurisdiction size and governance type, and other informatics capacities within the LHD. Bivariate analyses were performed using χ² and t tests. Up to 38.4% of LHDs reported using an electronic health record (EHR). Usage was especially common among LHDs that provide primary care and/or dental services. LHDs serving smaller populations and those with state-level governance were both less likely to use an EHR. Paper records were a common data storage approach for both clinical data (28.9%) and nonclinical data (59.4%). Among LHDs without an EHR, 84.7% reported implementation plans. Our findings suggest that LHDs are increasingly using EHRs as a clinical data storage solution and that more LHDs are likely to adopt EHRs in the foreseeable future. Yet use of paper records remains common. Correlates of electronic system usage emerged across a range of factors. Program- or system-specific needs may be barriers or facilitators to EHR adoption. Policy makers can tailor resources to address barriers specific to LHD size, governance, service portfolio, existing informatics capabilities, and other pertinent characteristics.
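
    The kind of bivariate test reported above can be reproduced in a few lines. The sketch below runs a chi-square test of EHR adoption against jurisdiction size with scipy; the counts are invented for illustration and are not the survey's data.

      # Illustrative chi-square test of EHR use versus jurisdiction size (made-up counts).
      from scipy.stats import chi2_contingency

      #          uses EHR  no EHR
      table = [[ 40,       80],   # small jurisdictions (hypothetical)
               [ 55,       60],   # medium jurisdictions (hypothetical)
               [ 30,       15]]   # large jurisdictions (hypothetical)

      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")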

  5. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept in the magnetic disk for fast retrieval. The optical disks are used as archive

  6. EXP-PAC: providing comparative analysis and storage of next generation gene expression data.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Lefèvre, Christophe

    2012-07-01

    Microarrays and, more recently, RNA sequencing have led to an increase in available gene expression data. How to manage and store this data is becoming a key issue. In response we have developed EXP-PAC, a web-based software package for storage, management and analysis of gene expression and sequence data. Unique to this package is SQL-based querying of gene expression data sets, distributed normalization of raw gene expression data and analysis of gene expression data across experiments and species. This package has been populated with lactation data in the international milk genomic consortium web portal (http://milkgenomics.org/). Source code is also available which can be hosted on a Windows, Linux or Mac APACHE server connected to a private or public network (http://mamsap.it.deakin.edu.au/~pcc/Release/EXP_PAC.html). Copyright © 2012 Elsevier Inc. All rights reserved.
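
    To make the idea of SQL-based querying of expression data concrete, here is a small sketch using Python's built-in sqlite3. The table layout, gene names and values are invented and are not EXP-PAC's actual schema.

      # Hypothetical sketch of SQL querying of gene expression values (not EXP-PAC's schema).
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE expression (gene TEXT, sample TEXT, experiment TEXT, value REAL)")
      con.executemany("INSERT INTO expression VALUES (?, ?, ?, ?)", [
          ("CSN2",  "cow_d10", "lactation_A",  812.4),
          ("CSN2",  "cow_d90", "lactation_A", 1543.9),
          ("LALBA", "cow_d10", "lactation_A",  410.2),
          ("LALBA", "cow_d90", "lactation_A",  388.7),
      ])

      # Mean expression per gene within one experiment, highest first
      rows = con.execute("""SELECT gene, AVG(value) FROM expression
                            WHERE experiment = 'lactation_A'
                            GROUP BY gene ORDER BY AVG(value) DESC""").fetchall()
      for gene, mean_value in rows:
          print(gene, round(mean_value, 1))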

  7. Lessons learned from implementing a national infrastructure in Sweden for storage and analysis of next-generation sequencing data

    PubMed Central

    2013-01-01

    Analyzing and storing data and results from next-generation sequencing (NGS) experiments is a challenging task, hampered by ever-increasing data volumes and frequent updates of analysis methods and tools. Storage and computation have grown beyond the capacity of personal computers and there is a need for suitable e-infrastructures for processing. Here we describe UPPNEX, an implementation of such an infrastructure, tailored to the needs of data storage and analysis of NGS data in Sweden serving various labs and multiple instruments from the major sequencing technology platforms. UPPNEX comprises resources for high-performance computing, large-scale and high-availability storage, an extensive bioinformatics software suite, up-to-date reference genomes and annotations, a support function with system and application experts as well as a web portal and support ticket system. UPPNEX applications are numerous and diverse, and include whole genome-, de novo- and exome sequencing, targeted resequencing, SNP discovery, RNASeq, and methylation analysis. There are over 300 projects that utilize UPPNEX and include large undertakings such as the sequencing of the flycatcher and Norwegian spruce. We describe the strategic decisions made when investing in hardware, setting up maintenance and support, allocating resources, and illustrate major challenges such as managing data growth. We conclude by summarizing our experiences and observations with UPPNEX to date, providing insights into the successful and less successful decisions made. PMID:23800020

  8. Eighth Goddard Conference on Mass Storage Systems and Technologies in Cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    2000-01-01

    This document contains copies of those technical papers received in time for publication prior to the Eighth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center March 27-30, 2000. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, future of current technology, new technology with a special emphasis on holographic storage, performance, standards, site reports, vendor solutions. Tutorials will be available on stability of optical media, disk subsystem performance evaluation, I/O and storage tuning, functionality and performance evaluation of file systems for storage area networks.

  9. Storage resource manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Bakken, J.; Petravick, D.

    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and reliable replication mechanisms. The SRM standard supports independent SRM implementations, allowing for uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing clogging of storage systems with 'orphan' files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.
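
    One idea in the abstract, reservations with limited lifetimes that let 'orphan' space be reclaimed, can be sketched in a few lines. The class below is an illustration of the concept only, not the SRM interface or any of its implementations.

      # Toy illustration of lifetime-limited space reservations (not the SRM API).
      import time

      class Reservation:
          def __init__(self, space_bytes, lifetime_s):
              self.space_bytes = space_bytes
              self.expires_at = time.time() + lifetime_s

      class StorageElement:
          def __init__(self, capacity_bytes):
              self.capacity = capacity_bytes
              self.reservations = []

          def used(self):
              return sum(r.space_bytes for r in self.reservations)

          def collect_expired(self):
              now = time.time()
              self.reservations = [r for r in self.reservations if r.expires_at > now]

          def reserve(self, space_bytes, lifetime_s):
              self.collect_expired()               # reclaim space held by expired reservations
              if self.used() + space_bytes > self.capacity:
                  raise RuntimeError("insufficient space")
              r = Reservation(space_bytes, lifetime_s)
              self.reservations.append(r)
              return r

      se = StorageElement(capacity_bytes=10 * 1024**3)
      se.reserve(space_bytes=4 * 1024**3, lifetime_s=3600)
      print(f"space in use: {se.used() / 1024**3:.1f} GiB")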

  10. Analysis and comparison of NoSQL databases with an introduction to consistent references in big data storage systems

    NASA Astrophysics Data System (ADS)

    Dziedzic, Adam; Mulawka, Jan

    2014-11-01

    NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when and how to use them in an appropriate way. In this submission, descriptions of selected NoSQL databases are presented. Each of the databases is analysed with primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to data references. The relational databases offer foreign keys, whereas NoSQL databases provide us with limited references. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, the proposal of a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
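
    The referencing gap described above can be illustrated directly: a relational engine enforces a foreign key, whereas a document-style store leaves the reference, and any dangling-reference check, to the application. The schema and data below are invented for the example.

      # Minimal contrast between an enforced foreign key and an application-managed reference.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("PRAGMA foreign_keys = ON")
      con.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
      con.execute("""CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER,
                     FOREIGN KEY (author_id) REFERENCES author(id))""")
      con.execute("INSERT INTO author VALUES (1, 'Ada')")
      try:
          con.execute("INSERT INTO post VALUES (10, 99)")     # no author 99 exists
      except sqlite3.IntegrityError as exc:
          print("relational engine rejects the dangling reference:", exc)

      # Document-store style: the reference is just a stored value, so the
      # consistency check moves into application code.
      documents = {"post:10": {"author_id": 99, "title": "dangling reference"}}
      known_authors = {1}
      for doc in documents.values():
          if doc["author_id"] not in known_authors:
              print("application-level check finds a dangling reference")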

  11. Optical storage networking

    NASA Astrophysics Data System (ADS)

    Mohr, Ulrich

    2001-11-01

    For efficient business continuance and backup of mission-critical data, an inter-site storage network is required. Where traditional telecommunications costs are prohibitive for all but the largest organizations, there is an opportunity for regional carriers to deliver an innovative storage service. This session reveals how a combination of optical networking and protocol-aware SAN gateways can provide an extended storage networking platform with the lowest cost of ownership and the highest possible degree of reliability, security and availability. Companies of every size, with mainframe and open-systems environments, can afford to use this integrated service. Three major applications are explained: channel extension, Network Attached Storage (NAS), and Storage Area Networks (SANs), along with how optical networks address their specific requirements. One advantage of DWDM is the ability for protocols such as ESCON, Fibre Channel, ATM and Gigabit Ethernet to be transported natively and simultaneously across a single fiber pair, and the ability to multiplex many individual fiber pairs over a single pair, thereby reducing fiber cost and recovering fiber pairs already in use. An optical storage network enables a new class of service providers, Storage Service Providers (SSPs), aiming to deliver value to the enterprise by managing storage, backup, replication and restoration as an outsourced service.

  12. Hierarchical and hybrid energy storage devices in data centers: Architecture, control and provisioning.

    PubMed

    Sun, Mengshu; Xue, Yuankun; Bogdan, Paul; Tang, Jian; Wang, Yanzhi; Lin, Xue

    2018-01-01

    Recently, a new approach has been introduced that leverages and over-provisions energy storage devices (ESDs) in data centers for performing power capping and facilitating capex/opex reductions, without performance overhead. To fully realize the potential benefits of the hierarchical ESD structure, we propose a comprehensive design, control, and provisioning framework including (i) designing power delivery architecture supporting hierarchical ESD structure and hybrid ESDs for some levels, as well as (ii) control and provisioning of the hierarchical ESD structure including run-time ESD charging/discharging control and design-time determination of ESD types, homogeneous/hybrid options, ESD provisioning at each level. Experiments have been conducted using real Google data center workloads based on realistic data center specifications.

  13. Hierarchical and hybrid energy storage devices in data centers: Architecture, control and provisioning

    PubMed Central

    Xue, Yuankun; Bogdan, Paul; Tang, Jian; Wang, Yanzhi; Lin, Xue

    2018-01-01

    Recently, a new approach has been introduced that leverages and over-provisions energy storage devices (ESDs) in data centers for performing power capping and facilitating capex/opex reductions, without performance overhead. To fully realize the potential benefits of the hierarchical ESD structure, we propose a comprehensive design, control, and provisioning framework including (i) designing power delivery architecture supporting hierarchical ESD structure and hybrid ESDs for some levels, as well as (ii) control and provisioning of the hierarchical ESD structure including run-time ESD charging/discharging control and design-time determination of ESD types, homogeneous/hybrid options, ESD provisioning at each level. Experiments have been conducted using real Google data center workloads based on realistic data center specifications. PMID:29351553

  14. myPhyloDB: a local web server for the storage and analysis of metagenomics data

    USDA-ARS?s Scientific Manuscript database

    myPhyloDB is a user-friendly personal database with a browser-interface designed to facilitate the storage, processing, analysis, and distribution of metagenomics data. MyPhyloDB archives raw sequencing files, and allows for easy selection of project(s)/sample(s) of any combination from all availab...

  15. SPINS: standardized protein NMR storage. A data dictionary and object-oriented relational database for archiving protein NMR spectra.

    PubMed

    Baran, Michael C; Moseley, Hunter N B; Sahota, Gurmukh; Montelione, Gaetano T

    2002-10-01

    Modern protein NMR spectroscopy laboratories have a rapidly growing need for an easily queried local archival system of raw experimental NMR datasets. SPINS (Standardized ProteIn Nmr Storage) is an object-oriented relational database that provides facilities for high-volume NMR data archival, organization of analyses, and dissemination of results to the public domain by automatic preparation of the header files required for submission of data to the BioMagResBank (BMRB). The current version of SPINS coordinates the process from data collection to BMRB deposition of raw NMR data by standardizing and integrating the storage and retrieval of these data in a local laboratory file system. Additional facilities include a data mining query tool, graphical database administration tools, and an NMRStar v2.1.1 file generator. SPINS also includes a user-friendly internet-based graphical user interface, which is optionally integrated with Varian VNMR NMR data collection software. This paper provides an overview of the data model underlying the SPINS database system, a description of its implementation in Oracle, and an outline of future plans for the SPINS project.

  16. Optimal micro-mirror tilt angle and sync mark design for digital micro-mirror device based collinear holographic data storage system.

    PubMed

    Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Liu, Jinyan; Huang, Yong; Tan, Xiaodi

    2017-06-01

    The collinear holographic data storage system (CHDSS) is a very promising storage system due to its large storage capacities and high transfer rates in the era of big data. The digital micro-mirror device (DMD) as a spatial light modulator is the key device of the CHDSS due to its high speed, high precision, and broadband working range. To improve the system stability and performance, an optimal micro-mirror tilt angle was theoretically calculated and experimentally confirmed by analyzing the relationship between the tilt angle of the micro-mirror on the DMD and the power profiles of diffraction patterns of the DMD at the Fourier plane. In addition, we proposed a novel chess board sync mark design in the data page to reduce the system bit error rate in circumstances of reduced aperture required to decrease noise and median exposure amount. It will provide practical guidance for future DMD based CHDSS development.

  17. Data collection and storage in long-term ecological and evolutionary studies: The Mongoose 2000 system

    PubMed Central

    Griffiths, David J.; Mwanguhya, Francis; Businge, Robert; Griffiths, Amber G. F.; Kyabulima, Solomon; Mwesige, Kenneth; Sanderson, Jennifer L.; Thompson, Faye J.; Vitikainen, Emma I. K.; Cant, Michael A.

    2018-01-01

    Studying ecological and evolutionary processes in the natural world often requires research projects to follow multiple individuals in the wild over many years. These projects have provided significant advances but may also be hampered by needing to accurately and efficiently collect and store multiple streams of data from multiple individuals concurrently. The increase in the availability and sophistication of portable computers (smartphones and tablets) and the applications that run on them has the potential to address many of these data collection and storage issues. In this paper we describe the challenges faced by one such long-term, individual-based research project: the Banded Mongoose Research Project in Uganda. We describe a system we have developed called Mongoose 2000 that utilises the potential of apps and portable computers to meet these challenges. We discuss the benefits and limitations of employing such a system in a long-term research project. The app and source code for the Mongoose 2000 system are freely available and we detail how it might be used to aid data collection and storage in other long-term individual-based projects. PMID:29315317

  18. POSIX and Object Distributed Storage Systems Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing infrastructure has become an intensive dynamic system used for first-hand data collection and analysis, resulting in a dense collection of data output. As we have transitioned to our current state, inefficient, limited storage systems have become an impediment to fast feedback to online shift crews. A centrally accessible, scalable and redundant distributed storage system has therefore become a necessity in this environment. OpenStack Swift Object Storage and Ceph Object Storage are two eye-opening technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests, emulating real-world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX-compliant file system mounted similarly to an NFS share, was of particular interest as it aligned with our requirements and was retained as our solution. I/O performance tests were run against the Ceph POSIX file system and have presented surprising results indicating true potential for fast I/O and reliability. STAR's online compute farm has historically been used for job submission and first-hand data analysis. Reusing the online compute farm to host both a storage cluster and job submission will be an efficient use of the current infrastructure.
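
    A minimal version of the kind of parallel POSIX I/O test described can be written against any mounted file system. The sketch below times several workers writing and reading files under a placeholder mount point; the path, file size and worker count are assumptions, not the authors' benchmark configuration.

      # Rough parallel write/read timing test against a POSIX mount point (placeholder path).
      import os, time, concurrent.futures

      MOUNT = "/mnt/cephfs-test"     # hypothetical CephFS (or NFS) mount point
      FILE_MB = 64
      WORKERS = 8

      def write_read(i):
          path = os.path.join(MOUNT, f"iotest_{i}.bin")
          payload = os.urandom(FILE_MB * 1024 * 1024)
          with open(path, "wb") as f:         # write phase
              f.write(payload)
              f.flush()
              os.fsync(f.fileno())
          with open(path, "rb") as f:         # read-back phase
              f.read()
          os.remove(path)

      start = time.time()
      with concurrent.futures.ThreadPoolExecutor(WORKERS) as pool:
          list(pool.map(write_read, range(WORKERS)))
      elapsed = time.time() - start
      total_mb = 2 * WORKERS * FILE_MB        # each worker writes and reads FILE_MB
      print(f"aggregate write+read throughput: {total_mb / elapsed:.1f} MB/s")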

  19. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    NASA Astrophysics Data System (ADS)

    Potekhin, M.; ATLAS Collaboration

    2012-06-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic loads.
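
    One commonly used indexing technique for monitoring data in Cassandra-style stores is to bucket rows by a coarse time window so that no partition grows without bound. The schema and bucketing function below are illustrative of that general technique and are not the actual PanDA monitoring schema.

      # Illustrative time-bucketed table design for job monitoring data (not the PanDA schema).
      from datetime import datetime, timezone

      ILLUSTRATIVE_CQL = """
      CREATE TABLE job_monitoring (
          day     text,        -- daily time bucket, e.g. '2011-06-15' (partition key)
          ts      timestamp,   -- event time (clustering key, newest first)
          job_id  bigint,
          site    text,
          status  text,
          PRIMARY KEY ((day), ts, job_id)
      ) WITH CLUSTERING ORDER BY (ts DESC);
      """

      def partition_key(event_time: datetime) -> str:
          # Map an event timestamp to its daily bucket, i.e. its partition key.
          return event_time.astimezone(timezone.utc).strftime("%Y-%m-%d")

      print(partition_key(datetime(2011, 6, 15, 13, 45, tzinfo=timezone.utc)))   # 2011-06-15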

  20. Specific storage and hydraulic conductivity tomography through the joint inversion of hydraulic heads and self-potential data

    NASA Astrophysics Data System (ADS)

    Ahmed, A. Soueid; Jardani, A.; Revil, A.; Dupont, J. P.

    2016-03-01

    Transient hydraulic tomography is used to image the heterogeneous hydraulic conductivity and specific storage fields of shallow aquifers using time series of hydraulic head data. Such an ill-posed and non-unique inverse problem can be regularized using some spatial geostatistical characteristic of the two fields. In addition to hydraulic head changes, the flow of water during pumping tests generates an electrical field of electrokinetic nature. These electrical field fluctuations can be passively recorded at the ground surface using a network of non-polarizing electrodes connected to a high impedance (> 10 MOhm) and sensitive (0.1 mV) voltmeter, a method known in geophysics as the self-potential method. We perform a joint inversion of the self-potential and hydraulic head data to image the hydraulic conductivity and specific storage fields. We work on a 3D synthetic confined aquifer and we use the adjoint state method to compute the sensitivities of the hydraulic parameters to the hydraulic head and self-potential data in both steady-state and transient conditions. The inverse problem is solved using the geostatistical quasi-linear algorithm framework of Kitanidis. When the number of piezometers is small, the record of the transient self-potential signals provides useful information to characterize the hydraulic conductivity and specific storage fields. These results show that the self-potential method reveals the heterogeneities of some areas of the aquifer, which could not be captured by the tomography based on the hydraulic heads alone. In our analysis, the improvements in the hydraulic conductivity and specific storage estimates were based on perfect knowledge of the electrical resistivity field. This implies that electrical resistivity will need to be jointly inverted with the hydraulic parameters in future studies and the impact of its uncertainty assessed with respect to the final tomograms of the hydraulic parameters.

  1. Optimising LAN access to grid enabled storage elements

    NASA Astrophysics Data System (ADS)

    Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.

    2008-07-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.

  2. A Note on Interfacing Object Warehouses and Mass Storage Systems for Data Mining Applications

    NASA Technical Reports Server (NTRS)

    Grossman, Robert L.; Northcutt, Dave

    1996-01-01

    Data mining is the automatic discovery of patterns, associations, and anomalies in data sets. Data mining requires numerically and statistically intensive queries. Our assumption is that data mining requires a specialized data management infrastructure to support the aforementioned intensive queries, but because of the sizes of data involved, this infrastructure is layered over a hierarchical storage system. In this paper, we discuss the architecture of a system which is layered for modularity, but exploits specialized lightweight services to maintain efficiency. Rather than use a full-featured database, for example, we use lightweight object services specialized for data mining. We propose using information repositories between layers so that components on either side of the layer can access information in the repositories to assist in making decisions about data layout, the caching and migration of data, the scheduling of queries, and related matters.

  3. Design of a Mission Data Storage and Retrieval System for NASA Dryden Flight Research Center

    NASA Technical Reports Server (NTRS)

    Lux, Jessica; Downing, Bob; Sheldon, Jack

    2007-01-01

    The Western Aeronautical Test Range (WATR) at the NASA Dryden Flight Research Center (DFRC) employs the WATR Integrated Next Generation System (WINGS) for the processing and display of aeronautical flight data. This report discusses the post-mission segment of the WINGS architecture. A team designed and implemented a system for the near- and long-term storage and distribution of mission data for flight projects at DFRC, providing the user with intelligent access to data. Discussed are the legacy system, an industry survey, system operational concept, high-level system features, and initial design efforts.

  4. Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Anderson, Thomas E.; Ousterhout, John K.; Patterson, David A.

    1991-01-01

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are on the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.

  5. Neuroinformatics Database (NiDB) – A Modular, Portable Database for the Storage, Analysis, and Sharing of Neuroimaging Data

    PubMed Central

    Anderson, Beth M.; Stevens, Michael C.; Glahn, David C.; Assaf, Michal; Pearlson, Godfrey D.

    2013-01-01

    We present a modular, high performance, open-source database system that incorporates popular neuroimaging database features with novel peer-to-peer sharing, and a simple installation. An increasing number of imaging centers have created a massive amount of neuroimaging data since fMRI became popular more than 20 years ago, with much of that data unshared. The Neuroinformatics Database (NiDB) provides a stable platform to store and manipulate neuroimaging data and addresses several of the impediments to data sharing presented by the INCF Task Force on Neuroimaging Datasharing, including 1) motivation to share data, 2) technical issues, and 3) standards development. NiDB solves these problems by 1) minimizing PHI use, providing a cost effective simple locally stored platform, 2) storing and associating all data (including genome) with a subject and creating a peer-to-peer sharing model, and 3) defining a sample, normalized definition of a data storage structure that is used in NiDB. NiDB not only simplifies the local storage and analysis of neuroimaging data, but also enables simple sharing of raw data and analysis methods, which may encourage further sharing. PMID:23912507

  6. Storage system architectures and their characteristics

    NASA Technical Reports Server (NTRS)

    Sarandrea, Bryan M.

    1993-01-01

    Not all users' storage requirements call for 20 MB/s data transfer rates, multi-tier file or data migration schemes, or even automated retrieval of data. The number of available storage solutions reflects the broad range of user requirements. It is foolish to think that any one solution can address the complete range of requirements. For users with simple off-line storage requirements, the cost and complexity of high-end solutions would provide no advantage over a more simple solution. The correct answer is to match the requirements of a particular storage need to the various attributes of the available solutions. The goal of this paper is to introduce basic concepts of archiving and storage management in combination with the most common architectures and to provide some insight into how these concepts and architectures address various storage problems. The intent is to provide potential consumers of storage technology with a framework within which to begin the hunt for a solution which meets their particular needs. This paper is not intended to be an exhaustive study or to address all possible solutions or new technologies, but is intended to be a more practical treatment of today's storage system alternatives. Since most commercial storage systems today are built on Open Systems concepts, the majority of these solutions are hosted on the UNIX operating system. For this reason, some of the architectural issues discussed focus around specific UNIX architectural concepts. However, most of the architectures are operating system independent and the conclusions are applicable to such architectures on any operating system.

  7. KEYNOTE ADDRESS: The role of standards in the emerging optical digital data disk storage systems market

    NASA Astrophysics Data System (ADS)

    Bainbridge, Ross C.

    1984-09-01

    The Institute for Computer Sciences and Technology at the National Bureau of Standards is pleased to cooperate with the International Society for Optical Engineering and to join with the other distinguished organizations in cosponsoring this conference on applications of optical digital data disk storage systems.

  8. A system approach to archival storage

    NASA Technical Reports Server (NTRS)

    Corcoran, John W.

    1991-01-01

    The introduction and viewgraphs of a discussion on a system approach to archival storage presented at the National Space Science Data Center (NSSDC) Mass Storage Workshop are included. The use of D-2 iron particles for archival storage is discussed, along with how acceleration factors relating short-term tests to archival lifetimes can be justified. Ampex Recording Systems is transferring D-2 video technology to data storage applications, and is encountering concerns about corrosion. To protect the D-2 standard, Battelle tests were done on all four tapes in the Class 2 environment. Error rates were measured before and after the test on both exposed and control groups.

  9. Open systems storage platforms

    NASA Technical Reports Server (NTRS)

    Collins, Kirby

    1992-01-01

    The building blocks for an open storage system include a system platform, a selection of storage devices and interfaces, system software, and storage applications. CONVEX storage systems are based on the DS Series Data Server systems. These systems are a variant of the C3200 supercomputer with expanded I/O capabilities. These systems support a variety of medium and high speed interfaces to networks and peripherals. System software is provided in the form of ConvexOS, a POSIX compliant derivative of 4.3BSD UNIX. Storage applications include products such as UNITREE and EMASS. With the DS Series of storage systems, Convex has developed a set of products which provide open system solutions for storage management applications. The systems are highly modular, assembled from off-the-shelf components with industry standard interfaces. The C Series system architecture provides a stable base, with the performance and reliability of a general purpose platform. This combination of a proven system architecture with a variety of choices in peripherals and application software allows wide flexibility in configurations, and delivers the benefits of open systems to the mass storage world.

  10. The data storage grid: the next generation of fault-tolerant storage for backup and disaster recovery of clinical images

    NASA Astrophysics Data System (ADS)

    King, Nelson E.; Liu, Brent; Zhou, Zheng; Documet, Jorge; Huang, H. K.

    2005-04-01

    Grid Computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models that can address the problem of fault-tolerant storage for backup and recovery of clinical images. We have researched and developed a novel Data Grid testbed involving several federated PAC systems based on grid architecture. By integrating a grid computing architecture to the DICOM environment, a failed PACS archive can recover its image data from others in the federation in a timely and seamless fashion. The design reflects the five-layer architecture of grid computing: Fabric, Resource, Connectivity, Collective, and Application Layers. The testbed Data Grid architecture representing three federated PAC systems, the Fault-Tolerant PACS archive server at the Image Processing and Informatics Laboratory, Marina del Rey, the clinical PACS at Saint John's Health Center, Santa Monica, and the clinical PACS at the Healthcare Consultation Center II, USC Health Science Campus, will be presented. The successful demonstration of the Data Grid in the testbed will provide an understanding of the Data Grid concept in clinical image data backup as well as establishment of benchmarks for performance from future grid technology improvements and serve as a road map for expanded research into large enterprise and federation level data grids to guarantee 99.999 % up time.

  11. Use of information-retrieval languages in automated retrieval of experimental data from long-term storage

    NASA Technical Reports Server (NTRS)

    Khovanskiy, Y. D.; Kremneva, N. I.

    1975-01-01

    Problems and methods of automating information retrieval operations in a data bank used for long-term storage and retrieval of data from scientific experiments are discussed. Existing information retrieval languages are analyzed along with those being developed. The results of studies discussing the application of the descriptive 'Kristall' language used in the 'ASIOR' automated information retrieval system are presented. The development and use of a specialized language of the classification-descriptive type, using universal decimal classification indices as the main descriptors, is described.

  12. Evaluation of Big Data Containers for Popular Storage, Retrieval, and Computation Primitives in Earth Science Analysis

    NASA Astrophysics Data System (ADS)

    Das, K.; Clune, T.; Kuo, K. S.; Mattmann, C. A.; Huang, T.; Duffy, D.; Yang, C. P.; Habermann, T.

    2015-12-01

    Data containers are infrastructures that facilitate storage, retrieval, and analysis of data sets. Big data applications in Earth Science require a mix of processing techniques, data sources and storage formats that are supported by different data containers. Some of the most popular data containers used in Earth Science studies are Hadoop, Spark, SciDB, AsterixDB, and RasDaMan. These containers optimize different aspects of the data processing pipeline and are, therefore, suitable for different types of applications. These containers are expected to undergo rapid evolution and the ability to re-test, as they evolve, is very important to ensure the containers are up to date and ready to be deployed to handle large volumes of observational data and model output. Our goal is to develop an evaluation plan for these containers to assess their suitability for Earth Science data processing needs. We have identified a selection of test cases that are relevant to most data processing exercises in Earth Science applications and we aim to evaluate these systems for optimal performance against each of these test cases. The use cases identified as part of this study are (i) data fetching, (ii) data preparation for multivariate analysis, (iii) data normalization, (iv) distance (kernel) computation, and (v) optimization. In this study we develop a set of metrics for performance evaluation, define the specifics of governance, and test the plan on current versions of the data containers. The test plan and the design mechanism are expandable to allow repeated testing with both new containers and upgraded versions of the ones mentioned above, so that we can gauge their utility as they evolve.
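
    A repeatable harness is the simplest way to apply the same test plan to each container as it evolves. The sketch below times a few placeholder workloads standing in for the use cases listed above; real runs would substitute calls into Hadoop, Spark, SciDB, AsterixDB or RasDaMan.

      # Tiny benchmark harness: each use case is a callable timed over several runs.
      import time, statistics

      def time_case(fn, repeats=5):
          runs = []
          for _ in range(repeats):
              t0 = time.perf_counter()
              fn()
              runs.append(time.perf_counter() - t0)
          return statistics.median(runs)

      cases = {   # placeholder workloads, not container-backed operations
          "data fetching":    lambda: [i for i in range(1_000_000)],
          "data preparation": lambda: sorted(range(1_000_000), key=lambda x: -x),
          "normalization":    lambda: [x / 1_000_000 for x in range(1_000_000)],
      }

      for name, fn in cases.items():
          print(f"{name:18s} {time_case(fn) * 1000:8.1f} ms (median of 5 runs)")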

  13. myPhyloDB: a local web server for the storage and analysis of metagenomic data.

    PubMed

    Manter, Daniel K; Korsa, Matthew; Tebbe, Caleb; Delgado, Jorge A

    2016-01-01

    myPhyloDB v.1.1.2 is a user-friendly personal database with a browser-interface designed to facilitate the storage, processing, analysis, and distribution of microbial community populations (e.g. 16S metagenomics data). MyPhyloDB archives raw sequencing files, and allows for easy selection of project(s)/sample(s) of any combination from all available data in the database. The data processing capabilities of myPhyloDB are also flexible enough to allow the upload and storage of pre-processed data, or use the built-in Mothur pipeline to automate the processing of raw sequencing data. myPhyloDB provides several analytical (e.g. analysis of covariance, t-tests, linear regression, differential abundance (DESeq2), and principal coordinates analysis (PCoA)) and normalization (rarefaction, DESeq2, and proportion) tools for the comparative analysis of taxonomic abundance, species richness and species diversity for projects of various types (e.g. human-associated, human gut microbiome, air, soil, and water) for any taxonomic level(s) desired. Finally, since myPhyloDB is a local web-server, users can quickly distribute data between colleagues and end-users by simply granting others access to their personal myPhyloDB database. myPhyloDB is available at http://www.ars.usda.gov/services/software/download.htm?softwareid=472 and more information along with tutorials can be found on our website http://www.myphylodb.org. Database URL: http://www.myphylodb.org. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the United States.

  14. Exascale Storage Systems the SIRIUS Way

    NASA Astrophysics Data System (ADS)

    Klasky, S. A.; Abbasi, H.; Ainsworth, M.; Choi, J.; Curry, M.; Kurc, T.; Liu, Q.; Lofstead, J.; Maltzahn, C.; Parashar, M.; Podhorszki, N.; Suchyta, E.; Wang, F.; Wolf, M.; Chang, C. S.; Churchill, M.; Ethier, S.

    2016-10-01

    As the exascale computing age emerges, data related issues are becoming critical factors that determine how and where we do computing. Popular approaches used by traditional I/O solutions and storage libraries become increasingly bottlenecked due to their assumptions about data movement, re-organization, and storage. While new technologies, such as “burst buffers”, can help address some of the short-term performance issues, it is essential that we reexamine the underlying storage and I/O infrastructure to effectively support requirements and challenges at exascale and beyond. In this paper we present a new approach to the exascale Storage System and I/O (SSIO), which is based on allowing users to inject application knowledge into the system and leverage this knowledge to better manage, store, and access large data volumes so as to minimize the time to scientific insights. Central to our approach is the distinction between the data, metadata, and the knowledge contained therein, transferred from the user to the system by describing the “utility” of data as it ages.

  15. Storage requirements for Arkansas streams

    USGS Publications Warehouse

    Patterson, James Lee

    1968-01-01

    The supply of good-quality surface water in Arkansas is abundant. Owing to seasonal and annual variability of streamflow, however, storage must be provided to insure dependable year-round supplies in most of the State. Storage requirements for draft rates that are as much as 60 percent of the mean annual flow at 49 continuous-record gaging stations can be obtained from tabular data in this report. Through regional analyses of streamflow data, the State was divided into three regions. Draft-storage diagrams for each region provide a means of estimating storage requirements for sites on streams where data are scant, provided the drainage area, the mean annual flow, and the low-flow index are known. These data are tabulated for 53 gaging stations used in the analyses and for 132 partial-record sites where only base-flow measurements have been made. Mean annual flow can be determined for any stream whose drainage lies within the State by using the runoff map in this report. Low-flow indices can be estimated by correlating base flows, determined from several discharge measurements, with concurrent flows at nearby continuous-record gaging stations, whose low-flow indices have been determined.
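
    The draft-storage relationship used in such reports can be illustrated with the textbook sequent-peak calculation: given a flow series and a constant draft rate, the required storage is the largest cumulative shortfall. The monthly flows below are invented, and this generic method is shown only as an illustration, not as the report's exact procedure.

      # Generic sequent-peak sketch of a draft-storage computation (invented flows).
      def required_storage(flows, draft):
          deficit, max_deficit = 0.0, 0.0
          for q in flows:
              deficit = max(0.0, deficit + draft - q)   # cumulative shortfall so far
              max_deficit = max(max_deficit, deficit)
          return max_deficit

      monthly_flow = [120, 95, 80, 60, 30, 15, 10, 12, 25, 55, 90, 110]   # illustrative units
      draft = 0.6 * (sum(monthly_flow) / len(monthly_flow))               # 60% of mean flow
      print(f"storage required: {required_storage(monthly_flow, draft):.1f} flow-units x months")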

  16. Reading data stored in the state of metastable defects in silicon using band-band photoluminescence: Proof of concept and physical limits to the data storage density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rougieux, F. E.; Macdonald, D.

    2014-03-24

    The state of bistable defects in crystalline silicon such as iron-boron pairs or the boron-oxygen defect can be changed at room temperature. In this letter, we experimentally demonstrate that the chemical state of a group of defects can be changed to represent a bit of information. The state can then be read without direct contact via the intensity of the emitted band-band photoluminescence signal of the group of defects, via their impact on the carrier lifetime. The theoretical limit of the information density is then computed. The information density is shown to be low for two-dimensional storage but significant for three-dimensional data storage. Finally, we compute the maximum storage capacity as a function of the lower limit of the photoluminescence detector sensitivity.

  17. Applications of ultrafast laser direct writing: from polarization control to data storage

    NASA Astrophysics Data System (ADS)

    Donko, A.; Gertus, T.; Brambilla, G.; Beresna, M.

    2018-02-01

    Ultrafast laser direct writing is a fascinating technology which emerged more than two decades ago from fundamental studies of material resistance to high-intensity optical fields. Its development saw the discovery of many puzzling phenomena and the demonstration of useful applications. Today, ultrafast laser writing is seen as a technology with great potential and is rapidly entering the industrial environment, whereas less than 10 years ago ultrafast lasers were still confined to research labs. This talk will overview some of the unique features of ultrafast lasers and give examples of their applications in optical data storage, polarization control and optical fibers.

  18. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. Furthermore, the fitting results are used for lattice correction. Our method has been successfully demonstrated on the NSLS-II storage ring.

  19. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-08-01

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. The method has been successfully demonstrated on the NSLS-II storage ring.
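
    The first step of the method, isolating the betatron normal modes from turn-by-turn BPM data with independent component analysis, can be sketched on synthetic data. The tunes, BPM count and noise level below are invented, and the subsequent lattice fitting and correction steps of the paper are not reproduced.

      # Toy ICA separation of two betatron modes from synthetic turn-by-turn BPM data.
      import numpy as np
      from sklearn.decomposition import FastICA

      turns = np.arange(2048)
      nu_x, nu_y = 0.22, 0.36                          # fractional tunes (illustrative)
      mode_x = np.cos(2 * np.pi * nu_x * turns)
      mode_y = np.cos(2 * np.pi * nu_y * turns + 0.7)

      rng = np.random.default_rng(0)
      mixing = rng.normal(size=(60, 2))                # each of 60 BPMs sees a mix of both modes
      data = mixing @ np.vstack([mode_x, mode_y])      # shape (n_bpms, n_turns)
      data += 0.01 * rng.normal(size=data.shape)       # BPM noise

      ica = FastICA(n_components=2, random_state=0)
      sources = ica.fit_transform(data.T)              # (n_turns, 2): recovered normal modes
      for i in range(2):
          tune = np.argmax(np.abs(np.fft.rfft(sources[:, i]))) / len(turns)
          print(f"mode {i}: recovered fractional tune ~ {tune:.3f}")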

  20. Use of biphase-coded pulses for wideband data storage in time-domain optical memories.

    PubMed

    Shen, X A; Kachru, R

    1993-06-10

    We demonstrate that temporally long laser pulses with appropriate phase modulation can replace either temporally brief or frequency-chirped pulses in a time-domain optical memory to store and retrieve information. A 1.65-µs-long write pulse was biphase modulated according to the 13-bit Barker code for storing multiple bits of optical data into a Pr(3+):YAlO(3) crystal, and the stored information was later recalled faithfully by using a read pulse that was identical to the write pulse. Our results further show that the stored data cannot be retrieved faithfully if mismatched write and read pulses are used. This finding opens up the possibility of designing encrypted optical memories for secure data storage.
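
    The appeal of the 13-bit Barker code for the biphase-modulated write pulse is its autocorrelation: a mainlobe of 13 with sidelobes no larger than 1 in magnitude, so correlation compresses the long coded pulse into an effectively brief one. The check below is a generic illustration of that property.

      # Autocorrelation of the 13-bit Barker code: peak 13, sidelobe magnitude at most 1.
      import numpy as np

      barker13 = np.array([+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1])
      autocorr = np.correlate(barker13, barker13, mode="full")
      sidelobes = np.abs(autocorr[:len(barker13) - 1])      # lags before the zero-lag peak
      print(autocorr)
      print("peak-to-sidelobe ratio:", autocorr.max() / sidelobes.max())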

  1. Storage media for computers in radiology.

    PubMed

    Dandu, Ravi Varma

    2008-11-01

    The introduction and wide acceptance of digital technology in medical imaging has resulted in an exponential increase in the amount of data produced by the radiology department. There is an insatiable need for storage space to archive this ever-growing volume of image data. Healthcare facilities should plan the type and size of the storage media that they need, based not just on the volume of data but also on considerations such as the speed and ease of access, redundancy, security, costs, as well as the longevity of the archival technology. This article reviews the various digital storage media and compares their merits and demerits.

  2. Storage media for computers in radiology

    PubMed Central

    Dandu, Ravi Varma

    2008-01-01

    The introduction and wide acceptance of digital technology in medical imaging has resulted in an exponential increase in the amount of data produced by the radiology department. There is an insatiable need for storage space to archive this ever-growing volume of image data. Healthcare facilities should plan the type and size of the storage media that they need, based not just on the volume of data but also on considerations such as the speed and ease of access, redundancy, security, costs, as well as the longevity of the archival technology. This article reviews the various digital storage media and compares their merits and demerits. PMID:19774182

  3. Mass storage technology in networks

    NASA Astrophysics Data System (ADS)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has improved the performance of on-line external memories based on optical disk drives, which are competing with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access files storing multimedia data that require large capacity, such as archival use and information distribution by ROM disks. Finally, image-coded document file servers for local area network use that employ 130 mm rewritable magneto-optical disk subsystems are demonstrated.

  4. Development of noSQL data storage for the ATLAS PanDA Monitoring System

    NASA Astrophysics Data System (ADS)

    Ito, H.; Potekhin, M.; Wenaus, T.

    2012-12-01

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing a scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic rate of queries. In conclusion, we present our experience with operating a Cassandra cluster over an extended period of time and with data load adequate for planned application.

  5. MeV ion-beam analysis of optical data storage films

    NASA Technical Reports Server (NTRS)

    Leavitt, J. A.; Mcintyre, L. C., Jr.; Lin, Z.

    1993-01-01

    Our objectives are threefold: (1) to accurately characterize optical data storage films by MeV ion-beam analysis (IBA) for ODSC collaborators; (2) to develop new and/or improved analysis techniques; and (3) to expand the capabilities of the IBA facility itself. Using H-1(+), He-4(+), and N-15(++) ion beams in the 1.5 MeV to 10 MeV energy range from a 5.5 MV Van de Graaff accelerator, film thickness (in atoms/sq cm), stoichiometry, impurity concentration profiles, and crystalline structure were determined by Rutherford backscattering (RBS), high-energy backscattering, channeling, nuclear reaction analysis (NRA) and proton induced X-ray emission (PIXE). Most of these techniques are discussed in detail in the ODSC Annual Report (February 17, 1987), p. 74. The PIXE technique is briefly discussed in the ODSC Annual Report (March 15, 1991), p. 23.

  6. PDF Weaving - Linking Inventory Data and Monte Carlo Uncertainty Analysis in the Study of how Disturbance Affects Forest Carbon Storage

    NASA Astrophysics Data System (ADS)

    Healey, S. P.; Patterson, P.; Garrard, C.

    2014-12-01

    Altered disturbance regimes are likely a primary mechanism by which a changing climate will affect storage of carbon in forested ecosystems. Accordingly, the National Forest System (NFS) has been mandated to assess the role of disturbance (harvests, fires, insects, etc.) on carbon storage in each of its planning units. We have developed a process which combines 1990-era maps of forest structure and composition with high-quality maps of subsequent disturbance type and magnitude to track the impact of disturbance on carbon storage. This process, called the Forest Carbon Management Framework (ForCaMF), uses the maps to apply empirically calibrated carbon dynamics built into a widely used management tool, the Forest Vegetation Simulator (FVS). While ForCaMF offers locally specific insights into the effect of historical or hypothetical disturbance trends on carbon storage, its dependence upon the interaction of several maps and a carbon model poses a complex challenge in terms of tracking uncertainty. Monte Carlo analysis is an attractive option for tracking the combined effects of error in several constituent inputs as they impact overall uncertainty. Monte Carlo methods iteratively simulate alternative values for each input and quantify how much outputs vary as a result. Variation of each input is controlled by a Probability Density Function (PDF). We introduce a technique called "PDF Weaving," which constructs PDFs that ensure that simulated uncertainty precisely aligns with uncertainty estimates that can be derived from inventory data. This hard link with inventory data (derived in this case from FIA - the US Forest Service Forest Inventory and Analysis program) both provides empirical calibration and establishes consistency with other types of assessments (e.g., habitat and water) for which NFS depends upon FIA data. Results from the NFS Northern Region will be used to illustrate PDF weaving and insights gained from ForCaMF about the role of disturbance in carbon
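
    The general Monte Carlo idea behind PDF weaving can be sketched in a few lines: each uncertain input is drawn from a PDF whose spread is calibrated to an inventory-derived standard error, the draws are pushed through the carbon model, and the spread of the outputs gives the combined uncertainty. The distributions, parameter values and the one-line stand-in for the FVS-based model below are all invented for illustration.

      # Toy Monte Carlo propagation with inventory-calibrated input PDFs (all values invented).
      import numpy as np

      rng = np.random.default_rng(42)
      N = 10_000

      initial_carbon   = rng.normal(loc=120.0, scale=8.0, size=N)    # Mg C/ha, 1990 map
      disturbed_frac   = rng.beta(a=4, b=36, size=N)                  # fraction of area disturbed
      loss_per_disturb = rng.normal(loc=0.45, scale=0.05, size=N)     # fractional C loss if disturbed

      # Trivial stand-in for the FVS-based carbon dynamics
      carbon_now = initial_carbon * (1.0 - disturbed_frac * loss_per_disturb)

      print(f"mean carbon: {carbon_now.mean():.1f} Mg C/ha")
      print(f"95% interval: {np.percentile(carbon_now, [2.5, 97.5]).round(1)}")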

  7. Basin-Scale Freshwater Storage Trends from GRACE

    NASA Astrophysics Data System (ADS)

    Famiglietti, J.; Kiel, B.; Frappart, F.; Syed, T. H.; Rodell, M.

    2006-12-01

    Four years have passed since the GRACE satellite tandem began recording variations in Earth's gravitational field. On monthly to annual timescales, variations in the gravity signal for a given location correspond primarily to changes in water storage. GRACE thus reveals, in a comprehensive, vertically-integrated manner, which areas and basins have experienced net increases or decreases in water storage. GRACE data (April 2002 to November 2005) released by the Center for Space Research at the University of Texas at Austin (RL01) is used for this study. Model-based data from GLDAS (Global Land Data Assimilation System) is integrated into this study for comparison with the CSR GRACE data. Basin-scale GLDAS storage trends are similar to those from GRACE, except in the Arctic, likely due to the GLDAS snow module. Outside of the Arctic, correlation of GRACE and GLDAS data confirms significant basin-scale storage trends across the GRACE data collection period. Sharp storage decreases are noted in the Congo, Zambezi, Mekong, Parana, and Yukon basins, among others. Significant increases are noted in the Niger, Lena, and Volga basins, and others. Current and future work involves assessment of these trends and their causes in the context of hydroclimatological variability.

  8. Minimally buffered data transfers between nodes in a data communications network

    DOEpatents

    Miller, Douglas R.

    2015-06-23

    Methods, apparatus, and products for minimally buffered data transfers between nodes in a data communications network are disclosed that include: receiving, by a messaging module on an origin node, a storage identifier, an origin data type, and a target data type, the storage identifier specifying application storage containing data, the origin data type describing a data subset contained in the origin application storage, the target data type describing an arrangement of the data subset in application storage on a target node; creating, by the messaging module, origin metadata describing the origin data type; selecting, by the messaging module from the origin application storage in dependence upon the origin metadata and the storage identifier, the data subset; and transmitting, by the messaging module to the target node, the selected data subset for storing in the target application storage in dependence upon the target data type without temporarily buffering the data subset.
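
    The following sketch is only a loose, hypothetical illustration of the idea in the claim language: a descriptor ("data type") names a strided subset of origin storage, the subset is selected as a view rather than copied into an intermediate buffer, and it is then laid out according to a separate target-side descriptor. The descriptor fields and NumPy mechanics are our own, not the patented method.

```python
# Illustrative sketch of datatype-described subset selection without an
# intermediate buffer copy. Names are hypothetical.
import numpy as np

origin_storage = np.arange(24, dtype=np.int32).reshape(4, 6)    # application storage

origin_dtype = {"rows": slice(0, 4), "cols": slice(1, 6, 2)}     # describes the subset
target_dtype = {"order": "F"}                                    # target-side arrangement

# Selection via metadata produces a view into origin storage, not a buffer copy.
subset_view = origin_storage[origin_dtype["rows"], origin_dtype["cols"]]

# "Transmit" and store on the target in the arrangement its descriptor asks for.
if target_dtype["order"] == "F":
    target_storage = np.asfortranarray(subset_view)
else:
    target_storage = np.ascontiguousarray(subset_view)

print(subset_view)
print(target_storage.flags["F_CONTIGUOUS"])
```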

  9. Carbon storage in forests and peatlands of Russia

    Treesearch

    V.A. Alexeyev; R.A. Birdsey; [Editors]

    1998-01-01

    Contains information about carbon storage in the vegetation, soils, and peatlands of Russia. Estimates of carbon storage in forests are derived from statistical data from the 1988 national forest inventory of Russia and from other sources. Methods are presented for converting data on timber stock into phytomass of tree stands, and for estimating carbon storage in...

  10. Evaluating the effect of online data compression on the disk cache of a mass storage system

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Yesha, Yelena

    1994-01-01

    A trace driven simulation of the disk cache of a mass storage system was used to evaluate the effect of an online compression algorithm on various performance measures. Traces from the system at NASA's Center for Computational Sciences were used to run the simulation and disk cache hit ratios, number of files and bytes migrating to tertiary storage were measured. The measurements were performed for both an LRU and a size based migration algorithm. In addition to seeing the effect of online data compression on the disk cache performance measure, the simulation provided insight into the characteristics of the interactive references, suggesting that hint based prefetching algorithms are the only alternative for any future improvements to the disk cache hit ratio.
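
    To make the simulation setup concrete, here is a toy trace-driven LRU disk-cache model in the spirit of the study; the trace, file sizes, and cache size are invented, and the real simulator also modeled compression and a size-based migration policy not shown here.

```python
# Toy trace-driven simulation of a disk cache with LRU eviction.
from collections import OrderedDict

def simulate_lru(trace, cache_bytes):
    """trace: iterable of (file_id, size_bytes). Returns hit ratio and bytes
    migrated to tertiary storage on eviction."""
    cache = OrderedDict()            # file_id -> size, most recently used last
    used = 0
    hits = requests = migrated = 0
    for file_id, size in trace:
        requests += 1
        if file_id in cache:
            hits += 1
            cache.move_to_end(file_id)
            continue
        while used + size > cache_bytes and cache:       # evict LRU files
            _, evicted_size = cache.popitem(last=False)
            used -= evicted_size
            migrated += evicted_size                      # pushed to tertiary storage
        cache[file_id] = size
        used += size
    return hits / requests, migrated

trace = [("a", 40), ("b", 30), ("a", 40), ("c", 50), ("b", 30), ("a", 40)]
hit_ratio, migrated = simulate_lru(trace, cache_bytes=100)
print(f"hit ratio = {hit_ratio:.2f}, bytes migrated = {migrated}")
```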

  11. Dynamics of transit times and StorAge Selection functions in four forested catchments from stable isotope data

    NASA Astrophysics Data System (ADS)

    Rodriguez, Nicolas B.; McGuire, Kevin J.; Klaus, Julian

    2017-04-01

    Transit time distributions, residence time distributions and StorAge Selection functions are fundamental integrated descriptors of water storage, mixing, and release in catchments. In this contribution, we determined these time-variant functions in four neighboring forested catchments in the H.J. Andrews Experimental Forest, Oregon, USA, by employing a two-year time series of 18O in precipitation and discharge. Previous studies in these catchments assumed stationary, exponentially distributed transit times, and complete mixing/random sampling to explore the influence of various catchment properties on the mean transit time. Here we relaxed such assumptions to relate transit time dynamics and the variability of StorAge Selection functions to catchment characteristics, catchment storage, and meteorological forcing seasonality. Conceptual models of the catchments, consisting of two reservoirs combined in series-parallel, were calibrated to discharge and stable isotope tracer data. We assumed randomly sampled/fully mixed conditions for each reservoir, which resulted in an incompletely mixed system overall. Based on the results, we solved the Master Equation, which describes the dynamics of water ages in storage and in catchment outflows. Consistent across all catchments, we found that transit times were generally shorter during wet periods, indicating the contribution of shallow storage (soil, saprolite) to discharge. During extended dry periods, transit times increased significantly, indicating the contribution of deeper storage (bedrock) to discharge. Our work indicated that the strong seasonality of precipitation impacted transit times by leading to a dynamic selection of stored water ages, whereas catchment size was not a control on transit times. In general, this work showed the usefulness of time-variant transit times with conceptual models and confirmed the existence of the catchment age mixing behaviors emerging from other similar studies.
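
    For readers unfamiliar with the formalism, a commonly cited form of the age master equation that underlies StorAge Selection functions is sketched below in LaTeX; the notation is generic and may differ in detail from the paper summarized above.

```latex
% A commonly cited form of the age master equation behind StorAge Selection
% functions (notation may differ from the paper summarized above).
% S_T(T,t): rank storage, i.e. the volume of water in storage younger than age T;
% J: precipitation flux; Q, ET: discharge and evapotranspiration fluxes;
% \Omega_Q, \Omega_{ET}: cumulative StorAge Selection functions.
\[
  \frac{\partial S_T(T,t)}{\partial t} + \frac{\partial S_T(T,t)}{\partial T}
  = J(t) - Q(t)\,\Omega_Q\!\left(S_T(T,t),t\right)
         - ET(t)\,\Omega_{ET}\!\left(S_T(T,t),t\right),
  \qquad S_T(0,t) = 0 .
\]
% The backward transit time distribution of discharge then follows as
% p_Q(T,t) = \partial \Omega_Q(S_T(T,t),t) / \partial T .
```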

  12. Tribology of magnetic storage systems

    NASA Technical Reports Server (NTRS)

    Bhushan, Bharat

    1992-01-01

    The construction and the materials used in different magnetic storage devices are defined. The theories of friction and adhesion, interface temperatures, wear, and solid-liquid lubrication relevant to magnetic storage systems are presented. Experimental data are presented wherever possible to support the relevant theories advanced.

  13. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-05-13

    Here, we propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. Finally, the method has been successfully demonstrated on the NSLS-II storage ring.
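
    As a hedged sketch of the signal-processing step (not the actual NSLS-II analysis code), the example below builds a synthetic turn-by-turn BPM matrix from two sinusoidal modes and uses FastICA to recover the temporal sources and their spatial patterns; tunes, amplitudes, and noise levels are arbitrary.

```python
# Hedged sketch: isolating betatron-like normal modes from simulated
# turn-by-turn BPM readings with ICA (synthetic data, illustrative only).
import numpy as np
from sklearn.decomposition import FastICA

n_turns, n_bpms = 2048, 60
turns = np.arange(n_turns)
rng = np.random.default_rng(0)

# Two "normal modes" with different tunes, projected onto the BPMs with
# BPM-dependent amplitudes and phases, plus noise.
nu_1, nu_2 = 0.22, 0.31
phase = rng.uniform(0, 2 * np.pi, size=(2, n_bpms))
amp = rng.uniform(0.5, 1.5, size=(2, n_bpms))
mode1 = amp[0] * np.cos(2 * np.pi * nu_1 * turns[:, None] + phase[0])
mode2 = amp[1] * np.cos(2 * np.pi * nu_2 * turns[:, None] + phase[1])
tbt = mode1 + mode2 + 0.05 * rng.standard_normal((n_turns, n_bpms))

# ICA separates the temporal source signals; the mixing-matrix columns give the
# spatial pattern (amplitude/phase) of each mode at every BPM.
ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(tbt)             # (n_turns, n_components)
bpm_patterns = ica.mixing_                   # (n_bpms, n_components)

# The tune of each recovered source shows up as the dominant FFT peak.
for k in range(sources.shape[1]):
    spectrum = np.abs(np.fft.rfft(sources[:, k]))
    tune = (np.argmax(spectrum[1:]) + 1) / n_turns
    print(f"component {k}: tune ~ {tune:.3f}")
```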

  14. Comprehensive, Process-based Identification of Hydrologic Models using Satellite and In-situ Water Storage Data: A Multi-objective calibration Approach

    NASA Astrophysics Data System (ADS)

    Abdo Yassin, Fuad; Wheater, Howard; Razavi, Saman; Sapriza, Gonzalo; Davison, Bruce; Pietroniro, Alain

    2015-04-01

    The credible identification of vertical and horizontal hydrological components and their associated parameters is very challenging (if not impossible) by only constraining the model to streamflow data, especially in regions where the vertical processes significantly dominate the horizontal processes. The prairie areas of the Saskatchewan River basin, a major water system in Canada, demonstrate such behavior, where the hydrologic connectivity and vertical fluxes are mainly controlled by the amount of surface and sub-surface water storages. In this study, we develop a framework for distributed hydrologic model identification and calibration that jointly constrains the model response (i.e., streamflows) as well as a set of model state variables (i.e., water storages) to observations. This framework is set up in the form of multi-objective optimization, where multiple performance criteria are defined and used to simultaneously evaluate the fidelity of the model to streamflow observations and observed (estimated) changes of water storage in the gridded landscape over daily and monthly time scales. The time series of estimated changes in total water storage (including soil, canopy, snow and pond storages) used in this study were derived from an experimental study enhanced by the information obtained from the GRACE satellite. We test this framework on the calibration of a Land Surface Scheme-Hydrology model, called MESH (Modélisation Environmentale Communautaire - Surface and Hydrology), for the Saskatchewan River basin. Pareto Archived Dynamically Dimensioned Search (PA-DDS) is used as the multi-objective optimization engine. The significance of using the developed framework is demonstrated in comparison with the results obtained through a conventional calibration approach to streamflow observations. The approach of incorporating water storage data into the model identification process can more potentially constrain the posterior parameter space, more comprehensively

  15. Influence of technology on magnetic tape storage device characteristics

    NASA Technical Reports Server (NTRS)

    Gniewek, John J.; Vogel, Stephen M.

    1994-01-01

    There are available today many data storage devices that serve the diverse application requirements of the consumer, professional entertainment, and computer data processing industries. Storage technologies include semiconductors, several varieties of optical disk, optical tape, magnetic disk, and many varieties of magnetic tape. In some cases, devices are developed with specific characteristics to meet specification requirements. In other cases, an existing storage device is modified and adapted to a different application. For magnetic tape storage devices, examples of the former case are 3480/3490 and QIC device types developed for the high end and low end segments of the data processing industry respectively, VHS, Beta, and 8 mm formats developed for consumer video applications, and D-1, D-2, D-3 formats developed for professional video applications. Examples of modified and adapted devices include 4 mm, 8 mm, 12.7 mm and 19 mm computer data storage devices derived from consumer and professional audio and video applications. With the conversion of the consumer and professional entertainment industries from analog to digital storage and signal processing, there have been increasing references to the 'convergence' of the computer data processing and entertainment industry technologies. There has yet to be seen, however, any evidence of convergence of data storage device types. There are several reasons for this. The diversity of application requirements results in varying degrees of importance for each of the tape storage characteristics.

  16. Comprehensive monitoring for heterogeneous geographically distributed storage

    DOE PAGES

    Ratnikova, Natalia; Karavakis, E.; Lammel, S.; ...

    2015-12-23

    Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014, and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted as a centrally managed storage space, will be increased to up to 40%. For comprehensive tracking and monitoring of the storage utilization across all participating sites, CMS developed a space monitoring system, which provides a central view of the geographically dispersed heterogeneous storage systems. The first prototype was deployed at pilot sites in summer 2014, and has been substantially reworked since then. In this study, we discuss the functionality and our experience of system deployment and operation on the full CMS scale.

  17. Rewritable three-dimensional holographic data storage via optical forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yetisen, Ali K., E-mail: ayetisen@mgh.harvard.edu; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Montelongo, Yunuen

    2016-08-08

    The development of nanostructures that can be reversibly arranged and assembled into 3D patterns may enable optical tunability. However, current dynamic recording materials such as photorefractive polymers cannot be used to store information permanently while also retaining configurability. Here, we describe the synthesis and optimization of a silver nanoparticle doped poly(2-hydroxyethyl methacrylate-co-methacrylic acid) recording medium for reversibly recording 3D holograms. We theoretically and experimentally demonstrate organizing nanoparticles into 3D assemblies in the recording medium using optical forces produced by the gradients of standing waves. The nanoparticles in the recording medium are organized by multiple nanosecond laser pulses to produce reconfigurable slanted multilayer structures. We demonstrate the capability of producing rewritable optical elements such as multilayer Bragg diffraction gratings, 1D photonic crystals, and 3D multiplexed optical gratings. We also show that 3D virtual holograms can be reversibly recorded. This recording strategy may have applications in reconfigurable optical elements, data storage devices, and dynamic holographic displays.

  18. A new data set for estimating organic carbon storage to 3 m depth in soils of the northern circumpolar permafrost region

    USGS Publications Warehouse

    Hugelius, G.; Bockheim, James G.; Camill, P.; Elberling, B.; Grosse, G.; Harden, J.W.; Johnson, Kevin; Jorgenson, T.; Koven, C.D.; Kuhry, P.; Michaelson, G.; Mishra, U.; Palmtag, J.; Ping, C.-L.; O'Donnell, J.; Schirrmeister, L.; Schuur, E.A.G.; Sheng, Y.; Smith, L.C.; Strauss, J.; Yu, Z.

    2013-01-01

    High-latitude terrestrial ecosystems are key components in the global carbon cycle. The Northern Circumpolar Soil Carbon Database (NCSCD) was developed to quantify stocks of soil organic carbon (SOC) in the northern circumpolar permafrost region (a total area of 18.7 × 10⁶ km²). The NCSCD is a geographical information system (GIS) data set that has been constructed using harmonized regional soil classification maps together with pedon data from the northern permafrost region. Previously, the NCSCD has been used to calculate SOC storage to the reference depths 0–30 cm and 0–100 cm (based on 1778 pedons). It has been shown that soils of the northern circumpolar permafrost region also contain significant quantities of SOC in the 100–300 cm depth range, but there has been no circumpolar compilation of pedon data to quantify this deeper SOC pool and there are no spatially distributed estimates of SOC storage below 100 cm depth in this region. Here we describe the synthesis of an updated pedon data set for SOC storage (kg C m⁻²) in deep soils of the northern circumpolar permafrost regions, with separate data sets for the 100–200 cm (524 pedons) and 200–300 cm (356 pedons) depth ranges. These pedons have been grouped into the North American and Eurasian sectors and the mean SOC storage for different soil taxa (subdivided into Gelisols including the sub-orders Histels, Turbels, Orthels, permafrost-free Histosols, and permafrost-free mineral soil orders) has been added to the updated NCSCDv2. The updated version of the data set is freely available online in different file formats and spatial resolutions that enable spatially explicit applications in GIS mapping and terrestrial ecosystem models. While this newly compiled data set adds to our knowledge of SOC in the 100–300 cm depth range, it also reveals that large uncertainties remain. Identified data gaps include spatial coverage of deep (> 100 cm) pedons in many regions as well as the spatial extent of areas

  19. Groundwater Storage Changes: Present Status from GRACE Observations

    NASA Technical Reports Server (NTRS)

    Chen, Jianli; Famiglietti, James S.; Scanlon, Bridget R.; Rodell, Matthew

    2015-01-01

    Satellite gravity measurements from the Gravity Recovery and Climate Experiment (GRACE) provide quantitative measurement of terrestrial water storage (TWS) changes with unprecedented accuracy. Combining GRACE-observed TWS changes and independent estimates of water change in soil and snow and surface reservoirs offers a means for estimating groundwater storage change. Since its launch in March 2002, GRACE time-variable gravity data have been successfully used to quantify long-term groundwater storage changes in different regions over the world, including northwest India, the High Plains Aquifer and the Central Valley in the USA, the North China Plain, Middle East, and southern Murray-Darling Basin in Australia, where groundwater storage has been significantly depleted in recent years (or decades). It is difficult to rely on in situ groundwater measurements for accurate quantification of large, regional-scale groundwater storage changes, especially at long timescales due to inadequate spatial and temporal coverage of in situ data and uncertainties in storage coefficients. The now nearly 13 years of GRACE gravity data provide a successful and unique complementary tool for monitoring and measuring groundwater changes on a global and regional basis. Despite the successful applications of GRACE in studying global groundwater storage change, there are still some major challenges limiting the application and interpretation of GRACE data. In this paper, we present an overview of GRACE applications in groundwater studies and discuss if and how the main challenges to using GRACE data can be addressed.

  20. Solar energy storage via liquid filled cans - Test data and analysis

    NASA Technical Reports Server (NTRS)

    Saha, H.

    1978-01-01

    This paper describes the design of a solar thermal storage test facility with water-filled metal cans as the heat storage medium and also presents some preliminary test results and analysis. This combination of solid and liquid media shows unique heat transfer and heat content characteristics and will be well suited for use with solar air systems for space and hot water heating. The trends of the test results acquired thus far are representative of the test bed characteristics while operating in the various modes.

  1. A study of data representation in Hadoop to optimize data storage and search performance for the ATLAS EventIndex

    NASA Astrophysics Data System (ADS)

    Baranowski, Z.; Canali, L.; Toebbicke, R.; Hrivnac, J.; Barberis, D.

    2017-10-01

    This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each of which consists of ∼100 bytes, all having the same probability to be searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. We also report on the use of HBase for the EventIndex, focussing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface, and the optimizations for data warehouse workloads and reports.
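
    As a small, self-contained illustration of the kind of format comparison described (with synthetic records, not the real EventIndex data), the snippet below writes the same table as Parquet with two compression codecs via pyarrow and reads back a single column, the access pattern that favors columnar formats for count-style queries.

```python
# Sketch of a Parquet format/compression comparison with synthetic event records.
import os
import pyarrow as pa
import pyarrow.parquet as pq

n = 100_000
table = pa.table({
    "run_number":   [280000 + i % 50 for i in range(n)],
    "event_number": list(range(n)),
    "lumi_block":   [i % 1000 for i in range(n)],
    "guid":         [f"FILE-{i % 200:05d}" for i in range(n)],
})

for codec in ("snappy", "zstd"):
    path = f"eventindex_{codec}.parquet"
    pq.write_table(table, path, compression=codec)
    print(codec, os.path.getsize(path), "bytes")

# Column pruning: a count-style query only needs to read one column.
events = pq.read_table("eventindex_snappy.parquet", columns=["event_number"])
print("rows:", events.num_rows)
```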

  2. Similar Tensor Arrays - A Framework for Storage of Tensor Array Data

    NASA Astrophysics Data System (ADS)

    Brun, Anders; Martin-Fernandez, Marcos; Acar, Burak; Munoz-Moreno, Emma; Cammoun, Leila; Sigfridsson, Andreas; Sosa-Cabrera, Dario; Svensson, Björn; Herberthson, Magnus; Knutsson, Hans

    This chapter describes a framework for storage of tensor array data, useful for describing regularly sampled tensor fields. The main component of the framework, called Similar Tensor Array Core (STAC), is the result of a collaboration between research groups within the SIMILAR network of excellence. It aims to capture the essence of regularly sampled tensor fields using a minimal set of attributes and can therefore be used as a “greatest common divisor” and interface between tensor array processing algorithms. This is potentially useful in applied fields like medical image analysis, in particular in Diffusion Tensor MRI, where misinterpretation of tensor array data is a common source of errors. By promoting a strictly geometric perspective on tensor arrays, with a close resemblance to the terminology used in differential geometry, STAC removes ambiguities and guides the user to define all necessary information. In contrast to existing tensor array file formats, it is minimalistic and based on an intrinsic and geometric interpretation of the array itself, without references to other coordinate systems.
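
    The sketch below is our own minimal illustration of the kind of information such a container is meant to pin down, i.e. the tensor component array plus an intrinsic description of the sampling grid; the field names are hypothetical and do not follow the STAC specification.

```python
# Minimal, hypothetical tensor-array container (not the STAC spec): component
# data plus an intrinsic, geometric description of the sampling grid, with no
# reference to an external coordinate system.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TensorArray:
    components: np.ndarray              # shape: grid dims + tensor dims, e.g. (X, Y, Z, 3, 3)
    grid_spacing: tuple                 # sample spacing along each grid axis
    tensor_index_types: tuple           # covariant/contravariant character of each tensor index
    metadata: dict = field(default_factory=dict)

    def __post_init__(self):
        n_grid = len(self.grid_spacing)
        n_tensor = len(self.tensor_index_types)
        assert self.components.ndim == n_grid + n_tensor, "grid + tensor dims must match array rank"

# A 3-D field of rank-2 tensors (e.g. diffusion tensors) on a 2 mm isotropic grid.
dt_field = TensorArray(
    components=np.zeros((64, 64, 40, 3, 3)),
    grid_spacing=(2.0, 2.0, 2.0),
    tensor_index_types=("contravariant", "contravariant"),
    metadata={"units": "mm^2/s"},
)
print(dt_field.components.shape, dt_field.grid_spacing)
```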

  3. The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    1993-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in the solution of today's infinitely complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and magnetic media storage risk factors, and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.

  4. Data compilation, synthesis, and calculations used for organic-carbon storage and inventory estimates for mineral soils of the Mississippi River Basin

    USGS Publications Warehouse

    Buell, Gary R.; Markewich, Helaine W.

    2004-01-01

    U.S. Geological Survey investigations of environmental controls on carbon cycling in soils and sediments of the Mississippi River Basin (MRB), an area of 3.3 × 10⁶ square kilometers (km²), have produced an assessment tool for estimating the storage and inventory of soil organic carbon (SOC) by using soil-characterization data from Federal, State, academic, and literature sources. The methodology is based on the linkage of site-specific SOC data (pedon data) to the soil-association map units of the U.S. Department of Agriculture State Soil Geographic (STATSGO) and Soil Survey Geographic (SSURGO) digital soil databases in a geographic information system. The collective pedon database assembled from individual sources presently contains 7,321 pedon records representing 2,581 soil series. SOC storage, in kilograms per square meter (kg/m²), is calculated for each pedon at standard depth intervals from 0 to 10, 10 to 20, 20 to 50, and 50 to 100 centimeters. The site-specific storage estimates are then regionalized to produce national-scale (STATSGO) and county-scale (SSURGO) maps of SOC to a specified depth. Based on this methodology, the mean SOC storage for the top meter of mineral soil in the MRB is approximately 10 kg/m², and the total inventory is approximately 32.3 Pg (1 petagram = 10⁹ metric tons). This inventory is from 2.5 to 3 percent of the estimated global mineral SOC pool.

  5. Estimation of carbon storage based on individual tree detection in Pinus densiflora stands using a fusion of aerial photography and LiDAR data.

    PubMed

    Kim, So-Ra; Kwak, Doo-Ahn; Lee, Woo-Kyun; Son, Yowhan; Bae, Sang-Won; Kim, Choonsig; Yoo, Seongjin

    2010-07-01

    The objective of this study was to estimate the carbon storage capacity of Pinus densiflora stands using remotely sensed data by combining digital aerial photography with light detection and ranging (LiDAR) data. A digital canopy model (DCM), generated from the LiDAR data, was combined with aerial photography for segmenting crowns of individual trees. To eliminate errors from over- and under-segmentation, the combined image was smoothed using a Gaussian filtering method. The processed image was then segmented into individual trees using a marker-controlled watershed segmentation method. After measuring the crown area from the segmented individual trees, the individual tree diameter at breast height (DBH) was estimated using a regression function developed from the relationship observed between the field-measured DBH and crown area. The aboveground biomass of individual trees could be calculated from the image-derived DBH using a regression function developed by the Korea Forest Research Institute. The carbon storage, based on individual trees, was estimated by simple multiplication using the carbon conversion index (0.5), as suggested in guidelines from the Intergovernmental Panel on Climate Change. The mean carbon storage per individual tree was estimated and then compared with the field-measured value. This study suggested that the biomass and carbon storage in a large forest area can be effectively estimated using aerial photographs and LiDAR data.
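
    A hedged sketch of the processing chain on a synthetic canopy model is shown below (Gaussian smoothing, marker-controlled watershed, crown area to DBH to biomass to carbon); all regression and allometric coefficients are placeholders, not the values fitted in the study.

```python
# Hedged sketch of the processing chain on a synthetic canopy model.
import numpy as np
from skimage.filters import gaussian
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic digital canopy model (m): two Gaussian "crowns" on flat ground.
yy, xx = np.mgrid[0:100, 0:100]
dcm = 12 * np.exp(-((yy - 35) ** 2 + (xx - 40) ** 2) / 120.0) \
    + 10 * np.exp(-((yy - 65) ** 2 + (xx - 70) ** 2) / 90.0)

smoothed = gaussian(dcm, sigma=2)                      # suppress over-segmentation
canopy_mask = smoothed > 2.0                           # ignore ground / low vegetation

# Local maxima of the smoothed DCM serve as tree-top markers.
tops = peak_local_max(smoothed, min_distance=10, labels=canopy_mask)
markers = np.zeros_like(smoothed, dtype=int)
markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)

crowns = watershed(-smoothed, markers, mask=canopy_mask)

pixel_area = 0.25                                      # m^2 per pixel (0.5 m resolution)
for tree_id in range(1, crowns.max() + 1):
    crown_area = (crowns == tree_id).sum() * pixel_area            # m^2
    dbh = 2.5 * crown_area ** 0.6                                   # cm, placeholder regression
    biomass = 0.09 * dbh ** 2.5                                     # kg, placeholder allometry
    carbon = 0.5 * biomass                                          # IPCC carbon fraction
    print(f"tree {tree_id}: crown {crown_area:.1f} m^2, DBH {dbh:.1f} cm, C {carbon:.1f} kg")
```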

  6. Using Emergent and Internal Catchment Data to Elucidate the Influence of Landscape Structure and Storage State on Hydrologic Response in a Piedmont Watershed

    NASA Astrophysics Data System (ADS)

    Putnam, S. M.; Harman, C. J.

    2017-12-01

    Many studies have sought to unravel the influence of landscape structure and catchment state on the quantity and composition of water at the catchment outlet. These studies run into issues of equifinality where multiple conceptualizations of flow pathways or storage states cannot be discriminated against on the basis of the quantity and composition of water alone. Here we aim to parse out the influence of landscape structure, flow pathways, and storage on both the observed catchment hydrograph and chemograph, using hydrometric and water isotope data collected from multiple locations within Pond Branch, a 37-hectare Piedmont catchment of the eastern US. This data is used to infer the quantity and age distribution of water stored and released by individual hydrogeomorphic units, and the catchment as a whole, in order to test hypotheses relating landscape structure, flow pathways, and catchment storage to the hydrograph and chemograph. Initial hypotheses relating internal catchment properties or processes to the hydrograph or chemograph are formed at the catchment scale. Data from Pond Branch include spring and catchment discharge measurements, well water levels, and soil moisture, as well as three years of high frequency precipitation and surface water stable water isotope data. The catchment hydrograph is deconstructed using hydrograph separation and the quantity of water associated with each time-scale of response is compared to the quantity of discharge that could be produced from hillslope and riparian hydrogeomorphic units. Storage is estimated for each hydrogeomorphic unit as well as the vadose zone, in order to construct a continuous time series of total storage, broken down by landscape unit. Rank StorAge Selection (rSAS) functions are parameterized for each hydrogeomorphic unit as well as the catchment as a whole, and the relative importance of changing proportions of discharge from each unit as well as storage in controlling the variability in the catchment

  7. Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    1998-01-01

    This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.

  8. CMS users data management service integration and first experiences with its NoSQL data storage

    NASA Astrophysics Data System (ADS)

    Riahi, H.; Spiga, D.; Boccali, T.; Ciangottini, D.; Cinquilli, M.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Santocchia, A.

    2014-06-01

    The distributed data analysis workflow in CMS assumes that jobs run in a different location to where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in using the CMS computing resources when transferring the analysis job outputs synchronously to the remote site as soon as they are produced on the job execution node. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors all the steps of users' file handling, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, while providing real-time monitoring and reports to users and service operators and remaining highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using the NoSQL technology are also presented, as well as data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved as they were determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.
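
    As an illustration of the document-oriented design such a service might use (field names are hypothetical, not the actual AsyncStageOut schema), the sketch below shows a per-file transfer task as a CouchDB document together with a JavaScript map function that a view could use, with the built-in _count reduce, to report files per state.

```python
# Illustrative per-file transfer task as a CouchDB document, plus a view map
# function for per-state aggregation. Field names are hypothetical.
import json

transfer_doc = {
    "_id": "user_jdoe_file_000123",
    "workflow": "jdoe_analysis_2014",
    "source_lfn": "/store/temp/user/jdoe/output_123.root",
    "destination_se": "T2_IT_Pisa",
    "state": "new",            # new -> acquired -> done / failed
    "retry_count": 0,
    "timestamp": "2014-03-01T12:00:00Z",
}

# A CouchDB view map function (stored in a design document) emitting one row per
# file keyed by state; with the built-in _count reduce it yields per-state totals.
files_by_state_map = """
function (doc) {
  if (doc.state && doc.workflow) {
    emit([doc.state, doc.workflow], 1);
  }
}
"""

print(json.dumps(transfer_doc, indent=2))
print(files_by_state_map)
```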

  9. 7 CFR 1767.70 - Record storage media.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 12 2012-01-01 2012-01-01 false Record storage media. 1767.70 Section 1767.70... Record storage media. The media used to capture and store the data will play an important part of each Rural Development borrower. Each borrower has the flexibility to select its own storage media. The...

  10. 7 CFR 1767.70 - Record storage media.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true Record storage media. 1767.70 Section 1767.70... Record storage media. The media used to capture and store the data will play an important part of each Rural Development borrower. Each borrower has the flexibility to select its own storage media. The...

  11. 7 CFR 1767.70 - Record storage media.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 12 2013-01-01 2013-01-01 false Record storage media. 1767.70 Section 1767.70... Record storage media. The media used to capture and store the data will play an important part of each Rural Development borrower. Each borrower has the flexibility to select its own storage media. The...

  12. 7 CFR 1767.70 - Record storage media.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 12 2011-01-01 2011-01-01 false Record storage media. 1767.70 Section 1767.70... Record storage media. The media used to capture and store the data will play an important part of each Rural Development borrower. Each borrower has the flexibility to select its own storage media. The...

  13. Petabyte Class Storage at Jefferson Lab (CEBAF)

    NASA Technical Reports Server (NTRS)

    Chambers, Rita; Davis, Mark

    1996-01-01

    By 1997, the Thomas Jefferson National Accelerator Facility will collect over one Terabyte of raw information per day of Accelerator operation from three concurrently operating Experimental Halls. When post-processing is included, roughly 250 TB of raw and formatted experimental data will be generated each year. By the year 2000, a total of one Petabyte will be stored on-line. Critical to the experimental program at Jefferson Lab (JLab) is the networking and computational capability to collect, store, retrieve, and reconstruct data on this scale. The design criteria include support of a raw data stream of 10-12 MB/second from Experimental Hall B, which will operate the CEBAF (Continuous Electron Beam Accelerator Facility) Large Acceptance Spectrometer (CLAS). Keeping up with this data stream implies design strategies that provide storage guarantees during accelerator operation, minimize the number of times data is buffered, allow seamless access to specific data sets for the researcher, synchronize data retrievals with the scheduling of postprocessing calculations on the data reconstruction CPU farms, as well as support the site capability to perform data reconstruction and reduction at the same overall rate at which new data is being collected. The current implementation employs state-of-the-art StorageTek Redwood tape drives and robotics library integrated with the Open Storage Manager (OSM) Hierarchical Storage Management software (Computer Associates, International), the use of Fibre Channel RAID disks dual-ported between Sun Microsystems SMP servers, and a network-based interface to a 10,000 SPECint92 data processing CPU farm. Issues of efficiency, scalability, and manageability will become critical to meet the year 2000 requirements for a Petabyte of near-line storage interfaced to over 30,000 SPECint92 of data processing power.

  14. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    PubMed

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
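
    A toy version of the approach, with a schema of our own invention and SQLite standing in for the production database, is sketched below: time-series samples are stored as rows so that part of the analysis can be pushed into the query itself.

```python
# Toy example: keep time-series samples as relational rows and compute summary
# statistics inside the database (schema is hypothetical, not the paper's).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bold_timeseries (
        subject   TEXT,
        region    TEXT,
        timepoint INTEGER,
        value     REAL
    )
""")
rows = [("s01", "STG", t, 100 + (t % 5)) for t in range(200)] + \
       [("s01", "IFG", t, 98 + (t % 7)) for t in range(200)]
conn.executemany("INSERT INTO bold_timeseries VALUES (?, ?, ?, ?)", rows)

# Per-region mean and variance over time, computed inside the database.
query = """
    SELECT region,
           AVG(value)                                   AS mean_signal,
           AVG(value * value) - AVG(value) * AVG(value) AS variance
    FROM bold_timeseries
    WHERE subject = ?
    GROUP BY region
"""
for region, mean_signal, variance in conn.execute(query, ("s01",)):
    print(region, round(mean_signal, 2), round(variance, 2))
```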

  15. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  16. Assimilating GRACE terrestrial water storage data into a conceptual hydrology model for the River Rhine

    NASA Astrophysics Data System (ADS)

    Widiastuti, E.; Steele-Dunne, S. C.; Gunter, B.; Weerts, A.; van de Giesen, N.

    2009-12-01

    Terrestrial water storage (TWS) is a key component of the terrestrial and global hydrological cycles, and plays a major role in the Earth’s climate. The Gravity Recovery and Climate Experiment (GRACE) twin satellite mission provided the first space-based dataset of TWS variations, albeit with coarse resolution and limited accuracy. Here, we examine the value of assimilating GRACE observations into a well-calibrated conceptual hydrology model of the Rhine river basin. In this study, the ensemble Kalman filter (EnKF) and smoother (EnKS) were applied to assimilate the GRACE TWS variation data into the HBV-96 rainfall run-off model, from February 2003 to December 2006. Two GRACE datasets were used, the DMT-1 models produced at TU Delft, and the CSR-RL04 models produced by UT-Austin . Each center uses its own data processing and filtering methods, yielding two different estimates of TWS variations and therefore two sets of assimilated TWS estimates. To validate the results, the model estimated discharge after the data assimilation was compared with measured discharge at several stations. As expected, the updated TWS was generally somewhere between the modeled and observed TWS in both experiments and the variance was also lower than both the prior error covariance and the assumed GRACE observation error. However, the impact on the discharge was found to depend heavily on the assimilation strategy used, in particular on how the TWS increments were applied to the individual storage terms of the hydrology model.
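
    The ensemble Kalman filter update at the heart of such an assimilation can be written compactly; the sketch below uses synthetic numbers and a three-component storage state rather than the actual HBV-96/GRACE configuration.

```python
# Compact sketch of an ensemble Kalman filter update assimilating one
# basin-averaged TWS observation into an ensemble of storage states.
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_state = 64, 3                      # e.g. [soil moisture, groundwater, snow] storages

X = rng.normal([120.0, 300.0, 40.0], [15.0, 30.0, 10.0], size=(n_ens, n_state)).T  # (n_state, n_ens)
H = np.array([[1.0, 1.0, 1.0]])             # observation operator: GRACE sees total storage
y_obs, obs_err = 470.0, 20.0                # mm TWS anomaly and its std. dev.

# Ensemble statistics.
X_mean = X.mean(axis=1, keepdims=True)
A = X - X_mean                               # ensemble anomalies
P_HT = A @ (H @ A).T / (n_ens - 1)           # cross-covariance of state with predicted obs
HPH = (H @ A) @ (H @ A).T / (n_ens - 1)      # covariance of predicted obs
K = P_HT @ np.linalg.inv(HPH + np.array([[obs_err**2]]))   # Kalman gain

# Perturbed-observation update for each ensemble member.
y_pert = y_obs + rng.normal(0.0, obs_err, size=(1, n_ens))
X_a = X + K @ (y_pert - H @ X)

print("prior total TWS mean   :", float((H @ X).mean()))
print("posterior total TWS mean:", float((H @ X_a).mean()))
```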

  17. Systems Issues Pertaining to Holographic Optical Data Storage in Thick Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Oezcan, Meric; Smithey, Daniel T.; Crew, Marshall; Lau, Sonie (Technical Monitor)

    1998-01-01

    The optical data storage capacity and raw bit-error-rate achievable with thick photochromic bacteriorhodopsin (BR) films are investigated for sequential recording and read- out of angularly- and shift-multiplexed digital holograms inside a thick blue-membrane D85N BR film. We address the determination of an exposure schedule that produces equal diffraction efficiencies among each of the multiplexed holograms. This exposure schedule is determined by numerical simulations of the holographic recording process within the BR material, and maximizes the total grating strength. We also experimentally measure the shift selectivity and compare the results to theoretical predictions. Finally, we evaluate the bit-error-rate of a single hologram, and of multiple holograms stored within the film.

  18. Energy Storage.

    ERIC Educational Resources Information Center

    Eaton, William W.

    Described are technological considerations affecting storage of energy, particularly electrical energy. The background and present status of energy storage by batteries, water storage, compressed air storage, flywheels, magnetic storage, hydrogen storage, and thermal storage are discussed followed by a review of development trends. Included are…

  19. Sirocco Storage Server v. pre-alpha 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew L.; Danielson, Geoffrey; Ward, H. Lee

    Sirocco is a parallel storage system under development, designed for write-intensive workloads on large-scale HPC platforms. It implements a key-value object store on top of a set of loosely federated storage servers that cooperate to ensure data integrity and performance. It includes support for a range of different types of storage transactions. This software release constitutes a conformant storage server, along with the client-side libraries to access the storage over a network.

  20. Archival storage solutions for PACS

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy

    1997-05-01

    While there are many inhibitors to the widespread diffusion of PACS systems, one of them has been the lack of robust, cost-effective digital archive storage solutions. Moreover, an automated Nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on storage system throughput will be analyzed. The concept of automated migration of images from high performance, high cost storage devices to high capacity, low cost storage devices will be introduced as a viable way to minimize overall storage costs for an archive. The concept of access density will also be introduced and applied to the selection of the most cost effective archive solution.

  1. Spatially pooled depth-dependent reservoir storage, elevation, and water-quality data for selected reservoirs in Texas, January 1965-January 2010

    USGS Publications Warehouse

    Burley, Thomas E.; Asquith, William H.; Brooks, Donald L.

    2011-01-01

    The U.S. Geological Survey (USGS), in cooperation with Texas Tech University, constructed a dataset of selected reservoir storage (daily and instantaneous values), reservoir elevation (daily and instantaneous values), and water-quality data from 59 reservoirs throughout Texas. The period of record for the data is as large as January 1965-January 2010. Data were acquired from existing databases, spreadsheets, delimited text files, and hard-copy reports. The goal was to obtain as much data as possible; therefore, no data acquisition restrictions specifying a particular time window were used. Primary data sources include the USGS National Water Information System, the Texas Commission on Environmental Quality Surface Water-Quality Management Information System, and the Texas Water Development Board monthly Texas Water Condition Reports. Additional water-quality data for six reservoirs were obtained from USGS Texas Annual Water Data Reports. Data were combined from the multiple sources to create as complete a set of properties and constituents as the disparate databases allowed. By devising a unique per-reservoir short name to represent all sites on a reservoir regardless of their source, all sampling sites at a reservoir were spatially pooled by reservoir and temporally combined by date. Reservoir selection was based on various criteria including the availability of water-quality properties and constituents that might affect the trophic status of the reservoir and could also be important for understanding possible effects of climate change in the future. Other considerations in the selection of reservoirs included the general reservoir-specific period of record, the availability of concurrent reservoir storage or elevation data to match with water-quality data, and the availability of sample depth measurements. Additional separate selection criteria included historic information pertaining to blooms of golden algae. Physical properties and constituents were water

  2. Goddard Conference on Mass Storage Systems and Technologies, volume 2

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor)

    1993-01-01

    Papers and viewgraphs from the conference are presented. Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low end helical scan tape drives. Additional discussion topics addressed the evolution of the identifiable unit for processing (file, granule, data set, or some similar object) as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

  3. Striped tertiary storage arrays

    NASA Technical Reports Server (NTRS)

    Drapeau, Ann L.

    1993-01-01

    Data striping is a technique for increasing the throughput and reducing the response time of large accesses to a storage system. In striped magnetic or optical disk arrays, a single file is striped or interleaved across several disks; in a striped tape system, files are interleaved across tape cartridges. Because a striped file can be accessed by several disk drives or tape recorders in parallel, the sustained bandwidth to the file is greater than in non-striped systems, where accesses to the file are restricted to a single device. It is argued that applying striping to tertiary storage systems will provide needed performance and reliability benefits. The performance benefits of striping for applications using large tertiary storage systems are discussed. It will introduce commonly available tape drives and libraries, and discuss their performance limitations, especially focusing on the long latency of tape accesses. This section will also describe an event-driven tertiary storage array simulator that is being used to understand the best ways of configuring these storage arrays. The reliability problems of magnetic tape devices are discussed, and plans for modeling the overall reliability of striped tertiary storage arrays to identify the amount of error correction required are described. Finally, work being done by other members of the Sequoia group to address latency of accesses, optimizing tertiary storage arrays that perform mostly writes, and compression is discussed.
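
    The core of striping is a simple block-to-device mapping; the sketch below shows one common round-robin layout (the exact layout used by any particular array may differ).

```python
# Minimal sketch of striping: logical block k lands on device
# (k // stripe_unit) % n_devices, so large sequential reads keep all drives
# (or tape transports) busy at once.
def placement(logical_block, stripe_unit_blocks, n_devices):
    stripe_index = logical_block // stripe_unit_blocks
    device = stripe_index % n_devices
    block_on_device = (stripe_index // n_devices) * stripe_unit_blocks \
        + logical_block % stripe_unit_blocks
    return device, block_on_device

n_devices, stripe_unit = 4, 2
for k in range(12):
    dev, blk = placement(k, stripe_unit, n_devices)
    print(f"logical block {k:2d} -> device {dev}, block {blk}")
```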

  4. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    NASA Astrophysics Data System (ADS)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is essential for SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with a small number of correctable bits is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, strong and long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because large-capacity SCM improves the storage performance.
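
    A back-of-the-envelope way to reason about "applicable ECC strength" is the post-correction block failure probability of a t-error-correcting code; the sketch below evaluates it for illustrative (not measured) raw bit error rates and codeword sizes.

```python
# A t-error-correcting code over an n-bit codeword fails when more than t bits
# flip: P_fail = 1 - sum_{k<=t} C(n,k) p^k (1-p)^(n-k). Numbers are illustrative,
# not measured SCM/NAND characteristics.
from math import comb

def block_failure_prob(raw_ber, n_bits, t_correctable):
    p_ok = sum(comb(n_bits, k) * raw_ber**k * (1 - raw_ber)**(n_bits - k)
               for k in range(t_correctable + 1))
    return 1.0 - p_ok

n = 4096 + 256                      # data + parity bits in one codeword (example)
for raw_ber in (1e-4, 1e-3):
    for t in (1, 4, 8):
        print(f"BER={raw_ber:.0e}, t={t}: P_fail={block_failure_prob(raw_ber, n, t):.2e}")
```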

  5. Goddard Conference on Mass Storage Systems and Technologies, Volume 1

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor)

    1993-01-01

    Copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in Sep. 1992 are included. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems (data ingestion rates now approach the order of terabytes per day). Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low end helical scan tape drives. Additional topics addressed the evolution of the identifiable unit for processing purposes as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

  6. An investigation of used electronics return flows: A data-driven approach to capture and predict consumers storage and utilization behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabbaghi, Mostafa, E-mail: mostafas@buffalo.edu; Esmaeilian, Behzad, E-mail: b.esmaeilian@neu.edu; Raihanian Mashhadi, Ardeshir, E-mail: ardeshir@buffalo.edu

    Highlights: • We analyzed a data set of HDDs returned back to an e-waste collection site. • We studied factors that affect the storage behavior. • Consumer type, brand and size are among factors which affect the storage behavior. • Commercial consumers have stored computers more than household consumers. • Machine learning models were used to predict the storage behavior. - Abstract: Consumers often have a tendency to store their used, old or un-functional electronics for a period of time before they discard them and return them back to the waste stream. This behavior increases the obsolescence rate of used still-functional products, leading to lower profitability from End-of-Use (EOU) treatments such as reuse, upgrade, and refurbishment. These types of behaviors are influenced by several product and consumer-related factors such as consumers’ traits and lifestyles, technology evolution, product design features, product market value, and pro-environmental stimuli. Better understanding of different groups of consumers, their utilization and storage behavior and the connection of these behaviors with product design features helps Original Equipment Manufacturers (OEMs) and the recycling and recovery industry to better overcome the challenges resulting from the undesirable storage of used products. This paper aims at providing insightful statistical analysis of the dynamic nature of Electronic Waste (e-waste) by studying the effects of design characteristics, brand and consumer type on the electronics usage time and end of use time-in-storage. A database consisting of 10,063 Hard Disk Drives (HDD) of used personal computers returned back to a remanufacturing facility located in Chicago, IL, USA during 2011–2013 has been selected as the base for this study. The results show that commercial consumers have stored computers more than household consumers regardless of brand and capacity factors. Moreover, a heterogeneous storage behavior

  7. A new data set for estimating organic carbon storage to 3 m depth in soils of the northern circumpolar permafrost region

    DOE PAGES

    Hugelius, Gustaf; Bockheim, J. G.; Camill, P.; ...

    2013-12-23

    High-latitude terrestrial ecosystems are key components in the global carbon cycle. The Northern Circumpolar Soil Carbon Database (NCSCD) was developed to quantify stocks of soil organic carbon (SOC) in the northern circumpolar permafrost region (a total area of 18.7 × 10⁶ km²). The NCSCD is a geographical information system (GIS) data set that has been constructed using harmonized regional soil classification maps together with pedon data from the northern permafrost region. Previously, the NCSCD has been used to calculate SOC storage to the reference depths 0–30 cm and 0–100 cm (based on 1778 pedons). It has been shown that soils of the northern circumpolar permafrost region also contain significant quantities of SOC in the 100–300 cm depth range, but there has been no circumpolar compilation of pedon data to quantify this deeper SOC pool and there are no spatially distributed estimates of SOC storage below 100 cm depth in this region. Here we describe the synthesis of an updated pedon data set for SOC storage (kg C m⁻²) in deep soils of the northern circumpolar permafrost regions, with separate data sets for the 100–200 cm (524 pedons) and 200–300 cm (356 pedons) depth ranges. These pedons have been grouped into the North American and Eurasian sectors and the mean SOC storage for different soil taxa (subdivided into Gelisols including the sub-orders Histels, Turbels, Orthels, permafrost-free Histosols, and permafrost-free mineral soil orders) has been added to the updated NCSCDv2. The updated version of the data set is freely available online in different file formats and spatial resolutions that enable spatially explicit applications in GIS mapping and terrestrial ecosystem models. While this newly compiled data set adds to our knowledge of SOC in the 100–300 cm depth range, it also reveals that large uncertainties remain. In conclusion, identified data gaps include spatial coverage of deep (> 100 cm) pedons in many regions as well as

  8. A Method of Signal Scrambling to Secure Data Storage for Healthcare Applications.

    PubMed

    Bao, Shu-Di; Chen, Meng; Yang, Guang-Zhong

    2017-11-01

    A body sensor network that consists of wearable and/or implantable biosensors has been an important front-end for collecting personal health records. It is expected that the full integration of outside-hospital personal health information and hospital electronic health records will further promote preventative health services as well as global health. However, the integration and sharing of health information is bound to bring with it security and privacy issues. With extensive development of healthcare applications, security and privacy issues are becoming increasingly important. This paper addresses the potential security risks of healthcare data in Internet-based applications and proposes a method of signal scrambling as an add-on security mechanism in the application layer for a variety of healthcare information, where a piece of tiny data is used to scramble healthcare records. The former is kept locally and the latter, along with security protection, is sent for cloud storage. The tiny data can be derived from a random number generator or even a piece of healthcare data, which makes the method more flexible. The computational complexity and security performance in terms of theoretical and experimental analysis has been investigated to demonstrate the efficiency and effectiveness of the proposed method. The proposed method is applicable to all kinds of data that require extra security protection within complex networks.
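
    As a generic illustration of the scrambling idea (not the specific scheme evaluated in the paper), the sketch below uses a small "tiny data" value as the seed of a pseudo-random permutation that scrambles a record before cloud storage and inverts it locally.

```python
# Generic illustration: "tiny data" kept locally seeds a pseudo-random
# permutation that scrambles a healthcare record before cloud storage.
import numpy as np

def scramble(signal, tiny_data):
    rng = np.random.default_rng(tiny_data)          # tiny data acts as the key/seed
    perm = rng.permutation(len(signal))
    return signal[perm]

def unscramble(scrambled, tiny_data):
    rng = np.random.default_rng(tiny_data)          # regenerate the same permutation
    perm = rng.permutation(len(scrambled))
    restored = np.empty_like(scrambled)
    restored[perm] = scrambled
    return restored

ecg_segment = np.sin(np.linspace(0, 8 * np.pi, 256)) \
    + 0.05 * np.random.default_rng(7).standard_normal(256)
tiny_data = 987654321                               # kept locally, never sent to the cloud

stored_in_cloud = scramble(ecg_segment, tiny_data)
recovered = unscramble(stored_in_cloud, tiny_data)
print("round-trip exact:", np.allclose(recovered, ecg_segment))
```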

  9. Gas storage materials, including hydrogen storage materials

    DOEpatents

    Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

    2013-02-19

    A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

  10. Gas storage materials, including hydrogen storage materials

    DOEpatents

    Mohtadi, Rana F; Wicks, George G; Heung, Leung K; Nakamura, Kenji

    2014-11-25

    A material for the storage and release of gases comprises a plurality of hollow elements, each hollow element comprising a porous wall enclosing an interior cavity, the interior cavity including structures of a solid-state storage material. In particular examples, the storage material is a hydrogen storage material, such as a solid state hydride. An improved method for forming such materials includes the solution diffusion of a storage material solution through a porous wall of a hollow element into an interior cavity.

  11. Integration of end-user Cloud storage for CMS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  12. Integration of end-user Cloud storage for CMS analysis

    DOE PAGES

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...

    2017-05-19

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  13. Using "StorAge Selection" functions and high resolution isotope data to unravel travel time distributions in headwater catchments

    NASA Astrophysics Data System (ADS)

    Benettin, Paolo; Soulsby, Chris; Birkel, Christian; Tetzlaff, Doerthe; Botter, Gianluca; Rinaldo, Andrea

    2017-04-01

    We use high resolution tracer data from the Bruntland Burn catchment (UK) to test theoretical approaches that integrate catchment-scale flow and transport processes in a unified framework centered on selective age sampling by streamflow and evapotranspiration fluxes. Hydrologic transport is here described through StorAge Selection (SAS) functions, parametrized as simple power laws. By representing the way in which catchment storage generates outflows composed of water of different ages, the main mechanism regulating the tracer composition of runoff is clearly identified. The calibrated numerical model provides simulations that convincingly reproduce complex measured signals of daily deuterium content in stream waters during wet and dry periods. The results for the catchment under consideration are consistent with other recent studies indicating a tendency for natural catchments to preferentially release younger available water. The model allows estimation of transient water age and its related uncertainty, as well as the total catchment storage. This study shows that power-law SAS functions are a powerful tool for explaining catchment-scale transport processes, one that also has potential in less intensively monitored sites.
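
    To make the power-law parametrisation concrete, here is a small illustrative sketch (not the calibrated Bruntland Burn model): a SAS function Omega(P) = P**k defined over the cumulative age-ranked storage P in [0, 1], where k < 1 preferentially samples younger water, k = 1 samples storage uniformly, and k > 1 favours older water. The storage-age distribution below is synthetic.

    ```python
    # Illustrative power-law SAS sampling; parameters and data are synthetic.
    import numpy as np

    def sample_outflow_ages(ages_sorted: np.ndarray, k: float, n_draws: int,
                            rng: np.random.Generator) -> np.ndarray:
        """Draw ages of parcels leaving storage under Omega(P) = P**k.

        ages_sorted : parcel ages in storage, sorted from youngest to oldest.
        """
        u = rng.random(n_draws)
        p = u ** (1.0 / k)                     # inverse CDF of Omega(P) = P**k
        idx = np.minimum((p * len(ages_sorted)).astype(int), len(ages_sorted) - 1)
        return ages_sorted[idx]

    rng = np.random.default_rng(0)
    storage_ages = np.sort(rng.exponential(scale=200.0, size=10_000))  # days, synthetic
    young_biased = sample_outflow_ages(storage_ages, k=0.5, n_draws=5_000, rng=rng)
    uniform = sample_outflow_ages(storage_ages, k=1.0, n_draws=5_000, rng=rng)
    print(f"mean streamflow age, k=0.5: {young_biased.mean():.0f} d "
          f"vs k=1.0: {uniform.mean():.0f} d")
    ```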

  14. Analog storage integrated circuit

    DOEpatents

    Walker, J. T.; Larsen, R. S.; Shapiro, S. L.

    1989-01-01

    A high speed data storage array is defined utilizing a unique cell design for high speed sampling of a rapidly changing signal. Each cell of the array includes two input gates between the signal input and a storage capacitor. The gates are controlled by a high speed row clock and low speed column clock so that the instantaneous analog value of the signal is only sampled and stored by each cell on coincidence of the two clocks.
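
    As a rough illustration of the coincidence-sampling idea (a toy model, not the patented circuit), each cell in the sketch below stores the instantaneous signal value only when its fast row clock and slow column clock are asserted together; the array size, time step, and input waveform are arbitrary.

    ```python
    # Toy model of sampling on row/column clock coincidence (illustrative only).
    import math

    ROWS, COLS = 8, 8
    DT = 1e-9                                    # 1 ns per step -> 1 GS/s effective rate

    def signal(t: float) -> float:
        return math.sin(2 * math.pi * 5e6 * t)   # example 5 MHz analog input

    cells = [[0.0] * COLS for _ in range(ROWS)]
    for step in range(ROWS * COLS):
        t = step * DT
        row = step % ROWS            # fast row clock walks one cell every step
        col = step // ROWS           # slow column clock advances once per row sweep
        cells[row][col] = signal(t)  # sample stored on coincidence of the two clocks

    # Reading the cells out column by column reproduces the sampled waveform.
    samples = [cells[r][c] for c in range(COLS) for r in range(ROWS)]
    print(len(samples), "samples captured")
    ```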

  15. Analog storage integrated circuit

    DOEpatents

    Walker, J.T.; Larsen, R.S.; Shapiro, S.L.

    1989-03-07

    A high speed data storage array is defined utilizing a unique cell design for high speed sampling of a rapidly changing signal. Each cell of the array includes two input gates between the signal input and a storage capacitor. The gates are controlled by a high speed row clock and low speed column clock so that the instantaneous analog value of the signal is only sampled and stored by each cell on coincidence of the two clocks. 6 figs.

  16. Groundwater and Terrestrial Water Storage

    NASA Technical Reports Server (NTRS)

    Rodell, Matthew; Chambers, Don P.; Famiglietti, James S.

    2012-01-01

    Groundwater is a vital resource and also a dynamic component of the water cycle. Unconfined aquifer storage is less responsive to short term weather conditions than the near surface terrestrial water storage (TWS) components (soil moisture, surface water, and snow). However, save for the permanently frozen regions, it typically exhibits a larger range of variability over multi-annual periods than the other components. Groundwater is poorly monitored at the global scale, but terrestrial water storage (TWS) change data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission are a reasonable proxy for unconfined groundwater at climatic scales.

  17. Robust holographic storage system design.

    PubMed

    Watanabe, Takahiro; Watanabe, Minoru

    2011-11-21

    Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, which are all exposed to the radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can easily provide the demanded large storage capability. In particular, holographic storage systems with no rotation mechanism are in demand because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities sometimes engender severe problems that prevent reading of all contents of the holographic memory, which is a turn-off failure mode of a laser array. This paper therefore presents a proposal for a recovery method for the turn-off failure mode of a laser array on a holographic storage system, and describes results of an experimental demonstration. © 2011 Optical Society of America

  18. iSDS: a self-configurable software-defined storage system for enterprise

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen

    2018-01-01

    Storage is one of the most important aspects of IT infrastructure for various enterprises. But enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost, because it is usually designed and constructed around customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises request storage with more flexible deployment and at lower cost. Moreover, the rise of new application fields, such as social media, big data, video streaming services etc., makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of workloads running on the storage. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.

  19. Integrating new Storage Technologies into EOS

    NASA Astrophysics Data System (ADS)

    Peters, Andreas J.; van der Ster, Dan C.; Rocha, Joaquim; Lensing, Paul

    2015-12-01

    The EOS[1] storage software was designed to cover CERN disk-only storage use cases in the medium term, trading scalability against latency. To cover and prepare for long-term requirements, the CERN IT data and storage services group (DSS) is actively conducting R&D and open source contributions to experiment with next-generation storage software based on CEPH[3] and ethernet-enabled disk drives. CEPH provides a scale-out object storage system, RADOS, and additionally various optional high-level services such as an S3 gateway, RADOS block devices and a POSIX-compliant file system, CephFS. The acquisition of CEPH by Red Hat underlines the promising role of CEPH as the open source storage platform of the future. CERN IT is running a CEPH service in the context of OpenStack on a moderate scale of 1 PB of replicated storage. Building a 100+ PB storage system based on CEPH will require software and hardware tuning. It is of capital importance to demonstrate the feasibility and possibly iron out bottlenecks and blocking issues beforehand. The main idea behind this R&D is to leverage and contribute to existing building blocks in the CEPH storage stack and implement a few CERN-specific requirements in a thin, customisable storage layer. A second research topic is the integration of ethernet-enabled disks. This paper introduces various ongoing open source developments, their status and applicability.
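
    As a concrete, hedged illustration of the S3 gateway mentioned above, the snippet below talks to a Ceph RADOS Gateway through its S3-compatible API using boto3; the endpoint URL, credentials and bucket name are placeholders, not CERN's configuration.

    ```python
    # Minimal S3 access against a RADOS Gateway endpoint (placeholders throughout).
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.org:7480",   # RADOS Gateway (placeholder)
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="eos-rnd-test")
    s3.put_object(Bucket="eos-rnd-test", Key="run001/events.dat", Body=b"payload")
    obj = s3.get_object(Bucket="eos-rnd-test", Key="run001/events.dat")
    print(obj["Body"].read())
    ```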

  20. Study of Basin Recession Characteristics and Groundwater Storage Properties

    NASA Astrophysics Data System (ADS)

    Yen-Bo, Chen; Cheng-Haw, Lee

    2017-04-01

    Stream flow and groundwater storage are freshwater resources that humans depend on. In this study, we discuss the recession characteristics of southern-area basins and the groundwater storage of the Kao-Ping River basin, and hope to provide a reference for Taiwan water resource management. The first part of this study concerns recession characteristics. We apply the Brutsaert (2008) low-flow analysis model to establish two models for sifting recession data, including a low-flow steady-period model and a normal-condition model. Through individual-event analysis, group-event analysis and a recession assessment of southern-area basins, stream flow and base flow recession characteristics are parameterized. The second part of this study concerns groundwater storage. Among the main basins in southern Taiwan, the Kao-Ping River basin has sufficient stream flow and precipitation gaging station data, extensive drainage data, and data on the different hydrological characteristics of the upstream and downstream areas. Therefore, this study focuses on the Kao-Ping River basin and assesses its groundwater storage properties. Taking the residual groundwater volume in the dry season into consideration, we use the base flow hydrograph to assess the periodic properties of groundwater storage, in order to establish a hydrological period conceptual model. With the linearity between groundwater storage and accumulated precipitation quantified by the hydrological period conceptual model, their periodic changes and alternation trends in each drainage area of the Kao-Ping River basin have been estimated. Results of this study show that the recession time of stream flow is related to the initial flow rate of the recession events. The recession time index is lower when the flow is stream flow rather than base flow, and the recession time index is higher in the low-flow steady period than in the normal recession condition. By applying the hydrological period conceptual model, groundwater storage can explicitly be analyzed and compared with precipitation, by only
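
    For readers unfamiliar with this style of low-flow analysis, the sketch below fits the widely used recession relation -dQ/dt = a·Q^b (the form underlying Brutsaert-type analyses) to a synthetic daily discharge series; the data and parameter values are illustrative only, not the Kao-Ping River records.

    ```python
    # Fit -dQ/dt = a*Q**b to a synthetic recession limb (illustrative only).
    import numpy as np

    t = np.arange(60)                                         # days
    Q = 25.0 * np.exp(-t / 18.0) \
        + np.random.default_rng(1).normal(0, 0.05, t.size)    # m^3/s, synthetic

    dQdt = np.diff(Q)                       # per-day change
    Qmid = 0.5 * (Q[1:] + Q[:-1])
    mask = dQdt < 0                         # keep only receding days
    logQ = np.log(Qmid[mask])
    logdQ = np.log(-dQdt[mask])

    b, loga = np.polyfit(logQ, logdQ, 1)    # log(-dQ/dt) = log(a) + b*log(Q)
    a = np.exp(loga)
    print(f"fitted a = {a:.3f}, b = {b:.2f}")
    # For b close to 1 the recession is roughly exponential and 1/a (in days)
    # is a rough proxy for a characteristic recession timescale.
    ```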

  1. Digital radiography and electronic data storage from the perspective of legal requirements for record keeping.

    PubMed

    Figgener, L; Runte, C

    2003-12-01

    In some countries physicians and dentists are required by law to keep medical and dental records. These records not only serve as personal notes and memory aids but have to be in accordance with the necessary standard of care and may be used as evidence in litigation. Inadequate, incomplete or even missing records can lead to reversal of the burden of proof, resulting in a dramatically reduced chance of successful defence in litigation. The introduction of digital radiography and electronic data storage presents a new problem with respect to legal evidence, since digital data can easily be manipulated and industry is now required to provide adequate measures to prevent manipulations and forgery.

  2. CO2 Storage related Groundwater Impacts and Protection

    NASA Astrophysics Data System (ADS)

    Fischer, Sebastian; Knopf, Stefan; May, Franz; Rebscher, Dorothee

    2016-03-01

    Injection of CO2 into the deep subsurface will affect physical and chemical conditions in the storage environment. Hence, geological CO2 storage can have potential impacts on groundwater resources. Shallow freshwater can only be affected if leakage pathways facilitate the ascent of CO2 or saline formation water. Leakage associated with CO2 storage cannot be excluded, but potential environmental impacts could be reduced by selecting suitable storage locations. In the framework of risk assessment, testing of models and scenarios against operational data has to be performed repeatedly in order to predict the long-term fate of CO2. Monitoring of a storage site should reveal any deviations from expected storage performance, so that corrective measures can be taken. Comprehensive R & D activities and experience from several storage projects will enhance the state of knowledge on geological CO2 storage, thus enabling safe storage operations at well-characterised and carefully selected storage sites while meeting the requirements of groundwater protection.

  3. Integration of cloud-based storage in BES III computing environment

    NASA Astrophysics Data System (ADS)

    Wang, L.; Hernandez, F.; Deng, Z.

    2014-06-01

    We present an on-going work that aims to evaluate the suitability of cloud-based storage as a supplement to the Lustre file system for storing experimental data for the BES III physics experiment and as a backend for storing files belonging to individual members of the collaboration. In particular, we discuss our findings regarding the support of cloud-based storage in the software stack of the experiment. We report on our development work that improves the support of CERN's ROOT data analysis framework and allows efficient remote access to data through several cloud storage protocols. We also present our efforts providing the experiment with efficient command line tools for navigating and interacting with cloud storage-based data repositories both from interactive sessions and grid jobs.
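
    As a hedged illustration of what remote data access through ROOT looks like in practice, TFile::Open accepts a URL and dispatches it to the matching I/O plugin; the URL and tree name below are placeholders, and the cloud-storage plugins developed for BES III are not shown here.

    ```python
    # Open a remote ROOT file by URL (placeholder URL and tree name).
    import ROOT

    f = ROOT.TFile.Open("https://data.example.org/besiii/run123/dst.root")
    if f and not f.IsZombie():
        tree = f.Get("TDstEvent")      # hypothetical tree name
        print("entries:", tree.GetEntries() if tree else "tree not found")
        f.Close()
    ```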

  4. Tunable blue laser compensates for thermal expansion of the medium in holographic data storage.

    PubMed

    Tanaka, Tomiji; Sako, Kageyasu; Kasegawa, Ryo; Toishi, Mitsuru; Watanabe, Kenjiro

    2007-09-01

    A tunable laser optical source equipped with wavelength and mode-hop monitors was developed to compensate for thermal expansion of the medium in holographic data storage. The laser's tunable range is 402-409 nm, and supplying 90 mA of laser diode current provides an output power greater than 40 mW. The aberration of output light is less than 0.05 λ (rms). The temperature range within which the laser can compensate for thermal expansion of the medium is estimated based on the tunable range, which is +/-13.5 degrees C for glass substrates and +/-17.5 degrees C for amorphous polyolefin substrates.

  5. Analysis of carbon and nutrient storage of dry tropical forest of chhattisgarh using satellite data

    NASA Astrophysics Data System (ADS)

    Thakur, T. K.

    2014-11-01

    The purpose of this study was to characterize the carbon, nitrogen, phosphorus and potassium in the Barnowpara Sanctuary, Raipur district, Chhattisgarh, India, through the use of satellite remote sensing and GIS. The total storage of nutrients in vegetation (OS + US + GS) varied from 105.1 to 560.69 kg ha-1 for N, 4.09 to 49.59 kg ha-1 for P, 24.59 to 255.58 kg ha-1 for K and 7310 to 4836 kg ha-1 for C in different forest types. They were highest in Dense mixed forest and lowest in Degraded mixed forest. The study also showed that NDVI and carbon storage were strongly correlated with the Shannon Index and species richness, indicating that the diversity of forest types plays a vital role in carbon accumulation. The study also developed reliable regression models for the estimation of LAI, biomass, NPP, and C & N storage in dry tropical forests by using NDVI and different vegetation indices, which can be derived from fine-resolution satellite data. The study shows that the dry tropical forests of Central India are quite immature, not in a steady state, and have strong potential for carbon sequestration. Both the quantitative and qualitative information derived in the study helped in evolving key strategies for maintaining existing C pools and also improving C sequestration in different forest types. The study explores the scope and potential of dry tropical forests for improving C sequestration and mitigating global warming and climatic change.
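
    A minimal sketch of the kind of regression model mentioned above, relating a vegetation index (here NDVI) to carbon storage with ordinary least squares; the plot-level numbers are synthetic placeholders, not the study's field data.

    ```python
    # Ordinary least squares fit of carbon storage against NDVI (synthetic data).
    import numpy as np

    ndvi = np.array([0.31, 0.42, 0.48, 0.55, 0.61, 0.66, 0.72])     # per plot
    carbon = np.array([9.5, 17.2, 21.8, 28.4, 33.9, 38.1, 45.0])    # Mg C ha^-1

    slope, intercept = np.polyfit(ndvi, carbon, 1)
    pred = slope * ndvi + intercept
    r2 = 1 - np.sum((carbon - pred) ** 2) / np.sum((carbon - carbon.mean()) ** 2)
    print(f"C storage ≈ {slope:.1f} * NDVI + {intercept:.1f}  (R² = {r2:.2f})")
    ```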

  6. Reorganizing Nigeria's Vaccine Supply Chain Reduces Need For Additional Storage Facilities, But More Storage Is Required.

    PubMed

    Shittu, Ekundayo; Harnly, Melissa; Whitaker, Shanta; Miller, Roger

    2016-02-01

    One of the major problems facing Nigeria's vaccine supply chain is the lack of adequate vaccine storage facilities. Despite the introduction of solar-powered refrigerators and the use of new tools to monitor supply levels, this problem persists. Using data on vaccine supply for 2011-14 from Nigeria's National Primary Health Care Development Agency, we created a simulation model to explore the effects of variance in supply and demand on storage capacity requirements. We focused on the segment of the supply chain that moves vaccines inside Nigeria. Our findings suggest that 55 percent more vaccine storage capacity is needed than is currently available. We found that reorganizing the supply chain as proposed by the National Primary Health Care Development Agency could reduce that need to 30 percent more storage. Storage requirements varied by region of the country and vaccine type. The Nigerian government may want to consider the differences in storage requirements by region and vaccine type in its proposed reorganization efforts. Project HOPE—The People-to-People Health Foundation, Inc.
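
    The following is a deliberately simplified, hypothetical sketch in the spirit of the capacity simulation described above: it looks only at demand-side variability under an order-up-to policy and asks how often a given cold-chain capacity would be exceeded; all distributions and numbers are invented, not the Nigerian supply data.

    ```python
    # Monte Carlo check of cold-chain capacity against random monthly demand
    # (all numbers invented for illustration).
    import numpy as np

    rng = np.random.default_rng(42)
    mean_demand, sd_demand = 95.0, 25.0                          # litres per month
    demands = np.clip(rng.normal(mean_demand, sd_demand, 100_000), 0.0, None)

    for capacity in (100, 120, 140, 160):
        # Order-up-to policy: each delivery refills the store to its capacity,
        # so a stockout occurs in any month whose demand exceeds that capacity.
        stockout_risk = np.mean(demands > capacity)
        print(f"capacity {capacity:>3} L -> stockout risk {stockout_risk:.1%}")
    ```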

  7. Used Nuclear Fuel-Storage, Transportation & Disposal Analysis Resource and Data System (UNF-ST&DARDS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Kaushik; Clarity, Justin B; Cumberland, Riley M

    This will be licensed via RSICC. A new, integrated data and analysis system has been designed to simplify and automate the performance of accurate and efficient evaluations for characterizing the input to the overall nuclear waste management system: UNF-Storage, Transportation & Disposal Analysis Resource and Data System (UNF-ST&DARDS). A relational database within UNF-ST&DARDS provides a standard means by which UNF-ST&DARDS can succinctly store and retrieve modeling and simulation (M&S) parameters for specific spent nuclear fuel analyses. A library of analysis model templates provides the ability to communicate the various sets of M&S parameters to the most appropriate M&S application. Interactive visualization capabilities facilitate data analysis and results interpretation. UNF-ST&DARDS' current analysis capabilities include (1) assembly-specific depletion and decay, and (2) spent nuclear fuel cask-specific criticality and shielding. Currently, UNF-ST&DARDS uses the SCALE nuclear analysis code system for performing nuclear analysis.

  8. ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization

    NASA Astrophysics Data System (ADS)

    Antcheva, I.; Ballintijn, M.; Bellenot, B.; Biskup, M.; Brun, R.; Buncic, N.; Canal, Ph.; Casadei, D.; Couet, O.; Fine, V.; Franco, L.; Ganis, G.; Gheata, A.; Maline, D. Gonzalez; Goto, M.; Iwaszkiewicz, J.; Kreshuk, A.; Segura, D. Marcos; Maunder, R.; Moneta, L.; Naumann, A.; Offermann, E.; Onuchin, V.; Panacek, S.; Rademakers, F.; Russo, P.; Tadel, M.

    2009-12-01

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting while the RooStats library provides abstractions and implementations for advanced statistical tools. Multivariate classification methods based on machine learning techniques are available via the TMVA package. A central piece of these analysis tools is the set of histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like Postscript and PDF or in bitmap formats like JPG or GIF. The result can also be stored into ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks — e.g. data mining in HEP — by using PROOF, which will take care of optimally
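
    A small, self-contained PyROOT example of the column-wise TTree storage the abstract refers to: write a branch to a compressed ROOT file, then read it back and histogram it. The file, tree and branch names are arbitrary.

    ```python
    # Write and read back a simple TTree with one double branch.
    from array import array
    import ROOT

    f = ROOT.TFile("example.root", "RECREATE")
    tree = ROOT.TTree("events", "toy event data")
    energy = array("d", [0.0])
    tree.Branch("energy", energy, "energy/D")

    for _ in range(1000):
        energy[0] = ROOT.gRandom.Gaus(100.0, 15.0)   # toy values
        tree.Fill()

    tree.Write()
    f.Close()

    f2 = ROOT.TFile.Open("example.root")
    t2 = f2.Get("events")
    print("entries:", t2.GetEntries())
    t2.Draw("energy>>h(50,40,160)")                  # fill a histogram from the branch
    ```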

  9. Osmotically inactive sodium and potassium storage: lessons learned from the Edelman and Boling data.

    PubMed

    Nguyen, Minhtri K; Nguyen, Dai-Scott; Nguyen, Minh-Kevin

    2016-09-01

    Because changes in the plasma water sodium concentration ([Na(+)]pw) are clinically due to changes in the mass balance of Na(+), K(+), and H2O, the analysis and treatment of the dysnatremias are dependent on the validity of the Edelman equation in defining the quantitative interrelationship between the [Na(+)]pw and the total exchangeable sodium (Nae), total exchangeable potassium (Ke), and total body water (TBW) (Edelman IS, Leibman J, O'Meara MP, Birkenfeld LW. J Clin Invest 37: 1236-1256, 1958): [Na(+)]pw = 1.11(Nae + Ke)/TBW - 25.6. The interrelationship between [Na(+)]pw and Nae, Ke, and TBW in the Edelman equation is empirically determined by accounting for measurement errors in all of these variables. In contrast, linear regression analysis of the same data set using [Na(+)]pw as the dependent variable yields the following equation: [Na(+)]pw = 0.93(Nae + Ke)/TBW + 1.37. Moreover, based on the study by Boling et al. (Boling EA, Lipkind JB. 18: 943-949, 1963), the [Na(+)]pw is related to the Nae, Ke, and TBW by the following linear regression equation: [Na(+)]pw = 0.487(Nae + Ke)/TBW + 71.54. The disparities between the slope and y-intercept of these three equations are unknown. In this mathematical analysis, we demonstrate that the disparities between the slope and y-intercept in these three equations can be explained by how the osmotically inactive Na(+) and K(+) storage pool is quantitatively accounted for. Our analysis also indicates that the osmotically inactive Na(+) and K(+) storage pool is dynamically regulated and that changes in the [Na(+)]pw can be predicted based on changes in the Nae, Ke, and TBW despite dynamic changes in the osmotically inactive Na(+) and K(+) storage pool. Copyright © 2016 the American Physiological Society.
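
    Since the three regression relations are quoted explicitly above, a short worked comparison makes their differences tangible; the input values (Nae + Ke = 6720 mmol, TBW = 42 L) are illustrative, not taken from either data set.

    ```python
    # Compare the three quoted [Na+]pw regressions for one illustrative input.
    na_k_per_tbw = 6720.0 / 42.0            # (Nae + Ke)/TBW in mmol/L -> 160.0

    edelman = 1.11 * na_k_per_tbw - 25.6
    ols_fit = 0.93 * na_k_per_tbw + 1.37
    boling  = 0.487 * na_k_per_tbw + 71.54

    print(f"(Nae+Ke)/TBW = {na_k_per_tbw:.1f} mmol/L")
    print(f"Edelman relation:   [Na+]pw ≈ {edelman:.1f} mmol/L")
    print(f"OLS re-analysis:    [Na+]pw ≈ {ols_fit:.1f} mmol/L")
    print(f"Boling regression:  [Na+]pw ≈ {boling:.1f} mmol/L")
    # The spread between the estimates reflects how each fit implicitly absorbs
    # the osmotically inactive Na+ and K+ storage pool.
    ```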

  10. Research on an IP disaster recovery storage system

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Wang, Yusheng; Zhu, Jianfeng

    2008-12-01

    Based on both the Fibre Channel (FC) Storage Area Network (SAN) switch and the Fabric Application Interface Standard (FAIS) mechanism, an iSCSI storage controller is put forward, and based upon it, an Internet Small Computer System Interface (iSCSI) SAN construction strategy for disaster recovery (DR) is proposed; several multi-site replication models and a closed-queue performance analysis method are also discussed in this paper. The iSCSI storage controller lies at the fabric level of the networked storage infrastructure; it can be used to connect to both hybrid storage applications and storage subsystems, and it can provide a virtualized storage environment and support logical volume access control. By cooperating with its remote counterparts, a disaster recovery storage system can be built on the basis of data replication, block-level snapshot and Internet Protocol (IP) take-over functions.
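
    The abstract mentions a closed-queue performance analysis method without detailing it; a standard textbook tool for closed queueing networks is exact Mean Value Analysis (MVA), sketched below with invented per-I/O service demands (they are not measurements of the proposed controller).

    ```python
    # Exact Mean Value Analysis for a closed network of FCFS queueing stations.
    def mva(service_demands, n_customers):
        """Return (throughput, per-station response times) for n_customers in the loop."""
        queue_len = [0.0] * len(service_demands)
        for n in range(1, n_customers + 1):
            resp = [d * (1 + queue_len[i]) for i, d in enumerate(service_demands)]
            throughput = n / sum(resp)
            queue_len = [throughput * r for r in resp]
        return throughput, resp

    demands = [0.002, 0.005, 0.008]   # seconds/I/O: initiator, IP network, remote target (invented)
    for outstanding_ios in (1, 4, 16, 64):
        x, resp = mva(demands, outstanding_ios)
        print(f"{outstanding_ios:>3} outstanding I/Os -> "
              f"throughput {x:,.0f} IOPS, response {sum(resp) * 1000:.2f} ms")
    ```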

  11. FPGA-based prototype storage system with phase change memory

    NASA Astrophysics Data System (ADS)

    Li, Gezi; Chen, Xiaogang; Chen, Bomy; Li, Shunfen; Zhou, Mi; Han, Wenbing; Song, Zhitang

    2016-10-01

    With the ever-increasing amount of data being stored via social media, mobile telephony base stations, network devices, etc., database systems face severe bandwidth bottlenecks when moving vast amounts of data from storage to the processing nodes. At the same time, Storage Class Memory (SCM) technologies such as Phase Change Memory (PCM), with unique features like fast read access, high density, non-volatility, byte-addressability, positive response to increasing temperature, superior scalability, and zero standby leakage, have changed the landscape of modern computing and storage systems. In such a scenario, we present a storage system called FLEET which can off-load partial or whole SQL queries from the CPU to the storage engine. FLEET uses an FPGA rather than conventional CPUs to implement the off-load engine due to its highly parallel nature. We have implemented an initial prototype of FLEET with PCM-based storage. The results demonstrate that significant performance and CPU utilization gains can be achieved by pushing selected query processing components into the PCM-based storage.

  12. An experiment in big data: storage, querying and visualisation of data taken from the Liverpool Telescope's wide field cameras

    NASA Astrophysics Data System (ADS)

    Barnsley, R. M.; Steele, Iain A.; Smith, R. J.; Mawson, Neil R.

    2014-07-01

    The Small Telescopes Installed at the Liverpool Telescope (STILT) project has been in operation since March 2009, collecting data with three wide field unfiltered cameras: SkycamA, SkycamT and SkycamZ. To process the data, a pipeline was developed to automate source extraction, catalogue cross-matching, photometric calibration and database storage. In this paper, modifications and further developments to this pipeline will be discussed, including a complete refactor of the pipeline's codebase into Python, migration of the back-end database technology from MySQL to PostgreSQL, and changing the catalogue used for source cross-matching from USNO-B1 to APASS. In addition to this, details will be given relating to the development of a preliminary front-end to the source extracted database which will allow a user to perform common queries such as cone searches and light curve comparisons of catalogue and non-catalogue matched objects. Some next steps and future ideas for the project will also be presented.
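
    To illustrate the cone-search queries the front-end is said to support, the sketch below runs a simple great-circle selection against a PostgreSQL table via psycopg2; the table and column names (sources, ra_deg, dec_deg) are assumptions, and a production service would use a dedicated spatial index (e.g. q3c) rather than this brute-force predicate.

    ```python
    # Brute-force cone search in PostgreSQL (assumed schema; psycopg2 required).
    import psycopg2

    def cone_search(conn, ra0, dec0, radius_deg):
        sql = """
            SELECT source_id, ra_deg, dec_deg
            FROM sources
            WHERE acos(LEAST(1.0,
                  sin(radians(%(dec0)s)) * sin(radians(dec_deg)) +
                  cos(radians(%(dec0)s)) * cos(radians(dec_deg)) *
                  cos(radians(ra_deg - %(ra0)s))
            )) < radians(%(radius)s)
        """
        with conn.cursor() as cur:
            cur.execute(sql, {"ra0": ra0, "dec0": dec0, "radius": radius_deg})
            return cur.fetchall()

    # Usage (connection parameters are placeholders):
    # conn = psycopg2.connect("dbname=skycam user=reader")
    # matches = cone_search(conn, ra0=83.82, dec0=-5.39, radius_deg=0.1)
    ```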

  13. High Burnup Dry Storage Cask Research and Development Project, Final Test Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2014-02-27

    EPRI is leading a project team to develop and implement the first five years of a Test Plan to collect data from a SNF dry storage system containing high burnup fuel. The Test Plan defined in this document outlines the data to be collected, and the storage system design, procedures, and licensing necessary to implement the Test Plan. The main goals of the proposed test are to provide confirmatory data for models and future SNF dry storage cask designs, and to support license renewals and new licenses for ISFSIs. To provide data that is most relevant to high burnup fuel in dry storage, the design of the test storage system must mimic real conditions that high burnup SNF experiences during all stages of dry storage: loading, cask drying, inert gas backfilling, and transfer to the ISFSI for multi-year storage. Along with other optional modeling, SETs, and SSTs, the data collected in this Test Plan can be used to evaluate the integrity of dry storage systems and the high burnup fuel contained therein over many decades. It should be noted that the Test Plan described in this document discusses essential activities that go beyond the first five years of Test Plan implementation. The first five years of the Test Plan include activities up through loading the cask, initiating the data collection, and beginning the long-term storage period at the ISFSI. The Test Plan encompasses the overall project that includes activities that may not be completed until 15 or more years from now, including continued data collection, shipment of the Research Project Cask to a Fuel Examination Facility, opening the cask at the Fuel Examination Facility, and examining the high burnup fuel after the initial storage period.

  14. Preliminary study on the relationships between aboveground storage and remotely sensed data at Pingdong plain afforestation land in Southern Taiwan

    NASA Astrophysics Data System (ADS)

    Wei, C.; Chen, J. M.; Yu, J.; Cheng, C.; Lai, Y.; Chiang, P.; Hong, C.; Chang, C.; Wey, T.; Tsai, M.; Wang, Y.

    2013-12-01

    This research examines the relationships between LAI, five vegetation indices (BR, SRBR, BD, NDVI and TNDVI) from remotely sensed images, in situ measurements and aboveground storage for 10-11 year old plain afforestation (14 species) located at Wanlong farm, in the subtropical-tropical region of southern Taiwan, originally governed by the Taiwan Sugar Corporation. The preliminary results show that the aboveground storage is 14.19 ± 9.19 m3 ha-1, and the correlation coefficients between aboveground storage and BR, SRBR, BD, NDVI and TNDVI are 0.331 (p=0.211), 0.317 (p=0.232), 0.310 (p=0.244), 0.714 (p=0.002) and 0.706 (p=0.002), with NDVI showing the best correlation. The LAI value measured using fisheye photography or Tracing Radiation and Architecture of Canopies (TRAC) is 0.76 ± 0.37 and 3.89 ± 2.81, respectively. Besides, the CI measured by TRAC is 0.83 ± 0.09 and its correlation coefficient with LAI is 0.868 (p<0.001). It appears feasible to estimate aboveground storage using ground investigation incorporating remotely sensed data for young plain afforestation stands. Due to the mixed plantation and the difference between growing and non-growing seasons at the sample site, the relationship between aboveground storage, LAI and VI is yet to be developed for individual species and may need to be modified due to seasonal and inter-annual variation.

  15. Optimizing tertiary storage organization and access for spatio-temporal datasets

    NASA Technical Reports Server (NTRS)

    Chen, Ling Tony; Rotem, Doron; Shoshani, Arie; Drach, Bob; Louis, Steve; Keating, Meridith

    1994-01-01

    We address in this paper data management techniques for efficiently retrieving requested subsets of large datasets stored on mass storage devices. This problem represents a major bottleneck that can negate the benefits of fast networks, because the time to access a subset from a large dataset stored on a mass storage system is much greater than the time to transmit that subset over a network. This paper focuses on very large spatial and temporal datasets generated by simulation programs in the area of climate modeling, but the techniques developed can be applied to other applications that deal with large multidimensional datasets. The main requirement we have addressed is the efficient access of subsets of information contained within much larger datasets, for the purpose of analysis and interactive visualization. We have developed data partitioning techniques that partition datasets into 'clusters' based on analysis of data access patterns and storage device characteristics. The goal is to minimize the number of clusters read from mass storage systems when subsets are requested. We emphasize in this paper proposed enhancements to current storage server protocols to permit control over physical placement of data on storage devices. We also discuss in some detail the aspects of the interface between the application programs and the mass storage system, as well as a workbench to help scientists to design the best reorganization of a dataset for anticipated access patterns.
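
    A toy version of the clustering idea helps make it concrete: if a large (time, lat, lon) dataset is partitioned into fixed-shape chunks on tertiary storage, the cost of a subset request can be approximated by the number of chunks it intersects. The shapes and the example request below are arbitrary.

    ```python
    # Count how many stored chunks a hyper-rectangular subset request must read.
    import math

    dataset_shape = (1200, 180, 360)       # e.g. months x lat x lon
    chunk_shape = (12, 45, 90)             # one chunk = 1 year x regional tile

    def chunks_touched(subset_slices):
        counts = []
        for (start, stop), size, chunk in zip(subset_slices, dataset_shape, chunk_shape):
            first = start // chunk
            last = (min(stop, size) - 1) // chunk
            counts.append(last - first + 1)
        return math.prod(counts)

    # A request for one decade over a single regional tile:
    request = [(0, 120), (0, 45), (90, 180)]
    print("chunks read:", chunks_touched(request))   # 10 x 1 x 1 = 10 clusters
    ```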

  16. An object-oriented approach to data display and storage: 3 years experience, 25,000 cases.

    PubMed

    Sainsbury, D A

    1993-11-01

    Object-oriented programming techniques were used to develop computer-based data display and storage systems. These have been operating in the 8 anaesthetising areas of the Adelaide Children's Hospital for 3 years. The analogue and serial outputs from an array of patient monitors are connected to IBM compatible PC-XT computers. The information is displayed on a colour screen as wave-form and trend graphs and in digital format in 'real time'. The trend data is printed simultaneously on a dot matrix printer. This data is also stored for 24 hours on 'hard' disk. The major benefit has been the provision of a single visual focus for all monitored variables. The automatic logging of data has been invaluable in the analysis of critical incidents. The systems were made possible by recent, rapid improvements in computer hardware and software. This paper traces the development of the program and demonstrates the advantages of object-oriented programming techniques.

  17. Use of a thin-section archive and enterprise 3D software for long-term storage of thin-slice CT data sets.

    PubMed

    Meenan, Christopher; Daly, Barry; Toland, Christopher; Nagy, Paul

    2006-01-01

    Rapid advances are changing the technology and applications of multidetector computed tomography (CT) scanners. The major increase in data associated with this new technology, however, breaks most commercial picture archiving and communication system (PACS) architectures by preventing them from delivering data in real time to radiologists and outside clinicians. We proposed a phased model for 3D workflow, installed a thin-slice archive and measured thin-slice data storage over a period of 5 months. A mean of 1,869 CT studies were stored per month, with an average of 643 images per study and a mean total volume of 588 GB/month. We also surveyed 48 radiologists to determine diagnostic use, impressions of thin-slice value, and requirements for retention times. The majority of radiologists thought thin slice was helpful for diagnosis and regularly used the application. Permanent storage of thin slice CT is likely to become best practice and a mission-critical pursuit for the health care enterprise.
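
    A quick consistency check on the figures quoted above (the study and volume counts are taken from the abstract; the per-image size that falls out is an inference, consistent with an uncompressed 512 x 512, 16-bit CT slice):

    ```python
    # Back-of-the-envelope check of the quoted thin-slice storage figures.
    studies_per_month = 1869
    images_per_study = 643
    volume_gb_per_month = 588

    gb_per_study = volume_gb_per_month / studies_per_month
    mb_per_image = gb_per_study * 1024 / images_per_study

    print(f"≈ {gb_per_study:.2f} GB per study, ≈ {mb_per_image:.2f} MB per image")
    # ≈ 0.31 GB per study and ≈ 0.5 MB per image.
    ```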

  18. Weighty data: importance information influences estimated weight of digital information storage devices

    PubMed Central

    Schneider, Iris K.; Parzuchowski, Michal; Wojciszke, Bogdan; Schwarz, Norbert; Koole, Sander L.

    2015-01-01

    Previous work suggests that perceived importance of an object influences estimates of its weight. Specifically, important books were estimated to be heavier than non-important books. However, the experimental set-up of these studies may have suffered from a potential confound and findings may be confined to books only. Addressing this, we investigate the effect of importance on weight estimates by examining whether the importance of information stored on a data storage device (USB-stick or portable hard drive) can alter weight estimates. Results show that people thinking a USB-stick holds important tax information (vs. expired tax information vs. no information) estimate it to be heavier (Experiment 1) compared to people who do not. Similarly, people who are told a portable hard drive holds personally relevant information (vs. irrelevant), also estimate the drive to be heavier (Experiments 2A,B). PMID:25620942

  19. Carbon storage in China's terrestrial ecosystems: A synthesis.

    PubMed

    Xu, Li; Yu, Guirui; He, Nianpeng; Wang, Qiufeng; Gao, Yang; Wen, Ding; Li, Shenggong; Niu, Shuli; Ge, Jianping

    2018-02-12

    It is important to accurately estimate terrestrial ecosystem carbon (C) storage. However, the spatial patterns of C storage and the driving factors remain unclear, owing to lack of data. Here, we collected data from literature published between 2004 and 2014 on C storage in China's terrestrial ecosystems, to explore variation in C storage across different ecosystems and evaluate factors that influence them. We estimated that total C storage was 99.15 ± 8.71 PgC, with 14.60 ± 3.24 PgC in vegetation C (Veg-C) and 84.55 ± 8.09 PgC in soil organic C (SOC) storage. Furthermore, C storage in forest, grassland, wetland, shrub, and cropland ecosystems (excluding vegetation) was 34.08 ± 5.43, 25.69 ± 4.71, 3.62 ± 0.80, 7.42 ± 1.92, and 15.17 ± 2.20 PgC, respectively. In addition to soil nutrients and texture, climate was the main factor regulating the spatial patterns of C storage. Climate influenced the spatial patterns of Veg-C and SOC density via different approaches, Veg-C was mainly positively influenced by mean annual precipitation (MAP), whereas SOC was negatively dependent on mean annual temperature (MAT). This systematic estimate of C storage in China provides new insights about how climate constrains C sequestration, demonstrating the contrasting effects of MAP and MAT on Veg-C and SOC; thus, these parameters should be incorporated into future land management and C sequestration strategies.

  20. 19 CFR 163.5 - Methods for storage of records.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... standard business practice for storage of records include, but are not limited to, machine readable data... 19 CFR § 163.5, Methods for storage of records (Customs Duties; Department of the Treasury (continued); Recordkeeping): (a) Original records...