Sample records for disk storage system

  1. Archive Storage Media Alternatives.

    ERIC Educational Resources Information Center

    Ranade, Sanjay

    1990-01-01

    Reviews requirements for a data archive system and describes storage media alternatives that are currently available. Topics discussed include data storage; data distribution; hierarchical storage architecture, including inline storage, online storage, nearline storage, and offline storage; magnetic disks; optical disks; conventional magnetic…

  2. Mass storage technology in networks

    NASA Astrophysics Data System (ADS)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has improved the performance of optical disk drives as on-line external memories, which now compete with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access files storing multimedia data that requires large capacity, such as archive use and information distribution by ROM disks. Finally, image-coded document file servers for local area network use that employ 130 mm rewritable magneto-optical disk subsystems are demonstrated.

  3. Planning for optical disk technology with digital cartography.

    USGS Publications Warehouse

    Light, D.L.

    1986-01-01

    A major shortfall that still exists in digital systems is the need for very large mass storage capacity. The decade of the 1980s has introduced laser optical disk storage technology, which may be the breakthrough needed for mass storage. This paper addresses system concepts for digital cartography during the transition period. Emphasis will be placed on determining USGS mass storage requirements and introducing laser optical disk technology for handling storage problems for digital data in this decade.-from Author

  4. A high-speed, large-capacity, 'jukebox' optical disk system

    NASA Technical Reports Server (NTRS)

    Ammon, G. J.; Calabria, J. A.; Thomas, D. T.

    1985-01-01

    Two optical disk 'jukebox' mass storage systems which provide access to any data in a store of 10^13 bits (1250 Gbytes) within six seconds have been developed. The optical disk jukebox system is divided into two units, including a hardware/software controller and a disk drive. The controller provides flexibility and adaptability, through a ROM-based microcode-driven data processor and a ROM-based software-driven control processor. The cartridge storage module contains 125 optical disks housed in protective cartridges. Attention is given to a conceptual view of the disk drive unit, the NASA optical disk system, the NASA database management system configuration, the NASA optical disk system interface, and an open systems interconnect reference model.

  5. PLANNING FOR OPTICAL DISK TECHNOLOGY WITH DIGITAL CARTOGRAPHY.

    USGS Publications Warehouse

    Light, Donald L.

    1984-01-01

    Progress in the computer field continues to suggest that the transition from traditional analog mapping systems to digital systems has become a practical possibility. A major shortfall that still exists in digital systems is the need for very large mass storage capacity. The decade of the 1980's has introduced laser optical disk storage technology, which may be the breakthrough needed for mass storage. This paper addresses system concepts for digital cartography during the transition period. Emphasis is placed on determining U. S. Geological Survey mass storage requirements and introducing laser optical disk technology for handling storage problems for digital data in this decade.

  6. Optical Disks Compete with Videotape and Magnetic Storage Media: Part I.

    ERIC Educational Resources Information Center

    Urrows, Henry; Urrows, Elizabeth

    1988-01-01

    Describes the latest technology in videotape cassette systems and other magnetic storage devices and their possible effects on optical data disks. Highlights include Honeywell's Very Large Data Store (VLDS); Exabyte's tape cartridge storage system; standards for tape drives; and Masstor System's videotape cartridge system. (LRW)

  7. Optical Disk for Digital Storage and Retrieval Systems.

    ERIC Educational Resources Information Center

    Rose, Denis A.

    1983-01-01

    Availability of low-cost digital optical disks will revolutionize storage and retrieval systems over next decade. Three major factors will effect this change: availability of disks and controllers at low-cost and in plentiful supply; availability of low-cost and better output means for system users; and more flexible, less expensive communication…

  8. Selected Conference Proceedings from the 1985 Videodisc, Optical Disk, and CD-ROM Conference and Exposition (Philadelphia, PA, December 10-12, 1985).

    ERIC Educational Resources Information Center

    Cerva, John R.; And Others

    1986-01-01

    Eight papers cover: optical storage technology; cross-cultural videodisc design; optical disk technology use at the Library of Congress Research Service and National Library of Medicine; Internal Revenue Service image storage and retrieval system; solving business problems with CD-ROM; a laser disk operating system; and an optical disk for…

  9. Optical Digital Disk Storage: An Application for News Libraries.

    ERIC Educational Resources Information Center

    Crowley, Mary Jo

    1988-01-01

    Describes the technology, equipment, and procedures necessary for converting a historical newspaper clipping collection to optical disk storage. Alternative storage systems--microforms, laser scanners, optical storage--are also reviewed, and the advantages and disadvantages of optical storage are considered. (MES)

  10. Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis

    NASA Astrophysics Data System (ADS)

    Yang, W.; Hanushevsky, A. B.; Mount, R. P.; Atlas Collaboration

    2014-06-01

    User data analysis in high energy physics presents a challenge to spinning-disk based storage systems. The analysis is data intensive, yet reads are small, sparse and cover a large volume of data files. It is also unpredictable due to users' response to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file level cache in front of the spinning disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses several days of data access records to make caching decisions. It can also use information from other sources such as a work-flow management system. We evaluate the performance of the system both in terms of caching and its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.

  11. The performance of disk arrays in shared-memory database machines

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric 'data temperature' as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
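    As an illustrative aside: the record does not define the data temperature metric precisely; a common reading is I/O accesses per second per gigabyte of stored data. A minimal sketch under that assumption (all numbers invented):

```python
# Illustrative sketch only: "data temperature" is taken here to mean
# I/O accesses per second per gigabyte of stored data, a common reading
# of the metric; the record itself does not spell out the definition.

def data_temperature(accesses_per_sec: float, capacity_gb: float) -> float:
    """Return accesses per second per gigabyte for one disk configuration."""
    if capacity_gb <= 0:
        raise ValueError("capacity must be positive")
    return accesses_per_sec / capacity_gb

# Compare a hypothetical array of small-form-factor drives with a
# mirrored pair of large drives serving the same workload.
array_temp = data_temperature(accesses_per_sec=800, capacity_gb=8 * 50)
mirror_temp = data_temperature(accesses_per_sec=800, capacity_gb=2 * 200)
print(f"array:  {array_temp:.2f} accesses/s/GB")
print(f"mirror: {mirror_temp:.2f} accesses/s/GB")
```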

  12. Evaluating the effect of online data compression on the disk cache of a mass storage system

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Yesha, Yelena

    1994-01-01

    A trace-driven simulation of the disk cache of a mass storage system was used to evaluate the effect of an online compression algorithm on various performance measures. Traces from the system at NASA's Center for Computational Sciences were used to run the simulation, and disk cache hit ratios and the numbers of files and bytes migrating to tertiary storage were measured. The measurements were performed for both an LRU and a size-based migration algorithm. In addition to showing the effect of online data compression on the disk cache performance measures, the simulation provided insight into the characteristics of the interactive references, suggesting that hint-based prefetching algorithms are the only alternative for any future improvements to the disk cache hit ratio.
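    As a rough illustration of this kind of trace-driven cache study, the sketch below replays a synthetic access trace against an LRU-managed disk cache and reports the hit ratio with and without a modeled compression ratio. The trace, cache size, and ratio are placeholders, not the NASA traces or algorithms from the record:

```python
from collections import OrderedDict

def lru_hit_ratio(trace, cache_capacity_mb, compression_ratio=1.0):
    """Replay (file_id, size_mb) accesses against an LRU-managed cache.

    compression_ratio < 1.0 models online compression shrinking each
    file before it is cached, so more files fit in the same cache.
    """
    cache = OrderedDict()          # file_id -> cached size (MB)
    used = 0.0
    hits = requests = 0
    for file_id, size_mb in trace:
        requests += 1
        size = size_mb * compression_ratio
        if file_id in cache:
            hits += 1
            cache.move_to_end(file_id)
            continue
        # Miss: evict least recently used files until the new file fits.
        while used + size > cache_capacity_mb and cache:
            _, evicted_size = cache.popitem(last=False)
            used -= evicted_size
        cache[file_id] = size
        used += size
    return hits / requests if requests else 0.0

# Hypothetical trace: a cyclic working set of 20 files plus a one-off scan.
trace = [(i % 20, 10) for i in range(200)] + [(100 + i, 50) for i in range(20)]
print("no compression :", lru_hit_ratio(trace, cache_capacity_mb=150))
print("2:1 compression:", lru_hit_ratio(trace, cache_capacity_mb=150, compression_ratio=0.5))
```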

  13. Jefferson Lab Mass Storage and File Replication Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Bird; Ying Chen; Bryan Hess

    Jefferson Lab has implemented a scalable, distributed, high performance mass storage system - JASMine. The system is entirely implemented in Java, provides access to robotic tape storage and includes disk cache and stage manager components. The disk manager subsystem may be used independently to manage stand-alone disk pools. The system includes a scheduler to provide policy-based access to the storage systems. Security is provided by pluggable authentication modules and is implemented at the network socket level. The tape and disk cache systems have well defined interfaces in order to provide integration with grid-based services. The system is in production and being used to archive 1 TB per day from the experiments, and currently moves over 2 TB per day total. This paper will describe the architecture of JASMine; discuss the rationale for building the system, and present a transparent 3rd party file replication service to move data to collaborating institutes using JASMine, XML, and servlet technology interfacing to grid-based file transfer mechanisms.

  14. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.

  15. Disk storage management for LHCb based on Data Popularity estimator

    NASA Astrophysics Data System (ADS)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
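    The record only summarizes the approach; as a loose illustration of the idea (predict future popularity from past usage, then decide what stays on disk and with how many replicas), here is a toy sketch that substitutes an exponentially weighted average for the paper's regression and time-series models:

```python
def predict_popularity(weekly_accesses, alpha=0.5):
    """Exponentially weighted average of past weekly access counts,
    used here as a stand-in for the paper's machine-learning models."""
    score = 0.0
    for count in weekly_accesses:
        score = alpha * count + (1 - alpha) * score
    return score

def disk_plan(datasets, min_replicas=1, max_replicas=4, keep_threshold=1.0):
    """Return {name: replicas}; 0 means 'remove from disk, keep only the
    tape archive copy'. All thresholds here are arbitrary placeholders."""
    plan = {}
    for name, history in datasets.items():
        popularity = predict_popularity(history)
        if popularity < keep_threshold:
            plan[name] = 0
        else:
            # Scale the replica count with predicted popularity, clamped.
            plan[name] = max(min_replicas, min(max_replicas, round(popularity / 10)))
    return plan

datasets = {                      # hypothetical weekly access histories
    "stripping20/DST": [120, 90, 80, 60],
    "mc2012/ALLSTREAMS": [3, 1, 0, 0],
}
print(disk_plan(datasets))
```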

  16. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  17. Laser beam modeling in optical storage systems

    NASA Technical Reports Server (NTRS)

    Treptau, J. P.; Milster, T. D.; Flagello, D. G.

    1991-01-01

    A computer model has been developed that simulates light propagating through an optical data storage system. A model of a laser beam that originates at a laser diode, propagates through an optical system, interacts with a optical disk, reflects back from the optical disk into the system, and propagates to data and servo detectors is discussed.

  18. Saying goodbye to optical storage technology.

    PubMed

    McLendon, Kelly; Babbitt, Cliff

    2002-08-01

    The days of using optical disk-based mass storage devices for high volume applications like health care document imaging are coming to an end. The price/performance curve for redundant magnetic disks, known as RAID, is now more positive than for optical disks. All types of application systems, across many sectors of the marketplace, are using these newer magnetic technologies, including insurance, banking, and aerospace, as well as health care. The main components of these new storage technologies are RAID and SAN. SAN refers to storage area network, which is a complex mechanism of switches and connections that allows multiple systems to store huge amounts of data securely and safely.

  19. Telemetry data storage systems technology for the Space Station Freedom era

    NASA Technical Reports Server (NTRS)

    Dalton, John T.

    1989-01-01

    This paper examines the requirements and functions of the telemetry-data recording and storage systems, and the data-storage-system technology projected for the Space Station, with particular attention given to the Space Optical Disk Recorder, an on-board storage subsystem based on 160-gigabit erasable optical disk units, each capable of operating at 300 Mbits per second. Consideration is also given to storage systems for ground transport recording, which include systems for data capture, buffering, processing, and delivery on the ground. These can be categorized as first-in-first-out storage, fast random-access storage, and slow-access storage with staging. Based on projected mission manifests and data rates, worst-case requirements were developed for these three storage architecture functions. The results of the analysis are presented.

  20. Recording and reading of information on optical disks

    NASA Astrophysics Data System (ADS)

    Bouwhuis, G.; Braat, J. J. M.

    In the storage of information, related to video programs, in a spiral track on a disk, difficulties arise because the bandwidth for video is much greater than for audio signals. An attractive solution was found in optical storage. The optical noncontact method is free of wear, and allows for fast random access. Initial problems regarding a suitable light source could be overcome with the aid of appropriate laser devices. The basic concepts of optical storage on disks are treated insofar as they are relevant for the optical arrangement. A general description is provided of a video, a digital audio, and a data storage system. Scanning spot microscopy for recording and reading of optical disks is discussed, giving attention to recording of the signal, the readout of optical disks, the readout of digitally encoded signals, and cross talk. Tracking systems are also considered, taking into account the generation of error signals for radial tracking and the generation of focus error signals.

  1. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 2

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Application. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include the following: magnetic disk and tape technologies; optical disk and tape; software storage and file management systems; and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  2. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 3

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the National Space Science Data Center (NSSDC) Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

  3. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  4. A Simulation Model Of A Picture Archival And Communication System

    NASA Astrophysics Data System (ADS)

    D'Silva, Vijay; Perros, Harry; Stockbridge, Chris

    1988-06-01

    A PACS architecture was simulated to quantify its performance. The model consisted of reading stations, acquisition nodes, communication links, a database management system, and a storage system consisting of magnetic and optical disks. Two levels of storage were simulated, a high-speed magnetic disk system for short term storage, and optical disk jukeboxes for long term storage. The communications link was a single bus via which image data were requested and delivered. Real input data to the simulation model were obtained from surveys of radiology procedures (Bowman Gray School of Medicine). From these, the following inputs were calculated: the size of short term storage necessary, the amount of long term storage required, the frequency of access of each store, and the distribution of the number of films requested per diagnosis. The performance measures obtained were the mean retrieval time for an image, mean queue lengths, and the utilization of each device. Parametric analysis was done for the bus speed, the packet size for the communications link, the record size on the magnetic disk, the compression ratio, the influx of new images, the DBMS time, and diagnosis think times. Plots give the optimum values of input speed and device performance sufficient to achieve subsecond image retrieval times.

  5. KEYNOTE ADDRESS: The role of standards in the emerging optical digital data disk storage systems market

    NASA Astrophysics Data System (ADS)

    Bainbridge, Ross C.

    1984-09-01

    The Institute for Computer Sciences and Technology at the National Bureau of Standards is pleased to cooperate with the International Society for Optical Engineering and to join with the other distinguished organizations in cosponsoring this conference on applications of optical digital data disk storage systems.

  6. Attaching IBM-compatible 3380 disks to Cray X-MP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Midlock, J.L.

    1989-01-01

    A method of attaching IBM-compatible 3380 disks directly to a Cray X-MP via the XIOP with a BMC is described. The IBM 3380 disks appear to the UNICOS operating system as DD-29 disks with UNICOS file systems. IBM 3380 disks provide cheap, reliable, large-capacity disk storage. Combined with a small number of high-speed Cray disks, the IBM disks provide for the bulk of the storage for small files and infrequently used files. Cray Research designed the BMC and its supporting software in the XIOP to allow IBM tapes and other devices to be attached to the X-MP. No hardware changes were necessary, and we added less than 2000 lines of code to the XIOP to accomplish this project. This system has been in operation for over eight months. Future enhancements such as the use of a cache controller and attachment to a Y-MP are also described. 1 tab.

  7. Mean PB To Failure - Initial results from a long-term study of disk storage patterns at the RACF

    NASA Astrophysics Data System (ADS)

    Caramarcu, C.; Hollowell, C.; Rao, T.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, S. A.

    2015-12-01

    The RACF (RHIC-ATLAS Computing Facility) has operated a large, multi-purpose dedicated computing facility since the mid-1990’s, serving a worldwide, geographically diverse scientific community that is a major contributor to various HENP projects. A central component of the RACF is the Linux-based worker node cluster that is used for both computing and data storage purposes. It currently has nearly 50,000 computing cores and over 23 PB of storage capacity distributed over 12,000+ (non-SSD) disk drives. The majority of the 12,000+ disk drives provide a cost-effective solution for dCache/XRootD-managed storage, and a key concern is the reliability of this solution over the lifetime of the hardware, particularly as the number of disk drives and the storage capacity of individual drives grow. We report initial results of a long-term study to measure lifetime PB read/written to disk drives in the worker node cluster. We discuss the historical disk drive mortality rate, disk drive manufacturers' published MPTF (Mean PB to Failure) data and how they are correlated to our results. The results help the RACF understand the productivity and reliability of its storage solutions and have implications for other highly-available storage systems (NFS, GPFS, CVMFS, etc) with large I/O requirements.
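    To make the Mean PB to Failure idea concrete, here is a hedged sketch of how such a figure could be estimated from per-drive counters; the field names and numbers are invented, not RACF data:

```python
def mean_pb_to_failure(drives):
    """Estimate mean petabytes transferred per observed failure.

    drives: iterable of dicts with 'tb_read', 'tb_written' and 'failed'.
    This mirrors the usual MTBF construction: total exposure divided by
    the number of observed failures.
    """
    total_pb = sum(d["tb_read"] + d["tb_written"] for d in drives) / 1024.0
    failures = sum(1 for d in drives if d["failed"])
    if failures == 0:
        return float("inf")   # no failures observed yet
    return total_pb / failures

# Hypothetical drive population.
drives = [
    {"tb_read": 150.0, "tb_written": 60.0, "failed": False},
    {"tb_read": 180.0, "tb_written": 75.0, "failed": True},
    {"tb_read": 120.0, "tb_written": 40.0, "failed": False},
]
print(f"estimated MPTF: {mean_pb_to_failure(drives):.3f} PB per failure")
```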

  8. Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Blackwell, Kim; Blasso, Len (Editor); Lipscomb, Ann (Editor)

    1991-01-01

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  9. An Effective Cache Algorithm for Heterogeneous Storage Systems

    PubMed Central

    Li, Yong; Feng, Dan

    2013-01-01

    Modern storage environments are commonly composed of heterogeneous storage devices. However, traditional cache algorithms exhibit performance degradation in heterogeneous storage systems because they were not designed to work with diverse performance characteristics. In this paper, we present a new cache algorithm called HCM for heterogeneous storage systems. The HCM algorithm partitions the cache among the disks and adopts an effective scheme to balance the work across the disks. Furthermore, it applies benefit-cost analysis to choose the best allocation of cache blocks to improve performance. Conducting simulations with a variety of traces and a wide range of cache sizes, our experiments show that HCM significantly outperforms the existing state-of-the-art storage-aware cache algorithms. PMID:24453890
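    The abstract gives only the outline of HCM; the sketch below is a generic benefit-cost cache partitioner that greedily gives each cache block to whichever disk gains the most (expected hit-rate gain times miss penalty). It illustrates the general idea, not the published algorithm:

```python
def partition_cache(total_blocks, disks):
    """Greedy benefit-cost cache partitioning across heterogeneous disks.

    disks: {name: {"miss_penalty_ms": float, "hit_gain": [g1, g2, ...]}}
    where hit_gain[i] is the (diminishing) expected hit-rate gain of
    giving that disk its (i+1)-th cache block. All numbers are made up;
    the real HCM algorithm is only summarized in the abstract above.
    """
    alloc = {name: 0 for name in disks}
    for _ in range(total_blocks):
        best, best_benefit = None, 0.0
        for name, d in disks.items():
            i = alloc[name]
            if i < len(d["hit_gain"]):
                benefit = d["hit_gain"][i] * d["miss_penalty_ms"]
                if benefit > best_benefit:
                    best, best_benefit = name, benefit
        if best is None:
            break                 # no disk benefits from more cache
        alloc[best] += 1
    return alloc

disks = {
    "fast_ssd": {"miss_penalty_ms": 0.2, "hit_gain": [0.10, 0.05, 0.02]},
    "slow_hdd": {"miss_penalty_ms": 8.0, "hit_gain": [0.04, 0.03, 0.02, 0.01]},
}
print(partition_cache(total_blocks=4, disks=disks))
```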

  10. Incorporating Oracle on-line space management with long-term archival technology

    NASA Technical Reports Server (NTRS)

    Moran, Steven M.; Zak, Victor J.

    1996-01-01

    The storage requirements of today's organizations are exploding. As computers continue to escalate in processing power, applications grow in complexity and data files grow in size and in number. As a result, organizations are forced to procure more and more megabytes of storage space. This paper focuses on how to expand the storage capacity of a Very Large Database (VLDB) cost-effectively within an Oracle7 data warehouse system by integrating long term archival storage sub-systems with traditional magnetic media. The Oracle architecture described in this paper was based on an actual proof of concept for a customer looking to store archived data on optical disks yet still have access to this data without user intervention. The customer had a requirement to maintain 10 years worth of data on-line. Data less than a year old still had the potential to be updated and thus will reside on conventional magnetic disks. Data older than a year will be considered archived and will be placed on optical disks. The ability to archive data to optical disk and still have access to that data provides the system a means to retain large amounts of data that is readily accessible yet significantly reduces the cost of total system storage. Therefore, the cost benefits of archival storage devices can be incorporated into the Oracle storage medium and I/O subsystem without losing any of the functionality of transaction processing, yet at the same time providing an organization access to all their data.
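    A minimal sketch of the age-based placement rule quoted above (data under a year old on magnetic disk, older data on optical); only the one-year cutoff comes from the abstract, the rest is illustrative:

```python
from datetime import date, timedelta

def storage_tier(record_date, today=None):
    """Place records newer than one year on magnetic disk and older
    records on the optical archive, mirroring the rule in the abstract."""
    today = today or date.today()
    return "magnetic" if today - record_date < timedelta(days=365) else "optical"

print(storage_tier(date(2024, 11, 1), today=date(2025, 6, 1)))   # magnetic
print(storage_tier(date(2020, 3, 15), today=date(2025, 6, 1)))   # optical
```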

  11. Kodak Optical Disk and Microfilm Technologies Carve Niches in Specific Applications.

    ERIC Educational Resources Information Center

    Gallenberger, John; Batterton, John

    1989-01-01

    Describes the Eastman Kodak Company's microfilm and optical disk technologies and their applications. Topics discussed include WORM technology; retrieval needs and cost effective archival storage needs; engineering applications; jukeboxes; optical storage options; systems for use with mainframes and microcomputers; and possible future…

  12. Electron trapping data storage system and applications

    NASA Technical Reports Server (NTRS)

    Brower, Daniel; Earman, Allen; Chaffin, M. H.

    1993-01-01

    The advent of digital information storage and retrieval has led to explosive growth in data transmission techniques, data compression alternatives, and the need for high capacity random access data storage. Limitations in data storage technologies are constraining the utilization of digitally based systems. New storage technologies will be required which can provide higher data capacities and faster transfer rates in a more compact format. Magnetic disk/tape and current optical data storage technologies do not provide these higher performance requirements for all digital data applications. A new technology developed at the Optex Corporation outperforms all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media is capable of storing as much as 14 gigabytes of uncompressed data on a single, double-sided 5¼ inch disk with a data transfer rate of up to 12 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out 100 percent photonically, no heating of the recording media is required. Therefore, the storage media suffers no deleterious effects from repeated Write/Read/Erase cycling.

  13. Storage Media for Microcomputers.

    ERIC Educational Resources Information Center

    Trautman, Rodes

    1983-01-01

    Reviews computer storage devices designed to provide additional memory for microcomputers--chips, floppy disks, hard disks, optical disks--and describes how secondary storage is used (file transfer, formatting, ingredients of incompatibility); disk/controller/software triplet; magnetic tape backup; storage volatility; disk emulator; and…

  14. Fast disk array for image storage

    NASA Astrophysics Data System (ADS)

    Feng, Dan; Zhu, Zhichun; Jin, Hai; Zhang, Jiangling

    1997-01-01

    A fast disk array is designed for large continuous image storage. It includes a high-speed data path architecture and a scheme for striping and organizing data on the disk array. The high-speed data path, which is constructed from two dual-port RAMs and some control circuitry, is configured to transfer data between a host system and a plurality of disk drives. The bandwidth can be more than 100 MB/s if the data path is based on PCI (Peripheral Component Interconnect). The organization of data stored on the disk array is similar to RAID 4. Data are striped across a plurality of disks, and each striping unit is equal to a track. I/O instructions are performed in parallel on the disk drives. An independent disk is used to store the parity information in the fast disk array architecture. By placing the parity generation circuit directly on the SCSI (or SCSI 2) bus, the parity information can be generated on the fly. This has little effect on data being written in parallel to the other disks. The fast disk array architecture designed in this paper can meet the demands of image storage.
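    The RAID 4-style scheme described here reduces to a bytewise XOR of the data stripes onto a dedicated parity disk; a small sketch with the track-sized striping units shrunk to a few bytes for readability:

```python
def xor_parity(stripes):
    """Compute the dedicated-disk parity block as the bytewise XOR of
    the data stripes (RAID 4 style). All stripes must be equal length."""
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            parity[i] ^= b
    return bytes(parity)

# Toy "track-sized" striping units, shrunk to 8 bytes each.
data_disks = [b"\x01\x02\x03\x04\x05\x06\x07\x08",
              b"\x10\x20\x30\x40\x50\x60\x70\x80",
              b"\xff\x00\xff\x00\xff\x00\xff\x00"]
parity_disk = xor_parity(data_disks)
print(parity_disk.hex())
```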

  15. SAM-FS: LSC's New Solaris-Based Storage Management Product

    NASA Technical Reports Server (NTRS)

    Angell, Kent

    1996-01-01

    SAM-FS is a full featured hierarchical storage management (HSM) device that operates as a file system on Solaris-based machines. The SAM-FS file system provides the user with all of the standard UNIX system utilities and calls, and adds some new commands, i.e. archive, release, stage, sls, sfind, and a family of maintenance commands. The system also offers enhancements such as high performance virtual disk read and write, control of the disk through an extent array, and the ability to dynamically allocate block size. SAM-FS provides 'archive sets' which are groupings of data to be copied to secondary storage. In practice, as soon as a file is written to disk, SAM-FS will make copies onto secondary media. SAM-FS is a scalable storage management system. The system can manage millions of files per system, though this is limited today by the speed of UNIX and its utilities. In the future, a new search algorithm will be implemented that will remove logical and performance restrictions on the number of files managed.

  16. Striped tertiary storage arrays

    NASA Technical Reports Server (NTRS)

    Drapeau, Ann L.

    1993-01-01

    Data striping is a technique for increasing the throughput and reducing the response time of large accesses to a storage system. In striped magnetic or optical disk arrays, a single file is striped or interleaved across several disks; in a striped tape system, files are interleaved across tape cartridges. Because a striped file can be accessed by several disk drives or tape recorders in parallel, the sustained bandwidth to the file is greater than in non-striped systems, where accesses to the file are restricted to a single device. It is argued that applying striping to tertiary storage systems will provide needed performance and reliability benefits. The performance benefits of striping for applications using large tertiary storage systems are discussed. It will introduce commonly available tape drives and libraries, and discuss their performance limitations, especially focusing on the long latency of tape accesses. This section will also describe an event-driven tertiary storage array simulator that is being used to understand the best ways of configuring these storage arrays. The reliability problems of magnetic tape devices are discussed, and plans for modeling the overall reliability of striped tertiary storage arrays to identify the amount of error correction required are described. Finally, work being done by other members of the Sequoia group to address access latency, optimize tertiary storage arrays that perform mostly writes, and provide compression is discussed.

  17. Integrating new Storage Technologies into EOS

    NASA Astrophysics Data System (ADS)

    Peters, Andreas J.; van der Ster, Dan C.; Rocha, Joaquim; Lensing, Paul

    2015-12-01

    The EOS[1] storage software was designed to cover CERN disk-only storage use cases in the medium term, trading scalability against latency. To cover and prepare for long-term requirements, the CERN IT data and storage services group (DSS) is actively conducting R&D and open source contributions to experiment with next-generation storage software based on CEPH[3] and Ethernet-enabled disk drives. CEPH provides a scale-out object storage system RADOS and additionally various optional high-level services like S3 gateway, RADOS block devices and a POSIX compliant file system CephFS. The acquisition of CEPH by Red Hat underlines the promising role of CEPH as the open source storage platform of the future. CERN IT is running a CEPH service in the context of OpenStack on a moderate scale of 1 PB replicated storage. Building a 100+PB storage system based on CEPH will require software and hardware tuning. It is of capital importance to demonstrate the feasibility and possibly iron out bottlenecks and blocking issues beforehand. The main idea behind this R&D is to leverage and contribute to existing building blocks in the CEPH storage stack and implement a few CERN specific requirements in a thin, customisable storage layer. A second research topic is the integration of Ethernet-enabled disks. This paper introduces various ongoing open source developments, their status and applicability.

  18. Disk Memories: What You Should Know before You Buy Them.

    ERIC Educational Resources Information Center

    Bursky, Dave

    1981-01-01

    Explains the basic features of floppy disk and hard disk computer storage systems and the purchasing decisions which must be made, particularly in relation to certain popular microcomputers. A disk vendors directory is included. Journal availability: Hayden Publishing Company, 50 Essex Street, Rochelle Park, NJ 07662. (SJL)

  19. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PC's are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, by using standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck process. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost, while maintaining high-I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree software will keep track of all files in the system, will automatically migrate the lesser used files to archive media, and will stage the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost the system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., realtime data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).

  20. RAMA: A file system for massively parallel computers

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.

  1. Sawmill: A Logging File System for a High-Performance RAID Disk Array

    DTIC Science & Technology

    1995-01-01

    from limiting disk performance, new controller architectures connect the disks directly to the network so that data movement bypasses the file server...These developments raise two questions for file systems: how to get the best performance from a RAID, and how to use such a controller architecture ...the RAID-II storage system; this architecture provides a fast data path that moves data rapidly among the disks, high-speed controller memory, and the

  2. Optical Disk Technology and Information.

    ERIC Educational Resources Information Center

    Goldstein, Charles M.

    1982-01-01

    Provides basic information on videodisks and potential applications, including inexpensive online storage, random access graphics to complement online information systems, hybrid network architectures, office automation systems, and archival storage. (JN)

  3. Electron trapping optical data storage system and applications

    NASA Technical Reports Server (NTRS)

    Brower, Daniel; Earman, Allen; Chaffin, M. H.

    1993-01-01

    A new technology developed at Optex Corporation outperforms all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media stores 14 gigabytes of uncompressed data on a single, double-sided 130 mm disk with a data transfer rate of up to 120 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out photonically, no heating of the recording media is required. Therefore, the storage media suffers no deleterious effects from repeated W/R/E cycling. This rewritable data storage technology has been developed for use as a basis for numerous data storage products. Industries that can benefit from the ETOM data storage technologies include: satellite data and information systems, broadcasting, video distribution, image processing and enhancement, and telecommunications. Products developed for these industries are well suited for the demanding store-and-forward buffer systems, data storage, and digital video systems needed for these applications.

  4. Large Format Multifunction 2-Terabyte Optical Disk Storage System

    NASA Technical Reports Server (NTRS)

    Kaiser, David R.; Brucker, Charles F.; Gage, Edward C.; Hatwar, T. K.; Simmons, George O.

    1996-01-01

    The Kodak Digital Science OD System 2000E automated disk library (ADL) base module and write-once drive are being developed as the next generation commercial product to the currently available System 2000 ADL. Under government sponsorship with the Air Force's Rome Laboratory, Kodak is developing magneto-optic (M-O) subsystems compatible with the Kodak Digital Science ODW25 drive architecture, which will result in a multifunction (MF) drive capable of reading and writing 25 gigabyte (GB) WORM media and 15 GB erasable media. In an OD System 2000E ADL configuration with 4 MF drives and 100 total disks with a 50% ratio of WORM and M-O media, 2.0 terabytes (TB) of versatile nearline mass storage is available.

  5. Managing People's Data

    NASA Technical Reports Server (NTRS)

    Le, Diana; Cooper, David M. (Technical Monitor)

    1994-01-01

    Just imagine a mass storage system that consists of a machine with 2 CPUs, 1 Gigabyte (GB) of memory, 400 GB of disk space, 16800 cartridge tapes in the automated tape silos, 88,000 tapes located in the vault, and the software to manage the system. This system is designed to be a data repository; it will always have disk space to store all the incoming data. Currently 9.14 GB of new data per day enters the system, with this rate doubling each year. To assure there is always disk space available for new data, the system has to move data from the expensive disk to a much less expensive medium such as the 3480 cartridge tapes. Once the data is archived to tape, it should be able to move back to disk when someone wants to access it, and the data movement should be transparent to the user. Now imagine all the tasks that a system administrator must perform to keep this system running 24 hours a day, 7 days a week. Since the filesystem maintains the illusion of unlimited disk space, data that comes to the system must get moved to tapes in an efficient manner. This paper will describe the mass storage system running at the Numerical Aerodynamic Simulation (NAS) facility at NASA Ames Research Center in both software and hardware aspects, then it will describe all of the tasks the system administrator has to perform on this system.
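    As an illustrative aside, here is a toy disk-to-tape migration policy in the spirit described above: move the least-recently-accessed files to tape until a free-space target is met. The policy details and numbers are assumptions, not those of the NAS system:

```python
def select_for_migration(files, free_bytes, target_free_bytes):
    """Pick least-recently-accessed files to migrate from disk to tape
    until the free-space target is met; a toy stand-in for the real
    migration policy, which the record does not detail."""
    to_migrate = []
    # Consider files oldest-first by last access time.
    for name, (last_access, size) in sorted(files.items(), key=lambda kv: kv[1][0]):
        if free_bytes >= target_free_bytes:
            break
        to_migrate.append(name)
        free_bytes += size
    return to_migrate

files = {                                 # name -> (fake epoch seconds, bytes)
    "run42.dat": (1_000, 40 * 2**30),
    "run43.dat": (5_000, 40 * 2**30),
    "notes.txt": (9_000, 2**20),
}
print(select_for_migration(files, free_bytes=10 * 2**30, target_free_bytes=60 * 2**30))
```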

  6. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
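    To complement the parity-generation sketch earlier in this listing, here is the recovery side of N+1 parity that the tutorial describes: a failed disk's contents are the XOR of the parity disk and the surviving data disks. A toy sketch with tiny blocks:

```python
def reconstruct_failed(surviving_blocks, parity_block):
    """Rebuild a failed disk's block under N+1 parity: XOR together the
    parity block and the corresponding blocks of all surviving disks."""
    rebuilt = bytearray(parity_block)
    for block in surviving_blocks:
        for i, b in enumerate(block):
            rebuilt[i] ^= b
    return bytes(rebuilt)

# Toy 4+1 array: four data blocks and their XOR parity.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*data))

lost = data[2]                                    # pretend disk 2 failed
survivors = data[:2] + data[3:]
assert reconstruct_failed(survivors, parity) == lost
print("reconstructed:", reconstruct_failed(survivors, parity))
```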

  7. Optical storage media data integrity studies

    NASA Technical Reports Server (NTRS)

    Podio, Fernando L.

    1994-01-01

    Optical disk-based information systems are being used in private industry and many Federal Government agencies for on-line and long-term storage of large quantities of data. The storage devices that are part of these systems are designed with powerful, but not unlimited, media error correction capacities. The integrity of data stored on optical disks does not depend only on the life expectancy specifications for the medium. Different factors, including handling and storage conditions, may result in an increase in the size and frequency of medium errors. Monitoring the potential data degradation is crucial, especially for long term applications. Efforts are being made by the Association for Information and Image Management Technical Committee C21, Storage Devices and Applications, to specify methods for monitoring and reporting to the user medium errors detected by the storage device while writing, reading or verifying the data stored in that medium. The Computer Systems Laboratory (CSL) of the National Institute of Standards and Technology (NIST) has a leadership role in the development of these standard techniques. In addition, CSL is researching other data integrity issues, including the investigation of error-resilient compression algorithms. NIST has conducted care and handling experiments on optical disk media with the objective of identifying possible causes of degradation. NIST work in data integrity and related standards activities is described.

  8. Wide-area-distributed storage system for a multimedia database

    NASA Astrophysics Data System (ADS)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

    We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device which includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in a node are connected to a computer using fiber optic cables and communicate using Fibre Channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe our proposed system and the prototype used for testing. We then discuss its performance; i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.

  9. Performance of redundant disk array organizations in transaction processing environments

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    A performance evaluation is conducted for two redundant disk-array organizations in a transaction-processing environment, relative to the performance of both mirrored disk organizations and organizations using neither striping nor redundancy. The proposed parity-striping alternative to striping with rotated parity is shown to furnish rapid recovery from failure at the same low storage cost without interleaving the data over multiple disks. Both noncached systems and systems using a nonvolatile cache in the controller are considered.

  10. RALPH: An online computer program for acquisition and reduction of pulse height data

    NASA Technical Reports Server (NTRS)

    Davies, R. C.; Clark, R. S.; Keith, J. E.

    1973-01-01

    A background/foreground data acquisition and analysis system incorporating a high level control language was developed for acquiring both singles and dual parameter coincidence data from scintillation detectors at the Radiation Counting Laboratory at the NASA Manned Spacecraft Center in Houston, Texas. The system supports acquisition of gamma ray spectra in a 256 x 256 coincidence matrix (utilizing disk storage) and simultaneous operation of any of several background support and data analysis functions. In addition to special instruments and interfaces, the hardware consists of a PDP-9 with 24K core memory, 256K words of disk storage, and Dectape and Magtape bulk storage.

  11. Redundant Disk Arrays in Transaction Processing Systems. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine Nagib

    1994-01-01

    We address various issues dealing with the use of disk arrays in transaction processing environments. We look at the problem of transaction undo recovery and propose a scheme for using the redundancy in disk arrays to support undo recovery. The scheme uses twin-page storage for the parity information in the array. It speeds up transaction processing by eliminating the need for undo logging for most transactions. The use of redundant arrays of distributed disks to provide recovery from disasters as well as temporary site failures and disk crashes is also studied. We investigate the problem of assigning the sites of a distributed storage system to redundant arrays in such a way that the cost of maintaining the redundant parity information is minimized. Heuristic algorithms for solving the site partitioning problem are proposed and their performance is evaluated using simulation. We also develop a heuristic for which an upper bound on the deviation from the optimal solution can be established.

  12. NSSDC activities with 12-inch optical disk drives

    NASA Technical Reports Server (NTRS)

    Lowrey, Barbara E.; Lopez-Swafford, Brian

    1986-01-01

    The development status of optical-disk data transfer and storage technology at the National Space Science Data Center (NSSDC) is surveyed. The aim of the R&D program is to facilitate the exchange of large volumes of data. Current efforts focus on a 12-inch 1-Gbyte write-once/read-many disk and a disk drive which interfaces with VAX/VMS computer systems. The history of disk development at NSSDC is traced; the results of integration and performance tests are summarized; the operating principles of the 12-inch system are explained and illustrated with diagrams; and the need for greater standardization is indicated.

  13. DPM: Future Proof Storage

    NASA Astrophysics Data System (ADS)

    Alvarez, Alejandro; Beche, Alexandre; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo

    2012-12-01

    The Disk Pool Manager (DPM) is a lightweight solution for grid-enabled disk storage management. Operated at more than 240 sites, it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, high-performance data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols - namely the additional work required to provide multi-stream transfers for high performance wide area access, support for third party copies, credential delegation or the required changes in the experiment and fabric management frameworks and tools. We describe new functionality that has been added to ease system administration, such as different filesystem weights and a faster disk drain, and new configuration and monitoring solutions based on the industry standards Puppet and Nagios. Finally, we explain some of the internal changes we had to make in the DPM architecture to better handle the additional load from the analysis use cases.

  14. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructure, resulting in more complex tool development to encompass all possible storage architectures used for the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) is an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures seamless integration of all the different storage architectures, ranging from standard block-based POSIX-compliant storage disks, to object-based architectures such as the S3-compliant HGST Active Archive System, and Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools to utilize such new systems. Most file systems are designed as a single application responsible for using metadata to organize the data into a tree, for determining where data is stored, and for providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.

  15. Online performance evaluation of RAID 5 using CPU utilization

    NASA Astrophysics Data System (ADS)

    Jin, Hai; Yang, Hua; Zhang, Jiangling

    1998-09-01

    Redundant arrays of independent disks (RAID) technology is an efficient way to solve the bottleneck problem between CPU processing ability and the I/O subsystem. From the system point of view, the most important metric of on-line performance is CPU utilization. This paper first presents a way to calculate the CPU utilization of a system connected to a RAID level 5 subsystem using a statistical averaging method. The simulation results for the CPU utilization of a system connected to a RAID level 5 subsystem show that using multiple disks as an array to access data in parallel is an efficient way to enhance the on-line performance of a disk storage system. Using high-end disk drives to compose the disk array is the key to enhancing the on-line performance of the system.
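
    The effect described here can be illustrated with a toy back-of-the-envelope model (this is not the paper's statistical method, and the timings below are invented for illustration): if a request alternates between a CPU phase and an I/O phase, and striping across n disks shrinks the I/O phase, CPU utilization rises with the number of disks.

      # Toy model (not the paper's method): CPU utilization of a system that
      # alternates between t_cpu of computation and an I/O phase whose duration
      # shrinks as data is striped across n disks accessed in parallel.

      def cpu_utilization(t_cpu_ms: float, t_io_ms: float, n_disks: int) -> float:
          t_io_effective = t_io_ms / n_disks   # idealized parallel striping
          return t_cpu_ms / (t_cpu_ms + t_io_effective)

      for n in (1, 2, 4, 8):
          print(n, round(cpu_utilization(t_cpu_ms=2.0, t_io_ms=10.0, n_disks=n), 3))
      # 1 disk -> 0.167, 8 disks -> 0.615: more parallel disks keep the CPU busier.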

  16. An Optical Disk-Based Information Retrieval System.

    ERIC Educational Resources Information Center

    Bender, Avi

    1988-01-01

    Discusses a pilot project by the Nuclear Regulatory Commission to apply optical disk technology to the storage and retrieval of documents related to its high level waste management program. Components and features of the microcomputer-based system which provides full-text and image access to documents are described. A sample search is included.…

  17. Set processing in a network environment. [data bases and magnetic disks and tapes

    NASA Technical Reports Server (NTRS)

    Hardgrave, W. T.

    1975-01-01

    A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.

  18. ZFS on RBODs - Leveraging RAID Controllers for Metrics and Enclosure Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stearman, D. M.

    2015-03-30

    Traditionally, the Lustre file system has relied on the ldiskfs file system with reliable RAID (Redundant Array of Independent Disks) storage underneath. As of Lustre 2.4, ZFS was added as a backend file system, with built-in software RAID, thereby removing the need for expensive RAID controllers. ZFS was designed to work with JBOD (Just a Bunch Of Disks) storage enclosures under the Solaris Operating System, which provided a rich device management system. Long time users of the Lustre file system have relied on the RAID controllers to provide metrics and enclosure monitoring and management services, with rich APIs and command line interfaces. This paper will study a hybrid approach using an advanced full featured RAID enclosure which is presented to the host as a JBOD. This RBOD (RAIDed Bunch Of Disks) allows ZFS to do the RAID protection and error correction, while the RAID controller handles management of the disks and monitors the enclosure. It was hoped that the value of the RAID controller features would offset the additional cost, and that performance would not suffer in this mode. The test results revealed that the hybrid RBOD approach did suffer reduced performance.

  19. SODR Memory Control Buffer Control ASIC

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.

    1994-01-01

    The Spacecraft Optical Disk Recorder (SODR) is a state of the art mass storage system for future NASA missions requiring high transmission rates and a large capacity storage system. This report covers the design and development of an SODR memory buffer control application-specific integrated circuit (ASIC). The memory buffer control ASIC has two primary functions: (1) buffering data to prevent loss of data during disk access times, and (2) converting data formats from a high performance parallel interface format to a small computer systems interface format. Ten 144-pin, 50 MHz CMOS ASICs were designed, fabricated and tested to implement the memory buffer control function.

  20. Data storage for managing the health enterprise and achieving business continuity.

    PubMed

    Hinegardner, Sam

    2003-01-01

    As organizations move away from a silo mentality to a vision of enterprise-level information, more healthcare IT departments are rejecting the idea of information storage as an isolated, system-by-system solution. IT executives want storage solutions that act as a strategic element of an IT infrastructure, centralizing storage management activities to effectively reduce operational overhead and costs. This article focuses on three areas of enterprise storage: tape, disk, and disaster avoidance.

  1. Advanced optical disk storage technology

    NASA Technical Reports Server (NTRS)

    Haritatos, Fred N.

    1996-01-01

    There is a growing need within the Air Force for more and better data storage solutions. Rome Laboratory, the Air Force's Center of Excellence for C3I technology, has sponsored the development of a number of operational prototypes to deal with this growing problem. This paper will briefly summarize the various prototype developments with examples of full mil-spec and best commercial practice. These prototypes have successfully operated under severe space, airborne and tactical field environments. From a technical perspective these prototypes have included rewritable optical media ranging from a 5.25-inch diameter format up to the 14-inch diameter disk format. Implementations include an airborne sensor recorder, a deployable optical jukebox and a parallel array of optical disk drives. They range from stand-alone peripheral devices to centralized, hierarchical storage management systems for distributed data processing applications.

  2. PCM-Based Durable Write Cache for Fast Disk I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick

    2012-01-01

    Flash based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage systems. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.
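
    The abstract does not spell out HALO's internals; the sketch below only illustrates the general idea of a hash-indexed write cache, with all class and method names invented for illustration: random writes are absorbed into a cache keyed by block address, and dirty blocks are later destaged to the backing disk in ascending address order so the disk sees near-sequential I/O.

      # Illustrative hash-indexed write cache that absorbs random writes and
      # destages them to the backing disk in ascending block order. Generic
      # sketch only; not the HALO algorithm from the paper.

      class HashWriteCache:
          def __init__(self, backing_store, capacity_blocks: int):
              self.backing = backing_store      # object exposing write_block(lba, data)
              self.capacity = capacity_blocks
              self.dirty = {}                   # lba -> data, hash-indexed dict

          def write(self, lba: int, data: bytes):
              self.dirty[lba] = data            # absorb the random write in the cache
              if len(self.dirty) >= self.capacity:
                  self.destage()

          def destage(self):
              # Flush dirty blocks in LBA order so the disk sees near-sequential I/O.
              for lba in sorted(self.dirty):
                  self.backing.write_block(lba, self.dirty[lba])
              self.dirty.clear()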

  3. Optical Digital Image Storage System

    DTIC Science & Technology

    1991-03-18

    [Only fragments of the report were extracted.] Figures are credited to the Sony Corporation, which supplied the optical disk media used in the ODISS project; a master negative copy of the microfilm was retained. During the ODISS project, several CMSR files stored on the Sony optical disks were read several thousand times with no loss of information.

  4. Records Management with Optical Disk Technology: Now Is the Time.

    ERIC Educational Resources Information Center

    Retherford, April; Williams, W. Wes

    1991-01-01

    The University of Kansas record management system using optical disk storage in a network environment and the selection process used to meet existing hardware and budgeting requirements are described. Viability of the technology, document legality, and difficulties encountered during implementation are discussed. (Author/MSE)

  5. The Design and Evolution of Jefferson Lab's Jasmine Mass Storage System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryan Hess; M. Andrew Kowalski; Michael Haddox-Schatz

    We describe the Jasmine mass storage system, in operation since 2001. Jasmine has scaled to meet the challenges of grid applications, petabyte class storage, and hundreds of MB/sec throughput using commodity hardware, Java technologies, and a small but focused development team. The evolution of the integrated disk cache system, which provides a managed online subset of the tape contents, is examined in detail. We describe how the storage system has grown to meet the special needs of the batch farm, grid clients, and new performance demands.

  6. Basics of Videodisc and Optical Disk Technology.

    ERIC Educational Resources Information Center

    Paris, Judith

    1983-01-01

    Outlines basic videodisc and optical disk technology describing both optical and capacitance videodisc technology. Optical disk technology is defined as a mass digital image and data storage device and briefly compared with other information storage media including magnetic tape and microforms. The future of videodisc and optical disk is…

  7. From Physics to industry: EOS outside HEP

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Lamanna, M.

    2017-10-01

    In the competitive market for large-scale storage solutions, EOS, the current main disk storage system at CERN, has been showing its excellence in the multi-Petabyte, high-concurrency regime. It has also shown disruptive potential in powering sync-and-share services and in supporting innovative analysis environments alongside the storage of LHC data. EOS has also generated interest as a generic storage solution, ranging from university systems to very large installations for non-HEP applications.

  8. A Comprehensive Study on Energy Efficiency and Performance of Flash-based SSD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seon-Yeon; Kim, Youngjae; Urgaonkar, Bhuvan

    2011-01-01

    Use of flash memory as a storage medium is becoming popular in diverse computing environments. However, because of differences in interface, flash memory requires a hard-disk-emulation layer, called the FTL (flash translation layer). Although the FTL enables flash memory storages to replace conventional hard disks, it induces significant computational and space overhead. Despite the low power consumption of flash memory, this overhead leads to significant power consumption in an overall storage system. In this paper, we analyze the characteristics of flash-based storage devices from the viewpoint of power consumption and energy efficiency by using various methodologies. First, we utilize simulation to investigate the interior operation of flash-based storages. Subsequently, we measure the performance and energy efficiency of commodity flash-based SSDs by using microbenchmarks to identify the block-device level characteristics and macrobenchmarks to reveal their filesystem level characteristics.

  9. A Disk-Based System for Producing and Distributing Science Products from MODIS

    NASA Technical Reports Server (NTRS)

    Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael

    2007-01-01

    Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.

  10. Libraries and Desktop Storage Options: Results of a Web-Based Survey.

    ERIC Educational Resources Information Center

    Hendricks, Arthur; Wang, Jian

    2002-01-01

    Reports the results of a Web-based survey that investigated what plans, if any, librarians have for dealing with the expected obsolescence of the floppy disk and still retain effective library service. Highlights include data storage options, including compact disks, zip disks, and networked storage products; and a copy of the Web survey.…

  11. Towards more stable operation of the Tokyo Tier2 center

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Mashimo, T.; Matsui, N.; Sakamoto, H.; Ueda, I.

    2014-06-01

    The Tokyo Tier2 center, which is located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation with WLCG started in 2007, after several years of development beginning in 2002. In December 2012, we replaced almost all hardware in the third system upgrade to deal with analysis of the further growing data of the ATLAS experiment. The number of CPU cores was increased by a factor of two (9984 cores in total), and the performance of each CPU core improved by 20% according to the HEPSPEC06 benchmark test at 32-bit compile mode. The score is estimated as 18.03 (SL6) per core using the Intel Xeon E5-2680 at 2.70 GHz. Since all worker nodes have a 16-CPU-core configuration, we deployed 624 blade servers in total. They are connected to 6.7 PB of disk storage with a non-blocking 10 Gbps internal network backbone using two central network switches (NetIron MLXe-32). The disk storage consists of 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0) served by an equal number of 1U file servers with 8G-FC connections to maximize file transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage had already been deployed for WLCG. Currently, the remaining non-grid resources, both CPUs and disk storage, are used as dedicated resources for data analysis by the ATLAS Japan collaborators. Since all hardware in the non-grid resources has the same architecture as the Tier2 resources, it can be migrated as extra Tier2 resources on demand of the ATLAS experiment in the future. In addition to the upgrade of computing resources, we expect improved connectivity on the wide area network. Thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the existing two 10 Gbps lines (Tokyo to New York and Tokyo to Los Angeles). The new line will be connected to LHCONE to further improve connectivity. In this circumstance, we are working toward further stable operation. For instance, we have newly introduced GPFS (IBM) for the non-grid disk storage, while the Disk Pool Manager (DPM) continues to be used as the Tier2 disk storage from the previous system. Since the number of files stored in a DPM pool will grow with the total amount of data, developing a stable database configuration is one of the crucial issues, as is scalability. We have started studies on the performance of asynchronous database replication so that we can take daily full backups. In this report, we introduce several improvements in the performance and stability of our new system and the possibility of further improving local I/O performance in the multi-core worker nodes. We also present the status of wide area network connectivity from Japan to the US and/or EU via LHCONE.

  12. Mass storage at NSA

    NASA Technical Reports Server (NTRS)

    Shields, Michael F.

    1993-01-01

    The need to manage large amounts of data on robotically controlled devices has been critical to the mission of this Agency for many years. In many respects this Agency has helped pioneer, with their industry counterparts, the development of a number of products long before these systems became commercially available. Numerous attempts have been made to field both robotically controlled tape and optical disk technology and systems to satisfy our tertiary storage needs. Custom developed products were architected, designed, and developed without vendor partners over the past two decades to field workable systems to handle our ever increasing storage requirements. Many of the attendees of this symposium are familiar with some of the older products, such as: the Braegen Automated Tape Libraries (ATL's), the IBM 3850, the Ampex TeraStore, just to name a few. In addition, we embarked on an in-house development of a shared disk input/output support processor to manage our ever increasing tape storage needs. For all intents and purposes, this system was a file server by current definitions which used CDC Cyber computers as the control processors. It served us well and was just recently removed from production usage.

  13. Facing the Limitations of Electronic Document Handling.

    ERIC Educational Resources Information Center

    Moralee, Dennis

    1985-01-01

    This essay addresses problems associated with technology used in the handling of high-resolution visual images in electronic document delivery. Highlights include visual fidelity, laser-driven optical disk storage, electronics versus micrographics for document storage, videomicrographics, and system configurations and peripherals. (EJS)

  14. Spacecraft optical disk recorder memory buffer control

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.

    1993-01-01

    This paper discusses the research completed under the NASA-ASEE summer faculty fellowship program. The project involves development of an Application Specific Integrated Circuit (ASIC) to be used as a Memory Buffer Controller (MBC) in the Spacecraft Optical Disk System (SODR). The SODR system has demanding capacity and data rate specifications requiring specialized electronics to meet processing demands. The system is being designed to support Gigabit transfer rates with Terabit storage capability. The complete SODR system is designed to exceed the capability of all existing mass storage systems today. The ASIC development for SODR consists of developing a 144-pin CMOS device to perform format conversion and data buffering. The final simulations of the MBC were completed during this summer's NASA-ASEE fellowship along with design preparations for fabrication to be performed by an ASIC manufacturer.

  15. Pooling the resources of the CMS Tier-1 sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apyan, A.; Badillo, J.; Cruz, J. Diaz

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  16. Database recovery using redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1992-01-01

    Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper a method is proposed for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, it is shown that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.

  17. Recovery issues in databases using redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.

  18. Performance evaluation of redundant disk array support for transaction recovery

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.

    1991-01-01

    Redundant disk arrays provide a way of achieving rapid recovery from media failures with a relatively low storage cost for large scale data systems requiring high availability. Here, we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.

  19. Disk storage at CERN

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Cano, E.; Chan, B.; Espinal, X.; Fiorot, A.; González Labrador, H.; Iven, J.; Lamanna, M.; Lo Presti, G.; Mościcki, JT; Peters, AJ; Ponce, S.; Rousseau, H.; van der Ster, D.

    2015-12-01

    CERN IT DSS operates the main storage resources for data taking and physics analysis, mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses the two CERN Tier0 centres (Meyrin and Wigner) with a 50:50 ratio. IT DSS also provides sizeable on-demand resources for IT services, most notably OpenStack and NFS-based clients: this is provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We will describe our operational experience and recent changes to these systems, with special emphasis on the present usage for LHC data taking and the convergence to commodity hardware (nodes with 200 TB each, with optional SSD) shared across all services. We also describe our experience in coupling commodity and home-grown solutions (e.g. CERNBox integration in EOS, Ceph disk pools for AFS, CASTOR and NFS) and finally the future evolution of these systems for WLCG and beyond.

  20. Ability of Shiga Toxin-Producing Escherichia coli and Salmonella spp. To Survive in a Desiccation Model System and in Dry Foods

    PubMed Central

    Hiramatsu, Reiji; Matsumoto, Masakado; Sakae, Kenji; Miyazaki, Yutaka

    2005-01-01

    In order to determine desiccation tolerances of bacterial strains, the survival of 58 diarrheagenic strains (18 salmonellae, 35 Shiga toxin-producing Escherichia coli [STEC], and 5 shigellae) and of 15 nonpathogenic E. coli strains was determined after drying at 35°C for 24 h in paper disks. At an inoculum level of 10^7 CFU/disk, most of the salmonellae (14/18) and the STEC strains (31/35) survived with a population of 10^3 to 10^4 CFU/disk, whereas all of the shigellae (5/5) and the majority of the nonpathogenic E. coli strains (9/15) did not survive (the population was decreased to less than the detection limit of 10^2 CFU/disk). After 22 to 24 months of subsequent storage at 4°C, all of the selected salmonellae (4/4) and most of the selected STEC strains (12/15) survived, keeping the original populations (10^3 to 10^4 CFU/disk). In contrast to the case for storage at 4°C, all of 15 selected strains (5 strains each of Salmonella spp., STEC O157, and STEC O26) died after 35 to 70 days of storage at 25°C and 35°C. The survival rates of all of these 15 strains in paper disks after the 24 h of drying were substantially increased (10 to 79 times) by the presence of sucrose (12% to 36%). All of these 15 desiccated strains in paper disks survived after exposure to 70°C for 5 h. The populations of these 15 strains inoculated in dried foods containing sucrose and/or fat (e.g., chocolate) were 100 times higher than those in the dried paper disks after drying for 24 h at 25°C. PMID:16269694

  1. Global EOS: exploring the 300-ms-latency region

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Jericho, D.; Hsu, C.-Y.

    2017-10-01

    EOS, the CERN open-source distributed disk storage system, provides the high-performance storage solution for HEP analysis and the back-end for various work-flows. Recently EOS became the back-end of CERNBox, the cloud synchronisation service for CERN users. EOS can be used to take advantage of wide-area distributed installations: for the last few years CERN EOS has used a common deployment across two computer centres (Geneva-Meyrin and Budapest-Wigner) about 1,000 km apart (∼20-ms latency) with about 200 PB of disk (JBOD). In late 2015, the CERN-IT Storage group and AARNET (Australia) set up a challenging R&D project: a single EOS instance between CERN and AARNET with more than 300 ms latency (16,500 km apart). This paper reports on the successful deployment and operation of a distributed storage system between Europe (Geneva, Budapest), Australia (Melbourne) and later Asia (ASGC Taipei), allowing different types of data placement and data access across these four sites.

  2. Site Partitioning for Redundant Arrays of Distributed Disks

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.

    1996-01-01

    Redundant arrays of distributed disks (RADD) can be used in a distributed computing system or database system to provide recovery in the presence of disk crashes and temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites of a distributed storage system into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-hard. We then propose and evaluate several heuristic algorithms for finding approximate solutions. Simulation results show that significant reduction in remote parity update costs can be achieved by optimizing the site partitioning scheme.
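
    As a concrete, hypothetical illustration of the kind of heuristic the abstract refers to (not one of the authors' algorithms), the sketch below greedily seeds each array with an unassigned site and fills it with the remaining sites that are cheapest to reach, so that sites sharing an array, and hence exchanging parity updates, are communication-close.

      # Greedy illustration of site partitioning: repeatedly seed a new array with
      # an unassigned site and fill it with the cheapest remaining sites, so that
      # remote parity updates travel over low-cost links. Hypothetical heuristic,
      # not taken from the paper.

      def partition_sites(cost, array_size):
          """cost[i][j] = communication cost between sites i and j (symmetric)."""
          unassigned = set(range(len(cost)))
          arrays = []
          while unassigned:
              seed = min(unassigned)                 # deterministic seed choice
              group = [seed]
              unassigned.remove(seed)
              while len(group) < array_size and unassigned:
                  # pick the unassigned site with the smallest total cost to the group
                  best = min(unassigned,
                             key=lambda s: sum(cost[s][g] for g in group))
                  group.append(best)
                  unassigned.remove(best)
              arrays.append(group)
          return arrays

      # Example: 6 sites, arrays of 3; "nearby" sites have low pairwise cost.
      C = [[0, 1, 9, 9, 9, 9],
           [1, 0, 9, 9, 9, 9],
           [9, 9, 0, 1, 9, 9],
           [9, 9, 1, 0, 9, 9],
           [9, 9, 9, 9, 0, 1],
           [9, 9, 9, 9, 1, 0]]
      print(partition_sites(C, 3))   # e.g. [[0, 1, 2], [3, 4, 5]]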

  3. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  4. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  5. A Future Accelerated Cognitive Distributed Hybrid Testbed for Big Data Science Analytics

    NASA Astrophysics Data System (ADS)

    Halem, M.; Prathapan, S.; Golpayegani, N.; Huang, Y.; Blattner, T.; Dorband, J. E.

    2016-12-01

    As increased sensor spectral data volumes from current and future Earth Observing satellites are assimilated into high-resolution climate models, intensive cognitive machine learning technologies are needed to data mine, extract and intercompare model outputs. It is clear today that the next generation of computers and storage, beyond petascale cluster architectures, will be data centric. They will manage data movement and process data in place. Future cluster nodes have been announced that integrate multiple CPUs with high-speed links to GPUs and MICs on their backplanes, with massive non-volatile RAM and access to active flash RAM disk storage. Active Ethernet connected key value store disk storage drives with 10Ge or higher are now available through the Kinetic Open Storage Alliance. At the UMBC Center for Hybrid Multicore Productivity Research, a future state-of-the-art Accelerated Cognitive Computer System (ACCS) for Big Data science is being integrated into the current IBM iDataplex computational system `bluewave'. Based on the next-gen IBM 200 PF Sierra processor, an interim two-node IBM Power S822 testbed is being integrated with dual Power 8 processors with 10 cores, 1 TB RAM, a PCIe link to a K80 GPU and an FPGA Coherent Accelerated Processor Interface card to 20 TB of Flash RAM. This system is to be updated to the Power 8+, with NVLink 1.0 and the Pascal GPU, late in 2016. Moreover, the Seagate 96 TB Kinetic Disk system with 24 Ethernet connected active disks is integrated into the ACCS storage system. A Lightweight Virtual File System developed at NASA GSFC is installed on bluewave. Since remote access to publicly available quantum annealing computers is available at several government labs, the ACCS will offer an in-line Restricted Boltzmann Machine optimization capability on the D-Wave 2X quantum annealing processor, over the campus high speed 100 Gb network to Internet 2 for large files. As an evaluation test of the cognitive functionality of the architecture, the following studies utilizing all the system components will be presented: (i) a near-real-time climate change study generating CO2 fluxes, (ii) a deep-dive capability into an 8000 x 8000 pixel image pyramid display, and (iii) large dense and sparse eigenvalue decompositions.

  6. Reference System of DNA and Protein Sequences on CD-ROM

    NASA Astrophysics Data System (ADS)

    Nasu, Hisanori; Ito, Toshiaki

    DNASIS-DBREF31 is a database of DNA and protein sequences in the form of an optical Compact Disk (CD) ROM, developed and commercialized by Hitachi Software Engineering Co., Ltd. Both nucleic acid base sequences and protein amino acid sequences can be retrieved from a single CD-ROM. Existing databases are offered in the form of on-line services, floppy disks, or magnetic tape, all of which have one problem or another, such as usability or storage capacity. DNASIS-DBREF31 newly adopts a CD-ROM as the database device to realize mass storage and personal use of the database.

  7. Design and implementation of reliability evaluation of SAS hard disk based on RAID card

    NASA Astrophysics Data System (ADS)

    Ren, Shaohua; Han, Sen

    2015-10-01

    Because of the huge advantage of RAID technology in storage, it has been widely used. However, the problem associated with this technology is that a hard disk behind the RAID card cannot be queried by the operating system. Therefore, how to read the self-information and log data of the hard disk has been a problem, while this data is necessary for reliability testing of hard disks. The traditional way of reading this information is suitable only for SATA hard disks, not for SAS hard disks. In this paper, we provide a method that uses the LSI RAID card's application program interface, communicating with the RAID card and analyzing the feedback data to solve the problem. We then obtain the necessary information needed to assess the SAS hard disk.
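
    The abstract does not expose the LSI API calls the authors used. One commonly available alternative for reading the self-information (SMART and log data) of drives hidden behind a MegaRAID-family controller is smartmontools, which can address individual drives through the controller; the sketch below assumes smartctl is installed, the controller's block device is /dev/sda, and drives are addressed by megaraid device number.

      # Read SMART/self-information for a disk behind a MegaRAID-family controller
      # using smartmontools (an alternative to the vendor API used in the paper).
      # Assumes smartctl is installed and /dev/sda is the controller's block device.
      import subprocess

      def read_disk_info(device_index: int, host_device: str = "/dev/sda") -> str:
          result = subprocess.run(
              ["smartctl", "-a", "-d", f"megaraid,{device_index}", host_device],
              capture_output=True, text=True, check=False)
          return result.stdout

      print(read_disk_info(0))   # self-information and log data of the first drive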

  8. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
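
    The device-resident locking described above can be sketched as follows; the DeviceLock class and its methods are hypothetical placeholders standing in for the GFS device locks, not a real driver interface, and the example only shows the shape of an atomic read-modify-write serialized by a lock the storage device itself maintains.

      # Sketch of a read-modify-write made atomic by a lock held on the storage
      # device itself, in the spirit of the GFS design. The DeviceLock methods are
      # hypothetical placeholders, not a real driver API.
      import time

      class DeviceLock:
          """Stand-in for a lock maintained by the network-attached storage device."""
          def __init__(self):
              self._held = False

          def try_acquire(self) -> bool:
              if self._held:
                  return False
              self._held = True
              return True

          def release(self):
              self._held = False

      def atomic_update(device, lock: DeviceLock, block_no: int, modify):
          """Read-modify-write one block while the device-held lock serializes
          access from all cluster nodes sharing the device."""
          while not lock.try_acquire():     # back off until the device grants the lock
              time.sleep(0.001)
          try:
              data = device.read_block(block_no)
              device.write_block(block_no, modify(data))
          finally:
              lock.release()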

  9. The Scalable Checkpoint/Restart Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
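
    SCR itself is a C/MPI library; the sketch below does not use the SCR API but illustrates the underlying idea of caching checkpoints in node-local storage (for example a RAM disk) and flushing only an occasional checkpoint to the shared parallel file system. The paths and flush interval are assumptions for the example.

      # Illustration of the node-local checkpoint caching idea behind SCR: write
      # each checkpoint to fast local storage and copy only every k-th checkpoint
      # to the shared parallel file system. Generic sketch, not the SCR C API.
      import os, shutil

      LOCAL_CACHE = "/dev/shm/ckpt"          # node-local RAM disk (assumed path)
      PARALLEL_FS = "/p/lustre/myjob/ckpt"   # shared parallel file system (assumed path)
      FLUSH_EVERY = 10                       # flush 1 in 10 checkpoints for permanence

      def write_checkpoint(step: int, payload: bytes):
          os.makedirs(LOCAL_CACHE, exist_ok=True)
          local_path = os.path.join(LOCAL_CACHE, f"ckpt_{step}.bin")
          with open(local_path, "wb") as f:      # fast, fully local write
              f.write(payload)
          if step % FLUSH_EVERY == 0:            # occasional flush to shared storage
              os.makedirs(PARALLEL_FS, exist_ok=True)
              shutil.copy2(local_path, PARALLEL_FS)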

  10. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems but, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
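
    Single-failure recovery with a parity code, as mentioned in the abstract, can be shown in a few lines: the parity block is the bytewise XOR of the data blocks, so any one missing block is the XOR of the survivors. The block contents below are placeholders.

      # Single-failure recovery with simple parity (RAID-style): the parity block
      # is the bytewise XOR of all data blocks, so any one lost block can be
      # rebuilt as the XOR of the surviving blocks plus parity.
      from functools import reduce

      def xor_blocks(blocks):
          return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

      data = [b"disk0...", b"disk1...", b"disk2..."]   # equal-length data blocks
      parity = xor_blocks(data)

      # Suppose disk 1 fails (the failure is self-identifying: the array knows which).
      rebuilt = xor_blocks([data[0], data[2], parity])
      assert rebuilt == data[1]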

  11. Building an organic block storage service at CERN with Ceph

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel; Wiebalck, Arne

    2014-06-01

    Emerging storage requirements, such as the need for block storage for both OpenStack VMs and file services like AFS and NFS, have motivated the development of a generic backend storage service for CERN IT. The goals for such a service include (a) vendor neutrality, (b) horizontal scalability with commodity hardware, (c) fault tolerance at the disk, host, and network levels, and (d) support for geo-replication. Ceph is an attractive option due to its native block device layer RBD which is built upon its scalable, reliable, and performant object storage system, RADOS. It can be considered an "organic" storage solution because of its ability to balance and heal itself while living on an ever-changing set of heterogeneous disk servers. This work will present the outcome of a petabyte-scale test deployment of Ceph by CERN IT. We will first present the architecture and configuration of our cluster, including a summary of best practices learned from the community and discovered internally. Next the results of various functionality and performance tests will be shown: the cluster has been used as a backend block storage system for AFS and NFS servers as well as a large OpenStack cluster at CERN. Finally, we will discuss the next steps and future possibilities for Ceph at CERN.
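
    As an illustration of the RBD block layer mentioned above, the librbd Python bindings can create and write an image backed by RADOS. The sketch below assumes a reachable Ceph cluster configured in /etc/ceph/ceph.conf, a pool named "rbd", and the python-rados and python-rbd packages installed; the image name and size are arbitrary.

      # Create and write a RADOS Block Device image through the librbd Python
      # bindings. Assumes a reachable Ceph cluster, a pool called "rbd", and the
      # python-rados / python-rbd packages installed.
      import rados
      import rbd

      cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
      cluster.connect()
      ioctx = cluster.open_ioctx("rbd")                      # I/O context for the pool
      try:
          rbd.RBD().create(ioctx, "test-image", 1 * 1024**3)  # 1 GiB image
          with rbd.Image(ioctx, "test-image") as image:
              image.write(b"hello ceph", 0)                   # write at offset 0
              print(image.size())
      finally:
          ioctx.close()
          cluster.shutdown()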

  12. Proposal for a multilayer read-only-memory optical disk structure.

    PubMed

    Ichimura, Isao; Saito, Kimihiro; Yamasaki, Takeshi; Osato, Kiyoshi

    2006-03-10

    Coherent interlayer cross talk and stray-light intensity of multilayer read-only-memory (ROM) optical disks are investigated. From results of scalar diffraction analyses, we conclude that layer separations above 10 microm are preferred in a system using a 0.85 numerical aperture objective lens in terms of signal quality and stability in focusing control. Disk structures are optimized to prevent signal deterioration resulting from multiple reflections, and appropriate detectors are determined to maintain acceptable stray-light intensity. In the experiment, quadrilayer and octalayer high-density ROM disks are prepared by stacking UV-curable films onto polycarbonate substrates. Data-to-clock jitters of < or = 7% demonstrate the feasibility of multilayer disk storage up to 200 Gbytes.

  13. Beating the tyranny of scale with a private cloud configured for Big Data

    NASA Astrophysics Data System (ADS)

    Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag

    2015-04-01

    The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks, and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which by April 2015 will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, ranging from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment, ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively, even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high performance disk environment is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.
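
    A data management task such as checksumming hundreds of millions of files parallelizes naturally across a batch cluster; within a single node, a sketch like the following (standard library only, hypothetical paths and worker count) spreads the hashing over worker processes.

      # Checksum a large set of files in parallel on one node; at JASMIN scale the
      # same pattern would be spread across batch jobs. Paths are hypothetical.
      import hashlib
      from concurrent.futures import ProcessPoolExecutor
      from pathlib import Path

      def sha256_of(path: str):
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):   # 1 MiB chunks
                  h.update(chunk)
          return path, h.hexdigest()

      if __name__ == "__main__":
          files = [str(p) for p in Path("/archive/dataset").rglob("*") if p.is_file()]
          with ProcessPoolExecutor(max_workers=16) as pool:
              for path, digest in pool.map(sha256_of, files, chunksize=64):
                  print(digest, path)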

  14. Pooling the resources of the CMS Tier-1 sites

    DOE PAGES

    Apyan, A.; Badillo, J.; Cruz, J. Diaz; ...

    2015-12-23

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  15. Pooling the resources of the CMS Tier-1 sites

    NASA Astrophysics Data System (ADS)

    Apyan, A.; Badillo, J.; Diaz Cruz, J.; Gadrat, S.; Gutsche, O.; Holzman, B.; Lahiff, A.; Magini, N.; Mason, D.; Perez, A.; Stober, F.; Taneja, S.; Taze, M.; Wissing, C.

    2015-12-01

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Finally, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  16. MIDAS - ESO's new image processing system

    NASA Astrophysics Data System (ADS)

    Banse, K.; Crane, P.; Grosbol, P.; Middleburg, F.; Ounnas, C.; Ponz, D.; Waldthausen, H.

    1983-03-01

    The Munich Image Data Analysis System (MIDAS) is an image processing system whose heart is a pair of VAX 11/780 computers linked together via DECnet. One of these computers, VAX-A, is equipped with 3.5 Mbytes of memory, 1.2 Gbytes of disk storage, and two tape drives with 800/1600 bpi density. The other computer, VAX-B, has 4.0 Mbytes of memory, 688 Mbytes of disk storage, and one tape drive with 1600/6250 bpi density. MIDAS is a command-driven system geared toward the interactive user. The type and number of parameters in a command depend on the particular command invoked. MIDAS is a highly modular system that provides building blocks for undertaking more sophisticated applications. Presently, 175 commands are available. These include interactive modification of the color-lookup table to enhance various image features, and interactive extraction of subimages.

  17. Microcomputers in Libraries: The Quiet Revolution.

    ERIC Educational Resources Information Center

    Boss, Richard

    1985-01-01

    This article defines three separate categories of microcomputers--personal, desk-top, multi-user devices--and relates storage capabilities (expandability, floppy disks) to library applications. Highlights include de facto standards, operating systems, database management systems, applications software, circulation control systems, dumb and…

  18. Experiences From NASA/Langley's DMSS Project

    NASA Technical Reports Server (NTRS)

    1996-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at the NASA Langley Research Center (LaRC) has placed such a system into production use. This paper will present the experiences, both good and bad, we have had with this system since putting it into production usage. The system is comprised of: 1) National Storage Laboratory (NSL)/UniTree 2.1, 2) IBM 9570 HIPPI attached disk arrays (both RAID 3 and RAID 5), 3) IBM RS6000 server, 4) HIPPI/IPI3 third party transfers between the disk array systems and the supercomputer clients, a CRAY Y-MP and a CRAY 2, 5) a "warm spare" file server, 6) transition software to convert from CRAY's Data Migration Facility (DMF) based system to DMSS, 7) an NSC PS32 HIPPI switch, and 8) a STK 4490 robotic library accessed from the IBM RS6000 block mux interface. This paper will cover: the performance of the DMSS in the areas of file transfer rates, migration and recall, and file manipulation (listing, deleting, etc.); the appropriateness of a workstation class of file server for NSL/UniTree with LaRC's present storage requirements in mind; the role of the third party transfers between the supercomputers and the DMSS disk array systems; a detailed comparison (both in performance and functionality) between the DMF and DMSS systems; LaRC's enhancements to the NSL/UniTree system administration environment; the mechanism for DMSS to provide file server redundancy; statistics on the availability of DMSS; and the design of and experiences with the locally developed transparent transition software which allowed us to make over 1.5 million DMF files available to NSL/UniTree with minimal system outage.

  19. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
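
    The kind of analytic queuing model mentioned above can be illustrated with a toy open network (this is not the paper's model, and all rates below are invented for the example): requests hit the disk cache with probability h and otherwise go to the tape subsystem, and each device is modelled as an M/M/1 queue whose response time depends on its arrival and service rates.

      # Toy hierarchical-storage queuing model (illustrative, not the paper's model):
      # requests are served by the disk cache with hit probability h and by the tape
      # subsystem otherwise; each device is modelled as an M/M/1 queue.

      def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
          assert arrival_rate < service_rate, "queue must be stable"
          return 1.0 / (service_rate - arrival_rate)

      def mean_response(total_rate: float, hit_ratio: float,
                        disk_service_rate: float, tape_service_rate: float) -> float:
          r_disk = mm1_response_time(total_rate * hit_ratio, disk_service_rate)
          r_tape = mm1_response_time(total_rate * (1 - hit_ratio), tape_service_rate)
          return hit_ratio * r_disk + (1 - hit_ratio) * r_tape

      # 5 requests/s overall, 90% disk hits, disk serves 50/s, tape serves 1/s.
      print(round(mean_response(5.0, 0.9, 50.0, 1.0), 3))   # about 0.22 s on average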

  20. A File Archival System

    NASA Technical Reports Server (NTRS)

    Fanselow, J. L.; Vavrus, J. L.

    1984-01-01

    ARCH, file archival system for DEC VAX, provides for easy offline storage and retrieval of arbitrary files on DEC VAX system. System designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of same programs and associated files.

  1. DICOM implementation on online tape library storage system

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Dai, Hailei L.; Elghammer, David; Levine, Betty A.; Mun, Seong K.

    1998-07-01

    The main purpose of this project is to implement a Digital Imaging and Communications in Medicine (DICOM) compliant online tape library system over the Internet. Once finished, the system will be used to store medical exams generated from the U.S. Army Mobile Army Surgical Hospital (MASH) in Tuzla, Bosnia. A modified UC Davis implementation of the DICOM storage class is used for this project. The DICOM storage class user and provider are implemented as the system's interface to the Internet. The DICOM software provides flexible configuration options such as types of modalities and trusted remote DICOM hosts. Metadata is extracted from each exam and indexed in a relational database for query and retrieve purposes. The medical images are stored inside the Wolfcreek-9360 tape library system from StorageTek Corporation. The tape library system has nearline access to more than 1000 tapes. Each tape has a capacity of 800 megabytes, giving a total nearline capacity of around 1 terabyte. The tape library uses the Application Storage Manager (ASM), which provides cost-effective file management, storage, archival, and retrieval services. ASM automatically and transparently copies files from expensive magnetic disk to the less expensive nearline tape library, and restores the files when they are needed. The ASM also provides a crash recovery tool, which enables an entire file system to be restored in a short time. A graphical user interface (GUI) function is used to view the contents of the storage systems. This GUI also allows users to retrieve the stored exams and send them anywhere on the Internet using DICOM protocols. With the integration of the different components of the system, we have implemented a high capacity online tape library storage system that is flexible and easy to use. Using tape as an alternative storage medium, as opposed to magnetic disk, has great potential for cost savings in terms of dollars per megabyte of storage. As this system matures, the Hospital Information Systems/Radiology Information Systems (HIS/RIS) or other components can potentially be developed as interfaces to the outside world, thus widening the usage of the tape library system.
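
    The "metadata indexed in a relational database for query and retrieve" step can be pictured with a small sketch. The schema and identifiers below are hypothetical and stand in for whatever the project actually used; DICOM parsing and the tape I/O handled by ASM are outside the sketch.

      # Minimal sketch of metadata indexing for query/retrieve, with an invented schema.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE exams (
          study_uid   TEXT PRIMARY KEY,
          patient_id  TEXT,
          modality    TEXT,
          study_date  TEXT,
          tape_volume TEXT,      -- which library tape holds the image files
          tape_path   TEXT)""")

      # Index an incoming exam: metadata goes to the database, pixel data would be
      # handed to the tape library / ASM layer.
      conn.execute("INSERT INTO exams VALUES (?, ?, ?, ?, ?, ?)",
                   ("1.2.840.99999.1", "MASH-0042", "CR", "1998-07-01",
                    "VOL0137", "/asm/exams/1.2.840.99999.1"))

      # Query/retrieve: find where a patient's studies are stored.
      for row in conn.execute(
              "SELECT study_uid, tape_volume, tape_path FROM exams WHERE patient_id = ?",
              ("MASH-0042",)):
          print(row)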

  2. Free Vibration Analysis of a Spinning Flexible DISK-SPINDLE System Supported by Ball Bearing and Flexible Shaft Using the Finite Element Method and Substructure Synthesis

    NASA Astrophysics Data System (ADS)

    JANG, G. H.; LEE, S. H.; JUNG, M. S.

    2002-03-01

    Free vibration of a spinning flexible disk-spindle system supported by ball bearing and flexible shaft is analyzed by using Hamilton's principle, FEM and substructure synthesis. The spinning disk is described by using the Kirchhoff plate theory and von Karman non-linear strain. The rotating spindle and stationary shaft are modelled by Rayleigh beam and Euler beam respectively. Using Hamilton's principle and including the rigid body translation and tilting motion, partial differential equations of motion of the spinning flexible disk and spindle are derived consistently to satisfy the geometric compatibility in the internal boundary between substructures. FEM is used to discretize the derived governing equations, and substructure synthesis is introduced to assemble each component of the disk-spindle-bearing-shaft system. The developed method is applied to the spindle system of a computer hard disk drive with three disks, and modal testing is performed to verify the simulation results. The simulation result agrees very well with the experimental one. This research investigates critical design parameters in an HDD spindle system, i.e., the non-linearity of a spinning disk and the flexibility and boundary condition of a stationary shaft, to predict the free vibration characteristics accurately. The proposed method may be effectively applied to predict the vibration characteristics of a spinning flexible disk-spindle system supported by ball bearing and flexible shaft in the various forms of computer storage device, i.e., FDD, CD, HDD and DVD.

  3. Cost-effective data storage/archival subsystem for functional PACS

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.; Kim, Yongmin

    1993-09-01

    Not the least of the requirements of a workable PACS is the ability to store and archive vast amounts of information. A medium-size hospital will generate between 1 and 2 TBytes of data annually on a fully functional PACS. A high-speed image transmission network coupled with a comparably high-speed central data storage unit can make local memory and magnetic disks in the PACS workstations less critical and, in an extreme case, unnecessary. Under these circumstances, the capacity and performance of the central data storage subsystem and database are critical in determining the response time at the workstations, thus significantly affecting clinical acceptability. The central data storage subsystem not only needs to provide sufficient capacity to store about ten days' worth of images (five days' worth of new studies, and on the average, about one comparison study for each new study), but also to supply images to the requesting workstation in a timely fashion. The database must provide fast retrieval responses upon users' requests for images. This paper analyzes both advantages and disadvantages of multiple parallel transfer disks versus RAID disks for the short-term central data storage subsystem, as well as optical disk jukebox versus digital recorder tape subsystem for long-term archive. Furthermore, an example high-performance, cost-effective storage subsystem that integrates both RAID disks and a high-speed digital tape subsystem as a PACS data storage/archival unit is presented.
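
    A quick back-of-the-envelope check of the short-term capacity implied by those figures, taking the quoted 2 TBytes/year upper bound and the stated five days of new studies plus one comparison study each; the constants are illustrative only.

      # Rough sizing of the short-term central storage described above.
      ANNUAL_NEW_DATA_TB = 2.0          # upper end of the quoted 1-2 TByte/year range
      DAYS_OF_NEW_STUDIES = 5
      COMPARISON_FACTOR = 2.0           # one prior study fetched per new study

      daily_tb = ANNUAL_NEW_DATA_TB / 365.0
      short_term_tb = daily_tb * DAYS_OF_NEW_STUDIES * COMPARISON_FACTOR
      print(f"short-term cache needs roughly {short_term_tb * 1000:.0f} GB")
      # -> roughly 55 GB at 2 TB/year for the "about ten days" of current plus
      #    comparison studies, while the long-term archive grows by terabytes per year.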

  4. Computer Sciences and Data Systems, volume 2

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: data storage; information network architecture; VHSIC technology; fiber optics; laser applications; distributed processing; spaceborne optical disk controller; massively parallel processors; and advanced digital SAR processors.

  5. A case for automated tape in clinical imaging.

    PubMed

    Bookman, G; Baune, D

    1998-08-01

    Electronic archiving of radiology images over many years will require many terabytes of storage with a need for rapid retrieval of these images. As more large PACS installations are implemented, a data crisis occurs. The ability to store this large amount of data using the traditional method of optical jukeboxes or online disk alone becomes an unworkable solution. The amount of floor space, number of optical jukeboxes, and off-line shelf storage required to store the images becomes unmanageable. With the recent advances in tape and tape drives, the use of tape for long term storage of PACS data has become the preferred alternative. A PACS system consisting of a centrally managed system of RAID disk, software, and, at the heart of the system, tape presents a solution that for the first time solves the problems of multi-modality high end PACS, non-DICOM image, electronic medical record and ADT data storage. This paper will examine the installation of the University of Utah Department of Radiology PACS system and the integration of an automated tape archive. The tape archive is also capable of storing data other than traditional PACS data. The implementation of an automated data archive to serve the many other needs of a large hospital will also be discussed. This will include the integration of a filmless cardiology department and the backup/archival needs of a traditional MIS department. The need for high bandwidth to tape with a large RAID cache will be examined, as well as how, with an interface to a RIS pre-fetch engine, tape can be a superior solution to optical platters or other archival solutions. The data management software will be discussed in detail. The performance and cost of RAID disk cache and automated tape compared to a solution that includes optical will be examined.

  6. Multi-Level Bitmap Indexes for Flash Memory Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Madduri, Kamesh; Canon, Shane

    2010-07-23

    Due to their low access latency, high read speed, and power-efficient operation, flash memory storage devices are rapidly emerging as an attractive alternative to traditional magnetic storage devices. However, tests show that the most efficient indexing methods are not able to take advantage of the flash memory storage devices. In this paper, we present a set of multi-level bitmap indexes that can effectively take advantage of flash storage devices. These indexing methods use coarsely binned indexes to answer queries approximately, and then use finely binned indexes to refine the answers. Our new methods read significantly lower volumes of data at the expense of an increased disk access count, thus taking full advantage of the improved read speed and low access latency of flash devices. To demonstrate the advantage of these new indexes, we measure their performance on a number of storage systems using a standard data warehousing benchmark called the Set Query Benchmark. We observe that multi-level strategies on flash drives are up to 3 times faster than traditional indexing strategies on magnetic disk drives.
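
    The coarse-then-fine refinement idea can be sketched in a few lines of Python. This is a toy illustration of binned indexing, not the authors' implementation: whole coarse bins below the query bound are accepted outright, the fine level resolves most of the coarse "edge" bin, and only the rows left in the fine edge bin need a raw-data check.

      import bisect

      def build_bins(values, edges):
          """Bin i holds the row ids whose value falls between edges[i-1] and edges[i]."""
          bins = [set() for _ in range(len(edges) + 1)]
          for row, v in enumerate(values):
              bins[bisect.bisect_right(edges, v)].add(row)
          return bins

      def rows_less_than(values, coarse_edges, fine_edges, x):
          coarse, fine = build_bins(values, coarse_edges), build_bins(values, fine_edges)
          k = bisect.bisect_right(coarse_edges, x)
          hits = set().union(*coarse[:k]) if k else set()   # whole coarse bins below x
          edge_rows = coarse[k]                             # coarse bin that straddles x
          fk = bisect.bisect_right(fine_edges, x)
          fine_hits = set().union(*fine[:fk]) if fk else set()
          hits |= edge_rows & fine_hits                     # resolved by the fine level
          unresolved = edge_rows & fine[fk]                 # only these need a raw-data read
          return hits | {r for r in unresolved if values[r] < x}

      data = [3, 17, 42, 8, 25]
      print(sorted(rows_less_than(data, [10, 30], [5, 10, 15, 20, 25, 30, 35, 40], 18)))
      # -> [0, 1, 3]  (the rows holding 3, 17 and 8)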

  7. Software Engineering Principles 3-14 August 1981,

    DTIC Science & Technology

    1981-08-01

    small disk used (but not that of the extended mass storage or large disk option); it is very fast (about 1/5 the speed of the primary memory, where the disk was 1/10000 for access); and...programmed and tested - must be correct and fast D. Choice of right synchronization operations: Design problem 1. Several mentioned in literature 9-22

  8. Storage resource manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Bakken, J.; Petravick, D.

    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing for uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing clogging of storage systems with "orphan" files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.
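
    A toy sketch of the reservation-lifetime idea (dynamic space allocation whose leases expire and are reclaimed automatically); this is not the SRM protocol or any Fermilab code, and the class, sizes and lifetime are invented.

      import time
      import uuid

      class StorageElement:
          def __init__(self, capacity_bytes):
              self.capacity = capacity_bytes
              self.reservations = {}          # token -> (bytes, expiry_timestamp)

          def reserve(self, nbytes, lifetime_s):
              if nbytes > self.free():
                  raise RuntimeError("not enough free space")
              token = str(uuid.uuid4())
              self.reservations[token] = (nbytes, time.time() + lifetime_s)
              return token

          def free(self):
              self._collect_expired()
              return self.capacity - sum(n for n, _ in self.reservations.values())

          def _collect_expired(self):
              now = time.time()
              for token in [t for t, (_, exp) in self.reservations.items() if exp < now]:
                  del self.reservations[token]   # space returns to the pool automatically

      se = StorageElement(capacity_bytes=10 * 2**40)        # a 10 TiB pool
      tok = se.reserve(nbytes=2 * 2**40, lifetime_s=3600)   # 2 TiB leased for one hour
      print(se.free() / 2**40, "TiB still free")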

  9. An ASIC memory buffer controller for a high speed disk system

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Campbell, Steve

    1993-01-01

    The need for large capacity, high speed mass memory storage devices has become increasingly evident at NASA during the past decade. High performance mass storage systems are crucial to present and future NASA systems. Spaceborne data storage system requirements have grown in response to the increasing amounts of data generated and processed by orbiting scientific experiments. Predictions indicate increases in the volume of data by orders of magnitude during the next decade. Current predictions are for storage capacities on the order of terabits (Tb), with data rates exceeding one gigabit per second (Gbps). As part of the design effort for a state of the art mass storage system, NASA Langley has designed a 144 CMOS ASIC to support high speed data transfers. This paper discusses the system architecture, ASIC design and some of the lessons learned in the development process.

  10. DPM — efficient storage in diverse environments

    NASA Astrophysics Data System (ADS)

    Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio

    2014-06-01

    Recent developments, including low power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as they need it, and optimizing their computing centers' energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server, the head node, largely reducing its hard disk requirements. Since version 1.8.6, DPM is released in EPEL and Fedora, simplifying distribution and maintenance, but also supporting the ARM architecture besides i386 and x86_64, allowing it to run on the smallest low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the possibility of scaling horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage, and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power-efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup both in terms of namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the lowest scale using Raspberry Pis with only 700 MHz single cores and 100 Mbps network connections, over conventional multi-core servers, to typical virtual machine instances in cloud settings. We evaluate the combinations of different name server setups, for example load-balanced clusters, with different storage setups, from using a classic local configuration to private and public clouds.
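
    One way to picture the cost metric described, folding equipment price and power draw into a single figure per terabyte-year; the function and every number below are placeholders, not values from the paper.

      # Sketch of a total-cost-of-ownership comparison; all inputs are invented.
      def tco_per_tb_year(hw_cost, capacity_tb, lifetime_years, power_watts,
                          electricity_per_kwh=0.15):
          energy_cost = power_watts / 1000.0 * 24 * 365 * lifetime_years * electricity_per_kwh
          return (hw_cost + energy_cost) / (capacity_tb * lifetime_years)

      # A hypothetical low-power head node with USB disks vs. a conventional server:
      print(f"low-power node : {tco_per_tb_year(300, 8, 4, 10):.1f} per TB-year")
      print(f"server node    : {tco_per_tb_year(6000, 48, 4, 350):.1f} per TB-year")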

  11. Document Indexing for Image-Based Optical Information Systems.

    ERIC Educational Resources Information Center

    Thiel, Thomas J.; And Others

    1991-01-01

    Discussion of image-based information retrieval systems focuses on indexing. Highlights include computerized information retrieval; multimedia optical systems; optical mass storage and personal computers; and a case study that describes an optical disk system which was developed to preserve, access, and disseminate military documents. (19…

  12. NASA Langley Research Center's distributed mass storage system

    NASA Technical Reports Server (NTRS)

    Pao, Juliet Z.; Humes, D. Creig

    1993-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existent mass storage system to the DMSS.

  13. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    NASA Astrophysics Data System (ADS)

    Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.

    2014-06-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance, compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure involving the loss of 16 disks. Both cloud storage systems are finally demonstrated to function as back-end storage for a filesystem, which is used to deliver high energy physics software.
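
    Load generation against an S3 endpoint of the kind described can be sketched with boto3; the endpoint URL, credentials, bucket name and object sizes below are placeholders, and CERN's actual test harness is not reproduced here.

      import os
      import time
      import boto3

      s3 = boto3.client(
          "s3",
          endpoint_url="http://s3.example.invalid",   # placeholder S3-compatible endpoint
          aws_access_key_id="ACCESS_KEY",
          aws_secret_access_key="SECRET_KEY",
      )

      BUCKET, N_OBJECTS, OBJ_SIZE = "io-test", 100, 4 * 1024 * 1024   # bucket assumed to exist
      payload = os.urandom(OBJ_SIZE)

      start = time.time()
      for i in range(N_OBJECTS):
          s3.put_object(Bucket=BUCKET, Key=f"obj-{i}", Body=payload)
      write_rate = N_OBJECTS * OBJ_SIZE / (time.time() - start) / 2**20

      start = time.time()
      for i in range(N_OBJECTS):
          s3.get_object(Bucket=BUCKET, Key=f"obj-{i}")["Body"].read()
      read_rate = N_OBJECTS * OBJ_SIZE / (time.time() - start) / 2**20

      print(f"write: {write_rate:.1f} MiB/s, read: {read_rate:.1f} MiB/s")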

  14. Evaluating Non-In-Place Update Techniques for Flash-Based Transaction Processing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yongkun; Goda, Kazuo; Kitsuregawa, Masaru

    Recently, flash memory has been emerging as a mainstream storage device. With its price sliding fast, the cost per unit capacity is approaching that of SATA disk drives. So far flash memory has been widely deployed in consumer electronics and, to some extent, in mobile computing environments. For enterprise systems, the deployment has been studied by many researchers and developers. In terms of the access performance characteristics, flash memory is quite different from disk drives. Without the mechanical components, flash memory has very high random read performance, whereas it has a limited random write performance because of the erase-before-write design. The random write performance of flash memory is comparable with or even worse than that of disk drives. Due to such a performance asymmetry, naive deployment in enterprise systems may not exploit the potential performance of flash memory to the fullest. This paper studies the effectiveness of using non-in-place-update (NIPU) techniques through the I/O path of flash-based transaction processing systems. Our deliberate experiments using both an open-source DBMS and a commercial DBMS validated the potential benefits; a 3.0x to 6.6x performance improvement was confirmed by incorporating non-in-place-update techniques into the file system without any modification of applications or storage devices.
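
    The essence of a non-in-place-update (log-structured) write path can be sketched as follows; this toy store illustrates the general technique of appending new versions instead of overwriting, not the specific mechanism evaluated in the paper.

      class AppendOnlyStore:
          def __init__(self):
              self.log = bytearray()      # stands in for the flash-resident log
              self.index = {}             # key -> (offset, length) of the newest version

          def put(self, key, value: bytes):
              offset = len(self.log)
              self.log.extend(value)      # append: no in-place overwrite, so no erase cycle
              self.index[key] = (offset, len(value))

          def get(self, key) -> bytes:
              offset, length = self.index[key]
              return bytes(self.log[offset:offset + length])

      store = AppendOnlyStore()
      store.put("row42", b"balance=100")
      store.put("row42", b"balance=250")   # update appends; the old version becomes garbage
      print(store.get("row42"))            # b'balance=250'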

  15. The impact of image storage organization on the effectiveness of PACS.

    PubMed

    Hindel, R

    1990-11-01

    Picture archiving communication system (PACS) requires efficient handling of large amounts of data. Mass storage systems are cost effective but slow, while very fast systems, like frame buffers and parallel transfer disks, are expensive. The image traffic can be divided into inbound traffic generated by diagnostic modalities and outbound traffic into workstations. At the contact points with medical professionals, the responses must be fast. Archiving, on the other hand, can employ slower but less expensive storage systems, provided that the primary activities are not impeded. This article illustrates a segmentation architecture meeting these requirements based on a clearly defined PACS concept.

  16. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  17. Onboard System Evaluation of Rotors Vibration, Engines (OBSERVE) monitoring System

    DTIC Science & Technology

    1992-07-01

    consists of a Data Acquisition Unit (DAU), Control and Display Unit (CADU), Universal Tracking Devices (UTD), Remote Cockpit Display (RCD) and a PC...and Display Unit (CADU) - The CADU provides data storage and a graphical user interface necessary to display both the measured data and diagnostic...information. The CADU has an interface to a Credit Card Memory (CCM) which operates similarly to a disk drive, allowing the storage of data and programs. The

  18. Reducing the Cost of System Administration of a Disk Storage System Built from Commodity Components

    DTIC Science & Technology

    2000-05-01

    quickly by using checkpointing and roll-forward logs. Microsoft Tiger is a video server built from commodity PCs which they call “cubs” [BBD+96, BFD97...20 cents per megabyte using street prices of components. 3.2.2 Redundancy In designing the TD prototype, we have taken care to ensure it does not have... Td /GridPix/, 1999. [ATP99] Satoshi Asami, Nisha Talagala, and David Patterson. Designing a self-maintaining storage system. In Proceedings of the

  19. Design and evaluation of a hybrid storage system in HEP environment

    NASA Astrophysics Data System (ADS)

    Xu, Qi; Cheng, Yaodong; Chen, Gang

    2017-10-01

    Nowadays, High Energy Physics experiments produce a large amount of data. These data are stored in mass storage systems, which need to balance cost, performance and manageability. In this paper, a hybrid storage system including SSDs (solid-state drives) and HDDs (hard disk drives) is designed to accelerate data analysis and maintain a low cost. The performance of accessing files is a decisive factor for the HEP computing system. A new deployment model of the Hybrid Storage System in High Energy Physics is proposed and is shown to have higher I/O performance. The detailed evaluation methods and the evaluations of the SSD/HDD ratio and the size of the logic block are also given. In all evaluations, sequential-read, sequential-write, random-read and random-write are all tested to obtain comprehensive results. The results show the Hybrid Storage System performs well in areas such as accessing large files in HEP.
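
    A minimal harness for the sequential- versus random-read comparison mentioned above might look like the sketch below; it times reads of an ordinary file and ignores effects (notably the operating system page cache) that a real benchmark would have to control for. The file name and sizes are arbitrary.

      import os
      import random
      import time

      PATH, FILE_SIZE, BLOCK = "testfile.bin", 256 * 2**20, 1 * 2**20   # 256 MiB, 1 MiB blocks

      with open(PATH, "wb") as f:                 # create a test file once
          f.write(os.urandom(FILE_SIZE))

      def read_blocks(offsets):
          # NOTE: the OS page cache will inflate these numbers unless it is dropped
          # or the file is opened with direct I/O; this sketch does neither.
          start = time.time()
          with open(PATH, "rb") as f:
              for off in offsets:
                  f.seek(off)
                  f.read(BLOCK)
          return FILE_SIZE / (time.time() - start) / 2**20   # MiB/s

      sequential = list(range(0, FILE_SIZE, BLOCK))
      shuffled = sequential[:]
      random.shuffle(shuffled)

      print(f"sequential read: {read_blocks(sequential):.0f} MiB/s")
      print(f"random read:     {read_blocks(shuffled):.0f} MiB/s")
      os.remove(PATH)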

  20. Security of patient data when decommissioning ultrasound systems.

    PubMed

    Moggridge, James

    2017-02-01

    Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system's own delete function; full reinstallation of system software; the manufacturer's own disk wiping service; open source disk wiping software for full and just blank space erasure. Attempts were then made to recover data using open source recovery tools. All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system's own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer's wiping service or the open source wiping software. The typical method of reinstalling an ultrasound system's software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system's hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state.
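
    The basic overwrite-before-delete idea behind such wiping tools can be illustrated with a short sketch; this is not any of the tested products, and wiping a single file does not address copies left in free space, swap or filesystem journals.

      import os

      def wipe_file(path, passes=1):
          """Overwrite a file's contents with zeros before deleting it."""
          size = os.path.getsize(path)
          with open(path, "r+b") as f:
              for _ in range(passes):
                  f.seek(0)
                  remaining = size
                  while remaining > 0:
                      chunk = min(remaining, 1024 * 1024)
                      f.write(b"\x00" * chunk)   # overwrite in 1 MiB chunks
                      remaining -= chunk
                  f.flush()
                  os.fsync(f.fileno())           # push the overwrite to the device
          os.remove(path)

      # wipe_file("/data/ultrasound/exam123.dcm")   # hypothetical path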

  1. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, and use of a scalable, high performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide area distributed disk servers operates in parallel to provide logical block level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.

  2. Data storage technology comparisons

    NASA Technical Reports Server (NTRS)

    Katti, Romney R.

    1990-01-01

    The role of data storage and data storage technology is an integral, though conceptually often underestimated, portion of data processing technology. Data storage is important in the mass storage mode in which generated data is buffered for later use. But data storage technology is also important in the data flow mode when data are manipulated and hence required to flow between databases, datasets and processors. This latter mode is commonly associated with memory hierarchies which support computation. VLSI devices can reasonably be defined as electronic circuit devices such as channel and control electronics as well as highly integrated, solid-state devices that are fabricated using thin film deposition technology. VLSI devices in both capacities play an important role in data storage technology. In addition to random access memories (RAM), read-only memories (ROM), and other silicon-based variations such as PROM's, EPROM's, and EEPROM's, integrated devices find their way into a variety of memory technologies which offer significant performance advantages. These memory technologies include magnetic tape, magnetic disk, magneto-optic disk, and vertical Bloch line memory. In this paper, some comparison between selected technologies will be made to demonstrate why more than one memory technology exists today, based for example on access time and storage density at the active bit and system levels.

  3. Moore's law realities for recording systems and memory storage components: HDD, tape, NAND, and optical

    NASA Astrophysics Data System (ADS)

    Fontana, Robert E.; Decad, Gary M.

    2018-05-01

    This paper describes trends in the storage technologies associated with Linear Tape Open (LTO) Tape cartridges, hard disk drives (HDD), and NAND Flash based storage devices including solid-state drives (SSD). This technology discussion centers on the relationship between cost/bit and bit density and, specifically, on how the Moore's Law expectation of areal density doubling and cost/bit halving every two years is no longer being achieved for storage-based components. This observation and a Moore's Law discussion are demonstrated with data from 9-year storage technology trends, assembled from publicly available industry reporting sources.
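
    The compounding behind that framing is easy to make concrete; the 15%/year figure below is purely illustrative, chosen only to show how far a slower decline diverges from cost/bit halving every two years over a nine-year window.

      def remaining_fraction(annual_decline, years):
          """Fraction of the starting cost/bit left after compounding an annual decline."""
          return (1.0 - annual_decline) ** years

      moore_like = 1.0 - 0.5 ** (1.0 / 2)          # ~29.3%/year, i.e. halving every 2 years
      print(f"Moore-like decline: {remaining_fraction(moore_like, 9):.3f} of starting cost/bit")
      print(f"15%/year decline:   {remaining_fraction(0.15, 9):.3f} of starting cost/bit")
      # ~0.044 vs ~0.232 after nine years: roughly a factor of five apart.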

  4. Research Studies on Advanced Optical Module/Head Designs for Optical Data Storage

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Preprints are presented from the recent 1992 Optical Data Storage meeting in San Jose. The papers are divided into the following topical areas: Magneto-optical media (Modeling/design and fabrication/characterization/testing); Optical heads (holographic optical elements); and Optical heads (integrated optics). Some representative titles are as follow: Diffraction analysis and evaluation of several focus and track error detection schemes for magneto-optical disk systems; Proposal for massively parallel data storage system; Transfer function characteristics of super resolving systems; Modeling and measurement of a micro-optic beam deflector; Oxidation processes in magneto-optic and related materials; and A modal analysis of lamellar diffraction gratings in conical mountings.

  5. Status of emerging standards for removable computer storage media and related contributions of NIST

    NASA Technical Reports Server (NTRS)

    Podio, Fernando L.

    1992-01-01

    Standards for removable computer storage media are needed so that users may reliably interchange data both within and among various computer installations. Furthermore, media interchange standards support competition in industry and prevent sole-source lock-in. NIST participates in magnetic tape and optical disk standards development through Technical Committees X3B5, Digital Magnetic Tapes, X3B11, Optical Digital Data Disk, and the Joint Technical Commission on Data Permanence. NIST also participates in other relevant national and international standards committees for removable computer storage media. Industry standards for digital magnetic tapes require the use of Standard Reference Materials (SRM's) developed and maintained by NIST. In addition, NIST has been studying care and handling procedures required for digital magnetic tapes. NIST has developed a methodology for determining the life expectancy of optical disks. NIST is developing care and handling procedures for optical digital data disks and is involved in a program to investigate error reporting capabilities of optical disk drives. This presentation reflects the status of emerging magnetic tape and optical disk standards, as well as NIST's contributions in support of these standards.

  6. Geophysical data base

    NASA Technical Reports Server (NTRS)

    Williamson, M. R.; Kirschner, L. R.

    1975-01-01

    A general data-management system that provides a random-access capability for large amounts of data is described. The system operates on a CDC 6400 computer using a combination of magnetic tape and disk storage. A FORTRAN subroutine package is provided to simplify the maintenance and use of the data.

  7. Converged photonic data storage and switch platform for exascale disaggregated data centers

    NASA Astrophysics Data System (ADS)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  8. The State of the Art in Information Handling. Operation PEP/Executive Information Systems.

    ERIC Educational Resources Information Center

    Summers, J. K.; Sullivan, J. E.

    This document explains recent developments in computer science and information systems of interest to the educational manager. A brief history of computers is included, together with an examination of modern computers' capabilities. Various features of card, tape, and disk information storage systems are presented. The importance of time-sharing…

  9. 75 FR 1625 - Privacy Act of 1974; Report of Amended or Altered System; Medical, Health and Billing Records System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-12

    ...., desktop, laptop, handheld or other computer types) containing protected personal identifiers or PHI is... as the National Indian Women's Resource Center, to conduct analytical and evaluation studies. 8... SYSTEM: STORAGE: File folders, ledgers, card files, microfiche, microfilm, computer tapes, disk packs...

  10. Eighth Goddard Conference on Mass Storage Systems and Technologies in Cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    2000-01-01

    This document contains copies of those technical papers received in time for publication prior to the Eighth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center March 27-30, 2000. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, future of current technology, new technology with a special emphasis on holographic storage, performance, standards, site reports, vendor solutions. Tutorials will be available on stability of optical media, disk subsystem performance evaluation, I/O and storage tuning, functionality and performance evaluation of file systems for storage area networks.

  11. Permanent-File-Validation Utility Computer Program

    NASA Technical Reports Server (NTRS)

    Derry, Stephen D.

    1988-01-01

    Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.

  12. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2018-06-13

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  13. Method and apparatus for bistable optical information storage for erasable optical disks

    DOEpatents

    Land, Cecil E.; McKinney, Ira D.

    1990-01-01

    A method and an optical device for bistable storage of optical information, together with reading and erasure of the optical information, using a photoactivated shift in a field dependent phase transition between a metastable or a bias-stabilized ferroelectric (FE) phase and a stable antiferroelectric (AFE) phase in a lead lanthanum zirconate titanate (PLZT). An optical disk contains the PLZT. Writing and erasing of optical information can be accomplished by a light beam normal to the disk. Reading of optical information can be accomplished by a light beam at an incidence angle of 15 to 60 degrees to the normal of the disk.

  14. Method and apparatus for bistable optical information storage for erasable optical disks

    DOEpatents

    Land, C.E.; McKinney, I.D.

    1988-05-31

    A method and an optical device for bistable storage of optical information, together with reading and erasure of the optical information, using a photoactivated shift in a field dependent phase transition between a metastable or a bias-stabilized ferroelectric (FE) phase and a stable antiferroelectric (AFE) phase in a lead lanthanum zirconate titanate (PLZT). An optical disk contains the PLZT. Writing and erasing of optical information can be accomplished by a light beam normal to the disk. Reading of optical information can be accomplished by a light beam at an incidence angle of 15 to 60 degrees to the normal of the disk. 10 figs.

  15. Development of a software interface for optical disk archival storage for a new life sciences flight experiments computer

    NASA Technical Reports Server (NTRS)

    Bartram, Peter N.

    1989-01-01

    The current Life Sciences Laboratory Equipment (LSLE) microcomputer for life sciences experiment data acquisition is now obsolete. Among the weaknesses of the current microcomputer are small memory size, relatively slow analog data sampling rates, and the lack of a bulk data storage device. While life science investigators normally prefer data to be transmitted to Earth as it is taken, this is not always possible. No down-link exists for experiments performed in the Shuttle middeck region. One important aspect of a replacement microcomputer is provision for in-flight storage of experimental data. The Write Once, Read Many (WORM) optical disk was studied because of its high storage density, data integrity, and the availability of a space-qualified unit. In keeping with the goals for a replacement microcomputer based upon commercially available components and standard interfaces, the system studied includes a Small Computer System Interface (SCSI) for interfacing the WORM drive. The system itself is designed around the STD bus, using readily available boards. Configurations examined were: (1) master processor board and slave processor board with the SCSI interface; (2) master processor with SCSI interface; (3) master processor with SCSI and Direct Memory Access (DMA); (4) master processor controlling a separate STD bus SCSI board; and (5) master processor controlling a separate STD bus SCSI board with DMA.

  16. Data storage systems technology for the Space Station era

    NASA Technical Reports Server (NTRS)

    Dalton, John; Mccaleb, Fred; Sos, John; Chesney, James; Howell, David

    1987-01-01

    The paper presents the results of an internal NASA study to determine if economically feasible data storage solutions are likely to be available to support the ground data transport segment of the Space Station mission. An internal NASA effort to prototype a portion of the required ground data processing system is outlined. It is concluded that the requirements for all ground data storage functions can be met with commercial disk and tape drives assuming conservative technology improvements and that, to meet Space Station data rates with commercial technology, the data will have to be distributed over multiple devices operating in parallel and in a sustained maximum throughput mode.

  17. Proof of cipher text ownership based on convergence encryption

    NASA Astrophysics Data System (ADS)

    Zhong, Weiwei; Liu, Zhusong

    2017-08-01

    Cloud storage systems save disk space and bandwidth through deduplication technology, but the use of this technology has attracted security attacks: an attacker can obtain ownership of a file from the server by presenting only its hash value. To address these security problems and the differing security requirements of files in a cloud storage system, an efficient, information-theoretically secure proof-of-ownership scheme is proposed. The scheme protects data through convergent encryption and uses an improved block-level proof of ownership, enabling block-level client-side deduplication and an efficient, secure cloud storage deduplication scheme.
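
    The two ingredients named in the abstract, convergent (content-derived) keys and block-level ownership challenges, can be sketched with standard-library hashing; this is a toy illustration rather than the proposed scheme, and the actual encryption step is omitted.

      import hashlib
      import secrets

      BLOCK = 4096

      def convergent_key(data: bytes) -> bytes:
          # Key derived from the content itself: identical plaintexts yield identical
          # keys and ciphertexts, which is what makes deduplication possible.
          return hashlib.sha256(data).digest()

      def block_digests(data: bytes):
          return [hashlib.sha256(data[i:i + BLOCK]).digest()
                  for i in range(0, len(data), BLOCK)]

      def challenge(stored_digests, n=3):
          # Server asks for randomly chosen blocks, so knowing only the file hash
          # is not enough to claim ownership.
          return [secrets.randbelow(len(stored_digests)) for _ in range(n)]

      def prove(data: bytes, indices):
          digests = block_digests(data)
          return [digests[i] for i in indices]

      def verify(stored_digests, indices, response):
          return all(stored_digests[i] == r for i, r in zip(indices, response))

      data = b"example file contents " * 1000
      server_digests = block_digests(data)           # kept by the storage service
      idx = challenge(server_digests)
      print(verify(server_digests, idx, prove(data, idx)))   # True only if the client holds the blocks
      print("convergent key:", convergent_key(data).hex()[:16], "...")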

  18. Medical image digital archive: a comparison of storage technologies

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy; Hutchings, Matt

    1998-07-01

    A cost effective, high capacity digital archive system is one of the remaining key factors that will enable a radiology department to eliminate film as an archive medium. The ever increasing amount of digital image data is creating the need for huge archive systems that can reliably store and retrieve millions of images and hold from a few terabytes of data to possibly hundreds of terabytes. Selecting the right archive solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, conformance to open standards, archive availability and reliability, security, cost, achievable benefits and cost savings, investment protection, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. New technologies will be discussed, such as DVD and high performance tape. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on random and pre-fetch retrieval time will be analyzed. The concept of automated migration of images from high performance, RAID disk storage devices to high capacity, Nearline storage devices will be introduced as a viable way to minimize overall storage costs for an archive.

  19. One-Dimensional Signal Extraction Of Paper-Written ECG Image And Its Archiving

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-ni; Zhang, Hong; Zhuang, Tian-ge

    1987-10-01

    A method for converting paper-written electrocardiograms to one dimensional (1-D) signals for archival storage on floppy disk is presented here. Appropriate image processing techniques were employed to remove the background noise inherent to ECG recorder charts and to reconstruct the ECG waveform. The entire process consists of (1) digitization of paper-written ECGs with an image processing system via a TV camera; (2) image preprocessing, including histogram filtering and binary image generation; (3) ECG feature extraction and ECG wave tracing; and (4) transmission of the processed ECG data to IBM-PC compatible floppy disks for storage and retrieval. The algorithms employed here may also be used in the recognition of paper-written EEG or EMG and may be useful in robotic vision.
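
    Steps (2) and (3) can be pictured with a small numpy sketch: threshold the scanned chart to a binary image, then collapse each column (one time sample) to the mean row index of its dark trace pixels. The threshold and the synthetic image are illustrative; the paper's filtering and tracing are more involved.

      import numpy as np

      def extract_trace(gray_image: np.ndarray, threshold: int = 128) -> np.ndarray:
          binary = gray_image < threshold                  # True where the dark trace is
          rows = np.arange(gray_image.shape[0])[:, None]
          counts = binary.sum(axis=0)
          # mean row index of trace pixels per column; columns with no trace -> NaN
          with np.errstate(invalid="ignore", divide="ignore"):
              centers = (rows * binary).sum(axis=0) / counts
          return centers                                   # 1-D signal, one value per column

      # Tiny synthetic example: a diagonal "trace" across a 5x5 white image.
      img = np.full((5, 5), 255, dtype=np.uint8)
      img[[4, 3, 2, 1, 0], range(5)] = 0
      print(extract_trace(img))    # [4. 3. 2. 1. 0.]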

  20. Short-term storage allocation in a filmless hospital

    NASA Astrophysics Data System (ADS)

    Strickland, Nicola H.; Deshaies, Marc J.; Reynolds, R. Anthony; Turner, Jonathan E.; Allison, David J.

    1997-05-01

    Optimizing limited short term storage (STS) resources requires gradual, systematic changes, monitored and modified within an operational PACS environment. Optimization of the centralized storage requires a balance of exam numbers and types in STS to minimize lengthy retrievals from long term archive. Changes to STS parameters and work procedures were made while monitoring the effects on resource allocation by analyzing disk space temporally. Proportions of disk space allocated to each patient category on STS were measured to approach the desired proportions in a controlled manner. Key factors for STS management were: (1) sophisticated exam prefetching algorithms: HIS/RIS-triggered, body part-related and historically-selected, and (2) a 'storage onion' design allocating various exam categories to layers with differential deletion protection. Hospitals planning for STS space should consider the needs of radiology, wards, outpatient clinics and clinicoradiological conferences for new and historical exams; desired on-line time; and potential increase in image throughput and changing resources, such as an increase in short term storage disk space.
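
    The "storage onion" can be caricatured as an eviction policy that trims the least-protected layer first when the short-term store fills; the layer names, ordering and sizes below are invented for illustration and do not reflect the hospital's actual configuration.

      from collections import OrderedDict

      LAYERS = ["conference", "inpatient", "new_exam"]   # outermost (trim first) -> innermost

      class ShortTermStore:
          def __init__(self, capacity_gb):
              self.capacity = capacity_gb
              self.exams = OrderedDict()    # exam_id -> (layer, size_gb), oldest first

          def used(self):
              return sum(size for _, size in self.exams.values())

          def add(self, exam_id, layer, size_gb):
              while self.used() + size_gb > self.capacity:
                  self._evict_one()
              self.exams[exam_id] = (layer, size_gb)

          def _evict_one(self):
              for layer in LAYERS:                         # peel the onion from the outside
                  for exam_id, (l, _) in self.exams.items():
                      if l == layer:
                          del self.exams[exam_id]          # would be recalled from long-term archive
                          return
              raise RuntimeError("nothing evictable")

      sts = ShortTermStore(capacity_gb=10)
      sts.add("e1", "new_exam", 4)
      sts.add("e2", "conference", 4)
      sts.add("e3", "inpatient", 4)       # evicts e2 (least-protected layer) first
      print(list(sts.exams))              # ['e1', 'e3']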

  1. Security of patient data when decommissioning ultrasound systems

    PubMed Central

    2017-01-01

    Background Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. Methods In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system’s own delete function; full reinstallation of system software; the manufacturer’s own disk wiping service; open source disk wiping software for full and just blank space erasure. Attempts were then made to recover data using open source recovery tools. Results All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system’s own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer’s wiping service or the open source wiping software. Conclusion The typical method of reinstalling an ultrasound system’s software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system’s hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state. PMID:28228821

  2. Optical Disks.

    ERIC Educational Resources Information Center

    Gale, John C.; And Others

    1985-01-01

    This four-article section focuses on information storage capacity of the optical disk covering the information workstation (uses microcomputer, optical disk, compact disc to provide reference information, information content, work product support); use of laser videodisc technology for dissemination of agricultural information; encoding databases…

  3. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSD) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDD), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility - PROOF is a distributed analysis system which allows the inherent event-level parallelism of high energy physics data to be exploited. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.

  4. The medium is NOT the message or Indefinitely long-term file storage at Leeds University

    NASA Technical Reports Server (NTRS)

    Holdsworth, David

    1996-01-01

    Approximately 3 years ago we implemented an archive file storage system which embodies experiences gained over more than 25 years of using and writing file storage systems. It is the third in-house system that we have written, and all three systems have been adopted by other institutions. This paper discusses the requirements for long-term data storage in a university environment, and describes how our present system is designed to meet these requirements indefinitely. Particular emphasis is laid on experiences from past systems, and their influence on current system design. We also look at the influence of the IEEE-MSS standard. We currently have the system operating in five UK universities. The system operates in a multi-server environment, and is currently operational with UNIX (SunOS4, Solaris2, SGI-IRIX, HP-UX), NetWare3 and NetWare4. PCs logged on to NetWare can also archive and recover files that live on their hard disks.

  5. Development of superconducting magnetic bearing with superconducting coil and bulk superconductor for flywheel energy storage system

    NASA Astrophysics Data System (ADS)

    Arai, Y.; Seino, H.; Yoshizawa, K.; Nagashima, K.

    2013-11-01

    We have been developing superconducting magnetic bearing for flywheel energy storage system to be applied to the railway system. The bearing consists of a superconducting coil as a stator and bulk superconductors as a rotor. A flywheel disk connected to the bulk superconductors is suspended contactless by superconducting magnetic bearings (SMBs). We have manufactured a small scale device equipped with the SMB. The flywheel was rotated contactless over 2000 rpm which was a frequency between its rigid body mode and elastic mode. The feasibility of this SMB structure was demonstrated.

  6. Fixed-base flywheel storage systems for electric-utility applications: An assessment of economic viability and R and D priorities

    NASA Astrophysics Data System (ADS)

    Olszewski, M.; Steele, R. S.

    1983-02-01

    Electric utility side meter storage options were assessed for the daily 2 h peaking spike application. The storage options considered included compressed air, batteries, and flywheels. The potential role for flywheels in this application was assessed and research and development (R and D) priorities were established for fixed base flywheel systems. Results of the worth cost analysis indicate that where geologic conditions are favorable, compressed air energy storage (CAES) is a strong competitor against combustion turbines. Existing battery and flywheel systems rated about equal, both being, at best, marginally uncompetitive with turbines. Advanced batteries, if existing cost and performance goals are met, could be competitive with CAES. A three-task R and D effort for flywheel development appears warranted. The first task, directed at reducing fabrication costs and increasing performance of a chopped fiber, F-glass, solid disk concept, could produce a competitive flywheel system.

  7. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut

    File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  8. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.
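
    The decision the compiler automates can be caricatured by hand: choose the on-disk layout that makes the dominant access direction sequential, so most reads hit the storage cache and prefetcher favorably. The access counts below stand in for what loop analysis would provide; this illustrates the idea, not the paper's algorithm.

      def choose_file_layout(row_sweep_refs, column_sweep_refs):
          """Pick the layout that keeps the dominant access pattern sequential on disk."""
          return "row-major" if row_sweep_refs >= column_sweep_refs else "column-major"

      # e.g. a loop nest that reads a disk-resident array row by row in 90% of its references:
      print(choose_file_layout(row_sweep_refs=90, column_sweep_refs=10))   # row-major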

  9. Surface-Enhanced Raman Optical Data Storage system

    DOEpatents

    Vo-Dinh, T.

    1991-03-12

    A method and apparatus for a Surface-Enhanced Raman Optical Data Storage (SERODS) System are disclosed. A medium which exhibits the Surface Enhanced Raman Scattering (SERS) phenomenon has data written onto its surface or microenvironment by means of a write-on procedure which disturbs the surface or microenvironment of the medium and results in the medium having a changed SERS emission when excited. The write-on procedure is controlled by a signal that corresponds to the data to be stored so that the disturbed regions on the storage device (e.g., disk) represent the data. After the data is written onto the storage device it is read by exciting the surface of the storage device with an appropriate radiation source and detecting changes in the SERS emission to produce a detection signal. The data is then reproduced from the detection signal. 5 figures.

  10. Surface-enhanced raman optical data storage system

    DOEpatents

    Vo-Dinh, Tuan

    1991-01-01

    A method and apparatus for a Surface-Enhanced Raman Optical Data Storage (SERODS) System is disclosed. A medium which exhibits the Surface Enhanced Raman Scattering (SERS) phenomenon has data written onto its surface or microenvironment by means of a write-on procedure which disturbs the surface or microenvironment of the medium and results in the medium having a changed SERS emission when excited. The write-on procedure is controlled by a signal that corresponds to the data to be stored so that the disturbed regions on the storage device (e.g., disk) represent the data. After the data is written onto the storage device it is read by exciting the surface of the storage device with an appropriate radiation source and detecting changes in the SERS emission to produce a detection signal. The data is then reproduced from the detection signal.

  11. Technology and the Online Catalog.

    ERIC Educational Resources Information Center

    Graham, Peter S.

    1983-01-01

    Discusses trends in computer technology and their use for library catalogs, noting the concept of bandwidth (describes quantity of information transmitted per given unit of time); computer hardware differences (micros, minis, maxis); distributed processing systems and databases; optical disk storage; networks; transmission media; and terminals.…

  12. The convertible flywheel

    NASA Astrophysics Data System (ADS)

    Ginsburg, B. R.

    The design and testing of a new twin-disk composite flywheel is described. It is the first flywheel to store 2 kW-hr of energy and the first to successfully combine the advantages of composite materials with metal hubs, thus providing a system-ready flywheel with high energy storage and high torque capabilities. The use of flywheels in space for energy storage in satellites and space stations is examined. The convertibility of the present flywheel to provide the next generation Annular Momentum Control Device or Annular Suspension and Pointing System is discussed.

  13. Evolving Requirements for Magnetic Tape Data Storage Systems

    NASA Technical Reports Server (NTRS)

    Gniewek, John J.

    1996-01-01

    Magnetic tape data storage systems have evolved in an environment where the major applications have been backup/restore, disaster recovery, and long-term archive. Coincident with the rapidly improving price-performance of disk storage systems, the prime requirements for tape storage systems have remained: (1) low cost per MB, and (2) a data rate balanced to the remaining system components. Little emphasis was given to configuring the technology components to optimize retrieval of the stored data. Emerging new applications, such as network-attached high-speed memory (HSM) and digital libraries, place additional emphasis and requirements on the retrieval of the stored data. It is therefore desirable to consider the system as defined by both storage and retrieval requirements, i.e., as a STorage And Retrieval System (STARS). It is possible to provide comparative performance analysis of different STARS by incorporating parameters related to (1) device characteristics and (2) application characteristics, in combination with queuing theory analysis. Results of these analyses are presented here in the form of response time as a function of system configuration for two different types of devices and for a variety of applications.
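    A STARS-style comparison ultimately reduces to queuing arithmetic. The sketch below is a minimal M/M/1 response-time calculation of the kind such an analysis might use; the service times and arrival rate are made-up illustrations, not figures from the paper.

```python
# Minimal M/M/1 response-time model: R = S / (1 - rho), with rho = lambda * S.
def mm1_response_time(service_time_s: float, arrival_rate_per_s: float) -> float:
    rho = arrival_rate_per_s * service_time_s      # device utilization
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization >= 1")
    return service_time_s / (1.0 - rho)            # mean time in system

# Illustrative only: a tape-like device vs. a disk-like device at the same request rate.
for name, service_s in [("tape", 30.0), ("disk", 0.02)]:
    print(name, round(mm1_response_time(service_s, arrival_rate_per_s=0.02), 3), "s")
```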

  14. Motivation and Design of the Sirocco Storage System Version 1.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew Leon; Ward, H. Lee; Danielson, Geoffrey Charles

    Sirocco is a massively parallel, high performance storage system for the exascale era. It emphasizes client-to-client coordination, low server-side coupling, and free data movement to improve resilience and performance. Its architecture is inspired by peer-to-peer and victim-cache architectures. By leveraging these ideas, Sirocco natively supports several media types, including RAM, flash, disk, and archival storage, with automatic migration between levels. Sirocco also includes storage interfaces and support that are more advanced than typical block storage. Sirocco enables clients to efficiently use key-value storage or block-based storage with the same interface. It also provides several levels of transactional data updates within a single storage command, including full ACID-compliant updates. This transaction support extends to updating several objects within a single transaction. Further support is provided for concurrency control, enabling greater performance for workloads while providing safe concurrent modification. By pioneering these and other technologies and techniques in the storage system, Sirocco is poised to fulfill a need for a massively scalable, write-optimized storage system for exascale systems. This is version 1.0 of a document reflecting the current and planned state of Sirocco. Further versions of this document will be accessible at http://www.cs.sandia.gov/Scalable_IO/sirocco.

  15. Simple, Script-Based Science Processing Archive

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle

    2007-01-01

    The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archiving, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (File Transfer Protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4P continuously checks stored data for integrity. Further reliability is provided by tape backups of disks, made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.
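    The following is a minimal sketch, in Python, of the kind of data-driven step S4PA's scripts perform: detect new files in a staging area, copy them to a disk archive, and record a checksum for later integrity checks. The directory paths and the use of MD5 are assumptions for the example, not the actual S4P scripts.

```python
# Hedged sketch of a detect-transfer-archive pass; paths and MD5 are assumptions.
import hashlib
import shutil
from pathlib import Path

INCOMING = Path("/tmp/incoming")   # assumed staging area (e.g., an FTP drop box)
ARCHIVE = Path("/tmp/archive")     # assumed disk archive root

def archive_new_files() -> None:
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    for src in sorted(INCOMING.glob("*")):
        dst = ARCHIVE / src.name
        if dst.exists():
            continue                               # already archived
        shutil.copy2(src, dst)                     # transfer onto archive disk
        digest = hashlib.md5(dst.read_bytes()).hexdigest()
        dst.with_suffix(dst.suffix + ".md5").write_text(digest + "\n")

if __name__ == "__main__":
    INCOMING.mkdir(parents=True, exist_ok=True)    # so the demo runs anywhere
    archive_new_files()
```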

  16. SAN/CXFS test report to LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, T M; Eldel, A

    2000-01-01

    The primary objectives of this project were to evaluate the performance of the SGI CXFS file system in a Storage Area Network (SAN) and to compare and contrast it with the performance of a locally attached XFS file system on the same computer and storage subsystems. The University of Minnesota participants were asked to verify that the performance of the SAN/CXFS configuration did not fall below 85% of the performance of the local XFS configuration. There were two basic hardware test configurations constructed from the following equipment: two Onyx 2 computer systems, each with two Qlogic-based Fibre Channel/XIO Host Bus Adapters (HBAs); one 8-port Brocade Silkworm 2400 Fibre Channel switch; and four Ciprico RF7000 RAID disk arrays populated with Seagate Barracuda 50 GB disk drives. The operating system on each of the Onyx 2 computer systems was IRIX 6.5.6. The first hardware configuration consisted of directly connecting the Ciprico arrays to the Qlogic controllers without the Brocade switch. The purpose of this configuration was to establish baseline performance data on the raw Qlogic controller/Ciprico disk subsystem. This baseline performance data would then be used to demonstrate any performance differences arising from the addition of the Brocade Fibre Channel switch. Furthermore, the performance of the Qlogic controllers could be compared to that of the older, Adaptec-based XIO dual-channel Fibre Channel adapters previously used on these systems. It should be noted that only raw device tests were performed on this configuration; no file system testing was performed. The second hardware configuration introduced the Brocade Fibre Channel switch. Two FC ports from each of the Onyx 2 computer systems were attached to four ports of the switch, and the four Ciprico arrays were attached to the remaining four. Raw disk subsystem tests were performed on the SAN configuration in order to demonstrate the performance differences between the direct-connect and the switched configurations. After this testing was completed, the Ciprico arrays were formatted with an XFS file system and performance numbers were gathered to establish a file system performance baseline. Finally, the disks were formatted with CXFS and further tests were run to demonstrate the performance of the CXFS file system. A summary of the results of these tests is given.
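    The acceptance criterion quoted above is easy to express directly; the numbers in the sketch below are placeholders, not measurements from the report.

```python
# Hedged check of the 85%-of-local-XFS acceptance criterion; figures are placeholders.
def meets_threshold(san_mb_s: float, local_mb_s: float, threshold: float = 0.85) -> bool:
    return san_mb_s >= threshold * local_mb_s

print(meets_threshold(san_mb_s=88.0, local_mb_s=100.0))   # True: 88% of the baseline
print(meets_threshold(san_mb_s=80.0, local_mb_s=100.0))   # False: 80% of the baseline
```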

  17. Head-disk Interface Study for Heat Assisted Magnetic Recording (HAMR) and Plasmonic Nanolithography for Patterned Media

    NASA Astrophysics Data System (ADS)

    Xiong, Shaomin

    The magnetic storage areal density keeps increasing every year, and magnetic recording-based hard disk drives provide a very cheap and effective solution to the ever increasing demand for data storage. Heat assisted magnetic recording (HAMR) and bit patterned media have been proposed to increase the magnetic storage density beyond 1 Tb/in2. In HAMR systems, high magnetic anisotropy materials are recommended to break the superparamagnetic limit for further scaling down the size of magnetic bits. However, the current magnetic transducers are not able to generate strong enough field to switch the magnetic orientations of the high magnetic anisotropy material so the data writing is not able to be achieved. So thermal heating has to be applied to reduce the coercivity for the magnetic writing. To provide the heating, a laser is focused using a near field transducer (NFT) to locally heat a ~(25 nm)2 spot on the magnetic disk to the Curie temperature, which is ~ 400 C-600°C, to assist in the data writing process. But this high temperature working condition is a great challenge for the traditional head-disk interface (HDI). The disk lubricant can be depleted by evaporation or decomposition. The protective carbon overcoat can be graphitized or oxidized. The surface quality, such as its roughness, can be changed as well. The NFT structure is also vulnerable to degradation under the large number of thermal load cycles. The changes of the HDI under the thermal conditions could significantly reduce the robustness and reliability of the HAMR products. In bit patterned media systems, instead of using the continuous magnetic granular material, physically isolated magnetic islands are used to store data. The size of the magnetic islands should be about or less than 25 nm in order to achieve the storage areal density beyond 1 Tb/in2. However, the manufacture of the patterned media disks is a great challenge for the current optical lithography technology. Alternative lithography solutions, such as nanoimprint, plasmonic nanolithography, could be potential candidates for the fabrication of patterned disks. This dissertation focuses mainly on: (1) an experimental study of the HDI under HAMR conditions (2) exploration of a plasmonic nanolithography technology. In this work, an experimental HAMR testbed (named "Cal stage") is developed to study different aspects of HAMR systems, including the tribological head-disk interface and heat transfer in the head-disk gap. A temperature calibration method based on magnetization decay is proposed to obtain the relationship between the laser power input and temperature increase on the disk. Furthermore, lubricant depletion tests under various laser heating conditions are performed. The effects of laser heating repetitions, laser power and disk speeds on lubricant depletion are discussed. Lubricant depletion under the optical focused laser beam heating and the NFT heating are compared, revealing that thermal gradient plays an important role for lubricant depletion. Lubricant reflow behavior under various conditions is also studied, and a power law dependency of lubricant depletion on laser heating repetitions is obtained from the experimental results. A conductive-AFM system is developed to measure the electrical properties of thin carbon films. The conductivity or resistivity is a good parameter for characterizing the sp2/sp3 components of the carbon films. 
Different heating modes are applied to study the degradation of the carbon films, including temperature-controlled electric heater heating, focused laser beam heating and NFT heating. It is revealed that the temperature and heating duration significantly affect the degradation of the carbon films. Surface reflectivity and roughness are changed under certain heating conditions. The failure of the NFT structure during slider flying is investigated using our in-house fabricated sliders. In order to extend the lifetime of the NFT, a two-stage heating scheme is proposed and a numerical simulation has verified the feasibility of this new scheme. The heat dissipated around the NFT structure causes a thermal protrusion. There is a chance for contact to occur between the protrusion and disk which can result in a failure of the NFT. A design method to combine both TFC protrusion and laser induced NFT protrusion is proposed to reduce the fly-height modulation and chance of head-disk contact. Finally, an integrated plasmonic nanolithography machine is introduced to fabricate the master template for patterned disks. The plasmonic nanolithography machine uses a flying slider with a plasmonic lens to expose the thermal resist on a spinning wafer. The system design, optimization and integration have been performed over the past few years. Several sub-systems of the plasmonic nanolithography machine, such as the radial and circumferential direction position control, high speed pattern generation, are presented in this work. The lithography results are shown as well.

  18. Study of data I/O performance on distributed disk system in mask data preparation

    NASA Astrophysics Data System (ADS)

    Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu

    2010-09-01

    Data volume is growing larger every day in Mask Data Preparation (MDP), while faster data handling is always required. MDP flows typically introduce a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs is increased, the throughput may saturate because hard disk I/O and network speeds can become bottlenecks. MDP therefore has to invest heavily not only in hundreds of CPUs but also in the storage and network devices that make the throughput faster. NCS introduces a new distributed processing system called "NDE", a distributed disk system that increases throughput without a large investment because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance with the OASIS® data format on NDE, which contributes to realizing high throughput.

  19. Attention Novices: Friendly Intro to Shiny Disks.

    ERIC Educational Resources Information Center

    Bardes, D'Ellen

    1986-01-01

    Provides an overview of how optical storage technologies--videodisk, Write-Once disks, and CD-ROM CD-I disks are built into and controlled via DEC, Apple, Atari, Amiga, and IBM PC compatible microcomputers. Several available products are noted and a list of producers is included. (EM)

  20. A study of mass data storage technology for rocket engine data

    NASA Technical Reports Server (NTRS)

    Ready, John F.; Benser, Earl T.; Fritz, Bernard S.; Nelson, Scott A.; Stauffer, Donald R.; Volna, William M.

    1990-01-01

    The results of a nine month study program on mass data storage technology for rocket engine (especially the Space Shuttle Main Engine) health monitoring and control are summarized. The program had the objective of recommending a candidate mass data storage technology development for rocket engine health monitoring and control and of formulating a project plan and specification for that technology development. The work was divided into three major technical tasks: (1) development of requirements; (2) survey of mass data storage technologies; and (3) definition of a project plan and specification for technology development. The first of these tasks reviewed current data storage technology and developed a prioritized set of requirements for the health monitoring and control applications. The second task included a survey of state-of-the-art and newly developing technologies and a matrix-based ranking of the technologies. It culminated in a recommendation of optical disk technology as the best candidate for technology development. The final task defined a proof-of-concept demonstration, including tasks required to develop, test, analyze, and demonstrate the technology advancement, plus an estimate of the level of effort required. The recommended demonstration emphasizes development of an optical disk system which incorporates an order-of-magnitude increase in writing speed above the current state of the art.

  1. Design Alternatives to Improve Access Time Performance of Disk Drives Under DOS and UNIX

    NASA Astrophysics Data System (ADS)

    Hospodor, Andy

    For the past 25 years, improvements in CPU performance have overshadowed improvements in the access time performance of disk drives. CPU performance has been slanted towards greater instruction execution rates, measured in millions of instructions per second (MIPS). However, the slant for performance of disk storage has been towards capacity and corresponding increased storage densities. The IBM PC, introduced in 1982, processed only a fraction of a MIP. Follow-on CPUs, such as the 80486 and 80586, sported 5-10 MIPS by 1992. Single user PCs and workstations, with one CPU and one disk drive, became the dominant application, as implied by their production volumes. However, disk drives did not enjoy a corresponding improvement in access time performance, although the potential still exists. The time to access a disk drive improves (decreases) in two ways: by altering the mechanical properties of the drive or by adding cache to the drive. This paper explores the improvement to access time performance of disk drives using cache, prefetch, faster rotation rates, and faster seek acceleration.
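    The usual back-of-envelope model behind such comparisons is average access time = seek + rotational latency + transfer, with rotational latency taken as half a revolution. The sketch below works this out for two spindle speeds; the parameter values are illustrative assumptions, not figures from the paper.

```python
# Hedged access-time model: seek + half a rotation + transfer; values are illustrative.
def avg_access_ms(seek_ms: float, rpm: float, request_kb: float, transfer_mb_s: float) -> float:
    rotational_latency_ms = 0.5 * 60_000.0 / rpm                # half a revolution
    transfer_ms = (request_kb / 1024.0) / transfer_mb_s * 1000.0
    return seek_ms + rotational_latency_ms + transfer_ms

print(round(avg_access_ms(seek_ms=12.0, rpm=3600, request_kb=4, transfer_mb_s=1.5), 2), "ms")
print(round(avg_access_ms(seek_ms=12.0, rpm=7200, request_kb=4, transfer_mb_s=1.5), 2), "ms")
```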

  2. I/O performance evaluation of a Linux-based network-attached storage device

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoyan; Dong, Yonggui; Wu, Jinglian; Jia, Huibo; Feng, Guanping

    2002-09-01

    In a Local Area Network (LAN), clients are permitted to access the files on high-density optical disks via a network server. However, the quality of read service offered by a conventional server is unsatisfactory because the server performs multiple functions and serves too many callers. This paper develops a Linux-based Network-Attached Storage (NAS) server. The operating system (OS), composed of an optimized kernel and a miniaturized file system, is stored in a flash memory. After initialization, the NAS device is connected to the LAN. The administrator and users can configure and access the server through web pages, respectively. In order to enhance the quality of access, the management of the buffer cache in the file system is optimized. Some benchmark programs are performed to evaluate the I/O performance of the NAS device. Since data recorded on optical disks are usually for read access, our attention is focused on the reading throughput of the device. The experimental results indicate that the I/O performance of our NAS device is excellent.

  3. A media maniac's guide to removable mass storage media

    NASA Technical Reports Server (NTRS)

    Kempster, Linda S.

    1996-01-01

    This paper addresses, at a high level, the many individual technologies available today in the removable storage arena, including removable magnetic tapes, magnetic floppies, optical disks, and optical tape. The tape recorders discussed below cover longitudinal, serpentine, longitudinal-serpentine, and helical scan technologies. The magnetic floppies discussed are used for personal electronic in-box applications. Optical disks still fill the role for dense long-term storage. The media capacities quoted are for native data. In some cases, 2 KB ASCII pages or 50 KB document images will be referenced.

  4. Laser Optical Disk: The Coming Revolution in On-Line Storage.

    ERIC Educational Resources Information Center

    Fujitani, Larry

    1984-01-01

    Review of similarities and differences between magnetic-based and optical disk drives includes a discussion of the electronics necessary for their operation; describes benefits, possible applications, and future trends in development of laser-based drives; and lists manufacturers of laser optical disk drives. (MBR)

  5. Efficient proof of ownership for cloud storage systems

    NASA Astrophysics Data System (ADS)

    Zhong, Weiwei; Liu, Zhusong

    2017-08-01

    Cloud storage systems use deduplication technology to save disk space and bandwidth, but the use of this technology has invited targeted security attacks: an attacker can deceive the server into granting ownership of a file by obtaining only the hash value of the original file. In order to solve this security problem and address the different security requirements of files in a cloud storage system, an efficient and information-theoretically secure proof-of-ownership scheme that supports file rating is proposed. The K-means algorithm is used to implement file rating, and random-seed techniques together with pre-calculation are used to achieve a safe and efficient proof-of-ownership scheme. The scheme is information-theoretically secure and achieves better performance in the most sensitive areas of client-side I/O and computation.
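    A common way to make ownership proofs depend on more than the file hash is a block-level challenge-response: the server asks for digests of randomly chosen regions of the stored file, which only a party holding the full content can answer. The sketch below illustrates that generic idea; the block size, SHA-256, and the in-memory demo file are assumptions for the example, not the scheme proposed in the paper.

```python
# Hedged block-challenge sketch; block size, SHA-256, and the demo file are assumptions.
import hashlib
import secrets

BLOCK = 4096

def block_hash(data: bytes, offset: int) -> str:
    return hashlib.sha256(data[offset:offset + BLOCK]).hexdigest()

def server_challenge(stored: bytes, n: int = 3):
    offsets = [secrets.randbelow(max(1, len(stored) - BLOCK)) for _ in range(n)]
    return offsets, [block_hash(stored, o) for o in offsets]

def client_response(claimed: bytes, offsets):
    return [block_hash(claimed, o) for o in offsets]

stored = secrets.token_bytes(1 << 20)                 # the file the server already holds
offsets, expected = server_challenge(stored)
print(client_response(stored, offsets) == expected)   # True only if the client has the file
```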

  6. Embedded optical interconnect technology in data storage systems

    NASA Astrophysics Data System (ADS)

    Pitwon, Richard C. A.; Hopkins, Ken; Milward, Dave; Muggeridge, Malcolm

    2010-05-01

    As both data storage interconnect speeds increase and form factors in hard disk drive technologies continue to shrink, the density of printed channels on the storage array midplane goes up. The dominant interconnect protocol on storage array midplanes is expected to increase to 12 Gb/s by 2012 thereby exacerbating the performance bottleneck in future digital data storage systems. The design challenges inherent to modern data storage systems are discussed and an embedded optical infrastructure proposed to mitigate this bottleneck. The proposed solution is based on the deployment of an electro-optical printed circuit board and active interconnect technology. The connection architecture adopted would allow for electronic line cards with active optical edge connectors to be plugged into and unplugged from a passive electro-optical midplane with embedded polymeric waveguides. A demonstration platform has been developed to assess the viability of embedded electro-optical midplane technology in dense data storage systems and successfully demonstrated at 10.3 Gb/s. Active connectors incorporate optical transceiver interfaces operating at 850 nm and are connected in an in-plane coupling configuration to the embedded waveguides in the midplane. In addition a novel method of passively aligning and assembling passive optical devices to embedded polymer waveguide arrays has also been demonstrated.

  7. Operational characteristics of energy storage high temperature superconducting flywheels considering time dependent processes

    NASA Astrophysics Data System (ADS)

    Vajda, Istvan; Kohari, Zalan; Porjesz, Tamas; Benko, Laszlo; Meerovich, V.; Sokolovsky; Gawalek, W.

    2002-08-01

    The technical and economic feasibility of short-term energy storage flywheels with high-temperature superconducting (HTS) bearings is widely investigated. It is essential to reduce the AC losses caused by magnetic field variations in the HTS bulk disks/rings (levitators) used in the magnetic bearings of flywheels. For the HTS bearings, the calculation and measurement of the magnetic field distribution were performed. Effects such as eccentricity and tilting were measured. The time dependency of the levitation force following a jumpwise movement of the permanent magnet was measured. The results were used to set up an engineering design algorithm for energy storage HTS flywheels. This algorithm was applied to an experimental HTS flywheel model with a disk-type permanent magnet motor/generator unit designed and constructed by the authors. A conceptual design of the disk-type motor/generator with radial flux is shown.
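    For orientation, the energy stored in a spinning uniform disk is E = ½Iω² with I = ½mr². The worked example below uses made-up mass, radius, and speed, not the parameters of the experimental flywheel described above.

```python
# Hedged worked example of flywheel energy; mass, radius, and speed are illustrative.
import math

def disk_energy_wh(mass_kg: float, radius_m: float, rpm: float) -> float:
    inertia = 0.5 * mass_kg * radius_m ** 2     # moment of inertia of a uniform disk
    omega = rpm * 2.0 * math.pi / 60.0          # angular speed in rad/s
    return 0.5 * inertia * omega ** 2 / 3600.0  # joules converted to watt-hours

print(round(disk_energy_wh(mass_kg=20.0, radius_m=0.2, rpm=20000), 1), "Wh")
```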

  8. Uniform Interfaces for Distributed Systems.

    DTIC Science & Technology

    1980-05-01

    …in data structures on stable storage (such as disk). The Virtual Terminals associated with a particular user (i.e., a display terminal) are all…

  9. The Dag Hammarskjold Library Reaches Out to the World.

    ERIC Educational Resources Information Center

    Chepesiuk, Ron

    1998-01-01

    Describes services offered at the Dag Hammarskjold Library at the United Nations (UN). Highlights include adopting new technology for a virtual library; the international law collection which is now accessible through the World Wide Web; UN depository libraries; material available on the Internet; the Optical Disk System, a storage/retrieval…

  10. A Layered Solution for Supercomputing Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  11. High-capacity high-speed recording

    NASA Astrophysics Data System (ADS)

    Jamberdino, A. A.

    1981-06-01

    Continuing advances in wideband communications and information handling are leading to extremely large volume digital data systems for which conventional data storage techniques are becoming inadequate. The paper presents an assessment of alternative recording technologies for the extremely wideband, high capacity storage and retrieval systems currently under development. Attention is given to longitudinal and rotary head high density magnetic recording, laser holography in human readable/machine readable devices and a wideband recorder, digital optical disks, and spot recording in microfiche formats. The electro-optical technologies considered are noted to be capable of providing data bandwidths up to 1000 megabits/sec and total data storage capacities in the 10^11 to 10^12 bit range, an order of magnitude improvement over conventional technologies.

  12. Status of international optical disk standards

    NASA Astrophysics Data System (ADS)

    Chen, Di; Neumann, John

    1999-11-01

    Optical technology for data storage offers media removability with unsurpassed reliability. As the media are removable, data interchange between the media and drives from different sources is a major concern. The optical recording community realized, at the inception of this new storage technology development, that international standards for all optical recording disk/cartridge must be established to insure the healthy growth of this industry and for the benefit of the users. Many standards organizations took up the challenge and numerous international standards were established which are now being used world-wide. This paper provides a brief summary of the current status of the international optical disk standards.

  13. Influence of Sous Vide and water immersion processing on polyacetylene content and instrumental color of parsnip (Pastinaca sativa) disks.

    PubMed

    Rawson, Ashish; Koidis, Anastasios; Rai, Dilip K; Tuohy, Maria; Brunton, Nigel

    2010-07-14

    The effect of blanching (95 ± 3 °C) followed by sous vide (SV) processing (90 °C for 10 min) on levels of two polyacetylenes in parsnip disks immediately after processing and during chill storage was studied and compared with the effect of water immersion (WI) processing (70 °C for 2 min). Blanching had the greatest influence on the retention of polyacetylenes in sous vide processed parsnip disks, resulting in significant decreases of 24.5% and 24% in falcarinol (1) and falcarindiol (2), respectively (p < 0.05). Subsequent SV processing did not result in additional significant losses in polyacetylenes compared to blanched samples. Subsequent anaerobic storage of SV processed samples resulted in a significant decrease in levels of 1 (p < 0.05), although no change in levels of 2 was observed (p > 0.05). Levels of 1 in WI processed samples were significantly higher than in SV samples (p

  14. Optical system storage design with diffractive optical elements

    NASA Technical Reports Server (NTRS)

    Kostuk, Raymond K.; Haggans, Charles W.

    1993-01-01

    Optical data storage systems are gaining widespread acceptance due to their high areal density and the ability to remove the high capacity hard disk from the system. In magneto-optical read-write systems, a small rotation of the polarization state in the return signal from the MO media is the signal which must be sensed. A typical arrangement used for detecting these signals and correcting for errors in tracking and focusing on the disk is illustrated. The components required to achieve these functions are listed. The assembly and alignment of this complex system has a direct impact on cost, and also affects the size, weight, and corresponding data access rates. As a result, integrating these optical components and improving packaging techniques is an active area of research and development. Most designs of binary optic elements have been concerned with optimizing grating efficiency. However, rigorous coupled wave models for vector field diffraction from grating surfaces can be extended to determine the phase and polarization state of the diffracted field, and the design of polarization components. A typical grating geometry and the phase and polarization angles associated with the incident and diffracted fields are shown. In our current stage of work, we are examining system configurations which cascade several polarization functions on a single substrate. In this design, the beam returning from the MO disk illuminates a cascaded grating element which first couples light into the substrate, then introduces a quarter wave retardation, then a polarization rotation, and finally separates s- and p-polarized fields through a polarization beam splitter. The input coupler and polarization beam splitter are formed in volume gratings, and the two intermediate elements are zero-order elements.

  15. LVFS: A Big Data File Storage Bridge for the HPC Community

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.

    2015-12-01

    Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and to accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed for the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and nearly double the storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevents any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts HDF data, produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite to produce CO2 surface fluxes, into GeoTIFF for visualization.
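    In the spirit of the plugin architecture described above (class and method names here are invented for illustration, not LVFS's actual API), a storage backend can be reduced to a small interface that new hardware implements without touching the rest of the stack:

```python
# Hedged sketch of a pluggable storage-backend interface; names are illustrative only.
from abc import ABC, abstractmethod
from pathlib import Path

class StorageBackend(ABC):
    @abstractmethod
    def read(self, key: str) -> bytes: ...

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

class LocalDiskBackend(StorageBackend):
    """Plain local-disk backend; a Kinetic or object-store backend would expose the same calls."""
    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def read(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

    def write(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

backend: StorageBackend = LocalDiskBackend("/tmp/lvfs-demo")
backend.write("granule.dat", b"payload")
print(backend.read("granule.dat"))
```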

  16. Field-Deployable Acoustic Digital Systems for Noise Measurement

    NASA Technical Reports Server (NTRS)

    Shams, Qamar A.; Wright, Kenneth D.; Lunsford, Charles B.; Smith, Charlie D.

    2000-01-01

    Langley Research Center (LaRC) has for years been a leader in field acoustic array measurement technique. Two field-deployable digital measurement systems have been developed to support acoustic research programs at LaRC. For several years, LaRC has used the Digital Acoustic Measurement System (DAMS) for measuring the acoustic noise levels from rotorcraft and tiltrotor aircraft. Recently, a second system called Remote Acquisition and Storage System (RASS) was developed and deployed for the first time in the field along with DAMS system for the Community Noise Flight Test using the NASA LaRC-757 aircraft during April, 2000. The test was performed at Airborne Airport in Wilmington, OH to validate predicted noise reduction benefits from alternative operational procedures. The test matrix was composed of various combinations of altitude, cutback power, and aircraft weight. The DAMS digitizes the acoustic inputs at the microphone site and can be located up to 2000 feet from the van which houses the acquisition, storage and analysis equipment. Digitized data from up to 10 microphones is recorded on a Jaz disk and is analyzed post-test by microcomputer system. The RASS digitizes and stores acoustic inputs at the microphone site that can be located up to three miles from the base station and can compose a 3 mile by 3 mile array of microphones. 16-bit digitized data from the microphones is stored on removable Jaz disk and is transferred through a high speed array to a very large high speed permanent storage device. Up to 30 microphones can be utilized in the array. System control and monitoring is accomplished via Radio Frequency (RF) link. This paper will present a detailed description of both systems, along with acoustic data analysis from both systems.

  17. Initial Experience With A Prototype Storage System At The University Of North Carolina

    NASA Astrophysics Data System (ADS)

    Creasy, J. L.; Loendorf, D. D.; Hemminger, B. M.

    1986-06-01

    A prototype archiving system manufactured by the 3M Corporation has been in place at the University of North Carolina for approximately 12 months. The system was installed as a result of a collaboration between 3M and UNC, with 3M seeking testing of its system and UNC realizing the need for an archiving system as an essential part of its PACS test-bed facilities. The system hardware includes appropriate network and disk interface devices as well as media for both short- and long-term storage of images and their associated information. The system software includes the procedures necessary to communicate with the network interface elements (NIEs) as well as those necessary to interpret the ACR-NEMA header blocks and to store the images. A subset of the total ACR-NEMA header is parsed and stored in a relational database system. The entire header is stored on disk with the completed study. Interactive programs have been developed that allow radiologists to easily retrieve information about the archived images and to send the full images to a viewing console. Initial experience with the system has consisted primarily of hardware and software debugging. Although the system is ACR-NEMA compatible, further objective and subjective assessments of system performance are awaiting the connection of compatible consoles and acquisition devices to the network.

  18. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about the IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  19. Flexible matrix composite laminated disk/ring flywheel

    NASA Technical Reports Server (NTRS)

    Gupta, B. P.; Hannibal, A. J.

    1984-01-01

    An energy storage flywheel consisting of a quasi-isotropic composite disk overwrapped by a circumferentially wound ring made of carbon fiber and an elastomeric matrix is proposed. Through analysis it was demonstrated that, with an elastomeric matrix to relieve the radial stresses, a laminated disk/ring flywheel can be designed to store at least 80.3 Wh/kg, about 68% more than previous disk/ring designs, while the simple construction is preserved.

  20. 77 FR 6859 - Proposed Collection; Comment Request for Revenue Procedure 97-22

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-09

    ... system that either images their paper books and records or transfers their computerized books and records to an electronic storage media, such as an optical disk. The information requested in the revenue... being made to the revenue procedure at this time. Type of Review: Extension of a currently approved...

  1. The successful of finite element to invent particle cleaning system by air jet in hard disk drive

    NASA Astrophysics Data System (ADS)

    Jai-Ngam, Nualpun; Tangchaichit, Kaitfa

    2018-02-01

    Hard disk drive (HDD) manufacturing faces great challenges with the increasing demand for high-capacity drives for Cloud-based storage. Particle adhesion has also become increasingly important in HDDs for the reliability of storage capacity. The ability to clean surfaces is complicated by the need to remove such particles without damaging the surface. This research aims to improve particle cleaning in the HSA by using finite element analysis to develop an air flow model and then build a prototype air cleaning system that removes particles from the surface. Surface cleaning by air pressure can be applied as an alternative for the removal of solid particulate contaminants adhering to a solid surface. These technical and economic challenges have driven the process development away from traditional chemical solvent cleaning. The focus of this study is to develop an alternative to scrubbing, ultrasonic, and megasonic surface cleaning principles, to serve as a foundation for the development of new processes that meet current state-of-the-art process requirements and minimize the waste from chemical cleaning for environmental safety.

  2. Some emerging applications of lasers

    NASA Astrophysics Data System (ADS)

    Christensen, C. P.

    1982-10-01

    Applications of lasers in photochemistry, advanced instrumentation, and information storage are discussed. Laser microchemistry offers a number of new methods for altering the morphology of a solid surface with high spatial resolution. Recent experiments in material deposition, material removal, and alloying and doping are reviewed. A basic optical disk storage system is described and the problems faced by this application are discussed, in particular those pertaining to recording media. An advanced erasable system based on the magnetooptic effect is described. Applications of lasers for remote sensing are discussed, including various lidar systems, the use of laser-induced fluorescence for oil spill characterization and uranium exploration, and the use of differential absorption for detection of atmospheric constituents, temperature, and humidity.

  3. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE PAGES

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk- and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large-scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps toward realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next-generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes and, compared to the state of the art such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.
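    A minimal sketch of the placement idea, assuming invented tier names and a simple size/temperature policy rather than the paper's actual strategy, might look like this:

```python
# Hedged sketch of tier-aware object placement; tiers and thresholds are assumptions.
from dataclasses import dataclass, field

@dataclass
class DataObject:
    name: str
    size_bytes: int
    hot: bool                      # frequently accessed by the application

@dataclass
class TieredStore:
    tiers: dict = field(default_factory=lambda: {"memory": {}, "burst_buffer": {}, "disk": {}})

    def place(self, obj: DataObject, payload: bytes) -> str:
        if obj.hot and obj.size_bytes < (1 << 20):
            tier = "memory"        # small, hot objects stay close to the compute
        elif obj.hot:
            tier = "burst_buffer"  # large but hot objects go to the flash tier
        else:
            tier = "disk"          # cold objects go to the capacity tier
        self.tiers[tier][obj.name] = payload
        return tier

store = TieredStore()
print(store.place(DataObject("particles", 8 << 20, hot=True), b"..."))     # burst_buffer
print(store.place(DataObject("checkpoint", 64 << 20, hot=False), b"..."))  # disk
```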

  4. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk- and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large-scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps toward realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next-generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes and, compared to the state of the art such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.

  5. A performance analysis of advanced I/O architectures for PC-based network file servers

    NASA Astrophysics Data System (ADS)

    Huynh, K. D.; Khoshgoftaar, T. M.

    1994-12-01

    In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we would like to discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of same type, same data capacity, and same cost) operating independently, not in parallel as in a disk array.
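    A back-of-envelope version of the question posed above, with all figures invented for illustration: striping aggregates bandwidth for a single large transfer, while either organization scales small random requests roughly with the number of spindles.

```python
# Hedged back-of-envelope comparison; per-disk figures are illustrative assumptions.
def array_sequential_mb_s(n_disks: int, per_disk_mb_s: float) -> float:
    return n_disks * per_disk_mb_s                  # striping aggregates bandwidth

def independent_random_iops(n_disks: int, avg_service_ms: float) -> float:
    return n_disks * 1000.0 / avg_service_ms        # each spindle serves its own stream

print(array_sequential_mb_s(4, 5.0), "MB/s aggregate for one large transfer on a 4-disk stripe")
print(independent_random_iops(4, 15.0), "small-request IOPS from 4 independently addressed disks")
```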

  6. SANs and Large Scale Data Migration at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2004-01-01

    Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.

  7. Effects of higher order aberrations on beam shape in an optical recording system

    NASA Technical Reports Server (NTRS)

    Wang, Mark S.; Milster, Tom D.

    1992-01-01

    An unexpected irradiance pattern in the detector plane of an optical data storage system was observed. Through wavefront measurement and scalar diffraction modeling, it was discovered that the energy redistribution is due to residual third-order and fifth-order spherical aberration of the objective lens and cover-plate assembly. The amount of residual aberration is small, and the beam focused on the disk would be considered diffraction limited by several criteria. Since the detector is not in the focal plane, even this small amount of aberration has a significant effect on the energy distribution. We show that the energy redistribution can adversely affect focus error signals, which are responsible for maintaining sub-micron spot diameters on the spinning disk.

  8. A composite-flywheel burst-containment study

    NASA Astrophysics Data System (ADS)

    Sapowith, A. D.; Handy, W. E.

    1982-01-01

    A key component impacting total flywheel energy storage system weight is the containment structure. This report addresses the factors that shape this structure and define its design criteria. In addition, containment weight estimates are made for the several composite flywheel designs of interest so that judgements can be made as to the relative weights of their containment structure. The requirements set down for this program were that all containment weight estimates be based on a 1 kWh burst. It should be noted that typical flywheel requirements for regenerative braking of small automobiles call for deliverable energies of 0.25 kWh. This leads to expected maximum burst energies of 0.5 kWh. The flywheels studied are those considered most likely to be carried further for operational design. These are: The pseudo isotropic disk flywheel, sometimes called the alpha ply; the SMC molded disk; either disk with a carbon ring; the subcircular rim with cruciform hub; and Avco's bi-directional circular weave disk.

  9. A brief description of the Medical Information Computer System (MEDICS). [real time minicomputer system

    NASA Technical Reports Server (NTRS)

    Moseley, E. C.

    1974-01-01

    The Medical Information Computer System (MEDICS) is a time shared, disk oriented minicomputer system capable of meeting storage and retrieval needs for the space- or non-space-related applications of at least 16 simultaneous users. At the various commercially available low cost terminals, the simple command and control mechanism and the generalized communication activity of the system permit multiple form inputs, real-time updating, and instantaneous retrieval capability with a full range of options.

  10. Numerical and experimental analysis of heat pipes with application in concentrated solar power systems

    NASA Astrophysics Data System (ADS)

    Mahdavi, Mahboobe

    Thermal energy storage systems as an integral part of concentrated solar power plants improve the performance of the system by mitigating the mismatch between the energy supply and the energy demand. Using a phase change material (PCM) to store energy increases the energy density, hence, reduces the size and cost of the system. However, the performance is limited by the low thermal conductivity of the PCM, which decreases the heat transfer rate between the heat source and PCM, which therefore prolongs the melting, or solidification process, and results in overheating the interface wall. To address this issue, heat pipes are embedded in the PCM to enhance the heat transfer from the receiver to the PCM, and from the PCM to the heat sink during charging and discharging processes, respectively. In the current study, the thermal-fluid phenomenon inside a heat pipe was investigated. The heat pipe network is specifically configured to be implemented in a thermal energy storage unit for a concentrated solar power system. The configuration allows for simultaneous power generation and energy storage for later use. The network is composed of a main heat pipe and an array of secondary heat pipes. The primary heat pipe has a disk-shaped evaporator and a disk-shaped condenser, which are connected via an adiabatic section. The secondary heat pipes are attached to the condenser of the primary heat pipe and they are surrounded by PCM. The other side of the condenser is connected to a heat engine and serves as its heat acceptor. The applied thermal energy to the disk-shaped evaporator changes the phase of working fluid in the wick structure from liquid to vapor. The vapor pressure drives it through the adiabatic section to the condenser where the vapor condenses and releases its heat to a heat engine. It should be noted that the condensed working fluid is returned to the evaporator by the capillary forces of the wick. The extra heat is then delivered to the phase change material through the secondary heat pipes. During the discharging process, secondary heat pipes serve as evaporators and transfer the stored energy to the heat engine. (Abstract shortened by ProQuest.).

  11. Integrated IMA (Information Mission Areas) IC (Information Center) Guide

    DTIC Science & Technology

    1989-06-01

    Excerpt from the report's contents: computer aided design / computer aided manufacture; liquid crystal display panels; artificial intelligence applied to VI; desktop publishing; intelligent copiers; electronic alternatives to printed documents; electronic forms; optical disk storage; image scanners; graphics/forms generation software; LCD units; output devices; work group…

  12. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory; however, this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method, and that the number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N^2), instead of O(N^3) as with direct solvers.
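    The "system iteration" amounts to a block Gauss-Seidel sweep over substructures. The sketch below shows the idea on a small dense matrix with NumPy; the two-block partitioning and the test matrix are invented for illustration and are not the geophysical code.

```python
# Hedged block Gauss-Seidel ("system iteration") sketch; matrix and blocks are illustrative.
import numpy as np

def block_gauss_seidel(A, b, blocks, iters=50):
    x = np.zeros_like(b)
    for _ in range(iters):
        for idx in blocks:                          # solve one substructure at a time
            rest = np.setdiff1d(np.arange(len(b)), idx)
            rhs = b[idx] - A[np.ix_(idx, rest)] @ x[rest]
            x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
    return x

A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 5.0, 1.0, 0.0],
              [0.0, 1.0, 6.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])
b = np.ones(4)
blocks = [np.array([0, 1]), np.array([2, 3])]
print(block_gauss_seidel(A, b, blocks))             # converges for this diagonally dominant A
print(np.linalg.solve(A, b))                        # direct solution for comparison
```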

  13. Estimation of limit strains in disk-type flywheels made of a compliant elastomeric matrix composite undergoing radial creep

    NASA Astrophysics Data System (ADS)

    Portnov, G. G.; Bakis, Ch. E.

    2000-01-01

    Fiber reinforced elastomeric matrix composites (EMCs) offer several potential advantages for construction of rotors for flywheel energy storage systems. One potential advantage, for safety considerations, is the existence of maximum stresses near the outside radius of thick circumferentially wound EMC disks, which could lead to a desirable self-arresting failure mode at ultimate speeds. Certain unidirectionally reinforced EMCs, however, have been noted to creep readily under the influence of stress transverse to the fibers. In this paper, stress redistribution in a spinning thick disk made of a circumferentially filament wound EMC material on a small rigid hub has been analyzed with the assumption of total radial stress relaxation due to radial creep. It is shown that, following complete relaxation, the circumferential strains and stresses are maximized at the outside radius of the disk. Importantly, the radial tensile strains are three times greater than the circumferential strains at any given radius. Therefore, a unidirectional EMC material system that can safely endure transverse tensile creep strains of at least three times the elastic longitudinal strain capacity of the same material is likely to maintain the theoretically safe failure mode despite complete radial stress relaxation.

  14. Recent evolution of the offline computing model of the NOvA experiment

    DOE PAGES

    Habig, Alec; Norman, A.; Group, Craig

    2015-12-23

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments of this effort include migration to the use of off-site resources through the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system that uses a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. In addition, the current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools, and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and it is ready to produce the files needed for first physics.

  15. Recent Evolution of the Offline Computing Model of the NOvA Experiment

    NASA Astrophysics Data System (ADS)

    Habig, Alec; Norman, A.

    2015-12-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. The current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.

  16. Archival storage solutions for PACS

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy

    1997-05-01

    While there are many, one of the inhibitors to the widespread diffusion of PACS systems has been the lack of robust, cost-effective digital archive storage solutions. Moreover, an automated nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. Price and performance comparisons will be made at different archive capacities, and the effect of file size on storage system throughput will be analyzed. The concept of automated migration of images from high-performance, high-cost storage devices to high-capacity, low-cost storage devices will be introduced as a viable way to minimize overall storage costs for an archive. The concept of access density will also be introduced and applied to the selection of the most cost-effective archive solution.
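
    As a rough illustration of the access-density idea (accesses per unit of stored capacity), the sketch below compares hypothetical archive candidates; the figures are invented for illustration and are not taken from the paper:

        def access_density(retrievals_per_hour, capacity_gb):
            """Accesses per hour per GB of stored data (one common definition)."""
            return retrievals_per_hour / capacity_gb

        # Hypothetical archive candidates: capacity in GB, retrievals/hour, media $/GB.
        candidates = {
            "optical jukebox": (2_000, 120, 1.50),
            "tape library":    (20_000, 120, 0.15),
        }

        for name, (cap, rate, cost) in candidates.items():
            print(f"{name}: density={access_density(rate, cap):.3f} acc/h/GB, "
                  f"media cost=${cap * cost:,.0f}")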

  17. The Computer and Its Functions; How to Communicate with the Computer.

    ERIC Educational Resources Information Center

    Ward, Peggy M.

    A brief discussion of why it is important for students to be familiar with computers and their functions and a list of some practical applications introduce this two-part paper. Focusing on how the computer works, the first part explains the various components of the computer, different kinds of memory storage devices, disk operating systems, and…

  18. The amino acid's backup bone - storage solutions for proteomics facilities.

    PubMed

    Meckel, Hagen; Stephan, Christian; Bunse, Christian; Krafzik, Michael; Reher, Christopher; Kohl, Michael; Meyer, Helmut Erich; Eisenacher, Martin

    2014-01-01

    Proteomics methods, especially high-throughput mass spectrometry analysis, have been continually developed and improved over the years. The analysis of complex biological samples produces large volumes of raw data. Data storage and recovery management pose substantial challenges to biomedical or proteomic facilities regarding backup and archiving concepts as well as hardware requirements. In this article we describe differences between the terms backup and archive with regard to manual and automatic approaches. We also introduce different storage concepts and technologies, from transportable media to professional solutions such as redundant array of independent disks (RAID) systems, network attached storage (NAS) and storage area networks (SAN). Moreover, we present a software solution, which we developed for the purpose of long-term preservation of large mass spectrometry raw data files on an object storage device (OSD) archiving system. Finally, advantages, disadvantages, and experiences from routine operations of the presented concepts and technologies are evaluated and discussed. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013. Published by Elsevier B.V.

  19. Automotive dual-mode hydrogen generation system

    NASA Astrophysics Data System (ADS)

    Kelly, D. A.

    The automotive dual-mode hydrogen generation system is advocated as a supplementary hydrogen fuel source alongside the current metallic hydride hydrogen storage method for vehicles. The system utilizes conventional electrolysis cells with low-voltage dc electrical power supplied by two electrical generating sources within the vehicle. Since the automobile engine exhaust manifold(s) are presently an untapped, useful source of thermal energy, they can be employed as the heat source for a simple heat engine/generator arrangement. The second, and minor, electrical generating means consists of multiple miniature air disk generators mounted directly under the vehicle's hood and at other convenient locations within the engine compartment. The air disk generators revolve at a speed proportional to the vehicle's forward speed and do not impose a drag on the vehicle's motion.

  20. Chaining for Flexible and High-Performance Key-Value Systems

    DTIC Science & Technology

    2012-09-01

    store that is fault tolerant, achieves high performance and availability, and offers strong data consistency? We present a new replication protocol...effective high performance data access and analytics, many sites use simpler data model “NoSQL” systems. These systems store and retrieve data only by...DRAM, Flash, and disk-based storage; can act as an unreliable cache or a durable store; and can offer strong or weak data consistency. The value of

  1. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. Administrative..., statements, and other required documents. Electronic media means: (1) Electronic storage material on which...) and any removable/transportable digital memory medium, such as magnetic tape or disk, optical disk, or...

  2. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. Administrative..., statements, and other required documents. Electronic media means: (1) Electronic storage material on which...) and any removable/transportable digital memory medium, such as magnetic tape or disk, optical disk, or...

  3. Achieving cost/performance balance ratio using tiered storage caching techniques: A case study with CephFS

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.

    2017-10-01

    As demand for widely accessible storage capacity increases and usage is on the rise, steady IO performance is desired but tends to suffer within multi-user environments. Typical deployments use standard hard drives because the cost per GB is quite low. On the other hand, HDD-based storage solutions are not known to scale well with process concurrency, and soon enough a high rate of IOPS creates a “random access” pattern that kills performance. Though not all SSDs are alike, SSDs are an established technology often used to address this exact “random access” problem. In this contribution, we will first discuss the IO performance of many different SSD drives (tested in a comparable and standalone manner). We will then discuss the performance and integrity of at least three low-level disk caching techniques (Flashcache, dm-cache, and bcache), including their individual policies, procedures, and IO performance. Furthermore, the STAR online computing infrastructure currently hosts a POSIX-compliant Ceph distributed storage cluster - while caching is not a native feature of CephFS (it exists only in the Ceph Object store), we will show how one can implement a caching mechanism by profiting from an implementation at a lower level. As our illustration, we will present our CephFS setup, IO performance tests, and overall experience from such a configuration. We hope this work will serve the community’s interest in using disk-caching mechanisms for applicable uses such as distributed storage systems and seeking an overall IO performance gain.
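
    Block-level caches such as bcache, dm-cache, and Flashcache succeed or fail on their hit ratio under random access; a toy LRU block-cache simulation (hypothetical device size, cache size, and access trace, not the authors' benchmark) makes the point:

        import random
        from collections import OrderedDict

        def simulate(trace, cache_blocks):
            """Return the hit ratio of an LRU block cache over a block-access trace."""
            cache, hits = OrderedDict(), 0
            for block in trace:
                if block in cache:
                    hits += 1
                    cache.move_to_end(block)          # mark as most recently used
                else:
                    cache[block] = True
                    if len(cache) > cache_blocks:
                        cache.popitem(last=False)     # evict the least recently used block
            return hits / len(trace)

        random.seed(0)
        # Hypothetical workload: 80% of accesses hit 10% of a 100k-block device.
        hot = range(10_000)
        trace = [random.choice(hot) if random.random() < 0.8 else random.randrange(100_000)
                 for _ in range(200_000)]
        print(f"hit ratio with a 20k-block SSD cache: {simulate(trace, 20_000):.2%}")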

  4. Improved memory loading techniques for the TSRV display system

    NASA Technical Reports Server (NTRS)

    Easley, W. C.; Lynn, W. A.; Mcluer, D. G.

    1986-01-01

    A recent upgrade of the TSRV research flight system at NASA Langley Research Center retained the original monochrome display system. However, the display memory loading equipment was replaced requiring design and development of new methods of performing this task. This paper describes the new techniques developed to load memory in the display system. An outdated paper tape method for loading the BOOTSTRAP control program was replaced by EPROM storage of the characters contained on the tape. Rather than move a tape past an optical reader, a counter was implemented which steps sequentially through EPROM addresses and presents the same data to the loader circuitry. A cumbersome cassette tape method for loading the applications software was replaced with a floppy disk method using a microprocessor terminal installed as part of the upgrade. The cassette memory image was transferred to disk and a specific software loader was written for the terminal which duplicates the function of the cassette loader.

  5. Long-Term file activity patterns in a UNIX workstation environment

    NASA Technical Reports Server (NTRS)

    Gibson, Timothy J.; Miller, Ethan L.

    1998-01-01

    As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.
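
    A user-space collector of the kind described (no kernel modification) can be approximated by periodically snapshotting per-file metadata and diffing successive snapshots to infer creations, deletions, and modifications; the directory and sampling interval below are placeholders:

        import os
        import time

        def snapshot(root):
            """Map each file path under root to (size, mtime)."""
            snap = {}
            for dirpath, _, names in os.walk(root):
                for name in names:
                    path = os.path.join(dirpath, name)
                    try:
                        st = os.stat(path)
                        snap[path] = (st.st_size, st.st_mtime)
                    except OSError:
                        pass  # file vanished between listing and stat
            return snap

        def diff(old, new):
            created = new.keys() - old.keys()
            deleted = old.keys() - new.keys()
            modified = {p for p in old.keys() & new.keys() if old[p] != new[p]}
            return created, deleted, modified

        prev = snapshot("/tmp")          # placeholder directory
        time.sleep(60)                   # placeholder sampling interval
        created, deleted, modified = diff(prev, snapshot("/tmp"))
        print(len(created), "created,", len(deleted), "deleted,", len(modified), "modified")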

  6. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    NASA Technical Reports Server (NTRS)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA. Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
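
    The subsampling and contrast-stretch operations described above are simple per-pixel transforms; a minimal NumPy sketch (not IMDISP's actual C/assembler implementation) of both:

        import numpy as np

        def subsample(image, factor):
            """Keep every factor-th line and sample, starting at the upper-left pixel."""
            return image[::factor, ::factor]

        def stretch(image, low, high):
            """Linear contrast stretch: DN <= low -> black (0), DN >= high -> white (255)."""
            scaled = (image.astype(float) - low) / (high - low)
            return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

        # Hypothetical 8-bit image of 800 lines x 800 samples.
        img = np.random.randint(40, 180, size=(800, 800), dtype=np.uint8)
        print(subsample(img, 2).shape)                 # (400, 400)
        stretched = stretch(img, low=60, high=150)
        print(stretched.min(), stretched.max())        # 0 255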

  7. Evolution of magnetic disk subsystems

    NASA Astrophysics Data System (ADS)

    Kaneko, Satoru

    1994-06-01

    The higher recording density of magnetic disks realized today has brought larger storage capacity per unit and smaller form factors. If the required access performance per MB is to remain constant, the performance of large subsystems has to be several times better. This article mainly describes the technology for improving the performance of magnetic disk subsystems and the prospects of their future evolution. Also considered are 'crosscall pathing', which makes the data transfer channel more effective; 'disk cache', which improves performance by coupling with solid-state memory technology; and 'RAID', which improves the availability and integrity of disk subsystems by organizing multiple disk drives in a subsystem. It is concluded that since the performance of the subsystem is dominated by that of the disk cache, maximizing the performance of the disk cache is very important.
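
    The conclusion that the disk cache dominates subsystem performance follows from the usual cache-weighted average of service times; the sketch below uses illustrative latencies, not measured values from the article:

        def effective_access_ms(hit_ratio, cache_ms=0.5, disk_ms=20.0):
            """Average service time of a cached disk subsystem (illustrative latencies)."""
            return hit_ratio * cache_ms + (1.0 - hit_ratio) * disk_ms

        for h in (0.5, 0.9, 0.99):
            print(f"hit ratio {h:.0%}: {effective_access_ms(h):.2f} ms per access")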

  8. Evaluation of Optical Disk Jukebox Software.

    ERIC Educational Resources Information Center

    Ranade, Sanjay; Yee, Fonald

    1989-01-01

    Discusses software that is used to drive and access optical disk jukeboxes, which are used for data storage. Categories of the software are described, user categories are explained, the design of implementation approaches is discussed, and representative software products are reviewed. (eight references) (LRW)

  9. 40 CFR 94.509 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  10. 40 CFR 94.509 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  11. 40 CFR 94.509 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  12. 40 CFR 94.509 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  13. The Photorefractive Effect and its Application in Optical Computing

    NASA Astrophysics Data System (ADS)

    Li, Guo

    This Ph.D. dissertation covers the fanning effect, the temperature dependence of the diffraction efficiency and response time using different addressing configurations, and an evaluation of the limitations and capacity of holographic storage in BaTiO3 crystals. Also, we designed a digital holographic optical disk and made an associative memory. The beam fanning effect in a BaTiO3 crystal was investigated in detail. The effect depends on the crystal faces illuminated. In particular, for +c face illumination we found that the fanning effect strongly depends on the angle of incidence, the polarization and wavelength of the incident light, the crystal temperature, and the laser beam profile, but only weakly on the input laser power. In the case of -c face and a-face illumination, a dependence of the ring angle on wavelength and input power was observed. We found that the intensity of the reflected beam in NDFWM, the intensity of the self-phase-conjugate beam and the response time of the fanning effect decrease exponentially with temperature, with a major change around 60-80 °C. A random bistability and oscillation of the SPPC occur around 80 °C. We also present a theoretical analysis for the dependence of the photorefractive effect on temperature. We experimentally evaluate the capacity and limitations of optical storage in BaTiO3 crystals using self-pumped phase conjugation (SPPC) and two-wave mixing. The storage capacity differs with the face of illumination, polarization, beam profile and input power. We demonstrate that, using two-wave mixing, three-dimensional volume holograms can be stored. The information-bearing beam diameter for storage and recall can be about 0.25 mm or less. By these techniques we demonstrate that at least 10^5 holograms can be stored in a 3.5 inch photorefractive disk. We evaluate an optimal optical architecture for exploiting the photorefractive effect for digital holographic disk storage. An image with many pixels was used for this experimental evaluation. Using a raytracing program, we traced a beam with a Gaussian profile through our optical system. We also estimated the Seidel aberrations of our optical system in order to determine the quality of the stored digital data.

  14. Software for Optical Archive and Retrieval (SOAR) user's guide, version 4.2

    NASA Technical Reports Server (NTRS)

    Davis, Charles

    1991-01-01

    The optical disk is an emerging technology. Because it is not a magnetic medium, it offers a number of distinct advantages over the established form of storage, advantages that make it extremely attractive. They are as follows: (1) the ability to store much more data within the same space; (2) the random access characteristics of the Write Once Read Many optical disk; (3) a much longer life than that of traditional storage media; and (4) much greater data access rate. Software for Optical Archive and Retrieval (SOAR) user's guide is presented.

  15. Recent Cooperative Research Activities of HDD and Flexible Media Transport Technologies in Japan

    NASA Astrophysics Data System (ADS)

    Ono, Kyosuke

    This paper presents the recent status of industry-university cooperative research activities in Japan on the mechatronics of information storage and input/output equipment. There are three research committees for promoting information exchange on technical problems and research topics of head-disk interface in hard disk drives (HDD), flexible media transport and image printing processes which are supported by the Japan Society of Mechanical Engineering (JSME), the Japanese Society of Tribologists (JAST) and the Japan Society of Precision Engineering (JSPE). For hard disk drive technology, the Storage Research Consortium (SRC) is supporting more than 40 research groups in various different universities to perform basic research for future HDD technology. The past and present statuses of these activities are introduced, particularly focusing on HDD and flexible media transport mechanisms.

  16. Free Factories: Unified Infrastructure for Data Intensive Web Services

    PubMed Central

    Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.

    2010-01-01

    We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356

  17. Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression

    NASA Astrophysics Data System (ADS)

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-08-01

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
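
    The abstract does not name the compressor, so the sketch below substitutes a simple tolerance-bounded quantize-and-deflate scheme purely to illustrate the pattern of compressing strain fields before writing them and decompressing them before kernel calculations; the field size and tolerance are invented:

        import io
        import zlib
        import numpy as np

        def compress(field, tol):
            """Quantize a strain-tensor field to multiples of tol, then deflate the bytes."""
            q = np.round(field / tol).astype(np.int32)        # absolute error <= tol/2
            buf = io.BytesIO()
            np.save(buf, q)
            return zlib.compress(buf.getvalue())

        def decompress(blob, tol):
            q = np.load(io.BytesIO(zlib.decompress(blob)))
            return q.astype(np.float64) * tol

        tol = 1e-9                                            # user-specified error bound
        field = np.random.randn(64, 64, 64) * 1e-6            # hypothetical strain field
        blob = compress(field, tol)
        restored = decompress(blob, tol)
        print("compression ratio:", field.nbytes / len(blob))
        print("max error:", np.abs(restored - field).max())   # bounded by tol/2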

  18. Reducing Disk Storage of Full-3D Seismic Waveform Tomography (F3DT) Through Lossy Online Compression

    DOE PAGES

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-05-05

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.

  19. ICI optical data storage tape

    NASA Technical Reports Server (NTRS)

    Mclean, Robert A.; Duffy, Joseph F.

    1991-01-01

    Optical data storage tape is now a commercial reality. The world's first successful development of a digital optical tape system is complete. This is based on the Creo 1003 optical tape recorder with ICI 1012 write-once optical tape media. Several other optical tape drive development programs are underway, including one using the IBM 3480 style cartridge at LaserTape Systems. In order to understand the significance and potential of this step change in recording technology, it is useful to review the historical progress of optical storage. This has been slow to encroach on magnetic storage, and has not made any serious dent on the world's mountains of paper and microfilm. Some of the reasons for this are the long time needed for applications developers, systems integrators, and end users to take advantage of the potential storage capacity; access time and data transfer rate have traditionally been too slow for high-performance applications; and optical disk media has been expensive compared with magnetic tape. ICI's strategy in response to these concerns was to concentrate its efforts on flexible optical media; in particular optical tape. The manufacturing achievements, media characteristics, and media lifetime of optical media are discussed.

  20. Data Management, the Victorian era child of the 21st century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farber, Rob

    2007-03-30

    Do you remember when a gigabyte disk drive was “a lot” of storage in that bygone age of the 20th century? Still in our first decade of the 21st century, major supercomputer sites now speak of storage in terms of petabytes, 10^15 bytes, a six-orders-of-magnitude increase in capacity over a gigabyte! Unlike our archaic “big” disk drive where all the data was in one place, HPC storage is now distributed across many machines and even across the Internet. Collaborative research engages many scientists who need to find and use each other's data, preferably in an automated fashion, which complicates an already muddled problem.

  1. MICE data handling on the Grid

    NASA Astrophysics Data System (ADS)

    Martyniak, J.; Mice Collaboration

    2014-06-01

    The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino factory or Muon Collider. The experiment is currently under construction at the ISIS synchrotron at the Rutherford Appleton Laboratory (RAL), UK. In this paper we present a system - the Raw Data Mover, which allows us to store and distribute MICE raw data - and a framework for offline reconstruction and data management. The aim of the Raw Data Mover is to upload raw data files onto a safe tape storage as soon as the data have been written out by the DAQ system and marked as ready to be uploaded. Internal integrity of the files is verified and they are uploaded to the RAL Tier-1 Castor Storage Element (SE) and placed on two tapes for redundancy. We also make another copy at a separate disk-based SE at this stage to make it easier for users to access data quickly. Both copies are check-summed and the replicas are registered with an instance of the LCG File Catalog (LFC). On success a record with basic file properties is added to the MICE Metadata DB. The reconstruction process is triggered by new raw data records filled in by the mover system described above. Off-line reconstruction jobs for new raw files are submitted to RAL Tier-1 and the output is stored on tape. Batch reprocessing is done at multiple MICE enabled Grid sites and output files are shipped to central tape or disk storage at RAL using a custom File Transfer Controller.
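
    A minimal sketch of a raw-data-mover loop in the spirit described above (checksum, copy to a tape-backed and a disk-based storage element, then record basic file properties in a metadata database); all paths, the sqlite stand-in for the metadata DB, and the helper names are hypothetical, not MICE's actual tooling:

        import hashlib
        import shutil
        import sqlite3
        from pathlib import Path

        def file_checksum(path):
            """Checksum used to verify each replica (sha256 here; the real system may differ)."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def move_raw_file(path, tape_se, disk_se, db):
            """Copy one raw file to a 'tape' and a 'disk' destination, verify, and register it."""
            src = Path(path)
            checksum = file_checksum(src)
            replicas = []
            for se in (tape_se, disk_se):
                dst = Path(se) / src.name
                shutil.copy2(src, dst)
                assert file_checksum(dst) == checksum      # verify replica integrity
                replicas.append(str(dst))
            # Record basic file properties in a metadata DB (sqlite stands in here).
            db.execute("CREATE TABLE IF NOT EXISTS files(name TEXT, size INT, checksum TEXT)")
            db.execute("INSERT INTO files VALUES (?,?,?)",
                       (src.name, src.stat().st_size, checksum))
            db.commit()
            return replicas

        # Usage (placeholder paths):
        # move_raw_file("run0001.raw", "/mnt/tape_se", "/mnt/disk_se", sqlite3.connect("meta.db"))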

  2. Storage media pipelining: Making good use of fine-grained media

    NASA Technical Reports Server (NTRS)

    Vanmeter, Rodney

    1993-01-01

    This paper proposes a new high-performance paradigm for accessing removable media such as tapes and especially magneto-optical disks. In high-performance computing the striping of data across multiple devices is a common means of improving data transfer rates. Striping has been used very successfully for fixed magnetic disks improving overall system reliability as well as throughput. It has also been proposed as a solution for providing improved bandwidth for tape and magneto-optical subsystems. However, striping of removable media has shortcomings, particularly in the areas of latency to data and restricted system configurations, and is suitable primarily for very large I/Os. We propose that for fine-grained media, an alternative access method, media pipelining, may be used to provide high bandwidth for large requests while retaining the flexibility to support concurrent small requests and different system configurations. Its principal drawback is high buffering requirements in the host computer or file server. This paper discusses the possible organization of such a system including the hardware conditions under which it may be effective, and the flexibility of configuration. Its expected performance is discussed under varying workloads including large single I/O's and numerous smaller ones. Finally, a specific system incorporating a high-transfer-rate magneto-optical disk drive and autochanger is discussed.

  3. A report on the ST ScI optical disk workstation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The STScI optical disk project was designed to explore the options, opportunities and problems presented by the optical disk technology, and to see if optical disks are a viable, and inexpensive, means of storing the large amount of data which are found in astronomical digital imagery. A separate workstation was purchased on which the development can be done and serves as an astronomical image processing computer, incorporating the optical disks into the solution of standard image processing tasks. It is indicated that small workstations can be powerful tools for image processing, and that astronomical image processing may be more conveniently and cost-effectively performed on microcomputers than on the mainframe and super-minicomputers. The optical disks provide unique capabilities in data storage.

  4. 40 CFR 91.504 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...

  5. 40 CFR 91.504 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...

  6. 40 CFR 90.704 - Maintenance of records; submission of information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  7. 40 CFR 90.704 - Maintenance of records; submission of information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  8. 40 CFR 90.704 - Maintenance of records; submission of information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  9. 40 CFR 90.704 - Maintenance of records; submission of information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  10. 40 CFR 91.504 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...

  11. 40 CFR 91.504 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...

  12. Manufacturing and testing of a magnetically suspended composite flywheel energy storage system

    NASA Technical Reports Server (NTRS)

    Wells, Stephen; Pang, Da-Chen

    1994-01-01

    This paper presents the work performed to develop a multiring composite material flywheel and improvements to a magnetically suspended energy storage system. The flywheel is constructed of filament-wound graphite/epoxy and is interference assembled for better stress distribution to obtain higher speeds. The stationary stack in the center of the disk supports the flywheel with two magnetic bearings and provides power transfer to the flywheel with a motor/generator. The system operates in a 10^-4 torr environment and has been demonstrated to 20,000 rpm with a total stored energy of 15.9 Wh. When this flywheel cycles between its design speeds (45,000 to 90,000 rpm), it will deliver 242 Wh and have a usable specific energy density of 42.6 Wh/kg.

  13. Analysis Report for Exascale Storage Requirements for Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, Thomas M.

    Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale computing, causing data storage, networking, and infrastructure requirements to increase by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how they will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report focuses on the most concerning scaling issues with data storage systems as they relate to High Performance Computing and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.

  14. Magnetic field sources and their threat to magnetic media

    NASA Technical Reports Server (NTRS)

    Jewell, Steve

    1993-01-01

    Magnetic storage media (tapes, disks, cards, etc.) may be damaged by external magnetic fields. The potential for such damage has been researched, but no objective standard exists for the protection of such media. This paper summarizes a magnetic storage facility standard, Publication 933, that ensures magnetic protection of data storage media.

  15. Emerging Network Storage Management Standards for Intelligent Data Storage Subsystems

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    This paper discusses the need for intelligent storage devices and subsystems that can provide data integrity metadata, the content of the existing data integrity standard for optical disks and techniques and metadata to verify stored data on optical tapes developed by the Association for Information and Image Management (AIIM) Optical Tape Committee.

  16. Inverted Signature Trees and Text Searching on CD-ROMs.

    ERIC Educational Resources Information Center

    Cooper, Lorraine K. D.; Tharp, Alan L.

    1989-01-01

    Explores the new storage technology of optical data disks and introduces a data structure, the inverted signature tree, for storing data on optical data disks for efficient text searching. The inverted signature tree approach is compared to the use of text signatures and the B+ tree. (22 references) (Author/CLB)

  17. Blue laser inorganic write-once media

    NASA Astrophysics Data System (ADS)

    Chen, Bing-Mau; Yeh, Ru-Lin

    2004-09-01

    With the advantages of low cost, portability and compliance with ROM discs, the write-once disk has become the most popular storage medium for computer and audio/video applications. In addition, write-once media, like CD-R and DVD-/+R, are used to store permanent or nonalterable information, such as financial data transactions, legal documentation, and medical data. Several write-once recording materials, such as TeO [1], TeOPd [2] and Si/Cu [3], have been proposed to realize inorganic write-once media. Moreover, we propose an AlSi alloy [4] for the recording layer of write-once media. It had good recording properties in a DVD system, although its reflectivity is too low for a DVD-R disk. In this paper, we report further results in a blue laser system, such as the static and dynamic characteristics of the write-once media.

  18. Document image archive transfer from DOS to UNIX

    NASA Technical Reports Server (NTRS)

    Hauser, Susan E.; Gill, Michael J.; Thoma, George R.

    1994-01-01

    An R&D division of the National Library of Medicine has developed a prototype system for automated document image delivery as an adjunct to the labor-intensive manual interlibrary loan service of the library. The document image archive is implemented by a PC controlled bank of optical disk drives which use 12 inch WORM platters containing bitmapped images of over 200,000 pages of medical journals. Following three years of routine operation which resulted in serving patrons with articles both by mail and fax, an effort is underway to relocate the storage environment from the DOS-based system to a UNIX-based jukebox whose magneto-optical erasable 5 1/4 inch platters hold the images. This paper describes the deficiencies of the current storage system, the design issues of modifying several modules in the system, the alternatives proposed and the tradeoffs involved.

  19. Demonstration of fully enabled data center subsystem with embedded optical interconnect

    NASA Astrophysics Data System (ADS)

    Pitwon, Richard; Worrall, Alex; Stevens, Paul; Miller, Allen; Wang, Kai; Schmidtke, Katharine

    2014-03-01

    The evolution of data storage communication protocols and corresponding in-system bandwidth densities is set to impose prohibitive cost and performance constraints on future data storage system designs, fuelling proposals for hybrid electronic and optical architectures in data centers. The migration of optical interconnect into the system enclosure itself can substantially mitigate the communications bottlenecks resulting from both the increase in data rate and internal interconnect link lengths. In order to assess the viability of embedding optical links within prevailing data storage architectures, we present the design and assembly of a fully operational data storage array platform, in which all internal high speed links have been implemented optically. This required the deployment of mid-board optical transceivers, an electro-optical midplane and proprietary pluggable optical connectors for storage devices. We present the design of a high density optical layout to accommodate the midplane interconnect requirements of a data storage enclosure with support for 24 Small Form Factor (SFF) solid state or rotating disk drives and the design of a proprietary optical connector and interface cards, enabling standard drives to be plugged into an electro-optical midplane. Crucially, we have also modified the platform to accommodate longer optical interconnect lengths up to 50 meters in order to investigate future datacenter architectures based on disaggregation of modular subsystems. The optically enabled data storage system has been fully validated for both 6 Gb/s and 12 Gb/s SAS data traffic conveyed along internal optical links.

  20. Evolution of Archival Storage (from Tape to Memory)

    NASA Technical Reports Server (NTRS)

    Ramapriyan, Hampapuram K.

    2015-01-01

    Over the last three decades, there has been a significant evolution in storage technologies supporting archival of remote sensing data. This section provides a brief survey of how these technologies have evolved. Three main technologies are considered - tape, hard disk and solid state disk. Their historical evolution is traced, summarizing how reductions in cost have made it possible to store larger volumes of data on faster media. The cost per GB of media is only one of the considerations in determining the best approach to archival storage. Active archives generally require faster response to user requests for data than permanent archives. Archive costs also have to include facilities and other capital costs, operations costs, software licenses, utility costs, etc. To meet the requirements of any organization, a mix of technologies is typically needed.

  1. How to Use Removable Mass Storage Memory Devices

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2004-01-01

    Mass storage refers to the variety of ways to keep large amounts of information that are used on a computer. Over the years, the removable storage devices have grown smaller, increased in capacity, and transferred the information to the computer faster. The 8" floppy disk of the 1960s stored 100 kilobytes, or about 60 typewritten, double-spaced…

  2. Maintaining cultures of wood-rotting fungi.

    Treesearch

    E.E. Nelson; H.A. Fay

    1985-01-01

    Phellinus weirii cultures were stored successfully for 10 years in small alder (Alnus rubra Bong.) disks at 2 °C. The six isolates tested appeared morphologically identical and after 10 years varied little in growth rate from those stored on malt agar slants. Long-term storage on alder disks reduces the time required for...

  3. Holographic Compact Disk Read-Only Memories

    NASA Technical Reports Server (NTRS)

    Liu, Tsuen-Hsi

    1996-01-01

    Compact disk read-only memories (CD-ROMs) of the proposed type store digital data in volume holograms instead of in differentially reflective surface elements. Holographic CD-ROMs consist largely of parts similar to those used in conventional CD-ROMs; however, they achieve 10 or more times the data-storage capacity and throughput through use of a wavelength-multiplexing/volume-hologram scheme.

  4. Optimising LAN access to grid enabled storage elements

    NASA Astrophysics Data System (ADS)

    Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.

    2008-07-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.

  5. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    NASA Astrophysics Data System (ADS)

    Bauerdick, L. A. T.; Bloom, K.; Bockelman, B.; Bradley, D. C.; Dasu, S.; Dost, J. M.; Sfiligoi, I.; Tadel, A.; Tadel, M.; Wuerthwein, F.; Yagil, A.; Cms Collaboration

    2014-06-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received and is suitable when completely random file access is expected or it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
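
    The second, on-demand implementation can be pictured as a block-granular cache that fetches only the byte ranges a job actually reads; the class below is a simplified stand-in, not the XRootd proxy code, and the remote_read callback is a hypothetical network reader:

        class CachingProxy:
            """Toy on-demand caching reader: fetch and keep only the blocks actually read."""

            BLOCK = 1 << 20                                    # 1 MiB cache granularity

            def __init__(self, remote_read):
                self.remote_read = remote_read                 # remote_read(offset, size) -> bytes
                self.blocks = {}                               # block index -> cached bytes

            def read(self, offset, size):
                out = bytearray()
                end = offset + size
                for idx in range(offset // self.BLOCK, (end - 1) // self.BLOCK + 1):
                    if idx not in self.blocks:                 # miss: pull one block over the network
                        self.blocks[idx] = self.remote_read(idx * self.BLOCK, self.BLOCK)
                    block = self.blocks[idx]
                    lo = max(offset, idx * self.BLOCK) - idx * self.BLOCK
                    hi = min(end, (idx + 1) * self.BLOCK) - idx * self.BLOCK
                    out += block[lo:hi]
                return bytes(out)

        # Usage: proxy = CachingProxy(lambda off, n: remote_file.read_at(off, n))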

  6. Converting information from paper to optical media

    NASA Technical Reports Server (NTRS)

    Deaton, Timothy N.; Tiller, Bruce K.

    1990-01-01

    The technology of converting large amounts of paper into electronic form is described for use in information management systems based on optical disk storage. The space savings and photographic nature of microfiche are combined in these systems with the advantages of computerized data (fast and flexible retrieval of graphics and text, simultaneous instant access for multiple users, and easy manipulation of data). It is noted that electronic imaging systems offer a unique opportunity to dramatically increase the productivity and profitability of information systems. Particular attention is given to the CALS (Computer-aided Acquisition and Logistic Support) system.

  7. [Development and evaluation of the medical imaging distribution system with dynamic web application and clustering technology].

    PubMed

    Yokohama, Noriya; Tsuchimoto, Tadashi; Oishi, Masamichi; Itou, Katsuya

    2007-01-20

    It has been noted that the downtime of medical informatics systems is often long. Many systems encounter downtimes of hours or even days, which can have a critical effect on daily operations. Such systems remain especially weak in the areas of database and medical imaging data. The scheme design shows the three-layer architecture of the system: application, database, and storage layers. The application layer uses the DICOM (Digital Imaging and Communications in Medicine) protocol and HTTP (Hypertext Transfer Protocol) with AJAX (Asynchronous JavaScript+XML). The database is designed to be decentralized in parallel using cluster technology. Consequently, restoration of the database can be done not only with ease but also with improved retrieval speed. In the storage layer, a network RAID (Redundant Array of Independent Disks) system makes it possible to construct exabyte-scale parallel file systems that exploit spread-out storage. Development and evaluation of the test-bed have been successful for medical information data backup and recovery in a network environment. This paper presents a schematic design of the new medical informatics system, oriented toward recovery, and the dynamic Web application for medical imaging distribution using AJAX.

  8. [PVFS 2000: An operational parallel file system for Beowulf

    NASA Technical Reports Server (NTRS)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. It shows the architecture of the server and client components. BMI - BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking, and provides mechanisms for optimizations including pinning user buffers. Currently TCP/IP and GM (Myrinet) modules have been implemented. Trove - Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.

  9. HEP Data Grid Applications in Korea

    NASA Astrophysics Data System (ADS)

    Cho, Kihyeon; Oh, Youngdo; Son, Dongchul; Kim, Bockjoo; Lee, Sangsan

    2003-04-01

    We introduce the national HEP Data Grid applications in Korea. Through a five-year HEP Data Grid project (2002-2006) for the CMS, AMS, CDF, PHENIX, K2K and Belle experiments in Korea, the Center for High Energy Physics at Kyungpook National University will construct a 1,000-PC cluster and a related storage system, such as a 1,200 TByte RAID disk system. The project's master plan includes constructing an Asia Regional Data Center by 2006 for the CMS and AMS experiments and a DCAF (DeCentralized Analysis Farm) for the CDF experiment. During the first year of the project, we have constructed a cluster of around 200 CPUs with 50 TBytes of storage. We will present our first year's experience with the software and hardware applications for the HEP Data Grid on EDG and SAM Grid testbeds.

  10. An overview of the education and training component of RICIS

    NASA Technical Reports Server (NTRS)

    Freedman, Glenn B.

    1987-01-01

    Research in education and training under the RICIS (Research Institute for Computing and Information Systems) program focuses on means to disseminate knowledge, skills, and technological advances rapidly, accurately, and effectively. The range of areas for study includes: artificial intelligence, hypermedia and full-text retrieval strategies, use of mass storage and retrieval options such as CD-ROM and laser disks, and interactive video and interactive media presentations.

  11. A two-stage heating scheme for heat assisted magnetic recording

    NASA Astrophysics Data System (ADS)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in^2 for next-generation magnetic storage. A near field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the NFT heating process into two individual heating stages, from an optical waveguide and an NFT, respectively. As the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface to heat it up to a peak temperature somewhat lower than the Curie temperature of the magnetic material. Then, the NFT works as the second heating stage to heat a smaller area inside the waveguide-heated area further to reach the Curie point. The energy applied to the NFT in the second heating stage is reduced compared with a typical single-stage NFT heating system. With this reduced thermal load to the NFT by the two-stage heating scheme, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  12. Implementation of system intelligence in a 3-tier telemedicine/PACS hierarchical storage management system

    NASA Astrophysics Data System (ADS)

    Chao, Woodrew; Ho, Bruce K. T.; Chao, John T.; Sadri, Reza M.; Huang, Lu J.; Taira, Ricky K.

    1995-05-01

    Our tele-medicine/PACS archive system is based on a three-tier distributed hierarchical architecture, including magnetic disk farms, optical jukebox, and tape jukebox sub-systems. The hierarchical storage management (HSM) architecture, built around a low-cost, high-performance platform [personal computers (PC) and Microsoft Windows NT], presents a very scalable and distributed solution ideal for meeting the needs of client/server environments such as tele-medicine, tele-radiology, and PACS. These image-based systems typically require storage capacities mirroring those of film-based technology (multi-terabyte with 10+ years storage) and patient data retrieval times at near on-line performance as demanded by radiologists. With the scalable architecture, storage requirements can be easily configured to meet the needs of the small clinic (multi-gigabyte) to those of a major hospital (multi-terabyte). The patient data retrieval performance requirement was achieved by employing system intelligence to manage migration and caching of archived data. Relevant information from HIS/RIS triggers prefetching of data whenever possible based on simple rules. System intelligence embedded in the migration manager allows the clustering of patient data onto a single tape during data migration from optical to tape media. Clustering of patient data on the same tape eliminates multiple tape loads and the associated seek time during patient data retrieval. Optimal tape performance can then be achieved by utilizing the tape drive's high-performance data streaming capabilities, thereby reducing typical data retrieval delays associated with streaming tape devices.
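
    The prefetching and clustering rules described above can be sketched in a few lines; the record formats, field names, and helper functions below are hypothetical, intended only to show the shape of a rule-driven migration manager:

        from collections import defaultdict

        def prefetch_list(scheduled_exams, archive_index):
            """Simple HIS/RIS-driven rule: when an exam is scheduled, prefetch that
            patient's prior studies from the archive to the disk cache."""
            return {exam["patient_id"]: archive_index.get(exam["patient_id"], [])
                    for exam in scheduled_exams}

        def plan_tape_migration(studies, tape_capacity_gb):
            """Group studies by patient so each patient's data lands on one tape volume
            whenever it fits; studies are (patient_id, study_id, size_gb) tuples.
            A single patient larger than one tape is not handled in this sketch."""
            by_patient = defaultdict(list)
            for study in studies:
                by_patient[study[0]].append(study)
            volumes, current, used = [], [], 0.0
            for patient in sorted(by_patient):
                patient_size = sum(s[2] for s in by_patient[patient])
                if used and used + patient_size > tape_capacity_gb:
                    volumes.append(current)            # close the volume, keep the patient whole
                    current, used = [], 0.0
                current.extend(by_patient[patient])
                used += patient_size
            if current:
                volumes.append(current)
            return volumes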

  13. ToF-SIMS images and spectra of biomimetic calcium silicate-based cements after storage in solutions simulating the effects of human biological fluids

    NASA Astrophysics Data System (ADS)

    Torrisi, A.; Torrisi, V.; Tuccitto, N.; Gandolfi, M. G.; Prati, C.; Licciardello, A.

    2010-01-01

    ToF-SIMS images were obtained from a section of a tooth obturated with a new calcium-silicate based cement (wTCF) after storage for 1 month in a saline solution (DPBS), in order to simulate the effects of body fluids on the obturation. Afterwards, ToF-SIMS spectra were obtained from model samples, prepared using the same cement paste, after storage for 1 month and 8 months in two different saline solutions (DPBS and HBSS). ToF-SIMS spectra were also obtained from fluorine-free cement (wTC) samples after storage in HBSS for 1 month and 8 months and used for comparison. It was found that the composition of both the saline solution and the cement influences the composition of the disk surfaces, and that the longer the storage, the greater the differences. Segregation phenomena occur both on the cement obturation of the tooth and on the surface of the disks prepared with the same cement. Indirect evidence of the formation of new crystalline phases is provided.

  14. Upper Atmosphere Research Satellite (UARS) trade analysis

    NASA Technical Reports Server (NTRS)

    Fox, M. M.; Nebb, J.

    1983-01-01

    The Upper Atmosphere Research Satellite (UARS), which will collect data pertinent to the Earth's upper atmosphere, is described. The collected data will be sent to the central data handling facility (CDHF) via the UARS ground system, where the data will be processed and distributed to the remote analysis computer systems (RACS). An overview of the UARS ground system is presented. Three configurations were developed for the CDHF-RACS system. The CDHF configurations are discussed: the IBM CDHF configuration, the UNIVAC CDHF configuration, and the VAX cluster CDHF configuration. The RACS configurations (IBM, UNIVAC, and VAX) are also detailed. Due to the large on-line data volume, estimated at approximately 100 GB, a mass storage system is considered essential to the UARS CDHF. Mass storage systems were analyzed, and the Braegan ATL, the RCA optical disk, the IBM 3850, and the MASSTOR M860 are discussed. It is determined that the type of mass storage system most suitable to UARS is the automated tape/cartridge device. Two devices of this type, the IBM 3850 and the MASSTOR MSS, are analyzed, and the applicable tape/cartridge device is incorporated into the three CDHF-RACS configurations.

  15. Influence of technology on magnetic tape storage device characteristics

    NASA Technical Reports Server (NTRS)

    Gniewek, John J.; Vogel, Stephen M.

    1994-01-01

    There are available today many data storage devices that serve the diverse application requirements of the consumer, professional entertainment, and computer data processing industries. Storage technologies include semiconductors, several varieties of optical disk, optical tape, magnetic disk, and many varieties of magnetic tape. In some cases, devices are developed with specific characteristics to meet specification requirements. In other cases, an existing storage device is modified and adapted to a different application. For magnetic tape storage devices, examples of the former case are 3480/3490 and QIC device types developed for the high end and low end segments of the data processing industry respectively, VHS, Beta, and 8 mm formats developed for consumer video applications, and D-1, D-2, D-3 formats developed for professional video applications. Examples of modified and adapted devices include 4 mm, 8 mm, 12.7 mm and 19 mm computer data storage devices derived from consumer and professional audio and video applications. With the conversion of the consumer and professional entertainment industries from analog to digital storage and signal processing, there have been increasing references to the 'convergence' of the computer data processing and entertainment industry technologies. There has yet to be seen, however, any evidence of convergence of data storage device types. There are several reasons for this. The diversity of application requirements results in varying degrees of importance for each of the tape storage characteristics.

  16. Economic impact of off-line PC viewer for private folder management

    NASA Astrophysics Data System (ADS)

    Song, Koun-Sik; Shin, Myung J.; Lee, Joo Hee; Auh, Yong H.

    1999-07-01

    We developed a PC-based clinical workstation and implemented it at Asan Medical Center in Seoul, Korea. The hardware used was a Pentium-II with 8 MB of video memory, 64-128 MB RAM, a 19 inch color monitor, and a 10/100 Mbps network adaptor. One of the unique features of this workstation is a management tool for folders residing both in the PACS short-term storage unit and on the local hard disk. Users can copy an entire study or part of a study to the local hard disk, removable storage, or a CD recorder. Even the images in private folders in PACS short-term storage can be copied to local storage devices. All images are saved in DICOM 3.0 file format with 2:1 lossless compression. We compared the prices of copy films and storage media, considering the possible savings in expensive PACS short-term storage and network traffic. The price savings on copy film are most remarkable for MR exams. The price savings arising from minimal use of the short-term storage unit were 50,000 dollars. It was hard to calculate the price savings arising from reduced network usage. The off-line PC viewer is a cost-effective way of handling private folder management in a PACS environment.

  17. An efficient, modular and simple tape archiving solution for LHC Run-3

    NASA Astrophysics Data System (ADS)

    Murray, S.; Bahyl, V.; Cancio, G.; Cano, E.; Kotlyar, V.; Kruse, D. F.; Leduc, J.

    2017-10-01

    The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. Physics run 3 will start in 2021 and will introduce two major challenges for which the tape archive software must be evolved. Firstly the software will need to make more efficient use of tape drives in order to sustain the predicted data rate of 150 petabytes per year as opposed to the current 50 petabytes per year. Secondly the software will need to be seamlessly integrated with EOS, which has become the de facto disk storage system provided by the IT Storage group for physics data. The tape storage software for LHC physics run 3 is code named CTA (the CERN Tape Archive). This paper describes how CTA will introduce a pre-emptive drive scheduler to use tape drives more efficiently, will encapsulate all tape software into a single module that will sit behind one or more EOS systems, and will be simpler by dropping support for obsolete backwards compatibility.
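
    As a rough illustration of what a pre-emptive drive scheduler does (a generic sketch under assumed data structures, not CTA's actual design), a lower-priority mount can be interrupted at a safe point when a more urgent request arrives:

      # Generic sketch of pre-emptive tape-drive scheduling: each drive runs the
      # highest-priority request; a newly arrived, more urgent request preempts the
      # mounted one at the next safe point. Names and tuples are illustrative only.
      import heapq

      class Drive:
          def __init__(self, name):
              self.name = name
              self.current = None                      # (priority, request_id) mounted, or None

      def schedule(drives, queue, arrival):
          """Queue a new request and (re)assign drives, preempting lower-priority work."""
          heapq.heappush(queue, arrival)               # arrival = (priority, request_id); lower = more urgent
          for drive in drives:
              if not queue:
                  break
              if drive.current is None:
                  drive.current = heapq.heappop(queue)
              elif queue[0][0] < drive.current[0]:     # a waiting request outranks the mounted one
                  preempted = drive.current
                  drive.current = heapq.heappop(queue)
                  heapq.heappush(queue, preempted)     # requeue the interrupted request
          return drives, queue

      if __name__ == "__main__":
          drives, queue = [Drive("drive-1")], []           # "drive-1" is a placeholder name
          schedule(drives, queue, (5, "bulk-archival"))    # low-priority archival job gets the drive
          schedule(drives, queue, (1, "user-recall"))      # urgent recall preempts it
          print(drives[0].current, queue)                  # (1, 'user-recall') [(5, 'bulk-archival')]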

  18. Ceph-based storage services for Run2 and beyond

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel C.; Lamanna, Massimo; Mascetti, Luca; Peters, Andreas J.; Rousseau, Hervé

    2015-12-01

    In 2013, CERN IT evaluated then deployed a petabyte-scale Ceph cluster to support OpenStack use-cases in production. With now more than a year of smooth operations, we will present our experience and tuning best-practices. Beyond the cloud storage use-cases, we have been exploring Ceph-based services to satisfy the growing storage requirements during and after Run2. First, we have developed a Ceph back-end for CASTOR, allowing this service to deploy thin disk server nodes which act as gateways to Ceph; this feature marries the strong data archival and cataloging features of CASTOR with the resilient and high performance Ceph subsystem for disk. Second, we have developed RADOSFS, a lightweight storage API which builds a POSIX-like filesystem on top of the Ceph object layer. When combined with Xrootd, RADOSFS can offer a scalable object interface compatible with our HEP data processing applications. Lastly the same object layer is being used to build a scalable and inexpensive NFS service for several user communities.

  19. ACStor: Optimizing Access Performance of Virtual Disk Images in Clouds

    DOE PAGES

    Wu, Song; Wang, Yihong; Luo, Wei; ...

    2017-03-02

    In virtualized data centers, virtual disk images (VDIs) serve as the containers in the virtual environment, so their access performance is critical for overall system performance. Distributed VDI chunk storage systems have been proposed in order to alleviate the I/O bottleneck of VM management. As the system scales up to a large number of running VMs, however, the overall network traffic inevitably becomes unbalanced, with hot spots on some VMs, leading to I/O performance degradation when accessing those VMs. Here, we propose an adaptive and collaborative VDI storage system (ACStor) to resolve this performance issue. In comparison with existing research, our solution dynamically balances the traffic workloads in accessing VDI chunks, based on the run-time network state. Specifically, compute nodes with lightly loaded traffic are adaptively assigned more chunk access requests from remote VMs and vice versa, which effectively eliminates the problem above and thus improves the I/O performance of VMs. We also implement a prototype based on our ACStor design and evaluate it with various benchmarks on a real cluster with 32 nodes and a simulated platform with 256 nodes. Experiments show that under different network traffic patterns of data centers, our solution achieves up to a 2-8x gain in VM booting time and VM I/O throughput in comparison with other state-of-the-art approaches.
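
    The load-balancing idea, routing each chunk access to a lightly loaded replica based on run-time network state, can be sketched as follows (a toy model with assumed data structures, not the ACStor implementation):

      # Toy sketch of network-aware chunk request assignment: each VDI chunk has several
      # replica locations, and a request is routed to the replica whose node currently
      # carries the least traffic. Data structures are assumptions for illustration only.

      def assign_chunk_requests(requests, replicas, node_load):
          """requests: chunk ids; replicas: chunk id -> nodes holding it;
          node_load: node -> current traffic. Returns chunk -> chosen node."""
          assignment = {}
          for chunk in requests:
              node = min(replicas[chunk], key=lambda n: node_load[n])   # lightest-loaded replica wins
              assignment[chunk] = node
              node_load[node] += 1                                      # account for the new transfer
          return assignment

      if __name__ == "__main__":
          replicas = {"c1": ["nodeA", "nodeB"], "c2": ["nodeB", "nodeC"], "c3": ["nodeA", "nodeC"]}
          node_load = {"nodeA": 3, "nodeB": 0, "nodeC": 1}
          print(assign_chunk_requests(["c1", "c2", "c3"], replicas, node_load))
          # {'c1': 'nodeB', 'c2': 'nodeB', 'c3': 'nodeC'}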

  20. Hybrid RAID With Dual Control Architecture for SSD Reliability

    NASA Astrophysics Data System (ADS)

    Chatterjee, Santanu

    2010-10-01

    Solid State Devices (SSDs), which are increasingly being adopted in today's data storage systems, offer higher capacity and performance but lower reliability, which leads to more frequent rebuilds and a higher risk. Although SSDs are very energy efficient compared to hard disk drives, their bit error rate (BER) requires expensive erase operations between successive writes. Parity-based RAID (for example RAID 4, 5, and 6) provides data integrity using parity information and tolerates the loss of any one drive (RAID 4, 5) or any two drives (RAID 6), but the parity blocks are updated more often than the data blocks under random access patterns, so the SSDs holding more parity receive more writes and consequently age faster. To address this problem, in this paper we propose a model-based hybrid disk array architecture that uses the RAID 4 (striping with dedicated parity) technique with SSDs as data drives, while fast hard disk drives of the same capacity are used as dedicated parity drives. The proposed architecture opens the door to using commodity SSDs past their erasure limit and can also reduce the need for expensive hardware error correction code (ECC) in the devices.
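
    The proposed layout, data blocks striped across SSDs with parity on a dedicated hard disk, reduces to simple XOR parity per stripe; a minimal sketch (device handling is illustrative, not a driver implementation):

      # Minimal sketch of RAID 4 striping with a dedicated parity device, as in the
      # proposed hybrid array: data blocks go to SSDs, the XOR parity block to an HDD.

      def xor_blocks(blocks):
          """XOR equal-length byte blocks together to form a parity block."""
          parity = bytearray(len(blocks[0]))
          for block in blocks:
              for i, b in enumerate(block):
                  parity[i] ^= b
          return bytes(parity)

      def write_stripe(stripe_blocks, ssd_devices, parity_hdd):
          """Write one stripe: data blocks across the SSDs, parity to the dedicated HDD."""
          assert len(stripe_blocks) == len(ssd_devices)
          for dev, block in zip(ssd_devices, stripe_blocks):
              dev.append(block)                          # data stays on the SSDs
          parity_hdd.append(xor_blocks(stripe_blocks))   # frequently rewritten parity goes to the HDD

      def rebuild_block(surviving_blocks, parity_block):
          """Recover a lost data block from the remaining blocks and the parity."""
          return xor_blocks(surviving_blocks + [parity_block])

      if __name__ == "__main__":
          ssds, hdd = [[], [], []], []
          write_stripe([b"AAAA", b"BBBB", b"CCCC"], ssds, hdd)
          print(rebuild_block([b"AAAA", b"CCCC"], hdd[0]))   # b'BBBB'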

  1. Nano-optical information storage induced by the nonlinear saturable absorption effect

    NASA Astrophysics Data System (ADS)

    Wei, Jingsong; Liu, Shuang; Geng, Yongyou; Wang, Yang; Li, Xiaoyi; Wu, Yiqun; Dun, Aihuan

    2011-08-01

    Nano-optical information storage is very important in meeting information technology requirements. However, obtaining nanometric optical information recording marks by the traditional optical method is difficult due to diffraction limit restrictions. In the current work, the nonlinear saturable absorption effect is used to generate a subwavelength optical spot and to induce nano-optical information recording and readout. Experimental results indicate that information marks below 100 nm are successfully recorded and read out by a high-density digital versatile disk dynamic testing system with a laser wavelength of 405 nm and a numerical aperture of 0.65. The minimum marks of 60 nm are realized, which is only about 1/12 of the diffraction-limited theoretical focusing spot. This physical scheme is very useful in promoting the development of optical information storage in the nanoscale field.

  2. Towards the Interoperability of Web, Database, and Mass Storage Technologies for Petabyte Archives

    NASA Technical Reports Server (NTRS)

    Moore, Reagan; Marciano, Richard; Wan, Michael; Sherwin, Tom; Frost, Richard

    1996-01-01

    At the San Diego Supercomputer Center, a massive data analysis system (MDAS) is being developed to support data-intensive applications that manipulate terabyte-sized data sets. The objective is to support scientific application access to data whether it is located at a Web site, stored as an object in a database, and/or stored in an archival storage system. We are developing a suite of demonstration programs which illustrate how Web, database (DBMS), and archival storage (mass storage) technologies can be integrated. An application presentation interface is being designed that integrates data access to all of these sources. We have developed a data movement interface between the Illustra object-relational database and the NSL UniTree archival storage system running in a production mode at the San Diego Supercomputer Center. With this interface, an Illustra client can transparently access data on UniTree under the control of the Illustra DBMS server. The current implementation is based on the creation of a new DBMS storage manager class and a set of library functions that allow the manipulation and migration of data stored as Illustra 'large objects'. We have extended this interface to allow a Web client application to control data movement between its local disk, the Web server, the Illustra DBMS server, and the UniTree mass storage environment. This paper describes some of the current approaches to successfully integrating these technologies. The framework is measured against a representative sample of environmental data extracted from the San Diego Bay Environmental Data Repository. Practical lessons are drawn and critical research areas are highlighted.

  3. Russian-US collaboration on implementation of the active well coincidence counter (AWCC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mozhajev, V.; Pshakin, G.; Stewart, J.

    The feasibility of using a standard AWCC at the Obninsk IPPE has been demonstrated through active measurements of single UO2 (36% enriched) disks and through passive measurements of plutonium metal disks used for simulating reactor cores. The role of the measurements is to verify passport values assigned to the disks by the facility, and thereby facilitate the mass accountability procedures developed for the very large inventory of fuel disks at the facility. The AWCC is a very flexible instrument for verification measurements of the large variety of nuclear material items at the Obninsk IPPE and other Russian facilities. Future work at the IPPE will include calibration and verification measurements for other materials, both in individual disks and in multi-disk storage tubes; it will also include training in the use of the AWCC.

  4. Voltage assisted asymmetric nanoscale wear on ultra-smooth diamond like carbon thin films at high sliding speeds

    PubMed Central

    Rajauria, Sukumar; Schreck, Erhard; Marchon, Bruno

    2016-01-01

    The understanding of tribo- and electro-chemical phenomena at the molecular level at a sliding interface is a field of growing interest. Fundamental chemical and physical insights into sliding surfaces are crucial for understanding wear at an interface, particularly for nano- or micro-scale devices operating at high sliding speeds. A complete investigation of electrochemical effects at high-sliding-speed interfaces requires precise monitoring of both the associated wear and the surface chemical reactions at the interface. Here, we demonstrate that the head-disk interface inside a commercial magnetic storage hard disk drive provides a unique system for such studies. The results obtained show that voltage-assisted electrochemical wear leads to asymmetric wear on either side of the sliding interface. PMID:27150446

  5. Voltage assisted asymmetric nanoscale wear on ultra-smooth diamond like carbon thin films at high sliding speeds

    NASA Astrophysics Data System (ADS)

    Rajauria, Sukumar; Schreck, Erhard; Marchon, Bruno

    2016-05-01

    The understanding of tribo- and electro-chemical phenomena at the molecular level at a sliding interface is a field of growing interest. Fundamental chemical and physical insights into sliding surfaces are crucial for understanding wear at an interface, particularly for nano- or micro-scale devices operating at high sliding speeds. A complete investigation of electrochemical effects at high-sliding-speed interfaces requires precise monitoring of both the associated wear and the surface chemical reactions at the interface. Here, we demonstrate that the head-disk interface inside a commercial magnetic storage hard disk drive provides a unique system for such studies. The results obtained show that voltage-assisted electrochemical wear leads to asymmetric wear on either side of the sliding interface.

  6. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach, which instigates lock contention problems on parallel file systems, and having one file per process, which results in a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature on parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show performance benefits of 1.2x to 6x with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets used to store files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of using the subfiling feature.
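
    The trade-off being tuned here, one shared file versus one file per process versus some intermediate number of subfiles, comes down to how ranks are grouped onto subfiles; a schematic sketch of that mapping (this is only the grouping idea, not the HDF5 subfiling API):

      # Schematic sketch of rank-to-subfile grouping: instead of one shared file (heavy
      # lock contention) or one file per rank (too many files), ranks are grouped onto a
      # configurable number of subfiles. File naming is an illustrative assumption.

      def subfile_layout(n_ranks, n_subfiles):
          """Return subfile name -> list of MPI ranks writing into it (contiguous grouping)."""
          per_file = (n_ranks + n_subfiles - 1) // n_subfiles     # ceiling division
          layout = {}
          for rank in range(n_ranks):
              name = f"data.h5.subfile_{rank // per_file}"
              layout.setdefault(name, []).append(rank)
          return layout

      if __name__ == "__main__":
          # 1 subfile ~ single shared file; n_ranks subfiles ~ file-per-process
          for n_subfiles in (1, 4, 16):
              layout = subfile_layout(n_ranks=16, n_subfiles=n_subfiles)
              print(n_subfiles, {k: v for k, v in list(layout.items())[:2]}, "...")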

  7. Curriculum Bank for Individualized Electronic Instruction. Final Report.

    ERIC Educational Resources Information Center

    Williamson, Bert; Pedersen, Joe F.

    Objectives of this project were to update and convert to disk storage appropriate handout materials for courses for the electronic technology open classroom. Project activities were an ERIC search for computer-managed instructional materials; updating of the course outline, lesson outlines, information handouts, and unit tests; and storage of the…

  8. The Stoner-Wohlfarth Model of Ferromagnetism

    ERIC Educational Resources Information Center

    Tannous, C.; Gieraltowski, J.

    2008-01-01

    The Stoner-Wohlfarth (SW) model is the simplest model that describes adequately the physics of fine magnetic grains, the magnetization of which can be used in digital magnetic storage (floppies, hard disks and tapes). Magnetic storage density is presently increasing steadily in almost the same way as electronic device size and circuitry are…

  9. The mass storage testing laboratory at GSFC

    NASA Technical Reports Server (NTRS)

    Venkataraman, Ravi; Williams, Joel; Michaud, David; Gu, Heng; Kalluri, Atri; Hariharan, P. C.; Kobler, Ben; Behnke, Jeanne; Peavey, Bernard

    1998-01-01

    Industry-wide benchmarks exist for measuring the performance of processors (SPECmarks), and of database systems (Transaction Processing Council). Despite storage having become the dominant item in computing and IT (Information Technology) budgets, no such common benchmark is available in the mass storage field. Vendors and consultants provide services and tools for capacity planning and sizing, but these do not account for the complete set of metrics needed in today's archives. The availability of automated tape libraries, high-capacity RAID systems, and high-bandwidth interconnectivity between processor and peripherals has led to demands for services which traditional file systems cannot provide. File Storage and Management Systems (FSMS), which began to be marketed in the late 80's, have helped to some extent with large tape libraries, but their use has introduced additional parameters affecting performance. The aim of the Mass Storage Test Laboratory (MSTL) at Goddard Space Flight Center is to develop a test suite that includes not only a comprehensive check list to document a mass storage environment but also benchmark code. Benchmark code is being tested which will provide measurements for both baseline systems, i.e. applications interacting with peripherals through the operating system services, and for combinations involving an FSMS. The benchmarks are written in C, and are easily portable. They are initially being aimed at the UNIX Open Systems world. Measurements are being made using a Sun Ultra 170 Sparc with 256MB memory running Solaris 2.5.1 with the following configuration: 4mm tape stacker on SCSI 2 Fast/Wide; 4GB disk device on SCSI 2 Fast/Wide; and Sony Petaserve on Fast/Wide differential SCSI 2.
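
    As a flavor of what such a baseline measurement looks like (a minimal sketch only; the MSTL benchmarks themselves are written in C and cover far more metrics), a sequential-write throughput probe through ordinary operating-system services might look like this:

      # Minimal sketch of a baseline sequential-write throughput probe, the kind of
      # measurement a mass-storage benchmark makes through plain OS file services.
      # Path, sizes and block length are illustrative defaults.
      import os, time

      def sequential_write_mb_per_s(path, total_mb=256, block_kb=256):
          """Write total_mb of data in block_kb chunks and return the achieved MB/s."""
          block = b"\0" * (block_kb * 1024)
          start = time.time()
          with open(path, "wb") as f:
              for _ in range((total_mb * 1024) // block_kb):
                  f.write(block)
              f.flush()
              os.fsync(f.fileno())    # ensure the data reaches the device, not just the page cache
          elapsed = time.time() - start
          os.remove(path)
          return total_mb / elapsed

      if __name__ == "__main__":
          print(f"{sequential_write_mb_per_s('/tmp/mstl_probe.dat'):.1f} MB/s")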

  10. Hardware and software facilities for the J-PAS and J-PLUS surveys archiving, processing and data publication

    NASA Astrophysics Data System (ADS)

    Cristóbal-Hornillos, D.; Varela, J.; Ederoclite, A.; Vázquez Ramió, H.; López-Sainz, A.; Hernández-Fuertes, J.; Civera, T.; Muniesa, D.; Moles, M.; Cenarro, A. J.; Marín-Franch, A.; Yanes-Díaz, A.

    2015-05-01

    The Observatorio Astrofísico de Javalambre consists of two main telescopes: JST/T250, a 2.5 m telescope with a FoV of 3 deg, and JAST/T80, an 83 cm telescope with a 2 deg FoV. JST/T250 will be devoted to completing the Javalambre-PAU Astronomical Survey (J-PAS), a photometric survey with a system of 54 narrow-band plus 3 broad-band filters covering an area of 8500 deg². JAST/T80 will perform the J-PLUS survey, covering the same area with a system of 12 filters. This contribution presents the software and hardware architecture designed to store and process the data. The processing pipeline runs daily and is devoted to correcting the instrumental signature of the science images, performing astrometric and photometric calibration, and computing individual image catalogs. In a second stage, the pipeline performs the combination of the tile mosaics and the computation of the final catalogs. The catalogs are ingested into a scientific database to be provided to the community. The processing software is connected to a management database that stores persistent information about the pipeline operations performed on each frame. The processing pipeline is executed on a computing cluster under a batch queuing system. The storage system will combine disk and tape technologies. The disk storage system will have the capacity to store the data that is accessed by the pipeline. The tape library will store and archive the raw data and earlier data releases with lower access frequency.

  11. The raw disk i/o performance of compaq storage works RAID arrays under tru64 unix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselton, A C

    2000-10-19

    We report on the raw disk I/O performance of a set of Compaq StorageWorks RAID arrays connected to our cluster of Compaq ES40 computers via Fibre Channel. The best cumulative peak sustained data rate is 117 MB/s per node for reads and 77 MB/s per node for writes. This value occurs for a configuration in which a node has two Fibre Channel interfaces to a switch, which in turn has two connections to each of two Compaq StorageWorks RAID arrays. Each RAID array has two HSG80 RAID controllers controlling (together) two 5+P RAID chains. A 10% more space-efficient arrangement using a single 11+P RAID chain in place of the two 5+P chains is 25% slower for reads and 40% slower for writes.

  12. Safety Aspects of Big Cryogenic Systems Design

    NASA Astrophysics Data System (ADS)

    Chorowski, M.; Fydrych, J.; Poliński, J.

    2010-04-01

    Superconductivity and helium cryogenics are key technologies in the construction of large scientific instruments, like accelerators, fusion reactors or free electron lasers. Such cryogenic systems may contain more than a hundred tons of helium, mostly in cold and high-density phases. In spite of the high reliability of these systems, accidental loss of the insulation vacuum, pipe rupture or rapid energy dissipation into the cold helium cannot be ruled out. To avoid the danger of the pressure in the cryostats rising above their design value, they need to be equipped with a helium relief system. Such a system comprises safety valves, bursting disks and, optionally, cold or warm quench lines, collectors and storage tanks. Proper design of the helium safety relief system requires a good understanding of worst-case scenarios. Such scenarios will be discussed, taking into account different possible failures of the cryogenic system. In any case it is necessary to estimate the heat transfer through degraded vacuum superinsulation and the mass flow through the valves and safety disks. Even if the design of the helium relief system does not foresee direct helium venting into the environment, an occasional emergency helium spill may happen. Helium propagation in the atmosphere and the origins of oxygen-deficiency hazards will be discussed.

  13. Towards Transparent Throughput Elasticity for IaaS Cloud Storage: Exploring the Benefits of Adaptive Block-Level Caching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicolae, Bogdan; Riteau, Pierre; Keahey, Kate

    Storage elasticity on IaaS clouds is a crucial feature in the age of data-intensive computing, especially when considering fluctuations of I/O throughput. This paper provides a transparent solution that automatically boosts I/O bandwidth during peaks for underlying virtual disks, effectively avoiding over-provisioning without performance loss. The authors' proposal relies on the idea of leveraging short-lived virtual disks with better performance characteristics (and thus higher cost) to act during peaks as a caching layer for the persistent virtual disks where the application data is stored. Furthermore, they introduce a performance and cost prediction methodology that can be used independently to estimate in advance what trade-off between performance and cost is possible, as well as an optimization technique that enables better cache size selection to meet the desired performance level with minimal cost. The authors demonstrate the benefits of their proposal both for microbenchmarks and for two real-life applications using large-scale experiments.
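
    The selection step, picking the smallest (and therefore cheapest) cache that still meets a performance target, can be sketched as a simple search over predicted throughput and cost; the predictor functions below are made-up stand-ins, not the paper's models:

      # Toy sketch of cache-size selection: given (assumed) predictors of throughput and
      # hourly cost as functions of cache size, pick the smallest cache meeting the target.

      def predicted_throughput(cache_gb, base_mbps=80.0, peak_mbps=400.0, half_point_gb=40.0):
          """Assumed saturating model: more cache -> higher throughput, diminishing returns."""
          return base_mbps + (peak_mbps - base_mbps) * cache_gb / (cache_gb + half_point_gb)

      def predicted_cost(cache_gb, persistent_cost=0.10, cache_cost_per_gb=0.02):
          """Assumed hourly cost: persistent virtual disk plus the short-lived fast cache disk."""
          return persistent_cost + cache_cost_per_gb * cache_gb

      def choose_cache_size(target_mbps, candidate_sizes_gb):
          """Smallest (cheapest) cache size whose predicted throughput meets the target."""
          for size in sorted(candidate_sizes_gb):
              if predicted_throughput(size) >= target_mbps:
                  return size, predicted_cost(size)
          return None, None                   # target not reachable with these candidates

      if __name__ == "__main__":
          print(choose_cache_size(target_mbps=250.0, candidate_sizes_gb=[0, 10, 20, 40, 80, 160]))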

  14. Multi-terabyte EIDE disk arrays running Linux RAID5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.

  15. A novel anti-piracy optical disk with photochromic diarylethene

    NASA Astrophysics Data System (ADS)

    Liu, Guodong; Cao, Guoqiang; Huang, Zhen; Wang, Shenqian; Zou, Daowen

    2005-09-01

    Diarylethene is a photochromic material with many advantages and one of the most promising recording materials for high-capacity optical data storage. Diarylethene has two forms, which can be converted into each other by laser beams of different wavelengths. The material has been researched for rewritable optical disks. Volatile data storage is one of its properties, which has generally been considered an obstacle to practical use, and much research has been carried out over a long period to overcome it. In fact, volatile data storage is very useful for anti-piracy optical data storage. Piracy is a social and economic problem. One approach to anti-piracy optical data storage is to limit readout of the recorded data by encryption software; with the development of computer technology, however, this kind of software is more and more easily cracked. Using photochromic diarylethene as the optical recording material, the signals of the data recorded in the material are degraded as they are read, and readout of the data is thereby limited. Because this method uses hardware to realize anti-piracy protection, it is practically impossible to crack. In this paper, we introduce this use of the material, and some experiments are presented to prove its feasibility.

  16. Digital image archiving: challenges and choices.

    PubMed

    Dumery, Barbara

    2002-01-01

    In the last five years, imaging exam volume has grown rapidly. In addition to increased image acquisition, there is more patient information per study. RIS-PACS integration and information-rich DICOM headers now provide us with more patient information relative to each study. The volume of archived digital images is increasing and will continue to rise at a steeper incline than film-based storage of the past. Many filmless facilities have been caught off guard by this increase, which has been stimulated by many factors. The most significant factor is investment in new digital and DICOM-compliant modalities. A huge volume driver is the increase in images per study from multi-slice technology. Storage requirements also are affected by disaster recovery initiatives and state retention mandates. This burgeoning rate of imaging data volume presents many challenges: cost of ownership, data accessibility, storage media obsolescence, database considerations, physical limitations, reliability and redundancy. There are two basic approaches to archiving--single tier and multi-tier. Each has benefits. With a single-tier approach, all the data is stored on a single media that can be accessed very quickly. A redundant copy of the data is then stored onto another less expensive media. This is usually a removable media. In this approach, the on-line storage is increased incrementally as volume grows. In a multi-tier approach, storage levels are set up based on access speed and cost. In other words, all images are stored at the deepest archiving level, which is also the least expensive. Images are stored on or moved back to the intermediate and on-line levels if they will need to be accessed more quickly. It can be difficult to decide what the best approach is for your organization. The options include RAIDs (redundant array of independent disks), direct attached RAID storage (DAS), network storage using RAIDs (NAS and SAN), removable media such as different types of tape, compact disks (CDs and DVDs) and magneto-optical disks (MODs). As you evaluate the various options for storage, it is important to consider both performance and cost. For most imaging enterprises, a single-tier archiving approach is the best solution. With the cost of hard drives declining, NAS is a very feasible solution today. It is highly reliable, offers immediate access to all exams, and easily scales as imaging volume grows. Best of all, media obsolescence challenges need not be of concern. For back-up storage, removable media can be implemented, with a smaller investment needed as it will only be used for a redundant copy of the data. There is no need to keep it online and available. If further system redundancy is desired, multiple servers should be considered. The multi-tier approach still has its merits for smaller enterprises, but with a detailed long-term cost of ownership analysis, NAS will probably still come out on top as the solution of choice for many imaging facilities.
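
    A rough way to weigh the single-tier and multi-tier approaches discussed above is a simple media-cost estimate as the archive grows; the prices and volumes below are placeholder assumptions, not figures from the article:

      # Back-of-the-envelope media-cost comparison of a single-tier archive (everything on
      # disk, removable media only as a redundant copy) versus a multi-tier archive (a small
      # on-line disk tier in front of tape). All prices and volumes are made-up placeholders.

      def media_cost(archive_tb, online_fraction, disk_cost_per_tb=300.0, tape_cost_per_tb=40.0):
          """Media cost for an archive of archive_tb terabytes with a given on-disk fraction."""
          disk_tb = archive_tb * online_fraction
          deep_tb = archive_tb                      # a full copy also kept on the deep/backup tier
          return disk_tb * disk_cost_per_tb + deep_tb * tape_cost_per_tb

      if __name__ == "__main__":
          archive_tb = 10 * 5                       # e.g. 10 TB of new studies per year over 5 years
          print("single-tier (all on disk):", media_cost(archive_tb, online_fraction=1.0))
          print("multi-tier (20% on disk): ", media_cost(archive_tb, online_fraction=0.2))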

  17. A study of application of remote sensing to river forecasting. Volume 2: Detailed technical report, NASA-IBM streamflow forecast model user's guide

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Model is described along with data preparation, determining model parameters, initializing and optimizing parameters (calibration) selecting control options and interpreting results. Some background information is included, and appendices contain a dictionary of variables, a source program listing, and flow charts. The model was operated on an IBM System/360 Model 44, using a model 2250 keyboard/graphics terminal for interactive operation. The model can be set up and operated in a batch processing mode on any System/360 or 370 that has the memory capacity. The model requires 210K bytes of core storage, and the optimization program, OPSET (which was used previous to but not in this study), requires 240K bytes. The data band for one small watershed requires approximately 32 tracks of disk storage.

  18. Implementation of an Enterprise Information Portal (EIP) in the Loyola University Health System

    PubMed Central

    Price, Ronald N.; Hernandez, Kim

    2001-01-01

    Loyola University Chicago Stritch School of Medicine and Loyola University Medical Center have long histories in the development of applications to support the institutions' missions of education, research and clinical care. In late 1998, the institutions' application development group undertook an ambitious program to re-architect more than 10 years of legacy application development (30+ core applications) into a unified World Wide Web (WWW) environment. The primary project objectives were to construct an environment that would support the rapid development of n-tier, web-based applications while providing standard methods for user authentication/validation, security/access control and definition of a user's organizational context. The project's efforts resulted in Loyola's Enterprise Information Portal (EIP), which meets the aforementioned objectives. This environment: 1) allows access to other vertical Intranet portals (e.g., electronic medical record, patient satisfaction information and faculty effort); 2) supports end-user desktop customization; and 3) provides a means for standardized application “look and feel.” The portal was constructed utilizing readily available hardware and software. Server hardware consists of multiprocessor (Intel Pentium 500 MHz) Compaq 6500 servers with one gigabyte of random access memory and 75 gigabytes of hard disk storage. Microsoft SQL Server was selected to house the portal's internal (security) data structures. Netscape Enterprise Server was selected for the web server component of the environment and Allaire's ColdFusion was chosen for the access and application tiers. Total costs for the portal environment were less than $40,000. User data storage is accomplished through two Microsoft SQL Servers and an existing SUN Microsystems enterprise server with eight processors and 750 gigabytes of disk storage running the Sybase relational database manager. Total storage capacity for all systems exceeds one terabyte. In the past 12 months, the EIP has supported the development of more than 88 applications and is utilized by more than 2,200 users.

  19. Effect of cleaning methods after reduced-pressure air abrasion on bonding to zirconia ceramic.

    PubMed

    Attia, Ahmed; Kern, Matthias

    2011-12-01

    To evaluate in vitro the influence of different cleaning methods after low-pressure air abrasion on the bond strength of a phosphate monomer-containing luting resin to zirconia ceramic. A total of 112 zirconia ceramic disks were divided into 7 groups (n = 16). In the test groups, disks were air abraded at a low (L) pressure of 0.05 MPa using 50-μm alumina particles. Prior to bonding, the disks were ultrasonically (U) cleaned either in isopropanol alcohol (AC), hydrofluoric acid (HF), demineralized water (DW), or tap water (TW), or they were used without ultrasonic cleaning. Disks air abraded at a high (H) pressure of 0.25 MPa and cleaned ultrasonically in isopropanol served as the positive control; original (O) milled disks used without air abrasion served as the negative control group. Plexiglas tubes filled with composite resin were bonded with the adhesive luting resin Panavia 21 to the ceramic disks. Prior to testing tensile bond strength (TBS), each main group was further subdivided into 2 subgroups (n = 8) which were stored in distilled water either at 37°C for 3 days or for 30 days with 7500 thermal cycles. Statistical analyses were conducted with two- and one-way analyses of variance (ANOVA) and Tukey's HSD test. Initial tensile bond strength (TBS) ranged from 32.6 to 42.8 MPa. After 30 days of storage in water with thermocycling, TBS ranged from 21.9 to 36.3 MPa. Storage in water and thermocycling significantly decreased the TBS of the test groups which were not air abraded (p = 0.05) or which were air abraded but cleaned in tap water (p = 0.002), but not the TBS of the other groups (p > 0.05). Also, the TBS of the air-abraded groups was significantly higher than the TBS of the original milled disks (p < 0.01). Cleaning procedures did not significantly affect TBS either after 3 days or after 30 days of storage in water and thermocycling (p > 0.05). Air abrasion at 0.05 MPa and ultrasonic cleaning are important factors for improving bonding to zirconia ceramic.

  20. High Density Ion Implanted Contiguous Disk Bubble Technology.

    DTIC Science & Technology

    1987-10-31

    Magnetic garnet films were grown by liquid phase epitaxy (LPE) from a Bi2O3-PbO flux system. Films were grown with a 600C to 700C supercooling at... Matsutera, "Large Magnetic Anisotropy Change Induced By Hydrogen Ion Implantation In Europium Iron Garnet LPE Films", J. of Magnetism and Magnetic... The report summarizes the design, development and growth of various bubble garnet films in our facility, to be used in the fabrication of high density bubble storage

  1. Digital Photography and Its Impact on Instruction.

    ERIC Educational Resources Information Center

    Lantz, Chris

    Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…

  2. Alternative treatment technology information center computer database system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, D.

    1995-10-01

    The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  3. The ATLAS Tier-3 in Geneva and the Trigger Development Facility

    NASA Astrophysics Data System (ADS)

    Gadomski, S.; Meunier, Y.; Pasche, P.; Baud, J.-P.; ATLAS Collaboration

    2011-12-01

    The ATLAS Tier-3 farm at the University of Geneva provides storage and processing power for analysis of ATLAS data. In addition the facility is used for development, validation and commissioning of the High Level Trigger of ATLAS [1]. The latter purpose leads to additional requirements on the availability of latest software and data, which will be presented. The farm is also a part of the WLCG [2], and is available to all members of the ATLAS Virtual Organization. The farm currently provides 268 CPU cores and 177 TB of storage space. A grid Storage Element, implemented with the Disk Pool Manager software [3], is available and integrated with the ATLAS Distributed Data Management system [4]. The batch system can be used directly by local users, or with a grid interface provided by NorduGrid ARC middleware [5]. In this article we will present the use cases that we support, as well as the experience with the software and the hardware we are using. Results of I/O benchmarking tests, which were done for our DPM Storage Element and for the NFS servers we are using, will also be presented.

  4. High-Speed Data Recorder for Space, Geodesy, and Other High-Speed Recording Applications

    NASA Technical Reports Server (NTRS)

    Taveniku, Mikael

    2013-01-01

    A high-speed data recorder and replay equipment has been developed for reliable high-data-rate recording to disk media. It solves problems with slow or faulty disks, multiple disk insertions, high-altitude operation, reliable performance using COTS hardware, and long-term maintenance and upgrade path challenges. The current generation of data recorders used within the VLBI community are aging, special-purpose machines that are both slow (do not meet today's requirements) and very expensive to maintain and operate. Furthermore, they are not easily upgraded to take advantage of commercial technology development, and are not scalable to the multiple 10s of Gbit/s data rates required by new applications. The innovation provides a software-defined, high-speed data recorder that is scalable with technology advances in the commercial space. It maximally utilizes current technologies without being locked to a particular hardware platform. The innovation also provides a cost-effective way of streaming large amounts of data from sensors to disk, enabling many applications to store raw sensor data and perform post and signal processing offline. This recording system will be applicable to many applications needing real-world, high-speed data collection, including electronic warfare, software-defined radar, signal history storage of multispectral sensors, development of autonomous vehicles, and more.

  5. Electrochemical Studies of Redox Systems for Energy Storage

    NASA Technical Reports Server (NTRS)

    Wu, C. D.; Calvo, E. J.; Yeager, E.

    1983-01-01

    Particular attention was paid to the Cr(II)/Cr(III) redox couple in aqueous solutions in the presence of Cl(-) ions. The aim of this research has been to unravel the electrode kinetics of this redox couple and the effect of Cl(-) and the electrode substrate. Gold and silver were studied as electrodes and the results show distinctive differences; this is probably due to the role the Cl(-) ion may play as a mediator in the reaction and the difference in the state of electrical charge on these two metals (difference in the potential of zero charge, pzc). The competition of hydrogen evolution with CrCl3 reduction on these surfaces was studied by means of the rotating ring disk electrode (RRDE). The ring downstream measures the flux of chromous ions from the disk, and therefore separation of Cr(III) reduction and H2 generation can be achieved by analyzing the ring and disk currents. The conditions for the quantitative detection of Cr(2+) at the ring electrode were established. Underpotential deposition of Pb on Ag and its effect on the electrokinetics of the Cr(II)/Cr(III) reaction was studied.

  6. Using dCache in Archiving Systems oriented to Earth Observation

    NASA Astrophysics Data System (ADS)

    Garcia Gil, I.; Perez Moreno, R.; Perez Navarro, O.; Platania, V.; Ozerov, D.; Leone, R.

    2012-04-01

    The objective of the LAST activity (Long term data Archive Study on new Technologies) is to perform an independent study on best practices and an assessment of different archiving technologies that are mature for operation in the short and mid-term time frame, or available in the long term, with emphasis on technologies best suited to satisfy the requirements of ESA, LTDP and other European and Canadian EO partners in terms of digital information preservation and data accessibility and exploitation. During the last phase of the project, several archiving solutions were tested in order to evaluate their suitability. In particular, dCache aims to provide a file system tree view of the data repository, exchanging data with backend (tertiary) storage systems and providing space management, pool attraction, dataset replication, hot spot determination, and recovery from disk or node failures. Connected to a tertiary storage system, dCache simulates unlimited direct access storage space; data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. dCache was created to meet the requirements of large computer centers and universities with large amounts of data, which put their efforts together and founded EMI (European Middleware Initiative). At the moment, dCache is mature enough to be implemented and is used by several research centers of relevance (e.g. the LHC, storing up to 50 TB/day). This solution has not been used so far in Earth Observation, and the results of the study are summarized in this article, focusing on its capacity, over a simulated environment, to get in line with the ESA requirements for geographically distributed storage. The challenge of a geographically distributed storage system can be summarized as the way to provide maximum quality for storage and dissemination services at minimum cost.

  7. Clinical experience with PACS at Northwestern: year two

    NASA Astrophysics Data System (ADS)

    Channin, David S.; Hawkins, Rodney C.; Enzmann, Dieter R.

    2001-08-01

    We have previously described the PACS configuration at Northwestern Memorial Hospital (NMH). As opposed to an imaging modality, PACS is an evolving system that continuously grows and changes to meet the needs of the institution. The NMH PACS has grown significantly in the past year and has undergone significant architectural enhancements. This growth and evolutionary change will be described and discussed. The system now contains over 339,000 studies consisting of over 13 million images. There are now two short-term RAID storage units that provide for twice as much fast storage. There are also two magneto-optical disk jukeboxes providing long-term archive. We have deployed a redundant database to improve reliability of the system in the event of database failure. The number of modalities connected to the system has increased and will be summarized. Statistics describing utilization of the PACS will be shown. Lastly, we will discuss our plans for exploiting the application service provider model in our PACS environment.

  8. Joining the petabyte club with direct attached storage

    NASA Astrophysics Data System (ADS)

    Haupt, Andreas; Leffhalm, Kai; Wegner, Peter; Wiesand, Stephan

    2011-12-01

    Our site successfully runs more than a Petabyte of online disk, using nothing but Direct Attached Storage. The bulk of this capacity is grid-enabled and served by dCache, but sizable amounts are provided by traditional AFS or modern Lustre filesystems as well. While each of these storage flavors has a different purpose, owing to their respective strengths and weaknesses for certain use cases, their instances are all built from the same universal storage bricks. These are managed using the same scale-out techniques used for compute nodes, and run the same operating system as those, thus fully leveraging the existing know-how and infrastructure. As a result, this storage is cost effective especially regarding total cost of ownership. It is also competitive in terms of aggregate performance, performance per capacity, and - due to the possibility to make use of the latest technology early - density and power efficiency. Further advantages include a high degree of flexibility and complete avoidance of vendor lock-in. Availability and reliability in practice turn out to be more than adequate for a HENP site's major tasks. We present details about this Ansatz for online storage, hardware and software used, tweaking and tuning, lessons learned, and the actual result in practice.

  9. Improvement in HPC performance through HIPPI RAID storage

    NASA Technical Reports Server (NTRS)

    Homan, Blake

    1993-01-01

    In 1986, RAID (redundant array of inexpensive (or independent) disks) technology was introduced as a viable solution to the I/O bottleneck. A number of different RAID levels were defined in 1987 by the Computer Science Division (EECS) University of California, Berkeley, each with specific advantages and disadvantages. With multiple RAID options available, taking advantage of RAID technology required matching particular RAID levels with specific applications. It was not possible to use one RAID device to address all applications. Maximum Strategy's Gen 4 Storage Server addresses this issue with a new capability called programmable RAID level partitioning. This capability enables users to have multiple RAID levels coexist on the same disks, thereby providing the versatility necessary for multiple concurrent applications.

  10. The influence of co-formers on the dissolution rates of co-amorphous sulfamerazine/excipient systems.

    PubMed

    Gniado, Katarzyna; Löbmann, Korbinian; Rades, Thomas; Erxleben, Andrea

    2016-05-17

    A comprehensive study on the dissolution properties of three co-amorphous sulfamerazine/excipient systems, namely sulfamerazine/deoxycholic acid, sulfamerazine/citric acid and sulfamerazine/sodium taurocholate (SMZ/DA, SMZ/CA and SMZ/NaTC; 1:1 molar ratio), is reported. While all three co-formers stabilize the amorphous state during storage, only co-amorphization with NaTC provides a dissolution advantage over crystalline SMZ and the reasons for this were analyzed. In the case of SMZ/DA extensive gelation of DA protects the amorphous phase from crystallization upon contact with buffer, but at the same time prevents the release of SMZ into solution. Disk dissolution studies showed an improved dissolution behavior of SMZ/CA compared to crystalline SMZ. However, enhanced dissolution properties were not seen in powder dissolution testing due to poor dispersibility. Co-amorphization of SMZ and NaTC resulted in a significant increase in dissolution rate, both in powder and disk dissolution studies. Copyright © 2016. Published by Elsevier B.V.

  11. UTDallas Offline Computing System for B Physics with the Babar Experiment at SLAC

    NASA Astrophysics Data System (ADS)

    Benninger, Tracy L.

    1998-10-01

    The University of Texas at Dallas High Energy Physics group is building a high performance, large storage computing system for B physics research with the BaBar experiment (``factory'') at the Stanford Linear Accelerator Center. The goal of this system is to analyze one terabyte of complex Event Store data from BaBar in one to two days. The foundation of the computing system is a Sun E6000 Enterprise multiprocessor system, with additions of a Sun StorEdge L1800 Tape Library, a Sun Workstation for processing batch jobs, staging disks and interface cards. The design considerations, current status, projects underway, and possible upgrade paths will be discussed.

  12. Low temperature Grüneisen parameter of cubic ionic crystals

    NASA Astrophysics Data System (ADS)

    Batana, Alicia; Monard, María C.; Rosario Soriano, María

    1987-02-01

    Title of program: CAROLINA
    Catalogue number: AATG
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue)
    Computer: IBM/370, Model 158; Installation: Centro de Tecnología y Ciencia de Sistemas, Universidad de Buenos Aires
    Operating system: VM/370
    Programming language used: FORTRAN
    High speed storage required: 3 kwords
    No. of bits in a word: 32
    Peripherals used: disk IBM 3340/70 MB
    No. of lines in combined program and test deck: 447

  13. C3I (Command, Control, Communications and Intelligence) Teradata Study.

    DTIC Science & Technology

    1986-03-01

    Data storage capacity of one trillion bytes. The largest configuration currently built consists of 60 processors and 60 disks. ... A failure modes and effects analysis (FMEA) was developed to indicate potential points of failure in the configuration and their effects on total system operation. Since the contract did... number of IFPs and AMPs; Int is the integer function. Thus, for a maximum configuration (see Section 3.3) of 1024 processors, there are ten tiers in the...

  14. Digital Holographic Memories

    NASA Astrophysics Data System (ADS)

    Hesselink, Lambertus; Orlov, Sergei S.

    Optical data storage is a phenomenal success story. Since its introduction in the early 1980s, optical data storage devices have evolved from being focused primarily on music distribution, to becoming the prevailing data distribution and recording medium. Each year, billions of optical recordable and prerecorded disks are sold worldwide. Almost every computer today is shipped with a CD or DVD drive installed.

  15. X-window-based 2K display workstation

    NASA Astrophysics Data System (ADS)

    Weinberg, Wolfram S.; Hayrapetian, Alek S.; Cho, Paul S.; Valentino, Daniel J.; Taira, Ricky K.; Huang, H. K.

    1991-07-01

    A high-definition, high-performance display station for reading and review of digital radiological images is introduced. The station is based on a Sun SPARC Station 4 and employs X window system for display and manipulation of images. A mouse-operated graphic user interface is implemented utilizing Motif-style tools. The system supports up to four MegaScan gray-scale 2560 X 2048 monitors. A special configuration of frame and video buffer yields a data transfer of 50 M pixels/s. A magnetic disk array supplies a storage capacity of 2 GB with a data transfer rate of 4-6 MB/s. The system has access to the central archive through an ultrahigh-speed fiber-optic network and patient studies are automatically transferred to the local disk. The available image processing functions include change of lookup table, zoom and pan, and cine. Future enhancements will provide for manual contour tracing, length, area, and density measurements, text and graphic overlay, as well as composition of selected images. Additional preprocessing procedures under development will optimize the initial lookup table and adjust the images to a standard orientation.

  16. An Improved B+ Tree for Flash File Systems

    NASA Astrophysics Data System (ADS)

    Havasi, Ferenc

    Nowadays mobile devices such as mobile phones, mp3 players and PDAs are becoming ever more common. Most of them use flash chips as storage. To store data efficiently on flash, it is necessary to adapt ordinary file systems, because they are designed for use on hard disks. Most file systems use some kind of search tree to store index information, which is very important from a performance aspect. Here we improved the B+ search tree algorithm so as to make flash devices more efficient. Our implementation of this solution saves 98%-99% of the flash operations, and is now part of the Linux kernel.
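
    To give a feel for the kind of write reduction such an adaptation aims at, the sketch below caches dirty index nodes in RAM and flushes them to flash in batches, so many logical B+ tree updates cost only a few physical flash operations. This is an illustrative assumption of the general technique, not the paper's actual algorithm; the class name and the batching threshold are invented for the example.

      # Illustrative sketch only: an in-memory write-back cache for index nodes,
      # batching updates so that flash (which erases in large blocks) sees far
      # fewer operations than the number of logical B+ tree insertions.
      class FlashIndexCache:
          def __init__(self, flash_write, batch_size=64):
              self.flash_write = flash_write   # function(node_id, payload) doing the real flash I/O
              self.batch_size = batch_size     # assumed flush threshold, e.g. nodes per erase block
              self.dirty = {}                  # node_id -> latest in-RAM payload

          def update_node(self, node_id, payload):
              self.dirty[node_id] = payload    # overwrite in RAM; no flash operation yet
              if len(self.dirty) >= self.batch_size:
                  self.flush()

          def flush(self):
              for node_id, payload in self.dirty.items():
                  self.flash_write(node_id, payload)   # one physical write per distinct dirty node
              self.dirty.clear()

      # Usage: 10,000 updates touching 100 distinct nodes end up as ~100 flash writes.
      writes = []
      cache = FlashIndexCache(lambda nid, data: writes.append(nid), batch_size=128)
      for i in range(10000):
          cache.update_node(i % 100, b"serialized node")
      cache.flush()
      print(len(writes))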

  17. Magnetic Thin Films for Perpendicular Magnetic Recording Systems

    NASA Astrophysics Data System (ADS)

    Sugiyama, Atsushi; Hachisu, Takuma; Osaka, Tetsuya

    In the advanced information society of today, information storage technology, which helps to store a mass of electronic data and offers high-speed random access to the data, is indispensable. Against this background, hard disk drives (HDD), which are magnetic recording devices, have gained in importance because of their advantages in capacity, speed, reliability, and production cost. These days, the uses of HDD extend not only to personal computers and network servers but also to consumer electronics products such as personal video recorders, portable music players, car navigation systems, video games, video cameras, and personal digital assistances.

  18. Storage quality-of-service in cloud-based scientific environments: a standardization approach

    NASA Astrophysics Data System (ADS)

    Millar, Paul; Fuhrmann, Patrick; Hardt, Marcus; Ertl, Benjamin; Brzezniak, Maciej

    2017-10-01

    When preparing the Data Management Plan for larger scientific endeavors, PIs have to balance the most appropriate qualities of storage space over the planned data life-cycle against their price and the available funding. Storage properties can be the media type, implicitly determining access latency and durability of stored data, the number and locality of replicas, as well as available access protocols or authentication mechanisms. Negotiations between the scientific community and the responsible infrastructures generally happen upfront, where the amount of storage space, the media types (e.g., disk, tape and SSD) and the foreseeable data life-cycles are negotiated. With the introduction of cloud management platforms, both in computing and storage, resources can be brokered to achieve the best price per unit of a given quality. However, in order to allow the platform orchestrator to programmatically negotiate the most appropriate resources, a standard vocabulary for different properties of resources and a commonly agreed protocol to communicate them have to be available. In order to agree on a basic vocabulary for storage space properties, the storage infrastructure group in INDIGO-DataCloud, together with INDIGO-associated and external scientific groups, created a working group under the umbrella of the Research Data Alliance (RDA). As the communication protocol to query and negotiate storage qualities, the Cloud Data Management Interface (CDMI) has been selected. Necessary extensions to CDMI are defined in regular meetings between INDIGO and the Storage Networking Industry Association (SNIA). Furthermore, INDIGO is contributing to the SNIA CDMI reference implementation as the basis for interfacing the various storage systems in INDIGO to the agreed protocol and to provide an official open-source skeleton for systems not being maintained by INDIGO partners.
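
    As a concrete illustration of what a programmatic negotiation might look like, the sketch below queries a CDMI capabilities container over HTTP and prints a few quality-of-service attributes. The endpoint URL and the specific property names are assumptions for illustration only, not the exact vocabulary agreed by the RDA working group.

      # Hedged sketch: read a CDMI capabilities object to discover storage-quality
      # properties. Endpoint and QoS property names are illustrative placeholders.
      import json
      import urllib.request

      URL = "https://storage.example.org:8443/cdmi_capabilities/dataobject/disk"

      req = urllib.request.Request(
          URL,
          headers={
              "Accept": "application/cdmi-capability",        # CDMI capability media type
              "X-CDMI-Specification-Version": "1.1.1",
          },
      )
      with urllib.request.urlopen(req) as resp:
          caps = json.load(resp)["capabilities"]

      # Inspect a few assumed quality-of-service attributes.
      for key in ("cdmi_data_redundancy", "cdmi_latency", "cdmi_geographic_placement"):
          print(key, "=", caps.get(key, "<not advertised>"))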

  19. Up-to-date state of storage techniques used for large numerical data files

    NASA Technical Reports Server (NTRS)

    Chlouba, V.

    1975-01-01

    Methods for data storage and output in data banks and memory files are discussed along with a survey of equipment available for this. Topics discussed include magnetic tapes, magnetic disks, Terabit magnetic tape memory, Unicon 690 laser memory, IBM 1360 photostore, microfilm recording equipment, holographic recording, film readers, optical character readers, digital data storage techniques, and photographic recording. The individual types of equipment are summarized in tables giving the basic technical parameters.

  20. Automated Camouflage Pattern Generation Technology Survey.

    DTIC Science & Technology

    1985-08-07

    supported by high speed data communications? Costs: What are your rates? $/CPU hour: $/MB disk storage/day: $/connect hour: other charges: What are your... data to the workstation, tape drives are needed for backing up and archiving completed patterns, 256 megabytes of on-line hard disk space as a minimum...is needed to support multiple processes and data files, and 4 megabytes of actual or virtual memory is needed to process the largest expected single

  1. Demonstration of NICT Space Weather Cloud --Integration of Supercomputer into Analysis and Visualization Environment--

    NASA Astrophysics Data System (ADS)

    Watari, S.; Morikawa, Y.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Kato, H.; Shimojo, S.; Murata, K. T.

    2010-12-01

    In the Solar-Terrestrial Physics (STP) field, the spatio-temporal resolution of computer simulations keeps increasing thanks to the tremendous advancement of supercomputers. A further advance is Grid computing, which integrates distributed computational resources to provide scalable computing power. In simulation research it is effective for researchers to design their own physical models, perform the calculations on a supercomputer, and analyze and visualize the results with the methods they are familiar with. A supercomputer, however, is usually far removed from the analysis and visualization environment: researchers generally analyze and visualize on a workstation (WS) managed at hand, because installing and operating software on a WS is easy, so the data must be copied manually from the supercomputer to the WS. The time needed for this data transfer over a long-delay network in practice hinders high-accuracy simulations. In terms of usefulness, it is therefore important to integrate a supercomputer and an analysis and visualization environment seamlessly with the researcher's familiar methods. NICT has been developing a cloud computing environment, the NICT Space Weather Cloud. In the NICT Space Weather Cloud, disk servers are located near its supercomputer and the WSs for data analysis and visualization, and they are connected to JGN2plus, a high-speed network for research and development. A distributed virtual high-capacity storage system is also constructed with Grid Datafarm (Gfarm v2), and huge data sets output from the supercomputer are transferred to the virtual storage through JGN2plus. A researcher can thus concentrate on the research with familiar methods, without regard to the distance between the supercomputer and the analysis and visualization environment. At present, a total of 16 disk servers are set up at NICT headquarters (Koganei, Tokyo), the JGN2plus NOC (Otemachi, Tokyo), the Okinawa Subtropical Environment Remote-Sensing Center, and the Cybermedia Center, Osaka University. They are connected over JGN2plus and constitute a 1 PB (physical size) virtual storage system under Gfarm v2. These disk servers are connected with the supercomputers of NICT and Osaka University, and a system has been built that automatically transfers data output from the supercomputers to the virtual storage. The measured transfer rate is about 50 GB/hour, which is estimated to be adequate for a representative simulation and analysis task, the reconstruction of coronal magnetic fields. This work is regarded as an experiment with the system, and verification of its practicality is being advanced at the same time. Herein we introduce an overview of the space weather cloud system developed so far and demonstrate several scientific results obtained with it. We also introduce several web applications offered as a service of the space weather cloud, named "e-SpaceWeather" (e-SW), which provides a variety of online space weather services.

  2. LVFS: A Scalable Petabye/Exabyte Data Storage System

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.

    2013-12-01

    Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. First, is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset, and, second, minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system. The LVFS system replaces the NFS disk mounting approach of LAADS and utilizes the already existing highly optimized metadata database server, which is applicable to most scientific big data intensive compute systems. Thus, LVFS ties the existing storage system with the existing metadata infrastructure system which we believe leads to a scalable exabyte virtual file system. The uniqueness of the implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing the Filesystem In Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument or product without the need for duplication, symbolic links, or any other replication methods. We proposed here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata file.
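
    As a rough illustration of answering directory lookups from an existing metadata database rather than from an on-disk directory tree, the sketch below resolves a virtual "by date/product" listing with a single indexed query. The table layout, column names, and virtual path scheme are assumptions made for the example and are not the actual LAADS/LVFS metadata model.

      # Hedged sketch: serve a virtual directory listing straight from a metadata
      # database instead of a filesystem directory. Schema and path layout are
      # illustrative assumptions only.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE granules (
                         product TEXT, date TEXT, filename TEXT, physical_path TEXT)""")
      con.executemany(
          "INSERT INTO granules VALUES (?, ?, ?, ?)",
          [
              ("MOD021KM", "2013-06-01", "MOD021KM.A2013152.0000.hdf", "/disk042/a/f1.hdf"),
              ("MOD021KM", "2013-06-01", "MOD021KM.A2013152.0005.hdf", "/disk017/b/f2.hdf"),
              ("VNP02IMG", "2013-06-01", "VNP02IMG.A2013152.0000.nc", "/disk233/c/f3.nc"),
          ],
      )

      def readdir(virtual_path):
          # List a virtual directory like /by-date/2013-06-01/MOD021KM without any
          # real directory existing on disk; one indexed query replaces the lookup.
          _, _, date, product = virtual_path.split("/")
          rows = con.execute(
              "SELECT filename FROM granules WHERE date = ? AND product = ?",
              (date, product),
          )
          return [r[0] for r in rows]

      print(readdir("/by-date/2013-06-01/MOD021KM"))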

  3. CD-ROM source data uploaded to the operating and storage devices of an IBM 3090 mainframe through a PC terminal.

    PubMed

    Boros, L G; Lepow, C; Ruland, F; Starbuck, V; Jones, S; Flancbaum, L; Townsend, M C

    1992-07-01

    A powerful method of processing MEDLINE and CINAHL source data uploaded to the IBM 3090 mainframe computer through an IBM/PC is described. Data are first downloaded from the CD-ROM's PC devices to floppy disks. These disks are then uploaded to the mainframe computer through an IBM/PC equipped with the WordPerfect text editor and a computer network connection (SONNGATE). Before downloading, keywords specifying the information to be accessed are typed at the FIND prompt of the CD-ROM station. The resulting abstracts are downloaded into a file called DOWNLOAD.DOC. The floppy disks containing the information are simply carried to an IBM/PC which has a terminal emulation (TELNET) connection to the university-wide computer network (SONNET) at the Ohio State University Academic Computing Services (OSU ACS). WordPerfect (5.1) processes and saves the text in DOS format. Using the File Transfer Protocol (FTP, 130,000 bytes/s) of SONNET, the entire text containing the information obtained through the MEDLINE and CINAHL search is transferred to the remote mainframe computer for further processing. At this point, abstracts in the specified area are ready for immediate access and multiple retrieval by any PC having a network switch or dial-in connection, after the USER ID, PASSWORD and ACCOUNT NUMBER are specified by the user. The system provides the user with an on-line, very powerful and quick method of searching for words specifying diseases, agents, experimental methods, animals, authors, and journals in the research area downloaded. The user can also copy the TItles, AUthors and SOurce, with optional parts of abstracts, into manuscripts being edited. This arrangement serves the special demands of a research laboratory by handling MEDLINE and CINAHL source data resulting from a search performed with keywords specified for ongoing projects. Since the Ohio State University has a centrally funded mainframe system, the data upload, storage and mainframe operations are free.
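
    For readers unfamiliar with the transfer step, the sketch below shows the general shape of such an FTP upload in modern Python. The host name, credentials, and file names are placeholders, and the original workflow used the SONNET/TELNET environment rather than this script; the example only illustrates the protocol step.

      # Hedged sketch: push a locally prepared text file (e.g. a DOS-format export
      # of downloaded abstracts) to a remote host over FTP. All identifiers are
      # placeholders for illustration.
      from ftplib import FTP

      HOST = "mainframe.example.edu"      # placeholder for the remote system
      USER = "user01"                     # placeholder USER ID
      PASSWD = "secret"                   # placeholder PASSWORD

      with FTP(HOST) as ftp:
          ftp.login(user=USER, passwd=PASSWD)
          with open("DOWNLOAD.DOC", "rb") as fh:
              # STOR uploads the file under the given remote name; storlines()
              # could be used instead if line-ending translation were required.
              ftp.storbinary("STOR DOWNLOAD.DOC", fh)
          print(ftp.nlst())               # confirm the file arrived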

  4. Disk space and load time requirements for eye movement biometric databases

    NASA Astrophysics Data System (ADS)

    Kasprowski, Pawel; Harezlak, Katarzyna

    2016-06-01

    Biometric identification is a very popular area of interest nowadays. Problems with the so-called physiological methods, like fingerprint or iris recognition, have resulted in increased attention being paid to methods measuring behavioral patterns. Eye movement based biometric (EMB) identification is one of the interesting behavioral methods, and due to the intensive development of eye tracking devices it has become possible to define new methods for eye movement signal processing. Such a method should be supported by efficient storage used to collect eye movement data and provide it for further analysis. The aim of the research was to check various setups enabling such a storage choice. Various aspects were taken into consideration, such as disk space usage and the time required for loading and saving the whole data set or chosen parts of it.

  5. Archiving and Distributing Seismic Data at the Southern California Earthquake Data Center (SCEDC)

    NASA Astrophysics Data System (ADS)

    Appel, V. L.

    2002-12-01

    The Southern California Earthquake Data Center (SCEDC) archives and provides public access to earthquake parametric and waveform data gathered by the Southern California Seismic Network and since January 1, 2001, the TriNet seismic network, southern California's earthquake monitoring network. The parametric data in the archive includes earthquake locations, magnitudes, moment-tensor solutions and phase picks. The SCEDC waveform archive prior to TriNet consists primarily of short-period, 100-samples-per-second waveforms from the SCSN. The addition of the TriNet array added continuous recordings of 155 broadband stations (20 samples per second or less), and triggered seismograms from 200 accelerometers and 200 short-period instruments. Since the Data Center and TriNet use the same Oracle database system, new earthquake data are available to the seismological community in near real-time. Primary access to the database and waveforms is through the Seismogram Transfer Program (STP) interface. The interface enables users to search the database for earthquake information, phase picks, and continuous and triggered waveform data. Output is available in SAC, miniSEED, and other formats. Both the raw counts format (V0) and the gain-corrected format (V1) of COSMOS (Consortium of Organizations for Strong-Motion Observation Systems) are now supported by STP. EQQuest is an interface to prepackaged waveform data sets for select earthquakes in Southern California stored at the SCEDC. Waveform data for large-magnitude events have been prepared and new data sets will be available for download in near real-time following major events. The parametric data from 1981 to present has been loaded into the Oracle 9.2.0.1 database system and the waveforms for that time period have been converted to mSEED format and are accessible through the STP interface. The DISC optical-disk system (the "jukebox") that currently serves as the mass-storage for the SCEDC is in the process of being replaced with a series of inexpensive high-capacity (1.6 Tbyte) magnetic-disk RAIDs. These systems are built with PC-technology components, using 16 120-Gbyte IDE disks, hot-swappable disk trays, two RAID controllers, dual redundant power supplies and a Linux operating system. The system is configured over a private gigabit network that connects to the two Data Center servers and spans between the Seismological Lab and the USGS. To ensure data integrity, each RAID disk system constantly checks itself against its twin and verifies file integrity using 128-bit MD5 file checksums that are stored separate from the system. The final level of data protection is a Sony AIT-3 tape backup of the files. The primary advantage of the magnetic-disk approach is faster data access because magnetic disk drives have almost no latency. This means that the SCEDC can provide better "on-demand" interactive delivery of the seismograms in the archive.
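
    The integrity check described above is conceptually simple; the sketch below shows how stored 128-bit MD5 digests can be recomputed and compared to detect silent corruption on a disk array. The file paths and the md5sum-style checksum-list format are assumptions for the example, not the SCEDC's actual tooling.

      # Hedged sketch: recompute MD5 digests for archived files and compare them
      # with digests stored separately, flagging any mismatch as possible corruption.
      import hashlib
      from pathlib import Path

      def md5_of(path, chunk=1 << 20):
          h = hashlib.md5()
          with open(path, "rb") as fh:
              for block in iter(lambda: fh.read(chunk), b""):
                  h.update(block)
          return h.hexdigest()

      def verify(checksum_list):
          # checksum_list: text file with lines "<hex digest>  <file path>"
          bad = []
          for line in Path(checksum_list).read_text().splitlines():
              expected, name = line.split(maxsplit=1)
              if md5_of(name) != expected:
                  bad.append(name)
          return bad

      # Example: print(verify("/archive/checksums/raid01.md5"))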

  6. Dual-probe near-field fiber head with gap servo control for data storage applications.

    PubMed

    Fang, Jen-Yu; Tien, Chung-Hao; Shieh, Han-Ping D

    2007-10-29

    We present a novel fiber-based near-field optical head consisting of a straw-shaped writing probe and a flat gap sensing probe. The straw-shaped probe with a C-aperture on the end face exhibits enhanced transmission by a factor of 3 orders of magnitude over a conventional fiber probe due to a hybrid effect that excites both propagation modes and surface plasmon waves. In the gap sensing probe, the spacing between the probe and the disk surface functions as an external cavity. The high sensitivity of the output power to the change in the gap width is used as a feedback control signal. We characterize and design the straw-shaped writing probe and the flat gap sensing probe. The dual-probe system is installed on a conventional biaxial actuator to demonstrate the capability of flying over a disk surface with nanometer position precision.

  7. System and method for manipulating domain pinning and reversal in ferromagnetic materials

    DOEpatents

    Silevitch, Daniel M.; Rosenbaum, Thomas F.; Aeppli, Gabriel

    2013-10-15

    A method for manipulating domain pinning and reversal in a ferromagnetic material comprises applying an external magnetic field to a uniaxial ferromagnetic material comprising a plurality of magnetic domains, where each domain has an easy axis oriented along a predetermined direction. The external magnetic field is applied transverse to the predetermined direction and at a predetermined temperature. The strength of the magnetic field is varied at the predetermined temperature, thereby isothermally regulating pinning of the domains. A magnetic storage device for controlling domain dynamics includes a magnetic hard disk comprising a uniaxial ferromagnetic material, a magnetic recording head including a first magnet, and a second magnet. The ferromagnetic material includes a plurality of magnetic domains each having an easy axis oriented along a predetermined direction. The second magnet is positioned adjacent to the magnetic hard disk and is configured to apply a magnetic field transverse to the predetermined direction.

  8. Fast scintillation counter system and performance

    NASA Technical Reports Server (NTRS)

    Sasaki, H.; Nishioka, A.; Ohmori, N.; Kusumose, M.; Nakatsuka, T.; Horiki, T.; Hatano, Y.

    1985-01-01

    An experimental study of the fast scintillation counter (FS) system used to observe the shower disk structure at Mt. Norikura is described, especially the system performance and the pulse waveform produced by single charged particles. The photomultiplier tube (PT) pulse appears at the leading edge of the main pulse. To remove this PT pulse from the main pulse, the frame of the scintillator vessel was changed. A fast triggering system was built to decrease the dead time that came from using the self-triggering function of the storage oscilloscope (OSC). To open a new field in the multi-parameter study of cosmic ray showers, the system response of the FS system was also improved as a result of many considerations.

  9. Closet to Cloud: The online archiving of tape-based continuous NCSN seismic data from 1993-2005

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Aranha, M. A.; Kohler, W. M.; Oppenheimer, D.

    2016-12-01

    As earthquake monitoring systems in the 1980s moved from analog to digital recording systems, most seismic networks only archived digital waveforms from detected events due to the lack of affordable online digital storage for continuous high-rate (100 sps) data. The Northern California Earthquake Data Center (NCEDC), established in 1991 by UC Berkeley and the USGS Menlo Park, archived 20 sps continuous data and triggered high-rate data from the sparse Berkeley seismic network, but could not afford the online storage for continuous high-rate data from the 300+ stations of the USGS Northern California Seismic Network (NCSN). The discovery of non-volcanic tremor and the use of continuous waveform correlation techniques for detecting repeating earthquakes, combined with the increase in disk capacity and significant reduction in disk costs, led the NCEDC to begin archiving continuous high-rate waveforms in 2004-2005. The USGS Menlo Park NCSN network had backup tapes of continuous high-rate waveform data since 1993 on the shelf, and the USGS and NCEDC embarked on a project to restore and archive all continuous NCSN data from 1993 through 2005. We will discuss the procedures and problems encountered when reading the tapes, transcribing and converting data formats, assigning SEED channel names, and archiving the 1993-2005 continuous NCSN waveforms. We will also illustrate new science enabled by these data. These and other northern California seismic and geophysical data are available via web services at http://service.ncedc.org

  10. Effect of storage temperature on survival and recovery of thermal and extrusion injured Escherichia coli K-12 in whey protein concentrate and corn meal.

    PubMed

    Ukuku, Dike O; Mukhopadhyay, Sudarsan; Onwulata, Charles

    2013-01-01

    Previously, we reported inactivation of Escherichia coli populations in corn product (CP) and whey protein product (WPP) extruded at different temperatures. However, information on the effect of storage temperatures on injured bacterial populations was not addressed. In this study, the effect of storage temperatures on the survival and recovery of thermal death time (TDT) disk- and extrusion-injured E. coli populations in CP and WPP was investigated. CP and WPP inoculated with E. coli bacteria at 7.8 log(10) CFU/g were conveyed separately into the extruder with a series 6300 digital type T-35 twin screw volumetric feeder set at a speed of 600 rpm and extruded at 35°C, 55°C, 75°C, and 95°C, or thermally treated with TDT disks submerged in a water bath set at 35°C, 55°C, 75°C, and 95°C for 120 s. Populations of surviving bacteria, including injured cells, in all treated samples were determined immediately and every day for 5 days, and up to 10 days for untreated samples, during storage at 5°C, 10°C, and 23°C. TDT disk treatment at 35°C and 55°C did not cause significant changes in the population of the surviving bacteria, including injured populations. Extrusion treatment at 35°C and 55°C led to significant (p<0.05) reduction of E. coli populations in WPP as opposed to CP. The injured populations among the surviving E. coli cells in CP and WPP extruded at all temperatures tested were inactivated during storage. The population of E. coli inactivated in samples extruded at 75°C was significantly (p<0.05) different from that at 55°C during storage. The percentage of injured population could not be determined in samples extruded at 95°C due to the absence of colony-forming units on the agar plates. The results of this study showed that further inactivation of the injured populations occurred during storage at 5°C for 5 days, suggesting the need for immediate storage of 75°C extruded CP and WPP at 5°C for at least 24 h to enhance their microbial safety.

  11. RIS integrated IMAC system

    NASA Astrophysics Data System (ADS)

    Angelhed, Jan-Erik; Carlsson, Goeran; Gustavsson, Staffan; Karlsson, Anders; Larsson, Lars E. G.; Svensson, Sune; Tylen, Ulf

    1998-07-01

    An Image Management And Communication (IMAC) system adapted to the X-ray department at Sahlgrenska University Hospital has been developed using standard components. Two user demands have been considered primary: rapid access to (display of) images and efficient worklist management. To fulfil these demands, a connection between the IMAC system and the existing Radiological Information System (RIS) has been implemented. The functional modules are: a check of information consistency in data exported from image sources, a (logically) central storage of image data, a viewing facility for high-speed, large-volume clinical work, and an efficient interface to the RIS. Also, an image-related database extension has been made to the RIS. The IMAC system has a strictly modular design with a simple structure. The image archive and short-term storage are logically the same and act as one huge disk. Through NFS, all image data are available to all the connected workstations. All patient selection for viewing is through worklists, which are created by selection criteria in the RIS, by the use of barcodes, or, in singular cases, by entering the patient ID by hand.

  12. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scaleable, meaning that the same software and hardware is used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.

  13. A high-speed network for cardiac image review.

    PubMed Central

    Elion, J. L.; Petrocelli, R. R.

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scaleable, meaning that the same software and hardware is used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964

  14. Online data handling and storage at the CMS experiment

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
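
    To make the file-based bookkeeping idea concrete, the sketch below writes small per-source JSON documents next to the data files and then aggregates them in a merger-like step. The field names, file-naming scheme, and directory layout are illustrative assumptions, not the actual CMS DAQ schema.

      # Hedged sketch: file-based bookkeeping with small JSON metadata documents.
      # Each data-producing source writes a JSON file describing its output; a
      # merger-style pass then sums them up.
      import json
      from pathlib import Path

      outdir = Path("sts_demo")
      outdir.mkdir(exist_ok=True)

      # Each source describes the data file it has just closed.
      for source in range(4):
          doc = {
              "source": f"bu{source:02d}",
              "data_file": f"run000001_ls0001_bu{source:02d}.dat",
              "events": 2500 + source,
              "bytes": 1_000_000 * (source + 1),
          }
          (outdir / f"run000001_ls0001_bu{source:02d}.jsn").write_text(json.dumps(doc))

      # Merger-like aggregation: one summary document per luminosity section.
      docs = [json.loads(p.read_text()) for p in outdir.glob("run000001_ls0001_*.jsn")]
      summary = {
          "run": 1,
          "lumisection": 1,
          "events": sum(d["events"] for d in docs),
          "bytes": sum(d["bytes"] for d in docs),
          "inputs": sorted(d["data_file"] for d in docs),
      }
      print(json.dumps(summary, indent=2))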

  15. Online Data Handling and Storage at the CMS Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J. M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.

  16. Evaluation of the effectiveness of different brands' disks in antimicrobial disk susceptibility tests.

    PubMed

    Lam, C P; Tsai, W C

    1989-08-01

    A total of 813 routine isolates of aerobic and facultatively anaerobic bacteria were employed to determine the efficacy of different brands (Oxoid, Difco, BBL) of antimicrobial disks, using disk antimicrobial susceptibility tests, for a total of 22 kinds of antimicrobial disks and 10,740 antibiotic-organism comparisons. Major positive and major negative discrepancies in results were defined as a change from "susceptible" to "both resistant", and a change from "resistant" to "both susceptible", according to the National Committee for Clinical Laboratory Standards' interpretive standards for zone diameters. Minor positive and minor negative discrepancies were defined as a change from "susceptible" to "both intermediate", or "intermediate" to "both resistant"; and a change from "resistant" to "both intermediate", or "intermediate" to "both susceptible". The overall agreements of the Oxoid, Difco, and BBL systems were 98%, 98.7%, and 98.4% respectively, and their differences are not statistically significant. The representative discrepancy patterns of the three brands for the different kinds of antimicrobial disks are further analyzed: (A) In the Oxoid series, there were 220 discrepancies. Minor negative discrepancies were predominant, most frequently related to carbenicillin (25), gentamicin (13) and cephalothin (10). Besides minor negative discrepancies, carbenicillin also had six minor positive discrepancies. Tetracycline had ten minor positive discrepancies. (B) In the Difco series, there were 137 discrepancies. The majority of them were minor positive discrepancies. Moxalactam (11) and cefotaxime (10) were the most common antibiotics involved. (C) In the BBL series, there were 170 discrepancies. Minor positive discrepancies were predominant, mostly related to carbenicillin (24), amikacin (13), and ceftizoxime (12). In addition, tetracycline had 24 minor negative discrepancies. Laboratory workers must pay attention to these different patterns. In order to evaluate the quality of 11 pairs of give-away and purchased BBL disks, we also compared the results for these 813 routine isolates (a total of 5,482 antibiotic-organism comparisons). The give-away disks demonstrated 99.1% overall agreement with the purchased disks. There were 48 minor discrepancies [26 (0.47%) minor positive discrepancies and 22 (0.4%) minor negative discrepancies]. These results allow this study to emphasize the following points in order to raise the awareness of laboratory workers: (i) the alteration of disk efficacy during transportation and storage; (ii) major considerations in choosing different brands of antimicrobial disks; and (iii) the important roles played by salespersons and pharmaceutical companies in achieving sound results.

  17. Performance of the engineering analysis and data system 2 common file system

    NASA Technical Reports Server (NTRS)

    Debrunner, Linda S.

    1993-01-01

    The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in a RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users were migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II including both disk and tape storage. UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs which have been corrected. Other problems were associated with hardware. However, the use of NFS in UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper will first describe the EADS II with emphasis on the CFS. Then, a discussion of mass storage systems will be presented, and methods of measuring the performance of the Common File System will be outlined. Finally, areas for further study will be identified and conclusions will be drawn.

  18. Investigation of selected disk systems

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The large disk systems offered by IBM, UNIVAC, Digital Equipment Corporation, and Data General were examined. In particular, these disk systems were analyzed in terms of how well available operating systems take advantage of the respective disk controller's transfer rates, and to what degree all available data for optimizing disk usage is effectively employed. In the course of this analysis, generic functions and components of disk systems were defined and the capabilities of the surveyed disk system were investigated.

  19. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment.

    PubMed

    Keller, Mark; Naue, Jana; Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups adapted to forensic standards. For the first time we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at a constant rotational frequency in an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. In 92.2% of the performed tests the sample handling was recognized as fluidically failure-free, and these tests were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis, reducing hands-on time and circumventing the risk of contamination associated with regular nested PCR protocols.

  20. The EOSDIS software challenge

    NASA Astrophysics Data System (ADS)

    Jaworski, Allan

    1993-08-01

    The Earth Observing System (EOS) Data and Information System (EOSDIS) will serve as a major resource for the earth science community, supporting both command and control of complex instruments onboard the EOS spacecraft and the archiving, distribution, and analysis of data. The scale of EOSDIS and the volume of multidisciplinary research to be conducted using EOSDIS resources will produce unparalleled needs for technology transparency, data integration, and system interoperability. The scale of this effort far exceeds that of any previous scientific data system in its breadth and its operational and performance needs. Modern hardware technology can meet the EOSDIS technical challenge. Multiprocessing speeds of many giga-flops are being realized by modern computers. Online storage disk, optical disk, and videocassette libraries with storage capacities of many terabytes are now commercially available. Radio frequency and fiber optics communications networks with gigabit rates are demonstrable today. It remains, of course, to perform the system engineering to establish the requirements, architectures, and designs that will implement the EOSDIS systems. Software technology, however, has not enjoyed the price/performance advances of hardware. Although we have learned to engineer hardware systems which have several orders of magnitude greater complexity and performance than those built in the 1960's, we have not made comparable progress in dramatically reducing the cost of software development. This lack of progress may significantly reduce our capabilities to achieve economically the types of highly interoperable, responsive, integrated, and productive environments which are needed by the earth science community. This paper describes some of the EOSDIS software requirements and current activities in the software community which are applicable to meeting the EOSDIS challenge. Some of these areas include intelligent user interfaces, software reuse libraries, and domain engineering. Also included are discussions of applicable standards in the areas of operating systems interfaces, user interfaces, communications interfaces, data transport, and science algorithm support, and their role in supporting the software development process.

  1. The advantage of an alternative substrate over Al/NiP disks

    NASA Astrophysics Data System (ADS)

    Jiaa, Chi L.; Eltoukhy, Atef

    1994-02-01

    Compact-size disk drives with high storage densities are in high demand due to the popularity of portable computers and workstations. The contact-start-stop (CSS) endurance performance must improve in order to accommodate the higher number of on/off cycles. In this paper, we looked at 65 mm thin-film canasite substrate disks and evaluated their mechanical performance. We compared them with conventional aluminum NiP-plated disks in surface topography, take-off time with changes of skew angle and radius, CSS, drag test and glide height performance, and clamping effect. In addition, a new post-sputter process aimed at improving take-off, glide, and CSS performance was investigated and demonstrated for the canasite disks. The test results indicate that canasite achieved a lower take-off velocity, higher clamping resistance, and better glide height and CSS endurance performance. This study concludes that a new-generation disk drive equipped with canasite substrate disks will consume less power from the motor due to faster take-off and lighter weight, achieve higher recording density since the head flies lower, better withstand damage from sliding friction during CSS operations, and be less prone to disk distortion from clamping due to its superior mechanical properties.

  2. Design and implementation of scalable tape archiver

    NASA Technical Reports Server (NTRS)

    Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio

    1996-01-01

    In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high performance RISC microprocessor-based workstations, which are further being replaced by the commodity microprocessors used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since the recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with the failure of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. Background migration transfers cassettes between element archivers to redistribute frequently accessed cassettes, thus balancing the load of each archiver. Background migration occurs while the robotics are idle. Both migration algorithms are based on the access frequency and space utilization of each element archiver. Because these parameters are normalized by the number of drives in each element archiver, it is possible to maintain high performance even if some tape drives fail. We found that the foreground migration is efficient at reducing access response time. Besides the foreground migration, the background migration makes it possible to track the transition of spatial access locality quickly.
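
    As a rough illustration of the load-balancing idea, the sketch below moves the hottest cassettes from overloaded element archivers to underloaded ones, normalizing load by the number of live drives. The scoring rule, the imbalance threshold, and all names are assumptions made for the example, not the paper's actual migration algorithms.

      # Hedged sketch: background migration that rebalances "hot" cassettes between
      # element archivers. Load is normalized by the number of live drives so that
      # an archiver with failed drives is treated as proportionally smaller.
      from dataclasses import dataclass, field

      @dataclass
      class ElementArchiver:
          name: str
          live_drives: int
          cassettes: dict = field(default_factory=dict)   # cassette id -> access frequency

          def load(self):
              # accesses per live drive; no working drives means "infinitely" loaded
              total = sum(self.cassettes.values())
              return total / self.live_drives if self.live_drives else float("inf")

      def background_migrate(archivers, imbalance=1.5):
          # Move the hottest cassette from the most loaded to the least loaded
          # archiver while their normalized loads differ by more than `imbalance`.
          moves = []
          while True:
              src = max(archivers, key=lambda a: a.load())
              dst = min(archivers, key=lambda a: a.load())
              if not src.cassettes or src.load() < imbalance * max(dst.load(), 1e-9):
                  return moves
              hot = max(src.cassettes, key=src.cassettes.get)
              dst.cassettes[hot] = src.cassettes.pop(hot)
              moves.append((hot, src.name, dst.name))

      a = ElementArchiver("A", live_drives=2, cassettes={"t1": 90, "t2": 80, "t3": 5})
      b = ElementArchiver("B", live_drives=2, cassettes={"t4": 3})
      print(background_migrate([a, b]))   # expect the hottest cassette to move to B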

  3. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1992-01-01

    In the future, NASA expects to gather over a tera-byte per day of data requiring space for levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation where data rates of over 1 billion bits per second are achievable with current technology.

  4. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1993-01-01

    In the future, NASA expects to gather over a tera-byte per day of data requiring space for levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation where data rates of over 1 billion bits per second are achievable with current technology.

  5. Standards on the permanence of recording materials

    NASA Astrophysics Data System (ADS)

    Adelstein, Peter Z.

    1996-02-01

    The permanence of recording materials is dependent upon many factors, and these differ for photographic materials, magnetic tape and optical disks. Photographic permanence is affected by the (1) stability of the material, (2) the photographic processing and (3) the storage conditions. American National Standards on the material and the processing have been published for different types of film and standard test methods have been established for color film. The third feature of photographic permanence is the storage requirements and these have been established for photographic film, prints and plates. Standardization on the permanence of electronic recording materials is more complicated. As with photographic materials, stability is dependent upon (1) the material itself and (2) the storage environment. In addition, retention of the necessary (3) hardware and (4) software is also a prerequisite. American National Standards activity in these areas has been underway for the past six years. A test method for the material which determines the life expectancy of CD-ROMs has been standardized. The problems of determining the expected life of magnetic tape have been more formidable but the critical physical properties have been determined. A specification for the storage environment of magnetic tape has been finalized and one on the storage of optical disks is being worked on. Critical but unsolved problems are the obsolescence of both the hardware and the software necessary to read digital images.

  6. Standards on the permanence of recording materials

    NASA Astrophysics Data System (ADS)

    Adelstein, Peter Z.

    1996-01-01

    The permanence of recording materials is dependent upon many factors, and these differ for photographic materials, magnetic tape and optical disks. Photographic permanence is affected by the (1) stability of the material, (2) the photographic processing, and (3) the storage conditions. American National Standards on the material and the processing have been published for different types of film and standard test methods have been established for color film. The third feature of photographic permanence is the storage requirements and these have been established for photographic film, prints, and plates. Standardization on the permanence of electronic recording materials is more complicated. As with photographic materials, stability is dependent upon (1) the material itself and (2) the storage environment. In addition, retention of the necessary (3) hardware and (4) software is also a prerequisite. American National Standards activity in these areas has been underway for the past six years. A test method for the material which determines the life expectancy of CD-ROMs has been standardized. The problems of determining the expected life of magnetic tape have been more formidable but the critical physical properties have been determined. A specification for the storage environment of magnetic tapes has been finalized and one on the storage of optical disks is being worked on. Critical but unsolved problems are the obsolescence of both the hardware and the software necessary to read digital images.

  7. VMOMS — A computer code for finding moment solutions to the Grad-Shafranov equation

    NASA Astrophysics Data System (ADS)

    Lao, L. L.; Wieland, R. M.; Houlberg, W. A.; Hirshman, S. P.

    1982-08-01

    Title of program: VMOMS
    Catalogue number: ABSH
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue)
    Computer: PDP-10/KL10; Installation: ORNL Fusion Energy Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
    Operating system: TOPS 10
    Programming language used: FORTRAN
    High speed storage required: 9000 words
    No. of bits in a word: 36
    Overlay structure: none
    Peripherals used: line printer, disk drive
    No. of cards in combined program and test deck: 2839
    Card punching code: ASCII

  8. Proceedings of the Federal Acquisition Research Symposium with Theme: Government, Industry, Academe: Synergism for Acquisition Improvement, Held at the Williamsburg Hilton and National Conference Center, Williamsburg, Virginia on 7-9 December 1983

    DTIC Science & Technology

    1983-12-01

    storage included room for not only the video display incompatibilities which have been plaguing the terminal (VDT), but also for the disk drive, the...once at system implementation time. This sample Video Display Terminal (VDT) screen shows the Appendix N Code...override the value with a different data value. Video Display Terminal (VDT): A cathode ray tube or gas plasma tube display screen terminal that allows

  9. Efficient micromagnetics for magnetic storage devices

    NASA Astrophysics Data System (ADS)

    Escobar Acevedo, Marco Antonio

    Micromagnetics is an important component for advancing the understanding and design of magnetic nanostructures. Numerous existing and prospective magnetic devices rely on micromagnetic analysis; these include hard disk drives, magnetic sensors, memories, microwave generators, and magnetic logic. The ability to examine, describe, and predict the magnetic behavior and macroscopic properties of nanoscale magnetic systems is essential for improving existing devices, for progressing in their understanding, and for enabling new technologies. This dissertation describes efficient micromagnetic methods as required for magnetic storage analysis. Their performance and accuracy are demonstrated through realistic, complex, and relevant micromagnetic case studies. An efficient methodology for dynamic micromagnetics in large-scale simulations is used to study the writing process in a full-scale model of a magnetic write head. An efficient scheme, tailored for micromagnetics, to find the minimum energy state of a magnetic system is presented; this scheme can be used to calculate hysteresis loops. An efficient scheme, tailored for micromagnetics, to find the minimum energy path between two stable states of a magnetic system is also presented. This minimum energy path is intimately related to thermal stability.

  10. System and Method for High-Speed Data Recording

    NASA Technical Reports Server (NTRS)

    Taveniku, Mikael B. (Inventor)

    2017-01-01

    A system and method for high speed data recording includes a control computer and a disk pack unit. The disk pack is provided within a shell that provides handling and protection for the disk packs. The disk pack unit provides cooling of the disks and connection for power and disk signaling. A standard connection is provided between the control computer and the disk pack unit. The disk pack units are self sufficient and able to connect to any computer. Multiple disk packs are connected simultaneously to the system, so that one disk pack can be active while one or more disk packs are inactive. To control for power surges, the power to each disk pack is controlled programmatically for the group of disks in a disk pack.

  11. Integration experiences and performance studies of A COTS parallel archive systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Bary

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.

  12. Integration experiments and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting the changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing, but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds, through features such as more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging the free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address the requirements of future archival storage systems.

  13. Grid data access on widely distributed worker nodes using scalla and SRM

    NASA Astrophysics Data System (ADS)

    Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.

    2008-07-01

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial compared with expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning of files), storage policies or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing the 350 TB Storage Elements, and the experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and the approach taken to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  14. Battery voltage-balancing applications of disk-type radial mode Pb(Zr • Ti)O3 ceramic resonator

    NASA Astrophysics Data System (ADS)

    Thenathayalan, Daniel; Lee, Chun-gu; Park, Joung-hu

    2017-10-01

    In this paper, we propose a novel technique to build a charge-balancing circuit for series-connected battery strings using various kinds of disk-type ceramic Pb(Zr • Ti)O3 piezoelectric resonators (PRs). The use of PRs replaces the whole external battery voltage-balancer circuit, which consists mainly of a bulky magnetic element. The proposed technique is validated using different ceramic PRs and the results are analyzed in terms of their physical properties. A series-connected battery string with a voltage rating of 61.5 V is set as a hardware prototype under test, then the power transfer efficiency of the system is measured at different imbalance voltages. The performance of the proposed battery voltage-balancer circuit employed with a PR is also validated through hardware implementation. Furthermore, the temperature distribution image of the PR is obtained to compare power transfer efficiency and thermal stress under different operating conditions. The test results show that the battery voltage-balancer circuit can be successfully implemented using PRs with the maximum power conversion efficiency of over 96% for energy storage systems.

  15. Effects of Disk Warping on the Inclination Evolution of Star-Disk-Binary Systems

    NASA Astrophysics Data System (ADS)

    Zanazzi, J. J.; Lai, Dong

    2018-04-01

    Several recent studies have suggested that circumstellar disks in young stellar binaries may be driven into misalignment with their host stars due to secular gravitational interactions between the star, the disk, and the binary companion. The disk in such systems is twisted/warped due to the gravitational torques from the oblate central star and the external companion. We calculate the disk warp profile, taking into account bending wave propagation and viscosity in the disk. We show that for typical protostellar disk parameters, the disk warp is small, thereby justifying the "flat-disk" approximation adopted in previous theoretical studies. However, the viscous dissipation associated with the small disk warp/twist tends to drive the disk toward alignment with the binary or the central star. We calculate the relevant timescales for the alignment. We find the alignment is effective for sufficiently cold disks with strong external torques, especially for systems with rapidly rotating stars, but is ineffective for the majority of star-disk-binary systems. Viscous warp-driven alignment may be necessary to account for the observed spin-orbit alignment in multi-planet systems if these systems are accompanied by an inclined binary companion.

  16. Data Elevator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BYNA, SURENDRA; DONG, BIN; WU, KESHENG

    Data Elevator: Efficient Asynchronous Data Movement in Hierarchical Storage Systems. Multi-layer storage subsystems, including SSD-based burst buffers and disk-based parallel file systems (PFS), are becoming part of HPC systems. However, software for this storage hierarchy is still in its infancy, and applications may have to explicitly move data among the storage layers. We propose Data Elevator for transparently and efficiently moving data between a burst buffer and a PFS. Users specify the final destination for their data, typically on the PFS; Data Elevator intercepts the I/O calls, stages the data on the burst buffer, and then asynchronously transfers the data to their final destination in the background. This system allows extensive optimizations, such as overlapping read and write operations, choosing I/O modes, and aligning buffer boundaries. In tests with large-scale scientific applications, Data Elevator is as much as 4.2X faster than Cray DataWarp, the state-of-the-art software for burst buffers, and 4X faster than directly writing to the PFS. The Data Elevator library uses HDF5's Virtual Object Layer (VOL) for intercepting parallel I/O calls that write data to the PFS. The intercepted calls are redirected to the Data Elevator, which provides a handle to write the file to a faster, intermediate burst buffer system. Once the application finishes writing the data to the burst buffer, the Data Elevator job uses HDF5 to move the data to the final destination in an asynchronous manner. Hence, the Data Elevator library is currently useful for applications that call HDF5 for writing data files. The Data Elevator also depends on the HDF5 VOL functionality.
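
    The staging pattern described above (write to the burst buffer first, drain to the parallel file system asynchronously) can be sketched in a few lines. This is a rough illustration only, with hypothetical directory names and a plain background thread; the real Data Elevator intercepts HDF5 calls through the VOL layer rather than wrapping file writes.

      # Sketch of the burst-buffer staging pattern: writes land on a fast tier,
      # a background worker moves them to the final destination asynchronously.
      import os
      import queue
      import shutil
      import threading

      class AsyncMover:
          def __init__(self, burst_buffer_dir: str, pfs_dir: str):
              self.burst_buffer_dir = burst_buffer_dir
              self.pfs_dir = pfs_dir
              self.pending = queue.Queue()
              threading.Thread(target=self._drain, daemon=True).start()

          def write(self, name: str, data: bytes) -> None:
              """Write to the fast tier and schedule the background move."""
              staged = os.path.join(self.burst_buffer_dir, name)
              with open(staged, "wb") as f:
                  f.write(data)
              self.pending.put(staged)

          def _drain(self) -> None:
              """Move staged files to their final destination in the background."""
              while True:
                  staged = self.pending.get()
                  shutil.move(staged, os.path.join(self.pfs_dir, os.path.basename(staged)))
                  self.pending.task_done()

          def flush(self) -> None:
              """Block until every staged file has reached the final destination."""
              self.pending.join()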

  17. Virtual file system for PSDS

    NASA Technical Reports Server (NTRS)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper, protected from disaster, and accumulating to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower-performance optical media based on a least-frequently-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.
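
    The migration policy mentioned above is least-frequently-used selection: the files read least often are pushed down to the optical tier. A minimal sketch of that selection step, with an in-memory access table standing in for the PSDS statistics database, might look like this.

      # Sketch: pick migration candidates by lowest access count (LFU).
      from collections import Counter

      def pick_migration_candidates(access_counts: Counter, how_many: int):
          """Return the file names that were accessed least often."""
          return [name for name, _ in access_counts.most_common()[:-how_many - 1:-1]]

      # Hypothetical access statistics for three stored standards documents.
      access_counts = Counter({"std-001.pdf": 42, "std-002.pdf": 3, "std-003.pdf": 17})
      print(pick_migration_candidates(access_counts, how_many=1))   # ['std-002.pdf']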

  18. Data reduction programs for a laser radar system

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Copeland, G. E.

    1984-01-01

    The listing and description of the software routines used to analyze the analog data obtained from the LIDAR system are given. All routines are written in FORTRAN IV on an HP-1000/F minicomputer, which serves as the heart of the data acquisition system for the LIDAR program. This particular system has 128 kilobytes of high-speed memory and is equipped with a Vector Instruction Set (VIS) firmware package, which is used in all the routines to speed the execution of long loops. The system handles floating point arithmetic in hardware in order to enhance the speed of execution. This computer is a 2177 C/F series version of the HP-1000 RTE-IVB data acquisition computer system, which is designed for real-time data capture/analysis in a disk/tape mass storage environment.

  19. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    NASA Technical Reports Server (NTRS)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high-speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high-speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large-scale, high-speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high-speed distributed applications. Finally, the DPSS is part of an overall architecture for using high-speed WANs to enable the routine, location-independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.

  20. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    NASA Astrophysics Data System (ADS)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon by a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on the Stornext sharing software in the Chang'E-3 mission, and system performance fully meets the data storage requirements of the Miyun ground station. The Stornext file system is a high-performance shared file system; it allows multiple servers running different operating systems to access the file system at the same time, and supports access to data over a variety of topologies, such as SAN and LAN. Stornext focuses on data protection and big data management. Quantum has announced that it has sold more than 70,000 licenses of the Stornext file system worldwide, and its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 satellite downlink data and the management of local data storage. The station mainly carries out exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control functions for the data receiving equipment. The ground station applied a SAN storage network system based on the Stornext shared software for reliable data receiving and management. The computer system in the Miyun ground station is composed of business servers, application workstations and storage equipment, so the storage system needs a shared file system that supports heterogeneous, multi-operating-system access. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s; thus the network throughput of the file system must be no less than 240 MB/s. At the same time, the maximum size of each data file is up to 810 GB. The storage system as planned requires that 10 nodes simultaneously write data to the file system through 16 channels with 240 MB/s network throughput. As integrated, the sharing system can provide 1020 MB/s write speed simultaneously. When the master storage server fails, the backup storage server takes over normal service; client reads and writes are not affected, and the switching time is less than 5 s. The designed and integrated storage system meets user requirements. However, an all-fiber approach in a SAN is expensive, and the SCSI hard disk transfer rate may still be the bottleneck in the development of the entire storage system. Stornext can provide users with efficient sharing, management and automatic archiving of large numbers of files together with hardware solutions, and it occupies a leading position in big data management. Stornext is a popular sharing shareware, but it has drawbacks. Firstly, the software is expensive and licensed per site; when the network scale is large, the purchase cost is very high. Secondly, the parameters of the Stornext software place high demands on the skills of technical staff, and if a problem occurs it is difficult to troubleshoot.
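
    The sizing figures quoted above follow directly from the channel counts: 16 channels at up to 15 MB/s each give the 240 MB/s aggregate requirement, which the integrated system's quoted 1020 MB/s shared write speed comfortably exceeds. A two-line check, purely illustrative:

      # Aggregate write throughput required by 16 channels at 15 MB/s each.
      channels, rate_per_channel_mb_s = 16, 15
      required_mb_s = channels * rate_per_channel_mb_s
      print(required_mb_s)               # 240
      print(1020 >= required_mb_s)       # True: integrated write speed meets the requirement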

  1. In-Storage Embedded Accelerator for Sparse Pattern Processing

    DTIC Science & Technology

    2016-08-13

    Jun, Sang-Woo; Nguyen, Huy T.; Gadepally, Vijay; Arvind (MIT and MIT Lincoln Laboratory). [Only search-result snippets of the report text are available: the system is compared against RAM-disk performance, most of the processing is offloaded onto an FPGA so that the host software consists of only two threads, and a 'Documents Processed vs. CPU Threads' figure illustrates the efficiency of the BlueDBM in-storage processing paradigm.]

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amerio, S.; Behari, S.; Boyd, J.

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology, and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.

  3. Compression molded energy storage flywheels

    NASA Astrophysics Data System (ADS)

    Burdick, P. A.

    Materials choices, manufacturing processes, and the benefits of flywheels as an effective energy storage device are discussed. Tests at the LL Laboratories have indicated that compression molding of plies of structural sheet molding compound (SMC) filled with randomly oriented fibers produces a laminated disk with transversely isotropic properties. Good performance has been realized with a carbon/epoxy system, which displays satisfactory stiffness and strength in flywheel applications. A core profile has been selected, consisting of a uniform 1-in cross-sectional thickness and a 21-in diameter. Test configurations using three different resin paste formulations were compared after being mounted elastomerically on aluminum hubs. Further development was found necessary on accurate balancing and hub bonding. It was concluded that the SMC flywheels display the low cost, sufficient energy densities, suitable dynamic stability characteristics, and acceptably benign failure modes needed for automotive applications.

  4. Data preservation at the Fermilab Tevatron

    NASA Astrophysics Data System (ADS)

    Amerio, S.; Behari, S.; Boyd, J.; Brochmann, M.; Culbertson, R.; Diesburg, M.; Freeman, J.; Garren, L.; Greenlee, H.; Herner, K.; Illingworth, R.; Jayatilaka, B.; Jonckheere, A.; Li, Q.; Naymola, S.; Oleynik, G.; Sakumoto, W.; Varnes, E.; Vellidis, C.; Watts, G.; White, S.

    2017-04-01

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.

  5. An object-oriented approach to data display and storage: 3 years experience, 25,000 cases.

    PubMed

    Sainsbury, D A

    1993-11-01

    Object-oriented programming techniques were used to develop computer-based data display and storage systems. These have been operating in the 8 anaesthetising areas of the Adelaide Children's Hospital for 3 years. The analogue and serial outputs from an array of patient monitors are connected to IBM-compatible PC-XT computers. The information is displayed on a colour screen as wave-form and trend graphs and in digital format in 'real time'. The trend data are printed simultaneously on a dot matrix printer. These data are also stored for 24 hours on 'hard' disk. The major benefit has been the provision of a single visual focus for all monitored variables. The automatic logging of data has been invaluable in the analysis of critical incidents. The systems were made possible by recent, rapid improvements in computer hardware and software. This paper traces the development of the program and demonstrates the advantages of object-oriented programming techniques.

  6. Operating characteristics of a 0.87 kW-hr flywheel energy storage module

    NASA Technical Reports Server (NTRS)

    Loewenthal, S. H.; Scibbe, H. W.; Parker, R. D.; Zaretsky, E. V.

    1985-01-01

    The design and loss characteristics of a 0.87 kW-hr (peak) flywheel energy storage module suitable for aerospace and automotive applications are discussed. The maraging steel flywheel rotor, a 46-cm- (18-in-) diameter, 58-kg (128-lb) tapered disk, delivers 0.65 kW-hr of usable energy between operating speeds of 10,000 and 20,000 rpm. The rotor is supported by 20- and 25-mm bore diameter, deep-groove ball bearings, lubricated by a self-replenishing wick-type lubrication system. To reduce aerodynamic losses, the rotor housing was evacuated to vacuum levels from 40 to 200 millitorr. Dynamic rotor instabilities uncovered during testing necessitated the use of an elastomeric bearing damper to limit shaft excursions. Spindown losses from bearing, seal, and aerodynamic drag at 50 millitorr typically ranged from 64 to 193 W at 10,000 and 20,000 rpm, respectively. The discharge efficiency of the flywheel system exceeded 96 percent at torque levels greater than 21 percent of rated torque.

  7. Data oriented job submission scheme for the PHENIX user analysis in CCJ

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; En'yo, H.; Ichihara, T.; Watanabe, Y.; Yokkaichi, S.

    2011-12-01

    The RIKEN Computing Center in Japan (CCJ) has been developed to make it possible to analyze the huge amount of data collected by the PHENIX experiment at RHIC. The collected raw data or reconstructed data are transferred via SINET3 with 10 Gbps bandwidth from Brookhaven National Laboratory (BNL) using GridFTP. The transferred data are first stored in the hierarchical storage management system (HPSS) prior to user analysis. Since the size of the data grows steadily year by year, concentration of access requests to the data servers has become one of the serious bottlenecks. To eliminate this I/O-bound problem, 18 calculating nodes with a total of 180 TB of local disk were introduced to store the data a priori. We added some setup to the batch job scheduler (LSF) so that users can specify the required data already distributed to the local disks. The locations of the data are automatically obtained from a database, and jobs are dispatched to the appropriate node holding the required data. To avoid multiple accesses to a local disk from several jobs in a node, lock files and access control lists are employed. As a result, each job can handle a local disk exclusively. Indeed, the total throughput was improved drastically compared to the preexisting nodes in CCJ, and users can analyze about 150 TB of data within 9 hours. We report this successful job submission scheme and the features of the PC cluster.
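
    The dispatch idea described above can be sketched compactly: look up which node's local disk already holds the requested dataset, submit the job there, and take a per-disk lock so only one job touches that disk at a time. The catalogue dictionary and node names below are hypothetical stand-ins for the site database and the LSF hosts.

      # Sketch of data-locality-aware dispatch with exclusive per-disk access.
      import threading

      data_location = {"run7_minbias_0001.root": "ccjnode07",    # hypothetical catalogue
                       "run7_minbias_0002.root": "ccjnode12"}
      disk_locks = {node: threading.Lock() for node in set(data_location.values())}

      def dispatch(dataset: str, job):
          node = data_location[dataset]          # node whose local disk holds the data
          with disk_locks[node]:                 # exclusive access to that local disk
              return job(node, dataset)

      result = dispatch("run7_minbias_0001.root",
                        lambda node, ds: f"job for {ds} ran on {node}")
      print(result)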

  8. Assessment of disk MHD generators for a base load powerplant

    NASA Technical Reports Server (NTRS)

    Chubb, D. L.; Retallick, F. D.; Lu, C. L.; Stella, M.; Teare, J. D.; Loubsky, W. J.; Louis, J. F.; Misra, B.

    1981-01-01

    Results from a study of the disk MHD generator are presented. Both open and closed cycle disk systems were investigated. Costing of the open cycle disk components (nozzle, channel, diffuser, radiant boiler, magnet and power management) was done; however, no detailed costing was done for the closed cycle systems. Preliminary plant design for the open cycle systems was also completed. Based on the system study results, an economic assessment of the open cycle systems is presented. Costs of the open cycle disk components are less than those of comparable linear generator components. Also, costs of electricity for the open cycle disk systems are competitive with comparable linear systems. Advantages of the simplicity of the disk design are considered. Improvements in channel availability or a reduction in the channel lifetime requirement are possible as a result of the disk design.

  9. Testing an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system

    NASA Astrophysics Data System (ADS)

    Pezzi, M.; Favaro, M.; Gregori, D.; Ricci, P. P.; Sapunenko, V.

    2014-06-01

    In large computing centers, such as the INFN CNAF Tier1 [1], it is essential to be able to configure all the machines, depending on their use, in an automated way. For several years Quattor [2], a server provisioning tool, has been used at the Tier1, and it is currently used in production. Nevertheless, we have recently started a comparison study involving other tools able to provide specific server installation and configuration features and also to offer a fully customizable solution as an alternative to Quattor. Our choice at the moment fell on the integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for server provisioning and management operations. The tool should provide the following properties in order to replicate and gradually improve the current system features: implement a system check for storage-specific constraints, such as a kernel module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrade and downgrade; the ability to set the package provider using yum, rpm or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configuration; and scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify the requirements and the suitability of the new system in the INFN-T1 environment.
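
    One of the storage-specific constraints listed above is that certain kernel modules (for example SAN HBA drivers) must stay blacklisted during installation so that SAN LUNs are not touched while disks are partitioned. A minimal check of that constraint, assuming a Linux host and purely illustrative module names, could look like the following.

      # Sketch: verify that blacklisted kernel modules are not loaded before partitioning.
      BLACKLISTED = {"qla2xxx", "lpfc"}          # illustrative SAN HBA driver names

      def loaded_modules(path="/proc/modules"):
          with open(path) as f:
              return {line.split()[0] for line in f}

      def check_blacklist():
          offending = BLACKLISTED & loaded_modules()
          if offending:
              raise RuntimeError(f"blacklisted modules loaded: {sorted(offending)}")

      if __name__ == "__main__":
          check_blacklist()
          print("no blacklisted kernel modules loaded; safe to partition")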

  10. Electrodeposited Co-Pt thin films for magnetic hard disks

    NASA Astrophysics Data System (ADS)

    Bozzini, B.; De Vita, D.; Sportoletti, A.; Zangari, G.; Cavallotti, P. L.; Terrenzio, E.

    1993-03-01

    New baths for Co-Pt electrodeposition have been developed, and ECD thin films (≤0.3 μm) have been prepared and characterized structurally (XRD), morphologically (SEM), chemically (EDS) and magnetically (VSM); their improved corrosion, oxidation and wear resistance have been ascertained. Such alloys appear to be suitable candidates for magnetic storage systems from all technological viewpoints. The originally formulated baths contain Co-NH3-citrate complexes and Pt-p salt (Pt(NH3)2(NO2)2). Co-Pt thin films of fcc structure are deposited, obtaining microcrystallites of definite composition. At Pt ⋍ 30 at% we obtain fcc films with a = 0.369 nm, HC = 80 kA/m, and high squareness; by increasing the Co and decreasing the Pt content in the bath it is possible to reduce the Pt content of the deposit, obtaining fcc structures containing two types of microcrystals with a = 0.3615 nm and a = 0.369 nm deposited simultaneously. NaH2PO2 additions to the bath have a stabilizing influence on the fcc structure with a = 0.3615 nm, Pt ⋍ 20 at% and HC as high as 200 kA/m, with hysteresis loops suitable for either longitudinal or perpendicular recording, depending on the thickness. We have prepared 2.5 in. hard disks for magnetic recording with ECD Co-Pt 20 at% on a polished and texturized ACD Ni-P underlayer. Pulse response, 1F & 2F frequency and frequency sweep response behaviour, as well as noise and overwrite characteristics, have been measured for both our disks and high-standard sputtered Co-Cr-Ta production disks, showing improved D50 for the Co-Pt ECD disks. The signal-to-noise ratio could be improved by pulse electrodeposition and etching post-treatments.

  11. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele

    2012-12-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  12. Multiplexed Holographic Data Storage in Bacteriorhodopsin

    NASA Technical Reports Server (NTRS)

    Mehrl, David J.; Krile, Thomas F.

    1997-01-01

    High-density optical data storage, driven by the information revolution, remains at the forefront of current research areas. Much of the current research has focused on photorefractive materials (SBN and LiNbO3) and polymers, despite various problems with expense, durability, response time and retention periods. Photon echo techniques, though promising, are questionable due to the need for cryogenic conditions. Bacteriorhodopsin (BR) films are an attractive alternative recording medium. Great strides have been made in refining BR, and materials with storage lifetimes as long as 100 days have recently become available. The ability to deposit this robust polycrystalline material as high-quality optical films suggests the use of BR as a recording medium for commercial optical disks. Our own recent research has demonstrated the suitability of BR films for real-time spatial filtering and holography. We propose to fully investigate the feasibility of performing holographic mass data storage in BR. Important aspects of the problem to be investigated include various data multiplexing techniques (e.g., angle-, amplitude- and phase-encoded multiplexing, and in particular shift multiplexing), multilayer recording techniques, SLM selection, and data readout using crossed polarizers for noise rejection. Systems evaluations of storage parameters, including access times, memory refresh constraints, erasure, signal-to-noise ratios and bit error rates, will be included in our investigations.

  13. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment

    PubMed Central

    Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with built-in liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups, adapted to forensic standards. For the first time we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. 92.2% of the performed tests showed fluidically failure-free sample handling and were used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis, reducing hands-on time and circumventing the risk of contamination associated with regular nested PCR protocols. PMID:26147196

  14. General consumer communication tools for improved image management and communication in medicine.

    PubMed

    Rosset, Chantal; Rosset, Antoine; Ratib, Osman

    2005-12-01

    We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies, developed for communication and storage of data, can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, an MP3 portable player with hard disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, videoconferencing and instant-messaging software, to transmit DICOM images in real time to a distant computer for conferencing teleradiology. (3) Finally, we developed a direct secure interface to use the iDisk service, a file-sharing service based on the WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated in a new open-source image navigation and display software called OsiriX, allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features, such as teleradiology, secure and convenient image and data communication, and the use of external data storage services, open the gate to a much broader extension of our imaging infrastructure to the outside world.

  15. Non-volatile main memory management methods based on a file system.

    PubMed

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because of its data persistency without power supply, while it can be used as main memory because of its high performance, which matches up with DRAM. A number of studies have investigated its use for main memory and for storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration allows NV memory to be utilized simultaneously as both main memory and storage. The presented methods use a file system as their basis for the NV memory management. We implemented the proposed methods in the Linux kernel and performed an evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) the data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects caused by the longer access latency of NV memory by cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.
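
    A rough user-space analogue of the integration described above is to let a file system object serve as byte-addressable memory by mapping it and operating on it in place. The sketch below is only an illustration of that view with an ordinary temporary file; the actual work is an in-kernel allocator built on file system data structures.

      # Sketch: a file managed by the file system used as byte-addressable memory.
      import mmap
      import tempfile

      with tempfile.TemporaryFile() as backing:
          backing.truncate(4096)                        # one page of "NV memory"
          with mmap.mmap(backing.fileno(), 4096) as mem:
              mem[0:5] = b"hello"                       # byte-addressable store
              print(mem[0:5])                           # data lives in the backing file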

  16. Performances of multiprocessor multidisk architectures for continuous media storage

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes, through bottleneck performance evaluation and simulation, the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located close to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.

  17. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  18. Effect of silica coating on fracture strength of glass-infiltrated alumina ceramic cemented to dentin.

    PubMed

    Xie, Haifeng; Zhu, Ye; Chen, Chen; Gu, Ning; Zhang, Feimin

    2011-10-01

    To examine the availability of sol-gel processed silica coating for alumina-based ceramic bonding, and determine which silica sol concentration was appropriate for silica coating. Sixty disks of In-Ceram alumina ceramic were fabricated and randomly divided into 5 main groups. The disks received 5 different surface conditioning treatments: Group Al, sandblasted; Group AlC, sandblasted + silane coupling agent applied; Groups Al20C, Al30C, and Al40C, sandblasted, silica coating via sol-gel process prepared using 20 wt%, 30 wt%, and 40 wt% silica sols, and then silane coupling agent applied. Before bonding, one-step adhesives were applied on pre-prepared ceramic surfaces of all groups. Then, 60 dentin specimens were prepared and conditioned with phosphoric acid and one-step adhesive. Ceramic disks of all groups were cemented to dentin specimens with dual-curing resin cements. Fracture strength was determined at 24 h and after 20 days of storage in water. Groups Al20C, Al30C, and Al40C revealed significantly higher fracture strength than groups Al and AlC. No statistically significant difference in fracture strength was found between groups Al and AlC, or among groups Al20C, Al30C, and Al40C. Fracture strength values of all the groups did not change after 20 days of water storage. Sol-gel processed silica coating can enhance fracture strength of In-Ceram alumina ceramic after bonding to dentin, and different silica sol concentrations produced the same effects. Twenty days of water storage did not decrease the fracture strength.

  19. Grid Data Access on Widely Distributed Worker Nodes Using Scalla and SRM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakl, Pavel; /Prague, Inst. Phys.; Lauret, Jerome

    2011-11-10

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial compared with expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning of files), storage policies or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing the 350 TB Storage Elements, and the experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and the approach taken to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  20. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1993-01-01

    We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X DataSlice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
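
    Among the planned components are libraries for coordinate transformation and regridding. A minimal sketch of the regridding step, resampling a field from one latitude grid to another by linear interpolation, is shown below; the grids and values are illustrative, and the proposed libraries would also handle map transformations and missing-data flags.

      # Sketch: 1-D regridding of a zonal-mean field by linear interpolation.
      import numpy as np

      src_lat = np.linspace(-90.0, 90.0, 19)          # source grid, 10-degree spacing
      dst_lat = np.linspace(-90.0, 90.0, 37)          # target grid, 5-degree spacing
      field = np.cos(np.deg2rad(src_lat))             # toy field on the source grid

      regridded = np.interp(dst_lat, src_lat, field)  # values on the target grid
      print(regridded.shape)                          # (37,)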

  1. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1992-01-01

    We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X Data Slice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40 GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80 GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  3. Research Studies on Advanced Optical Module/Head Designs for Optical Disk Recording Devices

    NASA Technical Reports Server (NTRS)

    Burke, James J.; Seery, Bernard D.

    1993-01-01

    The Annual Report of the Optical Data Storage Center of the University of Arizona is presented. Summary reports on continuing projects are presented. Research areas include: magneto-optic media, optical heads, and signal processing.

  4. Faster, Better, Cheaper: A Decade of PC Progress.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1997-01-01

    Reviews the development of personal computers and how computer components have changed in price and value. Highlights include disk drives; keyboards; displays; memory; color graphics; modems; CPU (central processing unit); storage; direct mail vendors; and future possibilities. (LRW)

  5. Holographic optical disc

    NASA Astrophysics Data System (ADS)

    Zhou, Gan; An, Xin; Pu, Allen; Psaltis, Demetri; Mok, Fai H.

    1999-11-01

    The holographic disc is a high capacity, disk-based data storage device that can provide the performance for next generation mass data storage needs. With a projected capacity approaching 1 terabit on a single 12 cm platter, the holographic disc has the potential to become a highly efficient storage hardware for data warehousing applications. The high readout rate of holographic disc makes it especially suitable for generating multiple, high bandwidth data streams such as required for network server computers. Multimedia applications such as interactive video and HDTV can also potentially benefit from the high capacity and fast data access of holographic memory.

  6. $ANBA; a rapid, combined data acquisition and correction program for the SEMQ electron microprobe

    USGS Publications Warehouse

    McGee, James J.

    1983-01-01

    $ANBA is a program developed for rapid data acquisition and correction on an automated SEMQ electron microprobe. The program provides increased analytical speed and reduced disk read/write operations compared with the manufacturer's software, resulting in a doubling of analytical throughput. In addition, the program provides enhanced analytical features such as averaging, rapid and compact data storage, and on-line plotting. The program is described with design philosophy, flow charts, variable names, a complete program listing, and system requirements. A complete operating example and notes to assist in running the program are included.

  7. The interactive astronomical data analysis facility - image enhancement techniques to Comet Halley

    NASA Astrophysics Data System (ADS)

    Klinglesmith, D. A.

    1981-10-01

    A PDP 11/40 computer is at the heart of a general-purpose interactive data analysis facility designed to permit easy access to data in both visual imagery and graphic representations. The major components consist of: the 11/40 CPU and 256 K bytes of 16-bit memory; two TU10 tape drives; 20 million bytes of disk storage; three user terminals; and the COMTAL image processing display system. The application of image enhancement techniques to two sequences of photographs of Comet Halley taken in Egypt in 1910 provides evidence for eruptions from the comet's nucleus.

  8. A self-configuring control system for storage and computing departments at INFN-CNAF Tier1

    NASA Astrophysics Data System (ADS)

    Gregori, Daniele; Dal Pra, Stefano; Ricci, Pier Paolo; Pezzi, Michele; Prosperini, Andrea; Sapunenko, Vladimir

    2015-05-01

    The storage and farming departments at the INFN-CNAF Tier1 [1] manage thousands of computing nodes and several hundred servers that provide access to the disk and tape storage. In particular, the storage server machines provide the following services: efficient access to about 15 petabytes of disk space organized in different GPFS file system clusters, data transfers between LHC Tier sites (Tier0, Tier1 and Tier2) via a GridFTP cluster and the Xrootd protocol, and finally writing and reading operations on the magnetic tape backend. One of the most important and essential points in obtaining a reliable service is a control system that can warn if problems arise and that is able to perform automatic recovery operations in case of service interruptions or major failures. Moreover, during daily operations the configurations can change; for example, the roles of GPFS cluster nodes can be modified, so obsolete nodes must be removed from the production control system and new servers added to those already present. The manual management of all these changes can be difficult when there are several changes, can take a long time, and is easily subject to human error or misconfiguration. For these reasons we have developed a control system with the ability to configure itself whenever a change occurs. This system has been in production for about a year at the INFN-CNAF Tier1 with good results and hardly any major drawbacks. There are three major key points in this system. The first is a software configurator service (e.g. Quattor or Puppet) for the server machines that we want to monitor with the control system; this service must ensure the presence of appropriate sensors and custom scripts on the nodes to be checked, and should be able to install and update software packages on them. The second key element is a database containing information, in a suitable format, on all the machines in production, able to provide for each of them the principal information such as the type of hardware, the network switch to which the machine is connected, whether the machine is physical or virtual, the hypervisor to which it belongs, and so on. The last key point is the control system software (in our implementation we chose Nagios), capable of assessing the status of the servers and services, attempting to restore the working state, restarting or inhibiting software services, and sending suitable alarm messages to the site administrators. The integration of these three elements is achieved by appropriate scripts and custom implementations that allow the self-configuration of the system according to a decisional logic; the whole combination of the above-mentioned components is discussed in depth in this paper.
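
    The self-configuration step described above amounts to regenerating the monitoring configuration from the machine database, so that added or retired servers appear in or disappear from the control system without manual edits. The sketch below illustrates that step only; the inventory dictionary and output file are hypothetical, while the production system drives Nagios from its own site database.

      # Sketch: regenerate Nagios host definitions from a machine inventory.
      inventory = [
          {"name": "gpfs-srv-01", "address": "10.0.1.11", "role": "gpfs-nsd"},
          {"name": "gridftp-02",  "address": "10.0.1.22", "role": "gridftp"},
      ]

      def nagios_host_block(machine: dict) -> str:
          return ("define host {\n"
                  f"    host_name  {machine['name']}\n"
                  f"    address    {machine['address']}\n"
                  f"    use        {machine['role']}-template\n"
                  "}\n")

      def regenerate(path: str = "hosts_autogen.cfg") -> None:
          with open(path, "w") as cfg:
              cfg.write("\n".join(nagios_host_block(m) for m in inventory))

      regenerate()
      print(open("hosts_autogen.cfg").read())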

  9. Processing of Bulk YBa2Cu3O(7-x) High Temperature Superconductor Materials for Gravity Modification Experiments and Performance Under AC Levitation

    NASA Technical Reports Server (NTRS)

    Koczor, Ronald; Noever, David; Hiser, Robert

    1999-01-01

    We have previously reported results using a high precision gravimeter to probe local gravity changes in the neighborhood of bulk-processed high temperature superconductor disks. Others have indicated that large annular disks (on the order of 25 cm diameter) and AC levitation fields play an essential role in their observed experiments. We report experiments in processing such large bulk superconductors. Successful results depend on the mechanical characteristics of the material and on the pressure and heat-treatment protocols. Annular disks with rough dimensions of 30 cm O.D., 7 cm I.D. and 1 cm thickness have been routinely fabricated and tested under AC levitation fields ranging from 45 to 3000 Hz. Implications for space transportation initiatives and power storage flywheel technology will be discussed.

  10. Time-resolved scanning Kerr microscopy of flux beam formation in hard disk write heads

    NASA Astrophysics Data System (ADS)

    Valkass, Robert A. J.; Spicer, Timothy M.; Burgos Parra, Erick; Hicken, Robert J.; Bashir, Muhammad A.; Gubbins, Mark A.; Czoschke, Peter J.; Lopusnik, Radek

    2016-06-01

    To meet growing data storage needs, the density of data stored on hard disk drives must increase. In pursuit of this aim, the magnetodynamics of the hard disk write head must be characterized and understood, particularly the process of "flux beaming." In this study, seven different configurations of perpendicular magnetic recording (PMR) write heads were imaged using time-resolved scanning Kerr microscopy, revealing their detailed dynamic magnetic state during the write process. It was found that the precise position and number of driving coils can significantly alter the formation of flux beams during the write process. These results are applicable to the design and understanding of current PMR and next-generation heat-assisted magnetic recording devices, as well as being relevant to other magnetic devices.

  11. THE KOZAI–LIDOV MECHANISM IN HYDRODYNAMICAL DISKS. II. EFFECTS OF BINARY AND DISK PARAMETERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Wen; Lubow, Stephen H.; Martin, Rebecca G., E-mail: wf5@rice.edu

    2015-07-01

    Martin et al. showed that a substantially misaligned accretion disk around one component of a binary system can undergo global damped Kozai–Lidov (KL) oscillations. During these oscillations, the inclination and eccentricity of the disk are periodically exchanged. However, the robustness of this mechanism and its dependence on the system parameters were unexplored. In this paper, we use three-dimensional hydrodynamical simulations to analyze how various binary and disk parameters affect the KL mechanism in hydrodynamical disks. The simulations include the effect of gas pressure and viscosity, but ignore the effects of disk self-gravity. We describe results for different numerical resolutions, binary mass ratios and orbital eccentricities, initial disk sizes, initial disk surface density profiles, disk sound speeds, and disk viscosities. We show that the KL mechanism can operate for a wide range of binary-disk parameters. We discuss the applications of our results to astrophysical disks in various accreting systems.

  12. The Kozai-Lidov mechanism in hydrodynamical disks. II. Effects of binary and disk parameters

    DOE PAGES

    Fu, Wen; Lubow, Stephen H.; Martin, Rebecca G.

    2015-07-01

    Martin et al. (2014b) showed that a substantially misaligned accretion disk around one component of a binary system can undergo global damped Kozai–Lidov (KL) oscillations. During these oscillations, the inclination and eccentricity of the disk are periodically exchanged. However, the robustness of this mechanism and its dependence on the system parameters were unexplored. In this paper, we use three-dimensional hydrodynamical simulations to analyze how various binary and disk parameters affect the KL mechanism in hydrodynamical disks. The simulations include the effect of gas pressure and viscosity, but ignore the effects of disk self-gravity. We describe results for different numerical resolutions, binary mass ratios and orbital eccentricities, initial disk sizes, initial disk surface density profiles, disk sound speeds, and disk viscosities. We show that the KL mechanism can operate for a wide range of binary-disk parameters. We discuss the applications of our results to astrophysical disks in various accreting systems.

  13. Data preservation at the Fermilab Tevatron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, J.; Herner, K.; Jayatilaka, B.

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  14. Data preservation at the Fermilab Tevatron

    DOE PAGES

    Amerio, S.; Behari, S.; Boyd, J.; ...

    2017-01-22

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. Lastly, these efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
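    These records mention automated validation as part of the migration to new storage technology. As a purely illustrative sketch (not the Fermilab Run II tooling; the manifest format and directory layout are assumptions), one basic validation step is to compare checksums of files recorded before migration against the migrated copies:

      #!/usr/bin/env python3
      # Illustrative sketch of checksum-based validation after a storage migration.
      # The manifest format and directory layout are assumptions, not Fermilab's.
      import hashlib, pathlib, sys

      def sha256sum(path, chunk=1 << 20):
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for block in iter(lambda: f.read(chunk), b""):
                  h.update(block)
          return h.hexdigest()

      def validate(manifest, migrated_root):
          """manifest: lines of '<sha256>  <relative path>' recorded before migration."""
          bad = []
          for line in pathlib.Path(manifest).read_text().splitlines():
              expected, rel = line.split(maxsplit=1)
              actual = sha256sum(pathlib.Path(migrated_root) / rel)
              if actual != expected:
                  bad.append(rel)
          return bad

      if __name__ == "__main__":
          failures = validate(sys.argv[1], sys.argv[2])
          print("validation OK" if not failures else f"{len(failures)} mismatches: {failures}")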

  15. Data preservation at the Fermilab Tevatron

    DOE PAGES

    Boyd, J.; Herner, K.; Jayatilaka, B.; ...

    2015-12-23

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  16. Crystal gazing. Part 2: Implications of advances in digital data storage technology

    NASA Technical Reports Server (NTRS)

    Wells, D. C.

    1984-01-01

    During the next 5-10 years it is likely that the bit density available in digital mass storage systems (magnetic tapes, optical and magnetic disks) will be increased to such an extent that it will greatly exceed that of the conventional photographic emulsions like IIIaJ which are used in astronomy. These developments imply that it will soon be advantageous for astronomers to use microdensitometers to completely digitize all photographic plates soon after they are developed. Distribution of digital copies of sky surveys and the contents of plate vaults will probably become feasible within ten years. Copies of other astronomical archives (e.g., Space Telescope) could also be distributed with the same techniques. The implications for designers of future microdensitometers are: (1) there will be a continuing need for precision digitization of large-format photographic imagery, and (2) that the need for real-time analysis of the output of microdensitometers will decrease.

  17. High Curie temperature drive layer materials for ion-implanted magnetic bubble devices

    NASA Technical Reports Server (NTRS)

    Fratello, V. J.; Wolfe, R.; Blank, S. L.; Nelson, T. J.

    1984-01-01

    Ion implantation of bubble garnets can lower the Curie temperature by 70 C or more, thus limiting high temperature operation of devices with ion-implanted propagation patterns. Therefore, double-layer materials were made with a conventional 2-micron bubble storage layer capped by an ion-implantable drive layer of high Curie temperature, high magnetostriction material. Contiguous disk test patterns were implanted with varying doses of a typical triple implant. Quality of propagation was judged by quasistatic tests on 8-micron period major and minor loops. Variations of magnetization, uniaxial anisotropy, implant dose, and magnetostriction were investigated to ensure optimum flux matching, good charged wall coupling, and wide operating margins. The most successful drive layer compositions were in the systems (SmDyLuCa)3(FeSi)5O12 and (BiGdTmCa)3(FeSi)5O12 and had Curie temperatures 25-44 C higher than the storage layers.

  18. Data preservation at the Fermilab Tevatron

    NASA Astrophysics Data System (ADS)

    Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.

    2015-12-01

    The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.

  19. DICOM-compliant PACS with CD-based image archival

    NASA Astrophysics Data System (ADS)

    Cox, Robert D.; Henri, Christopher J.; Rubin, Richard K.; Bret, Patrice M.

    1998-07-01

    This paper describes the design and implementation of a low-cost PACS conforming to the DICOM 3.0 standard. The goal was to provide an efficient image archival and management solution on a heterogeneous hospital network as a basis for filmless radiology. The system follows a distributed, client/server model and was implemented at a fraction of the cost of a commercial PACS. It provides reliable archiving on recordable CD and allows access to digital images throughout the hospital and on the Internet. Dedicated servers have been designed for short-term storage, CD-based archival, data retrieval and remote data access or teleradiology. The short-term storage devices provide DICOM storage and query/retrieve services to scanners and workstations and approximately twelve weeks of 'on-line' image data. The CD-based archival and data retrieval processes are fully automated with the exception of CD loading and unloading. The system employs lossless compression on both short- and long-term storage devices. All servers communicate via the DICOM protocol in conjunction with both local and 'master' SQL patient databases. Records are transferred from the local to the master database independently, ensuring that storage devices will still function if the master database server cannot be reached. The system features rules-based work-flow management and WWW servers to provide multi-platform remote data access. The WWW server system is distributed on the storage, retrieval and teleradiology servers, allowing viewing of locally stored image data directly in a WWW browser without the need for data transfer to a central WWW server. An independent system monitors disk usage, processes, network and CPU load on each server and reports errors to the image management team via email. The PACS was implemented using a combination of off-the-shelf hardware, freely available software and applications developed in-house. The system has enabled filmless operation in CT, MR and ultrasound within the radiology department and throughout the hospital. The use of WWW technology has enabled the development of an intuitive web-based teleradiology and image management solution that provides complete access to image data.
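    The abstract above mentions an independent process that monitors disk usage and load on each server and emails errors to the image management team. As a minimal illustrative sketch (hostnames, mount points, thresholds and the SMTP relay are assumptions, and this is not the authors' code), such a monitor could look like:

      #!/usr/bin/env python3
      # Minimal sketch of a disk-usage monitor that emails alerts.
      # Thresholds, paths, addresses and the SMTP relay are assumptions.
      import shutil, smtplib, socket
      from email.message import EmailMessage

      WATCHED = ["/var/dicom/shortterm", "/var/dicom/archive"]   # assumed mount points
      THRESHOLD = 0.90                                           # alert above 90% full
      RELAY, TEAM = "mail.example.org", "image-team@example.org"

      def check():
          alerts = []
          for path in WATCHED:
              usage = shutil.disk_usage(path)
              frac = usage.used / usage.total
              if frac > THRESHOLD:
                  alerts.append(f"{path}: {frac:.0%} full")
          return alerts

      def notify(alerts):
          msg = EmailMessage()
          msg["Subject"] = f"[PACS] disk alert on {socket.gethostname()}"
          msg["From"], msg["To"] = f"pacs@{socket.gethostname()}", TEAM
          msg.set_content("\n".join(alerts))
          with smtplib.SMTP(RELAY) as s:
              s.send_message(msg)

      if __name__ == "__main__":
          problems = check()
          if problems:
              notify(problems)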

  20. The structure and dynamics of interactive documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocha, J.T.

    1999-04-01

    Advances in information technology continue to accelerate as the new millennium approaches. With these advances, electronic information management is becoming increasingly important and is now supported by a seemingly bewildering array of hardware and software whose sole purpose is the design and implementation of interactive documents employing multimedia applications. Multimedia memory and storage applications such as Compact Disk-Read Only Memory (CD-ROMs) are already a familiar interactive tool in both the entertainment and business sectors. Even home enthusiasts now have the means at their disposal to design and produce CD-ROMs. More recently, Digital Video Disk (DVD) technology is carving its own niche in these markets and may (once application bugs are corrected and prices are lowered) eventually supplant CD-ROM technology. CD-ROM and DVD are not the only memory and storage applications capable of supporting interactive media. External, high-capacity drives and disks such as the Iomega® Zip® and Jaz® are also useful platforms for launching interactive documents without the need for additional hardware such as CD-ROM burners and copiers. The main drawback here, however, is the relatively high unit price per disk when compared to the unit cost of CD-ROMs. Regardless of the application chosen, there are fundamental structural characteristics that must be considered before effective interactive documents can be created. Additionally, the dynamics of interactive documents employing hypertext links are unique and bear only slight resemblance to those of their traditional hard-copy counterparts. These two considerations form the essential content of this paper.

  1. Fiber Optic Communication System For Medical Images

    NASA Astrophysics Data System (ADS)

    Arenson, Ronald L.; Morton, Dan E.; London, Jack W.

    1982-01-01

    This paper discusses a fiber optic communication system linking ultrasound devices, Computerized tomography scanners, Nuclear Medicine computer system, and a digital fluoro-graphic system to a central radiology research computer. These centrally archived images are available for near instantaneous recall at various display consoles. When a suitable laser optical disk is available for mass storage, more extensive image archiving will be added to the network including digitized images of standard radiographs for comparison purposes and for remote display in such areas as the intensive care units, the operating room, and selected outpatient departments. This fiber optic system allows for a transfer of high resolution images in less than a second over distances exceeding 2,000 feet. The advantages of using fiber optic cables instead of typical parallel or serial communication techniques will be described. The switching methodology and communication protocols will also be discussed.

  2. Advanced Satellite Workstation - An integrated workstation environment for operational support of satellite system planning and analysis

    NASA Astrophysics Data System (ADS)

    Hamilton, Marvin J.; Sutton, Stewart A.

    A prototype integrated environment, the Advanced Satellite Workstation (ASW), which was developed and delivered for evaluation and operator feedback in an operational satellite control center, is described. The current ASW hardware consists of a Sun workstation and a Macintosh II workstation connected via Ethernet, together with network hardware and software, a laser disk system, an optical storage system, and a telemetry data file interface. The central objective of ASW is to provide an intelligent decision support and training environment for operators/analysts of complex systems such as satellites. Compared to the many recent workstation implementations that incorporate graphical telemetry displays and expert systems, ASW provides a considerably broader look at intelligent, integrated environments for decision support, based on the premise that the central features of such an environment are intelligent data access and integrated toolsets.

  3. Spin-Valve and Spin-Tunneling Devices: Read Heads, MRAMs, Field Sensors

    NASA Astrophysics Data System (ADS)

    Freitas, P. P.

    Hard disk magnetic data storage is increasing at a steady rate in terms of units sold, with 144 million drives sold in 1998 (107 million for desktops, 18 million for portables, and 19 million for enterprise drives), corresponding to a total business of 34 billion US dollars [1]. The growing need for storage coming from new PC operating systems, Internet applications, and a foreseen explosion of applications connected to consumer electronics (digital TV, video, digital cameras, GPS systems, etc.), keeps the magnetics community actively looking for new solutions concerning media, heads, tribology, and system electronics. Current state-of-the-art disk drives (January 2000), using dual inductive-write, magnetoresistive-read (MR) integrated heads, reach areal densities of 15 to 23 bit/μm2, capable of putting a full 20 GB on one platter (a 2 hour film occupies 10 GB). Densities beyond 80 bit/μm2 have already been demonstrated in the laboratory (Fujitsu 87 bit/μm2 at Intermag 2000, Hitachi 81 bit/μm2, Read-Rite 78 bit/μm2, Seagate 70 bit/μm2, the last three demos done in the first 6 months of 2000, with IBM having demonstrated 56 bit/μm2 already at the end of 1999). At densities near 60 bit/μm2, the linear bit size is ~43 nm, and the width of the written tracks is ~0.23 μm. Areal density in commercial drives is increasing steadily at a rate of nearly 100% per year [1], and consumer products above 60 bit/μm2 are expected by 2002. These remarkable achievements are only possible because of a stream of technological innovations in media [2], write heads [3], read heads [4], and system electronics [5]. In this chapter, recent advances in spin valve materials and spin valve sensor architectures, low resistance tunnel junctions and tunnel junction head architectures will be addressed.

  4. Petabyte mass memory system using the Newell Opticel(TM)

    NASA Technical Reports Server (NTRS)

    Newell, Chester W.

    1994-01-01

    A random access system is proposed for digital storage and retrieval of up to a Petabyte of user data. The system is comprised of stacked memory modules using laser heads writing to an optical medium, in a new shirt-pocket-sized optical storage device called the Opticel. The Opticel described is a completely sealed 'black box' in which an optical medium is accelerated and driven at very high rates to accommodate the desired transfer rates, yet in such a manner that wear is virtually eliminated. It essentially emulates a disk, but with storage area up to several orders of magnitude higher. Access time to the first bit can range from a few milliseconds to a fraction of a second, with time to the last bit within a fraction of a second to a few seconds. The actual times are dependent on the capacity of each Opticel, which ranges from 72 Gigabytes to 1.25 Terabytes. Data transfer rate is limited strictly by the head and electronics, and is 15 Megabits per second in the first version. Independent parallel write/read access to each Opticel is provided using dedicated drives and heads. A Petabyte based on the present Opticel and drive design would occupy 120 cubic feet on a footprint of 45 square feet; with further development, it could occupy as little as 9 cubic feet.
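    As a quick capacity check on the figures quoted above (my arithmetic, not taken from the abstract), the number of Opticels needed to reach a Petabyte of user data is

      \[
        \frac{10^{15}\ \text{bytes}}{1.25\times10^{12}\ \text{bytes per Opticel}} = 800
        \qquad\text{versus}\qquad
        \frac{10^{15}\ \text{bytes}}{72\times10^{9}\ \text{bytes per Opticel}} \approx 1.4\times10^{4}
      \]

    at the top and bottom of the stated per-Opticel capacity range, which is consistent with the description of stacked memory modules with independent parallel drives and heads.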

  5. High speed superconducting flywheel system for energy storage

    NASA Astrophysics Data System (ADS)

    Bornemann, H. J.; Urban, C.; Boegler, P.; Ritter, T.; Zaitsev, O.; Weber, K.; Rietschel, H.

    1994-12-01

    A prototype of a flywheel system with auto-stable high temperature superconducting bearings was built and tested. The bearings offered good vertical and lateral stability. A metallic flywheel disk, ø 190 mm x 30 mm, was safely rotated at speeds up to 15000 rpm. The disk was driven by a 3-phase synchronous homopolar motor/generator. Maximum energy capacity was 3.8 Wh, and maximum power was 1.5 kW. The dynamic behavior of the prototype was tested, characterized and evaluated with respect to axial and lateral stiffness, decay torques (bearing drag), vibrational modes and critical speeds. The bearing supports a maximum weight of 65 N at zero gap; axial and lateral stiffness at 1 mm gap were 440 N/cm and 130 N/cm, respectively. Spin down experiments were performed to investigate the energy efficiency of the system. The decay rate was found to depend upon the background pressure in the vacuum chamber and upon the gap width in the bearing. At a background pressure of 5×10⁻⁴ Torr, the coefficient of friction (drag-to-lift ratio) was measured to be 0.000009 at low speeds for 6 mm gap width in the bearing. Our results indicate that further refinement of this technology will allow operation of highly efficient superconducting flywheels in the kWh range.
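    As a rough consistency check on the quoted energy capacity (my estimate; the abstract does not state the disk material, so an aluminium disk of density about 2700 kg/m³ is assumed):

      \[
        m = \rho \pi r^{2} h \approx 2700 \times \pi \times (0.095)^{2} \times 0.03 \approx 2.3\ \text{kg},
        \qquad
        I = \tfrac{1}{2} m r^{2} \approx 0.010\ \text{kg·m}^{2},
      \]
      \[
        \omega = 15000\ \text{rpm} \approx 1571\ \text{rad/s},
        \qquad
        E = \tfrac{1}{2} I \omega^{2} \approx 1.3\times10^{4}\ \text{J} \approx 3.6\ \text{Wh},
      \]

    which is close to the reported maximum energy capacity of 3.8 Wh.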

  6. High-speed data duplication/data distribution: An adjunct to the mass storage equation

    NASA Technical Reports Server (NTRS)

    Howard, Kevin

    1993-01-01

    The term 'mass storage' invokes the image of large on-site disk and tape farms which contain huge quantities of low- to medium-access data. Although the cost of such bulk storage is recognized, the cost of the bulk distribution of this data rarely is given much attention. Mass data distribution becomes an even more acute problem if the bulk data is part of a national or international system. If the bulk data distribution is to travel from one large data center to another large data center then fiber-optic cables or the use of satellite channels is feasible. However, if the distribution must be disseminated from a central site to a number of much smaller, and, perhaps varying sites, then cost prohibits the use of fiber-optic cable or satellite communication. Given these cost constraints much of the bulk distribution of data will continue to be disseminated via inexpensive magnetic tape using the various next day postal service options. For non-transmitted bulk data, our working hypotheses are that the desired duplication efficiency of the total bulk data should be established before selecting any particular data duplication system; and, that the data duplication algorithm should be determined before any bulk data duplication method is selected.

  7. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. ANSI stands for... required documents. Electronic media means: (1) Electronic storage media including memory devices in computers (hard drives) and any removable/transportable digital memory medium, such as magnetic tape or disk...

  8. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. ANSI stands for... required documents. Electronic media means: (1) Electronic storage media including memory devices in computers (hard drives) and any removable/transportable digital memory medium, such as magnetic tape or disk...

  9. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. ANSI stands for... required documents. Electronic media means: (1) Electronic storage media including memory devices in computers (hard drives) and any removable/transportable digital memory medium, such as magnetic tape or disk...

  10. Interactive display of molecular models using a microcomputer system

    NASA Technical Reports Server (NTRS)

    Egan, J. T.; Macelroy, R. D.

    1980-01-01

    A simple, microcomputer-based, interactive graphics display system has been developed for the presentation of perspective views of wire frame molecular models. The display system is based on a TERAK 8510a graphics computer system with a display unit consisting of microprocessor, television display and keyboard subsystems. The operating system includes a screen editor, file manager, PASCAL and BASIC compilers and command options for linking and executing programs. The graphics program, written in UCSD PASCAL, involves the centering of the coordinate system, the transformation of centered model coordinates into homogeneous coordinates, the construction of a viewing transformation matrix to operate on the coordinates, clipping invisible points, perspective transformation and scaling to screen coordinates; commands available include ZOOM, ROTATE, RESET, and CHANGEVIEW. Data file structure was chosen to minimize the amount of disk storage space. Despite the inherent slowness of the system, its low cost and flexibility suggests general applicability.
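    The graphics pipeline described above (centering, homogeneous coordinates, viewing transform, clipping, perspective, screen scaling) can be illustrated with a short sketch. The code below is a generic wireframe projection in Python/NumPy, not the original TERAK/UCSD Pascal program, and the screen size, viewing distance and scale factor are arbitrary assumptions:

      #!/usr/bin/env python3
      # Generic wireframe perspective projection, illustrating the pipeline in the
      # abstract above (not the original TERAK/UCSD Pascal code).
      import numpy as np

      def look_at_z(distance):
          """Viewing transform: translate the centered model 'distance' along -z."""
          T = np.eye(4)
          T[2, 3] = -distance
          return T

      def project(points, distance=10.0, screen=(640, 480), fov_scale=400.0):
          pts = np.asarray(points, dtype=float)
          pts = pts - pts.mean(axis=0)                    # center the coordinate system
          homo = np.hstack([pts, np.ones((len(pts), 1))]) # homogeneous coordinates
          view = (look_at_z(distance) @ homo.T).T         # apply viewing transform
          visible = view[:, 2] < 0                        # points in front of the eye
          x = fov_scale * view[:, 0] / -view[:, 2]        # perspective divide
          y = fov_scale * view[:, 1] / -view[:, 2]
          sx = x + screen[0] / 2                          # scale/offset to screen coords
          sy = screen[1] / 2 - y
          return np.column_stack([sx, sy]), visible

      if __name__ == "__main__":
          # A tiny made-up "molecule", purely for demonstration.
          atoms = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]
          screen_xy, vis = project(atoms)
          print(screen_xy, vis)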

  11. User and group storage management the CMS CERN T2 centre

    NASA Astrophysics Data System (ADS)

    Cerminara, G.; Franzoni, G.; Pfeiffer, A.

    2015-12-01

    A wide range of detector commissioning, calibration and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access patterns, and long-term tape archival. The resource management has been organised around the definition of working groups and the delegation of each group's composition to an identified responsible person. In this paper we illustrate the user/group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.
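    As an illustration of the kind of access-pattern-driven cleanup campaign mentioned above (a generic sketch, not the CMS/EOS tooling; the age threshold and the reliance on POSIX access times are assumptions), files that have not been read for a given period can be listed as candidates for archival or deletion:

      #!/usr/bin/env python3
      # Generic sketch: list files unread for N days as cleanup/archival candidates.
      # The threshold and reliance on POSIX access times are assumptions.
      import os, sys, time

      def stale_files(root, days=180):
          cutoff = time.time() - days * 86400
          for dirpath, _dirs, files in os.walk(root):
              for name in files:
                  path = os.path.join(dirpath, name)
                  try:
                      st = os.stat(path)
                  except OSError:
                      continue
                  if st.st_atime < cutoff:
                      yield path, st.st_size

      if __name__ == "__main__":
          total = 0
          for path, size in stale_files(sys.argv[1] if len(sys.argv) > 1 else "."):
              total += size
              print(path)
          print(f"# candidate volume: {total / 1e9:.1f} GB", file=sys.stderr)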

  12. Careers and people

    NASA Astrophysics Data System (ADS)

    2009-09-01

    IBM scientist wins magnetism prizes Stuart Parkin, an applied physicist at IBM's Almaden Research Center, has won the European Geophysical Society's Néel Medal and the Magnetism Award from the International Union of Pure and Applied Physics (IUPAP) for his fundamental contributions to nanodevices used in information storage. Parkin's research on giant magnetoresistance in the late 1980s led IBM to develop computer hard drives that packed 1000 times more data onto a disk; his recent work focuses on increasing the storage capacity of solid-state electronic devices.

  13. A Science Cloud: OneSpaceNet

    NASA Astrophysics Data System (ADS)

    Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.

    2010-12-01

    The main methodologies of Solar-Terrestrial Physics (STP) so far have been theoretical, experimental and observational approaches, together with computer simulation. Recently "informatics" has been expected to become a new (fourth) approach to STP studies. Informatics is a methodology for analyzing large-scale data (observational data and computer simulation data) to obtain new findings using a variety of data processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are now developing a new research environment named "OneSpaceNet". OneSpaceNet is a cloud-computing environment specialized for scientific work, which connects many researchers through a high-speed network (JGN: Japan Gigabit Network). JGN is a wide-area backbone network operated by NICT; it provides a 10G network and many access points (APs) over Japan. OneSpaceNet also provides rich computer resources for research, such as supercomputers, large-scale data storage, licensed applications, visualization devices (such as a tiled display wall: TDW), databases/DBMS, cluster computers (4-8 nodes) for data processing, and communication devices. What is remarkable about using the science cloud is that a user simply prepares a terminal (a low-cost PC). Once the PC is connected to JGN2plus, the user can make full use of the rich resources of the science cloud. Using communication devices such as a video-conference system, streaming and reflector servers, and media players, users on OneSpaceNet can communicate as if they belonged to the same (one) laboratory: they are members of a virtual laboratory. The specification of the computer resources on OneSpaceNet is as follows. The data storage we have developed so far is almost 1 PB in size. The number of data files managed on the cloud storage keeps growing and is now more than 40,000,000. Notably, the disks forming the large-scale storage are distributed over 5 data centers across Japan, but the storage system performs as one disk. There are three supercomputers on the cloud, one in Tokyo, one in Osaka and the other in Nagoya. Simulation job data from any of the supercomputers are saved on the cloud data storage (same directory); it is a kind of virtual computing environment. The tiled display wall has 36 panels acting as one display; its resolution is as large as 18000x4300 pixels. This size is sufficient to preview or analyze large-scale computer simulation data. It also allows many researchers to view multiple images (e.g., 100 pictures) together on one screen. In our talk we also present a brief report of initial results using OneSpaceNet for Global MHD simulations as an example of successful use of our science cloud: (i) ultra-high time resolution visualization of Global MHD simulations on the large-scale storage and parallel processing system on the cloud, (ii) a database of real-time Global MHD simulations and statistical analyses of the data, and (iii) a 3D Web service for Global MHD simulations.

  14. Practical and Secure Recovery of Disk Encryption Key Using Smart Cards

    NASA Astrophysics Data System (ADS)

    Omote, Kazumasa; Kato, Kazuhiko

    In key-recovery methods using smart cards, a user can recover the disk encryption key in cooperation with the system administrator, even if the user has lost the smart card containing the disk encryption key. However, in most key-recovery methods the disk encryption key is known to the system administrator in advance, so the user's disk data may be read by the system administrator. Furthermore, if the disk encryption key is not known to the system administrator in advance, it is difficult to achieve key authentication. In this paper, we propose a scheme which enables recovery of the disk encryption key when the user's smart card is lost. In our scheme, the disk encryption key is not preserved anywhere, so the system administrator cannot know the key before the key-recovery phase. Only someone who has the user's smart card and knows the user's password can decrypt that user's disk data. Furthermore, we measured the processing time required for user authentication in an experimental environment using a virtual machine monitor. As a result, we found that this processing time is short enough to be practical.
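    For illustration only: the toy sketch below shows the general two-factor idea (a key derived from a card-held secret plus the user's password, so that neither factor alone reveals it). It is not the scheme proposed in the paper, and the KDF parameters and salt are arbitrary assumptions:

      #!/usr/bin/env python3
      # Toy two-factor key derivation: disk key depends on a smart-card secret AND
      # the user's password. This is NOT the paper's protocol, only an illustration.
      import hashlib, hmac, os

      def provision_card():
          """Generate the secret that would be stored on the user's smart card."""
          return os.urandom(32)

      def derive_disk_key(card_secret: bytes, password: str) -> bytes:
          # Stretch the password, then bind it to the card secret with HMAC.
          pw_key = hashlib.pbkdf2_hmac("sha256", password.encode(), b"disk-kdf-salt",
                                       200_000)
          return hmac.new(card_secret, pw_key, hashlib.sha256).digest()

      if __name__ == "__main__":
          secret = provision_card()
          key1 = derive_disk_key(secret, "correct horse battery staple")
          key2 = derive_disk_key(secret, "correct horse battery staple")
          assert key1 == key2                      # same card + password -> same key
          assert key1 != derive_disk_key(secret, "wrong password")
          print("derived 256-bit disk key:", key1.hex())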

  15. Modeling and Observations of Debris Disks

    NASA Astrophysics Data System (ADS)

    Moro-Martín, Amaya

    2009-08-01

    Debris disks are disks of dust observed around mature main sequence stars (generally A to K2 type). They are evidence that these stars harbor a reservoir of dust-producing planetesimals on spatial scales that are similar to those found for the small-body population of our solar system. Debris disks present a wide range of sizes and structural features (inner cavities, warps, offsets, rings, clumps) and there is growing evidence that, in some cases, they might be the result of the dynamical perturbations of a massive planet. Our solar system also harbors a debris disk and some of its properties resemble those of extra-solar debris disks. The study of these disks can shed light on the diversity of planetary systems and can help us place our solar system into context. This contribution is an introduction to the debris disk phenomenon, including a summary of the main properties of debris disks (§1, based mostly on results from extensive surveys carried out with Spitzer), and a discussion of what they can teach us about the diversity of planetary systems (§2).

  16. Emulation Aid System II (EASY II) System Programmer’s Guide.

    DTIC Science & Technology

    1981-03-01

    DISK-SAVE,PASSWD=SSSS,MTUNIT=17,MTFILE=99,DSKUNIT=7. RESTORE-DISK,PASSWD=SSSS,MTUNIT=17,MTFILE=99,DSKUNIT=7. where PASSWD = a system disk ... DISK-SAVE,PASSWD=SSSS,MTUNIT=17,MTFILE=99,DSKUNIT=7. SAVE A DISK FILE ON TAPE. HELP,0,0,0. DSKSV. EDIT. CRT BASED EDITOR (COMMANDS EXPLAINED AS ...) ... BE EXPLICITLY TURNED ON. QCNTRL,LOCKED. RDTAPE,UNIT=17. READING TAPE FOR USE WITH 6000 AND PRINT. 0. RDTAPE. RESTORE-DISK,PASSWD=SSSS,MTUNIT=17 ...

  17. Time-resolved scanning Kerr microscopy of flux beam formation in hard disk write heads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valkass, Robert A. J., E-mail: rajv202@ex.ac.uk; Spicer, Timothy M.; Burgos Parra, Erick

    To meet growing data storage needs, the density of data stored on hard disk drives must increase. In pursuit of this aim, the magnetodynamics of the hard disk write head must be characterized and understood, particularly the process of “flux beaming.” In this study, seven different configurations of perpendicular magnetic recording (PMR) write heads were imaged using time-resolved scanning Kerr microscopy, revealing their detailed dynamic magnetic state during the write process. It was found that the precise position and number of driving coils can significantly alter the formation of flux beams during the write process. These results are applicable to the design and understanding of current PMR and next-generation heat-assisted magnetic recording devices, as well as being relevant to other magnetic devices.

  18. Quiet, Computer at Work.

    ERIC Educational Resources Information Center

    Black, Claudia

    Libraries are becoming information access points, not just book repositories. With greater distribution of printed materials, increased use of optical disks and other compact storage techniques, the emergence of publication on demand, and the proliferation of electronic databases, libraries without large collections will be able to provide prompt…

  19. Ultraviolet light treatment for the restoration of age-related degradation of titanium bioactivity.

    PubMed

    Hori, Norio; Ueno, Takeshi; Suzuki, Takeo; Yamada, Masahiro; Att, Wael; Okada, Shunsaku; Ohno, Akinori; Aita, Hideki; Kimoto, Katsuhiko; Ogawa, Takahiro

    2010-01-01

    To examine the bioactivity of differently aged titanium (Ti) disks and to determine whether ultraviolet (UV) light treatment reverses the possible adverse effects of Ti aging. Ti disks with three different surface topographies were prepared: machined, acid-etched, and sandblasted. The disks were divided into three groups: disks tested for biologic capacity immediately after processing (fresh surfaces), disks stored under dark ambient conditions for 4 weeks, and disks stored for 4 weeks and treated with UV light. The protein adsorption capacity of Ti was examined using albumin and fibronectin. Cell attraction to Ti was evaluated by examining migration, attachment, and spreading behaviors of human osteoblasts on Ti disks. Osteoblast differentiation was evaluated by examining alkaline phosphatase activity, the expression of bone-related genes, and mineralized nodule area in the culture. Four-week-old Ti disks showed ≤ 50% protein adsorption after 6 hours of incubation compared with fresh disks, regardless of surface topography. Total protein adsorption for 4-week-old surfaces did not reach the level of fresh surfaces, even after 24 hours of incubation. Fifty percent fewer human osteoblasts migrated and attached to 4-week-old surfaces compared with fresh surfaces. Alkaline phosphatase activity, gene expression, and mineralized nodule area were substantially reduced on the 4-week-old surfaces. The reduction of these biologic parameters was associated with the conversion of Ti disks from superhydrophilicity to hydrophobicity during storage for 4 weeks. UV-treated 4-week-old disks showed even higher protein adsorption, osteoblast migration, attachment, differentiation, and mineralization than fresh surfaces, and were associated with regenerated superhydrophilicity. Time-related degradation of Ti bioactivity is substantial and impairs the recruitment and function of human osteoblasts as compared to freshly prepared Ti surfaces, suggesting a "biologic aging"-like change of Ti. UV treatment of aged Ti, however, restores and even enhances bioactivity, exceeding its innate levels.

  20. Protoplanetary Disks in Multiple Star Systems

    NASA Astrophysics Data System (ADS)

    Harris, Robert J.

    Most stars are born in multiple systems, so the presence of a stellar companion may commonly influence planet formation. Theory indicates that companions may inhibit planet formation in two ways. First, dynamical interactions can tidally truncate circumstellar disks. Truncation reduces disk lifetimes and masses, leaving less time and material for planet formation. Second, these interactions might reduce grain-coagulation efficiency, slowing planet formation in its earliest stages. I present three observational studies investigating these issues. First is a spatially resolved Submillimeter Array (SMA) census of disks in young multiple systems in the Taurus-Auriga star-forming region to study their bulk properties. With this survey, I confirmed that disk lifetimes are preferentially decreased in multiples: single stars have detectable millimeter-wave continuum emission twice as often as components of multiples. I also verified that millimeter luminosity (proportional to disk mass) declines with decreasing stellar separation. Furthermore, by measuring resolved-disk radii, I quantitatively tested tidal-truncation theories: results were mixed, with a few disks much larger than expected. I then switch focus to the grain-growth properties of disks in multiple star systems. By combining SMA, Combined Array for Research in Millimeter Astronomy (CARMA), and Jansky Very Large Array (VLA) observations of the circumbinary disk in the UZ Tau quadruple system, I detected radial variations in the grain-size distribution: large particles preferentially inhabit the inner disk. Detections of these theoretically predicted variations have been rare. I related this to models of grain coagulation in gas disks and find that our results are consistent with growth limited by radial drift. I then present a study of grain growth in the disks of the AS 205 and UX Tau multiple systems. By combining SMA, Atacama Large Millimeter/submillimeter Array (ALMA), and VLA observations, I detected radial variations of the grain-size distribution in the AS 205 A disk, but not in the UX Tau A disk. I find that some combination of radial drift and fragmentation limits growth in the AS 205 A disk. In the final chapter, I summarize my findings that, while multiplicity clearly influences bulk disk properties, it does not obviously inhibit grain growth. Other investigations are suggested.

  1. Data federation strategies for ATLAS using XRootD

    NASA Astrophysics Data System (ADS)

    Gardner, Robert; Campana, Simone; Duckeck, Guenter; Elmsheuser, Johannes; Hanushevsky, Andrew; Hönig, Friedrich G.; Iven, Jan; Legger, Federica; Vukotic, Ilija; Yang, Wei; Atlas Collaboration

    2014-06-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and then to globally distributed storage resources. We describe programmatic testing of various federation access modes, including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
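    To make the cost-of-data-access idea concrete, here is a purely illustrative sketch (not the ATLAS/PanDA implementation; the cost weights, metrics and source names are invented) of choosing where a job should read a dataset from, given per-source network and storage metrics:

      #!/usr/bin/env python3
      # Illustrative cost-based choice of a data access source for a job.
      # Weights, metrics and source names are invented for the example.
      from dataclasses import dataclass

      @dataclass
      class SourceMetrics:
          rtt_ms: float           # network round-trip time from the job site
          throughput_mbps: float  # recent read throughput to the job site
          load: float             # fraction of busy slots / storage load (0..1)

      def access_cost(m: SourceMetrics, w_rtt=0.02, w_tp=100.0, w_load=1.0) -> float:
          # Lower is better: penalize latency and load, reward throughput.
          return w_rtt * m.rtt_ms + w_tp / max(m.throughput_mbps, 1.0) + w_load * m.load

      def choose_source(candidates: dict) -> str:
          return min(candidates, key=lambda name: access_cost(candidates[name]))

      if __name__ == "__main__":
          sources = {
              "LOCAL_DISK": SourceMetrics(rtt_ms=0.5, throughput_mbps=2000, load=0.7),
              "REGION_FED": SourceMetrics(rtt_ms=20,  throughput_mbps=800,  load=0.3),
              "REMOTE_FED": SourceMetrics(rtt_ms=120, throughput_mbps=200,  load=0.2),
          }
          print("read from:", choose_source(sources))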

  2. Investigation of storage options for scientific computing on Grid and Cloud facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on bare metal nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
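    The scaling effect noted at the end of the abstract (aggregate read throughput versus number of concurrent clients) can be probed with a simple driver. The sketch below is generic (not the Fermilab test harness), and the file path, read sizes and client counts are assumptions:

      #!/usr/bin/env python3
      # Generic probe of aggregate read throughput vs. number of concurrent readers.
      # Test file path, read size and client counts are assumptions for illustration.
      import multiprocessing as mp
      import os, time

      TEST_FILE = "/mnt/teststore/bigfile.dat"   # assumed file on the storage under test
      CHUNK = 1 << 20                            # 1 MiB reads
      PER_CLIENT_BYTES = 256 * CHUNK             # 256 MiB per client

      def reader(_):
          read = 0
          with open(TEST_FILE, "rb", buffering=0) as f:
              while read < PER_CLIENT_BYTES:
                  buf = f.read(CHUNK)
                  if not buf:
                      f.seek(0)                  # wrap around for small files
                      continue
                  read += len(buf)
          return read

      def run(clients):
          start = time.time()
          with mp.Pool(clients) as pool:
              total = sum(pool.map(reader, range(clients)))
          seconds = time.time() - start
          return total / seconds / 1e6           # MB/s aggregate

      if __name__ == "__main__":
          for n in (1, 2, 4, 8, 16, 32):
              print(f"{n:3d} clients: {run(n):8.1f} MB/s aggregate")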

  3. Head-Disk Interface Technology: Challenges and Approaches

    NASA Astrophysics Data System (ADS)

    Liu, Bo

    Magnetic hard disk drive (HDD) technology is believed to be one of the most successful examples of modern mechatronics systems. The mechanical beauty of magnetic HDD includes simple but super high accuracy positioning head, positioning technology, high speed and stability spindle motor technology, and head-disk interface technology which keeps the millimeter sized slider flying over a disk surface at nanometer level slider-disk spacing. This paper addresses the challenges and possible approaches on how to further reduce the slider disk spacing whilst retaining the stability and robustness level of head-disk systems for future advanced magnetic disk drives.

  4. Magnetic bearings for a high-performance optical disk buffer, volume 1

    NASA Technical Reports Server (NTRS)

    Hockney, Richard; Adler, Karen; Anastas, George, Jr.; Downer, James; Flynn, Frederick; Goldie, James; Gondhalekar, Vijay; Hawkey, Timothy; Johnson, Bruce

    1990-01-01

    The innovation investigated in this project was the application of magnetic bearing technology to the translator head of an optical-disk data storage device. Both the capability for space-based applications and improved performance are expected to result. The phase 1 effort produced: (1) detailed specifications for both the translator-head and rotary-spindle bearings; (2) candidate hardware configurations for both bearings with detail definition for the translator head; (3) required characteristics for the magnetic bearing control loops; (4) position sensor selection; and (5) definition of the required electronic functions. The principal objective of Phase 2 was the design, fabrication, assembly, and test of the magnetic bearing system for the translator head. The scope of work included: (1) mechanical design of each of the required components; (2) electrical design of the required circuitry; (3) fabrication of the component parts and breadboard electronics; (4) generation of a test plan; and (5) integration of the prototype unit and performance testing. The project has confirmed the applicability of magnetic bearing technology to suspension of the translator head of the optical disk device, and demonstrated the achievement of all performance objectives. The magnetic bearing control loops perform well, achieving 100 Hz nominal bandwidth with phase margins between 37 and 63 degrees. The worst-case position resolution is 0.02 micron in the displacement loops and 1 microradian in the rotation loops. The system is very robust to shock disturbances, recovering smoothly even when collisions occur between the translator and frame. The unique start-up/shut-down circuit has proven very effective.

  5. Identifying Likely Disk-hosting M dwarfs with Disk Detective

    NASA Astrophysics Data System (ADS)

    Silverberg, Steven; Wisniewski, John; Kuchner, Marc J.; Disk Detective Collaboration

    2018-01-01

    M dwarfs are critical targets for exoplanet searches. Debris disks often provide key information as to the formation and evolution of planetary systems around higher-mass stars, alongside the planets themselves. However, fewer than 300 M dwarf debris disks are known, despite M dwarfs making up 70% of the local neighborhood. The Disk Detective citizen science project has identified over 6000 new potential disk host stars from the AllWISE catalog over the past three years. Here, we present preliminary results of our search for new disk-hosting M dwarfs in the survey. Based on near-infrared color cuts and fitting stellar models to photometry, we have identified over 500 potential new M dwarf disk hosts, nearly doubling the known number of such systems. In this talk, we present our methodology, and outline our ongoing work to confirm systems as M dwarf disks.

  6. Gas in the Terrestrial Planet Region of Disks: CO Fundamental Emission from T Tauri Stars

    DTIC Science & Technology

    2003-06-01

    Keywords: planetary systems: protoplanetary disks; stars: variables: other. [Fragmentary extract from the introduction:] As the likely birthplaces of planets, the inner regions of young ... both low column density regions, such as disk gaps, and temperature inversion regions in disk atmospheres can produce significant emission. ... which planetary systems form. The motivation to study inner disks is all the more intense today given the discovery of planets outside the solar system.

  7. Data storage and retrieval system abstract

    NASA Technical Reports Server (NTRS)

    Matheson, Barbara

    1992-01-01

    The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team led by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.

  8. Data storage and retrieval system abstract

    NASA Astrophysics Data System (ADS)

    Matheson, Barbara

    1992-09-01

    The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team led by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.

  9. Evidence for dust grain growth in young circumstellar disks.

    PubMed

    Throop, H B; Bally, J; Esposito, L W; McCaughrean, M J

    2001-06-01

    Hundreds of circumstellar disks in the Orion nebula are being rapidly destroyed by the intense ultraviolet radiation produced by nearby bright stars. These young, million-year-old disks may not survive long enough to form planetary systems. Nevertheless, the first stage of planet formation, the growth of dust grains into larger particles, may have begun in these systems. Observational evidence for these large particles in Orion's disks is presented. A model of grain evolution in externally irradiated protoplanetary disks is developed and predicts rapid particle size evolution and sharp outer disk boundaries. We discuss implications for the formation rates of planetary systems.

  10. Stagger angle dependence of inertial and elastic coupling in bladed disks

    NASA Technical Reports Server (NTRS)

    Crawley, E. F.; Mokadam, D. R.

    1984-01-01

    Conditions which necessitate the inclusion of disk and shaft flexibility in the analysis of blade response in rotating blade-disk-shaft systems are derived in terms of nondimensional parameters. A simple semianalytical Rayleigh-Ritz model is derived in which the disk possesses all six rigid body degrees of freedom, which are elastically constrained by the shaft. Inertial coupling by the rigid body motion of the disk on a flexible shaft and out-of-plane elastic coupling due to disk flexure are included. Frequency ratios and mass ratios, which depend on the stagger angle, are determined for three typical rotors: a first stage high-pressure core compressor, a high bypass ratio fan, and an advanced turboprop. The stagger angle controls the degree of coupling in the blade-disk system. In the blade-disk-shaft system, the stagger angle determines whether blade-disk motion couples principally to the out-of-plane or in-plane motion of the disk on the shaft. The Ritz analysis shows excellent agreement with experimental results.
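    For readers unfamiliar with the Rayleigh-Ritz approach used above, a generic sketch of the formulation (not the specific matrices derived in this paper) is: assume the coupled blade-disk-shaft motion as a finite sum of assumed modes,

      \[
        u(x,t) \approx \sum_{i=1}^{n} \phi_i(x)\, q_i(t),
      \]

    substitute into the kinetic and strain energies to obtain generalized mass and stiffness matrices $M$ and $K$, and solve the eigenvalue problem

      \[
        \left(K - \omega^{2} M\right)\mathbf{q} = \mathbf{0}.
      \]

    In a blade-disk-shaft model of this kind, the off-diagonal terms of $M$ (inertial coupling through rigid-body motion of the disk on the shaft) and of $K$ (elastic coupling through disk flexure) depend on the stagger angle, which is why the stagger angle controls the degree of coupling.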

  11. AIIM '90: Themes and Trends.

    ERIC Educational Resources Information Center

    Cowan, Les

    1990-01-01

    Outlines and analyzes new trends and developments at the Association for Information and Image Management's 1990 spring conference. The growth of imaging and the optical storage industry is emphasized, and new developments that are discussed include hardware; optical disk drives; jukeboxes; local area networks (LANs); bar codes; image displays;…

  12. Digital Audio Tape: Yet Another Archival Media?

    ERIC Educational Resources Information Center

    Vanker, Anthony D.

    1989-01-01

    Provides an introduction to the technical aspects of digital audiotape and compares it to other computer storage devices such as optical data disks and magnetic tape cartridges in terms of capacity, transfer rate, and cost. The current development of digital audiotape standards is also discussed. (five references) (CLB)

  13. Manufacturing Methods and Technology Project Summary Reports

    DTIC Science & Technology

    1983-06-01

    Proposal will be prepared by Solar Turbines, Inc. for introduction of cast titanium impellers into T62T-40 production. Detroit Diesel Allison will... microprocessor control, RS 232 serial communications ports, binary I/O ports, floppy disk mass storage and control panel. A component pickup

  14. Holography and optical information processing; Proceedings of the Soviet-Chinese Joint Seminar, Bishkek, Kyrgyzstan, Sept. 21-26, 1991

    NASA Astrophysics Data System (ADS)

    Mikaelian, Andrei L.

    Attention is given to data storage, devices, architectures, and implementations of optical memory and neural networks; holographic optical elements and computer-generated holograms; holographic display and materials; systems, pattern recognition, interferometry, and applications in optical information processing; and special measurements and devices. Topics discussed include optical immersion as a new way to increase information recording density, systems for data reading from optical disks on the basis of diffractive lenses, a new real-time optical associative memory system, an optical pattern recognition system based on a WTA model of neural networks, phase diffraction grating for the integral transforms of coherent light fields, holographic recording with operated sensitivity and stability in chalcogenide glass layers, a compact optical logic processor, a hybrid optical system for computing invariant moments of images, optical fiber holographic interferometry, and image transmission through random media in single pass via optical phase conjugation.

  15. Composition and Realization of Source-to-Sink High-Performance Flows: File Systems, Storage, Hosts, LAN and WAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. To support such capabilities, significant progress has been made in various components including the deployment of 100 Gbps networks with future 1 Tbps bandwidth, increases in end-host capabilities with multiple cores and buses, capacity improvements in large disk arrays, and deployment of parallel file systems such as Lustre and GPFS. High-performance source-to-sink data flows must be composed of these component systems, which requires significant optimizations of the storage-to-host data and execution paths to match the edge and long-haul network connections. In particular, end systems are currently supported by 10-40 Gbps Network Interface Cards (NIC) and 8-32 Gbps storage Host Channel Adapters (HCAs), which carry the individual flows that collectively must reach network speeds of 100 Gbps and higher. Indeed, such data flows must be synthesized using multicore, multibus hosts connected to high-performance storage systems on one side and to the network on the other side. Current experimental results show that the constituent flows must be optimally composed and preserved from storage systems, across the hosts and the networks with minimal interference. Furthermore, such a capability must be made available transparently to the science users without placing undue demands on them to account for the details of underlying systems and networks. And, this task is expected to become even more complex in the future due to the increasing sophistication of hosts, storage systems, and networks that constitute the high-performance flows. The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to achieve source-to-sink high-performance flows, and (2) develop tools that provide these capabilities through simple interfaces to users and applications. In terms of the former, we propose to develop (1) optimization methods that align and transition multiple storage flows to multiple network flows on multicore, multibus hosts; and (2) edge and long-haul network path realization and maintenance using advanced provisioning methods including OSCARS and OpenFlow. We also propose synthesis methods that combine these individual technologies to compose high-performance flows using a collection of constituent storage-network flows, and realize them across the storage and local network connections as well as long-haul connections. We propose to develop automated user tools that profile the hosts, storage systems, and network connections; compose the source-to-sink complex flows; and set up and maintain the needed network connections. These solutions will be tested using (1) 100 Gbps connection(s) between Oak Ridge National Laboratory (ORNL) and Argonne National Laboratory (ANL) with storage systems supported by Lustre and GPFS file systems with an asymmetric connection to University of Memphis (UM); (2) ORNL testbed with multicore and multibus hosts, switches with OpenFlow capabilities, and network emulators; and (3) 100 Gbps connections from ESnet and their Openflow testbed, and other experimental connections. This proposal brings together the expertise and facilities of the two national laboratories, ORNL and ANL, and UM.
It also represents a collaboration between DOE and the Department of Defense (DOD) projects at ORNL by sharing technical expertise and personnel costs, and leveraging the existing DOD Extreme Scale Systems Center (ESSC) facilities at ORNL.
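
    As a back-of-the-envelope illustration of the flow-composition arithmetic described above (the per-flow rates below are assumptions chosen from the NIC/HCA ranges quoted in the abstract, not measured figures):

      # How many constituent flows must be aggregated to fill a 100 Gbps path?
      # Per-flow rates are illustrative assumptions only.
      import math

      path_gbps = 100.0          # long-haul connection to fill
      nic_flow_gbps = 10.0       # assumed per-NIC-flow rate (10-40 Gbps NICs cited above)
      hca_flow_gbps = 8.0        # assumed per-HCA storage flow rate (8-32 Gbps HCAs cited above)

      nic_flows = math.ceil(path_gbps / nic_flow_gbps)
      hca_flows = math.ceil(path_gbps / hca_flow_gbps)

      print(f"network flows needed : {nic_flows}")
      print(f"storage flows needed : {hca_flows}")
      # Aligning 13 storage flows onto 10 network flows across multiple cores and
      # buses is the kind of composition problem the project describes.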

  16. Disks, Young Stars, and Radio Waves: The Quest for Forming Planetary Systems

    NASA Astrophysics Data System (ADS)

    Chandler, C. J.; Shepherd, D. S.

    2008-08-01

    In the 18th century, Kant and Laplace suggested the Solar System formed from a rotating gaseous disk, but convincing evidence that young stars are indeed surrounded by such disks was not presented for another 200 years. As we move into the 21st century, the emphasis is now on disk formation, the role of disks in star formation, and on how planets form in those disks. Radio wavelengths play a key role in these studies, currently providing some of the highest-spatial-resolution images of disks, along with evidence of the growth of dust grains into planetesimals. The future capabilities of EVLA and ALMA provide extremely exciting prospects for resolving disk structure and kinematics, studying disk chemistry, directly detecting protoplanets, and imaging disks in formation.

  17. Dynamic stability and slider-lubricant interactions in hard disk drives

    NASA Astrophysics Data System (ADS)

    Ambekar, Rohit Pradeep

    2007-12-01

    Hard disk drives (HDD) have played a significant role in the current information age and have become the backbone of storage. The soaring demand for mass data storage drives the necessity for increasing capacity of the drives and hence the areal density on the disks as well as the reliability of the HDD. To achieve greater areal density in hard disk drives, the flying height of the airbearing slider continually decreases. Different proximity forces and interactions influence the air bearing slider resulting in fly height modulation and instability. This poses several challenges to increasing the areal density (current goal is 2Tb/in.2) as well as making the head-disk interface (HDI) more reliable. Identifying and characterizing these forces or interactions has become important for achieving a stable fly height at proximity and realizing the goals of areal density and reliability. Several proximity forces or interactions influencing the slider are identified through the study of touchdown-takeoff hysteresis. Slider-lubricant interaction which causes meniscus force between the slider and disk as well as airbearing surface contamination seems to be the most important factor affecting stability and reliability at proximity. In addition, intermolecular forces and disk topography are identified as important factors. Disk-to-slider lubricant transfer leads to lubricant pickup on the slider and also causes depletion of lubricant on the disk, affecting stability and reliability of the HDI. Experimental and numerical investigation as well as a parametric study of the process of lubricant transfer has been done using a half-delubed disk. In the first part of this parametric study, dependence on the disk lubricant thickness, lubricant type and slider ABS design has been investigated. It is concluded that the lubricant transfer can occur without slider-disk contact and there can be more than one timescale associated with the transfer. Further, the transfer increases non-linearly with increasing disk lubricant thickness. Also, the transfer depends on the type of lubricant used, and is less for Ztetraol than for Zdol. The slider ABS design also plays an important role, and a few suggestions are made to improve the ABS design for better lubricant performance. In the second part of the parametric study, the effect of carbon overcoat, lubricant molecular weight and inclusion of X-1P and A20H on the slider-lubricant interactions is investigated using a half-delubed disk approach. Based on the results, it is concluded that there exists a critical head-disk clearance above which there is negligible slider-lubricant interaction. The interaction starts at this critical clearance and increases in intensity as the head-disk clearance is further decreased below the critical clearance. Using shear stress simulations and previously published work a theory is developed to support the experimental observations. The critical clearance depends on various HDI parameters and hence can be reduced through proper design of the interface. Comparison of critical clearance on CHx and CHxNy media indicates that presence of nitrogen is better for HDI as it reduces the critical clearance, which is found to increase with increasing lubricant molecular weight and in presence of additives X-1P and A20H. Further experiments maintaining a fixed slider-disk clearance suggest that two different mechanisms dominate the disk-to-slider and slider-to-disk lubricant transfer. 
One of the key factors influencing the slider stability at proximity is the disk topography, since it provides dynamic excitation to the low-flying sliders and strongly influences its dynamics. The effect of circumferential as well as radial disk topography is investigated using a new method to measure the 2-D (true) disk topography. Simulations using CMLAir dynamic simulator indicate a strong dependence on the circumferential roughness and waviness features as well as radial features, which have not been studied intensively till now. The simulations with 2-D disk topography are viewed as more realistic than the 1-D simulations. Further, it is also seen that the effect of the radial features can be reduced through effective ABS design. Finally, an attempt has been made to establish correlations between some of the proximity interactions as well as others which may affect the HDI reliability by creating a relational chart. Such an organization serves to give a bigger picture of the various efforts being made in the field of HDI reliability and link them together. From this chart, a causal relationship is suggested between the electrostatic, intermolecular and meniscus forces.

  18. Floppy disk utility user's guide

    NASA Technical Reports Server (NTRS)

    Akers, J. W.

    1981-01-01

    The Floppy Disk Utility Program transfers programs between files on the hard disk and floppy disk. It also copies the data on one floppy disk onto another floppy disk and compares the data. The program operates on the Data General NOVA-4X under the Real Time Disk Operating System (RDOS).

  19. HST/WFC3 Imaging and Multi-Wavelength Characterization of Edge-On Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Gould, Carolina; Williams, Hayley; Duchene, Gaspard

    2017-10-01

    In recent years, the imaging detail in resolved protoplanetary disks has vastly improved and created a critical mass of objects to survey and compare properties, leading us to a better understanding of system formation. In particular, disks with an edge-on inclination offer an important perspective, not only for the imaging convenience, since the disk blocks stellar light, but also scientifically: an edge-on disk provides an otherwise impossible opportunity to observe the vertical dust structure of a protoplanetary system. In this contribution, we compare seven HST-imaged edge-on protoplanetary disks in the Taurus, Chamaeleon and Ophiuchus star-forming regions, noting the variation in morphology (settled vs. flared), the dust properties revealed by multiwavelength color mapping, brightness variability on timescales of years, and the presence in some systems of a blue-colored atmosphere far above the disk midplane. By using a uniform approach for their analysis, together these seven edge-on protoplanetary disk systems can give insights on evolutionary processes and inform future projects that explore this critical stage of planet formation.

  20. On the role of disks in the formation of stellar systems: A numerical parameter study of rapid accretion

    DOE PAGES

    Kratter, Kaitlin M.; Matzner, Christopher D.; Krumholz, Mark R.; ...

    2009-12-23

    We study rapidly accreting, gravitationally unstable disks with a series of idealized global, numerical experiments using the code ORION. Our numerical parameter study focuses on protostellar disks, showing that one can predict disk behavior and the multiplicity of the accreting star system as a function of two dimensionless parameters which compare the infall rate to the disk sound speed and orbital period. Although gravitational instabilities become strong, we find that fragmentation into binary or multiple systems occurs only when material falls in several times more rapidly than the canonical isothermal limit. The disk-to-star accretion rate is proportional to the infall rate and governed by gravitational torques generated by low-m spiral modes. Furthermore, we also confirm the existence of a maximum stable disk mass: disks that exceed ~50% of the total system mass are subject to fragmentation and the subsequent formation of binary companions.
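
    The two dimensionless parameters referred to above are commonly written in the rapid-accretion disk literature roughly as follows (the exact normalizations are an assumption here, since the abstract itself does not define them): a thermal parameter comparing the infall rate to the disk sound speed, and a rotational parameter comparing it to the orbital period at the infall radius,

      \[
        \xi \;\equiv\; \frac{\dot{M}_{\rm in}\,G}{c_s^{3}},
        \qquad
        \Gamma \;\equiv\; \frac{\dot{M}_{\rm in}}{M_{\ast d}\,\Omega_{k,\rm in}} .
      \]

    Large \xi means infall overwhelms the disk's ability to process mass thermally (favoring fragmentation); large \Gamma means infall is fast compared with an orbital period.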

  1. Modifying the Standard Disk Model for the Ultraviolet Spectral Analysis of Disk-dominated Cataclysmic Variables. I. The Novalikes MV Lyrae, BZ Camelopardalis, and V592 Cassiopeiae.

    PubMed

    Godon, Patrick; Sion, Edward M; Balman, Şölen; Blair, William P

    2017-09-01

    The standard disk is often inadequate to model disk-dominated cataclysmic variables (CVs) and generates a spectrum that is bluer than the observed UV spectra. X-ray observations of these systems reveal an optically thin boundary layer (BL) expected to appear as an inner hole in the disk. Consequently, we truncate the inner disk. However, instead of removing the inner disk, we impose the no-shear boundary condition at the truncation radius, thereby lowering the disk temperature and generating a spectrum that better fits the UV data. With our modified disk, we analyze the archival UV spectra of three novalikes that cannot be fitted with standard disks. For the VY Scl systems MV Lyr and BZ Cam, we fit a hot inflated white dwarf (WD) with a cold modified disk (\dot{M} ~ a few 10^{-9} M_⊙ yr^{-1}). For V592 Cas, the slightly modified disk (\dot{M} ~ 6 × 10^{-9} M_⊙ yr^{-1}) completely dominates the UV. These results are consistent with Swift X-ray observations of these systems, revealing BLs merged with ADAF-like flows and/or hot coronae, where the advection of energy is likely launching an outflow and heating the WD, thereby explaining the high WD temperature in VY Scl systems. This is further supported by the fact that the X-ray hardness ratio increases with the shallowness of the UV slope in a small CV sample we examine. Furthermore, for 105 disk-dominated systems, the International Ultraviolet Explorer spectra UV slope decreases in the same order as the ratio of the X-ray flux to optical/UV flux: from SU UMa's, to U Gem's, Z Cam's, UX UMa's, and VY Scl's.
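
    For reference, the textbook steady-state disk effective-temperature profile with the usual zero-torque (no-shear) boundary factor is (a standard relation quoted for orientation, not reproduced from the paper):

      \[
        T_{\rm eff}^{4}(r) \;=\; \frac{3\,G M_{\rm wd}\,\dot{M}}{8\pi\,\sigma\,r^{3}}
        \left[\,1-\left(\frac{R_{0}}{r}\right)^{1/2}\right],
      \]

    where R_0 is the radius at which the no-shear condition is imposed. Moving R_0 outward from the white-dwarf surface to a truncation radius suppresses T_eff in the inner disk, which is the sense of the modification the authors describe for fitting the redder observed UV spectra.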

  2. Millimeter Studies of Nearby Debris Disks

    NASA Astrophysics Data System (ADS)

    MacGregor, Meredith Ann

    2017-03-01

    At least 20% of nearby main sequence stars are known to be surrounded by disks of dusty material resulting from the collisional erosion of planetesimals, similar to asteroids and comets in our own Solar System. The material in these ‘debris disks’ is directly linked to the larger bodies, like planets, in the system through collisions and gravitational perturbations. Observations at millimeter wavelengths are especially critical to our understanding of these systems, since the large grains that dominate emission at these long wavelengths reliably trace the underlying planetesimal distribution. In this thesis, I have used state-of-the-art observations at millimeter wavelengths to address three related questions concerning debris disks and planetary system evolution: 1) How are wide-separation, substellar companions formed? 2) What is the physical nature of the collisional process in debris disks? And, 3) Can the structure and morphology of debris disks provide probes of planet formation and subsequent dynamical evolution? Using ALMA observations of GQ Lup, a pre-main sequence system with a wide-separation, substellar companion, I have placed constraints on the mass of a circumplanetary disk around the companion, informing formation scenarios for this and other similar systems (Chapter 2). I obtained observations of a sample of fifteen debris disks with both the VLA and ATCA at centimeter wavelengths, and robustly determined the millimeter spectral index of each disk and thus the slope of the grain size distribution, providing the first observational test of collision models of debris disks (Chapter 3). By applying an MCMC modeling framework to resolved millimeter observations with ALMA and SMA, I have placed the first constraints on the position, width, surface density gradient, and any asymmetric structure of the AU Mic, HD 15115, Epsilon Eridani, Tau Ceti, and Fomalhaut debris disks (Chapters 4–8). These observations of individual systems hint at trends in disk structure and dynamics, which can be explored further with a comparative study of a sample of the eight brightest debris disks around Sun-like stars within 20 pc (Chapter 9). This body of work has yielded the first resolved images of notable debris disks at millimeter wavelengths, and complements other ground- and space-based observations by providing constraints on these systems with uniquely high angular resolution and wavelength coverage. Together these results provide a foundation to investigate the dynamical evolution of planetary systems through multi-wavelength observations of debris disks.

  3. RAID Disk Arrays for High Bandwidth Applications

    NASA Technical Reports Server (NTRS)

    Moren, Bill

    1996-01-01

    High bandwidth applications require large amounts of data transferred to/from storage devices at extremely high data rates. Further, these applications often are 'real time' in which access to the storage device must take place on the schedule of the data source, not the storage. A good example is a satellite downlink - the volume of data is quite large and the data rates quite high (dozens of MB/sec). Further, a telemetry downlink must take place while the satellite is overhead. A storage technology which is ideally suited to these types of applications is redundant arrays of independent discs (RAID). RAID storage technology, while offering differing methodologies for a variety of applications, supports the performance and redundancy required in real-time applications. Of the various RAID levels, RAID-3 is the only one which provides high data transfer rates under all operating conditions, including after a drive failure.
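
    RAID-3 stripes data at fine (byte) granularity across the data drives and keeps bytewise XOR parity on a dedicated parity drive, so a single failed drive can be rebuilt on the fly. A minimal sketch of the parity arithmetic (drive count and payload are made up for illustration):

      # Minimal RAID-3-style bytewise parity: stripe data across N data drives plus
      # one dedicated parity drive, then reconstruct a failed drive from the survivors.
      from functools import reduce

      def split_stripes(data: bytes, n_drives: int) -> list[bytearray]:
          """Round-robin the bytes of `data` across n_drives (byte-level striping)."""
          drives = [bytearray() for _ in range(n_drives)]
          for i, b in enumerate(data):
              drives[i % n_drives].append(b)
          return drives

      def parity(drives: list[bytearray]) -> bytearray:
          """Bytewise XOR across equally long drive buffers (zero-padded)."""
          length = max(len(d) for d in drives)
          padded = [d.ljust(length, b"\x00") for d in drives]
          return bytearray(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded))

      data = b"telemetry downlink payload"          # illustrative payload
      drives = split_stripes(data, 4)               # 4 data drives (assumed)
      p = parity(drives)                            # dedicated parity drive

      # Simulate losing drive 2 and rebuilding it from parity plus the survivors.
      lost = drives[2]
      rebuilt = parity([d for i, d in enumerate(drives) if i != 2] + [p])[:len(lost)]
      assert rebuilt == lost
      print("drive 2 rebuilt OK:", rebuilt)

    Because every stripe touches all drives, reads and writes stream at the aggregate rate of the data drives, which is why RAID-3 suits high-rate sequential workloads such as telemetry capture.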

  4. 48 CFR 1552.215-72 - Instructions for the Preparation of Proposals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of the information, to expedite review of the proposal, submit an IBM-compatible software or storage... offeror used another spreadsheet program, indicate the software program used to create this information... submission of a compatible software or device will expedite review, failure to submit a disk will not affect...

  5. 48 CFR 1552.215-72 - Instructions for the Preparation of Proposals.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of the information, to expedite review of the proposal, submit an IBM-compatible software or storage... offeror used another spreadsheet program, indicate the software program used to create this information... submission of a compatible software or device will expedite review, failure to submit a disk will not affect...

  6. CD-ROMs: Volumes of Books on a Single 4 3/4-Inch Disk.

    ERIC Educational Resources Information Center

    Angle, Melanie

    1992-01-01

    Summarizes the storage capacity, advantages, disadvantages, hardware configurations, and costs of CD-ROMs. Several available titles are described, including "Books in Print," literature study guides, the works of Shakespeare, a historical almanac of "Time Magazine" articles, a scientific dictionary and encyclopedia, and a…

  7. 5 CFR 293.107 - Special safeguards for automated records.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... for automated records. (a) In addition to following the security requirements of § 293.106 of this... security safeguards for data about individuals in automated records, including input and output documents, reports, punched cards, magnetic tapes, disks, and on-line computer storage. The safeguards must be in...

  8. 5 CFR 293.107 - Special safeguards for automated records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... for automated records. (a) In addition to following the security requirements of § 293.106 of this... security safeguards for data about individuals in automated records, including input and output documents, reports, punched cards, magnetic tapes, disks, and on-line computer storage. The safeguards must be in...

  9. 5 CFR 293.107 - Special safeguards for automated records.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... for automated records. (a) In addition to following the security requirements of § 293.106 of this... security safeguards for data about individuals in automated records, including input and output documents, reports, punched cards, magnetic tapes, disks, and on-line computer storage. The safeguards must be in...

  10. 5 CFR 293.107 - Special safeguards for automated records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... for automated records. (a) In addition to following the security requirements of § 293.106 of this... security safeguards for data about individuals in automated records, including input and output documents, reports, punched cards, magnetic tapes, disks, and on-line computer storage. The safeguards must be in...

  11. Lubricant depletion under various laser heating conditions in Heat Assisted Magnetic Recording (HAMR)

    NASA Astrophysics Data System (ADS)

    Xiong, Shaomin; Wu, Haoyu; Bogy, David

    2014-09-01

    Heat assisted magnetic recording (HAMR) is expected to increase the storage areal density to more than 1 Tb/in2 in hard disk drives (HDDs). In this technology, a laser is used to heat the magnetic media to the Curie point (~400-600 °C) during the writing process. The lubricant on the top of a magnetic disk could evaporate and be depleted under the laser heating. The change of the lubricant can lead to instability of the flying slider and failure of the head-disk interface (HDI). In this study, a HAMR test stage is developed to study the lubricant thermal behavior. Various heating conditions are controlled for the study of the lubricant thermal depletion. The effects of laser heating repetitions and power levels on the lubricant depletion are investigated experimentally. The lubricant reflow behavior is discussed as well.

  12. From stars to dust: looking into a circumstellar disk through chondritic meteorites.

    PubMed

    Connolly, Harold C

    2005-01-07

    One of the most fundamental questions in planetary science is, How did the solar system form? In this special issue, astronomical observations and theories constraining circumstellar disks, their lifetimes, and the formation of planetary to subplanetary objects are reviewed. At present, it is difficult to observe what is happening within disks and to determine if another disk environment is comparable to the early solar system disk environment (called the protoplanetary disk). Fortunately, we have chondritic meteorites, which provide a record of the processes that operated and materials present within the protoplanetary disk.

  13. Vortical structures for nanomagnetic memory induced by dipole-dipole interaction in monolayer disks

    NASA Astrophysics Data System (ADS)

    Liu, Zhaosen; Ciftja, Orion; Zhang, Xichao; Zhou, Yan; Ian, Hou

    2018-05-01

    It is well known that magnetic domains in nanodisks can be used as storage units for computer memory. Using two quantum simulation approaches, we show here that spin vortices on magnetic monolayer nanodisks, which are chirality-free, can be induced by dipole-dipole interaction (DDI) on the disk-plane. When DDI is sufficiently strong, vortical and anti-vortical multi-domain textures can be generated simultaneously. Especially, a spin vortex can be easily created and deleted through either external magnetic or electrical signals, making them ideal to be used in nanomagnetic memory and logical devices. We demonstrate these properties in our simulations.

  14. Millimeter observations of the disk around GW Orionis

    NASA Astrophysics Data System (ADS)

    Fang, M.; Sicilia-Aguilar, A.; Wilner, D.; Wang, Y.; Roccatagliata, V.; Fedele, D.; Wang, J. Z.

    2017-07-01

    The GW Ori system is a pre-main sequence triple system (GW Ori A/B/C) with companions (GW Ori B/C) at 1 AU and 8 AU, respectively, from the primary (GW Ori A). The primary of the system has a mass of 3.9 M⊙, but shows a spectral type of G8. Thus, GW Ori A could be a precursor of a B star, but it is still at an earlier evolutionary stage than Herbig Be stars. GW Ori provides an ideal target for experiments and observations (being a "blown-up" solar system with a very massive sun and at least two upscaled planets). We present the first spatially resolved millimeter interferometric observations of the disk around the triple pre-main sequence system GW Ori, obtained with the Submillimeter Array, both in continuum and in the 12CO J = 2-1, 13CO J = 2-1, and C18O J = 2-1 lines. These new data reveal a huge, massive, and bright disk in the GW Ori system. The dust continuum emission suggests a disk radius of around 400 AU, but the 12CO J = 2-1 emission shows a much more extended disk with a size around 1300 AU. Owing to the spatial resolution (~1''), we cannot detect the gap in the disk that is inferred from spectral energy distribution (SED) modeling. We characterize the dust and gas properties in the disk by comparing the observations with the predictions from the disk models with various parameters calculated with a Monte Carlo radiative transfer code RADMC-3D. The disk mass is around 0.12 M⊙, and the disk inclination with respect to the line of sight is around 35°. The kinematics in the disk traced by the CO line emission strongly suggest that the circumstellar material in the disk is in Keplerian rotation around GW Ori. Tentatively, substantial C18O depletion in the gas phase is required to explain the characteristics of the line emission from the disk.

  15. Floppy disk utility user's guide

    NASA Technical Reports Server (NTRS)

    Akers, J. W.

    1980-01-01

    A floppy disk utility program is described which transfers programs between files on a hard disk and floppy disk. It also copies the data on one floppy disk onto another floppy disk and compares the data. The program operates on the Data General NOVA-4X under the Real Time Disk Operating System. Sample operations are given.

  16. Sharp Eccentric Rings in Planetless Hydrodynamical Models of Debris Disks

    NASA Technical Reports Server (NTRS)

    Lyra, W.; Kuchner, M. J.

    2013-01-01

    Exoplanets are often associated with disks of dust and debris, analogs of the Kuiper Belt in our solar system. These "debris disks" show a variety of non-trivial structures attributed to planetary perturbations and utilized to constrain the properties of the planets. However, analyses of these systems have largely ignored the fact that, increasingly, debris disks are found to contain small quantities of gas, a component all debris disks should contain at some level. Several debris disks have been measured with a dust-to-gas ratio around unity where the effect of hydrodynamics on the structure of the disk cannot be ignored. Here we report that dust-gas interactions can produce some of the key patterns seen in debris disks that were previously attributed to planets. Through linear and nonlinear modeling of the hydrodynamical problem, we find that a robust clumping instability exists in this configuration, organizing the dust into narrow, eccentric rings, similar to the Fomalhaut debris disk. The hypothesis that these disks might contain planets, though thrilling, is not necessarily required to explain these systems.

  17. You’re Cut Off: HD and MHD Simulations of Truncated Accretion Disks

    NASA Astrophysics Data System (ADS)

    Hogg, J. Drew; Reynolds, Christopher S.

    2017-01-01

    Truncated accretion disks are commonly invoked to explain the spectro-temporal variability from accreting black holes in both small systems, i.e. state transitions in galactic black hole binaries (GBHBs), and large systems, i.e. low-luminosity active galactic nuclei (LLAGNs). In the canonical truncated disk model of moderately low accretion rate systems, gas in the inner region of the accretion disk occupies a hot, radiatively inefficient phase, which leads to a geometrically thick disk, while the gas in the outer region occupies a cooler, radiatively efficient phase that resides in the standard geometrically thin disk. Observationally, there is strong empirical evidence to support this phenomenological model, but a detailed understanding of the disk behavior is lacking. We present well-resolved hydrodynamic (HD) and magnetohydrodynamic (MHD) numerical models that use a toy cooling prescription to produce the first sustained truncated accretion disks. Using these simulations, we study the dynamics, angular momentum transport, and energetics of a truncated disk in the two different regimes. We compare the behaviors of the HD and MHD disks and emphasize the need to incorporate a full MHD treatment in any discussion of truncated accretion disk evolution.

  18. ALMA Observations of a Misaligned Binary Protoplanetary Disk System in Orion

    NASA Astrophysics Data System (ADS)

    Williams, Jonathan P.; Mann, Rita K.; Di Francesco, James; Andrews, Sean M.; Hughes, A. Meredith; Ricci, Luca; Bally, John; Johnstone, Doug; Matthews, Brenda

    2014-12-01

    We present Atacama Large Millimeter/Submillimeter Array (ALMA) observations of a wide binary system in Orion, with projected separation 440 AU, in which we detect submillimeter emission from the protoplanetary disks around each star. Both disks appear moderately massive and have strong line emission in CO 3-2, HCO+ 4-3, and HCN 3-2. In addition, CS 7-6 is detected in one disk. The line-to-continuum ratios are similar for the two disks in each of the lines. From the resolved velocity gradients across each disk, we constrain the masses of the central stars, and show consistency with optical-infrared spectroscopy, both indicative of a high mass ratio ~9. The small difference between the systemic velocities indicates that the binary orbital plane is close to face-on. The angle between the projected disk rotation axes is very high, ~72°, showing that the system did not form from a single massive disk or a rigidly rotating cloud core. This finding, which adds to related evidence from disk geometries in other systems, protostellar outflows, stellar rotation, and similar recent ALMA results, demonstrates that turbulence or dynamical interactions act on small scales well below that of molecular cores during the early stages of star formation.

  19. Probing for Exoplanets Hiding in Dusty Debris Disks: Disk Imaging, Characterization, and Exploration with HST-STIS Multi-roll Coronagraphy

    NASA Technical Reports Server (NTRS)

    Schneider, Glenn; Grady, Carol A.; Hines, Dean C.; Stark, Christopher C.; Debes, John; Carson, Joe; Kuchner, Marc J.; Perrin, Marshall; Weinberger, Alycia; Wisniewski, John P.

    2014-01-01

    Spatially resolved scattered-light images of circumstellar debris in exoplanetary systems constrain the physical properties and orbits of the dust particles in these systems. They also inform on co-orbiting (but unseen) planets, the systemic architectures, and forces perturbing the starlight-scattering circumstellar material. Using HST/STIS broadband optical coronagraphy, we have completed the observational phase of a program to study the spatial distribution of dust in a sample of ten circumstellar debris systems, and one "mature" protoplanetary disk all with HST pedigree, using PSF-subtracted multi-roll coronagraphy. These observations probe stellocentric distances greater than or equal to 5 AU for the nearest systems, and simultaneously resolve disk substructures well beyond those corresponding to the giant planet and Kuiper belt regions within our own Solar System. They also disclose diffuse very low-surface brightness dust at larger stellocentric distances. Herein we present new results inclusive of fainter disks such as HD92945 (F_disk/F_star = 5 × 10^-5), confirming, and better revealing, the existence of a narrow inner debris ring within a larger diffuse dust disk. Other disks with ring-like sub-structures and significant asymmetries and complex morphologies include: HD181327 for which we posit a spray of ejecta from a recent massive collision in an exo-Kuiper belt; HD61005 suggested to be interacting with the local ISM; HD15115 and HD32297, discussed also in the context of putative environmental interactions. These disks, and HD15745, suggest that debris system evolution cannot be treated in isolation. For AU Mic's edge-on disk we find out-of-plane surface brightness asymmetries at greater than or equal to 5 AU that may implicate the existence of one or more planetary perturbers. Time resolved images of the MP Mus proto-planetary disk provide spatially resolved temporal variability in the disk illumination. These and other new images from our HST/STIS GO/12228 program enable direct inter-comparison of the architectures of these exoplanetary debris systems in the context of our own Solar System.

  20. Probing for exoplanets hiding in dusty debris disks: Disk imaging, characterization, and exploration with HST/STIS multi-roll coronagraphy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Glenn; Hinz, Phillip M.; Grady, Carol A.

    Spatially resolved scattered-light images of circumstellar debris in exoplanetary systems constrain the physical properties and orbits of the dust particles in these systems. They also inform on co-orbiting (but unseen) planets, the systemic architectures, and forces perturbing the starlight-scattering circumstellar material. Using Hubble Space Telescope (HST)/Space Telescope Imaging Spectrograph (STIS) broadband optical coronagraphy, we have completed the observational phase of a program to study the spatial distribution of dust in a sample of 10 circumstellar debris systems and 1 'mature' protoplanetary disk, all with HST pedigree, using point-spread-function-subtracted multi-roll coronagraphy. These observations probe stellocentric distances ≥5 AU for the nearest systems, and simultaneously resolve disk substructures well beyond those corresponding to the giant planet and Kuiper Belt regions within our own solar system. They also disclose diffuse very low-surface-brightness dust at larger stellocentric distances. Herein we present new results inclusive of fainter disks such as HD 92945 (F_disk/F_star = 5 × 10^-5), confirming, and better revealing, the existence of a narrow inner debris ring within a larger diffuse dust disk. Other disks with ring-like substructures and significant asymmetries and complex morphologies include HD 181327, for which we posit a spray of ejecta from a recent massive collision in an exo-Kuiper Belt; HD 61005, suggested to be interacting with the local interstellar medium; and HD 15115 and HD 32297, also discussed in the context of putative environmental interactions. These disks and HD 15745 suggest that debris system evolution cannot be treated in isolation. For AU Mic's edge-on disk, we find out-of-plane surface brightness asymmetries at ≥5 AU that may implicate the existence of one or more planetary perturbers. Time-resolved images of the MP Mus protoplanetary disk provide spatially resolved temporal variability in the disk illumination. These and other new images from our HST/STIS GO/12228 program enable direct inter-comparison of the architectures of these exoplanetary debris systems in the context of our own solar system.

  1. Horizontally scaling dCache SRM with the Terracotta platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Crawford, M.; Moibenko, A.

    2011-01-01

    The dCache disk caching file system has been chosen by a majority of LHC experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform, we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to face the ever-increasing requirements of LHC data handling. In this paper we will describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert the single-node service into a highly scalable clustered application.
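
    As a generic illustration of the clustering idea (this is not dCache or Terracotta code): requests arriving at a front end can be spread across several back-end SRM nodes, so the aggregate transaction rate is no longer bounded by one host. A toy round-robin dispatcher, with hypothetical node names:

      # Generic round-robin dispatcher sketch illustrating horizontal scaling of a
      # request-handling service across several nodes. Real deployments would sit
      # behind a network load balancer and share state across the cluster.
      import itertools

      class RoundRobinCluster:
          def __init__(self, nodes):
              self._cycle = itertools.cycle(nodes)

          def dispatch(self, request):
              node = next(self._cycle)           # pick the next node in rotation
              return f"{node} handled {request}"

      cluster = RoundRobinCluster(["srm-node-1", "srm-node-2", "srm-node-3"])
      for i in range(6):
          print(cluster.dispatch(f"request #{i}"))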

  2. Optoelectronic associative recall using motionless-head parallel readout optical disk

    NASA Astrophysics Data System (ADS)

    Marchand, P. J.; Krishnamoorthy, A. V.; Ambs, P.; Esener, S. C.

    1990-12-01

    High data rates, low retrieval times, and simple implementation are presently shown to be obtainable by means of a motionless-head 2D parallel-readout system for optical disks. Since the optical disk obviates mechanical head motions for access, focusing, and tracking, addressing is performed exclusively through the disk's rotation. Attention is given to a high-performance associative memory system configuration which employs a parallel readout disk.

  3. Planet Formation in Binary Star Systems

    NASA Astrophysics Data System (ADS)

    Martin, Rebecca

    About half of observed exoplanets are estimated to be in binary systems. Understanding planet formation and evolution in binaries is therefore essential for explaining observed exoplanet properties. Recently, we discovered that a highly misaligned circumstellar disk in a binary system can undergo global Kozai-Lidov (KL) oscillations of the disk inclination and eccentricity. These oscillations likely have a significant impact on the formation and orbital evolution of planets in binary star systems. Planet formation by core accretion cannot operate during KL oscillations of the disk. First, we propose to consider the process of disk mass transfer between the binary members. Secondly, we will investigate the possibility of planet formation by disk fragmentation. Disk self gravity can weaken or suppress the oscillations during the early disk evolution when the disk mass is relatively high for a narrow range of parameters. Thirdly, we will investigate the evolution of a planet whose orbit is initially aligned with respect to the disk, but misaligned with respect to the orbit of the binary. We will study how these processes relate to observations of star-spin and planet orbit misalignment and to observations of planets that appear to be undergoing KL oscillations. Finally, we will analyze the evolution of misaligned multi-planet systems. This theoretical work will involve a combination of analytic and numerical techniques. The aim of this research is to shed some light on the formation of planets in binary star systems and to contribute to NASA's goal of understanding of the origins of exoplanetary systems.
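
    For orientation, the quadrupole-order Kozai-Lidov oscillation timescale commonly quoted for an orbit (or disk annulus) of period P around one component of a binary with period P_b, eccentricity e_b, and companion mass M_2 scales roughly as follows (order-unity prefactors omitted; this is the standard literature scaling, not a result taken from the proposal):

      \[
        \tau_{\rm KL} \;\sim\; \frac{P_b^{2}}{P}\,
        \frac{M_1+M_2}{M_2}\,\bigl(1-e_b^{2}\bigr)^{3/2} .
      \]

    Because the timescale grows with the binary period and shrinks with the local orbital period, the inner disk responds fastest; whether the disk oscillates globally then depends on how efficiently it communicates warps, which is part of what the proposed simulations address.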

  4. A Triple Protostar System in L1448 IRS3B Formed via Fragmentation of a Gravitationally Unstable Disk

    NASA Astrophysics Data System (ADS)

    Tobin, John J.; Kratter, Kaitlin M.; Persson, Magnus; Looney, Leslie; Dunham, Michael; Segura-Cox, Dominique; Li, Zhi-Yun; Chandler, Claire J.; Sadavoy, Sarah; Harris, Robert J.; Melis, Carl; Perez, Laura M.

    2017-01-01

    Binary and multiple star systems are a frequent outcome of the star formation process; most stars form as part of a binary/multiple protostar system. A possible pathway to the formation of close (< 500 AU) binary/multiple star systems is fragmentation of a massive protostellar disk due to gravitational instability. We observed the triple protostar system L1448 IRS3B with ALMA at 1.3 mm in dust continuum and molecular lines to determine if this triple protostar system, where all companions are separated by < 200 AU, is likely to have formed via disk fragmentation. From the dust continuum emission, we find a massive, 0.39 solar mass disk surrounding the three protostars with spiral structure. The disk is centered on two protostars that are separated by 61 AU and the third protostar is located in the outer disk at 183 AU. The tertiary companion is coincident with a spiral arm, and it is the brightest source of emission in the disk, surrounded by ~0.09 solar masses of disk material. Molecular line observations from 13CO and C18O confirm that the kinematic center of mass is coincident with the two central protostars and that the disk is consistent with being in Keplerian rotation; the combined mass of the two close protostars is ~1 solar mass. We demonstrate that the disk around L1448 IRS3B remains marginally unstable at radii between 150 AU and 320 AU, overlapping with the location of the tertiary protostar. This is consistent with models for a protostellar disk that has recently undergone gravitational instability, spawning the companion stars.

  5. Aerodynamic and torque characteristics of enclosed Co/counter rotating disks

    NASA Astrophysics Data System (ADS)

    Daniels, W. A.; Johnson, B. V.; Graber, D. J.

    1989-06-01

    Experiments were conducted to determine the aerodynamic and torque characteristics of adjacent rotating disks enclosed in a shroud, in order to obtain an extended data base for advanced turbine designs such as the counterrotating turbine. Torque measurements were obtained on both disks in the rotating frame of reference for corotating, counterrotating and one-rotating/one-static disk conditions. The disk models used in the experiments included disks with typical smooth turbine geometry, disks with bolts, disks with bolts and partial bolt covers, and flat disks. A windage diaphragm was installed at mid-cavity for some experiments. The experiments were conducted with various amounts of coolant throughflow injected into the disk cavity from the disk hub or from the disk OD with swirl. The experiments were conducted at disk tangential Reynolds numbers up to 1.6 × 10^7 with air as the working fluid. The results of this investigation indicated that the static shroud contributes a significant amount to the total friction within the disk system; the torque on counterrotating disks is essentially independent of coolant flow total rate, flow direction, and tangential Reynolds number over the range of conditions tested; and a static windage diaphragm reduces disk friction in counterrotating disk systems.

  6. Transitional Disks Associated with Intermediate-Mass Stars: Results of the SEEDS YSO Survey

    NASA Technical Reports Server (NTRS)

    Grady, C.; Fukagawa, M.; Maruta, Y.; Ohta, Y.; Wisniewski, J.; Hashimoto, J.; Okamoto, Y.; Momose, M.; Currie, T.; McElwain, M.

    2014-01-01

    Protoplanetary disks are where planets form, grow, and migrate to produce the diversity of exoplanet systems we observe in mature systems. Disks where this process has advanced to the stage of gap opening, and in some cases central cavity formation, have been termed pre-transitional and transitional disks in the hope that they represent intermediate steps toward planetary system formation. Recent reviews have focussed on disks where the star is of solar or sub-solar mass. In contrast to the sub-millimeter where cleared central cavities predominate, at H-band some T Tauri star transitional disks resemble primordial disks in having no indication of clearing, some show a break in the radial surface brightness profile at the inner edge of the outer disk, while others have partially to fully cleared gaps or central cavities. Recently, the Meeus Group I Herbig stars, intermediate-mass PMS stars with IR spectral energy distributions often interpreted as flared disks, have been proposed to have transitional and pre-transitional disks similar to those associated with solar-mass PMS stars, based on thermal-IR imaging, and sub-millimeter interferometry. We have investigated their appearance in scattered light as part of the Strategic Exploration of Exoplanets and Disks with Subaru (SEEDS), obtaining H-band polarimetric imagery of 10 intermediate-mass stars with Meeus Group I disks. Augmented by other disks with imagery in the literature, the sample is now sufficiently large to explore how these disks are similar to and differ from T Tauri star disks. The disk morphologies seen in the T Tauri disks are also found for the intermediate-mass star disks, but additional phenomena are found, including spiral arms in remnant envelopes, arms in the disk, asymmetrically and potentially variably shadowed outer disks, gaps, and one disk where only half of the disk is seen in scattered light at H; a hallmark of these disks is remarkable individuality and diversity which does not simply correlate with disk mass or stellar properties, including age. We will discuss our survey results in terms of spiral arm theory, dust trapping vortices, and systematic differences in the relative scale height of these disks compared to those around Solar-mass stars. For the disks with spiral arms we discuss the planet-hosting potential, and limits on where giant planets can be located. We also discuss the implications for imaging with extreme adaptive optics instruments. Grady is supported under NSF AST 1008440 and through the NASA Origins of Solar Systems program on NNG13PB64P. JPW is supported under NSF AST 100314. In summary: 0) in marked contrast to protoplanetary disks, transitional disks exhibit a wide range of structural features; 1) arm visibility is correlated with the relative scale height of the disk; 2) asymmetric and possibly variable shadowing of the outer portions of some transitional disks is observed; 3) the pre-transitional disk nature of Oph IRS 48, MWC 758, HD 169142, etc. is confirmed.

  7. Focus on the post-DVD formats

    NASA Astrophysics Data System (ADS)

    He, Hong; Wei, Jingsong

    2005-09-01

    As digital TV (DTV) technologies develop rapidly across standards, desktop hardware, software models, and the interfaces between DTV and the home network, worldwide broadcasting of High Definition TV (HDTV) programs is scheduled. Enjoying high-quality TV programs at home is no longer a far-off dream. As for the main recording media, which optical storage technology will become the mainstream for meeting HDTV requirements is a major concern. At present, a few post-DVD formats are competing on technology, standards, and market share. Here we review the co-existing post-DVD formats and discuss the basic parameters of the optical disks, the video/audio coding strategies, and the system performance for HDTV programs.
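
    The headline capacity numbers of the competing post-DVD formats follow, to first order, from the diffraction-limited spot size: areal density scales roughly as (NA/λ)². A back-of-the-envelope check with nominal single-layer DVD and Blu-ray parameters (the factor ignores modulation/ECC overhead and track geometry, so it is only an estimate):

      # Rough diffraction-limit scaling of single-layer optical disk capacity.
      dvd = {"wavelength_nm": 650, "NA": 0.60, "capacity_GB": 4.7}
      bd  = {"wavelength_nm": 405, "NA": 0.85}

      scale = (bd["NA"] / bd["wavelength_nm"]) ** 2 / (dvd["NA"] / dvd["wavelength_nm"]) ** 2
      print(f"density scaling factor ~ {scale:.2f}")                          # ~5.2x
      print(f"estimated BD capacity  ~ {dvd['capacity_GB'] * scale:.1f} GB")  # ~24 GB, close to the nominal 25 GB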

  8. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    PubMed Central

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting the results can be a major impediment to postulating suitable hypotheses; thus an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency, and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amounts of data derived from RNAseq analysis, along with methods of interacting with the database, either through the web interface or via command-line data management workflows, written in Perl, with functionality that simplifies the storage and manipulation of the massive amounts of data generated from RNAseq analysis. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
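
    The hybrid relational/NoSQL split described above can be illustrated with a toy pattern (this is not TransAtlasDB's actual schema; table, field, and key names are hypothetical): keep small, query-heavy metadata in a relational store and push the bulky per-sample expression vectors into a key-value store keyed by sample ID.

      # Toy hybrid-storage pattern: relational metadata + key-value bulk data.
      import sqlite3, shelve

      meta = sqlite3.connect(":memory:")
      meta.execute("CREATE TABLE samples (sample_id TEXT PRIMARY KEY, tissue TEXT, line TEXT)")
      meta.execute("INSERT INTO samples VALUES ('S1', 'breast_muscle', 'broiler')")
      meta.commit()

      with shelve.open("/tmp/expression_store") as kv:       # stands in for a NoSQL document store
          kv["S1"] = {"GAPDH": 1523.4, "ACTB": 2210.9}        # per-sample expression vector

      # Query path: relational filter first, then fetch the bulky vector by key.
      (sample_id,) = meta.execute(
          "SELECT sample_id FROM samples WHERE tissue = 'breast_muscle'").fetchone()
      with shelve.open("/tmp/expression_store") as kv:
          print(sample_id, kv[sample_id])

    The relational side stays small enough to index and join cheaply, while the key-value side grows with the expression matrices without burdening the metadata queries.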

  9. Elucidation of band structure of charge storage in conducting polymers using a redox reaction.

    PubMed

    Contractor, Asfiya Q; Juvekar, Vinay A

    2014-07-01

    A novel technique to investigate charge storage characteristics of intrinsically conducting polymer films has been developed. A redox reaction is conducted on a polymer film on a rotating disk electrode under potentiostatic condition so that the rate of charging of the film equals the rate of removal of the charge by the reaction. The voltammogram obtained from the experiment on polyaniline film using Fe(2+)/Fe(3+) in HCl as the redox system shows five distinct linear segments (bands) with discontinuity in the slope at specific transition potentials. These bands are the same as those indicated by electron spin resonance (ESR)/Raman spectroscopy with comparable transition potentials. From the dependence of the slopes of the bands on concentration of ferrous and ferric ions, it was possible to estimate the energies of the charge carriers in different bands. The film behaves as a redox capacitor and does not offer resistance to charge transfer and electronic conduction.

  10. The Science of Computing: Virtual Memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1986-01-01

    In the March-April issue, I described how a computer's storage system is organized as a hierarchy consisting of cache, main memory, and secondary memory (e.g., disk). The cache and main memory form a subsystem that functions like main memory but attains speeds approaching cache. What happens if a program and its data are too large for the main memory? This is not a frivolous question. Every generation of computer users has been frustrated by insufficient memory. A new line of computers may have sufficient storage for the computations of its predecessor, but new programs will soon exhaust its capacity. In 1960, a long-range planning committee at MIT dared to dream of a computer with 1 million words of main memory. In 1985, the Cray-2 was delivered with 256 million words. Computational physicists dream of computers with 1 billion words. Computer architects have done an outstanding job of enlarging main memories, yet they have never kept up with demand. Only the shortsighted believe they can.
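
    The mechanism that lets a program exceed physical memory is demand paging with a replacement policy; a minimal least-recently-used (LRU) sketch, with a made-up frame count and page reference string, conveys the idea:

      # Minimal demand-paging simulation with an LRU replacement policy.
      from collections import OrderedDict

      def simulate_lru(reference_string, n_frames):
          frames = OrderedDict()        # page -> None, ordered by recency of use
          faults = 0
          for page in reference_string:
              if page in frames:
                  frames.move_to_end(page)          # hit: mark as most recently used
              else:
                  faults += 1                       # miss: fault the page in from disk
                  if len(frames) >= n_frames:
                      frames.popitem(last=False)    # evict the least recently used page
                  frames[page] = None
          return faults

      refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # illustrative reference string
      for frames in (3, 4):
          print(f"{frames} frames -> {simulate_lru(refs, frames)} page faults")

    Adding physical frames reduces faults for programs with good locality, which is the property virtual memory relies on to make a small main memory behave like a much larger one.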

  11. Design and implementation of a channel decoder with LDPC code

    NASA Astrophysics Data System (ADS)

    Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan

    2008-12-01

    Because Toshiba quit the competition, there is only one blue-laser disc standard, Blu-ray Disc (BD), which satisfies the demands of high-density video programs. However, almost all of the relevant patents are held by large companies such as Sony and Philips, so substantial patent fees must be paid whenever products use BD. Next-Generation Versatile Disc (NVD), a domestically developed high-density optical disk storage system, proposes a new data format and error correction code with independent intellectual property rights and high cost performance; it offers higher coding efficiency than DVD and a 12 GB capacity that can meet the demands of playing high-density video programs. In this paper, we develop a Low-Density Parity-Check (LDPC) channel encoding process and an application scheme, using a Q-matrix-based LDPC encoding, for NVD's channel decoder. Combined with the portability of an embedded SOPC system, all of the decoding modules have been implemented on an FPGA and tested in the NVD experimental environment. Although there are interactions between LDPC and the Run-Length-Limited (RLL) modulation codes frequently used in optical storage systems, the system provides a suitable solution. At the same time, it overcomes the instability and inextensibility of NVD's former decoding system, which was implemented in hardware.
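
    For readers unfamiliar with LDPC decoding, a toy hard-decision bit-flipping decoder on a small parity-check matrix conveys the idea (the matrix and codeword below are illustrative, not the NVD code, which would use a much larger Q-matrix-structured H and soft-decision decoding):

      # Toy hard-decision bit-flipping LDPC decoder: flip the bit involved in the
      # most unsatisfied parity checks until all checks pass (or we give up).
      import numpy as np

      H = np.array([[1, 1, 0, 1, 0, 0],        # small illustrative parity-check matrix
                    [0, 1, 1, 0, 1, 0],
                    [1, 0, 0, 0, 1, 1],
                    [0, 0, 1, 1, 0, 1]])

      def bit_flip_decode(received, H, max_iters=20):
          r = received.copy()
          for _ in range(max_iters):
              syndrome = H.dot(r) % 2            # which parity checks are violated
              if not syndrome.any():
                  return r                       # all checks satisfied: valid codeword
              unsat = syndrome.dot(H)            # per-bit count of failed checks
              r[np.argmax(unsat)] ^= 1           # flip the worst offender
          return r                               # give up after max_iters

      codeword = np.array([1, 1, 0, 0, 1, 0])    # satisfies H (illustrative)
      assert not (H.dot(codeword) % 2).any()
      received = codeword.copy()
      received[1] ^= 1                           # single bit error on the channel
      print("decoded:", bit_flip_decode(received, H), "sent:", codeword)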

  12. A Compute Capable SSD Architecture for Next-Generation Non-volatile Memories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De, Arup

    2014-01-01

    Existing storage technologies (e.g., disks and flash) are failing to cope with processor and main memory speeds and are limiting the overall performance of many large-scale I/O or data-intensive applications. Emerging fast byte-addressable non-volatile memory (NVM) technologies, such as phase-change memory (PCM), spin-transfer torque memory (STTM), and memristor, are very promising and are approaching DRAM-like performance with lower power consumption and higher density as process technology scales. These new memories are narrowing the performance gap between storage and main memory and are posing challenging problems for existing SSD architectures, I/O interfaces (e.g., SATA, PCIe), and software. This dissertation addresses those challenges and presents a novel SSD architecture called XSSD. XSSD offloads computation into storage to exploit fast NVMs and reduce redundant data traffic across the I/O bus. XSSD offers a flexible RPC-based programming framework that developers can use for application development on the SSD without dealing with the complications of the underlying architecture and communication management. We have built a prototype of XSSD on the BEE3 FPGA prototyping system. We implement various data-intensive applications and achieve speedups of 1.5-8.9x and energy efficiency improvements of 1.7-10.27x, respectively. This dissertation also compares XSSD with previous work on intelligent storage and intelligent memory. The existing ecosystem and these new enabling technologies make this system more viable than earlier ones.
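
    The in-storage-compute idea can be illustrated generically (this is not the XSSD RPC API; the class and method names below are hypothetical): instead of shipping every record over the I/O bus, the host ships a small filter down to the storage layer and gets back only the matching records.

      # Generic near-data-processing sketch: push a filter to the "storage" side so
      # only matching records cross the (simulated) I/O bus.
      class ToySmartSSD:
          def __init__(self, records):
              self._records = records            # data resident on the device

          def scan(self):                        # conventional path: ship everything to the host
              return list(self._records)

          def offload_filter(self, predicate):   # "compute-capable" path: filter on the device
              return [r for r in self._records if predicate(r)]

      ssd = ToySmartSSD([{"id": i, "temp": 20 + i % 50} for i in range(100_000)])

      host_side = [r for r in ssd.scan() if r["temp"] > 65]       # moves 100k records over the bus
      device_side = ssd.offload_filter(lambda r: r["temp"] > 65)  # moves only the matches
      assert host_side == device_side
      print(f"records crossing the bus: {len(ssd.scan())} vs {len(device_side)}")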

  13. Development of a set of equations for incorporating disk flexibility effects in rotordynamical analyses

    NASA Technical Reports Server (NTRS)

    Flowers, George T.; Ryan, Stephen G.

    1991-01-01

    Rotordynamical equations that account for disk flexibility are developed. These equations employ free-free rotor modes to model the rotor system. Only transverse vibrations of the disks are considered, with the shaft/disk system considered to be torsionally rigid. Second order elastic foreshortening effects that couple with the rotor speed to produce first order terms in the equations of motion are included. The approach developed in this study is readily adaptable for usage in many of the codes that are currently used in rotordynamical simulations. The equations are similar to those used in standard rigid disk analyses but with additional terms that include the effects of disk flexibility. An example case is presented to demonstrate the use of the equations and to show the influence of disk flexibility on the rotordynamical behavior of a sample system.

  14. Debris disks as signposts of terrestrial planet formation. II. Dependence of exoplanet architectures on giant planet and disk properties

    NASA Astrophysics Data System (ADS)

    Raymond, S. N.; Armitage, P. J.; Moro-Martín, A.; Booth, M.; Wyatt, M. C.; Armstrong, J. C.; Mandell, A. M.; Selsis, F.; West, A. A.

    2012-05-01

    We present models for the formation of terrestrial planets, and the collisional evolution of debris disks, in planetary systems that contain multiple marginally unstable gas giants. We previously showed that in such systems, the dynamics of the giant planets introduces a correlation between the presence of terrestrial planets and cold dust, i.e., debris disks, which is particularly pronounced at λ ~ 70 μm. Here we present new simulations that show that this connection is qualitatively robust to a range of parameters: the mass distribution of the giant planets, the width and mass distribution of the outer planetesimal disk, and the presence of gas in the disk when the giant planets become unstable. We discuss how variations in these parameters affect the evolution. We find that systems with equal-mass giant planets undergo the most violent instabilities, and that these destroy both terrestrial planets and the outer planetesimal disks that produce debris disks. In contrast, systems with low-mass giant planets efficiently produce both terrestrial planets and debris disks. A large fraction of systems with low-mass (M ≲ 30 M⊕) outermost giant planets have final planetary separations that, scaled to the planets' masses, are as large or larger than the Saturn-Uranus and Uranus-Neptune separations in the solar system. We find that the gaps between these planets are not only dynamically stable to test particles, but are frequently populated by planetesimals. The possibility of planetesimal belts between outer giant planets should be taken into account when interpreting debris disk SEDs. In addition, the presence of ~ Earth-mass "seeds" in outer planetesimal disks causes the disks to radially spread to colder temperatures, and leads to a slow depletion of the outer planetesimal disk from the inside out. We argue that this may explain the very low frequency of >1 Gyr-old solar-type stars with observed 24 μm excesses. Our simulations do not sample the full range of plausible initial conditions for planetary systems. However, among the configurations explored, the best candidates for hosting terrestrial planets at ~1 AU are stars older than 0.1-1 Gyr with bright debris disks at 70 μm but with no currently-known giant planets. These systems combine evidence for the presence of ample rocky building blocks, with giant planet properties that are least likely to undergo destructive dynamical evolution. Thus, we predict two correlations that should be detected by upcoming surveys: an anti-correlation between debris disks and eccentric giant planets and a positive correlation between debris disks and terrestrial planets. Three movies associated to Figs. 1, 3, and 7 are available in electronic form at http://www.aanda.org

  15. Modifying the Standard Disk Model for the Ultraviolet Spectral Analysis of Disk-dominated Cataclysmic Variables. I. The Novalikes MV Lyrae, BZ Camelopardalis, and V592 Cassiopeiae

    NASA Astrophysics Data System (ADS)

    Godon, Patrick; Sion, Edward M.; Balman, Şölen; Blair, William P.

    2017-09-01

    The standard disk is often inadequate to model disk-dominated cataclysmic variables (CVs) and generates a spectrum that is bluer than the observed UV spectra. X-ray observations of these systems reveal an optically thin boundary layer (BL) expected to appear as an inner hole in the disk. Consequently, we truncate the inner disk. However, instead of removing the inner disk, we impose the no-shear boundary condition at the truncation radius, thereby lowering the disk temperature and generating a spectrum that better fits the UV data. With our modified disk, we analyze the archival UV spectra of three novalikes that cannot be fitted with standard disks. For the VY Scl systems MV Lyr and BZ Cam, we fit a hot inflated white dwarf (WD) with a cold modified disk (Ṁ ~ a few × 10⁻⁹ M⊙ yr⁻¹). For V592 Cas, the slightly modified disk (Ṁ ~ 6 × 10⁻⁹ M⊙ yr⁻¹) completely dominates the UV. These results are consistent with Swift X-ray observations of these systems, revealing BLs merged with ADAF-like flows and/or hot coronae, where the advection of energy is likely launching an outflow and heating the WD, thereby explaining the high WD temperature in VY Scl systems. This is further supported by the fact that the X-ray hardness ratio increases with the shallowness of the UV slope in a small CV sample we examine. Furthermore, for 105 disk-dominated systems, the UV slope of the International Ultraviolet Explorer spectra decreases in the same order as the ratio of the X-ray flux to the optical/UV flux: from SU UMa's, to U Gem's, Z Cam's, UX UMa's, and VY Scl's.
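    As a hedged illustration of the temperature modification described above (the abstract does not give the authors' exact prescription), the familiar steady-disk effective temperature profile with a zero-torque, i.e. no-shear, inner boundary condition can be written as

        T_{\mathrm{eff}}^{4}(r) = \frac{3 G M_{\mathrm{WD}} \dot{M}}{8 \pi \sigma r^{3}} \left[ 1 - \left( \frac{R_{\mathrm{in}}}{r} \right)^{1/2} \right].

    A standard disk sets R_in = R_WD; imposing the no-shear condition at a truncation radius instead, R_in = R_trunc > R_WD, suppresses T_eff over the hottest inner annuli and therefore reddens the predicted UV spectrum relative to the standard model, which is the qualitative effect the record describes.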

  16. A near-infrared imaging survey of interacting galaxies - The disk-disk merger candidates subset

    NASA Technical Reports Server (NTRS)

    Stanford, S. A.; Bushouse, H. A.

    1991-01-01

    Near-infrared imaging obtained for systems believed to be advanced disk-disk mergers is presented and discussed. These systems were chosen from a sample of approximately 170 objects from the Arp Atlas of Peculiar Galaxies which have been imaged in the JHK bands as part of an investigation into the stellar component of interacting galaxies. Of the eight remnants which show optical signs of a disk-disk merger, the near-infrared surface brightness profiles are well fitted by an r^(1/4) law over all measured radii in four systems, and out to radii of about 3 kpc in three systems. These K band profiles indicate that most of the remnants in the sample either have finished or are in the process of relaxing into a mass distribution like that of normal elliptical galaxies.

  17. The Evolution of a Planet-Forming Disk Artist Concept Animation

    NASA Image and Video Library

    2004-12-09

    This frame from an animation shows the evolution of a planet-forming disk around a star. Initially, the young disk is bright and thick with dust, providing raw materials for building planets. In the first 10 million years or so, gaps appear within the disk as newborn planets coalesce out of the dust, clearing out a path. In time, this planetary "debris disk" thins out as gravitational interactions with numerous planets slowly sweep away the dust. Steady pressure from the starlight and solar winds also blows out the dust. After a few billion years, only a thin ring remains in the outermost reaches of the system, a faint echo of the once-brilliant disk. Our own solar system has a similar debris disk -- a ring of comets called the Kuiper Belt. Leftover dust in the inner portion of the solar system is known as "zodiacal dust." Bright, young disks can be imaged directly by visible-light telescopes, such as NASA's Hubble Space Telescope. Older, fainter debris disks can be detected only by infrared telescopes like NASA's Spitzer Space Telescope, which sense the disks' dim heat. http://photojournal.jpl.nasa.gov/catalog/PIA07099

  18. Deciphering Debris Disk Structure with the Submillimeter Array

    NASA Astrophysics Data System (ADS)

    MacGregor, Meredith Ann

    2018-01-01

    More than 20% of nearby main sequence stars are surrounded by dusty disks continually replenished via the collisional erosion of planetesimals, larger bodies similar to asteroids and comets in our own Solar System. The material in these ‘debris disks’ is directly linked to the larger bodies such as planets in the system. As a result, the locations, morphologies, and physical properties of dust in these disks provide important probes of the processes of planet formation and subsequent dynamical evolution. Observations at millimeter wavelengths are especially critical to our understanding of these systems, since they are dominated by larger grains that do not travel far from their origin and therefore reliably trace the underlying planetesimal distribution. The Submillimeter Array (SMA) plays a key role in advancing our understanding of debris disks by providing sensitivity at the short baselines required to determine the structure of wide-field disks, such as the HR 8799 debris disk. Many of these wide-field disks are among the closest systems to us, and will serve as cornerstone templates for the interpretation of more distant, less accessible systems.

  19. Solar heat collection with suspended metal roofing and whole house ventilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maynard, T.

    1996-10-01

    A south pitched roof is employed for solar collection directly onto a roofing with chocolate brown color. The roofing is structural and is suspended over plywood decking so as to create an air space which receives input from the coolest and lowest basement air of the house interior. Air heated beneath the metal roofing is returned to a basement storage wall. Full length plenum cavities are formed into the ordinary rafter truss framing--at the knee wall and collar tie spaces. Preliminary testing of BTU gain at known air flows is acquired with a microprocessor system continuously collecting input and output temperatures at the roof collector into disk data files.

  20. Screening of redox couples and electrode materials

    NASA Technical Reports Server (NTRS)

    Giner, J.; Swette, L.; Cahill, K.

    1976-01-01

    Electrochemical parameters of selected redox couples that might be potentially promising for application in bulk energy storage systems were investigated. This was carried out in two phases: a broad investigation of the basic characteristics and behavior of various redox couples, followed by a more limited investigation of their electrochemical performance in a redox flow reactor configuration. In the first phase of the program, eight redox couples were evaluated under a variety of conditions in terms of their exchange current densities as measured by the rotating disk electrode procedure. The second phase of the program involved the testing of four couples in a redox reactor under flow conditions with a variety of electrode materials and structures.

  1. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.

  2. Validation of a paper-disk approach to facilitate the sensory evaluation of bitterness in dairy protein hydrolysates from a newly developed food-grade fractionation system.

    PubMed

    Murray, Niamh M; O'Riordan, Dolores; Jacquier, Jean-Christophe; O'Sullivan, Michael; Cohen, Joshua L; Heymann, Hildegarde; Barile, Daniela; Dallas, David C

    2017-06-01

    Casein-hydrolysates (NaCaH) are desirable functional ingredients, but their bitterness impedes usage in foods. This study sought to validate a paper-disk approach to help evaluate bitterness in NaCaHs and to develop a food-grade approach to separate a NaCaH into distinct fractions, which could be evaluated by a sensory panel. Membrane filtration generated <0.2-μm and <3-kDa permeates. Further fractionation of the <3-kDa permeate by flash-chromatography generated four fractions using ethanol (EtOH) concentrations of 5, 10, 30 and 50%. As some fractions were poorly soluble in water, the fractions were resolubilized in EtOH and impregnated into paper-disks for sensory evaluation. Bitterness differences observed in the membrane fractions using this sensory evaluation approach reflected those observed for the same fractions presented as a liquid. The flash-chromatography fractions increased in bitterness with an increase in hydrophobicity, except for the 50% EtOH fraction which had little bitterness. Amino acid analysis of the fractions showed enrichment of different essential amino acids in both the bitter and less bitter fractions. The developed food-grade fractionation system allowed for a simple and reasonably scaled approach to separating a NaCaH into physicochemically different fractions that could be evaluated by a sensory panel. The method of sensory evaluation used in this study, in which NaCaH samples are impregnated into paper-disks, provided potential solutions for issues such as sample insolubility and limited quantities of sample. As the impregnated paper-disk samples were dehydrated, their long storage life could also be suitable for sensory evaluations distributed by mail for large consumer studies. The research in this study allowed for a greater understanding of the physicochemical basis for bitterness in this NaCaH. As some essential amino acids were enriched in the less bitter fractions, selective removal of bitter fractions could allow for the incorporation of the less bitter NaCaH fractions into food products for added nutritional value, without negatively impacting sensory properties. There is potential for this approach to be applied to other food ingredients with undesirable tastes, such as polyphenols.

  3. Validation of a paper-disk approach to facilitate the sensory evaluation of bitterness in dairy protein hydrolysates from a newly developed food-grade fractionation system

    PubMed Central

    Murray, Niamh M.; O'Riordan, Dolores; Jacquier, Jean-Christophe; O'Sullivan, Michael; Cohen, Joshua L.; Heymann, Hildegarde; Barile, Daniela; Dallas, David C.

    2017-01-01

    Casein-hydrolysates (NaCaH) are desirable functional ingredients, but their bitterness impedes usage in foods. This study sought to validate a paper-disk approach to help evaluate bitterness in NaCaHs and to develop a food-grade approach to separate a NaCaH into distinct fractions, which could be evaluated by a sensory panel. Membrane filtration generated <0.2-μm and <3-kDa permeates. Further fractionation of the <3-kDa permeate by flash-chromatography generated four fractions using ethanol (EtOH) concentrations of 5, 10, 30 and 50%. As some fractions were poorly soluble in water, the fractions were resolubilized in EtOH and impregnated into paper-disks for sensory evaluation. Bitterness differences observed in the membrane fractions using this sensory evaluation approach reflected those observed for the same fractions presented as a liquid. The flash-chromatography fractions increased in bitterness with an increase in hydrophobicity, except for the 50% EtOH fraction which had little bitterness. Amino acid analysis of the fractions showed enrichment of different essential amino acids in both the bitter and less bitter fractions. Practical Applications The developed food-grade fractionation system allowed for a simple and reasonably scaled approach to separating a NaCaH into physicochemically different fractions that could be evaluated by a sensory panel. The method of sensory evaluation used in this study, in which NaCaH samples are impregnated into paper-disks, provided potential solutions for issues such as sample insolubility and limited quantities of sample. As the impregnated paper-disk samples were dehydrated, their long storage life could also be suitable for sensory evaluations distributed by mail for large consumer studies. The research in this study allowed for a greater understanding of the physicochemical basis for bitterness in this NaCaH. As some essential amino acids were enriched in the less bitter fractions, selective removal of bitter fractions could allow for the incorporation of the less bitter NaCaH fractions into food products for added nutritional value, without negatively impacting sensory properties. There is potential for this approach to be applied to other food ingredients with undesirable tastes, such as polyphenols. PMID:29104365

  4. Comparison of different finishing/polishing systems on surface roughness and gloss of resin composites.

    PubMed

    Antonson, Sibel A; Yazici, A Rüya; Kilinc, Evren; Antonson, Donald E; Hardigan, Patrick C

    2011-07-01

    The aim of this study was to compare four finishing/polishing systems (F/P) on surface roughness and gloss of different resin composites. A total of 40 disc samples (15 mm × 3 mm) were prepared from a nanofill - Filtek Supreme Plus (FS) and a micro-hybrid resin composite - Esthet-X (EX). Following 24 h storage in 37°C water, the top surfaces of each sample were roughened using 120-grit sandpaper. Baseline measurements of surface roughness (Ra, μm) and gloss were recorded. Each composite group was divided into four F/P disk groups: Astropol [AP], Enhance/PoGo [EP], Sof-Lex [SL], and an experimental disk system, EXL-695 [EXL] (n=5). The same operator finished/polished all samples. One sample from each group was evaluated under SEM. Another blinded operator conducted postoperative measurements. Results were analysed by two-way ANOVA, two interactive MANOVA and Tukey's t-test (p<0.05). In surface roughness, the baselines of the two composites differed significantly from each other, whereas postoperatively there was no significant difference. The Sof-Lex F/P system provided the smoothest surface, although there were no statistically significant differences between F/P systems (p>0.01). In gloss, the FS composite with the EXL-695 system provided a significantly higher gloss (p<0.01). EX treated by Sof-Lex revealed the least gloss (p<0.05). SEM images revealed comparable results for F/P systems, but EX surfaces included more air pockets. Four different finishing/polishing systems provided comparable surface smoothness for both composites, whereas EXL with FS provided significantly higher gloss. SEM evaluations revealed that the EX surface contained more air pockets but the F/P systems were comparable. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. The use of the cannibalistic habit and elevated relative humidity to improve the storage and shipment of the predatory mite Neoseiulus californicus (Acari: Phytoseiidae).

    PubMed

    Ghazy, Noureldin Abuelfadl; Amano, Hiroshi

    2016-07-01

    This study investigated the feasibility of using the cannibalistic habits of the mite Neoseiulus californicus (McGregor) and controlling the relative humidity (RH) to prolong the survival time during the storage or shipment of this predatory mite. Three-day-old mated and unmated females were individually kept at 25 ± 1 °C in polypropylene vials (1.5 mL), each containing one of the following items or combinations of items: a kidney bean leaf disk (L), N. californicus eggs (E), and both a leaf disk and the eggs (LE). Because the leaf disk increased the RH in the vials, the RH was 95 ± 2 % under the L and LE treatments and 56 ± 6 % under the E treatment. The median lethal time (LT50) exceeded 50 days for the mated and unmated females under the LE treatment. However, it did not exceed 11 or 3 days for all females under the L or E treatments, respectively. Under the LE treatment, the mated and unmated females showed cannibalistic behavior and consumed an average of 5.2 and 4.6 eggs/female/10 days. Some of the females that survived for LT50 under each treatment were transferred and fed normally with a constant supply of Tetranychus urticae Koch. Unmated females were provided with adult males for 24 h for mating. Only females previously kept under the LE treatment produced numbers of eggs equivalent to the control females (no treatment applied). The results suggested that a supply of predator eggs and leaf material might have furnished nutrition and water vapor, respectively, and that this combination prolonged the survival time of N. californicus during storage. Moreover, this approach poses no risk of pest contamination in commercial products.

  6. Observational studies of the clearing phase in proto-planetary disk systems

    NASA Technical Reports Server (NTRS)

    Grady, Carol A.

    1994-01-01

    A summary of the work completed during the first year of a 5-year program to observationally study the clearing phase of proto-planetary disks is presented. Analysis of archival and current IUE data, together with supporting optical observations, has resulted in the identification of 6 new proto-planetary disk systems associated with Herbig Ae/Be stars, the evolutionary precursors of the beta Pictoris system. These systems exhibit large amplitude light and optical color variations which enable us to identify additional systems which are viewed through their circumstellar disks, including a number of classical T Tauri stars. On-going IUE observations of Herbig Ae/Be and T Tauri stars with this orientation have enabled us to detect bipolar emission plausibly associated with disk winds. Preliminary circumstellar extinction studies were completed for one star, UX Ori. Intercomparison of the available sample of edge-on systems, with stars ranging from 1-6 solar masses, suggests that the signatures of accreting gas, disk winds, and bipolar flows and the prominence of a dust-scattered light contribution to the integrated light of the system decrease with decreasing IR excess.

  7. Accretion Disks and the Formation of Stellar Systems

    NASA Astrophysics Data System (ADS)

    Kratter, Kaitlin Michelle

    2011-02-01

    In this thesis, we examine the role of accretion disks in the formation of stellar systems, focusing on young massive disks which regulate the flow of material from the parent molecular core down to the star. We study the evolution of disks with high infall rates that develop strong gravitational instabilities. We begin in chapter 1 with a review of the observations and theory which underpin models for the earliest phases of star formation and provide a brief review of basic accretion disk physics, and the numerical methods that we employ. In chapter 2 we outline the current models of binary and multiple star formation, and review their successes and shortcomings from a theoretical and observational perspective. In chapter 3 we begin with a relatively simple analytic model for disks around young, high mass stars, showing that instability in these disks may be responsible for the higher multiplicity fraction of massive stars, and perhaps the upper mass to which they grow. We extend these models in chapter 4 to explore the properties of disks and the formation of binary companions across a broad range of stellar masses. In particular, we model the role of global and local mechanisms for angular momentum transport in regulating the relative masses of disks and stars. We follow the evolution of these disks throughout the main accretion phase of the system, and predict the trajectory of disks through parameter space. We follow up on the predictions made in our analytic models with a series of high resolution, global numerical experiments in chapter 5. Here we propose and test a new parameterization for describing rapidly accreting, gravitationally unstable disks. We find that disk properties and system multiplicity can be mapped out well in this parameter space. Finally, in chapter 6, we address whether our studies of unstable disks are relevant to recently detected massive planets on wide orbits around their central stars.

  8. Numerical Simulations of Naturally Tilted, Retrogradely Precessing, Nodal Superhumping Accretion Disks

    NASA Astrophysics Data System (ADS)

    Montgomery, M. M.

    2012-02-01

    Accretion disks around black hole, neutron star, and white dwarf systems are thought to sometimes tilt, retrogradely precess, and produce hump-shaped modulations in light curves that have a period shorter than the orbital period. Although artificially rotating numerically simulated accretion disks out of the orbital plane and around the line of nodes generate these short-period superhumps and retrograde precession of the disk, no numerical code to date has been shown to produce a disk tilt naturally. In this work, we report the first naturally tilted disk in non-magnetic cataclysmic variables using three-dimensional smoothed particle hydrodynamics. Our simulations show that after many hundreds of orbital periods, the disk has tilted on its own and this disk tilt is without the aid of radiation sources or magnetic fields. As the system orbits, the accretion stream strikes the bright spot (which is on the rim of the tilted disk) and flows over and under the disk on different flow paths. These different flow paths suggest the lift force as a source to disk tilt. Our results confirm the disk shape, disk structure, and negative superhump period and support the source to disk tilt, source to retrograde precession, and location associated with X-ray and He II emission from the disk as suggested in previous works. Our results identify the fundamental negative superhump frequency as the indicator of disk tilt around the line of nodes.

  9. Dynamics of binary and planetary-system interaction with disks - Eccentricity changes

    NASA Technical Reports Server (NTRS)

    Artymowicz, Pawel

    1992-01-01

    Protostellar and protoplanetary systems, as well as merging galactic nuclei, often interact tidally and resonantly with the astrophysical disks via gravity. Underlying our understanding of the formation processes of stars, planets, and some galaxies is a dynamical theory of such interactions. Its main goals are to determine the geometry of the binary-disk system and, through the torque calculations, the rate of change of orbital elements of the components. We present some recent developments in this field concentrating on eccentricity driving mechanisms in protoplanetary and protobinary systems. In those two types of systems the result of the interaction is opposite. A small body embedded in a disk suffers a decrease of orbital eccentricity, whereas newly formed binary stars surrounded by protostellar disks may undergo a significant orbital evolution increasing their eccentricities.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballering, Nicholas P.; Rieke, George H.; Gáspár, András, E-mail: ballerin@email.arizona.edu

    Observations of debris disks allow for the study of planetary systems, even where planets have not been detected. However, debris disks are often only characterized by unresolved infrared excesses that resemble featureless blackbodies, and the location of the emitting dust is uncertain due to a degeneracy with the dust grain properties. Here, we characterize the Spitzer Infrared Spectrograph spectra of 22 debris disks exhibiting 10 μm silicate emission features. Such features arise from small warm dust grains, and their presence can significantly constrain the orbital location of the emitting debris. We find that these features can be explained by the presence of an additional dust component in the terrestrial zones of the planetary systems, i.e., an exozodiacal belt. Aside from possessing exozodiacal dust, these debris disks are not particularly unique; their minimum grain sizes are consistent with the blowout sizes of their systems, and their brightnesses are comparable to those of featureless warm debris disks. These disks are in systems of a range of ages, though the older systems with features are found only around A-type stars. The features in young systems may be signatures of terrestrial planet formation. Analyzing the spectra of unresolved debris disks with emission features may be one of the simplest and most accessible ways to study the terrestrial regions of planetary systems.

  11. New capabilities in the HENP grand challenge storage access system and its application at RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, L.; Gibbard, B.; Malon, D.

    2000-04-25

    The High Energy and Nuclear Physics Data Access Grand Challenge project has developed an optimizing storage access software system that was prototyped at RHIC. It is currently undergoing integration with the STAR experiment in preparation for data taking that starts in mid-2000. The behavior and lessons learned in the RHIC Mock Data Challenge exercises are described as well as the observed performance under conditions designed to characterize scalability. Up to 250 simultaneous queries were tested and up to 10 million events across 7 event components were involved in these queries. The system coordinates the staging of "bundles" of files from the HPSS tape system, so that all the needed components of each event are in disk cache when accessed by the application software. The caching policy algorithm for the coordinated bundle staging is described in the paper. The initial prototype implementation interfaced to the Objectivity/DB. In this latest version, it evolved to work with arbitrary files and use CORBA interfaces to the tag database and file catalog services. The interface to the tag database and the MySQL-based file catalog services used by STAR are described along with the planned usage scenarios.
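    The coordinated "bundle" staging described above can be illustrated with a toy cache manager. The sketch below is hypothetical (the class and method names are invented and are not the Grand Challenge software's actual API): it stages every file component of an event bundle into the disk cache before a query runs, evicting least-recently-used files that are not pinned by an active bundle.

      from collections import OrderedDict

      class BundleCache:
          """Toy disk-cache coordinator: stage whole bundles, evict LRU unpinned files.
          Hypothetical sketch only, not the Grand Challenge software's real interface."""

          def __init__(self, capacity_bytes, stage_from_tape):
              self.capacity = capacity_bytes
              self.stage_from_tape = stage_from_tape   # callback that retrieves a file from tape
              self.cache = OrderedDict()               # file_id -> size, kept in LRU order
              self.pinned = set()                      # files belonging to active bundles
              self.used = 0

          def _evict_until(self, needed):
              # Drop least-recently-used, unpinned files until 'needed' bytes fit.
              for fid in list(self.cache):
                  if self.used + needed <= self.capacity:
                      break
                  if fid not in self.pinned:
                      self.used -= self.cache.pop(fid)

          def stage_bundle(self, bundle):
              """Make every file of the bundle disk-resident, then pin it for the query."""
              for fid, size in bundle.items():
                  if fid in self.cache:
                      self.cache.move_to_end(fid)      # refresh LRU position
                      continue
                  self._evict_until(size)
                  self.stage_from_tape(fid)            # e.g. issue a tape retrieval request
                  self.cache[fid] = size
                  self.used += size
              self.pinned.update(bundle)

          def release_bundle(self, bundle):
              self.pinned.difference_update(bundle)

      # Usage sketch: two event components per bundle, staged before the query touches them.
      cache = BundleCache(capacity_bytes=10_000, stage_from_tape=lambda fid: None)
      cache.stage_bundle({"run7.tags": 4_000, "run7.hits": 5_000})
      # ... query runs against the cached files ...
      cache.release_bundle({"run7.tags", "run7.hits"})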

  12. High fold computer disk storage DATABASE for fast extended analysis of γ-rays events

    NASA Astrophysics Data System (ADS)

    Stézowski, O.; Finck, Ch.; Prévost, D.

    1999-03-01

    Recently, spectacular technical developments have been achieved to increase the resolving power of large γ-ray spectrometers. With these new eyes, physicists are able to study the intricate nature of atomic nuclei. Concurrently, more and more complex multidimensional analyses are needed to investigate very weak phenomena. In this article, we first present a software package (DATABASE) allowing high-fold coincidence γ-ray events to be stored on hard disk. Then, a non-conventional method of analysis, the anti-gating procedure, is described. Two physical examples are given to explain how it can be used, and Monte Carlo simulations have been performed to test the validity of this method.
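    The abstract does not spell out the anti-gating procedure, so the following sketch only illustrates the usual sense of the term in γ-ray coincidence analysis: a gate keeps events that contain a transition near a chosen energy, while an anti-gate keeps events that do not. The event layout and energies below are invented for illustration.

      import numpy as np

      # Toy store of high-fold coincidence events: each row holds one event's gamma energies (keV).
      rng = np.random.default_rng(0)
      events = rng.choice([122.0, 245.0, 511.0, 847.0, 1173.0], size=(10_000, 4))

      def gate(events, energy, tol=1.0):
          """Keep events that contain at least one gamma ray within tol of the gate energy."""
          hit = np.abs(events - energy) < tol
          return events[hit.any(axis=1)]

      def anti_gate(events, energy, tol=1.0):
          """Keep events that contain no gamma ray near the gate energy (anti-gating)."""
          hit = np.abs(events - energy) < tol
          return events[~hit.any(axis=1)]

      print(len(gate(events, 511.0)), len(anti_gate(events, 511.0)))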

  13. STS-53 Discovery, OV-103, DOD Hercules digital electronic imagery equipment

    NASA Technical Reports Server (NTRS)

    1992-01-01

    STS-53 Discovery, Orbiter Vehicle (OV) 103, Department of Defense (DOD) mission Hand-held Earth-oriented Real-time Cooperative, User-friendly, Location, targeting, and Environmental System (Hercules) spaceborne experiment equipment is documented in this table top view. HERCULES is a joint NAVY-NASA-ARMY payload designed to provide real-time high resolution digital electronic imagery and geolocation (latitude and longitude determination) of earth surface targets of interest. HERCULES system consists of (from left to right): a specially modified GRID Systems portable computer mounted atop NASA developed Playback-Downlink Unit (PDU) and the Naval Research Laboratory (NRL) developed HERCULES Attitude Processor (HAP); the NASA-developed Electronic Still Camera (ESC) Electronics Box (ESCEB) including removable imagery data storage disks and various connecting cables; the ESC (a NASA modified Nikon F-4 camera) mounted atop the NRL HERCULES Inertial Measurement Unit (HIMU) containing the three-axis ring-laser gyro.

  14. STS-53 Discovery, OV-103, DOD Hercules digital electronic imagery equipment

    NASA Image and Video Library

    1992-04-22

    STS-53 Discovery, Orbiter Vehicle (OV) 103, Department of Defense (DOD) mission Hand-held Earth-oriented Real-time Cooperative, User-friendly, Location, targeting, and Environmental System (Hercules) spaceborne experiment equipment is documented in this table top view. HERCULES is a joint NAVY-NASA-ARMY payload designed to provide real-time high resolution digital electronic imagery and geolocation (latitude and longitude determination) of earth surface targets of interest. HERCULES system consists of (from left to right): a specially modified GRID Systems portable computer mounted atop NASA developed Playback-Downlink Unit (PDU) and the Naval Research Laboratory (NRL) developed HERCULES Attitude Processor (HAP); the NASA-developed Electronic Still Camera (ESC) Electronics Box (ESCEB) including removable imagery data storage disks and various connecting cables; the ESC (a NASA modified Nikon F-4 camera) mounted atop the NRL HERCULES Inertial Measurement Unit (HIMU) containing the three-axis ring-laser gyro.

  15. Electron beam diagnostic for profiling high power beams

    DOEpatents

    Elmer, John W [Danville, CA; Palmer, Todd A [Livermore, CA; Teruya, Alan T [Livermore, CA

    2008-03-25

    A system for characterizing high power electron beams at power levels of 10 kW and above is described. This system is comprised of a slit disk assembly having a multitude of radial slits, a conducting disk with the same number of radial slits located below the slit disk assembly, a Faraday cup assembly located below the conducting disk, and a start-stop target located proximate the slit disk assembly. In order to keep the system from over-heating during use, a heat sink is placed in close proximity to the components discussed above, and an active cooling system, using water, for example, can be integrated into the heat sink. During use, the high power beam is initially directed onto a start-stop target and after reaching its full power is translated around the slit disk assembly, wherein the beam enters the radial slits and the conducting disk radial slits and is detected at the Faraday cup assembly. A trigger probe assembly can also be integrated into the system in order to aid in the determination of the proper orientation of the beam during reconstruction. After passing over each of the slits, the beam is then rapidly translated back to the start-stop target to minimize the amount of time that the high power beam comes in contact with the slit disk assembly. The data obtained by the system is then transferred into a computer system, where a computer tomography algorithm is used to reconstruct the power density distribution of the beam.
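    The tomographic reconstruction step mentioned at the end of this record can be illustrated generically. The sketch below is not the patent's slit-disk geometry or algorithm; it simply shows, with scikit-image, how a two-dimensional power-density map can be recovered from a set of one-dimensional line-integral profiles by filtered back-projection.

      import numpy as np
      from skimage.transform import radon, iradon

      # Synthetic 2D "beam" power-density distribution: a slightly off-center Gaussian spot.
      n = 128
      y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      beam = np.exp(-((x - 0.1) ** 2 + (y + 0.05) ** 2) / 0.02)
      beam[x ** 2 + y ** 2 > 1] = 0.0   # radon() expects zero outside the inscribed circle

      # Each slit sweep yields a 1D profile of line integrals at one angle; many angles form a sinogram.
      angles = np.linspace(0.0, 180.0, 60, endpoint=False)
      sinogram = radon(beam, theta=angles)

      # Filtered back-projection recovers the power-density map from the profiles.
      reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
      print("max reconstruction error:", float(np.abs(reconstruction - beam).max()))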

  16. Imaging Transitional Disks with TMT: Lessons Learned from the SEEDS Survey

    NASA Technical Reports Server (NTRS)

    Grady, Carol A.; Fukagawa, M.; Muto, T.; Hashimoto, J.

    2014-01-01

    TMT studies of the early phases of giant planet formation will build on studies carried out in this decade using 8-meter class telescopes. One such study is the Strategic Exploration of Exoplanets and Disks with Subaru transitional disk survey. We have found a wealth of indirect signatures of giant planet presence, including spiral arms, pericenter offsets of the outer disk from the star, and changes in disk color at the inner edge of the outer disk in intermediate-mass PMS star disks. T Tauri star transitional disks are less flamboyant, but are also dynamically colder: any spiral arms in these disks will be more tightly wound. Imaging such features at the distance of the nearest star-forming regions requires higher angular resolution than achieved with HiCIAO + AO188. Imaging such disks with extreme AO systems requires the use of laser guide stars, and is infeasible with the extreme AO systems currently being commissioned on 8-meter class telescopes. Similarly, the JWST and AFTA-WFIRST coronagraphs being considered have inner working angles of ~0.2 arcsec, and will occult the inner 28 astronomical units of systems at d ~ 140 pc, a region where both high-contrast imagery and ALMA data indicate that giant planets are located in transitional disks. However, studies of transitional disks associated with solar-mass stars and their planet complement are feasible with TMT using NFIRAOS.
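    For reference, the occulted radius quoted above follows from the small-angle relation a [AU] ≈ θ [arcsec] × d [pc]: with an inner working angle θ ≈ 0.2 arcsec and a distance d ≈ 140 pc, a ≈ 0.2 × 140 = 28 AU.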

  17. Collective transport for active matter run-and-tumble disk systems on a traveling-wave substrate

    DOE PAGES

    Sándor, Csand; Libál, Andras; Reichhardt, Charles; ...

    2017-01-17

    Here, we examine numerically the transport of an assembly of active run-and-tumble disks interacting with a traveling-wave substrate. We show that as a function of substrate strength, wave speed, disk activity, and disk density, a variety of dynamical phases arise that are correlated with the structure and net flux of disks. We find that there is a sharp transition into a state in which the disks are only partially coupled to the substrate and form a phase-separated cluster state. This transition is associated with a drop in the net disk flux, and it can occur as a function of the substrate speed, maximum substrate force, disk run time, and disk density. Since variation of the disk activity parameters produces different disk drift rates for a fixed traveling-wave speed on the substrate, the system we consider could be used as an efficient method for active matter species separation. Within the cluster phase, we find that in some regimes the motion of the cluster center of mass is in the opposite direction to that of the traveling wave, while when the maximum substrate force is increased, the cluster drifts in the direction of the traveling wave. This suggests that swarming or clustering motion can serve as a method by which an active system can collectively move against an external drift.

  18. Collective transport for active matter run-and-tumble disk systems on a traveling-wave substrate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sándor, Csand; Libál, Andras; Reichhardt, Charles

    Here, we examine numerically the transport of an assembly of active run-and-tumble disks interacting with a traveling-wave substrate. We show that as a function of substrate strength, wave speed, disk activity, and disk density, a variety of dynamical phases arise that are correlated with the structure and net flux of disks. We find that there is a sharp transition into a state in which the disks are only partially coupled to the substrate and form a phase-separated cluster state. This transition is associated with a drop in the net disk flux, and it can occur as a function of the substrate speed, maximum substrate force, disk run time, and disk density. Since variation of the disk activity parameters produces different disk drift rates for a fixed traveling-wave speed on the substrate, the system we consider could be used as an efficient method for active matter species separation. Within the cluster phase, we find that in some regimes the motion of the cluster center of mass is in the opposite direction to that of the traveling wave, while when the maximum substrate force is increased, the cluster drifts in the direction of the traveling wave. This suggests that swarming or clustering motion can serve as a method by which an active system can collectively move against an external drift.

  19. The properties of the disk system of globular clusters

    NASA Technical Reports Server (NTRS)

    Armandroff, Taft E.

    1989-01-01

    A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 + or - 29 km/s and a line-of-sight velocity dispersion of 59 + or - 14 km/s have been found for the metal-rich clusters.

  20. Method and system for managing power grid data

    DOEpatents

    Yin, Jian; Akyol, Bora A.; Gorton, Ian

    2015-11-10

    A system and method of managing time-series data for smart grids is disclosed. Data is collected from a plurality of sensors. An index is modified for a newly created block. One disk operation per read or write is performed. The one disk operation per read includes accessing and looking up the index to locate the data without movement of an arm of the disk, and obtaining the data. The one disk operation per write includes searching the disk for free space, calculating an offset, modifying the index, and writing the data contiguously into a block of the disk that the index points to.
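    The "one disk operation per read or write" idea can be sketched as an append-only block file paired with an in-memory index. The code below is a hypothetical illustration (names and layout invented, not the patented implementation): each write appends one fixed-size block and records its offset in the index; each read looks the offset up in memory and performs a single seek-and-read.

      import os
      import struct

      BLOCK = 4096  # fixed block size, so a block is always one contiguous disk access

      class TimeSeriesStore:
          """Toy append-only time-series block store with an in-memory index.
          Hypothetical sketch; not the patented smart-grid implementation."""

          def __init__(self, path):
              self.f = open(path, "a+b")
              self.index = {}              # (sensor_id, block_no) -> byte offset on disk

          def write_block(self, sensor_id, block_no, samples):
              payload = struct.pack(f"{len(samples)}d", *samples).ljust(BLOCK, b"\0")
              self.f.seek(0, os.SEEK_END)
              offset = self.f.tell()
              self.f.write(payload)        # single contiguous write at the end of the file
              self.index[(sensor_id, block_no)] = offset

          def read_block(self, sensor_id, block_no, count):
              offset = self.index[(sensor_id, block_no)]   # index lookup costs no disk I/O
              self.f.seek(offset)
              data = self.f.read(BLOCK)                    # single disk read
              return struct.unpack(f"{count}d", data[:count * 8])

      store = TimeSeriesStore("grid_data.bin")
      store.write_block("meter-42", 0, [230.1, 229.8, 230.4])
      print(store.read_block("meter-42", 0, 3))
      store.f.close()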

  1. Studies of extra-solar Oort Clouds and the Kuiper Disk

    NASA Technical Reports Server (NTRS)

    Stern, S. Alan

    1994-01-01

    The March 1994 Semi-Annual report for Studies of Extra-Solar Oort Clouds and the Kuiper Disk is presented. We are conducting research designed to enhance our understanding of the evolution and detectability of comet clouds and disks. This area holds promise for also improving our understanding of outer solar system formation, the bombardment history of the planets, the transport of volatiles and organics from the outer solar system to the inner planets, and to the ultimate fate of comet clouds around the Sun and other stars. According to 'standard' theory, both the Kuiper Disk and Oort Cloud are (at least in part) natural products of the planetary accumulation stage of solar system formation. One expects such assemblages to be a common attribute of other solar systems. Therefore, searches for comet disks and clouds orbiting other stars offer a new method for inferring the presence of planetary systems. Our three-year effort consists of two major efforts: observational work to predict and search for the signatures of Oort Clouds and comet disks around other stars; and modeling studies of the formation and evolution of the Kuiper Disk (KD) and similar assemblages that may reside around other stars, including beta Pic.

  2. Encapsulation of alpha-amylase into starch-based biomaterials: an enzymatic approach to tailor their degradation rate.

    PubMed

    Azevedo, Helena S; Reis, Rui L

    2009-10-01

    This paper reports the effect of alpha-amylase encapsulation on the degradation rate of a starch-based biomaterial. The encapsulation method consisted of mixing a thermostable alpha-amylase with a blend of corn starch and polycaprolactone (SPCL), which were processed by compression moulding to produce circular disks. The presence of water was avoided to keep the water activity low and consequently to minimize the enzyme activity during the encapsulation process. No degradation of the starch matrix occurred during processing and storage (the encapsulated enzyme remained inactive due to the absence of water), since no significant amount of reducing sugars was detected in solution. After the encapsulation process, the released enzyme activity from the SPCL disks after 28 days was found to be 40% compared to the free enzyme (unprocessed). Degradation studies on SPCL disks, with alpha-amylase encapsulated or free in solution, showed no significant differences in the degradation behaviour between the two conditions. This indicates that the alpha-amylase enzyme was successfully encapsulated with almost full retention of its enzymatic activity and that the encapsulation of alpha-amylase clearly accelerates the degradation rate of the SPCL disks, when compared with the enzyme-free disks. The results obtained in this work show that the degradation kinetics of the starch polymer can be controlled by the amount of alpha-amylase encapsulated into the matrix.

  3. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over the copied film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image format. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD-ROM depending on the quality of image required for the finding in question. This reduces the weight of a teaching file from that of a baby elephant to that of a single CD-ROM disc. Thus, with this method of teaching file preparation and storage the following advantages are realized: (1) technically easier and less time consuming image reproduction; (2) considerably less unwieldy and substantially more portable teaching files; (3) novel ability to index files and then retrieve specific cases of choice based on descriptive text.
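    The image-report "linking" and keyword indexing described above can be sketched in a few lines. The example below is a hypothetical illustration (file names, reports, and fields are invented): each digitized image is tethered to its dictated report, an inverted index maps keywords to cases, and all cases matching a user-specified term can be retrieved at once.

      import json
      from collections import defaultdict

      # Each teaching-file case links one digitized image to its dictated report.
      cases = [
          {"image": "case001.jpg", "report": "Right lower lobe pneumonia with small effusion."},
          {"image": "case002.jpg", "report": "Displaced spiral fracture of the tibia."},
          {"image": "case003.jpg", "report": "Left lower lobe pneumonia, no effusion."},
      ]

      # Build an inverted index: keyword -> list of case numbers.
      index = defaultdict(list)
      for i, case in enumerate(cases):
          for word in set(case["report"].lower().replace(",", " ").replace(".", " ").split()):
              index[word].append(i)

      def retrieve(keyword):
          """Return every image/report pair whose report mentions the keyword."""
          return [cases[i] for i in index.get(keyword.lower(), [])]

      # Store the cases and the index together, e.g. alongside the images on the CD-ROM.
      with open("teaching_file.json", "w") as fh:
          json.dump({"cases": cases, "index": index}, fh, indent=2)

      print([c["image"] for c in retrieve("pneumonia")])   # -> ['case001.jpg', 'case003.jpg']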

  4. Communication: Practical and rigorous reduction of the many-electron quantum mechanical Coulomb problem to O(N^(2/3)) storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, Mark R., E-mail: mark.pederson@science.doe.gov

    2015-04-14

    It is tacitly accepted that, for practical basis sets consisting of N functions, solution of the two-electron Coulomb problem in quantum mechanics requires storage of O(N^4) integrals in the small N limit. For localized functions, in the large N limit, or for planewaves, due to closure, the storage can be reduced to O(N^2) integrals. Here, it is shown that the storage can be further reduced to O(N^(2/3)) for separable basis functions. A practical algorithm, that uses standard one-dimensional Gaussian-quadrature sums, is demonstrated. The resulting algorithm allows for the simultaneous storage, or fast reconstruction, of any two-electron Coulomb integral required for a many-electron calculation on processors with limited memory and disk space. For example, for calculations involving a basis of 9171 planewaves, the memory required to effectively store all Coulomb integrals decreases from 2.8 Gbytes to less than 2.4 Mbytes.

  5. Communication: practical and rigorous reduction of the many-electron quantum mechanical Coulomb problem to O(N(2/3)) storage.

    PubMed

    Pederson, Mark R

    2015-04-14

    It is tacitly accepted that, for practical basis sets consisting of N functions, solution of the two-electron Coulomb problem in quantum mechanics requires storage of O(N(4)) integrals in the small N limit. For localized functions, in the large N limit, or for planewaves, due to closure, the storage can be reduced to O(N(2)) integrals. Here, it is shown that the storage can be further reduced to O(N(2/3)) for separable basis functions. A practical algorithm, that uses standard one-dimensional Gaussian-quadrature sums, is demonstrated. The resulting algorithm allows for the simultaneous storage, or fast reconstruction, of any two-electron Coulomb integral required for a many-electron calculation on processors with limited memory and disk space. For example, for calculations involving a basis of 9171 planewaves, the memory required to effectively store all Coulomb integrals decreases from 2.8 Gbytes to less than 2.4 Mbytes.
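    The separability argument in this record can be made explicit with the standard Gaussian integral transform of the Coulomb kernel; the identities below are textbook results, and they expose the per-dimension factorization that the quoted storage reduction exploits (the abstract does not spell out the full storage counting, so this sketch stops at the factorized form):

        \frac{1}{|\mathbf{r}_1 - \mathbf{r}_2|} = \frac{2}{\sqrt{\pi}} \int_0^{\infty} e^{-t^2 |\mathbf{r}_1 - \mathbf{r}_2|^2}\, dt
        \approx \frac{2}{\sqrt{\pi}} \sum_{q} w_q\, e^{-t_q^2 (x_1 - x_2)^2}\, e^{-t_q^2 (y_1 - y_2)^2}\, e^{-t_q^2 (z_1 - z_2)^2},

    so for separable basis functions \phi_i(\mathbf{r}) = X_i(x)\, Y_i(y)\, Z_i(z) every two-electron Coulomb integral reduces to a quadrature sum over products of one-dimensional factors such as

        I^{x}_{ij,kl}(t_q) = \iint X_i(x_1) X_j(x_1)\, e^{-t_q^2 (x_1 - x_2)^2}\, X_k(x_2) X_l(x_2)\, dx_1\, dx_2,

    and only these one-dimensional tables (together with the Gaussian-quadrature nodes t_q and weights w_q) need to be stored or regenerated on the fly.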

  6. Comets as Messengers from the Early Solar System - Emerging Insights on Delivery of Water, Nitriles, and Organics to Earth

    NASA Technical Reports Server (NTRS)

    Mumma, Michael J.; Charnley, Steven B.

    2012-01-01

    The question of exogenous delivery of water and organics to Earth and other young planets is of critical importance for understanding the origin of Earth's volatiles, and for assessing the possible existence of exo-planets similar to Earth. Viewed from a cosmic perspective, Earth is a dry planet, yet its oceans are enriched in deuterium by a large factor relative to nebular hydrogen and analogous isotopic enrichments in atmospheric nitrogen and noble gases are also seen. Why is this so? What are the implications for Mars? For icy Worlds in our Planetary System? For the existence of Earth-like exoplanets? An exogenous (vs. outgassed) origin for Earth's atmosphere is implied, and intense debate on the relative contributions of comets and asteroids continues - renewed by fresh models for dynamical transport in the protoplanetary disk, by revelations on the nature and diversity of volatile and rocky material within comets, and by the discovery of ocean-like water in a comet from the Kuiper Belt (cf., Mumma & Charnley 2011). Assessing the creation of conditions favorable to the emergence and sustenance of life depends critically on knowledge of the nature of the impacting bodies. Active comets have long been grouped according to their orbital properties, and this has proven useful for identifying the reservoir from which a given comet emerged (OC, KB) (Levison 1996). However, it is now clear that icy bodies were scattered into each reservoir from a range of nebular distances, and the comet populations in today's reservoirs thus share origins that are (in part) common. Comets from the Oort Cloud and Kuiper Disk reservoirs should have diverse composition, resulting from strong gradients in temperature and chemistry in the proto-planetary disk, coupled with dynamical models of early radial transport and mixing with later dispersion of the final cometary nuclei into the long-term storage reservoirs. The inclusion of material from the natal interstellar cloud is probable, for comets formed in the outer solar system.

  7. Modeling circumbinary planets: The case of Kepler-38

    NASA Astrophysics Data System (ADS)

    Kley, Wilhelm; Haghighipour, Nader

    2014-04-01

    Context. Recently, a number of planets orbiting binary stars have been discovered by the Kepler space telescope. In a few systems the planets reside close to the dynamical stability limit. Owing to the difficulty of forming planets in such close orbits, it is believed that they have formed farther out in the disk and migrated to their present locations. Aims: Our goal is to construct more realistic models of planet migration in circumbinary disks and to determine the final position of these planets more accurately. In our work, we focus on the system Kepler-38 where the planet is close to the stability limit. Methods: The evolution of the circumbinary disk is studied using two-dimensional hydrodynamical simulations. We study locally isothermal disks as well as more realistic models that include full viscous heating, radiative cooling from the disk surfaces, and radiative diffusion in the disk midplane. After the disk has been brought into a quasi-equilibrium state, a 115 Earth-mass planet is embedded and its evolution is followed. Results: In all cases the planets stop inward migration near the inner edge of the disk. In isothermal disks with a typical disk scale height of H/r = 0.05, the final outcome agrees very well with the observed location of planet Kepler-38b. For the radiative models, the disk thickness and location of the inner edge is determined by the mass in the system. For surface densities on the order of 3000 g/cm2 at 1 AU, the inner gap lies close to the binary and planets stop in the region between the 5:1 and 4:1 mean-motion resonances with the binary. A model with a disk with approximately a quarter of the mass yields a final position very close to the observed one. Conclusions: For planets migrating in circumbinary disks, the final position is dictated by the structure of the disk. Knowing the observed orbits of circumbinary planets, radiative disk simulations with embedded planets can provide important information on the physical state of the system during the final stages of its evolution. Movies are available in electronic form at http://www.aanda.org

  8. Evolution of protoplanetary disks with dynamo magnetic fields

    NASA Technical Reports Server (NTRS)

    Reyes-Ruiz, M.; Stepinski, Tomasz F.

    1994-01-01

    The notion that planetary systems are formed within dusty disks is certainly not a new one; the modern planet formation paradigm is based on suggestions made by Laplace more than 200 years ago. More recently, the foundations of accretion disk theory were initially developed with this problem in mind, and in the last decade astronomical observations have indicated that many young stars have disks around them. Such observations support the generally accepted model of a viscous Keplerian accretion disk for the early stages of planetary system formation. However, one of the major uncertainties remaining in understanding the dynamical evolution of protoplanetary disks is the mechanism responsible for the transport of angular momentum and subsequent mass accretion through the disk. This is a fundamental piece of the planetary system genesis problem since such mechanisms will determine the environment in which planets are formed. Among the mechanisms suggested for this effect is the Maxwell stress associated with a magnetic field threading the disk. Due to the low internal temperatures through most of the disk, even the question of the existence of a magnetic field must be seriously studied before including magnetic effects in the disk dynamics. On the other hand, from meteoritic evidence it is believed that magnetic fields of significant magnitude existed in the earliest, PP-disk-like, stage of our own solar system's evolution. Hence, the hypothesis that PP disks are magnetized is not made solely on the basis of theory. Previous studies have addressed the problem of the existence of a magnetic field in a steady-state disk and have found that the low conductivity results in a fast diffusion of the magnetic field on timescales much shorter than the evolutionary timescale. Hence the only way for a magnetic field to exist in PP disks for a considerable portion of their lifetimes is for it to be continuously regenerated. In the present work, we present results on the self-consistent evolution of a turbulent PP disk including the effects of a dynamo-generated magnetic field.

  9. Can Eccentric Debris Disks Be Long-lived? A First Numerical Investigation and Application to Zeta^2 Reticuli

    NASA Technical Reports Server (NTRS)

    Faramaz, V.; Beust, H.; Thebault, P.; Augereau, J.-C.; Bonsor, A.; del Burgo, C.; Ertel, S.; Marshall, J. P.; Milli, J.; Montesinos, B.; et al.

    2014-01-01

    Context. Imaging of debris disks has found evidence for both eccentric and offset disks. One hypothesis is that they provide evidence for massive perturbers, for example, planets or binary companions, which sculpt the observed structures. One such disk was recently observed in the far-IR by the Herschel Space Observatory around Zeta2 Reticuli. In contrast with previously reported systems, the disk is significantly eccentric, and the system is several Gyr old. Aims. We aim to investigate the long-term evolution of eccentric structures in debris disks caused by a perturber on an eccentric orbit around the star. We hypothesise that the observed eccentric disk around Zeta2 Reticuli might be evidence of such a scenario. If so, we are able to constrain the mass and orbit of a potential perturber, either a giant planet or a binary companion. Methods. Analytical techniques were used to predict the effects of a perturber on a debris disk. Numerical N-body simulations were used to verify these results and further investigate the observable structures that may be produced by eccentric perturbers. The long-term evolution of the disk geometry was examined, with particular application to the Zeta2 Reticuli system. In addition, synthetic images of the disk were produced for direct comparison with Herschel observations. Results. We show that an eccentric companion can produce both the observed offsets and eccentric disks. These effects are not immediate, and we characterise the timescale required for the disk to develop to an eccentric state (and any spirals to vanish). For Zeta2 Reticuli, we derive limits on the mass and orbit of the companion required to produce the observations. Synthetic images show that the pattern observed around Zeta2 Reticuli can be produced by an eccentric disk seen close to edge-on, and allow us to bring additional constraints on the disk parameters of our model (disk flux and extent). Conclusions. We conclude that eccentric planets or stellar companions can induce long-lived eccentric structures in debris disks. Observations of such eccentric structures thus provide potential evidence of the presence of such a companion in a planetary system. We considered the specific example of Zeta2 Reticuli, whose observed eccentric disk can be explained by a distant companion (at tens of AU) on an eccentric orbit (e_p ≳ 0.3).

  10. Force Network of a 2D Frictionless Emulsion System

    NASA Astrophysics Data System (ADS)

    Desmond, Kenneth; Weeks, Eric R.

    2010-03-01

    We use a quasi-two-dimensional emulsion as a new experimental system to measure various jamming transition properties. Our system consists of confining oil-in-water emulsion droplets between two parallel plates, so that the droplets are squeezed into quasi-two-dimensional disks, analogous to granular photoelastic disks. By varying the droplet area fraction, we investigate the force network of this system as we cross through the jamming transition. At a critical area fraction, the composition of the system is no longer characterized primarily by circular disks, but by disks deformed to varying degrees. Quantifying the deformation provides information about the forces acting upon each droplet, and ultimately the force network. The probability distribution of forces is similar to that found for photoelastic disks, with the width of the force distribution narrowing with increasing packing fraction.

  11. Planetary Systems Dynamics Eccentric patterns in debris disks & Planetary migration in binary systems

    NASA Astrophysics Data System (ADS)

    Faramaz, V.; Beust, H.; Augereau, J.-C.; Bonsor, A.; Thébault, P.; Wu, Y.; Marshall, J. P.; del Burgo, C.; Ertel, S.; Eiroa, C.; Montesinos, B.; Mora, A.

    2014-01-01

    We present some highlights of two ongoing investigations that deal with the dynamics of planetary systems. Firstly, until recently, observed eccentric patterns in debris disks were found in young systems. However, recent observations of Gyr-old eccentric debris disks lead us to question the survival timescale of this type of asymmetry. One such disk was recently observed in the far-IR by the Herschel Space Observatory around ζ2 Reticuli. Secondly, as a binary companion orbits a circumprimary disk, it creates regions where planet formation is strongly handicapped. However, some planets were detected in this zone in tight binary systems (γ Cep, HD 196885). We aim to determine whether a binary companion can affect migration such that planets are brought into these regions, and focus in particular on the planetesimal-driven migration mechanism.

  12. Intelligent holographic databases

    NASA Astrophysics Data System (ADS)

    Barbastathis, George

    Memory is a key component of intelligence. In the human brain, physical structure and functionality jointly provide diverse memory modalities at multiple time scales. How could we engineer artificial memories with similar faculties? In this thesis, we attack both hardware and algorithmic aspects of this problem. A good part is devoted to holographic memory architectures, because they meet high capacity and parallelism requirements. We develop and fully characterize shift multiplexing, a novel storage method that simplifies disk head design for holographic disks. We develop and optimize the design of compact refreshable holographic random access memories, showing several ways that 1 Tbit can be stored holographically in a volume of less than 1 m^3, with a surface density more than 20 times higher than that of conventional silicon DRAM integrated circuits. To address the issue of photorefractive volatility, we further develop the two-lambda (dual wavelength) method for shift multiplexing, and combine electrical fixing with angle multiplexing to demonstrate 1,000 multiplexed fixed holograms. Finally, we propose a noise model and an information theoretic metric to optimize the imaging system of a holographic memory, in terms of storage density and error rate. Motivated by the problem of interfacing sensors and memories to a complex system with limited computational resources, we construct a computer game of Desert Survival, built as a high-dimensional non-stationary virtual environment in a competitive setting. The efficacy of episodic learning, implemented as a reinforced Nearest Neighbor scheme, and the probability of winning against a control opponent improve significantly by concentrating the algorithmic effort on the virtual desert neighborhood that emerges as most significant at any time. The generalized computational model combines the autonomous neural network and von Neumann paradigms through a compact, dynamic central representation, which contains the most salient features of the sensory inputs, fused with relevant recollections, reminiscent of the hypothesized cognitive function of awareness. The Declarative Memory is searched both by content and address, suggesting a holographic implementation. The proposed computer architecture may lead to a novel paradigm that solves 'hard' cognitive problems at low cost.

  13. Flake storage effects on properties of laboratory-made flakeboards

    Treesearch

    C. G. Carll

    1998-01-01

    Aspen (Populus grandidentata) and loblolly pine (Pinus taeda) flakes were prepared with tangential-grain and radial-grain faces on a laboratory disk flaker. These were gently dried in a steam-heated rotary drum dryer. Approximately 1 week after drying, surface wettability was measured on a large sample of flakes using an aqueous dye solution. Three replicate boards of...

  14. OT1_ipascucc_1: Understanding the Origin of Transition Disks via Disk Mass Measurements

    NASA Astrophysics Data System (ADS)

    Pascucci, I.

    2010-07-01

    Transition disks are a distinct group of few-Myr-old systems caught in the phase of dispersing their inner dust disk. Three different processes have been proposed to explain this inside-out clearing: grain growth, photoevaporation driven by the central star, and dynamical clearing by a forming giant planet. Which of these processes leads to a transition disk? Distinguishing between them requires the combined knowledge of stellar accretion rates and disk masses. We propose here to use 43.8 hours of PACS spectroscopy to detect the [OI] 63 micron emission line from a sample of 21 well-known transition disks with measured mass accretion rates. We will use this line, in combination with ancillary CO millimeter lines, to measure their gas disk mass. Because gas dominates the mass of protoplanetary disks, our approach and choice of lines will enable us to trace the bulk of the disk mass that resides beyond tens of AU from young stars. Our program will quadruple the number of transition disks currently observed with Herschel in this setting and for which disk masses can be measured. We will then place the transition and the ~100 classical/non-transition disks of similar age (from the Herschel KP "Gas in Protoplanetary Systems") in the mass accretion rate-disk mass diagram with two main goals: 1) reveal which gaps have been created by grain growth, photoevaporation, or giant planet formation and 2) from the statistics, determine the main disk dispersal mechanism leading to a transition disk.

  15. UBVR observation of V1357 Cyg = Cyg X-1. Search of the optical radiation of the accretion disk

    NASA Technical Reports Server (NTRS)

    Shevchenko, V. S.

    1979-01-01

    Data from 30 nights of V 1357 Cyg observations in July, August, and September of 1977 are presented. The contribution of the disk to the optical brightness of the system is computed, taking into account the heating of its surface by ultraviolet radiation from V 1357 Cyg and X-ray radiation from Cyg X-1. The disk radiation explains the irregular variability in the system brightness. The possibility of eclipses of the star by the disk and of the disk by the star is discussed.

  16. Compact laser amplifier system

    DOEpatents

    Carr, R.B.

    1974-02-26

    A compact laser amplifier system is described in which a plurality of face-pumped annular disks, aligned along a common axis, independently and radially amplify a stimulating light pulse. Partially reflective or lasing means, coaxially positioned at the center of each annular disk, radially deflects the stimulating light directed down the common axis uniformly into each disk for amplification, such that the light is amplified by the disks in a parallel manner. Circumferential reflecting means coaxially disposed around each disk directs the amplified light emission either toward a common point or in a common direction. (Official Gazette)

  17. Steamy Solar System

    NASA Technical Reports Server (NTRS)

    2007-01-01

    [Diagram removed from this record; an annotated version is available at the original site.]

    This diagram illustrates the earliest journeys of water in a young, forming star system. Stars are born out of icy cocoons of gas and dust. As the cocoon collapses under its own weight in an inside-out fashion, a stellar embryo forms at the center surrounded by a dense, dusty disk. The stellar embryo 'feeds' from the disk for a few million years, while material in the disk begins to clump together to form planets.

    NASA's Spitzer Space Telescope was able to probe a crucial phase of this stellar evolution: a time when the cocoon is vigorously falling onto the pre-planetary disk. The infrared telescope detected water vapor as it smacked down onto a disk circling a forming star called NGC 1333-IRAS 4B. This vapor started out as ice in the outer envelope, but vaporized upon its arrival at the disk.

    By analyzing the water in the system, astronomers were also able to learn about other characteristics of the disk, such as its size, density, and temperature.

    How did Spitzer see the water vapor deep in the NGC 1333-IRAS 4B system? This is most likely because the system is oriented in just the right way, such that its thicker disk is seen face-on from our Earthly perspective. In this 'face-on' orientation, Spitzer can peer through a window carved by an outflow of material from the embryonic star. In this drawing, the system is shown in the opposite 'edge-on' configuration.

  18. Data management support for selected climate data sets using the climate data access system

    NASA Technical Reports Server (NTRS)

    Reph, M. G.

    1983-01-01

    The functional capabilities of the Goddard Space Flight Center (GSFC) Climate Data Access System (CDAS), an interactive data storage and retrieval system, and the archival data sets which this system manages are discussed. The CDAS manages several climate-related data sets, such as the First Global Atmospheric Research Program (GARP) Global Experiment (FGGE) Level 2-b and Level 3-a data tapes. CDAS data management support consists of three basic functions: (1) an inventory capability which allows users to search or update a disk-resident inventory describing the contents of each tape in a data set, (2) a capability to depict graphically the spatial coverage of a tape in a data set, and (3) a data set selection capability which allows users to extract portions of a data set using criteria such as time, location, and data source/parameter and output the data to tape, user terminal, or system printer. This report includes figures that illustrate menu displays and output listings for each CDAS function.
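
    As an illustration of the criteria-based subsetting described in function (3), the following is a hypothetical Python sketch of selecting records by time window, bounding box, and data source/parameter. The record layout and field names are invented for illustration only and do not reflect the actual CDAS implementation or tape formats.

    ```python
    # Hypothetical sketch of criteria-based data selection (time, location,
    # source/parameter). Field names and types are invented for illustration.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Iterable, List

    @dataclass
    class Record:
        time: datetime      # observation time
        lat: float          # latitude in degrees
        lon: float          # longitude in degrees
        source: str         # data source/parameter identifier (e.g. an FGGE code)
        payload: bytes      # the data record itself

    def select(records: Iterable[Record], t0: datetime, t1: datetime,
               lat_min: float, lat_max: float, lon_min: float, lon_max: float,
               sources: List[str]) -> List[Record]:
        """Return the records that satisfy the time, location, and source criteria."""
        return [r for r in records
                if t0 <= r.time <= t1
                and lat_min <= r.lat <= lat_max
                and lon_min <= r.lon <= lon_max
                and r.source in sources]

    # The selected subset could then be written to tape, terminal, or printer,
    # mirroring the output options listed in the abstract.
    ```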

  19. Optimizing a tandem disk model

    NASA Astrophysics Data System (ADS)

    Healey, J. V.

    1983-08-01

    The optimum values of the solidity ratio, tip speed ratio (TSR), and preset angle of attack, the corresponding distribution, and the breakdown mechanism for a tandem disk model of a crosswind machine such as a Darrieus are examined analytically. Equations are formulated for thin blades with zero drag, considering two plane rectangular disks, both perpendicular to the wind flow. Power coefficients are obtained for both disks, and comparisons are made between a single-disk system and a two-disk system. The power coefficient for the tandem disk model is shown to be the sum of the coefficients of the individual disks, with a maximum value of twice the Betz limit at an angle of attack of -1 deg and a TSR between 4 and 7. The model, applied to the NACA 0012 profile, gives a maximum power coefficient of 0.967 with a solidity ratio of 0.275, and very narrow ranges for the angle of attack and TSR.
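
    For reference, the Betz limit quoted in the abstract comes from the classical one-dimensional momentum (actuator-disk) analysis, in which the power coefficient of a single ideal disk is Cp = 4a(1 - a)^2 for axial induction factor a, maximized at a = 1/3 where Cp = 16/27 ≈ 0.593. The short sketch below reproduces only this textbook single-disk result; it is not the paper's tandem-disk model, which sums the coefficients of the two disks.

    ```python
    # Textbook single-actuator-disk (Betz) calculation, provided only as background
    # for the Betz limit referenced in the abstract.

    import numpy as np

    def power_coefficient(a):
        """Power coefficient of an ideal single actuator disk: Cp(a) = 4 a (1 - a)^2."""
        return 4.0 * a * (1.0 - a) ** 2

    a = np.linspace(0.0, 0.5, 501)        # axial induction factor
    cp = power_coefficient(a)
    i = int(cp.argmax())
    print(a[i], cp[i])                    # ~0.333 and ~0.593 (= 16/27), the Betz limit
    ```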

  20. Formation of Sharp Eccentric Rings in Debris Disks with Gas but Without Planets

    NASA Technical Reports Server (NTRS)

    Lyra, W.; Kuchner, M.

    2013-01-01

    'Debris disks' around young stars (analogues of the Kuiper Belt in our Solar System) show a variety of non-trivial structures attributed to planetary perturbations and used to constrain the properties of those planets. However, these analyses have largely ignored the fact that some debris disks are found to contain small quantities of gas, a component that all such disks should contain at some level. Several debris disks have been measured with a dust-to-gas ratio of about unity, at which the effect of hydrodynamics on the structure of the disk cannot be ignored. Here we report linear and nonlinear modelling that shows that dust-gas interactions can produce some of the key patterns attributed to planets. We find a robust clumping instability that organizes the dust into narrow, eccentric rings, similar to the Fomalhaut debris disk. Hence, the presence of planets is not necessarily required to explain these systems.
