Sample records for disk storage required

  1. Mass storage technology in networks

    NASA Astrophysics Data System (ADS)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to use their processing power effectively. The requirements of high transfer rates, high transaction rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges for storage subsystems. Recent progress in optical disk technology has improved the performance of on-line external memories based on optical disk drives, which now compete with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access files storing multimedia data that require large capacity, such as archival use and information distribution on ROM disks. Finally, image-coded document file servers for local area network use, employing 130 mm rewritable magneto-optical disk subsystems, are demonstrated.

  2. Archive Storage Media Alternatives.

    ERIC Educational Resources Information Center

    Ranade, Sanjay

    1990-01-01

    Reviews requirements for a data archive system and describes storage media alternatives that are currently available. Topics discussed include data storage; data distribution; hierarchical storage architecture, including inline storage, online storage, nearline storage, and offline storage; magnetic disks; optical disks; conventional magnetic…

  3. Planning for optical disk technology with digital cartography.

    USGS Publications Warehouse

    Light, D.L.

    1986-01-01

    A major shortfall that still exists in digital systems is the need for very large mass storage capacity. The decade of the 1980s has introduced laser optical disk storage technology, which may be the breakthrough needed for mass storage. This paper addresses system concepts for digital cartography during the transition period. Emphasis will be placed on determining USGS mass storage requirements and introducing laser optical disk technology for handling storage problems for digital data in this decade.-from Author

  4. Electron trapping data storage system and applications

    NASA Technical Reports Server (NTRS)

    Brower, Daniel; Earman, Allen; Chaffin, M. H.

    1993-01-01

    The advent of digital information storage and retrieval has led to explosive growth in data transmission techniques, data compression alternatives, and the need for high capacity random access data storage. The pace of advances in data storage technologies is limiting the utilization of digitally based systems. New storage technologies will be required which can provide higher data capacities and faster transfer rates in a more compact format. Magnetic disk/tape and current optical data storage technologies do not provide these higher performance requirements for all digital data applications. A new technology developed at the Optex Corporation outperforms all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media is capable of storing as much as 14 gigabytes of uncompressed data on a single, double-sided 5.25 inch disk with a data transfer rate of up to 12 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out 100 percent photonically, no heating of the recording media is required. Therefore, the storage media suffers no deleterious effects from repeated Write/Read/Erase cycling.

  5. PLANNING FOR OPTICAL DISK TECHNOLOGY WITH DIGITAL CARTOGRAPHY.

    USGS Publications Warehouse

    Light, Donald L.

    1984-01-01

    Progress in the computer field continues to suggest that the transition from traditional analog mapping systems to digital systems has become a practical possibility. A major shortfall that still exists in digital systems is the need for very large mass storage capacity. The decade of the 1980's has introduced laser optical disk storage technology, which may be the breakthrough needed for mass storage. This paper addresses system concepts for digital cartography during the transition period. Emphasis is placed on determining U. S. Geological Survey mass storage requirements and introducing laser optical disk technology for handling storage problems for digital data in this decade.

  6. Status of emerging standards for removable computer storage media and related contributions of NIST

    NASA Technical Reports Server (NTRS)

    Podio, Fernando L.

    1992-01-01

    Standards for removable computer storage media are needed so that users may reliably interchange data both within and among various computer installations. Furthermore, media interchange standards support competition in industry and prevent sole-source lock-in. NIST participates in magnetic tape and optical disk standards development through Technical Committees X3B5, Digital Magnetic Tapes, X3B11, Optical Digital Data Disk, and the Joint Technical Commission on Data Permanence. NIST also participates in other relevant national and international standards committees for removable computer storage media. Industry standards for digital magnetic tapes require the use of Standard Reference Materials (SRM's) developed and maintained by NIST. In addition, NIST has been studying care and handling procedures required for digital magnetic tapes. NIST has developed a methodology for determining the life expectancy of optical disks. NIST is developing care and handling procedures for optical digital data disks and is involved in a program to investigate error reporting capabilities of optical disk drives. This presentation reflects the status of emerging magnetic tape and optical disk standards, as well as NIST's contributions in support of these standards.

  7. Mean PB To Failure - Initial results from a long-term study of disk storage patterns at the RACF

    NASA Astrophysics Data System (ADS)

    Caramarcu, C.; Hollowell, C.; Rao, T.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, S. A.

    2015-12-01

    The RACF (RHIC-ATLAS Computing Facility) has operated a large, multi-purpose dedicated computing facility since the mid-1990’s, serving a worldwide, geographically diverse scientific community that is a major contributor to various HEPN projects. A central component of the RACF is the Linux-based worker node cluster that is used for both computing and data storage purposes. It currently has nearly 50,000 computing cores and over 23 PB of storage capacity distributed over 12,000+ (non-SSD) disk drives. The majority of the 12,000+ disk drives provide a cost-effective solution for dCache/XRootD-managed storage, and a key concern is the reliability of this solution over the lifetime of the hardware, particularly as the number of disk drives and the storage capacity of individual drives grow. We report initial results of a long-term study to measure lifetime PB read/written to disk drives in the worker node cluster. We discuss the historical disk drive mortality rate, disk drive manufacturers' published MPTF (Mean PB to Failure) data and how they are correlated to our results. The results help the RACF understand the productivity and reliability of its storage solutions and have implications for other highly-available storage systems (NFS, GPFS, CVMFS, etc) with large I/O requirements.
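
    The "Mean PB to Failure" figure tracked in the study can be illustrated with a back-of-the-envelope fleet calculation; the sketch below uses made-up numbers (only the drive count roughly matches the fleet size quoted above) and is not a result from the RACF measurements.

    ```python
    # Hypothetical illustration of a fleet-level "Mean PB to Failure" (MPTF) estimate.
    # These figures do not come from the RACF study; they are assumptions for the example.

    drive_count = 12_000            # drives in the worker node cluster
    avg_pb_per_drive = 1.8          # assumed lifetime PB read+written per drive
    failed_drives = 150             # assumed drive failures over the same period

    total_pb = drive_count * avg_pb_per_drive
    mptf = total_pb / failed_drives  # PB of I/O per drive failure across the fleet

    print(f"Total I/O across fleet: {total_pb:.0f} PB")
    print(f"Observed MPTF: {mptf:.0f} PB per failure")
    ```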

  8. Telemetry data storage systems technology for the Space Station Freedom era

    NASA Technical Reports Server (NTRS)

    Dalton, John T.

    1989-01-01

    This paper examines the requirements and functions of the telemetry-data recording and storage systems, and the data-storage-system technology projected for the Space Station, with particular attention given to the Space Optical Disk Recorder, an on-board storage subsystem based on 160-gigabit erasable optical disk units each capable of operating at 300 megabits per second. Consideration is also given to storage systems for ground transport recording, which include systems for data capture, buffering, processing, and delivery on the ground. These can be categorized as first-in first-out storage, fast random-access storage, and slow access with staging. Based on projected mission manifests and data rates, worst-case requirements were developed for these three storage architecture functions. The results of the analysis are presented.
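
    As a point of reference, the quoted unit capacity and rate imply the following fill time per optical disk unit; this is simple illustrative arithmetic, not a figure from the paper.

    ```python
    # Time to fill one 160-gigabit erasable optical disk unit at 300 Mbit/s,
    # using the figures quoted in the abstract (decimal prefixes assumed).

    capacity_bits = 160e9        # 160 gigabits per unit
    rate_bits_per_s = 300e6      # 300 megabits per second sustained

    fill_time_s = capacity_bits / rate_bits_per_s
    print(f"Fill time per unit: {fill_time_s:.0f} s (~{fill_time_s/60:.1f} minutes)")
    # -> roughly 533 s, i.e. just under nine minutes of sustained recording per unit
    ```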

  9. Integrating new Storage Technologies into EOS

    NASA Astrophysics Data System (ADS)

    Peters, Andreas J.; van der Ster, Dan C.; Rocha, Joaquim; Lensing, Paul

    2015-12-01

    The EOS[1] storage software was designed to cover CERN disk-only storage use cases in the medium term, trading scalability against latency. To cover and prepare for long-term requirements, the CERN IT data and storage services group (DSS) is actively conducting R&D and making open source contributions to experiment with a next-generation storage software based on CEPH[3] and Ethernet-enabled disk drives. CEPH provides a scale-out object storage system, RADOS, and additionally various optional high-level services such as an S3 gateway, RADOS block devices and a POSIX-compliant file system, CephFS. The acquisition of CEPH by Redhat underlines the promising role of CEPH as the open source storage platform of the future. CERN IT is running a CEPH service in the context of OpenStack on a moderate scale of 1 PB of replicated storage. Building a 100+ PB storage system based on CEPH will require software and hardware tuning, and it is critically important to demonstrate feasibility and to iron out bottlenecks and blocking issues beforehand. The main idea behind this R&D is to leverage and contribute to existing building blocks in the CEPH storage stack and to implement a few CERN-specific requirements in a thin, customisable storage layer. A second research topic is the integration of Ethernet-enabled disks. This paper introduces various ongoing open source developments, their status and applicability.

  10. Short-term storage allocation in a filmless hospital

    NASA Astrophysics Data System (ADS)

    Strickland, Nicola H.; Deshaies, Marc J.; Reynolds, R. Anthony; Turner, Jonathan E.; Allison, David J.

    1997-05-01

    Optimizing limited short term storage (STS) resources requires gradual, systematic changes, monitored and modified within an operational PACS environment. Optimization of the centralized storage requires a balance of exam numbers and types in STS to minimize lengthy retrievals from long term archive. Changes to STS parameters and work procedures were made while monitoring the effects on resource allocation by analyzing disk space temporally. Proportions of disk space allocated to each patient category on STS were measured to approach the desired proportions in a controlled manner. Key factors for STS management were: (1) sophisticated exam prefetching algorithms: HIS/RIS-triggered, body part-related and historically-selected, and (2) a 'storage onion' design allocating various exam categories to layers with differential deletion protection. Hospitals planning for STS space should consider the needs of radiology, wards, outpatient clinics and clinicoradiological conferences for new and historical exams; desired on-line time; and potential increase in image throughput and changing resources, such as an increase in short term storage disk space.
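
    The "storage onion" idea can be pictured as a layered deletion policy; the sketch below is a minimal illustration under assumed layer names, sizes and ordering, not the policy actually deployed at the hospital.

    ```python
    # Minimal sketch of a "storage onion" deletion policy: exam categories are mapped
    # to layers with increasing deletion protection, and space is reclaimed from the
    # least-protected layer first. Layer ordering and sizes are illustrative assumptions.

    from collections import namedtuple

    Exam = namedtuple("Exam", "exam_id category size_gb last_access")

    # Lower layer number = deleted first when short-term storage (STS) is full.
    LAYER = {"outpatient_old": 0, "ward_old": 1, "outpatient_new": 2,
             "ward_new": 3, "conference": 4}

    def reclaim(exams, need_gb):
        """Delete exams layer by layer (oldest first within a layer) until need_gb is freed."""
        freed, kept, deleted = 0.0, list(exams), []
        # Sort so the least-protected, least-recently-accessed exams come first.
        kept.sort(key=lambda e: (LAYER[e.category], e.last_access))
        while kept and freed < need_gb:
            victim = kept.pop(0)
            deleted.append(victim.exam_id)
            freed += victim.size_gb
        return deleted, freed

    exams = [Exam("E1", "outpatient_old", 0.6, 10), Exam("E2", "ward_new", 0.8, 50),
             Exam("E3", "conference", 0.4, 5), Exam("E4", "outpatient_old", 0.5, 2)]
    print(reclaim(exams, need_gb=1.0))   # frees E4 then E1 before touching protected layers
    ```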

  11. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer system implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted; these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  12. Disk storage management for LHCb based on Data Popularity estimator

    NASA Astrophysics Data System (ADS)

    Hushchyn, Mikhail; Charpentier, Philippe; Ustyuzhanin, Andrey

    2015-12-01

    This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times for jobs using this data.
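
    The overall flow of such a popularity-driven recommendation can be sketched as follows; the features, the regression model and the replica/eviction thresholds here are illustrative assumptions, not the LHCb implementation.

    ```python
    # Hedged sketch of popularity-driven disk management: predict future accesses from
    # usage history, then keep or evict datasets and choose replica counts under a disk
    # budget. Features, model choice and thresholds are illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    # Each row: [accesses last week, accesses last month, age in weeks, size in TB]
    X = rng.random((200, 4)) * [50, 200, 100, 10]
    y = 0.5 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 2, 200)   # synthetic future accesses

    model = GradientBoostingRegressor().fit(X, y)

    def plan_storage(datasets, disk_budget_tb, min_accesses=1.0):
        """Return ({name: replicas}, [names to evict]) under a simple disk budget."""
        preds = model.predict(np.array([d["features"] for d in datasets]))
        replica_plan, evict, used_tb = {}, [], 0.0
        for d, p in sorted(zip(datasets, preds), key=lambda t: -t[1]):
            replicas = 0 if p < min_accesses else min(3, 1 + int(p // 20))
            if replicas and used_tb + replicas * d["size_tb"] <= disk_budget_tb:
                replica_plan[d["name"]] = replicas
                used_tb += replicas * d["size_tb"]
            else:
                evict.append(d["name"])          # dataset stays on tape archive only
        return replica_plan, evict

    datasets = [{"name": "DS1", "size_tb": 3.0, "features": [40, 150, 10, 3.0]},
                {"name": "DS2", "size_tb": 5.0, "features": [0, 2, 90, 5.0]}]
    print(plan_storage(datasets, disk_budget_tb=8.0))
    ```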

  13. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  14. Incorporating Oracle on-line space management with long-term archival technology

    NASA Technical Reports Server (NTRS)

    Moran, Steven M.; Zak, Victor J.

    1996-01-01

    The storage requirements of today's organizations are exploding. As computers continue to escalate in processing power, applications grow in complexity and data files grow in size and in number. As a result, organizations are forced to procure more and more megabytes of storage space. This paper focuses on how to expand the storage capacity of a Very Large Database (VLDB) cost-effectively within an Oracle7 data warehouse system by integrating long-term archival storage subsystems with traditional magnetic media. The Oracle architecture described in this paper was based on an actual proof of concept for a customer looking to store archived data on optical disks yet still have access to this data without user intervention. The customer had a requirement to maintain 10 years' worth of data on-line. Data less than a year old still had the potential to be updated and thus would reside on conventional magnetic disks. Data older than a year would be considered archived and would be placed on optical disks. The ability to archive data to optical disk and still have access to that data gives the system a means to retain large amounts of readily accessible data while significantly reducing the cost of total system storage. Therefore, the cost benefits of archival storage devices can be incorporated into the Oracle storage medium and I/O subsystem without losing any of the functionality of transaction processing, while at the same time providing an organization access to all of its data.

  15. Cost-effective data storage/archival subsystem for functional PACS

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.; Kim, Yongmin

    1993-09-01

    Not the least of the requirements of a workable PACS is the ability to store and archive vast amounts of information. A medium-size hospital will generate between 1 and 2 TBytes of data annually on a fully functional PACS. A high-speed image transmission network coupled with a comparably high-speed central data storage unit can make local memory and magnetic disks in the PACS workstations less critical and, in an extreme case, unnecessary. Under these circumstances, the capacity and performance of the central data storage subsystem and database are critical in determining the response time at the workstations, thus significantly affecting clinical acceptability. The central data storage subsystem not only needs to provide sufficient capacity to store about ten days' worth of images (five days' worth of new studies and, on average, about one comparison study for each new study), but must also supply images to the requesting workstation in a timely fashion. The database must provide fast retrieval upon users' requests for images. This paper analyzes the advantages and disadvantages of multiple parallel transfer disks versus RAID disks for the short-term central data storage subsystem, as well as optical disk jukeboxes versus digital tape subsystems for long-term archive. Furthermore, an example of a high-performance, cost-effective storage subsystem which integrates both RAID disks and a high-speed digital tape subsystem as a PACS data storage/archival unit is presented.
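
    Using only the figures quoted above (1-2 TB per year, roughly ten days of images on line), a back-of-the-envelope sizing of the short-term store looks like the sketch below; the arithmetic is illustrative, not from the paper.

    ```python
    # Rough sizing of the short-term PACS store using the figures in the abstract:
    # up to 2 TB of new image data per year, ~10 days kept on line (5 days of new
    # studies plus roughly one comparison study per new study). Illustrative arithmetic.

    annual_tb = 2.0                       # upper bound quoted for a medium-size hospital
    daily_gb = annual_tb * 1000 / 365     # ~5.5 GB of new studies per day

    new_days = 5
    comparison_factor = 2                 # one comparison study per new study, similar size
    short_term_gb = daily_gb * new_days * comparison_factor

    print(f"New data per day:        {daily_gb:.1f} GB")
    print(f"Short-term store needed: {short_term_gb:.0f} GB (order of magnitude)")
    ```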

  16. Storage Media for Microcomputers.

    ERIC Educational Resources Information Center

    Trautman, Rodes

    1983-01-01

    Reviews computer storage devices designed to provide additional memory for microcomputers--chips, floppy disks, hard disks, optical disks--and describes how secondary storage is used (file transfer, formatting, ingredients of incompatibility); disk/controller/software triplet; magnetic tape backup; storage volatility; disk emulator; and…

  17. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 2

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Application. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include the following: magnetic disk and tape technologies; optical disk and tape; software storage and file management systems; and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  18. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 3

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the National Space Science Data Center (NSSDC) Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

  19. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. Administrative..., statements, and other required documents. Electronic media means: (1) Electronic storage material on which...) and any removable/transportable digital memory medium, such as magnetic tape or disk, optical disk, or...

  20. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. Administrative..., statements, and other required documents. Electronic media means: (1) Electronic storage material on which...) and any removable/transportable digital memory medium, such as magnetic tape or disk, optical disk, or...

  1. DPM: Future Proof Storage

    NASA Astrophysics Data System (ADS)

    Alvarez, Alejandro; Beche, Alexandre; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo

    2012-12-01

    The Disk Pool Manager (DPM) is a lightweight solution for grid-enabled disk storage management. Operated at more than 240 sites, it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, high-performance data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols, namely the additional work required to provide multi-stream transfers for high-performance wide area access, support for third-party copies, credential delegation, and the required changes in the experiment and fabric management frameworks and tools. We describe new functionality that has been added to ease system administration, such as different filesystem weights and a faster disk drain, and new configuration and monitoring solutions based on the industry standards Puppet and Nagios. Finally, we explain some of the internal changes we had to make in the DPM architecture to better handle the additional load from the analysis use cases.

  2. A Simulation Model Of A Picture Archival And Communication System

    NASA Astrophysics Data System (ADS)

    D'Silva, Vijay; Perros, Harry; Stockbridge, Chris

    1988-06-01

    A PACS architecture was simulated to quantify its performance. The model consisted of reading stations, acquisition nodes, communication links, a database management system, and a storage system consisting of magnetic and optical disks. Two levels of storage were simulated: a high-speed magnetic disk system for short-term storage, and optical disk jukeboxes for long-term storage. The communications link was a single bus via which image data were requested and delivered. Real input data for the simulation model were obtained from surveys of radiology procedures (Bowman Gray School of Medicine). From these, the following inputs were calculated: the size of short-term storage necessary, the amount of long-term storage required, the frequency of access of each store, and the distribution of the number of films requested per diagnosis. The performance measures obtained were the mean retrieval time for an image, mean queue lengths, and the utilization of each device. Parametric analysis was done for the bus speed, the packet size for the communications link, the record size on the magnetic disk, the compression ratio, the influx of new images, DBMS time, and diagnosis think times. Plots give the optimum values of input speed and device performance that are sufficient to achieve subsecond image retrieval times.

  3. Evolution of Archival Storage (from Tape to Memory)

    NASA Technical Reports Server (NTRS)

    Ramapriyan, Hampapuram K.

    2015-01-01

    Over the last three decades, there has been a significant evolution in storage technologies supporting the archival of remote sensing data. This section provides a brief survey of how these technologies have evolved. Three main technologies are considered: tape, hard disk and solid state disk. Their historical evolution is traced, summarizing how reductions in cost have made it possible to store larger volumes of data on faster media. The cost per GB of media is only one of the considerations in determining the best approach to archival storage. Active archives generally require faster response to user requests for data than permanent archives. The archive costs have to take into account facilities and other capital costs, operations costs, software licenses, utilities costs, etc. For meeting requirements in any organization, typically a mix of technologies is needed.

  4. Records Management with Optical Disk Technology: Now Is the Time.

    ERIC Educational Resources Information Center

    Retherford, April; Williams, W. Wes

    1991-01-01

    The University of Kansas record management system using optical disk storage in a network environment and the selection process used to meet existing hardware and budgeting requirements are described. Viability of the technology, document legality, and difficulties encountered during implementation are discussed. (Author/MSE)

  5. Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Blackwell, Kim; Blasso, Len (Editor); Lipscomb, Ann (Editor)

    1991-01-01

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  6. Influence of technology on magnetic tape storage device characteristics

    NASA Technical Reports Server (NTRS)

    Gniewek, John J.; Vogel, Stephen M.

    1994-01-01

    Many data storage devices are available today that serve the diverse application requirements of the consumer, professional entertainment, and computer data processing industries. Storage technologies include semiconductors, several varieties of optical disk, optical tape, magnetic disk, and many varieties of magnetic tape. In some cases, devices are developed with specific characteristics to meet specification requirements. In other cases, an existing storage device is modified and adapted to a different application. For magnetic tape storage devices, examples of the former case are the 3480/3490 and QIC device types developed for the high-end and low-end segments of the data processing industry respectively, the VHS, Beta, and 8 mm formats developed for consumer video applications, and the D-1, D-2, D-3 formats developed for professional video applications. Examples of modified and adapted devices include 4 mm, 8 mm, 12.7 mm and 19 mm computer data storage devices derived from consumer and professional audio and video applications. With the conversion of the consumer and professional entertainment industries from analog to digital storage and signal processing, there have been increasing references to the 'convergence' of the computer data processing and entertainment industry technologies. There is, however, no evidence yet of convergence of data storage device types. There are several reasons for this: the diversity of application requirements results in varying degrees of importance for each of the tape storage characteristics.

  7. A Comprehensive Study on Energy Efficiency and Performance of Flash-based SSD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seon-Yeon; Kim, Youngjae; Urgaonkar, Bhuvan

    2011-01-01

    Use of flash memory as a storage medium is becoming popular in diverse computing environments. However, because of differences in interface, flash memory requires a hard-disk-emulation layer, called the FTL (flash translation layer). Although the FTL enables flash memory storage to replace conventional hard disks, it induces significant computational and space overhead. Despite the low power consumption of flash memory, this overhead leads to significant power consumption in the overall storage system. In this paper, we analyze the characteristics of flash-based storage devices from the viewpoint of power consumption and energy efficiency by using various methodologies. First, we utilize simulation to investigate the interior operation of flash-based storages. Subsequently, we measure the performance and energy efficiency of commodity flash-based SSDs, using microbenchmarks to identify their block-device level characteristics and macrobenchmarks to reveal their filesystem level characteristics.

  8. Maintaining cultures of wood-rotting fungi.

    Treesearch

    E.E. Nelson; H.A. Fay

    1985-01-01

    Phellinus weirii cultures were stored successfully for 10 years in small alder (Alnus rubra Bong.) disks at 2 °C. The six isolates tested appeared morphologically identical and after 10 years varied little in growth rate from those stored on malt agar slants. Long-term storage on alder disks reduces the time required for...

  9. RAMA: A file system for massively parallel computers

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.
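
    A design of this kind can avoid heavy inter-node synchronization by placing file blocks pseudo-randomly, for example by hashing, so that any node can locate data without consulting a central metadata server. The sketch below illustrates that general idea with an assumed hash and layout; it is not the exact RAMA mapping.

    ```python
    # Sketch of hash-based block placement: each (file, block) pair hashes to a node,
    # disk and "line", so placement is computable everywhere without coordination.
    # The hash function, sizes and layout are illustrative assumptions.

    import hashlib

    NUM_NODES = 64           # processor/disk nodes in the machine
    DISKS_PER_NODE = 2
    LINES_PER_DISK = 4096    # fixed-size regions holding data blocks

    def place(file_id: int, block_no: int):
        """Map a file block to (node, disk, line) by hashing its identity."""
        key = f"{file_id}:{block_no}".encode()
        h = int.from_bytes(hashlib.sha1(key).digest()[:8], "big")
        node = h % NUM_NODES
        disk = (h // NUM_NODES) % DISKS_PER_NODE
        line = (h // (NUM_NODES * DISKS_PER_NODE)) % LINES_PER_DISK
        return node, disk, line

    # Consecutive blocks of one file scatter across the machine, spreading I/O load.
    for b in range(4):
        print(b, place(file_id=42, block_no=b))
    ```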

  10. A study of mass data storage technology for rocket engine data

    NASA Technical Reports Server (NTRS)

    Ready, John F.; Benser, Earl T.; Fritz, Bernard S.; Nelson, Scott A.; Stauffer, Donald R.; Volna, William M.

    1990-01-01

    The results of a nine month study program on mass data storage technology for rocket engine (especially the Space Shuttle Main Engine) health monitoring and control are summarized. The program had the objective of recommending a candidate mass data storage technology development for rocket engine health monitoring and control and of formulating a project plan and specification for that technology development. The work was divided into three major technical tasks: (1) development of requirements; (2) survey of mass data storage technologies; and (3) definition of a project plan and specification for technology development. The first of these tasks reviewed current data storage technology and developed a prioritized set of requirements for the health monitoring and control applications. The second task included a survey of state-of-the-art and newly developing technologies and a matrix-based ranking of the technologies. It culminated in a recommendation of optical disk technology as the best candidate for technology development. The final task defined a proof-of-concept demonstration, including tasks required to develop, test, analyze, and demonstrate the technology advancement, plus an estimate of the level of effort required. The recommended demonstration emphasizes development of an optical disk system which incorporates an order-of-magnitude increase in writing speed above the current state of the art.

  11. Striped tertiary storage arrays

    NASA Technical Reports Server (NTRS)

    Drapeau, Ann L.

    1993-01-01

    Data striping is a technique for increasing the throughput and reducing the response time of large accesses to a storage system. In striped magnetic or optical disk arrays, a single file is striped or interleaved across several disks; in a striped tape system, files are interleaved across tape cartridges. Because a striped file can be accessed by several disk drives or tape recorders in parallel, the sustained bandwidth to the file is greater than in non-striped systems, where accesses to the file are restricted to a single device. It is argued that applying striping to tertiary storage systems will provide needed performance and reliability benefits. The performance benefits of striping for applications using large tertiary storage systems are discussed. Commonly available tape drives and libraries are introduced, and their performance limitations are discussed, with particular focus on the long latency of tape accesses. This section also describes an event-driven tertiary storage array simulator that is being used to understand the best ways of configuring these storage arrays. The reliability problems of magnetic tape devices are discussed, and plans for modeling the overall reliability of striped tertiary storage arrays to identify the amount of error correction required are described. Finally, work being done by other members of the Sequoia group to address access latency, to optimize tertiary storage arrays that perform mostly writes, and to apply compression is discussed.

  12. Electron trapping optical data storage system and applications

    NASA Technical Reports Server (NTRS)

    Brower, Daniel; Earman, Allen; Chaffin, M. H.

    1993-01-01

    A new technology developed at Optex Corporation outperforms all other existing data storage technologies. The Electron Trapping Optical Memory (ETOM) media stores 14 gigabytes of uncompressed data on a single, double-sided 130 mm disk with a data transfer rate of up to 120 megabits per second. The disk is removable, compact, lightweight, environmentally stable, and robust. Since the Write/Read/Erase (W/R/E) processes are carried out photonically, no heating of the recording media is required. Therefore, the storage media suffers no deleterious effects from repeated W/R/E cycling. This rewritable data storage technology has been developed for use as a basis for numerous data storage products. Industries that can benefit from the ETOM data storage technologies include: satellite data and information systems, broadcasting, video distribution, image processing and enhancement, and telecommunications. Products developed for these industries are well suited for the demanding store-and-forward buffer systems, data storage, and digital video systems needed for these applications.

  13. Evolution of magnetic disk subsystems

    NASA Astrophysics Data System (ADS)

    Kaneko, Satoru

    1994-06-01

    The higher recording density of magnetic disks realized today has brought larger storage capacity per unit and smaller form factors. If the required access performance per MB is constant, the performance of large subsystems has to be several times better. This article describes mainly the technology for improving the performance of magnetic disk subsystems and the prospects of their future evolution. Also considered are 'crosscall pathing', which makes the data transfer channel more effective; 'disk cache', which improves performance by coupling with solid-state memory technology; and 'RAID', which improves the availability and integrity of disk subsystems by organizing multiple disk drives in a subsystem. As a result, it is concluded that since the performance of the subsystem is dominated by that of the disk cache, maximization of the performance of the disk cache subsystems is very important.

  14. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
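
    The strided shared-file checkpoint pattern described above can be sketched as follows; chunk size, file paths and the serial stand-in for concurrent MPI ranks are illustrative assumptions.

    ```python
    # Sketch of the strided shared-file checkpoint pattern: each of N processes writes
    # its chunks at offsets rank*CHUNK, rank*CHUNK + N*CHUNK, ... into one shared file
    # on fast (solid-state) storage; a later step drains the file sequentially to slower
    # disk storage. Chunk size and paths are illustrative assumptions.

    import os, shutil

    N_PROCS = 4
    CHUNK = 1024          # bytes per chunk in the strided layout

    def write_strided(path, rank, chunks):
        """Write this rank's chunks into the shared checkpoint file in strided fashion."""
        with open(path, "r+b" if os.path.exists(path) else "w+b") as f:
            for i, data in enumerate(chunks):
                assert len(data) == CHUNK
                f.seek((i * N_PROCS + rank) * CHUNK)   # stride of N_PROCS chunks
                f.write(data)

    def migrate(src, dst):
        """Drain the checkpoint to disk sequentially (asynchronously in a real system)."""
        shutil.copyfile(src, dst)

    ckpt = "/tmp/checkpoint.shared"
    if os.path.exists(ckpt):
        os.remove(ckpt)
    for rank in range(N_PROCS):                        # stand-in for concurrent MPI ranks
        write_strided(ckpt, rank, [bytes([rank]) * CHUNK for _ in range(3)])
    migrate(ckpt, "/tmp/checkpoint.archived")
    print(os.path.getsize(ckpt), "bytes checkpointed")
    ```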

  15. Disk space and load time requirements for eye movement biometric databases

    NASA Astrophysics Data System (ADS)

    Kasprowski, Pawel; Harezlak, Katarzyna

    2016-06-01

    Biometric identification is a very popular area of interest nowadays. Problems with the so-called physiological methods, like fingerprint or iris recognition, have resulted in increased attention being paid to methods measuring behavioral patterns. Eye movement based biometric (EMB) identification is one of the interesting behavioral methods, and due to the intensive development of eye tracking devices it has become possible to define new methods for eye movement signal processing. Such a method should be supported by efficient storage used to collect eye movement data and provide it for further analysis. The aim of the research was to evaluate various setups enabling such a storage choice. Various aspects were taken into consideration, such as disk space usage and the time required for loading and saving the whole data set or chosen parts of it.

  16. Optical Digital Disk Storage: An Application for News Libraries.

    ERIC Educational Resources Information Center

    Crowley, Mary Jo

    1988-01-01

    Describes the technology, equipment, and procedures necessary for converting a historical newspaper clipping collection to optical disk storage. Alternative storage systems--microforms, laser scanners, optical storage--are also reviewed, and the advantages and disadvantages of optical storage are considered. (MES)

  17. SODR Memory Control Buffer Control ASIC

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.

    1994-01-01

    The Spacecraft Optical Disk Recorder (SODR) is a state-of-the-art mass storage system for future NASA missions requiring high transmission rates and a large-capacity storage system. This report covers the design and development of an SODR memory buffer control application-specific integrated circuit (ASIC). The memory buffer control ASIC has two primary functions: (1) buffering data to prevent loss of data during disk access times, and (2) converting data formats from a high-performance parallel interface format to a small computer systems interface format. Ten 144-pin, 50 MHz CMOS ASICs were designed, fabricated and tested to implement the memory buffer control function.

  18. Basics of Videodisc and Optical Disk Technology.

    ERIC Educational Resources Information Center

    Paris, Judith

    1983-01-01

    Outlines basic videodisc and optical disk technology describing both optical and capacitance videodisc technology. Optical disk technology is defined as a mass digital image and data storage device and briefly compared with other information storage media including magnetic tape and microforms. The future of videodisc and optical disk is…

  19. Libraries and Desktop Storage Options: Results of a Web-Based Survey.

    ERIC Educational Resources Information Center

    Hendricks, Arthur; Wang, Jian

    2002-01-01

    Reports the results of a Web-based survey that investigated what plans, if any, librarians have for dealing with the expected obsolescence of the floppy disk and still retain effective library service. Highlights include data storage options, including compact disks, zip disks, and networked storage products; and a copy of the Web survey.…

  20. Selected Conference Proceedings from the 1985 Videodisc, Optical Disk, and CD-ROM Conference and Exposition (Philadelphia, PA, December 10-12, 1985).

    ERIC Educational Resources Information Center

    Cerva, John R.; And Others

    1986-01-01

    Eight papers cover: optical storage technology; cross-cultural videodisc design; optical disk technology use at the Library of Congress Research Service and National Library of Medicine; Internal Revenue Service image storage and retrieval system; solving business problems with CD-ROM; a laser disk operating system; and an optical disk for…

  1. Ceph-based storage services for Run2 and beyond

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel C.; Lamanna, Massimo; Mascetti, Luca; Peters, Andreas J.; Rousseau, Hervé

    2015-12-01

    In 2013, CERN IT evaluated and then deployed a petabyte-scale Ceph cluster to support OpenStack use-cases in production. With more than a year of smooth operations behind us, we present our experience and tuning best practices. Beyond the cloud storage use-cases, we have been exploring Ceph-based services to satisfy the growing storage requirements during and after Run2. First, we have developed a Ceph back-end for CASTOR, allowing this service to deploy thin disk server nodes which act as gateways to Ceph; this feature marries the strong data archival and cataloging features of CASTOR with the resilient and high-performance Ceph subsystem for disk. Second, we have developed RADOSFS, a lightweight storage API which builds a POSIX-like filesystem on top of the Ceph object layer. When combined with Xrootd, RADOSFS can offer a scalable object interface compatible with our HEP data processing applications. Lastly, the same object layer is being used to build a scalable and inexpensive NFS service for several user communities.

  2. Fast disk array for image storage

    NASA Astrophysics Data System (ADS)

    Feng, Dan; Zhu, Zhichun; Jin, Hai; Zhang, Jiangling

    1997-01-01

    A fast disk array was designed for large-scale continuous image storage. It comprises a high-speed data path architecture and a scheme for striping and organizing data on the disk array. The high-speed data path, constructed from two dual-port RAMs and some control circuitry, is configured to transfer data between a host system and a plurality of disk drives. The bandwidth can exceed 100 MB/s if the data path is based on PCI (peripheral component interconnect). The organization of data stored on the disk array is similar to RAID 4. Data are striped across a plurality of disks, with each striping unit equal to a track, and I/O instructions are performed in parallel on the disk drives. An independent disk is used to store the parity information in the fast disk array architecture. By placing the parity generation circuit directly on the SCSI (or SCSI 2) bus, the parity information can be generated on the fly, with little effect on the parallel writing of data to the other disks. The fast disk array architecture designed in this paper can meet the demands of image storage.
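
    The RAID 4-style organization described above (track-sized striping units plus a dedicated parity disk, with parity generated on the fly) can be sketched as follows; unit size and disk count are illustrative assumptions.

    ```python
    # Sketch of RAID-4-style striping: data is split into track-sized units across the
    # data disks, and the parity disk stores the XOR of the units in each stripe.
    # Unit size and disk count are illustrative assumptions.

    from functools import reduce
    from operator import xor

    DATA_DISKS = 4
    UNIT = 16   # bytes per striping unit (a "track" in the paper; tiny here for clarity)

    def make_stripes(data: bytes):
        """Split data into stripes of DATA_DISKS units plus one parity unit."""
        data += b"\x00" * (-len(data) % (DATA_DISKS * UNIT))        # pad to a full stripe
        stripes = []
        for s in range(0, len(data), DATA_DISKS * UNIT):
            units = [data[s + i * UNIT: s + (i + 1) * UNIT] for i in range(DATA_DISKS)]
            parity = bytes(reduce(xor, col) for col in zip(*units))  # parity disk content
            stripes.append((units, parity))
        return stripes

    stripes = make_stripes(b"image scan line data " * 4)
    units, parity = stripes[0]
    # If any single data disk fails, its unit is rebuilt by XOR-ing parity with the rest.
    rebuilt = bytes(reduce(xor, col) for col in zip(parity, *units[1:]))
    print(rebuilt == units[0])   # True
    ```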

  3. The impact of image storage organization on the effectiveness of PACS.

    PubMed

    Hindel, R

    1990-11-01

    A picture archiving and communication system (PACS) requires efficient handling of large amounts of data. Mass storage systems are cost effective but slow, while very fast systems, like frame buffers and parallel transfer disks, are expensive. The image traffic can be divided into inbound traffic generated by diagnostic modalities and outbound traffic into workstations. At the contact points with medical professionals, the responses must be fast. Archiving, on the other hand, can employ slower but less expensive storage systems, provided that the primary activities are not impeded. This article illustrates a segmentation architecture meeting these requirements based on a clearly defined PACS concept.

  4. Data storage systems technology for the Space Station era

    NASA Technical Reports Server (NTRS)

    Dalton, John; Mccaleb, Fred; Sos, John; Chesney, James; Howell, David

    1987-01-01

    The paper presents the results of an internal NASA study to determine if economically feasible data storage solutions are likely to be available to support the ground data transport segment of the Space Station mission. An internal NASA effort to prototype a portion of the required ground data processing system is outlined. It is concluded that the requirements for all ground data storage functions can be met with commercial disk and tape drives assuming conservative technology improvements and that, to meet Space Station data rates with commercial technology, the data will have to be distributed over multiple devices operating in parallel and in a sustained maximum throughput mode.

  5. Database recovery using redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1992-01-01

    Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper a method is proposed for using redundant disk arrays to support rapid-recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, it is shown that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.

  6. Recovery issues in databases using redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.

  7. Performance evaluation of redundant disk array support for transaction recovery

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.

    1991-01-01

    Redundant disk arrays provide a way of achieving rapid recovery from media failures with a relatively low storage cost for large scale data systems requiring high availability. Here, we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
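
    One way to read the twin-page parity scheme used in these designs is sketched below: each parity page has two physical slots, updates go to the inactive twin, and commit flips which twin is current, so neither commit nor abort needs extra parity I/O. This is an illustrative interpretation under stated assumptions, not the papers' exact mechanism.

    ```python
    # Hedged sketch of a twin-page parity scheme: two physical copies per parity page,
    # staged updates go to the inactive twin, commit is a pointer flip, abort is a no-op.
    # Illustrative interpretation only.

    class TwinParityPage:
        def __init__(self, parity: bytes):
            self.slots = [parity, parity]   # two physical copies of the parity page
            self.current = 0                # index of the committed ("current") twin

        def write_pending(self, new_parity: bytes):
            """Stage updated parity in the inactive twin; committed parity is untouched."""
            self.slots[1 - self.current] = new_parity

        def commit(self):
            self.current = 1 - self.current   # cheap pointer flip at commit time

        def abort(self):
            pass                              # inactive twin is simply ignored

    page = TwinParityPage(b"\x00" * 8)
    page.write_pending(b"\x5a" * 8)
    page.commit()
    print(page.slots[page.current])           # committed parity after the flip
    ```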

  8. Data storage technology comparisons

    NASA Technical Reports Server (NTRS)

    Katti, Romney R.

    1990-01-01

    The role of data storage and data storage technology is an integral, though conceptually often underestimated, portion of data processing technology. Data storage is important in the mass storage mode in which generated data is buffered for later use. But data storage technology is also important in the data flow mode when data are manipulated and hence required to flow between databases, datasets and processors. This latter mode is commonly associated with memory hierarchies which support computation. VLSI devices can reasonably be defined as electronic circuit devices such as channel and control electronics as well as highly integrated, solid-state devices that are fabricated using thin film deposition technology. VLSI devices in both capacities play an important role in data storage technology. In addition to random access memories (RAM), read-only memories (ROM), and other silicon-based variations such as PROM's, EPROM's, and EEPROM's, integrated devices find their way into a variety of memory technologies which offer significant performance advantages. These memory technologies include magnetic tape, magnetic disk, magneto-optic disk, and vertical Bloch line memory. In this paper, some comparison between selected technologies will be made to demonstrate why more than one memory technology exists today, based for example on access time and storage density at the active bit and system levels.

  9. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
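
    The reliability argument can be made concrete with the standard independent-failure approximation for mean time to data loss (MTTDL) in a parity-protected array; the numbers below are assumptions chosen only to show the orders of magnitude involved.

    ```python
    # Back-of-the-envelope reliability comparison in the spirit of the RAID analysis:
    # mean time to data loss for an unprotected array vs. single-parity groups.
    # Standard independent-failure approximations; the figures are illustrative only.

    disk_mttf_h = 200_000      # assumed per-disk MTTF in hours
    n_disks = 100              # data disks in the array
    group = 10                 # disks per parity group (9 data + 1 parity, say)
    mttr_h = 24                # assumed repair/rebuild time in hours

    mttf_unprotected = disk_mttf_h / n_disks
    # Data loss in a parity-protected group requires a second failure during the rebuild
    # window of the first, giving the familiar approximation:
    mttdl_parity = disk_mttf_h**2 / (n_disks * (group - 1) * mttr_h)

    print(f"Unprotected array MTTF: {mttf_unprotected:,.0f} h (~{mttf_unprotected/8760:.2f} years)")
    print(f"Parity-protected MTTDL: {mttdl_parity:,.0f} h (~{mttdl_parity/8760:,.0f} years)")
    ```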

  10. Improvement in HPC performance through HIPPI RAID storage

    NASA Technical Reports Server (NTRS)

    Homan, Blake

    1993-01-01

    In 1986, RAID (redundant array of inexpensive (or independent) disks) technology was introduced as a viable solution to the I/O bottleneck. A number of different RAID levels were defined in 1987 by the Computer Science Division (EECS) University of California, Berkeley, each with specific advantages and disadvantages. With multiple RAID options available, taking advantage of RAID technology required matching particular RAID levels with specific applications. It was not possible to use one RAID device to address all applications. Maximum Strategy's Gen 4 Storage Server addresses this issue with a new capability called programmable RAID level partitioning. This capability enables users to have multiple RAID levels coexist on the same disks, thereby providing the versatility necessary for multiple concurrent applications.

  11. Optical Disk for Digital Storage and Retrieval Systems.

    ERIC Educational Resources Information Center

    Rose, Denis A.

    1983-01-01

    The availability of low-cost digital optical disks will revolutionize storage and retrieval systems over the next decade. Three major factors will effect this change: availability of disks and controllers at low cost and in plentiful supply; availability of low-cost and better output means for system users; and more flexible, less expensive communication…

  12. Performances of multiprocessor multidisk architectures for continuous media storage

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes, through bottleneck performance evaluation and simulation, the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located close to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.

  13. Medical image digital archive: a comparison of storage technologies

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy; Hutchings, Matt

    1998-07-01

    A cost-effective, high-capacity digital archive system is one of the remaining key factors that will enable a radiology department to eliminate film as an archive medium. The ever increasing amount of digital image data is creating the need for huge archive systems that can reliably store and retrieve millions of images and hold from a few terabytes of data to possibly hundreds of terabytes. Selecting the right archive solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, conformance to open standards, archive availability and reliability, security, cost, achievable benefits and cost savings, investment protection, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. New technologies are discussed, such as DVD and high-performance tape. Price and performance comparisons are made at different archive capacities, and the effect of file size on random and pre-fetch retrieval time is analyzed. The concept of automated migration of images from high-performance RAID disk storage devices to high-capacity Nearline storage devices is introduced as a viable way to minimize overall storage costs for an archive.

  14. A high-speed, large-capacity, 'jukebox' optical disk system

    NASA Technical Reports Server (NTRS)

    Ammon, G. J.; Calabria, J. A.; Thomas, D. T.

    1985-01-01

    Two optical disk 'jukebox' mass storage systems which provide access to any data in a store of 10 to the 13th bits (1250 Gbytes) within six seconds have been developed. The optical disk jukebox system is divided into two units: a hardware/software controller and a disk drive. The controller provides flexibility and adaptability through a ROM-based microcode-driven data processor and a ROM-based software-driven control processor. The cartridge storage module contains 125 optical disks housed in protective cartridges. Attention is given to a conceptual view of the disk drive unit, the NASA optical disk system, the NASA database management system configuration, the NASA optical disk system interface, and an open systems interconnect reference model.

  15. The performance of disk arrays in shared-memory database machines

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
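
    Data temperature is commonly computed as sustainable I/O operations per second divided by storage capacity in gigabytes; the exact formulation used in the paper may differ, and the drive figures below are invented purely to illustrate why an array of many small drives can sustain a higher temperature than one large drive:

    ```python
    # Hedged sketch of the "data temperature" metric: accesses per second per GB.

    def data_temperature(io_per_second, capacity_gb):
        """Accesses per second per gigabyte that a configuration can sustain."""
        return io_per_second / capacity_gb

    def array_temperature(num_disks, ios_per_disk, gb_per_disk):
        """Sustainable temperature of an array of identical drives."""
        return data_temperature(num_disks * ios_per_disk, num_disks * gb_per_disk)

    # Illustrative (made-up) numbers: many small drives vs. one large drive.
    small_array = array_temperature(num_disks=24, ios_per_disk=60, gb_per_disk=1.0)
    large_drive = array_temperature(num_disks=1, ios_per_disk=80, gb_per_disk=20.0)
    print(f"small-drive array temperature : {small_array:.1f} accesses/s/GB")
    print(f"single large drive temperature: {large_drive:.1f} accesses/s/GB")
    ```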

  16. Evaluating the effect of online data compression on the disk cache of a mass storage system

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Yesha, Yelena

    1994-01-01

    A trace driven simulation of the disk cache of a mass storage system was used to evaluate the effect of an online compression algorithm on various performance measures. Traces from the system at NASA's Center for Computational Sciences were used to run the simulation, and disk cache hit ratios and the number of files and bytes migrating to tertiary storage were measured. The measurements were performed for both an LRU and a size-based migration algorithm. In addition to showing the effect of online data compression on the disk cache performance measures, the simulation provided insight into the characteristics of the interactive references, suggesting that hint-based prefetching algorithms are the only alternative for any future improvements to the disk cache hit ratio.
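
    A minimal sketch of the trace-driven approach, assuming an LRU-managed disk cache and a uniform compression ratio (the NASA traces, file sizes, and migration policies of the paper are not reproduced here), shows how online compression can raise the cache hit ratio by letting more files fit:

    ```python
    # Toy trace-driven simulation of a disk cache with LRU replacement.
    from collections import OrderedDict

    def simulate_lru_cache(trace, cache_bytes, compression_ratio=1.0):
        """Return the hit ratio for a list of (file_id, size_bytes) references."""
        cache = OrderedDict()        # file_id -> stored (possibly compressed) size
        used = 0
        hits = 0
        for file_id, size in trace:
            stored = size / compression_ratio
            if file_id in cache:
                hits += 1
                cache.move_to_end(file_id)               # refresh LRU position
                continue
            while used + stored > cache_bytes and cache:
                _, evicted = cache.popitem(last=False)   # evict least recently used
                used -= evicted
            cache[file_id] = stored
            used += stored
        return hits / len(trace)

    trace = [("a", 4e6), ("b", 8e6), ("a", 4e6), ("c", 6e6), ("a", 4e6), ("b", 8e6)]
    print("no compression :", simulate_lru_cache(trace, cache_bytes=10e6))
    print("2:1 compression:", simulate_lru_cache(trace, cache_bytes=10e6, compression_ratio=2.0))
    ```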

  17. Building an organic block storage service at CERN with Ceph

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel; Wiebalck, Arne

    2014-06-01

    Emerging storage requirements, such as the need for block storage for both OpenStack VMs and file services like AFS and NFS, have motivated the development of a generic backend storage service for CERN IT. The goals for such a service include (a) vendor neutrality, (b) horizontal scalability with commodity hardware, (c) fault tolerance at the disk, host, and network levels, and (d) support for geo-replication. Ceph is an attractive option due to its native block device layer RBD which is built upon its scalable, reliable, and performant object storage system, RADOS. It can be considered an "organic" storage solution because of its ability to balance and heal itself while living on an ever-changing set of heterogeneous disk servers. This work will present the outcome of a petabyte-scale test deployment of Ceph by CERN IT. We will first present the architecture and configuration of our cluster, including a summary of best practices learned from the community and discovered internally. Next the results of various functionality and performance tests will be shown: the cluster has been used as a backend block storage system for AFS and NFS servers as well as a large OpenStack cluster at CERN. Finally, we will discuss the next steps and future possibilities for Ceph at CERN.

  18. A case for automated tape in clinical imaging.

    PubMed

    Bookman, G; Baune, D

    1998-08-01

    Electronic archiving of radiology images over many years will require many terabytes of storage with a need for rapid retrieval of these images. As more large PACS installations are implemented, a data crisis occurs. Storing this large amount of data using the traditional method of optical jukeboxes or online disk alone becomes unworkable: the amount of floor space, the number of optical jukeboxes, and the off-line shelf storage required to store the images become unmanageable. With recent advances in tape and tape drives, the use of tape for long-term storage of PACS data has become the preferred alternative. A PACS system consisting of a centrally managed combination of RAID disk, software and, at the heart of the system, tape presents a solution that for the first time solves the problems of multi-modality high-end PACS, non-DICOM image, electronic medical record, and ADT data storage. This paper will examine the installation of the University of Utah Department of Radiology PACS system and the integration of an automated tape archive. The tape archive is also capable of storing data other than traditional PACS data; the implementation of an automated data archive to serve the many other needs of a large hospital will also be discussed, including the integration of a filmless cardiology department and the backup/archival needs of a traditional MIS department. The need for high bandwidth to tape with a large RAID cache will be examined, along with how, given an interface to a RIS pre-fetch engine, tape can be a superior solution to optical platters or other archival solutions. The data management software will be discussed in detail. The performance and cost of RAID disk cache and automated tape will be compared to a solution that includes optical.

  19. Optical Disks Compete with Videotape and Magnetic Storage Media: Part I.

    ERIC Educational Resources Information Center

    Urrows, Henry; Urrows, Elizabeth

    1988-01-01

    Describes the latest technology in videotape cassette systems and other magnetic storage devices and their possible effects on optical data disks. Highlights include Honeywell's Very Large Data Store (VLDS); Exabyte's tape cartridge storage system; standards for tape drives; and Masstor System's videotape cartridge system. (LRW)

  20. A composite-flywheel burst-containment study

    NASA Astrophysics Data System (ADS)

    Sapowith, A. D.; Handy, W. E.

    1982-01-01

    A key component impacting total flywheel energy storage system weight is the containment structure. This report addresses the factors that shape this structure and define its design criteria. In addition, containment weight estimates are made for the several composite flywheel designs of interest so that judgements can be made as to the relative weights of their containment structures. The requirements set down for this program were that all containment weight estimates be based on a 1 kWh burst. It should be noted that typical flywheel requirements for regenerative braking of small automobiles call for deliverable energies of 0.25 kWh, which leads to expected maximum burst energies of 0.5 kWh. The flywheels studied are those considered most likely to be carried further for operational design. These are: the pseudo-isotropic disk flywheel, sometimes called the alpha ply; the SMC molded disk; either disk with a carbon ring; the subcircular rim with cruciform hub; and Avco's bi-directional circular weave disk.

  1. Recording and reading of information on optical disks

    NASA Astrophysics Data System (ADS)

    Bouwhuis, G.; Braat, J. J. M.

    In the storage of information related to video programs in a spiral track on a disk, difficulties arise because the bandwidth for video is much greater than for audio signals. An attractive solution was found in optical storage. The optical noncontact method is free of wear and allows for fast random access. Initial problems regarding a suitable light source could be overcome with the aid of appropriate laser devices. The basic concepts of optical storage on disks are treated insofar as they are relevant for the optical arrangement. A general description is provided of a video, a digital audio, and a data storage system. Scanning spot microscopy for recording and reading of optical disks is discussed, giving attention to recording of the signal, the readout of optical disks, the readout of digitally encoded signals, and cross talk. Tracking systems are also considered, taking into account the generation of error signals for radial tracking and the generation of focus error signals.

  2. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PC) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PC's are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can relieve a RISC workstation of I/O-related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval. The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree software will keep track of all files in the system, will automatically migrate the lesser-used files to archive media, and will stage the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost the system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).
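
    A hedged sketch of the hierarchical migration idea described above (this is not the UniTree implementation; the tier names, catalog fields, and capacity target are made up) moves the least recently used files from the magnetic-disk tier to the archive tier once the disk exceeds its capacity target:

    ```python
    # Toy hierarchical storage manager: disk tier -> archive tier by recency.
    import time

    def migrate(catalog, disk_capacity_bytes):
        """Move least recently used files off magnetic disk until it fits under capacity."""
        on_disk = [f for f in catalog if f["tier"] == "magnetic_disk"]
        used = sum(f["size"] for f in on_disk)
        for f in sorted(on_disk, key=lambda e: e["last_access"]):   # oldest first
            if used <= disk_capacity_bytes:
                break
            f["tier"] = "optical_jukebox"   # archive tier; tape backup handled separately
            used -= f["size"]
        return catalog

    catalog = [
        {"name": "results.dat", "size": 6e9, "last_access": time.time() - 90 * 86400, "tier": "magnetic_disk"},
        {"name": "active.db",   "size": 2e9, "last_access": time.time() - 3600,       "tier": "magnetic_disk"},
    ]
    for f in migrate(catalog, disk_capacity_bytes=4e9):
        print(f["name"], "->", f["tier"])
    ```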

  3. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. ANSI stands for... required documents. Electronic media means: (1) Electronic storage media including memory devices in computers (hard drives) and any removable/transportable digital memory medium, such as magnetic tape or disk...

  4. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. ANSI stands for... required documents. Electronic media means: (1) Electronic storage media including memory devices in computers (hard drives) and any removable/transportable digital memory medium, such as magnetic tape or disk...

  5. 45 CFR 160.103 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., the following definitions apply to this subchapter: Act means the Social Security Act. ANSI stands for... required documents. Electronic media means: (1) Electronic storage media including memory devices in computers (hard drives) and any removable/transportable digital memory medium, such as magnetic tape or disk...

  6. Case Study Analysis of United States Navy Financial Field Activity

    DTIC Science & Technology

    1991-06-01

    and must be continued in order to keep providing quality base administration. The replacement of ten-inch magnetic disks with modern data storage media... Equipment Priority: 3; Total Funding: Total Required 847K, Total Funded 791K, Shortfall 56K. Narrative Description of Requirements: This requirement is... This deficiency would

  7. Saying goodbye to optical storage technology.

    PubMed

    McLendon, Kelly; Babbitt, Cliff

    2002-08-01

    The days of using optical-disk-based mass storage devices for high-volume applications like health care document imaging are coming to an end. The price/performance curve for redundant magnetic disks, known as RAID, is now more favorable than that of optical disks. All types of application systems, across many sectors of the marketplace, are using these newer magnetic technologies, including insurance, banking, aerospace, as well as health care. The main components of these new storage technologies are RAID and SAN. SAN refers to storage area network, a complex mechanism of switches and connections that allows multiple systems to store huge amounts of data securely and safely.

  8. Mass storage at NSA

    NASA Technical Reports Server (NTRS)

    Shields, Michael F.

    1993-01-01

    The need to manage large amounts of data on robotically controlled devices has been critical to the mission of this Agency for many years. In many respects this Agency has helped pioneer, with their industry counterparts, the development of a number of products long before these systems became commercially available. Numerous attempts have been made to field both robotically controlled tape and optical disk technology and systems to satisfy our tertiary storage needs. Custom-developed products were architected, designed, and developed without vendor partners over the past two decades to field workable systems to handle our ever-increasing storage requirements. Many of the attendees of this symposium are familiar with some of the older products, such as: the Braegen Automated Tape Libraries (ATL's), the IBM 3850, the Ampex TeraStore, just to name a few. In addition, we embarked on an in-house development of a shared disk input/output support processor to manage our ever-increasing tape storage needs. For all intents and purposes, this system was, by current definitions, a file server that used CDC Cyber computers as the control processors. It served us well and was just recently removed from production usage.

  9. Spacecraft optical disk recorder memory buffer control

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.

    1993-01-01

    This paper discusses the research completed under the NASA-ASEE summer faculty fellowship program. The project involves development of an Application Specific Integrated Circuit (ASIC) to be used as a Memory Buffer Controller (MBC) in the Spacecraft Optical Disk Recorder (SODR) system. The SODR system has demanding capacity and data rate specifications requiring specialized electronics to meet processing demands. The system is being designed to support gigabit transfer rates with terabit storage capability. The complete SODR system is designed to exceed the capability of all existing mass storage systems today. The ASIC development for SODR consists of developing a 144-pin CMOS device to perform format conversion and data buffering. The final simulations of the MBC were completed during this summer's NASA-ASEE fellowship, along with design preparations for fabrication to be performed by an ASIC manufacturer.

  10. Software Engineering Principles 3-14 August 1981,

    DTIC Science & Technology

    1981-08-01

    small disk used (but not that of the extended mass storage or large disk option); it is very fast (about 1/5 the speed of the primary memory, where the disk was 1/10000 for access); and... programmed and tested - must be correct and fast. D. Choice of right synchronization operations: Design problem 1. Several mentioned in literature 9-22

  11. Communication: Practical and rigorous reduction of the many-electron quantum mechanical Coulomb problem to O(N^(2/3)) storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, Mark R., E-mail: mark.pederson@science.doe.gov

    2015-04-14

    It is tacitly accepted that, for practical basis sets consisting of N functions, solution of the two-electron Coulomb problem in quantum mechanics requires storage of O(N^4) integrals in the small N limit. For localized functions, in the large N limit, or for planewaves, due to closure, the storage can be reduced to O(N^2) integrals. Here, it is shown that the storage can be further reduced to O(N^(2/3)) for separable basis functions. A practical algorithm that uses standard one-dimensional Gaussian-quadrature sums is demonstrated. The resulting algorithm allows for the simultaneous storage, or fast reconstruction, of any two-electron Coulomb integral required for a many-electron calculation on processors with limited memory and disk space. For example, for calculations involving a basis of 9171 planewaves, the memory required to effectively store all Coulomb integrals decreases from 2.8 Gbytes to less than 2.4 Mbytes.

  12. Communication: practical and rigorous reduction of the many-electron quantum mechanical Coulomb problem to O(N^(2/3)) storage.

    PubMed

    Pederson, Mark R

    2015-04-14

    It is tacitly accepted that, for practical basis sets consisting of N functions, solution of the two-electron Coulomb problem in quantum mechanics requires storage of O(N^4) integrals in the small N limit. For localized functions, in the large N limit, or for planewaves, due to closure, the storage can be reduced to O(N^2) integrals. Here, it is shown that the storage can be further reduced to O(N^(2/3)) for separable basis functions. A practical algorithm that uses standard one-dimensional Gaussian-quadrature sums is demonstrated. The resulting algorithm allows for the simultaneous storage, or fast reconstruction, of any two-electron Coulomb integral required for a many-electron calculation on processors with limited memory and disk space. For example, for calculations involving a basis of 9171 planewaves, the memory required to effectively store all Coulomb integrals decreases from 2.8 Gbytes to less than 2.4 Mbytes.
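
    One way to see the quoted reduction for the planewave example, assuming the standard Gaussian integral representation of the Coulomb kernel and a roughly cubic reciprocal-space grid with about N^(1/3) planewaves per dimension (the paper's actual organization of the quadrature may differ in detail):

    ```latex
    % Sketch: quadrature over a factorized Gaussian kernel (hedged reconstruction).
    \[
    \frac{1}{|\mathbf r-\mathbf r'|}
       = \frac{2}{\sqrt{\pi}}\int_0^{\infty} e^{-t^{2}|\mathbf r-\mathbf r'|^{2}}\,dt
       \approx \frac{2}{\sqrt{\pi}}\sum_{q=1}^{N_q} w_q\,
         e^{-t_q^{2}(x-x')^{2}}\,e^{-t_q^{2}(y-y')^{2}}\,e^{-t_q^{2}(z-z')^{2}},
    \]
    so at each quadrature point the kernel separates over $x$, $y$, and $z$.
    For planewaves $\phi_{\mathbf G}(\mathbf r)=e^{i\mathbf G\cdot\mathbf r}$ the
    one-dimensional factors depend only on the momentum transfers
    $\Delta G_x=G_{j,x}-G_{i,x}$ and $\Delta G'_x=G_{l,x}-G_{k,x}$, so each
    dimension needs only $O\big((N^{1/3})^{2}\big)$ stored values per quadrature
    point, giving total storage of order $3\,N_q\,N^{2/3}=O(N^{2/3})$ instead of
    the $O(N^{4})$ four-index integrals.
    ```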

  13. Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis

    NASA Astrophysics Data System (ADS)

    Yang, W.; Hanushevsky, A. B.; Mount, R. P.; Atlas Collaboration

    2014-06-01

    User data analysis in high energy physics presents a challenge to spinning-disk based storage systems. The analysis is data-intensive, yet reads are small, sparse, and cover a large volume of data files. It is also unpredictable due to users' response to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file-level cache in front of the spinning disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses several days of data access records to make caching decisions. It can also use information from other sources such as a work-flow management system. We evaluate the performance of the system both in terms of caching and its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.

  14. RAID Disk Arrays for High Bandwidth Applications

    NASA Technical Reports Server (NTRS)

    Moren, Bill

    1996-01-01

    High bandwidth applications require large amounts of data transferred to/from storage devices at extremely high data rates. Further, these applications often are 'real time' in which access to the storage device must take place on the schedule of the data source, not the storage. A good example is a satellite downlink - the volume of data is quite large and the data rates quite high (dozens of MB/sec). Further, a telemetry downlink must take place while the satellite is overhead. A storage technology which is ideally suited to these types of applications is redundant arrays of independent disks (RAID). RAID storage technology, while offering differing methodologies for a variety of applications, supports the performance and redundancy required in real-time applications. Of the various RAID levels, RAID-3 is the only one which provides high data transfer rates under all operating conditions, including after a drive failure.
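
    The following sketch illustrates the RAID-3 idea referred to above: data is striped at a fine granularity across the data drives with a dedicated parity drive, and the contents of any single failed drive can be rebuilt by XOR-ing the survivors with the parity, which is why transfer rates survive a drive failure. Real controllers stripe sectors rather than single bytes; the byte-level version below is only for clarity.

    ```python
    # Illustrative RAID-3-style striping with a dedicated XOR parity drive.

    def stripe_write(data: bytes, num_data_drives: int):
        """Split data round-robin across data drives and compute the parity drive."""
        drives = [bytearray() for _ in range(num_data_drives)]
        for i, byte in enumerate(data):
            drives[i % num_data_drives].append(byte)
        width = max(len(d) for d in drives)
        for d in drives:
            d.extend(b"\x00" * (width - len(d)))      # pad short columns
        parity = bytearray(width)
        for d in drives:
            for i, byte in enumerate(d):
                parity[i] ^= byte
        return drives, parity

    drives, parity = stripe_write(b"telemetry frame 0001", num_data_drives=4)

    # Degraded mode: rebuild a failed drive from the survivors plus parity.
    failed = 2
    rebuilt = bytearray(parity)
    for idx, d in enumerate(drives):
        if idx != failed:
            for i, byte in enumerate(d):
                rebuilt[i] ^= byte
    assert bytes(rebuilt) == bytes(drives[failed])
    ```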

  15. Evolving Requirements for Magnetic Tape Data Storage Systems

    NASA Technical Reports Server (NTRS)

    Gniewek, John J.

    1996-01-01

    Magnetic tape data storage systems have evolved in an environment where the major applications have been back-up/restore, disaster recovery, and long-term archive. Coincident with the rapidly improving price-performance of disk storage systems, the prime requirements for tape storage systems have remained: (1) low cost per MB, and (2) a data rate balanced to the remaining system components. Little emphasis was given to configuring the technology components to optimize retrieval of the stored data. Emerging new applications, such as network attached high speed memory (HSM) and digital libraries, place additional emphasis and requirements on the retrieval of the stored data. It is therefore desirable to consider the system as a STorage And Retrieval System (STARS), defined by both storage and retrieval requirements. It is possible to provide comparative performance analysis of different STARS by incorporating parameters related to (1) device characteristics and (2) application characteristics in combination with queuing theory analysis. Results of these analyses are presented here in the form of response time as a function of system configuration for two different types of devices and for a variety of applications.
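
    As a simplified illustration of combining device and application characteristics with queuing theory (the paper's model is more detailed; the service times and request rates below are invented), an M/M/1 approximation already shows how response time diverges as a slow device approaches saturation:

    ```python
    # Simplified M/M/1 response-time sketch for storage-and-retrieval workloads.

    def mm1_response_time(arrival_rate, service_time):
        """Mean response time of an M/M/1 queue; utilization must stay below 1."""
        utilization = arrival_rate * service_time
        if utilization >= 1.0:
            raise ValueError("queue is unstable (utilization >= 1)")
        return service_time / (1.0 - utilization)

    # Hypothetical devices: a tape library (mount + position) vs. a disk farm.
    tape_service_s = 30.0      # mount, position, and read one file
    disk_service_s = 0.5       # read the same file from disk
    for req_per_hour in (10, 60, 110):
        lam = req_per_hour / 3600.0
        print(f"{req_per_hour:4d} req/h  tape: {mm1_response_time(lam, tape_service_s):6.1f} s"
              f"   disk: {mm1_response_time(lam, disk_service_s):5.2f} s")
    ```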

  16. Holographic optical disc

    NASA Astrophysics Data System (ADS)

    Zhou, Gan; An, Xin; Pu, Allen; Psaltis, Demetri; Mok, Fai H.

    1999-11-01

    The holographic disc is a high capacity, disk-based data storage device that can provide the performance for next-generation mass data storage needs. With a projected capacity approaching 1 terabit on a single 12 cm platter, the holographic disc has the potential to become highly efficient storage hardware for data warehousing applications. The high readout rate of the holographic disc makes it especially suitable for generating multiple, high bandwidth data streams such as those required for network server computers. Multimedia applications such as interactive video and HDTV can also potentially benefit from the high capacity and fast data access of holographic memory.

  17. 5 CFR 293.107 - Special safeguards for automated records.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... for automated records. (a) In addition to following the security requirements of § 293.106 of this... security safeguards for data about individuals in automated records, including input and output documents, reports, punched cards, magnetic tapes, disks, and on-line computer storage. The safeguards must be in...

  18. 5 CFR 293.107 - Special safeguards for automated records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... for automated records. (a) In addition to following the security requirements of § 293.106 of this... security safeguards for data about individuals in automated records, including input and output documents, reports, punched cards, magnetic tapes, disks, and on-line computer storage. The safeguards must be in...

  19. 5 CFR 293.107 - Special safeguards for automated records.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... for automated records. (a) In addition to following the security requirements of § 293.106 of this... security safeguards for data about individuals in automated records, including input and output documents, reports, punched cards, magnetic tapes, disks, and on-line computer storage. The safeguards must be in...

  20. 5 CFR 293.107 - Special safeguards for automated records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... for automated records. (a) In addition to following the security requirements of § 293.106 of this... security safeguards for data about individuals in automated records, including input and output documents, reports, punched cards, magnetic tapes, disks, and on-line computer storage. The safeguards must be in...

  1. DPM — efficient storage in diverse environments

    NASA Astrophysics Data System (ADS)

    Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio

    2014-06-01

    Recent developments, including low power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as they need and optimizing their computing centers' energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server, the head node, largely reducing its hard disk requirements. Since version 1.8.6, DPM is released in EPEL and Fedora, simplifying distribution and maintenance, but also supporting the ARM architecture besides i386 and x86_64, allowing it to run on the smallest low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the possibility to scale horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage, and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power-efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup both in terms of namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the lowest scale, using Raspberry Pis with only 700 MHz single cores and 100 Mbps network connections, over conventional multi-core servers, to typical virtual machine instances in cloud settings. We evaluate the combinations of different name server setups, for example load-balanced clusters, with different storage setups, from a classic local configuration to private and public clouds.

  2. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  3. Kodak Optical Disk and Microfilm Technologies Carve Niches in Specific Applications.

    ERIC Educational Resources Information Center

    Gallenberger, John; Batterton, John

    1989-01-01

    Describes the Eastman Kodak Company's microfilm and optical disk technologies and their applications. Topics discussed include WORM technology; retrieval needs and cost effective archival storage needs; engineering applications; jukeboxes; optical storage options; systems for use with mainframes and microcomputers; and possible future…

  4. Jefferson Lab Mass Storage and File Replication Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Bird; Ying Chen; Bryan Hess

    Jefferson Lab has implemented a scalable, distributed, high performance mass storage system - JASMine. The system is entirely implemented in Java, provides access to robotic tape storage and includes disk cache and stage manager components. The disk manager subsystem may be used independently to manage stand-alone disk pools. The system includes a scheduler to provide policy-based access to the storage systems. Security is provided by pluggable authentication modules and is implemented at the network socket level. The tape and disk cache systems have well defined interfaces in order to provide integration with grid-based services. The system is in production and being used to archive 1 TB per day from the experiments, and currently moves over 2 TB per day total. This paper will describe the architecture of JASMine; discuss the rationale for building the system, and present a transparent 3rd party file replication service to move data to collaborating institutes using JASMine, XML, and servlet technology interfacing to grid-based file transfer mechanisms.

  5. Experiences From NASA/Langley's DMSS Project

    NASA Technical Reports Server (NTRS)

    1996-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at the NASA Langley Research Center (LaRC) has placed such a system into production use. This paper will present the experiences, both good and bad, we have had with this system since putting it into production usage. The system is comprised of: 1) National Storage Laboratory (NSL)/UniTree 2.1, 2) IBM 9570 HIPPI attached disk arrays (both RAID 3 and RAID 5), 3) IBM RS6000 server, 4) HIPPI/IPI3 third party transfers between the disk array systems and the supercomputer clients, a CRAY Y-MP and a CRAY 2, 5) a "warm spare" file server, 6) transition software to convert from CRAY's Data Migration Facility (DMF) based system to DMSS, 7) an NSC PS32 HIPPI switch, and 8) a STK 4490 robotic library accessed from the IBM RS6000 block mux interface. This paper will cover: the performance of the DMSS in the following areas: file transfer rates, migration and recall, and file manipulation (listing, deleting, etc.); the appropriateness of a workstation class of file server for NSL/UniTree with LaRC's present storage requirements in mind; the role of the third party transfers between the supercomputers and the DMSS disk array systems in DMSS; a detailed comparison (both in performance and functionality) between the DMF and DMSS systems; LaRC's enhancements to the NSL/UniTree system administration environment; the mechanism for DMSS to provide file server redundancy; the statistics on the availability of DMSS; and the design and experiences with the locally developed transparent transition software which allowed us to make over 1.5 million DMF files available to NSL/UniTree with minimal system outage.

  6. Attaching IBM-compatible 3380 disks to Cray X-MP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Midlock, J.L.

    1989-01-01

    A method of attaching IBM-compatible 3380 disks directly to a Cray X-MP via the XIOP with a BMC is described. The IBM 3380 disks appear to the UNICOS operating system as DD-29 disks with UNICOS file systems. IBM 3380 disks provide cheap, reliable, large-capacity disk storage. Combined with a small number of high-speed Cray disks, the IBM disks provide for the bulk of the storage for small files and infrequently used files. Cray Research designed the BMC and its supporting software in the XIOP to allow IBM tapes and other devices to be attached to the X-MP. No hardware changes were necessary, and we added less than 2000 lines of code to the XIOP to accomplish this project. This system has been in operation for over eight months. Future enhancements such as the use of a cache controller and attachment to a Y-MP are also described.

  7. Proof of cipher text ownership based on convergence encryption

    NASA Astrophysics Data System (ADS)

    Zhong, Weiwei; Liu, Zhusong

    2017-08-01

    Cloud storage systems save disk space and bandwidth through deduplication technology, but the use of this technology has attracted targeted security attacks: an attacker can obtain ownership of an original file by presenting only its hash value to deceive the server. In order to solve these security problems, and to address the differing security requirements of files in a cloud storage system, an efficient, information-theoretically secure proof-of-ownership scheme is proposed. The scheme protects the data through convergent encryption, uses an improved block-level proof-of-ownership protocol, and can carry out block-level client-side deduplication to achieve efficient and secure cloud storage deduplication.
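
    A toy sketch of the convergent-encryption idea underlying such schemes (not the scheme proposed in the paper; the XOR keystream below is a stand-in for a real cipher and is not secure for production use): because the key is derived from the block content, identical blocks encrypt to identical ciphertext and can be deduplicated server-side.

    ```python
    # Toy convergent encryption: key = hash(content), so equal blocks dedupe.
    import hashlib

    def convergent_key(block: bytes) -> bytes:
        return hashlib.sha256(block).digest()

    def keystream_encrypt(block: bytes, key: bytes) -> bytes:
        """Deterministic XOR keystream derived from the key (illustration only)."""
        out = bytearray()
        counter = 0
        while len(out) < len(block):
            out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
            counter += 1
        return bytes(b ^ k for b, k in zip(block, out))

    block_a = b"the same 4 KB block stored by two different users"
    block_b = b"the same 4 KB block stored by two different users"
    ct_a = keystream_encrypt(block_a, convergent_key(block_a))
    ct_b = keystream_encrypt(block_b, convergent_key(block_b))
    assert ct_a == ct_b        # identical ciphertext -> server-side dedup works
    # A proof-of-ownership check must go beyond comparing hashes: the server
    # should challenge the client for the contents of randomly chosen blocks.
    ```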

  8. Archival storage solutions for PACS

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy

    1997-05-01

    While there are many, one of the inhibitors to the widespread diffusion of PACS systems has been the lack of robust, cost-effective digital archive storage solutions. Moreover, an automated Nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive mediums today. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on storage system throughput will be analyzed. The concept of automated migration of images from high performance, high cost storage devices to high capacity, low cost storage devices will be introduced as a viable way to minimize overall storage costs for an archive. The concept of access density will also be introduced and applied to the selection of the most cost effective archive solution.

  9. A Layered Solution for Supercomputing Storage

    ScienceCinema

    Grider, Gary

    2018-06-13

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  10. Method and apparatus for bistable optical information storage for erasable optical disks

    DOEpatents

    Land, Cecil E.; McKinney, Ira D.

    1990-01-01

    A method and an optical device for bistable storage of optical information, together with reading and erasure of the optical information, using a photoactivated shift in a field dependent phase transition between a metastable or a bias-stabilized ferroelectric (FE) phase and a stable antiferroelectric (AFE) phase in a lead lanthanum zirconate titanate (PLZT). An optical disk contains the PLZT. Writing and erasing of optical information can be accomplished by a light beam normal to the disk. Reading of optical information can be accomplished by a light beam at an incidence angle of 15 to 60 degrees to the normal of the disk.

  11. Method and apparatus for bistable optical information storage for erasable optical disks

    DOEpatents

    Land, C.E.; McKinney, I.D.

    1988-05-31

    A method and an optical device for bistable storage of optical information, together with reading and erasure of the optical information, using a photoactivated shift in a field dependent phase transition between a metastable or a bias-stabilized ferroelectric (FE) phase and a stable antiferroelectric (AFE) phase in a lead lanthanum zirconate titanate (PLZT). An optical disk contains the PLZT. Writing and erasing of optical information can be accomplished by a light beam normal to the disk. Reading of optical information can be accomplished by a light beam at an incidence angle of 15 to 60 degrees to the normal of the disk. 10 figs.

  12. Disk Memories: What You Should Know before You Buy Them.

    ERIC Educational Resources Information Center

    Bursky, Dave

    1981-01-01

    Explains the basic features of floppy disk and hard disk computer storage systems and the purchasing decisions which must be made, particularly in relation to certain popular microcomputers. A disk vendors directory is included. Journal availability: Hayden Publishing Company, 50 Essex Street, Rochelle Park, NJ 07662. (SJL)

  13. Laser beam modeling in optical storage systems

    NASA Technical Reports Server (NTRS)

    Treptau, J. P.; Milster, T. D.; Flagello, D. G.

    1991-01-01

    A computer model has been developed that simulates light propagating through an optical data storage system. A model of a laser beam that originates at a laser diode, propagates through an optical system, interacts with an optical disk, reflects back from the optical disk into the system, and propagates to data and servo detectors is discussed.

  14. KEYNOTE ADDRESS: The role of standards in the emerging optical digital data disk storage systems market

    NASA Astrophysics Data System (ADS)

    Bainbridge, Ross C.

    1984-09-01

    The Institute for Computer Sciences and Technology at the National Bureau of Standards is pleased to cooperate with the International Society for Optical Engineering and to join with the other distinguished organizations in cosponsoring this conference on applications of optical digital data disk storage systems.

  15. User and group storage management the CMS CERN T2 centre

    NASA Astrophysics Data System (ADS)

    Cerminara, G.; Franzoni, G.; Pfeiffer, A.

    2015-12-01

    A wide range of detector commissioning, calibration and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Relying on the functionalities of the EOS disk-only storage technology, the optimal exploitation of the CMS user/group resources has required the introduction of policies for data access management, data protection, cleanup campaigns based on access pattern, and long term tape archival. The resource management has been organised around the definition of working groups and the delegation of each group's composition to an identified responsible person. In this paper we illustrate the user/group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.

  16. Beating the tyranny of scale with a private cloud configured for Big Data

    NASA Astrophysics Data System (ADS)

    Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag

    2015-04-01

    The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks - and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, ranging from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment - ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively - even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high performance disk environment is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.

  17. Standards on the permanence of recording materials

    NASA Astrophysics Data System (ADS)

    Adelstein, Peter Z.

    1996-02-01

    The permanence of recording materials is dependent upon many factors, and these differ for photographic materials, magnetic tape and optical disks. Photographic permanence is affected by the (1) stability of the material, (2) the photographic processing and (3) the storage conditions. American National Standards on the material and the processing have been published for different types of film and standard test methods have been established for color film. The third feature of photographic permanence is the storage requirements and these have been established for photographic film, prints and plates. Standardization on the permanence of electronic recording materials is more complicated. As with photographic materials, stability is dependent upon (1) the material itself and (2) the storage environment. In addition, retention of the necessary (3) hardware and (4) software is also a prerequisite. American National Standards activity in these areas has been underway for the past six years. A test method for the material which determines the life expectancy of CD-ROMs has been standardized. The problems of determining the expected life of magnetic tape have been more formidable but the critical physical properties have been determined. A specification for the storage environment of magnetic tape has been finalized and one on the storage of optical disks is being worked on. Critical but unsolved problems are the obsolescence of both the hardware and the software necessary to read digital images.

  18. Standards on the permanence of recording materials

    NASA Astrophysics Data System (ADS)

    Adelstein, Peter Z.

    1996-01-01

    The permanence of recording materials is dependent upon many factors, and these differ for photographic materials, magnetic tape and optical disks. Photographic permanence is affected by the (1) stability of the material, (2) the photographic processing, and (3) the storage conditions. American National Standards on the material and the processing have been published for different types of film and standard test methods have been established for color film. The third feature of photographic permanence is the storage requirements and these have been established for photographic film, prints, and plates. Standardization on the permanence of electronic recording materials is more complicated. As with photographic materials, stability is dependent upon (1) the material itself and (2) the storage environment. In addition, retention of the necessary (3) hardware and (4) software is also a prerequisite. American National Standards activity in these areas has been underway for the past six years. A test method for the material which determines the life expectancy of CD-ROMs has been standardized. The problems of determining the expected life of magnetic tape have been more formidable but the critical physical properties have been determined. A specification for the storage environment of magnetic tapes has been finalized and one on the storage of optical disks is being worked on. Critical but unsolved problems are the obsolescence of both the hardware and the software necessary to read digital images.

  19. Optical Disks.

    ERIC Educational Resources Information Center

    Gale, John C.; And Others

    1985-01-01

    This four-article section focuses on information storage capacity of the optical disk covering the information workstation (uses microcomputer, optical disk, compact disc to provide reference information, information content, work product support); use of laser videodisc technology for dissemination of agricultural information; encoding databases…

  20. An ASIC memory buffer controller for a high speed disk system

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.; Campbell, Steve

    1993-01-01

    The need for large capacity, high speed mass memory storage devices has become increasingly evident at NASA during the past decade. High performance mass storage systems are crucial to present and future NASA systems. Spaceborne data storage system requirements have grown in response to the increasing amounts of data generated and processed by orbiting scientific experiments. Predictions indicate increases in the volume of data by orders of magnitude during the next decade. Current predictions are for storage capacities on the order of terabits (Tb), with data rates exceeding one gigabit per second (Gbps). As part of the design effort for a state-of-the-art mass storage system, NASA Langley has designed a 144-pin CMOS ASIC to support high speed data transfers. This paper discusses the system architecture, ASIC design and some of the lessons learned in the development process.

  1. An Effective Cache Algorithm for Heterogeneous Storage Systems

    PubMed Central

    Li, Yong; Feng, Dan

    2013-01-01

    Modern storage environments are commonly composed of heterogeneous storage devices. However, traditional cache algorithms exhibit performance degradation in heterogeneous storage systems because they were not designed to work with diverse performance characteristics. In this paper, we present a new cache algorithm called HCM for heterogeneous storage systems. The HCM algorithm partitions the cache among the disks and adopts an effective scheme to balance the work across the disks. Furthermore, it applies benefit-cost analysis to choose the best allocation of cache blocks to improve performance. Conducting simulations with a variety of traces and a wide range of cache sizes, our experiments show that HCM significantly outperforms the existing state-of-the-art storage-aware cache algorithms. PMID:24453890
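
    A hedged sketch of the general benefit-cost idea (this is not the authors' HCM algorithm; the device names and numbers are invented): give each disk a share of the cache proportional to the estimated benefit of caching its blocks, here taken as miss rate times miss penalty.

    ```python
    # Toy benefit-cost cache partitioning across heterogeneous disks.

    def partition_cache(total_blocks, disks):
        """disks: list of dicts with 'name', 'miss_rate' (misses/s), 'miss_penalty_ms'."""
        benefit = {d["name"]: d["miss_rate"] * d["miss_penalty_ms"] for d in disks}
        total_benefit = sum(benefit.values())
        return {name: round(total_blocks * b / total_benefit) for name, b in benefit.items()}

    disks = [
        {"name": "fast_ssd",   "miss_rate": 200.0, "miss_penalty_ms": 0.2},
        {"name": "sata_disk",  "miss_rate": 120.0, "miss_penalty_ms": 8.0},
        {"name": "remote_nas", "miss_rate": 40.0,  "miss_penalty_ms": 25.0},
    ]
    # Slower devices earn a larger share of the shared cache.
    print(partition_cache(total_blocks=100_000, disks=disks))
    ```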

  2. The medium is NOT the message or Indefinitely long-term file storage at Leeds University

    NASA Technical Reports Server (NTRS)

    Holdsworth, David

    1996-01-01

    Approximately 3 years ago we implemented an archive file storage system which embodies experiences gained over more than 25 years of using and writing file storage systems. It is the third in-house system that we have written, and all three systems have been adopted by other institutions. This paper discusses the requirements for long-term data storage in a university environment, and describes how our present system is designed to meet these requirements indefinitely. Particular emphasis is laid on experiences from past systems, and their influence on current system design. We also look at the influence of the IEEE-MSS standard. We currently have the system operating in five UK universities. The system operates in a multi-server environment, and is currently operational with UNIX (SunOS4, Solaris2, SGI-IRIX, HP-UX), NetWare3 and NetWare4. PCs logged on to NetWare can also archive and recover files that live on their hard disks.

  3. Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from the disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% disk-space savings when compared with the existing techniques, while the isosurface extraction time is nearly optimal.
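
    A simplified sketch of the underlying idea (the real Temporal Hierarchical Index Tree adaptively chooses the time spans; here a single fixed span and toy data are used): coalesce each cell's extreme values over a span of time steps and test the isovalue against the coalesced range.

    ```python
    # Toy temporal index: coalesced (min, max) per cell over a span of time steps.

    def build_span_index(cell_values_over_time):
        """cell_values_over_time: {cell_id: [[value per vertex] per time step]}."""
        index = {}
        for cell_id, per_step in cell_values_over_time.items():
            lo = min(min(step) for step in per_step)
            hi = max(max(step) for step in per_step)
            index[cell_id] = (lo, hi)          # extreme values coalesced over time
        return index

    def candidate_cells(index, isovalue):
        return [c for c, (lo, hi) in index.items() if lo <= isovalue <= hi]

    data = {
        "cell_0": [[0.1, 0.4], [0.2, 0.5]],
        "cell_1": [[0.6, 0.9], [0.7, 1.0]],
    }
    index = build_span_index(data)
    print(candidate_cells(index, isovalue=0.45))   # only cells that can contain it
    ```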

  4. Attention Novices: Friendly Intro to Shiny Disks.

    ERIC Educational Resources Information Center

    Bardes, D'Ellen

    1986-01-01

    Provides an overview of how optical storage technologies--videodisk, Write-Once disks, and CD-ROM/CD-I disks--are built into and controlled via DEC, Apple, Atari, Amiga, and IBM PC compatible microcomputers. Several available products are noted and a list of producers is included. (EM)

  5. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
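
    A back-of-the-envelope sketch in the spirit of the tutorial's reliability argument, using the standard single-parity-group approximation MTTDL ~ MTTF^2 / (G*(G-1)*MTTR); the disk MTTF and repair time below are illustrative, not figures from the article.

    ```python
    # Illustrative reliability arithmetic for an unprotected array vs. N+1 parity.

    def mttf_array_no_redundancy(mttf_disk_h, num_disks):
        """Any single disk failure loses data, so MTTF scales down with disk count."""
        return mttf_disk_h / num_disks

    def mttdl_n_plus_1(mttf_disk_h, group_size, mttr_h):
        """Mean time to data loss for one parity group of `group_size` disks."""
        return mttf_disk_h ** 2 / (group_size * (group_size - 1) * mttr_h)

    mttf_disk = 150_000.0            # hours per disk (illustrative)
    hours_per_year = 24 * 365
    print("84 disks, no redundancy :",
          round(mttf_array_no_redundancy(mttf_disk, 84) / hours_per_year, 2), "years")
    print("84 + 1 parity, 24 h MTTR:",
          round(mttdl_n_plus_1(mttf_disk, 85, 24.0) / hours_per_year), "years")
    ```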

  6. Design Alternatives to Improve Access Time Performance of Disk Drives Under DOS and UNIX

    NASA Astrophysics Data System (ADS)

    Hospodor, Andy

    For the past 25 years, improvements in CPU performance have overshadowed improvements in the access time performance of disk drives. CPU performance has been slanted towards greater instruction execution rates, measured in millions of instructions per second (MIPS). However, the slant for performance of disk storage has been towards capacity and corresponding increased storage densities. The IBM PC, introduced in 1982, processed only a fraction of a MIP. Follow-on CPUs, such as the 80486 and 80586, sported 5-10 MIPS by 1992. Single user PCs and workstations, with one CPU and one disk drive, became the dominant application, as implied by their production volumes. However, disk drives did not enjoy a corresponding improvement in access time performance, although the potential still exists. The time to access a disk drive improves (decreases) in two ways: by altering the mechanical properties of the drive or by adding cache to the drive. This paper explores the improvement to access time performance of disk drives using cache, prefetch, faster rotation rates, and faster seek acceleration.
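
    The worked arithmetic below makes the tradeoff concrete (the drive parameters are typical-of-the-era guesses, not figures from the dissertation): faster rotation and shorter seeks reduce the mechanical delay, while a drive-level cache hit avoids both terms entirely.

    ```python
    # Rough disk access-time arithmetic: seek + rotational latency + transfer.

    def average_access_ms(avg_seek_ms, rpm, transfer_mb_s, request_kb):
        rotational_latency_ms = 0.5 * 60_000.0 / rpm          # half a revolution
        transfer_ms = request_kb / 1024.0 / transfer_mb_s * 1000.0
        return avg_seek_ms + rotational_latency_ms + transfer_ms

    # Illustrative comparison: a circa-1990 drive vs. a faster-spinning successor.
    print("3600 RPM, 16 ms seek :", round(average_access_ms(16.0, 3600, 1.5, 4), 2), "ms")
    print("7200 RPM, 10 ms seek :", round(average_access_ms(10.0, 7200, 3.0, 4), 2), "ms")
    # A cache hit served from drive RAM skips the seek and rotational terms,
    # which is why caching and prefetch can matter as much as faster mechanics.
    ```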

  7. A media maniac's guide to removable mass storage media

    NASA Technical Reports Server (NTRS)

    Kempster, Linda S.

    1996-01-01

    This paper addresses, at a high level, the many individual technologies available today in the removable storage arena, including removable magnetic tapes, magnetic floppies, optical disks, and optical tape. The tape recorders presented below cover longitudinal, serpentine, longitudinal-serpentine, and helical-scan technologies. The magnetic floppies discussed are used for personal electronic in-box applications. Optical disks still fill the role of dense long-term storage. The media capacities quoted are for native data. In some cases, 2 KB ASCII pages or 50 KB document images will be referenced.

  8. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  9. Laser Optical Disk: The Coming Revolution in On-Line Storage.

    ERIC Educational Resources Information Center

    Fujitani, Larry

    1984-01-01

    Review of similarities and differences between magnetic-based and optical disk drives includes a discussion of the electronics necessary for their operation; describes benefits, possible applications, and future trends in development of laser-based drives; and lists manufacturers of laser optical disk drives. (MBR)

  10. Set processing in a network environment. [data bases and magnetic disks and tapes

    NASA Technical Reports Server (NTRS)

    Hardgrave, W. T.

    1975-01-01

    A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.

  11. NASA Langley Research Center's distributed mass storage system

    NASA Technical Reports Server (NTRS)

    Pao, Juliet Z.; Humes, D. Creig

    1993-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existent mass storage system to the DMSS.

  12. High-Speed Data Recorder for Space, Geodesy, and Other High-Speed Recording Applications

    NASA Technical Reports Server (NTRS)

    Taveniku, Mikael

    2013-01-01

    A high-speed data recorder and replay equipment has been developed for reliable high-data-rate recording to disk media. It solves problems with slow or faulty disks, multiple disk insertions, high-altitude operation, reliable performance using COTS hardware, and long-term maintenance and upgrade path challenges. The current generation data recorders used within the VLBI community are aging, special-purpose machines that are both slow (do not meet today's requirements) and are very expensive to maintain and operate. Furthermore, they are not easily upgraded to take advantage of commercial technology development, and are not scalable to multiple 10s of Gbit/s data rates required by new applications. The innovation provides a software-defined, high-speed data recorder that is scalable with technology advances in the commercial space. It maximally utilizes current technologies without being locked to a particular hardware platform. The innovation also provides a cost-effective way of streaming large amounts of data from sensors to disk, enabling many applications to store raw sensor data and perform post and signal processing offline. This recording system will be applicable to many applications needing real-world, high-speed data collection, including electronic warfare, software-defined radar, signal history storage of multispectral sensors, development of autonomous vehicles, and more.

  13. A study of application of remote sensing to river forecasting. Volume 2: Detailed technical report, NASA-IBM streamflow forecast model user's guide

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Model is described along with data preparation, determining model parameters, initializing and optimizing parameters (calibration), selecting control options, and interpreting results. Some background information is included, and appendices contain a dictionary of variables, a source program listing, and flow charts. The model was operated on an IBM System/360 Model 44, using a model 2250 keyboard/graphics terminal for interactive operation. The model can be set up and operated in a batch processing mode on any System/360 or 370 that has the memory capacity. The model requires 210K bytes of core storage, and the optimization program, OPSET (which was used previously but not in this study), requires 240K bytes. The data band for one small watershed requires approximately 32 tracks of disk storage.

  14. Operational characteristics of energy storage high temperature superconducting flywheels considering time dependent processes

    NASA Astrophysics Data System (ADS)

    Vajda, Istvan; Kohari, Zalan; Porjesz, Tamas; Benko, Laszlo; Meerovich, V.; Sokolovsky; Gawalek, W.

    2002-08-01

    The technical and economic feasibility of short-term energy storage flywheels with high temperature superconducting (HTS) bearings is widely investigated. It is essential to reduce the ac losses caused by magnetic field variations in HTS bulk disks/rings (levitators) used in the magnetic bearings of flywheels. For the HTS bearings, the calculation and measurement of the magnetic field distribution were performed. Effects such as eccentricity and tilting were measured. The time dependency of the levitation force following a jumpwise movement of the permanent magnet was measured. The results were used to set up an engineering design algorithm for energy storage HTS flywheels. This algorithm was applied to an experimental HTS flywheel model with a disk-type permanent magnet motor/generator unit designed and constructed by the authors. A conceptual design of the disk-type motor/generator with radial flux is shown.

  15. Advanced optical disk storage technology

    NASA Technical Reports Server (NTRS)

    Haritatos, Fred N.

    1996-01-01

    There is a growing need within the Air Force for more and better data storage solutions. Rome Laboratory, the Air Force's Center of Excellence for C3I technology, has sponsored the development of a number of operational prototypes to deal with this growing problem. This paper will briefly summarize the various prototype developments with examples of full mil-spec and best commercial practice. These prototypes have successfully operated under severe space, airborne and tactical field environments. From a technical perspective these prototypes have included rewritable optical media ranging from a 5.25-inch diameter format up to the 14-inch diameter disk format. Implementations include an airborne sensor recorder, a deployable optical jukebox and a parallel array of optical disk drives. They include stand-alone peripheral devices to centralized, hierarchical storage management systems for distributed data processing applications.

  16. Ability of Shiga Toxin-Producing Escherichia coli and Salmonella spp. To Survive in a Desiccation Model System and in Dry Foods

    PubMed Central

    Hiramatsu, Reiji; Matsumoto, Masakado; Sakae, Kenji; Miyazaki, Yutaka

    2005-01-01

    In order to determine desiccation tolerances of bacterial strains, the survival of 58 diarrheagenic strains (18 salmonellae, 35 Shiga toxin-producing Escherichia coli [STEC], and 5 shigellae) and of 15 nonpathogenic E. coli strains was determined after drying at 35°C for 24 h in paper disks. At an inoculum level of 10^7 CFU/disk, most of the salmonellae (14/18) and the STEC strains (31/35) survived with a population of 10^3 to 10^4 CFU/disk, whereas all of the shigellae (5/5) and the majority of the nonpathogenic E. coli strains (9/15) did not survive (the population was decreased to less than the detection limit of 10^2 CFU/disk). After 22 to 24 months of subsequent storage at 4°C, all of the selected salmonellae (4/4) and most of the selected STEC strains (12/15) survived, keeping the original populations (10^3 to 10^4 CFU/disk). In contrast to the case for storage at 4°C, all of 15 selected strains (5 strains each of Salmonella spp., STEC O157, and STEC O26) died after 35 to 70 days of storage at 25°C and 35°C. The survival rates of all of these 15 strains in paper disks after the 24 h of drying were substantially increased (10 to 79 times) by the presence of sucrose (12% to 36%). All of these 15 desiccated strains in paper disks survived after exposure to 70°C for 5 h. The populations of these 15 strains inoculated in dried foods containing sucrose and/or fat (e.g., chocolate) were 100 times higher than those in the dried paper disks after drying for 24 h at 25°C. PMID:16269694

  17. LVFS: A Big Data File Storage Bridge for the HPC Community

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.

    2015-12-01

    Merging Big Data capabilities into High Performance Computing architectures starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and to accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed for the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and a near doubling of storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevents any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts HDF data, produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite to derive CO2 surface fluxes, into GeoTIFF for visualization.
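
    The LVFS plugin interface itself is not documented in this abstract, so the following is only a generic sketch of what a pluggable storage-backend layer with on-read format conversion could look like; every class, method, and converter name here is hypothetical.

      # Hypothetical sketch of a pluggable storage layer with on-read format
      # conversion, in the spirit of the plugin architecture described above.
      from abc import ABC, abstractmethod

      class StorageBackend(ABC):
          @abstractmethod
          def read(self, path: str) -> bytes: ...

      class LocalDiskBackend(StorageBackend):
          def read(self, path: str) -> bytes:
              with open(path, "rb") as f:
                  return f.read()

      class VirtualFileSystem:
          """Routes paths to backends and applies optional converters on read."""
          def __init__(self):
              self.backends = {}      # mount prefix -> StorageBackend
              self.converters = {}    # file extension -> conversion function
          def mount(self, prefix, backend):
              self.backends[prefix] = backend
          def register_converter(self, ext, func):
              self.converters[ext] = func          # e.g. ".hdf" -> a hdf_to_geotiff function
          def read(self, path):
              prefix = max((p for p in self.backends if path.startswith(p)), key=len)
              data = self.backends[prefix].read(path)
              ext = "." + path.rsplit(".", 1)[-1]
              return self.converters.get(ext, lambda d: d)(data)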

  18. A Layered Solution for Supercomputing Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grider, Gary

    To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.

  19. Status of international optical disk standards

    NASA Astrophysics Data System (ADS)

    Chen, Di; Neumann, John

    1999-11-01

    Optical technology for data storage offers media removability with unsurpassed reliability. As the media are removable, data interchange between the media and drives from different sources is a major concern. The optical recording community realized, at the inception of this new storage technology development, that international standards for all optical recording disks/cartridges must be established to ensure the healthy growth of this industry and for the benefit of the users. Many standards organizations took up the challenge, and numerous international standards were established which are now being used world-wide. This paper provides a brief summary of the current status of the international optical disk standards.

  20. Influence of Sous Vide and water immersion processing on polyacetylene content and instrumental color of parsnip (Pastinaca sativa) disks.

    PubMed

    Rawson, Ashish; Koidis, Anastasios; Rai, Dilip K; Tuohy, Maria; Brunton, Nigel

    2010-07-14

    The effect of blanching (95 +/- 3 degrees C) followed by sous vide (SV) processing (90 degrees C for 10 min) on levels of two polyacetylenes in parsnip disks immediately after processing and during chill storage was studied and compared with the effect of water immersion (WI) processing (70 degrees C for 2 min.). Blanching had the greatest influence on the retention of polyacetylenes in sous vide processed parsnip disks resulting in significant decreases of 24.5 and 24% of falcarinol (1) and falcarindiol (2) respectively (p < 0.05). Subsequent SV processing did not result in additional significant losses in polyacetylenes compared to blanched samples. Subsequent anaerobic storage of SV processed samples resulted in a significant decrease in 1 levels (p < 0.05) although no change in 2 levels was observed (p > 0.05). 1 levels in WI processed samples were significantly higher than in SV samples (p

  1. Optical storage media data integrity studies

    NASA Technical Reports Server (NTRS)

    Podio, Fernando L.

    1994-01-01

    Optical disk-based information systems are being used in private industry and many Federal Government agencies for on-line and long-term storage of large quantities of data. The storage devices that are part of these systems are designed with powerful, but not unlimited, media error correction capacities. The integrity of data stored on optical disks does not depend only on the life expectancy specifications for the medium. Different factors, including handling and storage conditions, may result in an increase in the size and frequency of medium errors. Monitoring the potential data degradation is crucial, especially for long-term applications. Efforts are being made by the Association for Information and Image Management Technical Committee C21, Storage Devices and Applications, to specify methods for monitoring and reporting to the user medium errors detected by the storage device while writing, reading or verifying the data stored in that medium. The Computer Systems Laboratory (CSL) of the National Institute of Standards and Technology (NIST) has a leadership role in the development of these standard techniques. In addition, CSL is researching other data integrity issues, including the investigation of error-resilient compression algorithms. NIST has conducted care and handling experiments on optical disk media with the objective of identifying possible causes of degradation. NIST work in data integrity and related standards activities is described.

  2. Redundant Disk Arrays in Transaction Processing Systems. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine Nagib

    1994-01-01

    We address various issues dealing with the use of disk arrays in transaction processing environments. We look at the problem of transaction undo recovery and propose a scheme for using the redundancy in disk arrays to support undo recovery. The scheme uses twin-page storage for the parity information in the array. It speeds up transaction processing by eliminating the need for undo logging for most transactions. The use of redundant arrays of distributed disks to provide recovery from disasters as well as temporary site failures and disk crashes is also studied. We investigate the problem of assigning the sites of a distributed storage system to redundant arrays in such a way that the cost of maintaining the redundant parity information is minimized. Heuristic algorithms for solving the site partitioning problem are proposed and their performance is evaluated using simulation. We also develop a heuristic for which an upper bound on the deviation from the optimal solution can be established.
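
    One way to read the twin-page idea summarized above (an illustrative sketch, not the dissertation's algorithm): each parity page has two on-disk slots, and updates always overwrite the older slot, so the previous parity version remains available for transaction undo.

      # Illustrative twin-page parity: writes alternate between two parity slots,
      # so the previous parity version survives every in-place update.
      class TwinPageParity:
          def __init__(self):
              self.slots = [None, None]   # two on-disk parity pages (simulated)
              self.current = 0            # index of the most recent valid parity

          def update(self, new_parity):
              target = 1 - self.current   # overwrite the *older* slot
              self.slots[target] = new_parity
              self.current = target       # commit: flip which slot is current

          def latest(self):
              return self.slots[self.current]

          def previous(self):
              return self.slots[1 - self.current]   # usable for undo recovery

      tp = TwinPageParity()
      tp.update(b"P1"); tp.update(b"P2")
      assert tp.latest() == b"P2" and tp.previous() == b"P1"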

  3. Flexible matrix composite laminated disk/ring flywheel

    NASA Technical Reports Server (NTRS)

    Gupta, B. P.; Hannibal, A. J.

    1984-01-01

    An energy storage flywheel consisting of a quasi-isotropic composite disk overwrapped by a circumferentially wound ring made of carbon fiber and an elastomeric matrix is proposed. Through analysis it was demonstrated that, with an elastomeric matrix to relieve the radial stresses, a laminated disk/ring flywheel can be designed to store at least 80.3 Wh/kg, or about 68% more than previous disk/ring designs. At the same time, the simple construction is preserved.

  4. SAM-FS: LSC's New Solaris-Based Storage Management Product

    NASA Technical Reports Server (NTRS)

    Angell, Kent

    1996-01-01

    SAM-FS is a full featured hierarchical storage management (HSM) device that operates as a file system on Solaris-based machines. The SAM-FS file system provides the user with all of the standard UNIX system utilities and calls, and adds some new commands, i.e. archive, release, stage, sls, sfind, and a family of maintenance commands. The system also offers enhancements such as high performance virtual disk read and write, control of the disk through an extent array, and the ability to dynamically allocate block size. SAM-FS provides 'archive sets' which are groupings of data to be copied to secondary storage. In practice, as soon as a file is written to disk, SAM-FS will make copies onto secondary media. SAM-FS is a scalable storage management system. The system can manage millions of files per system, though this is limited today by the speed of UNIX and its utilities. In the future, a new search algorithm will be implemented that will remove logical and performance restrictions on the number of files managed.

  5. Tick, Tock, Tick, Tock...

    NASA Astrophysics Data System (ADS)

    Evans, N. W.; Molloy, M.

    2014-07-01

    The Gaia dataset will require a huge leap forward in terms of modelling of the Milky Way. Two problems are highlighted here. First, models of the Galactic Bar remain primitive compared to those of the Galactic Disk and Stellar Halo. Although Schwarzschild and N-body methods are useful, the future belongs to Made-to-Measure (M2M) models, which have significant advantages in terms of storage and flexibility. Second, the Milky Way potential will need much better representation than hitherto. Most models still use very simple building blocks (Miyamoto-Nagai disks or Hernquist bulges), and these will not be fit for purpose in the Gaia Era. Expansions in terms of basis functions offer the possibility of incorporating cosmological information as priors, as well as much greater adaptability.

  6. Efficient proof of ownership for cloud storage systems

    NASA Astrophysics Data System (ADS)

    Zhong, Weiwei; Liu, Zhusong

    2017-08-01

    Cloud storage systems use deduplication technology to save disk space and bandwidth, but this technique has attracted targeted security attacks: an attacker can deceive the server into granting ownership of a file merely by presenting the hash value of the original file. To address this security problem and the differing security requirements of files in a cloud storage system, an efficient and information-theoretically secure proof-of-ownership scheme supporting file rating is proposed. File rating is implemented with the K-means algorithm, and random-seed techniques together with a pre-calculation method are used to make the proof of ownership both safe and efficient. The resulting scheme is information-theoretically secure and achieves better performance in the most sensitive areas of client-side I/O and computation.
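
    To make the attack described above concrete, the sketch below contrasts naive hash-only deduplication with a block-level challenge that only a party holding the actual file content can answer; the block size, hash choice, and challenge protocol are simplified assumptions, not the proposed scheme.

      import hashlib, os, random

      BLOCK = 4096

      # Naive dedup: knowing only the whole-file digest is enough to "own" the file -- the flaw.
      def naive_claim(server_index, digest):
          return digest in server_index

      # Block-level challenge: the server asks for hashes of randomly chosen blocks.
      def challenge(data: bytes, indices):
          return [hashlib.sha256(data[i * BLOCK:(i + 1) * BLOCK]).hexdigest() for i in indices]

      def verify(server_copy: bytes, client_answers, indices):
          return client_answers == challenge(server_copy, indices)

      f = os.urandom(10 * BLOCK)                       # stand-in for a stored file
      idx = random.sample(range(10), 3)
      assert verify(f, challenge(f, idx), idx)         # a genuine owner passes the challenge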

  7. Analysis Report for Exascale Storage Requirements for Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, Thomas M.

    Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale Computing, resulting in data storage, networking, and infrastructure requirements increasing by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how they will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report is focused on the most concerning scaling issues with data storage systems as they relate to High Performance Computing, and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.

  8. The successful of finite element to invent particle cleaning system by air jet in hard disk drive

    NASA Astrophysics Data System (ADS)

    Jai-Ngam, Nualpun; Tangchaichit, Kaitfa

    2018-02-01

    Hard disk drive manufacturing faces significant challenges from the increasing demand for high-capacity drives for cloud-based storage. Particle adhesion has also become increasingly important in HDDs for achieving more reliable storage capacity. Cleaning such particles from surfaces without damaging them is complicated. This research aims to improve particle cleaning in the HSA by using finite element analysis to develop an air-flow model and then build a prototype air cleaning system that removes particles from the surface. Surface cleaning by air pressure can be applied as an alternative for the removal of solid particulate contaminants adhering to a solid surface. These technical and economic challenges have driven process development away from the traditional approach of chemical solvent cleaning. The focus of this study is to develop an alternative to scrubbing, ultrasonic, and megasonic surface cleaning principles, to serve as a foundation for the development of new processes that meet current state-of-the-art process requirements and minimize the waste from chemical cleaning for environmental safety.

  9. Ultrahigh resolution photographic films for X-ray/EUV/FUV astronomy

    NASA Technical Reports Server (NTRS)

    Hoover, Richard B.; Walker, Arthur B. C., Jr.; Deforest, Craig E.; Watts, Richard; Tarrio, Charles

    1993-01-01

    The quest for ultrahigh resolution full-disk images of the sun at soft X-ray/EUV/FUV wavelengths has increased the demand for photographic films with broad spectral sensitivity, high spatial resolution, and wide dynamic range. These requirements were made more stringent by the recent development of multilayer telescopes and coronagraphs capable of operating at normal incidence at soft X-ray/EUV wavelengths. Photographic films are the only detectors now available with the information storage capacity and dynamic range such as is required for recording images of the solar disk and corona simultaneously with sub arc second spatial resolution. During the Stanford/MSFC/LLNL Rocket X-Ray Spectroheliograph and Multi-Spectral Solar Telescope Array (MSSTA) programs, we utilized photographic films to obtain high resolution full-disk images of the sun at selected soft X-ray/EUV/FUV wavelengths. In order to calibrate our instrumentation for quantitative analysis of our solar data and to select the best emulsions and processing conditions for the MSSTA reflight, we recently tested several photographic films. These studies were carried out at the NIST SURF II synchrotron and the Stanford Synchrotron Radiation Laboratory. In this paper, we provide the results of those investigations.

  10. RALPH: An online computer program for acquisition and reduction of pulse height data

    NASA Technical Reports Server (NTRS)

    Davies, R. C.; Clark, R. S.; Keith, J. E.

    1973-01-01

    A background/foreground data acquisition and analysis system incorporating a high level control language was developed for acquiring both singles and dual parameter coincidence data from scintillation detectors at the Radiation Counting Laboratory at the NASA Manned Spacecraft Center in Houston, Texas. The system supports acquisition of gamma ray spectra in a 256 x 256 coincidence matrix (utilizing disk storage) and simultaneous operation of any of several background support and data analysis functions. In addition to special instruments and interfaces, the hardware consists of a PDP-9 with 24K core memory, 256K words of disk storage, and Dectape and Magtape bulk storage.

  11. Optimising LAN access to grid enabled storage elements

    NASA Astrophysics Data System (ADS)

    Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.

    2008-07-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.

  12. Sawmill: A Logging File System for a High-Performance RAID Disk Array

    DTIC Science & Technology

    1995-01-01

    [Indexed excerpt] ... from limiting disk performance, new controller architectures connect the disks directly to the network so that data movement bypasses the file server. ... These developments raise two questions for file systems: how to get the best performance from a RAID, and how to use such a controller architecture. ... the RAID-II storage system; this architecture provides a fast data path that moves data rapidly among the disks, high-speed controller memory, and the ...

  13. Performance of redundant disk array organizations in transaction processing environments

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    A performance evaluation is conducted for two redundant disk-array organizations in a transaction-processing environment, relative to the performance of both mirrored disk organizations and organizations using neither striping nor redundancy. The proposed parity-striping alternative to striping with rotated parity is shown to furnish rapid recovery from failure at the same low storage cost without interleaving the data over multiple disks. Both noncached systems and systems using a nonvolatile cache in the controller are considered.

  14. PCM-Based Durable Write Cache for Fast Disk I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick

    2012-01-01

    Flash based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage system. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.
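
    HALO's internals are not given in this abstract, so the following is only a generic illustration of a hash-bucketed write cache that absorbs random writes and flushes them in address order; the class, parameters, and flush policy are assumptions, not the paper's algorithm.

      # Illustrative hash-bucketed write cache (not the paper's HALO algorithm):
      # random writes are grouped into buckets by hashing the block address, and
      # a full bucket is flushed to disk as one mostly-sequential batch.
      class HashedWriteCache:
          def __init__(self, n_buckets=8, bucket_capacity=4):
              self.buckets = [dict() for _ in range(n_buckets)]
              self.capacity = bucket_capacity
              self.flushed = []                      # stands in for the backing disk

          def write(self, lba: int, data: bytes):
              b = self.buckets[hash(lba) % len(self.buckets)]
              b[lba] = data                          # absorb/overwrite in the cache
              if len(b) >= self.capacity:
                  self._flush(b)

          def _flush(self, bucket):
              for lba in sorted(bucket):             # issue the batch in LBA order
                  self.flushed.append((lba, bucket[lba]))
              bucket.clear()

      cache = HashedWriteCache()
      for lba in [97, 3, 41, 7, 11, 59]:
          cache.write(lba, b"x")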

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apyan, A.; Badillo, J.; Cruz, J. Diaz

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  16. Optical Disk Technology and Information.

    ERIC Educational Resources Information Center

    Goldstein, Charles M.

    1982-01-01

    Provides basic information on videodisks and potential applications, including inexpensive online storage, random access graphics to complement online information systems, hybrid network architectures, office automation systems, and archival storage. (JN)

  17. Eighth Goddard Conference on Mass Storage Systems and Technologies in Cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    2000-01-01

    This document contains copies of those technical papers received in time for publication prior to the Eighth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center March 27-30, 2000. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, future of current technology, new technology with a special emphasis on holographic storage, performance, standards, site reports, vendor solutions. Tutorials will be available on stability of optical media, disk subsystem performance evaluation, I/O and storage tuning, functionality and performance evaluation of file systems for storage area networks.

  18. Integrated IMA (Information Mission Areas) IC (Information Center) Guide

    DTIC Science & Technology

    1989-06-01

    [Indexed excerpt is table-of-contents residue only; recoverable topic headings include computer-aided design/manufacture, liquid crystal display panels, artificial intelligence, desktop publishing, intelligent copiers, electronic alternatives to printed documents, electronic forms, optical disk storage, LCD units, image scanners, graphics output devices, and work-group copiers.]

  19. Study of data I/O performance on distributed disk system in mask data preparation

    NASA Astrophysics Data System (ADS)

    Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu

    2010-09-01

    Data volume is getting larger every day in Mask Data Preparation (MDP). In the meantime, faster data handling is always required. An MDP flow typically introduces a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs were increased, the throughput might saturate because hard disk I/O and network speeds can become bottlenecks. So, MDP needs to invest heavily not only in hundreds of CPUs but also in the storage and network devices that make higher throughput possible. NCS would like to introduce a new distributed processing system called "NDE". NDE is a distributed disk system that improves throughput without a large investment, because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance with the OASIS® data format on NDE, which contributes to realizing high throughput.

  20. Large Format Multifunction 2-Terabyte Optical Disk Storage System

    NASA Technical Reports Server (NTRS)

    Kaiser, David R.; Brucker, Charles F.; Gage, Edward C.; Hatwar, T. K.; Simmons, George O.

    1996-01-01

    The Kodak Digital Science OD System 2000E automated disk library (ADL) base module and write-once drive are being developed as the next-generation commercial product to the currently available System 2000 ADL. Under government sponsorship with the Air Force's Rome Laboratory, Kodak is developing magneto-optic (M-O) subsystems compatible with the Kodak Digital Science ODW25 drive architecture, which will result in a multifunction (MF) drive capable of reading and writing 25 gigabyte (GB) WORM media and 15 GB erasable media. In an OD System 2000E ADL configuration with 4 MF drives and 100 total disks with a 50% ratio of WORM and M-O media, 2.0 terabytes (TB) of versatile near-line mass storage is available.

  1. NSSDC activities with 12-inch optical disk drives

    NASA Technical Reports Server (NTRS)

    Lowrey, Barbara E.; Lopez-Swafford, Brian

    1986-01-01

    The development status of optical-disk data transfer and storage technology at the National Space Science Data Center (NSSDC) is surveyed. The aim of the R&D program is to facilitate the exchange of large volumes of data. Current efforts focus on a 12-inch 1-Gbyte write-once/read-many disk and a disk drive which interfaces with VAX/VMS computer systems. The history of disk development at NSSDC is traced; the results of integration and performance tests are summarized; the operating principles of the 12-inch system are explained and illustrated with diagrams; and the need for greater standardization is indicated.

  2. Optical Digital Image Storage System

    DTIC Science & Technology

    1991-03-18

    [Indexed excerpt] ... retaining a master negative copy of the microfilm. The Sony Corporation, the supplier of the optical disk media used in the ODISS project, claims ... During the ODISS project, several CMSR files stored on the Sony optical disks were read several thousand times with no loss of information.

  3. The amino acid's backup bone - storage solutions for proteomics facilities.

    PubMed

    Meckel, Hagen; Stephan, Christian; Bunse, Christian; Krafzik, Michael; Reher, Christopher; Kohl, Michael; Meyer, Helmut Erich; Eisenacher, Martin

    2014-01-01

    Proteomics methods, especially high-throughput mass spectrometry analysis, have been continually developed and improved over the years. The analysis of complex biological samples produces large volumes of raw data. Data storage and recovery management pose substantial challenges to biomedical or proteomic facilities regarding backup and archiving concepts as well as hardware requirements. In this article we describe differences between the terms backup and archive with regard to manual and automatic approaches. We also introduce different storage concepts and technologies, from transportable media to professional solutions such as redundant array of independent disks (RAID) systems, network-attached storage (NAS) and storage area networks (SAN). Moreover, we present a software solution, which we developed for the purpose of long-term preservation of large mass spectrometry raw data files on an object storage device (OSD) archiving system. Finally, advantages, disadvantages, and experiences from routine operations of the presented concepts and technologies are evaluated and discussed. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013. Published by Elsevier B.V.

  4. Small Form Factor Information Storage Devices for Mobile Applications in Korea

    NASA Astrophysics Data System (ADS)

    Park, Young-Pil; Park, No-Cheol; Kim, Chul-Jin

    Recently, the ubiquitous environment, in which anybody can access large amounts of information without limitations of place and time, has become an important social issue. There are two basic requirements in the field of information storage devices which have to be satisfied: the first is the demand for improved memory capacity to manage the increased data volumes of personal and official use. The second is the demand for new information storage devices small enough to be applied to mobile multimedia digital electronics, including digital cameras, PDAs and mobile phones. To summarize, for the sake of mobile applications, it is necessary to develop information storage devices which simultaneously have a large capacity and a small size. Korea possesses the necessary infrastructure for developing such small-sized information storage devices. It has a good digital market, major digital companies, and various research institutes. Nowadays, many companies and research institutes, including universities, cooperate in research on small-sized information storage devices. Thus, it is expected that small form factor optical disk drives will be commercialized in the very near future in Korea.

  5. Research and implementation of SATA protocol link layer based on FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Wen-long; Liu, Xue-bin; Qiang, Si-miao; Yan, Peng; Wen, Zhi-gang; Kong, Liang; Liu, Yong-zheng

    2018-02-01

    In order to solve the problem of high-performance, real-time, high-speed storage of the image data generated by the detector, this work chooses a suitable portable image-storage hard disk with a SATA interface. Relative to existing storage media, it has a large capacity, a high transfer rate, low cost, no loss of data on power-down, and many other advantages. This paper focuses on the link layer of the protocol, analyzes the implementation process of the SATA 2.0 protocol, and builds the state machines. It then analyzes the characteristic resources of the Kintex-7 FPGA family, builds the state machines according to the protocol, writes Verilog to implement the link-layer modules, and runs simulation tests. Finally, the design is tested on the Kintex-7 development board platform, and it basically meets the requirements of the SATA 2.0 protocol.

  6. Nano-optical information storage induced by the nonlinear saturable absorption effect

    NASA Astrophysics Data System (ADS)

    Wei, Jingsong; Liu, Shuang; Geng, Yongyou; Wang, Yang; Li, Xiaoyi; Wu, Yiqun; Dun, Aihuan

    2011-08-01

    Nano-optical information storage is very important in meeting information technology requirements. However, obtaining nanometric optical information recording marks by the traditional optical method is difficult due to diffraction limit restrictions. In the current work, the nonlinear saturable absorption effect is used to generate a subwavelength optical spot and to induce nano-optical information recording and readout. Experimental results indicate that information marks below 100 nm are successfully recorded and read out by a high-density digital versatile disk dynamic testing system with a laser wavelength of 405 nm and a numerical aperture of 0.65. The minimum marks of 60 nm are realized, which is only about 1/12 of the diffraction-limited theoretical focusing spot. This physical scheme is very useful in promoting the development of optical information storage in the nanoscale field.
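
    A quick check of the quoted ratio, assuming the Airy-disk diameter is the relevant diffraction-limited spot size:

      d \approx \frac{1.22\,\lambda}{\mathrm{NA}}
        = \frac{1.22 \times 405\ \mathrm{nm}}{0.65} \approx 760\ \mathrm{nm},
      \qquad \frac{760\ \mathrm{nm}}{60\ \mathrm{nm}} \approx 12.7,

    which is consistent with the stated factor of roughly 1/12.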

  7. Voltage assisted asymmetric nanoscale wear on ultra-smooth diamond like carbon thin films at high sliding speeds

    PubMed Central

    Rajauria, Sukumar; Schreck, Erhard; Marchon, Bruno

    2016-01-01

    The understanding of tribo- and electro-chemical phenomena at the molecular level at a sliding interface is a field of growing interest. Fundamental chemical and physical insights into sliding surfaces are crucial for understanding wear at an interface, particularly for nano- or micro-scale devices operating at high sliding speeds. A complete investigation of the electrochemical effects on high-sliding-speed interfaces requires precise monitoring of both the associated wear and the surface chemical reactions at the interface. Here, we demonstrate that the head-disk interface inside a commercial magnetic storage hard disk drive provides a unique system for such studies. The results obtained show that voltage-assisted electrochemical wear leads to asymmetric wear on either side of the sliding interface. PMID:27150446

  8. Voltage assisted asymmetric nanoscale wear on ultra-smooth diamond like carbon thin films at high sliding speeds

    NASA Astrophysics Data System (ADS)

    Rajauria, Sukumar; Schreck, Erhard; Marchon, Bruno

    2016-05-01

    The understanding of tribo- and electro-chemical phenomena at the molecular level at a sliding interface is a field of growing interest. Fundamental chemical and physical insights into sliding surfaces are crucial for understanding wear at an interface, particularly for nano- or micro-scale devices operating at high sliding speeds. A complete investigation of the electrochemical effects on high-sliding-speed interfaces requires precise monitoring of both the associated wear and the surface chemical reactions at the interface. Here, we demonstrate that the head-disk interface inside a commercial magnetic storage hard disk drive provides a unique system for such studies. The results obtained show that voltage-assisted electrochemical wear leads to asymmetric wear on either side of the sliding interface.

  9. Evaluation of Optical Disk Jukebox Software.

    ERIC Educational Resources Information Center

    Ranade, Sanjay; Yee, Fonald

    1989-01-01

    Discusses software that is used to drive and access optical disk jukeboxes, which are used for data storage. Categories of the software are described, user categories are explained, the design of implementation approaches is discussed, and representative software products are reviewed. (eight references) (LRW)

  10. 40 CFR 94.509 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  11. 40 CFR 94.509 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  12. 40 CFR 94.509 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  13. 40 CFR 94.509 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... disk, or some other method of data storage, depending upon the manufacturer's record retention..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  14. Towards more stable operation of the Tokyo Tier2 center

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Mashimo, T.; Matsui, N.; Sakamoto, H.; Ueda, I.

    2014-06-01

    The Tokyo Tier2 center, which is located at the International Center for Elementary Particle Physics (ICEPP) in the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation within the WLCG started in 2007, after several years of development beginning in 2002. In December 2012, we replaced almost all hardware as the third system upgrade, to cope with the analysis of the ever-growing data of the ATLAS experiment. The number of CPU cores was increased by a factor of two (9984 cores in total), and the performance of an individual CPU core improved by 20% according to the HEPSPEC06 benchmark test at 32-bit compile mode. The score is estimated as 18.03 (SL6) per core using the Intel Xeon E5-2680 at 2.70 GHz. Since all worker nodes have a 16-core configuration, we deployed 624 blade servers in total. They are connected to a 6.7 PB disk storage system over a non-blocking 10 Gbps internal network backbone using two central network switches (NetIron MLXe-32). The disk storage consists of 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0) served by an equal number of 1U file servers with 8G-FC connections to maximize the file-transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage have already been deployed for the WLCG. Currently, the remaining non-grid resources, both CPUs and disk storage, are used as dedicated resources for data analysis by the ATLAS Japan collaborators. Since all hardware in the non-grid resources shares the same architecture as the Tier2 resources, it can be migrated as extra Tier2 capacity on demand of the ATLAS experiment in the future. In addition to the upgrade of computing resources, we expect improved wide-area network connectivity. Thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the two existing 10 Gbps lines (Tokyo to New York and Tokyo to Los Angeles). The new line will be connected to LHCONE to further improve connectivity. In these circumstances, we are working towards further stable operation. For instance, we have newly introduced GPFS (IBM) for the non-grid disk storage, while the Disk Pool Manager (DPM) carried over from the previous system continues to be used for the Tier2 disk storage. Since the number of files stored in a DPM pool increases with the total amount of data, developing a stable database configuration is one of the crucial issues, as is scalability. We have started studies on the performance of asynchronous database replication so that we can take daily full backups. In this report, we introduce several improvements to the performance and stability of our new system and the possibility of further improving local I/O performance in multi-core worker nodes. We also present the status of the wide-area network connectivity from Japan to the US and/or EU with LHCONE.
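
    A quick consistency check of the quoted hardware numbers (the aggregate HEPSPEC06 figure is our own multiplication, not a number reported above):

      blade_servers = 624
      cores_per_node = 16
      hepspec06_per_core = 18.03     # SL6, 32-bit compile mode, as quoted above

      total_cores = blade_servers * cores_per_node
      print(total_cores)                              # 9984, matching the stated total
      print(round(total_cores * hepspec06_per_core))  # roughly 180,012 HEPSPEC06 in aggregate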

  15. Data oriented job submission scheme for the PHENIX user analysis in CCJ

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; En'yo, H.; Ichihara, T.; Watanabe, Y.; Yokkaichi, S.

    2011-12-01

    The RIKEN Computing Center in Japan (CCJ) has been developed to make it possible to analyze the huge amount of data collected by the PHENIX experiment at RHIC. The collected raw data or reconstructed data are transferred via SINET3 with 10 Gbps bandwidth from Brookhaven National Laboratory (BNL) by using GridFTP. The transferred data are first stored in the hierarchical storage management system (HPSS) prior to user analysis. Since the size of the data grows steadily year by year, concentration of access requests to the data servers has become one of the serious bottlenecks. To eliminate this I/O-bound problem, 18 compute nodes with a total of 180 TB of local disk were introduced to store the data in advance. We added some setup to the batch job scheduler (LSF) so that users can specify the required data already distributed to the local disks. The locations of the data are automatically obtained from a database, and jobs are dispatched to the appropriate node which has the required data. To avoid multiple jobs in a node accessing a local disk simultaneously, lock-file and access-control-list techniques are employed. As a result, each job can handle a local disk exclusively. Indeed, the total throughput was improved drastically compared to the preexisting nodes in CCJ, and users can analyze about 150 TB of data within 9 hours. We report this successful job submission scheme and the features of the PC cluster.
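
    The lock-file technique mentioned above, used to give each job exclusive use of a local disk, could look roughly like the following POSIX advisory-lock sketch; the lock path and the commented-out job body are hypothetical.

      import fcntl, os
      from contextlib import contextmanager

      @contextmanager
      def exclusive_disk(lock_path="/data01/.lock"):     # hypothetical per-disk lock file
          """Block until this process holds the per-disk advisory lock."""
          fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
          try:
              fcntl.flock(fd, fcntl.LOCK_EX)             # exclusive lock on the whole disk
              yield
          finally:
              fcntl.flock(fd, fcntl.LOCK_UN)
              os.close(fd)

      # with exclusive_disk():
      #     run_analysis_on("/data01/phenix/run7")       # hypothetical job body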

  16. Software for Optical Archive and Retrieval (SOAR) user's guide, version 4.2

    NASA Technical Reports Server (NTRS)

    Davis, Charles

    1991-01-01

    The optical disk is an emerging technology. Because it is not a magnetic medium, it offers a number of distinct advantages over the established form of storage, advantages that make it extremely attractive. They are as follows: (1) the ability to store much more data within the same space; (2) the random access characteristics of the Write Once Read Many optical disk; (3) a much longer life than that of traditional storage media; and (4) much greater data access rate. Software for Optical Archive and Retrieval (SOAR) user's guide is presented.

  17. Recent Cooperative Research Activities of HDD and Flexible Media Transport Technologies in Japan

    NASA Astrophysics Data System (ADS)

    Ono, Kyosuke

    This paper presents the recent status of industry-university cooperative research activities in Japan on the mechatronics of information storage and input/output equipment. There are three research committees for promoting information exchange on technical problems and research topics of head-disk interface in hard disk drives (HDD), flexible media transport and image printing processes which are supported by the Japan Society of Mechanical Engineering (JSME), the Japanese Society of Tribologists (JAST) and the Japan Society of Precision Engineering (JSPE). For hard disk drive technology, the Storage Research Consortium (SRC) is supporting more than 40 research groups in various different universities to perform basic research for future HDD technology. The past and present statuses of these activities are introduced, particularly focusing on HDD and flexible media transport mechanisms.

  18. Free Factories: Unified Infrastructure for Data Intensive Web Services

    PubMed Central

    Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.

    2010-01-01

    We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356

  19. Wide-area-distributed storage system for a multimedia database

    NASA Astrophysics Data System (ADS)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

    We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device, which includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices at a node are connected to a computer with fiber-optic cables and communicate using fibre-channel technology. Any computer at a node can utilize the multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that the devices and fiber-optic cables are shared by the computers. In this report, we first describe the proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.

  20. Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression

    NASA Astrophysics Data System (ADS)

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-08-01

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback of the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may open up the possibility of wide adoption of F3DT-SI in routine seismic tomography practice in the near future.
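    The error-bounded compressor used in the study is not reproduced here; the sketch below only illustrates the general idea of trading a user-specified absolute error tolerance for a smaller byte stream, by uniformly quantizing the field and then compressing the integer codes losslessly. The array size and tolerance are illustrative.

      import zlib
      import numpy as np

      def compress_with_tolerance(field, tol):
          """Quantize `field` to steps of 2*tol, then deflate the integer codes.

          Every reconstructed value differs from the original by at most `tol`,
          mimicking the user-specified error bound of a lossy compressor.
          """
          codes = np.round(field / (2.0 * tol)).astype(np.int32)
          return zlib.compress(codes.tobytes()), field.shape

      def decompress_with_tolerance(blob, shape, tol):
          codes = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
          return codes * (2.0 * tol)

      if __name__ == "__main__":
          strain = np.random.default_rng(0).normal(scale=1e-6, size=(64, 64, 64))
          blob, shape = compress_with_tolerance(strain, tol=1e-9)
          recon = decompress_with_tolerance(blob, shape, tol=1e-9)
          print("max abs error:", np.abs(recon - strain).max())   # stays near the 1e-9 bound
          print("compressed bytes:", len(blob), "raw bytes:", strain.nbytes)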

  1. Reducing Disk Storage of Full-3D Seismic Waveform Tomography (F3DT) Through Lossy Online Compression

    DOE PAGES

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-05-05

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback of the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may open up the possibility of wide adoption of F3DT-SI in routine seismic tomography practice in the near future.

  2. Design and implementation of reliability evaluation of SAS hard disk based on RAID card

    NASA Astrophysics Data System (ADS)

    Ren, Shaohua; Han, Sen

    2015-10-01

    Because of the huge advantage of RAID technology in storage, it has been widely used. However, the problem with this technology is that a hard disk behind the RAID card cannot be queried by the operating system. Therefore, reading the self-information and log data of the hard disk has been a problem, while these data are necessary for reliability testing of the hard disk. The traditional way of reading this information is suitable only for SATA hard disks, not for SAS hard disks. In this paper, we provide a method that uses the LSI RAID card's application program interface, communicating with the RAID card and analyzing the feedback data to solve the problem. We then obtain the information necessary to assess the SAS hard disk.
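    The paper reads the drive information through the LSI RAID card's own application program interface, which is not shown in the abstract. As a commonly used alternative (a different tool, not the authors' method), smartmontools can query drives sitting behind a MegaRAID controller; in the hedged sketch below the block device path and target ID are placeholders that depend on the local controller layout.

      import subprocess

      def read_sas_drive_info(block_device="/dev/sda", target_id=0):
          """Read SMART/health output for a drive behind a MegaRAID controller.

          Uses smartmontools' megaraid device type; the device path and target
          ID are assumptions and must match the local configuration.
          """
          cmd = ["smartctl", "-a", "-d", f"megaraid,{target_id}", block_device]
          result = subprocess.run(cmd, capture_output=True, text=True)
          return result.stdout

      if __name__ == "__main__":
          print(read_sas_drive_info())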

  3. Pooling the resources of the CMS Tier-1 sites

    DOE PAGES

    Apyan, A.; Badillo, J.; Cruz, J. Diaz; ...

    2015-12-23

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  4. Pooling the resources of the CMS Tier-1 sites

    NASA Astrophysics Data System (ADS)

    Apyan, A.; Badillo, J.; Diaz Cruz, J.; Gadrat, S.; Gutsche, O.; Holzman, B.; Lahiff, A.; Magini, N.; Mason, D.; Perez, A.; Stober, F.; Taneja, S.; Taze, M.; Wissing, C.

    2015-12-01

    The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity, and to archive its data. During the first run of the LHC, these two functions were tightly coupled as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in an increased latency in the delivery of the results to the physics community. The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operations, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but triggered externally through the WLCG transfer management systems. With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. The output data can be validated before archiving them permanently, and temporary data formats can be produced without wasting valuable tape resources. Finally, the data hosted on disk at Tier-1s can now be made available also for user analysis since there is no risk any longer of triggering chaotic staging from tape. In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.

  5. Data Management, the Victorian era child of the 21st century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farber, Rob

    2007-03-30

    Do you remember when a gigabyte disk drive was “a lot” of storage in that by-gone age of the 20th century? Still in our first decade of the 21st century, major supercomputer sites now speak of storage in terms of petabytes (10^15 bytes), a six-orders-of-magnitude increase in capacity over a gigabyte! Unlike our archaic “big” disk drive, where all the data was in one place, HPC storage is now distributed across many machines and even across the Internet. Collaborative research engages many scientists who need to find and use each other's data, preferably in an automated fashion, which complicates an already muddled problem.

  6. Demonstration of fully enabled data center subsystem with embedded optical interconnect

    NASA Astrophysics Data System (ADS)

    Pitwon, Richard; Worrall, Alex; Stevens, Paul; Miller, Allen; Wang, Kai; Schmidtke, Katharine

    2014-03-01

    The evolution of data storage communication protocols and corresponding in-system bandwidth densities is set to impose prohibitive cost and performance constraints on future data storage system designs, fuelling proposals for hybrid electronic and optical architectures in data centers. The migration of optical interconnect into the system enclosure itself can substantially mitigate the communications bottlenecks resulting from both the increase in data rate and internal interconnect link lengths. In order to assess the viability of embedding optical links within prevailing data storage architectures, we present the design and assembly of a fully operational data storage array platform, in which all internal high speed links have been implemented optically. This required the deployment of mid-board optical transceivers, an electro-optical midplane and proprietary pluggable optical connectors for storage devices. We present the design of a high density optical layout to accommodate the midplane interconnect requirements of a data storage enclosure with support for 24 Small Form Factor (SFF) solid state or rotating disk drives and the design of a proprietary optical connector and interface cards, enabling standard drives to be plugged into an electro-optical midplane. Crucially, we have also modified the platform to accommodate longer optical interconnect lengths up to 50 meters in order to investigate future datacenter architectures based on disaggregation of modular subsystems. The optically enabled data storage system has been fully validated for both 6 Gb/s and 12 Gb/s SAS data traffic conveyed along internal optical links.

  7. Designing and application of SAN extension interface based on CWDM

    NASA Astrophysics Data System (ADS)

    Qin, Leihua; Yu, Shengsheng; Zhou, Jingli

    2005-11-01

    As Fibre Channel (FC) becomes the protocol of choice within corporate data centers, enterprises are increasingly deploying SANs in their data centers. In order to mitigate the risk of losing data and to improve data availability, more and more enterprises are adopting storage extension technologies to replicate their business-critical data to a secondary site. Transmitting this information over distance requires a carrier-grade environment with zero data loss, scalable throughput, low jitter, high security and the ability to travel long distances. To address these business requirements, there are three basic architectures for storage extension: storage over Internet Protocol, storage over Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) and storage over Dense Wavelength Division Multiplexing (DWDM). Each approach varies in functionality, complexity, cost, scalability, security, availability, predictable behavior (bandwidth, jitter, latency) and multiple-carrier limitations. Compared with these connectivity technologies, Coarse Wavelength Division Multiplexing (CWDM) is a simplified, low-cost and high-performance connectivity solution for enterprises deploying storage extension. In this paper, we design a storage extension connection over CWDM and test its electrical characteristics and the random read and write performance of a disk array through the CWDM connection; the test results show that the performance of the connection over CWDM is acceptable. Furthermore, we propose three kinds of network architecture for SAN extension based on the CWDM interface. Finally, the credit-based flow control mechanism of FC and the relationship between credits and extension distance are analyzed.

  8. A report on the ST ScI optical disk workstation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The STScI optical disk project was designed to explore the options, opportunities, and problems presented by optical disk technology, and to see if optical disks are a viable and inexpensive means of storing the large amounts of data found in astronomical digital imagery. A separate workstation was purchased on which the development could be done; it serves as an astronomical image processing computer, incorporating the optical disks into the solution of standard image processing tasks. It is indicated that small workstations can be powerful tools for image processing, and that astronomical image processing may be more conveniently and cost-effectively performed on microcomputers than on mainframe and super-minicomputers. The optical disks provide unique capabilities in data storage.

  9. Implementation of system intelligence in a 3-tier telemedicine/PACS hierarchical storage management system

    NASA Astrophysics Data System (ADS)

    Chao, Woodrew; Ho, Bruce K. T.; Chao, John T.; Sadri, Reza M.; Huang, Lu J.; Taira, Ricky K.

    1995-05-01

    Our tele-medicine/PACS archive system is based on a three-tier distributed hierarchical architecture, including magnetic disk farms, an optical jukebox, and tape jukebox subsystems. The hierarchical storage management (HSM) architecture, built around a low-cost, high-performance platform [personal computers (PCs) and Microsoft Windows NT], presents a very scalable and distributed solution ideal for meeting the needs of client/server environments such as tele-medicine, tele-radiology, and PACS. These image-based systems typically require storage capacities mirroring those of film-based technology (multi-terabyte, with 10+ years of storage) and patient data retrieval times at near on-line performance as demanded by radiologists. With the scalable architecture, storage requirements can be easily configured to meet the needs of the small clinic (multi-gigabyte) to those of a major hospital (multi-terabyte). The patient data retrieval performance requirement was achieved by employing system intelligence to manage migration and caching of archived data. Relevant information from HIS/RIS triggers prefetching of data whenever possible, based on simple rules. System intelligence embedded in the migration manager allows the clustering of patient data onto a single tape during data migration from optical to tape media. Clustering of patient data on the same tape eliminates multiple tape loadings and the associated seek time during patient data retrieval. Optimal tape performance can then be achieved by utilizing the tape drive's high-performance data-streaming capabilities, thereby reducing the data retrieval delays typically associated with streaming tape devices.
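    As a hedged illustration of the clustering rule described above (not the system's actual migration manager), the sketch below groups pending studies by patient before packing them onto tapes, so that a later retrieval of one patient's data loads a single cartridge. Identifiers, sizes and the tape capacity are made up.

      from collections import defaultdict

      def plan_migration(studies, tape_capacity_gb):
          """Group studies by patient, then pack each patient group onto tapes.

          `studies` is a list of (patient_id, study_id, size_gb) tuples; keeping
          a patient's studies on one tape avoids multiple tape loads on retrieval.
          """
          by_patient = defaultdict(list)
          for patient_id, study_id, size_gb in studies:
              by_patient[patient_id].append((study_id, size_gb))

          tapes, current, used = [], [], 0.0
          for patient_id, group in sorted(by_patient.items()):
              group_size = sum(size for _, size in group)
              if current and used + group_size > tape_capacity_gb:
                  tapes.append(current)          # start a new tape for this patient
                  current, used = [], 0.0
              current.extend((patient_id, study_id) for study_id, _ in group)
              used += group_size
          if current:
              tapes.append(current)
          return tapes

      if __name__ == "__main__":
          pending = [("P1", "CT-001", 0.6), ("P2", "MR-014", 1.2), ("P1", "CR-007", 0.1)]
          for i, tape in enumerate(plan_migration(pending, tape_capacity_gb=2.0)):
              print(f"tape {i}: {tape}")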

  10. 40 CFR 91.504 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...

  11. 40 CFR 91.504 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...

  12. 40 CFR 90.704 - Maintenance of records; submission of information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  13. 40 CFR 90.704 - Maintenance of records; submission of information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  14. 40 CFR 90.704 - Maintenance of records; submission of information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  15. 40 CFR 90.704 - Maintenance of records; submission of information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the..., associated storage facility or port facility, and the date the engine was received at the testing facility...

  16. 40 CFR 91.504 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...

  17. 40 CFR 91.504 - Maintenance of records; submittal of information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... paper) or reduced to microfilm, floppy disk, or some other method of data storage, depending upon the... shipped from the assembly plant, associated storage facility or port facility, and the date the engine was...

  18. Low temperature Grüneisen parameter of cubic ionic crystals

    NASA Astrophysics Data System (ADS)

    Batana, Alicia; Monard, María C.; Rosario Soriano, María

    1987-02-01

    Title of program: CAROLINA Catalogue number: AATG Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue) Computer: IBM/370, Model 158; Installation: Centro de Tecnología y Ciencia de Sistemas, Universidad de Buenos Aires Operating system: VM/370 Programming language used: FORTRAN High speed storage required: 3 kwords No. of bits in a word: 32 Peripherals used: disk IBM 3340/70 MB No. of lines in combined program and test deck: 447

  19. Functional design specification: NASA form 1510

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The 1510 worksheet used to calculate approved facility project cost estimates is explained. Topics covered include data base considerations, program structure, relationship of the 1510 form to the 1509 form, and functions which the application must perform: WHATIF, TENENTER, TENTYPE, and data base utilities. A sample NASA form 1510 printout and a 1510 data dictionary are presented in the appendices along with the cost adjustment table, the floppy disk index, and methods for generating the calculated values (TENCALC) and for calculating cost adjustment (CONSTADJ). Storage requirements are given.

  20. Magnetic field sources and their threat to magnetic media

    NASA Technical Reports Server (NTRS)

    Jewell, Steve

    1993-01-01

    Magnetic storage media (tapes, disks, cards, etc.) may be damaged by external magnetic fields. The potential for such damage has been researched, but no objective standard exists for the protection of such media. This paper summarizes a magnetic storage facility standard, Publication 933, that ensures magnetic protection of data storage media.

  1. Emerging Network Storage Management Standards for Intelligent Data Storage Subsystems

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    This paper discusses the need for intelligent storage devices and subsystems that can provide data-integrity metadata, the content of the existing data-integrity standard for optical disks, and the techniques and metadata for verifying stored data on optical tapes developed by the Association for Information and Image Management (AIIM) Optical Tape Committee.

  2. Multi-Level Bitmap Indexes for Flash Memory Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Madduri, Kamesh; Canon, Shane

    2010-07-23

    Due to their low access latency, high read speed, and power-efficient operation, flash memory storage devices are rapidly emerging as an attractive alternative to traditional magnetic storage devices. However, tests show that the most efficient indexing methods are not able to take advantage of the flash memory storage devices. In this paper, we present a set of multi-level bitmap indexes that can effectively take advantage of flash storage devices. These indexing methods use coarsely binned indexes to answer queries approximately, and then use finely binned indexes to refine the answers. Our new methods read significantly lower volumes of data at the expense of an increased disk access count, thus taking full advantage of the improved read speed and low access latency of flash devices. To demonstrate the advantage of these new indexes, we measure their performance on a number of storage systems using a standard data warehousing benchmark called the Set Query Benchmark. We observe that multi-level strategies on flash drives are up to 3 times faster than traditional indexing strategies on magnetic disk drives.
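    A two-level strategy of this kind can be sketched as follows (toy bin edges and in-memory boolean masks, not the authors' implementation): coarsely binned bitmaps answer the interior of a range query directly, and only the two boundary bins are refined, here by a candidate check against the raw values, which is the part a finer index level would replace.

      import numpy as np

      def build_binned_bitmaps(values, edges):
          """One bitmap (boolean mask) per bin defined by consecutive `edges`."""
          bin_ids = np.digitize(values, edges)
          return [bin_ids == b for b in range(1, len(edges))]

      def range_query(values, edges, bitmaps, lo, hi):
          """Answer `lo <= value < hi` from coarse bitmaps, refining only edge bins."""
          hits = np.zeros(len(values), dtype=bool)
          for b in range(len(bitmaps)):
              bin_lo, bin_hi = edges[b], edges[b + 1]
              if bin_lo >= hi or bin_hi <= lo:
                  continue                             # bin entirely outside the query
              if lo <= bin_lo and bin_hi <= hi:
                  hits |= bitmaps[b]                   # fully covered: no raw data read
              else:                                    # boundary bin: candidate check
                  hits |= bitmaps[b] & (values >= lo) & (values < hi)
          return hits

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          vals = rng.uniform(0, 100, size=1000)
          edges = np.arange(0, 110, 10)                # coarse bins of width 10
          maps = build_binned_bitmaps(vals, edges)
          answer = range_query(vals, edges, maps, lo=23.5, hi=71.0)
          assert np.array_equal(answer, (vals >= 23.5) & (vals < 71.0))
          print("rows matched:", int(answer.sum()))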

  3. Inverted Signature Trees and Text Searching on CD-ROMs.

    ERIC Educational Resources Information Center

    Cooper, Lorraine K. D.; Tharp, Alan L.

    1989-01-01

    Explores the new storage technology of optical data disks and introduces a data structure, the inverted signature tree, for storing data on optical data disks for efficient text searching. The inverted signature tree approach is compared to the use of text signatures and the B+ tree. (22 references) (Author/CLB)

  4. Test methods for optical disk media characteristics (for 356 mm ruggedized magneto-optic media)

    NASA Technical Reports Server (NTRS)

    Podio, Fernando L.

    1991-01-01

    Standard test methods for computer storage media characteristics are essential and allow for conformance to media interchange standards. The test methods were developed for a 356 mm two-sided laminated-glass-substrate media technology with a magneto-optic active layer. These test methods may be used for testing other media types, but in each case their applicability must be evaluated. Test methods are included for a series of different media characteristics, including operational, nonoperational, and storage environments; mechanical and physical characteristics; and substrate, recording layer, and preformat characteristics. Tests for environmental qualification and media lifetimes are also included. The test methods include testing conditions, testing procedures, a description of the testing setup, and the required calibration procedures.

  5. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory, but this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to an order of O(N²), instead of O(N³) as with direct solvers.
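    The "system iteration" over substructures amounts to a block Gauss-Seidel sweep, in which each diagonal block is solved exactly while the coupling to the other blocks uses the most recent values. The sketch below applies the idea to a small dense, diagonally dominant test matrix rather than to the integral-equation kernels of the paper.

      import numpy as np

      def block_gauss_seidel(A, b, block_size, iters=200):
          """Solve A x = b by sweeping over blocks (substructures) of unknowns."""
          n = len(b)
          x = np.zeros(n)
          for _ in range(iters):
              for s in range(0, n, block_size):
                  e = min(s + block_size, n)
                  # b_i - sum_{j != i} A_ij x_j, using the latest x of the other blocks
                  rhs = b[s:e] - A[s:e, :] @ x + A[s:e, s:e] @ x[s:e]
                  x[s:e] = np.linalg.solve(A[s:e, s:e], rhs)
          return x

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          n = 60
          A = rng.normal(size=(n, n)) + 2 * n * np.eye(n)   # diagonally dominant test case
          b = rng.normal(size=n)
          x = block_gauss_seidel(A, b, block_size=10)
          print("residual norm:", np.linalg.norm(A @ x - b))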

  6. Reference System of DNA and Protein Sequences on CD-ROM

    NASA Astrophysics Data System (ADS)

    Nasu, Hisanori; Ito, Toshiaki

    DNASIS-DBREF31 is a database of DNA and protein sequences in the form of an optical compact disk (CD) ROM, developed and commercialized by Hitachi Software Engineering Co., Ltd. Both nucleic acid base sequences and protein amino acid sequences can be retrieved from a single CD-ROM. Existing databases are offered in the form of on-line services, floppy disks, or magnetic tape, all of which have problems of one kind or another, such as usability or storage capacity. DNASIS-DBREF31 newly adopts a CD-ROM as the database medium to realize mass storage and personal use of the database.

  7. Proposal for a multilayer read-only-memory optical disk structure.

    PubMed

    Ichimura, Isao; Saito, Kimihiro; Yamasaki, Takeshi; Osato, Kiyoshi

    2006-03-10

    Coherent interlayer cross talk and stray-light intensity of multilayer read-only-memory (ROM) optical disks are investigated. From results of scalar diffraction analyses, we conclude that layer separations above 10 µm are preferred in a system using a 0.85-numerical-aperture objective lens, in terms of signal quality and stability in focusing control. Disk structures are optimized to prevent signal deterioration resulting from multiple reflections, and appropriate detectors are determined to maintain acceptable stray-light intensity. In the experiment, quadrilayer and octalayer high-density ROM disks are prepared by stacking UV-curable films onto polycarbonate substrates. Data-to-clock jitters of ≤7% demonstrate the feasibility of multilayer disk storage up to 200 Gbytes.

  8. Managing People's Data

    NASA Technical Reports Server (NTRS)

    Le, Diana; Cooper, David M. (Technical Monitor)

    1994-01-01

    Just imagine a mass storage system that consists of a machine with 2 CPUs, 1 gigabyte (GB) of memory, 400 GB of disk space, 16,800 cartridge tapes in the automated tape silos, 88,000 tapes located in the vault, and the software to manage the system. This system is designed to be a data repository; it will always have disk space to store all incoming data. Currently 9.14 GB of new data enters the system per day, with this rate doubling each year. To ensure there is always disk space available for new data, the system has to move data from expensive disk to a much less expensive medium such as 3480 cartridge tapes. Once the data are archived to tape, they should be able to move back to disk when someone wants to access them, and the data movement should be transparent to the user. Now imagine all the tasks that a system administrator must perform to keep this system running 24 hours a day, 7 days a week. Since the file system maintains the illusion of unlimited disk space, data that come into the system must be moved to tape in an efficient manner. This paper describes the mass storage system running at the Numerical Aerodynamic Simulation (NAS) facility at NASA Ames Research Center in both its software and hardware aspects, and then describes all of the tasks the system administrator has to perform on this system.
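    A hedged sketch of the kind of migration policy such a system relies on (not the NAS system's actual code): when disk usage crosses a high-water mark, the least recently accessed files are selected for migration to tape until the projected usage falls below a low-water mark. The root path and thresholds are illustrative.

      import os

      def pick_migration_candidates(root, high_water, low_water):
          """Return files to migrate, oldest access first, once usage exceeds `high_water`.

          Sizes and thresholds are in bytes; selection stops when the projected
          disk usage drops below `low_water`.
          """
          files, total = [], 0
          for dirpath, _, names in os.walk(root):
              for name in names:
                  path = os.path.join(dirpath, name)
                  try:
                      st = os.stat(path)
                  except OSError:
                      continue                         # file vanished or unreadable
                  files.append((st.st_atime, st.st_size, path))
                  total += st.st_size
          if total <= high_water:
              return []                                # still enough free disk space
          to_migrate = []
          for _, size, path in sorted(files):          # least recently accessed first
              to_migrate.append(path)
              total -= size
              if total <= low_water:
                  break
          return to_migrate

      if __name__ == "__main__":
          # Illustrative thresholds: migrate above 400 GB of usage, stop at 300 GB.
          for path in pick_migration_candidates("/tmp", 400 * 10**9, 300 * 10**9):
              print("migrate to tape:", path)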

  9. Simple, Script-Based Science Processing Archive

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle

    2007-01-01

    The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archiving, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4P continuously checks stored data for integrity; further reliability is provided by tape backups of disks, made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.

  10. Magnetic bearings for a high-performance optical disk buffer, volume 1

    NASA Technical Reports Server (NTRS)

    Hockney, Richard; Adler, Karen; Anastas, George, Jr.; Downer, James; Flynn, Frederick; Goldie, James; Gondhalekar, Vijay; Hawkey, Timothy; Johnson, Bruce

    1990-01-01

    The innovation investigated in this project was the application of magnetic bearing technology to the translator head of an optical-disk data storage device. Both the capability for space-based applications and improved performance are expected to result. The Phase 1 effort produced: (1) detailed specifications for both the translator-head and rotary-spindle bearings; (2) candidate hardware configurations for both bearings, with detailed definition for the translator head; (3) required characteristics for the magnetic bearing control loops; (4) position sensor selection; and (5) definition of the required electronic functions. The principal objective of Phase 2 was the design, fabrication, assembly, and test of the magnetic bearing system for the translator head. The scope of work included: (1) mechanical design of each of the required components; (2) electrical design of the required circuitry; (3) fabrication of the component parts and breadboard electronics; (4) generation of a test plan; and (5) integration of the prototype unit and performance testing. The project has confirmed the applicability of magnetic bearing technology to suspension of the translator head of the optical disk device, and demonstrated the achievement of all performance objectives. The magnetic bearing control loops perform well, achieving 100 Hz nominal bandwidth with phase margins between 37 and 63 degrees. The worst-case position resolution is 0.02 micron in the displacement loops and 1 microradian in the rotation loops. The system is very robust to shock disturbances, recovering smoothly even when collisions occur between the translator and the frame. The unique start-up/shut-down circuit has proven very effective.

  11. How to Use Removable Mass Storage Memory Devices

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2004-01-01

    Mass storage refers to the variety of ways to keep large amounts of information that are used on a computer. Over the years, the removable storage devices have grown smaller, increased in capacity, and transferred the information to the computer faster. The 8" floppy disk of the 1960s stored 100 kilobytes, or about 60 typewritten, double-spaced…

  12. Facing the Limitations of Electronic Document Handling.

    ERIC Educational Resources Information Center

    Moralee, Dennis

    1985-01-01

    This essay addresses problems associated with technology used in the handling of high-resolution visual images in electronic document delivery. Highlights include visual fidelity, laser-driven optical disk storage, electronics versus micrographics for document storage, videomicrographics, and system configurations and peripherals. (EJS)

  13. Data storage for managing the health enterprise and achieving business continuity.

    PubMed

    Hinegardner, Sam

    2003-01-01

    As organizations move away from a silo mentality to a vision of enterprise-level information, more healthcare IT departments are rejecting the idea of information storage as an isolated, system-by-system solution. IT executives want storage solutions that act as a strategic element of an IT infrastructure, centralizing storage management activities to effectively reduce operational overhead and costs. This article focuses on three areas of enterprise storage: tape, disk, and disaster avoidance.

  14. Holographic Compact Disk Read-Only Memories

    NASA Technical Reports Server (NTRS)

    Liu, Tsuen-Hsi

    1996-01-01

    Compact disk read-only memories (CD-ROMs) of the proposed type store digital data in volume holograms instead of in differentially reflective surface elements. A holographic CD-ROM consists largely of parts similar to those used in conventional CD-ROMs; however, it achieves 10 or more times the data-storage capacity and throughput by use of a wavelength-multiplexing/volume-hologram scheme.

  15. An Optical Disk-Based Information Retrieval System.

    ERIC Educational Resources Information Center

    Bender, Avi

    1988-01-01

    Discusses a pilot project by the Nuclear Regulatory Commission to apply optical disk technology to the storage and retrieval of documents related to its high level waste management program. Components and features of the microcomputer-based system which provides full-text and image access to documents are described. A sample search is included.…

  16. Digital image archiving: challenges and choices.

    PubMed

    Dumery, Barbara

    2002-01-01

    In the last five years, imaging exam volume has grown rapidly. In addition to increased image acquisition, there is more patient information per study. RIS-PACS integration and information-rich DICOM headers now provide us with more patient information relative to each study. The volume of archived digital images is increasing and will continue to rise at a steeper incline than film-based storage of the past. Many filmless facilities have been caught off guard by this increase, which has been stimulated by many factors. The most significant factor is investment in new digital and DICOM-compliant modalities. A huge volume driver is the increase in images per study from multi-slice technology. Storage requirements also are affected by disaster recovery initiatives and state retention mandates. This burgeoning rate of imaging data volume presents many challenges: cost of ownership, data accessibility, storage media obsolescence, database considerations, physical limitations, reliability and redundancy. There are two basic approaches to archiving--single tier and multi-tier. Each has benefits. With a single-tier approach, all the data is stored on a single media that can be accessed very quickly. A redundant copy of the data is then stored onto another less expensive media. This is usually a removable media. In this approach, the on-line storage is increased incrementally as volume grows. In a multi-tier approach, storage levels are set up based on access speed and cost. In other words, all images are stored at the deepest archiving level, which is also the least expensive. Images are stored on or moved back to the intermediate and on-line levels if they will need to be accessed more quickly. It can be difficult to decide what the best approach is for your organization. The options include RAIDs (redundant array of independent disks), direct attached RAID storage (DAS), network storage using RAIDs (NAS and SAN), removable media such as different types of tape, compact disks (CDs and DVDs) and magneto-optical disks (MODs). As you evaluate the various options for storage, it is important to consider both performance and cost. For most imaging enterprises, a single-tier archiving approach is the best solution. With the cost of hard drives declining, NAS is a very feasible solution today. It is highly reliable, offers immediate access to all exams, and easily scales as imaging volume grows. Best of all, media obsolescence challenges need not be of concern. For back-up storage, removable media can be implemented, with a smaller investment needed as it will only be used for a redundant copy of the data. There is no need to keep it online and available. If further system redundancy is desired, multiple servers should be considered. The multi-tier approach still has its merits for smaller enterprises, but with a detailed long-term cost of ownership analysis, NAS will probably still come out on top as the solution of choice for many imaging facilities.

  17. Using dCache in Archiving Systems oriented to Earth Observation

    NASA Astrophysics Data System (ADS)

    Garcia Gil, I.; Perez Moreno, R.; Perez Navarro, O.; Platania, V.; Ozerov, D.; Leone, R.

    2012-04-01

    The objective of the LAST activity (Long term data Archive Study on new Technologies) is to perform an independent study on best practices and an assessment of different archiving technologies that are mature for operation in the short and mid-term time frame, or available in the long term, with emphasis on technologies better suited to satisfy the requirements of ESA, LTDP and other European and Canadian EO partners in terms of digital information preservation and data accessibility and exploitation. During the last phase of the project, testing of several archiving solutions was performed in order to evaluate their suitability. In particular, dCache aims to provide a file-system tree view of the data repository, exchanging data with back-end (tertiary) storage systems, as well as space management, pool attraction, dataset replication, hot-spot determination and recovery from disk or node failures. Connected to a tertiary storage system, dCache simulates unlimited direct-access storage space; data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. dCache was created to meet the requirements of big computer centres and universities with large amounts of data, putting their efforts together and founding EMI (European Middleware Initiative). At the moment, dCache is mature enough to be implemented, being used by several research centres of relevance (e.g. the LHC, storing up to 50 TB/day). This solution has not been used so far in Earth Observation, and the results of the study are summarized in this article, focusing on the capabilities, over a simulated environment, to get in line with the ESA requirements for geographically distributed storage. The challenge of a geographically distributed storage system can be summarized as the way to provide maximum quality for storage and dissemination services at minimum cost.

  18. ToF-SIMS images and spectra of biomimetic calcium silicate-based cements after storage in solutions simulating the effects of human biological fluids

    NASA Astrophysics Data System (ADS)

    Torrisi, A.; Torrisi, V.; Tuccitto, N.; Gandolfi, M. G.; Prati, C.; Licciardello, A.

    2010-01-01

    ToF-SIMS images were obtained from a section of a tooth, obturated by means of a new calcium-silicate-based cement (wTCF), after storage for 1 month in a saline solution (DPBS), in order to simulate the effects of body fluids on the obturation. Afterwards, ToF-SIMS spectra were obtained from model samples, prepared using the same cement paste, after storage for 1 month and 8 months in two different saline solutions (DPBS and HBSS). ToF-SIMS spectra were also obtained from fluorine-free cement (wTC) samples after storage in HBSS for 1 month and 8 months and used for comparison. It was found that the composition of both the saline solution and the cement influenced the composition of the surface of the disks, and that the longer the storage, the greater the differences. Segregation phenomena occur both on the cement obturation of the tooth and on the surface of the disks prepared using the same cement. Indirect evidence of the formation of new crystalline phases is supplied.

  19. Improved memory loading techniques for the TSRV display system

    NASA Technical Reports Server (NTRS)

    Easley, W. C.; Lynn, W. A.; Mcluer, D. G.

    1986-01-01

    A recent upgrade of the TSRV research flight system at NASA Langley Research Center retained the original monochrome display system. However, the display memory loading equipment was replaced, requiring the design and development of new methods of performing this task. This paper describes the new techniques developed to load memory in the display system. An outdated paper-tape method for loading the BOOTSTRAP control program was replaced by EPROM storage of the characters contained on the tape. Rather than move a tape past an optical reader, a counter was implemented which steps sequentially through EPROM addresses and presents the same data to the loader circuitry. A cumbersome cassette-tape method for loading the applications software was replaced with a floppy disk method using a microprocessor terminal installed as part of the upgrade. The cassette memory image was transferred to disk, and a specific software loader was written for the terminal which duplicates the function of the cassette loader.

  20. Economic impact of off-line PC viewer for private folder management

    NASA Astrophysics Data System (ADS)

    Song, Koun-Sik; Shin, Myung J.; Lee, Joo Hee; Auh, Yong H.

    1999-07-01

    We developed a PC-based clinical workstation and implemented it at Asan Medical Center in Seoul, Korea. The hardware comprised a Pentium II, 8 MB of video memory, 64-128 MB of RAM, a 19-inch color monitor, and a 10/100 Mbps network adaptor. One of the unique features of this workstation is the management tool for folders residing both in the PACS short-term storage unit and on the local hard disk. Users can copy an entire study or part of a study to the local hard disk, removable storage, or a CD recorder. Even the images in private folders in PACS short-term storage can be copied to local storage devices. All images are saved in DICOM 3.0 file format with 2:1 lossless compression. We compared the prices of copy films and storage media, considering the possible savings in expensive PACS short-term storage and network traffic. The price saving over copy film is most remarkable for MR exams. The price saving arising from minimal use of the short-term storage unit was 50,000 dollars. It was hard to calculate the price savings arising from network usage. The off-line PC viewer is a cost-effective way of handling private folder management in the PACS environment.

  1. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    NASA Technical Reports Server (NTRS)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device-dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
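    The subsampling and contrast-stretch operations described above map directly onto array operations; the hedged numpy sketch below uses a synthetic image rather than a PDS file, and the DN thresholds are illustrative.

      import numpy as np

      def subsample(image, factor):
          """Keep every `factor`-th line and sample, starting at the upper left."""
          return image[::factor, ::factor]

      def stretch(image, low_dn, high_dn):
          """Linear stretch: DN <= low -> black (0), DN >= high -> white (255)."""
          scaled = (image.astype(float) - low_dn) / (high_dn - low_dn)
          return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          dn = rng.integers(40, 180, size=(800, 800), dtype=np.uint8)   # synthetic image
          small = subsample(dn, 2)              # every other pixel of every other line
          display = stretch(small, low_dn=60, high_dn=150)
          print(small.shape, int(display.min()), int(display.max()))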

  2. Global EOS: exploring the 300-ms-latency region

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Jericho, D.; Hsu, C.-Y.

    2017-10-01

    EOS, the CERN open-source distributed disk storage system, provides the high-performance storage solution for HEP analysis and the back-end for various workflows. Recently EOS became the back-end of CERNBox, the cloud synchronisation service for CERN users. EOS can be used to take advantage of wide-area distributed installations: for the last few years CERN EOS has used a common deployment across two computer centres (Geneva-Meyrin and Budapest-Wigner) about 1,000 km apart (∼20-ms latency) with about 200 PB of disk (JBOD). In late 2015, the CERN-IT Storage group and AARNET (Australia) set up a challenging R&D project: a single EOS instance between CERN and AARNET with more than 300 ms latency (16,500 km apart). This paper reports on the success in deploying and running a distributed storage system between Europe (Geneva, Budapest), Australia (Melbourne) and later Asia (ASGC Taipei), allowing different types of data placement and data access across these four sites.

  3. QualComp: a new lossy compressor for quality scores based on rate distortion theory

    PubMed Central

    2013-01-01

    Background: Next Generation Sequencing technologies have revolutionized many fields in biology by reducing the time and cost required for sequencing. As a result, large amounts of sequencing data are being generated. A typical sequencing data file may occupy tens or even hundreds of gigabytes of disk space, prohibitively large for many users. This data consists of both the nucleotide sequences and per-base quality scores that indicate the level of confidence in the readout of these sequences. Quality scores account for about half of the required disk space in the commonly used FASTQ format (before compression), and therefore the compression of the quality scores can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Results: In this paper, we present a new scheme for the lossy compression of the quality scores, to address the problem of storage. Our framework allows the user to specify the rate (bits per quality score) prior to compression, independent of the data to be compressed. Our algorithm can work at any rate, unlike other lossy compression algorithms. We envisage our algorithm as being part of a more general compression scheme that works with the entire FASTQ file. Numerical experiments show that we can achieve a better mean squared error (MSE) for small rates (bits per quality score) than other lossy compression schemes. For the organism PhiX, whose assembled genome is known and assumed to be correct, we show that it is possible to achieve a significant reduction in size with little compromise in performance on downstream applications (e.g., alignment). Conclusions: QualComp is an open source software package, written in C and freely available for download at https://sourceforge.net/projects/qualcomp. PMID:23758828
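    As a hedged illustration of rate-constrained lossy coding of quality scores (a toy uniform quantizer, not the QualComp algorithm, which allocates bits using rate-distortion theory), the sketch below maps Phred-like scores to 2^rate representative levels and reports the resulting mean squared error for a few rates.

      import numpy as np

      def quantize_qualities(quals, bits_per_score, q_min=0, q_max=41):
          """Uniformly quantize quality scores to 2**bits_per_score levels.

          Returns the reconstructed scores and the mean squared error, so the
          distortion at a given rate (bits per quality score) can be inspected.
          """
          levels = 2 ** bits_per_score
          centers = np.linspace(q_min, q_max, levels)
          idx = np.abs(quals[:, None] - centers[None, :]).argmin(axis=1)
          recon = centers[idx]
          return recon, float(np.mean((quals - recon) ** 2))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          quals = rng.integers(2, 41, size=10_000).astype(float)   # synthetic Phred scores
          for rate in (1, 2, 3, 4):
              _, mse = quantize_qualities(quals, rate)
              print(f"{rate} bits/score -> MSE {mse:.2f}")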

  4. VMOMS — A computer code for finding moment solutions to the Grad-Shafranov equation

    NASA Astrophysics Data System (ADS)

    Lao, L. L.; Wieland, R. M.; Houlberg, W. A.; Hirshman, S. P.

    1982-08-01

    Title of program: VMOMS Catalogue number: ABSH Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (See application form in this issue) Computer: PDP-10/KL10; Installation: ORNL Fusion Energy Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA Operating system: TOPS 10 Programming language used: FORTRAN High speed storage required: 9000 words No. of bits in a word: 36 Overlay structure: none Peripherals used: line printer, disk drive No. of cards in combined program and test deck: 2839 Card punching code: ASCII

  5. From Physics to industry: EOS outside HEP

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Lamanna, M.

    2017-10-01

    In the competitive market for large-scale storage solutions, EOS, the current main disk storage system at CERN, has been showing its excellence in the multi-petabyte, high-concurrency regime. It has also shown disruptive potential in powering the sync-and-share service and in supporting innovative analysis environments alongside the storage of LHC data. EOS has also generated interest as a generic storage solution, ranging from university systems to very large installations for non-HEP applications.

  6. Online performance evaluation of RAID 5 using CPU utilization

    NASA Astrophysics Data System (ADS)

    Jin, Hai; Yang, Hua; Zhang, Jiangling

    1998-09-01

    Redundant arrays of independent disks (RAID) technology is an efficient way to solve the bottleneck between CPU processing ability and the I/O subsystem. From the system point of view, the most important metric of on-line performance is CPU utilization. This paper first presents a way to calculate the CPU utilization of a system connected to RAID level 5, using a statistical averaging method. The simulation results for the CPU utilization of a system connected to a RAID level 5 subsystem show that using multiple disks as an array to access data in parallel is an efficient way to enhance the on-line performance of a disk storage system. Using high-end disk drives to compose the disk array is the key to enhancing the on-line performance of the system.

  7. Russian-US collaboration on implementation of the active well coincidence counter (AWCC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mozhajev, V.; Pshakin, G.; Stewart, J.

    The feasibility of using a standard AWCC at the Obninsk IPPE has been demonstrated through active measurements of single UO₂ (36% enriched) disks and through passive measurements of plutonium metal disks used for simulating reactor cores. The role of the measurements is to verify passport values assigned to the disks by the facility, and thereby facilitate the mass accountability procedures developed for the very large inventory of fuel disks at the facility. The AWCC is a very flexible instrument for verification measurements of the large variety of nuclear material items at the Obninsk IPPE and other Russian facilities. Future work at the IPPE will include calibration and verification measurements for other materials, both in individual disks and in multi-disk storage tubes; it will also include training in the use of the AWCC.

  8. ZFS on RBODs - Leveraging RAID Controllers for Metrics and Enclosure Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stearman, D. M.

    2015-03-30

    Traditionally, the Lustre file system has relied on the ldiskfs file system with reliable RAID (Redundant Array of Independent Disks) storage underneath. As of Lustre 2.4, ZFS was added as a backend file system, with built-in software RAID, thereby removing the need for expensive RAID controllers. ZFS was designed to work with JBOD (Just a Bunch Of Disks) storage enclosures under the Solaris operating system, which provided a rich device management system. Long-time users of the Lustre file system have relied on the RAID controllers to provide metrics and enclosure monitoring and management services, with rich APIs and command-line interfaces. This paper studies a hybrid approach using an advanced, full-featured RAID enclosure which is presented to the host as a JBOD. This RBOD (RAIDed Bunch Of Disks) allows ZFS to do the RAID protection and error correction, while the RAID controller handles management of the disks and monitors the enclosure. It was hoped that the value of the RAID controller features would offset the additional cost, and that performance would not suffer in this mode. The test results revealed that the hybrid RBOD approach did suffer reduced performance.

  9. Hybrid RAID With Dual Control Architecture for SSD Reliability

    NASA Astrophysics Data System (ADS)

    Chatterjee, Santanu

    2010-10-01

    Solid-state devices (SSDs), which are increasingly being adopted in today's data storage systems, have higher capacity and performance but lower reliability, which leads to more frequent rebuilds and to a higher risk. Although an SSD is very energy efficient compared to hard disk drives, the bit error rate (BER) of an SSD requires expensive erase operations between successive writes. Parity-based RAID (for example RAID 4, 5, or 6) provides data integrity using parity information and supports the loss of any one drive (RAID 4, 5) or two drives (RAID 6), but the parity blocks are updated more often than the data blocks due to random access patterns, so SSD devices holding more parity receive more writes and consequently age faster. To address this problem, in this paper we propose a model-based hybrid disk array architecture in which we plan to use the RAID 4 (striping with parity) technique with SSD drives as data drives, while any fast hard disk drives of the same capacity can be used as dedicated parity drives. The proposed architecture opens the door to using commodity SSDs past their erasure limit and can also reduce the need for expensive hardware error correction code (ECC) in the devices.
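
    The dedicated-parity idea behind RAID 4 can be illustrated in a few lines of Python: parity is the byte-wise XOR of the blocks on the data drives, and any single lost block can be rebuilt from the survivors plus the parity. This is a minimal sketch of the parity arithmetic only; caching, striping geometry, and the SSD/HDD split described above are not modeled.

      # Hedged sketch of the RAID-4 idea described above: byte-wise XOR parity
      # computed across N data drives (the SSDs) and written to one dedicated
      # parity drive (the HDD). Block layout, caching, and ECC are omitted.

      def xor_blocks(blocks):
          parity = bytearray(len(blocks[0]))
          for blk in blocks:
              for i, b in enumerate(blk):
                  parity[i] ^= b
          return bytes(parity)

      data = [b"SSD0data", b"SSD1data", b"SSD2data"]   # stripes on the data drives
      parity = xor_blocks(data)                        # stored on the parity drive

      # Recover drive 1 after a failure from the survivors plus the parity block.
      recovered = xor_blocks([data[0], data[2], parity])
      assert recovered == data[1]
      print("recovered:", recovered)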

  10. Curriculum Bank for Individualized Electronic Instruction. Final Report.

    ERIC Educational Resources Information Center

    Williamson, Bert; Pedersen, Joe F.

    Objectives of this project were to update and convert to disk storage appropriate handout materials for courses for the electronic technology open classroom. Project activities were an ERIC search for computer-managed instructional materials; updating of the course outline, lesson outlines, information handouts, and unit tests; and storage of the…

  11. The Stoner-Wohlfarth Model of Ferromagnetism

    ERIC Educational Resources Information Center

    Tannous, C.; Gieraltowski, J.

    2008-01-01

    The Stoner-Wohlfarth (SW) model is the simplest model that describes adequately the physics of fine magnetic grains, the magnetization of which can be used in digital magnetic storage (floppies, hard disks and tapes). Magnetic storage density is presently increasing steadily in almost the same way as electronic device size and circuitry are…

  12. The raw disk I/O performance of Compaq StorageWorks RAID arrays under Tru64 UNIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselton, A C

    2000-10-19

    We report on the raw disk I/O performance of a set of Compaq StorageWorks RAID arrays connected to our cluster of Compaq ES40 computers via Fibre Channel. The best cumulative peak sustained data rate is 117 MB/s per node for reads and 77 MB/s per node for writes. This value occurs for a configuration in which a node has two Fibre Channel interfaces to a switch, which in turn has two connections to each of two Compaq StorageWorks RAID arrays. Each RAID array has two HSG80 RAID controllers controlling (together) two 5+P RAID chains. A 10% more space-efficient arrangement using a single 11+P RAID chain in place of the two 5+P chains is 25% slower for reads and 40% slower for writes.
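
    The claimed 10% space advantage of the single 11+P chain over two 5+P chains follows directly from the usable-disk fractions, as this short check shows (assuming 12 disks in both layouts):

      # Worked check of the "10% more space efficient" claim: two 5+P chains use
      # 12 disks for 10 disks of data, while a single 11+P chain uses the same
      # 12 disks for 11 disks of data.

      two_chains = 10 / 12      # usable fraction, 2 x (5 data + 1 parity)
      one_chain = 11 / 12       # usable fraction, 11 data + 1 parity
      print(round(one_chain / two_chains - 1, 2))   # -> 0.1, i.e. about 10%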

  13. Optical system storage design with diffractive optical elements

    NASA Technical Reports Server (NTRS)

    Kostuk, Raymond K.; Haggans, Charles W.

    1993-01-01

    Optical data storage systems are gaining widespread acceptance due to their high areal density and the ability to remove the high capacity hard disk from the system. In magneto-optical read-write systems, a small rotation of the polarization state in the return signal from the MO media is the signal which must be sensed. A typical arrangement used for detecting these signals and correcting for errors in tracking and focusing on the disk is illustrated. The components required to achieve these functions are listed. The assembly and alignment of this complex system has a direct impact on cost, and also affects the size, weight, and corresponding data access rates. As a result, integrating these optical components and improving packaging techniques is an active area of research and development. Most designs of binary optic elements have been concerned with optimizing grating efficiency. However, rigorous coupled wave models for vector field diffraction from grating surfaces can be extended to determine the phase and polarization state of the diffracted field, and the design of polarization components. A typical grating geometry and the phase and polarization angles associated with the incident and diffracted fields are shown. In our current stage of work, we are examining system configurations which cascade several polarization functions on a single substrate. In this design, the beam returning from the MO disk illuminates a cascaded grating element which first couples light into the substrate, then introduces a quarter wave retardation, then a polarization rotation, and finally separates s- and p-polarized fields through a polarization beam splitter. The input coupler and polarization beam splitter are formed in volume gratings, and the two intermediate elements are zero-order elements.

  14. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long-running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructure, resulting in more complex tool development to encompass all possible storage architectures used for the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) are an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures a seamless integration of all the different storage architectures, from standard block-based POSIX-compliant storage disks, to object-based architectures such as the S3-compliant HGST Active Archive System, to the Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools to utilize such new systems. Most file systems are designed as a single application responsible for using metadata to organize the data into a tree, for determining the location for data storage, and for providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.
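
    The loose coupling described above can be pictured as a thin abstraction layer in Python: tools ask a catalog where a file lives and get bytes back, regardless of whether the bytes sit on a POSIX disk or in an object store. The class and method names below are hypothetical and are not taken from LVFS; the object-store call follows the common S3 get_object style as an assumption.

      # Hedged sketch (not LVFS code): decoupling the "where is the data" decision
      # from the tools that read it, so POSIX disks and object stores can sit
      # behind one interface. Class and method names here are hypothetical.

      from abc import ABC, abstractmethod

      class StorageBackend(ABC):
          @abstractmethod
          def read(self, key: str) -> bytes: ...

      class PosixBackend(StorageBackend):
          def __init__(self, root):
              self.root = root
          def read(self, key):
              with open(f"{self.root}/{key}", "rb") as f:
                  return f.read()

      class ObjectBackend(StorageBackend):
          def __init__(self, client, bucket):
              self.client, self.bucket = client, bucket
          def read(self, key):
              return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()

      def open_granule(catalog: dict, name: str) -> bytes:
          """Tools call this; the catalog decides which backend holds the file."""
          backend, key = catalog[name]
          return backend.read(key)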

  15. Disk storage at CERN

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Cano, E.; Chan, B.; Espinal, X.; Fiorot, A.; González Labrador, H.; Iven, J.; Lamanna, M.; Lo Presti, G.; Mościcki, JT; Peters, AJ; Ponce, S.; Rousseau, H.; van der Ster, D.

    2015-12-01

    CERN IT DSS operates the main storage resources for data taking and physics analysis, mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses the two CERN Tier0 centres (Meyrin and Wigner) with a 50:50 ratio. IT DSS also provides sizeable on-demand resources for IT services, most notably OpenStack and NFS-based clients: this is provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We will describe our operational experience and recent changes to these systems, with special emphasis on the present usage for LHC data taking and the convergence to commodity hardware (nodes with 200 TB each, with optional SSDs) shared across all services. We also describe our experience in coupling commodity and home-grown solutions (e.g. CERNBox integration in EOS, Ceph disk pools for AFS, CASTOR and NFS) and finally the future evolution of these systems for WLCG and beyond.

  16. Towards Transparent Throughput Elasticity for IaaS Cloud Storage: Exploring the Benefits of Adaptive Block-Level Caching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicolae, Bogdan; Riteau, Pierre; Keahey, Kate

    Storage elasticity on IaaS clouds is a crucial feature in the age of data-intensive computing, especially when considering fluctuations of I/O throughput. This paper provides a transparent solution that automatically boosts I/O bandwidth during peaks for underlying virtual disks, effectively avoiding over-provisioning without performance loss. The authors' proposal relies on the idea of leveraging short-lived virtual disks of better performance characteristics (and thus more expensive) to act during peaks as a caching layer for the persistent virtual disks where the application data is stored. Furthermore, they introduce a performance and cost prediction methodology that can be used both independently to estimate in advance what trade-off between performance and cost is possible, as well as an optimization technique that enables better cache size selection to meet the desired performance level with minimal cost. The authors demonstrate the benefits of their proposal both for microbenchmarks and for two real-life applications using large-scale experiments.
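
    The cache-size selection step can be pictured as a small optimization over candidate sizes: pick the cheapest size whose predicted throughput still meets the target. The Python sketch below assumes a made-up linear prediction function purely for illustration; it is not the authors' prediction methodology.

      # Hedged sketch of the cache-size selection idea: given a predicted
      # throughput and cost for each candidate cache size, pick the cheapest
      # size that still meets the desired performance level. The prediction
      # function here is a made-up placeholder, not the authors' model.

      def pick_cache_size(candidates_gb, predict, target_mbps):
          feasible = [(predict(c)["cost"], c) for c in candidates_gb
                      if predict(c)["throughput"] >= target_mbps]
          return min(feasible)[1] if feasible else None

      predict = lambda gb: {"throughput": 100 + 8 * gb, "cost": 0.05 * gb}
      print(pick_cache_size([0, 8, 16, 32, 64], predict, target_mbps=220))  # -> 16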

  17. Multi-terabyte EIDE disk arrays running Linux RAID5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.

  18. A novel anti-piracy optical disk with photochromic diarylethene

    NASA Astrophysics Data System (ADS)

    Liu, Guodong; Cao, Guoqiang; Huang, Zhen; Wang, Shenqian; Zou, Daowen

    2005-09-01

    Diarylethene is a photochromic material with many advantages and one of the most promising recording materials for huge optical data storage. Diarylethene has two forms, which can be converted into each other by laser beams of different wavelengths. The material has been researched for rewritable optical disks. Volatile data storage is one of its properties, which has always been considered an obstacle to practical use, and much research has long been devoted to overcoming it. In fact, volatile data storage is very useful for anti-piracy optical data storage. Piracy is a social and economic problem. One anti-piracy technology for optical data storage is to limit readout of the recorded data by encryption software; with the development of computer technologies, this kind of software is more and more easily cracked. Using photochromic diarylethene as the optical recording material, the signals of the recorded data are degraded when they are read, and readout of the data is thereby limited. Because this method uses hardware to realize anti-piracy, it is impossible to crack. In this paper, we introduce this usage of the material. Some experiments are presented to prove its feasibility.

  19. Effect of cleaning methods after reduced-pressure air abrasion on bonding to zirconia ceramic.

    PubMed

    Attia, Ahmed; Kern, Matthias

    2011-12-01

    To evaluate in vitro the influence of different cleaning methods after low-pressure air abrasion on the bond strength of a phosphate monomer-containing luting resin to zirconia ceramic. A total of 112 zirconia ceramic disks were divided into 7 groups (n = 16). In the test groups, disks were air abraded at a low (L) pressure of 0.05 MPa using 50-μm alumina particles. Prior to bonding, the disks were ultrasonically (U) cleaned either in isopropanol (AC), hydrofluoric acid (HF), demineralized water (DW), or tap water (TW), or they were used without ultrasonic cleaning. Disks air abraded at a high (H) pressure of 0.25 MPa and cleaned ultrasonically in isopropanol served as the positive control; original (O) milled disks used without air abrasion served as the negative control group. Plexiglas tubes filled with composite resin were bonded with the adhesive luting resin Panavia 21 to the ceramic disks. Prior to testing tensile bond strength (TBS), each main group was further subdivided into 2 subgroups (n = 8) which were stored in distilled water either at 37°C for 3 days or for 30 days with 7500 thermal cycles. Statistical analyses were conducted with two- and one-way analyses of variance (ANOVA) and Tukey's HSD test. Initial TBS ranged from 32.6 to 42.8 MPa. After 30 days of storage in water with thermocycling, TBS ranged from 21.9 to 36.3 MPa. Storage in water and thermocycling significantly decreased the TBS of test groups which were not air abraded (p = 0.05) or which were air abraded but cleaned in tap water (p = 0.002), but not the TBS of the other groups (p > 0.05). Also, the TBS of the air-abraded groups was significantly higher than the TBS of the original milled disks (p < 0.01). Cleaning procedures did not significantly affect TBS either after 3 days or 30 days of storage in water and thermocycling (p > 0.05). Air abrasion at 0.05 MPa and ultrasonic cleaning are important factors for improving bonding to zirconia ceramic.

  20. Digital Photography and Its Impact on Instruction.

    ERIC Educational Resources Information Center

    Lantz, Chris

    Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…

  1. $ANBA; a rapid, combined data acquisition and correction program for the SEMQ electron microprobe

    USGS Publications Warehouse

    McGee, James J.

    1983-01-01

    $ANBA is a program developed for rapid data acquisition and correction on an automated SEMQ electron microprobe. The program provides increased analytical speed and reduced disk read/write operations compared with the manufacturer's software, resulting in a doubling of analytical throughput. In addition, the program provides enhanced analytical features such as averaging, rapid and compact data storage, and on-line plotting. The program is described with design philosophy, flow charts, variable names, a complete program listing, and system requirements. A complete operating example and notes to assist in running the program are included.

  2. Moore's law realities for recording systems and memory storage components: HDD, tape, NAND, and optical

    NASA Astrophysics Data System (ADS)

    Fontana, Robert E.; Decad, Gary M.

    2018-05-01

    This paper describes trends in the storage technologies associated with Linear Tape Open (LTO) tape cartridges, hard disk drives (HDD), and NAND flash-based storage devices, including solid-state drives (SSD). The discussion centers on the relationship between cost/bit and bit density and, specifically, on how the Moore's Law expectation of areal density doubling and cost/bit halving every two years is no longer being achieved for storage components. This observation and the accompanying Moore's Law discussion are supported with nine-year storage technology trend data assembled from publicly available industry reporting sources.

  3. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  4. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  5. A Disk-Based System for Producing and Distributing Science Products from MODIS

    NASA Technical Reports Server (NTRS)

    Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael

    2007-01-01

    Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.

  6. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
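
    The final CRC stage mentioned above can be illustrated in isolation: a checksum stored with the sector is recomputed after decoding, and any surviving burst damage flips the comparison. The Python sketch uses zlib's generic CRC-32 only as an illustration; it is not the CD-ROM EDC polynomial, and the sector contents are synthetic.

      # Hedged illustration of the final CRC check stage: a stored checksum is
      # recomputed after decoding and compared. zlib's CRC-32 is used here only
      # for illustration; it is not the CD-ROM EDC polynomial.

      import zlib

      sector = bytearray(b"user data payload of one CD-ROM sector" * 50)
      stored_crc = zlib.crc32(sector)

      # Simulate an uncorrected (or miscorrected) burst left over after decoding.
      sector[100:108] = b"\x00" * 8

      print("CRC ok:", zlib.crc32(sector) == stored_crc)   # -> False, error flagged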

  7. Analysis of error-correction constraints in an optical disk

    NASA Astrophysics Data System (ADS)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  8. The ATLAS Tier-3 in Geneva and the Trigger Development Facility

    NASA Astrophysics Data System (ADS)

    Gadomski, S.; Meunier, Y.; Pasche, P.; Baud, J.-P.; ATLAS Collaboration

    2011-12-01

    The ATLAS Tier-3 farm at the University of Geneva provides storage and processing power for analysis of ATLAS data. In addition, the facility is used for development, validation and commissioning of the High Level Trigger of ATLAS [1]. The latter purpose leads to additional requirements on the availability of the latest software and data, which will be presented. The farm is also a part of the WLCG [2], and is available to all members of the ATLAS Virtual Organization. The farm currently provides 268 CPU cores and 177 TB of storage space. A grid Storage Element, implemented with the Disk Pool Manager software [3], is available and integrated with the ATLAS Distributed Data Management system [4]. The batch system can be used directly by local users, or with a grid interface provided by the NorduGrid ARC middleware [5]. In this article we will present the use cases that we support, as well as our experience with the software and the hardware we are using. Results of I/O benchmarking tests, which were done for our DPM Storage Element and for the NFS servers we are using, will also be presented.

  9. Development of an Aeroelastic Analysis Including a Viscous Flow Model

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Bakhle, Milind A.

    2001-01-01

    Under this grant, Version 4 of the three-dimensional Navier-Stokes aeroelastic code (TURBO-AE) has been developed and verified. The TURBO-AE Version 4 aeroelastic code allows flutter calculations for a fan, compressor, or turbine blade row. This code models a vibrating three-dimensional bladed disk configuration and the associated unsteady flow (including shocks and viscous effects) to calculate the aeroelastic instability using a work-per-cycle approach. Phase-lagged (time-shift) periodic boundary conditions are used to model the phase lag between adjacent vibrating blades. The direct-store approach is used for this purpose to reduce the computational domain to a single interblade passage. A disk storage option, implemented using direct access files, is available to reduce the large memory requirements of the direct-store approach. Other researchers have implemented 3D inlet/exit boundary conditions based on eigen-analysis. Appendix A: Aeroelastic calculations based on three-dimensional Euler analysis. Appendix B: Unsteady aerodynamic modeling of blade vibration using the TURBO-V3.1 code.

  10. The Design and Evolution of Jefferson Lab's Jasmine Mass Storage System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryan Hess; M. Andrew Kowalski; Michael Haddox-Schatz

    We describe the Jasmine mass storage system, in operation since 2001. Jasmine has scaled to meet the challenges of grid applications, petabyte class storage, and hundreds of MB/sec throughput using commodity hardware, Java technologies, and a small but focused development team. The evolution of the integrated disk cache system, which provides a managed online subset of the tape contents, is examined in detail. We describe how the storage system has grown to meet the special needs of the batch farm, grid clients, and new performance demands.

  11. Digital Holographic Memories

    NASA Astrophysics Data System (ADS)

    Hesselink, Lambertus; Orlov, Sergei S.

    Optical data storage is a phenomenal success story. Since its introduction in the early 1980s, optical data storage devices have evolved from being focused primarily on music distribution, to becoming the prevailing data distribution and recording medium. Each year, billions of optical recordable and prerecorded disks are sold worldwide. Almost every computer today is shipped with a CD or DVD drive installed.

  12. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs growing by orders of magnitude, which will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bioinformatics.

  13. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operates in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.
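
    A minimal way to picture block-level access served by parallel disk servers is round-robin striping of logical block addresses across servers and fetching a range of blocks concurrently, as in the Python sketch below. The server list, block size, and in-memory stand-in for network fetches are all assumptions for illustration; this is not the system described in the paper.

      # Hedged sketch of the parallel-server idea: a logical block address is
      # mapped to one of several servers (round-robin striping) and blocks are
      # fetched concurrently. Network I/O is faked with a local dictionary.

      from concurrent.futures import ThreadPoolExecutor

      SERVERS = [{"name": f"dps{i}", "blocks": {}} for i in range(4)]

      def locate(lba):                      # striping: block -> (server, local key)
          return SERVERS[lba % len(SERVERS)], lba

      def read_block(lba):
          server, key = locate(lba)
          return server["blocks"].get(key, b"\x00" * 4096)   # stand-in for a fetch

      def read_range(first_lba, count):
          with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
              return list(pool.map(read_block, range(first_lba, first_lba + count)))

      blocks = read_range(0, 16)
      print(len(blocks), "blocks,", sum(len(b) for b in blocks), "bytes")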

  14. Up-to-date state of storage techniques used for large numerical data files

    NASA Technical Reports Server (NTRS)

    Chlouba, V.

    1975-01-01

    Methods for data storage and output in data banks and memory files are discussed along with a survey of equipment available for this. Topics discussed include magnetic tapes, magnetic disks, Terabit magnetic tape memory, Unicon 690 laser memory, IBM 1360 photostore, microfilm recording equipment, holographic recording, film readers, optical character readers, digital data storage techniques, and photographic recording. The individual types of equipment are summarized in tables giving the basic technical parameters.

  15. Storage media pipelining: Making good use of fine-grained media

    NASA Technical Reports Server (NTRS)

    Vanmeter, Rodney

    1993-01-01

    This paper proposes a new high-performance paradigm for accessing removable media such as tapes and especially magneto-optical disks. In high-performance computing the striping of data across multiple devices is a common means of improving data transfer rates. Striping has been used very successfully for fixed magnetic disks improving overall system reliability as well as throughput. It has also been proposed as a solution for providing improved bandwidth for tape and magneto-optical subsystems. However, striping of removable media has shortcomings, particularly in the areas of latency to data and restricted system configurations, and is suitable primarily for very large I/Os. We propose that for fine-grained media, an alternative access method, media pipelining, may be used to provide high bandwidth for large requests while retaining the flexibility to support concurrent small requests and different system configurations. Its principal drawback is high buffering requirements in the host computer or file server. This paper discusses the possible organization of such a system including the hardware conditions under which it may be effective, and the flexibility of configuration. Its expected performance is discussed under varying workloads including large single I/O's and numerous smaller ones. Finally, a specific system incorporating a high-transfer-rate magneto-optical disk drive and autochanger is discussed.

  16. Computer Sciences and Data Systems, volume 2

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: data storage; information network architecture; VHSIC technology; fiber optics; laser applications; distributed processing; spaceborne optical disk controller; massively parallel processors; and advanced digital SAR processors.

  17. Automated Camouflage Pattern Generation Technology Survey.

    DTIC Science & Technology

    1985-08-07

    supported by high speed data communications? Costs: What are your rates? $/CPU hour: $/MB disk storage/day: $/connect hour: other charges: What are your... data to the workstation, tape drives are needed for backing up and archiving completed patterns, 256 megabytes of on-line hard disk space as a minimum... is needed to support multiple processes and data files, and 4 megabytes of actual or virtual memory is needed to process the largest expected single

  18. Site Partitioning for Redundant Arrays of Distributed Disks

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.

    1996-01-01

    Redundant arrays of distributed disks (RADD) can be used in a distributed computing system or database system to provide recovery in the presence of disk crashes and temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites of a distributed storage system into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-hard. We then propose and evaluate several heuristic algorithms for finding approximate solutions. Simulation results show that significant reduction in remote parity update costs can be achieved by optimizing the site partitioning scheme.
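
    As a concrete (if simplistic) illustration of the heuristic idea, the Python sketch below greedily builds arrays of k sites each, always adding the site with the smallest total communication cost to the members already chosen. This is a generic greedy heuristic written for illustration under an assumed pairwise cost matrix; it is not one of the specific algorithms evaluated in the paper.

      # Hedged greedy heuristic (not one of the paper's algorithms): build arrays
      # of k sites each, repeatedly adding the site whose total communication
      # cost to the members already in the array is smallest.

      def greedy_partition(cost, k):
          """cost[i][j] = parity-update communication cost between sites i and j."""
          unassigned = set(range(len(cost)))
          arrays = []
          while unassigned:
              seed = unassigned.pop()
              group = [seed]
              while len(group) < k and unassigned:
                  best = min(unassigned, key=lambda s: sum(cost[s][m] for m in group))
                  unassigned.remove(best)
                  group.append(best)
              arrays.append(group)
          return arrays

      cost = [[0, 1, 9, 9], [1, 0, 9, 9], [9, 9, 0, 2], [9, 9, 2, 0]]
      print(greedy_partition(cost, k=2))   # e.g. [[0, 1], [2, 3]]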

  19. Long-Term file activity patterns in a UNIX workstation environment

    NASA Technical Reports Server (NTRS)

    Gibson, Timothy J.; Miller, Ethan L.

    1998-01-01

    As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.

  20. The Role of Comets as Possible Contributors of Water and Prebiotic Organics to Terrestrial Planets

    NASA Technical Reports Server (NTRS)

    Mumma, Michael J.; Charnley, S. B.

    2011-01-01

    The question of exogenous delivery of organics and water to Earth and other young planets is of critical importance for understanding the origin of Earth's water, and for assessing the prospects for existence of Earth-like exo-planets. Viewed from a cosmic perspective, Earth is a dry planet yet its oceans are enriched in deuterium by a large factor relative to nebular hydrogen. Can comets have delivered Earth's water? The deuterium content of comets is key to assessing their role as contributors of water to Earth. Icy bodies today reside in two distinct reservoirs, the Oort Cloud and the Kuiper Disk (divided into the classical disk, the scattered disk, and the detached or extended disk populations). Orbital parameters can indicate the cosmic storage reservoir for a given comet. Knowledge of the diversity of comets within a reservoir assists in assessing their possible contribution to early Earth, but requires quantitative knowledge of their components - dust and ice. Strong gradients in temperature and chemistry in the proto-planetary disk, coupled with dynamical dispersion of an outer disk of icy planetesimals, imply that comets from KD and OC reservoirs should have diverse composition. The primary volatiles (native to the nucleus) provide the preferred metric for building a taxonomy for comets, and the number of comets so quantified is growing rapidly. Taxonomies based on native species (primary volatiles) are now beginning to emerge [1, 2, 3]. The measurement of cosmic parameters such as the nuclear spin temperatures for H2O, NH3 and CH4, and of enrichment factors for isotopologues (D/H in water and hydrogen cyanide, N-14/N-15 in CN and hydrogen cyanide) provide additional tests of the origin of cometary material. I will provide an overview of these aspects, and implications for the origin of Earth's water and prebiotic organics.

  1. The Scalable Checkpoint/Restart Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
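
    The write-locally-then-restart-from-latest pattern that SCR builds on can be sketched in a few lines of Python. This is not the SCR API; the checkpoint directory, file naming, and pickle format below are assumptions chosen only to illustrate caching application-level checkpoints in node-local storage.

      # Hedged sketch of application-level checkpointing to node-local storage
      # (e.g. /tmp on a RAM disk). This is not the SCR API; it only shows the
      # write-locally-then-restart-from-latest pattern the library builds on.

      import os, pickle, glob

      CKPT_DIR = "/tmp/ckpt"          # assumed node-local path

      def write_checkpoint(step, state):
          os.makedirs(CKPT_DIR, exist_ok=True)
          path = os.path.join(CKPT_DIR, f"state.{step:06d}.pkl")
          with open(path, "wb") as f:
              pickle.dump(state, f)
          return path

      def latest_checkpoint():
          files = sorted(glob.glob(os.path.join(CKPT_DIR, "state.*.pkl")))
          if not files:
              return 0, {}
          with open(files[-1], "rb") as f:
              return int(files[-1].split(".")[-2]), pickle.load(f)

      step, state = latest_checkpoint()          # resume after a failure
      for step in range(step + 1, step + 4):
          state["x"] = state.get("x", 0) + 1     # stand-in for real computation
          write_checkpoint(step, state)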

  2. One-Dimensional Signal Extraction Of Paper-Written ECG Image And Its Archiving

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-ni; Zhang, Hong; Zhuang, Tian-ge

    1987-10-01

    A method for converting paper-written electrocardiograms to one-dimensional (1-D) signals for archival storage on floppy disk is presented here. Appropriate image processing techniques were employed to remove the background noise inherent to ECG recorder charts and to reconstruct the ECG waveform. The entire process consists of (1) digitization of paper-written ECGs with an image processing system via a TV camera; (2) image preprocessing, including histogram filtering and binary image generation; (3) ECG feature extraction and ECG wave tracing; and (4) transmission of the processed ECG data to IBM-PC compatible floppy disks for storage and retrieval. The algorithms employed here may also be used in the recognition of paper-written EEG or EMG and may be useful in robotic vision.
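
    Step (3), tracing the waveform out of the cleaned image, can be illustrated with a column-wise scan: after thresholding, each column's dark pixels are averaged to one sample of the 1-D signal. The tiny synthetic image and the threshold below are assumptions made for illustration; grid removal and the paper's actual feature-extraction logic are omitted.

      # Hedged sketch of the waveform-tracing step: binarize a grayscale scan and,
      # for each column, take the mean row index of the dark (trace) pixels to get
      # a one-dimensional signal. Grid removal and filtering are omitted.

      def trace_ecg(image, threshold=128):
          """image: list of rows of grayscale pixel values (0=black, 255=white)."""
          height, width = len(image), len(image[0])
          signal = []
          for x in range(width):
              dark_rows = [y for y in range(height) if image[y][x] < threshold]
              signal.append(sum(dark_rows) / len(dark_rows) if dark_rows else None)
          return signal

      # Tiny synthetic 5x6 scan: the dark pixels wander up and down one row.
      img = [[255] * 6 for _ in range(5)]
      for x, y in enumerate([3, 2, 1, 2, 3, 2]):
          img[y][x] = 0
      print(trace_ecg(img))   # -> [3.0, 2.0, 1.0, 2.0, 3.0, 2.0]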

  3. A Future Accelerated Cognitive Distributed Hybrid Testbed for Big Data Science Analytics

    NASA Astrophysics Data System (ADS)

    Halem, M.; Prathapan, S.; Golpayegani, N.; Huang, Y.; Blattner, T.; Dorband, J. E.

    2016-12-01

    As increased sensor spectral data volumes from current and future Earth Observing satellites are assimilated into high-resolution climate models, intensive cognitive machine learning technologies are needed to data mine, extract and intercompare model outputs. It is clear today that the next generation of computers and storage, beyond petascale cluster architectures, will be data centric. They will manage data movement and process data in place. Future cluster nodes have been announced that integrate multiple CPUs with high-speed links to GPUs and MICs on their backplanes, with massive non-volatile RAM and access to active flash RAM disk storage. Active Ethernet-connected key-value store disk drives with 10 GbE or higher are now available through the Kinetic Open Storage Alliance. At the UMBC Center for Hybrid Multicore Productivity Research, a future state-of-the-art Accelerated Cognitive Computer System (ACCS) for Big Data science is being integrated into the current IBM iDataPlex computational system 'bluewave'. Based on the next-generation IBM 200 PF Sierra processor, an interim two-node IBM Power S822 testbed is being integrated with dual POWER8 processors with 10 cores, 1 TB of RAM, a PCIe-attached K80 GPU, and a Coherent Accelerator Processor Interface (CAPI) FPGA card to 20 TB of flash RAM. This system is to be updated to the POWER8+ with NVLink 1.0 and the Pascal GPU late in 2016. Moreover, the Seagate 96 TB Kinetic disk system with 24 Ethernet-connected active disks is integrated into the ACCS storage system. A Lightweight Virtual File System developed at NASA GSFC is installed on bluewave. Since remote access to publicly available quantum annealing computers is available at several government labs, the ACCS will offer an in-line Restricted Boltzmann Machine optimization capability on the D-Wave 2X quantum annealing processor over the campus high-speed 100 Gb network to Internet2 for large files. As an evaluation test of the cognitive functionality of the architecture, the following studies utilizing all the system components will be presented: (i) a near-real-time climate change study generating CO2 fluxes, (ii) a deep-dive capability into an 8000 x 8000 pixel image pyramid display, and (iii) large dense and sparse eigenvalue decompositions.

  4. Effect of storage temperature on survival and recovery of thermal and extrusion injured Escherichia coli K-12 in whey protein concentrate and corn meal.

    PubMed

    Ukuku, Dike O; Mukhopadhyay, Sudarsan; Onwulata, Charles

    2013-01-01

    Previously, we reported inactivation of Escherichia coli populations in corn product (CP) and whey protein product (WPP) extruded at different temperatures. However, the effect of storage temperature on injured bacterial populations was not addressed. In this study, the effect of storage temperature on the survival and recovery of thermal death time (TDT) disk- and extrusion-injured E. coli populations in CP and WPP was investigated. CP and WPP inoculated with E. coli at 7.8 log(10) CFU/g were conveyed separately into the extruder with a series 6300 digital type T-35 twin screw volumetric feeder set at a speed of 600 rpm and extruded at 35°C, 55°C, 75°C, and 95°C, or thermally treated in TDT disks submerged in a water bath set at 35°C, 55°C, 75°C, and 95°C for 120 s. Populations of surviving bacteria, including injured cells, in all treated samples were determined immediately and every day for 5 days, and up to 10 days for untreated samples, during storage at 5°C, 10°C, and 23°C. TDT disk treatment at 35°C and 55°C did not cause significant changes in the population of the surviving bacteria, including injured populations. Extrusion treatment at 35°C and 55°C led to significant (p<0.05) reductions of E. coli populations in WPP as opposed to CP. The injured populations among the surviving E. coli cells in CP and WPP extruded at all temperatures tested were inactivated during storage. The population of E. coli inactivated in samples extruded at 75°C was significantly (p<0.05) different from that at 55°C during storage. The percent injured population could not be determined in samples extruded at 95°C due to the absence of colony-forming units on the agar plates. The results of this study showed that further inactivation of the injured populations occurred during storage at 5°C for 5 days, suggesting the need for immediate storage of 75°C-extruded CP and WPP at 5°C for at least 24 h to enhance their microbial safety.

  5. The LHCb Grid Simulation: Proof of Concept

    NASA Astrophysics Data System (ADS)

    Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.

    2017-10-01

    The Worldwide LHC Computing Grid provides researchers in different geographical locations with access to data and to the computational resources needed to analyze it. The grid has a hierarchical topology with multiple sites distributed over the world, with varying numbers of CPUs, amounts of disk storage, and connection bandwidths. Job scheduling and the data distribution strategy are key elements of grid performance. Optimizing the algorithms for those tasks requires testing them on the real grid, which is hard to achieve. Having a grid simulator may simplify this task and therefore lead to more optimal scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.

  6. The advantage of an alternative substrate over Al/NiP disks

    NASA Astrophysics Data System (ADS)

    Jiaa, Chi L.; Eltoukhy, Atef

    1994-02-01

    Compact-size disk drives with high storage densities are in high demand due to the popularity of portable computers and workstations. The contact-start-stop (CSS) endurance performance must improve in order to accommodate the higher number of on/off cycles. In this paper, we looked at 65 mm thin-film canasite substrate disks and evaluated their mechanical performance. We compared them with conventional aluminum NiP-plated disks in surface topography, take-off time with changes of skew angle and radius, CSS, drag test and glide height performance, and clamping effect. In addition, a new post-sputter process aimed at improving take-off, glide, and CSS performance was investigated and demonstrated for the canasite disks. The test results indicate that canasite achieved a lower take-off velocity, higher clamping resistance, and better glide height and CSS endurance performance. This study concludes that a new-generation disk drive equipped with canasite substrate disks will consume less power from the motor due to faster take-off and lighter weight, achieve higher recording density since the head flies lower, can better withstand damage from sliding friction during CSS operations, and will be less prone to disk distortion from clamping due to its superior mechanical properties.

  7. Converged photonic data storage and switch platform for exascale disaggregated data centers

    NASA Astrophysics Data System (ADS)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  8. In-Storage Embedded Accelerator for Sparse Pattern Processing

    DTIC Science & Technology

    2016-08-13

    performance of RAM disk. Since this configuration offloads most of processing onto the FPGA, the host software consists of only two threads for... [Fig. 13: Documents Processed vs. CPU Threads] Note that BlueDBM efficiency comes from our in-store processing paradigm that uses the FPGA... In-Storage Embedded Accelerator for Sparse Pattern Processing Sang-Woo Jun*, Huy T. Nguyen#, Vijay Gadepally#*, and Arvind* #MIT Lincoln Laboratory

  9. Storage resource manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Bakken, J.; Petravick, D.

    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing for uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing clogging of storage systems with 'orphan' files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.

  10. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
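
    A queuing-network model is built from per-device queues; the Python snippet below shows the simplest possible building block, an M/M/1 queue, to relate utilization and mean response time for a single storage device. The arrival and service rates are made-up numbers, and the paper's full queuing network (and its validation by simulation) is far richer than this.

      # Hedged single-queue example (M/M/1), far simpler than the paper's queuing
      # network model, showing how utilization drives response time for one
      # storage device in the hierarchy.

      def mm1(arrival_rate, service_rate):
          rho = arrival_rate / service_rate          # utilization
          if rho >= 1:
              raise ValueError("queue is unstable")
          response = 1.0 / (service_rate - arrival_rate)
          return rho, response

      for lam in (10, 40, 70, 90):                   # requests per second
          rho, r = mm1(lam, service_rate=100)
          print(f"utilization={rho:.2f}  mean response={r*1000:.1f} ms")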

  11. Free Vibration Analysis of a Spinning Flexible DISK-SPINDLE System Supported by Ball Bearing and Flexible Shaft Using the Finite Element Method and Substructure Synthesis

    NASA Astrophysics Data System (ADS)

    JANG, G. H.; LEE, S. H.; JUNG, M. S.

    2002-03-01

    Free vibration of a spinning flexible disk-spindle system supported by ball bearing and flexible shaft is analyzed by using Hamilton's principle, FEM and substructure synthesis. The spinning disk is described by using the Kirchhoff plate theory and von Karman non-linear strain. The rotating spindle and stationary shaft are modelled by Rayleigh beam and Euler beam respectively. Using Hamilton's principle and including the rigid body translation and tilting motion, partial differential equations of motion of the spinning flexible disk and spindle are derived consistently to satisfy the geometric compatibility in the internal boundary between substructures. FEM is used to discretize the derived governing equations, and substructure synthesis is introduced to assemble each component of the disk-spindle-bearing-shaft system. The developed method is applied to the spindle system of a computer hard disk drive with three disks, and modal testing is performed to verify the simulation results. The simulation result agrees very well with the experimental one. This research investigates critical design parameters in an HDD spindle system, i.e., the non-linearity of a spinning disk and the flexibility and boundary condition of a stationary shaft, to predict the free vibration characteristics accurately. The proposed method may be effectively applied to predict the vibration characteristics of a spinning flexible disk-spindle system supported by ball bearing and flexible shaft in the various forms of computer storage device, i.e., FDD, CD, HDD and DVD.

  12. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network-like fiber channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.

  13. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    NASA Astrophysics Data System (ADS)

    Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.

    2014-06-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack Swift storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure in which 16 disks are lost. Both cloud storage systems are finally demonstrated to function as back-end storage systems to a filesystem, which is used to deliver high-energy physics software.
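
    Because both setups expose the S3 protocol, a load generator can drive them with any S3 client; the Python sketch below uses boto3 to write one object and read it back. The endpoint URL, bucket name, and credentials are placeholders, and this is not the benchmarking harness used in the paper.

      # Hedged sketch of exercising the S3 interface mentioned above with boto3.
      # The endpoint, bucket name, and credentials are placeholders; the actual
      # CERN test setup and load-generation harness are not reproduced here.

      import boto3

      s3 = boto3.client(
          "s3",
          endpoint_url="https://s3.example.cern.ch",   # placeholder endpoint
          aws_access_key_id="ACCESS_KEY",
          aws_secret_access_key="SECRET_KEY",
      )

      s3.put_object(Bucket="test-bucket", Key="run01/object-0001", Body=b"x" * 1024)
      obj = s3.get_object(Bucket="test-bucket", Key="run01/object-0001")
      print(len(obj["Body"].read()), "bytes read back")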

  14. Design and implementation of scalable tape archiver

    NASA Technical Reports Server (NTRS)

    Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio

    1996-01-01

    In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high-performance RISC microprocessor-based workstations, which are further being replaced by the commodity microprocessors used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large-scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since the recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. Background migration transfers cassettes between element archivers to redistribute frequently accessed cassettes, thus balancing the load of each archiver. Background migration occurs while the robotics are idle. Both migration algorithms are based on the access frequency and space utilization of each element archiver. By normalizing these parameters according to the number of drives in each element archiver, it is possible to maintain high performance even if some tape drives fail. We found that foreground migration is efficient at reducing access response time. Besides foreground migration, background migration makes it possible to track the transition of spatial access locality quickly.
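
    The load-balancing intent of background migration can be sketched as a single step in Python: when the robotics are idle, move the hottest cassette from the element archiver with the highest per-drive access frequency to the one with the lowest. The data layout and the single-move policy below are assumptions for illustration; the paper's algorithms also account for space utilization and failed drives.

      # Hedged sketch of the background-migration idea: while the robotics are
      # idle, move the hottest cassette from the most-loaded element archiver
      # (by access frequency per drive) to the least-loaded one. The real
      # algorithms also weigh space utilization and drive failures.

      def load(archiver):
          return sum(archiver["access_freq"].values()) / max(archiver["drives"], 1)

      def background_migrate(archivers):
          src = max(archivers, key=load)
          dst = min(archivers, key=load)
          if src is dst or not src["access_freq"]:
              return None
          hot = max(src["access_freq"], key=src["access_freq"].get)
          dst["access_freq"][hot] = src["access_freq"].pop(hot)
          return hot, src["name"], dst["name"]

      archivers = [
          {"name": "A", "drives": 2, "access_freq": {"t1": 40, "t2": 35}},
          {"name": "B", "drives": 2, "access_freq": {"t3": 5}},
      ]
      print(background_migrate(archivers))   # -> ('t1', 'A', 'B')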

  15. High-density patterned media fabrication using jet and flash imprint lithography

    NASA Astrophysics Data System (ADS)

    Ye, Zhengmao; Ramos, Rick; Brooks, Cynthia; Simpson, Logan; Fretwell, John; Carden, Scott; Hellebrekers, Paul; LaBrake, Dwayne; Resnick, Douglas J.; Sreenivasan, S. V.

    2011-04-01

    The Jet and Flash Imprint Lithography (J-FIL®) process uses drop dispensing of UV curable resists for high resolution patterning. Several applications, including patterned media, are better and more economically served by a full-substrate patterning process since the alignment requirements are minimal. Patterned media is particularly challenging because of the aggressive feature sizes necessary to achieve storage densities required for manufacturing beyond the current technology of perpendicular recording. In this paper, the key process steps for the application of J-FIL to patterned media fabrication are reviewed with special attention to substrate cleaning, vapor deposition of the adhesion layer, and imprint performance at >300 disks per hour. Also discussed are recent results for imprinting discrete track patterns at half pitches of 24 nm and bit patterned media patterns at densities of 1 Tb/in².

  16. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    NASA Astrophysics Data System (ADS)

    Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-09-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
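
    As a generic illustration of trading disk storage for an in-memory buffer when correlating forward and adjoint fields, the sketch below keeps only every k-th forward snapshot in RAM and reuses it during a backward sweep. It is not the authors' reordering scheme, ignores attenuation entirely, and uses a placeholder "solver" update; it is meant only to show the buffering pattern.

    # Generic snapshot-buffer sketch (illustration only; not the paper's exact method).
    import numpy as np

    nt, n, stride = 1000, 64, 10             # time steps, model size, snapshot stride (arbitrary)
    snapshots = {}                            # step -> buffered forward snapshot

    u = np.zeros(n)
    for t in range(nt):
        u = u + 1e-3 * np.sin(2 * np.pi * t / nt)   # placeholder forward-solver update
        if t % stride == 0:
            snapshots[t] = u.copy()           # parsimonious storage: keep 1/stride of the history

    adjoint = np.ones(n)                      # stand-in adjoint field
    kernel = np.zeros(n)
    for t in reversed(range(0, nt, stride)):  # backward sweep over the buffered steps only
        kernel += snapshots[t] * adjoint
    print(kernel[:3])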

  17. Horizontally scaling dCache SRM with the Terracotta platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Crawford, M.; Moibenko, A.

    2011-01-01

    The dCache disk caching file system has been chosen by a majority of LHC experiment Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform, we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to face the ever-increasing requirements of LHC data handling. In this paper we will describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert a single-node service into a highly scalable clustered application.

  18. MIDAS - ESO's new image processing system

    NASA Astrophysics Data System (ADS)

    Banse, K.; Crane, P.; Grosbol, P.; Middleburg, F.; Ounnas, C.; Ponz, D.; Waldthausen, H.

    1983-03-01

    The Munich Image Data Analysis System (MIDAS) is an image processing system whose heart is a pair of VAX 11/780 computers linked together via DECnet. One of these computers, VAX-A, is equipped with 3.5 Mbytes of memory, 1.2 Gbytes of disk storage, and two tape drives with 800/1600 bpi density. The other computer, VAX-B, has 4.0 Mbytes of memory, 688 Mbytes of disk storage, and one tape drive with 1600/6250 bpi density. MIDAS is a command-driven system geared toward the interactive user. The type and number of parameters in a command depend on the particular command invoked. MIDAS is a highly modular system that provides building blocks for undertaking more sophisticated applications. Presently, 175 commands are available. These include interactive modification of the color lookup table to enhance various image features and interactive extraction of subimages.

  19. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1992-01-01

    In the future, NASA expects to gather over a tera-byte per day of data requiring space for levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation where data rates of over 1 billion bits per second are achievable with current technology.
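
    The sketch below illustrates one simple way the parallel-compression idea described above can be realized in software: split a data stream into fixed-size chunks and compress the chunks concurrently. It uses only the standard zlib and multiprocessing modules; the chunk size, worker count, and synthetic data are arbitrary illustrative choices, not the algorithms proposed in the paper.

    # Chunk-parallel lossless compression sketch (illustration, not the proposed algorithms).
    import os
    import zlib
    from multiprocessing import Pool

    CHUNK = 1 << 20   # 1 MiB chunks compressed independently, which enables parallelism

    def compress_chunk(chunk: bytes) -> bytes:
        return zlib.compress(chunk, level=6)

    def compress_parallel(data: bytes, workers: int = 4):
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        with Pool(workers) as pool:
            return pool.map(compress_chunk, chunks)

    if __name__ == "__main__":
        # Synthetic sample: half incompressible random bytes, half highly compressible zeros.
        data = os.urandom(4 * CHUNK) + b"\x00" * (4 * CHUNK)
        compressed = compress_parallel(data)
        ratio = sum(len(c) for c in compressed) / len(data)
        print(f"compression ratio: {ratio:.2f}")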

  20. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1993-01-01

    In the future, NASA expects to gather over a tera-byte per day of data requiring space for levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation where data rates of over 1 billion bits per second are achievable with current technology.

  1. A review of high magnetic moment thin films for microscale and nanotechnology applications

    DOE PAGES

    Scheunert, Gunther; Heinonen, O.; Hardeman, R.; ...

    2016-02-17

    Here, the creation of large magnetic fields is a necessary component in many technologies, ranging from magnetic resonance imaging to electric motors and generators to magnetic hard disk drives in information storage. This is typically done by inserting a ferromagnetic pole piece with a large magnetisation density M_S in a solenoid. In addition to large M_S, it is usually required or desired that the ferromagnet is magnetically soft and has a Curie temperature well above the operating temperature of the device. A variety of ferromagnetic materials are currently in use, ranging from FeCo alloys in, for example, hard disk drives, to rare earth metals operating at cryogenic temperatures in superconducting solenoids. The latter can exceed the limit on M_S for transition metal alloys given by the Slater-Pauling curve. This article reviews different materials and concepts in use or proposed for technological applications that require a large M_S, with an emphasis on nanoscale material systems, such as thin and ultra-thin films. Attention is also paid to other requirements or properties, such as the Curie temperature and magnetic softness. In a final summary, we evaluate the actual applicability of the discussed materials for use as pole tips in electromagnets, in particular, in nanoscale magnetic hard disk drive read-write heads; the technological advancement of the latter has been a very strong driving force in the development of the field of nanomagnetism.

  2. Achieving cost/performance balance ratio using tiered storage caching techniques: A case study with CephFS

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.

    2017-10-01

    As demand for widely accessible storage capacity increases and usage is on the rise, steady IO performance is desired but tends to suffer within multi-user environments. Typical deployments use standard hard drives as the cost per GB is quite low. On the other hand, HDD-based storage solutions are not known to scale well with process concurrency, and soon enough a high rate of IOPS creates a "random access" pattern that kills performance. Though not all SSDs are alike, SSDs are an established technology often used to address this exact "random access" problem. In this contribution, we will first discuss the IO performance of many different SSD drives (tested in a comparable and standalone manner). We will then discuss the performance and integrity of at least three low-level disk caching techniques (Flashcache, dm-cache, and bcache), including individual policies, procedures, and IO performance. Furthermore, the STAR online computing infrastructure currently hosts a POSIX-compliant Ceph distributed storage cluster - while caching is not a native feature of CephFS (it exists only in the Ceph object store), we will show how one can implement a caching mechanism profiting from an implementation at a lower level. As our illustration, we will present our CephFS setup, IO performance tests, and overall experience from such a configuration. We hope this work will serve the community's interest in using disk-caching mechanisms in applications such as distributed storage systems seeking an overall IO performance gain.
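
    The toy simulation below shows why a small SSD cache in front of HDDs helps a skewed random-read workload: an LRU cache absorbs re-accesses of hot blocks. It illustrates the caching concept only and does not model the internals of Flashcache, dm-cache, or bcache; all sizes and the 90/10 access skew are assumptions.

    # LRU cache hit-rate simulation for a skewed random-read workload (illustrative only).
    import random
    from collections import OrderedDict

    def simulate(n_blocks=100_000, cache_blocks=5_000, n_requests=200_000, hot_fraction=0.1):
        cache = OrderedDict()
        hits = 0
        hot = int(n_blocks * hot_fraction)
        for _ in range(n_requests):
            # Assume 90% of requests go to the 10% "hot" blocks (skewed access pattern).
            block = random.randrange(hot) if random.random() < 0.9 else random.randrange(n_blocks)
            if block in cache:
                hits += 1
                cache.move_to_end(block)        # refresh recency on a hit
            else:
                cache[block] = True             # simulated promotion into the SSD cache
                if len(cache) > cache_blocks:
                    cache.popitem(last=False)   # evict the least recently used block
        return hits / n_requests

    print(f"hit rate: {simulate():.2%}")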

  3. Onboard System Evaluation of Rotors Vibration, Engines (OBSERVE) monitoring System

    DTIC Science & Technology

    1992-07-01

    consists of a Data Acquisition Unit (DAU), Control and Display Unit (CADU), Universal Tracking Devices (UTD), Remote Cockpit Display (RCD) and a PC...and Display Unit (CADU) - The CADU provides data storage and a graphical user interface necessary to display both the measured data and diagnostic...information. The CADU has an interface to a Credit Card Memory (CCM) which operates similar to a disk drive, allowing the storage of data and programs. The

  4. A performance analysis of advanced I/O architectures for PC-based network file servers

    NASA Astrophysics Data System (ADS)

    Huynh, K. D.; Khoshgoftaar, T. M.

    1994-12-01

    In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we would like to discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of same type, same data capacity, and same cost) operating independently, not in parallel as in a disk array.

  5. Effect of silica coating on fracture strength of glass-infiltrated alumina ceramic cemented to dentin.

    PubMed

    Xie, Haifeng; Zhu, Ye; Chen, Chen; Gu, Ning; Zhang, Feimin

    2011-10-01

    To examine the availability of sol-gel processed silica coating for alumina-based ceramic bonding, and determine which silica sol concentration was appropriate for silica coating. Sixty disks of In-Ceram alumina ceramic were fabricated and randomly divided into 5 main groups. The disks received 5 different surface conditioning treatments: Group Al, sandblasted; Group AlC, sandblasted + silane coupling agent applied; Groups Al20C, Al30C, and Al40C, sandblasted, silica coating via sol-gel process prepared using 20 wt%, 30 wt%, and 40 wt% silica sols, and then silane coupling agent applied. Before bonding, one-step adhesives were applied on pre-prepared ceramic surfaces of all groups. Then, 60 dentin specimens were prepared and conditioned with phosphoric acid and one-step adhesive. Ceramic disks of all groups were cemented to dentin specimens with dual-curing resin cements. Fracture strength was determined at 24 h and after 20 days of storage in water. Groups Al20C, Al30C, and Al40C revealed significantly higher fracture strength than groups Al and AlC. No statistically significant difference in fracture strength was found between groups Al and AlC, or among groups Al20C, Al30C, and Al40C. Fracture strength values of all the groups did not change after 20 days of water storage. Sol-gel processed silica coating can enhance fracture strength of In-Ceram alumina ceramic after bonding to dentin, and different silica sol concentrations produced the same effects. Twenty days of water storage did not decrease the fracture strength.

  6. Research Studies on Advanced Optical Module/Head Designs for Optical Disk Recording Devices

    NASA Technical Reports Server (NTRS)

    Burke, James J.; Seery, Bernard D.

    1993-01-01

    The Annual Report of the Optical Data Storage Center of the University of Arizona is presented. Summary reports on continuing projects are presented. Research areas include: magneto-optic media, optical heads, and signal processing.

  7. Microcomputers in Libraries: The Quiet Revolution.

    ERIC Educational Resources Information Center

    Boss, Richard

    1985-01-01

    This article defines three separate categories of microcomputers--personal, desk-top, and multi-user devices--and relates storage capabilities (expandability, floppy disks) to library applications. Highlights include de facto standards, operating systems, database management systems, applications software, circulation control systems, dumb and…

  8. Faster, Better, Cheaper: A Decade of PC Progress.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1997-01-01

    Reviews the development of personal computers and how computer components have changed in price and value. Highlights include disk drives; keyboards; displays; memory; color graphics; modems; CPU (central processing unit); storage; direct mail vendors; and future possibilities. (LRW)

  9. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    PubMed Central

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The constraint of accessing such data and interpreting results can be a major impediment in postulating suitable hypotheses, thus an innovative storage solution that addresses these limitations, such as hard disk storage requirements, efficiency and reproducibility, is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amounts of data derived from RNAseq analysis, along with methods of interacting with the database, either through command-line data management workflows, written in Perl, with functionality that simplifies the storage and manipulation of the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large complex results data files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361

  10. Head-disk Interface Study for Heat Assisted Magnetic Recording (HAMR) and Plasmonic Nanolithography for Patterned Media

    NASA Astrophysics Data System (ADS)

    Xiong, Shaomin

    The magnetic storage areal density keeps increasing every year, and magnetic recording-based hard disk drives provide a very cheap and effective solution to the ever increasing demand for data storage. Heat assisted magnetic recording (HAMR) and bit patterned media have been proposed to increase the magnetic storage density beyond 1 Tb/in². In HAMR systems, high magnetic anisotropy materials are recommended to break the superparamagnetic limit for further scaling down the size of magnetic bits. However, the current magnetic transducers are not able to generate a strong enough field to switch the magnetic orientation of the high magnetic anisotropy material, so data writing cannot be achieved. Thermal heating therefore has to be applied to reduce the coercivity for magnetic writing. To provide the heating, a laser is focused using a near field transducer (NFT) to locally heat a ~(25 nm)² spot on the magnetic disk to the Curie temperature, which is ~400-600 °C, to assist in the data writing process. But this high temperature working condition is a great challenge for the traditional head-disk interface (HDI). The disk lubricant can be depleted by evaporation or decomposition. The protective carbon overcoat can be graphitized or oxidized. The surface quality, such as its roughness, can be changed as well. The NFT structure is also vulnerable to degradation under the large number of thermal load cycles. The changes of the HDI under the thermal conditions could significantly reduce the robustness and reliability of the HAMR products. In bit patterned media systems, instead of using the continuous magnetic granular material, physically isolated magnetic islands are used to store data. The size of the magnetic islands should be about or less than 25 nm in order to achieve a storage areal density beyond 1 Tb/in². However, the manufacture of the patterned media disks is a great challenge for the current optical lithography technology. Alternative lithography solutions, such as nanoimprint and plasmonic nanolithography, could be potential candidates for the fabrication of patterned disks. This dissertation focuses mainly on: (1) an experimental study of the HDI under HAMR conditions and (2) exploration of a plasmonic nanolithography technology. In this work, an experimental HAMR testbed (named "Cal stage") is developed to study different aspects of HAMR systems, including the tribological head-disk interface and heat transfer in the head-disk gap. A temperature calibration method based on magnetization decay is proposed to obtain the relationship between the laser power input and temperature increase on the disk. Furthermore, lubricant depletion tests under various laser heating conditions are performed. The effects of laser heating repetitions, laser power and disk speeds on lubricant depletion are discussed. Lubricant depletion under focused laser beam heating and under NFT heating is compared, revealing that thermal gradient plays an important role in lubricant depletion. Lubricant reflow behavior under various conditions is also studied, and a power law dependency of lubricant depletion on laser heating repetitions is obtained from the experimental results. A conductive-AFM system is developed to measure the electrical properties of thin carbon films. The conductivity or resistivity is a good parameter for characterizing the sp²/sp³ components of the carbon films.
Different heating modes are applied to study the degradation of the carbon films, including temperature-controlled electric heater heating, focused laser beam heating and NFT heating. It is revealed that the temperature and heating duration significantly affect the degradation of the carbon films. Surface reflectivity and roughness are changed under certain heating conditions. The failure of the NFT structure during slider flying is investigated using our in-house fabricated sliders. In order to extend the lifetime of the NFT, a two-stage heating scheme is proposed and a numerical simulation has verified the feasibility of this new scheme. The heat dissipated around the NFT structure causes a thermal protrusion. There is a chance for contact to occur between the protrusion and the disk, which can result in failure of the NFT. A design method to combine both TFC protrusion and laser-induced NFT protrusion is proposed to reduce the fly-height modulation and chance of head-disk contact. Finally, an integrated plasmonic nanolithography machine is introduced to fabricate the master template for patterned disks. The plasmonic nanolithography machine uses a flying slider with a plasmonic lens to expose the thermal resist on a spinning wafer. The system design, optimization and integration have been performed over the past few years. Several sub-systems of the plasmonic nanolithography machine, such as radial and circumferential position control and high-speed pattern generation, are presented in this work. The lithography results are shown as well.
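
    The dissertation reports a power-law dependency of lubricant depletion on laser heating repetitions. The sketch below shows how such an exponent can be extracted by a linear fit in log-log space; the numbers are synthetic and do not reproduce the dissertation's data.

    # Power-law fit of depletion vs. heating repetitions (synthetic data, illustrative only).
    import numpy as np

    reps = np.array([1, 10, 100, 1000, 10000], dtype=float)              # heating repetitions
    depletion = 0.8 * reps ** 0.35 * (1 + 0.05 * np.random.randn(reps.size))  # fake measurements

    # Fit depletion = a * reps**b by least squares on log-transformed data.
    b, log_a = np.polyfit(np.log(reps), np.log(depletion), 1)
    print(f"fitted exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.2f}")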

  11. Processing of Bulk YBa2Cu3O(7-x) High Temperature Superconductor Materials for Gravity Modification Experiments and Performance Under AC Levitation

    NASA Technical Reports Server (NTRS)

    Koczor, Ronald; Noever, David; Hiser, Robert

    1999-01-01

    We have previously reported results using a high precision gravimeter to probe local gravity changes in the neighborhood of bulk-processed high temperature superconductor disks. Others have indicated that large annular disks (on the order of 25 cm diameter) and AC levitation fields play an essential role in their observed experiments. We report experiments in processing such large bulk superconductors. Successful results depend on material mechanical characteristics, and on pressure and heat-treat protocols. Annular disks having rough dimensions of 30 cm O.D., 7 cm I.D., and 1 cm thickness have been routinely fabricated and tested under AC levitation fields ranging from 45 to 3000 Hz. Implications for space transportation initiatives and power storage flywheel technology will be discussed.

  12. Time-resolved scanning Kerr microscopy of flux beam formation in hard disk write heads

    NASA Astrophysics Data System (ADS)

    Valkass, Robert A. J.; Spicer, Timothy M.; Burgos Parra, Erick; Hicken, Robert J.; Bashir, Muhammad A.; Gubbins, Mark A.; Czoschke, Peter J.; Lopusnik, Radek

    2016-06-01

    To meet growing data storage needs, the density of data stored on hard disk drives must increase. In pursuit of this aim, the magnetodynamics of the hard disk write head must be characterized and understood, particularly the process of "flux beaming." In this study, seven different configurations of perpendicular magnetic recording (PMR) write heads were imaged using time-resolved scanning Kerr microscopy, revealing their detailed dynamic magnetic state during the write process. It was found that the precise position and number of driving coils can significantly alter the formation of flux beams during the write process. These results are applicable to the design and understanding of current PMR and next-generation heat-assisted magnetic recording devices, as well as being relevant to other magnetic devices.

  13. The EOSDIS software challenge

    NASA Astrophysics Data System (ADS)

    Jaworski, Allan

    1993-08-01

    The Earth Observing System (EOS) Data and Information System (EOSDIS) will serve as a major resource for the earth science community, supporting both command and control of complex instruments onboard the EOS spacecraft and the archiving, distribution, and analysis of data. The scale of EOSDIS and the volume of multidisciplinary research to be conducted using EOSDIS resources will produce unparalleled needs for technology transparency, data integration, and system interoperability. The scale of this effort far exceeds the scope of any previous scientific data system in its breadth and its operational and performance needs. Modern hardware technology can meet the EOSDIS technical challenge. Multiprocessing speeds of many gigaflops are being realized by modern computers. Online storage disk, optical disk, and videocassette libraries with storage capacities of many terabytes are now commercially available. Radio frequency and fiber optics communications networks with gigabit rates are demonstrable today. It remains, of course, to perform the system engineering to establish the requirements, architectures, and designs that will implement the EOSDIS systems. Software technology, however, has not enjoyed the price/performance advances of hardware. Although we have learned to engineer hardware systems which have several orders of magnitude greater complexity and performance than those built in the 1960's, we have not made comparable progress in dramatically reducing the cost of software development. This lack of progress may significantly reduce our capabilities to achieve economically the types of highly interoperable, responsive, integrated, and productive environments which are needed by the earth science community. This paper describes some of the EOSDIS software requirements and current activities in the software community which are applicable to meeting the EOSDIS challenge. Some of these areas include intelligent user interfaces, software reuse libraries, and domain engineering. Also included are discussions of applicable standards in the areas of operating systems interfaces, user interfaces, communications interfaces, data transport, and science algorithm support, and their role in supporting the software development process.

  14. Evaluating Non-In-Place Update Techniques for Flash-Based Transaction Processing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yongkun; Goda, Kazuo; Kitsuregawa, Masaru

    Recently, flash memory has been emerging as an important storage device. With prices sliding fast, its cost per unit capacity is approaching that of SATA disk drives. So far, flash memory has been widely deployed in consumer electronics and partly in mobile computing environments. For enterprise systems, the deployment has been studied by many researchers and developers. In terms of access performance characteristics, flash memory is quite different from disk drives. Without mechanical components, flash memory has very high random read performance, whereas it has limited random write performance because of the erase-before-write design. The random write performance of flash memory is comparable with or even worse than that of disk drives. Due to such a performance asymmetry, naive deployment to enterprise systems may not exploit the potential performance of flash memory at full blast. This paper studies the effectiveness of using non-in-place-update (NIPU) techniques through the IO path of flash-based transaction processing systems. Our deliberate experiments using both open-source DBMS and commercial DBMS validated the potential benefits; a 3.0x to 6.6x performance improvement was confirmed by incorporating non-in-place-update techniques into the file system without any modification of applications or storage devices.
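
    The minimal sketch below illustrates the non-in-place-update idea in its simplest form: every update is appended to a log, and only an in-memory index is rewritten, so no stored record is ever overwritten (which on flash would trigger an erase-before-write cycle). This is an illustration of the concept, not the NIPU layer evaluated in the paper; the file path and record format are arbitrary.

    # Append-only (non-in-place-update) key-value store sketch (illustrative only).
    import json
    import os
    import tempfile

    class AppendOnlyStore:
        """Toy key-value store that never overwrites on-disk data in place."""
        def __init__(self, path):
            self.path = path
            self.index = {}                          # key -> byte offset of the newest record
            open(path, "ab").close()                 # make sure the log file exists

        def put(self, key, value):
            offset = os.path.getsize(self.path)
            record = (json.dumps({"k": key, "v": value}) + "\n").encode()
            with open(self.path, "ab") as log:       # append-only write
                log.write(record)
            self.index[key] = offset                 # only the RAM index is updated in place

        def get(self, key):
            with open(self.path, "rb") as log:
                log.seek(self.index[key])
                return json.loads(log.readline())["v"]

    fd, path = tempfile.mkstemp(suffix=".log"); os.close(fd)
    store = AppendOnlyStore(path)
    store.put("account:42", 100)
    store.put("account:42", 250)                     # new version appended, old one left untouched
    print(store.get("account:42"))                   # -> 250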

  15. Development of a software interface for optical disk archival storage for a new life sciences flight experiments computer

    NASA Technical Reports Server (NTRS)

    Bartram, Peter N.

    1989-01-01

    The current Life Sciences Laboratory Equipment (LSLE) microcomputer for life sciences experiment data acquisition is now obsolete. Among the weaknesses of the current microcomputer are small memory size, relatively slow analog data sampling rates, and the lack of a bulk data storage device. While life science investigators normally prefer data to be transmitted to Earth as it is taken, this is not always possible. No down-link exists for experiments performed in the Shuttle middeck region. One important aspect of a replacement microcomputer is provision for in-flight storage of experimental data. The Write Once, Read Many (WORM) optical disk was studied because of its high storage density, data integrity, and the availability of a space-qualified unit. In keeping with the goals for a replacement microcomputer based upon commercially available components and standard interfaces, the system studied includes a Small Computer System Interface (SCSI) for interfacing the WORM drive. The system itself is designed around the STD bus, using readily available boards. Configurations examined were: (1) master processor board and slave processor board with the SCSI interface; (2) master processor with SCSI interface; (3) master processor with SCSI and Direct Memory Access (DMA); (4) master processor controlling a separate STD bus SCSI board; and (5) master processor controlling a separate STD bus SCSI board with DMA.

  16. The structure and dynamics of interactive documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocha, J.T.

    1999-04-01

    Advances in information technology continue to accelerate as the new millennium approaches. With these advances, electronic information management is becoming increasingly important and is now supported by a seemingly bewildering array of hardware and software whose sole purpose is the design and implementation of interactive documents employing multimedia applications. Multimedia memory and storage applications such as Compact Disk-Read Only Memory (CD-ROM) are already a familiar interactive tool in both the entertainment and business sectors. Even home enthusiasts now have the means at their disposal to design and produce CD-ROMs. More recently, Digital Video Disk (DVD) technology is carving its own niche in these markets and may (once application bugs are corrected and prices are lowered) eventually supplant CD-ROM technology. CD-ROM and DVD are not the only memory and storage applications capable of supporting interactive media. External, high-capacity drives and disks such as the Iomega© zip® and jaz® are also useful platforms for launching interactive documents without the need for additional hardware such as CD-ROM burners and copiers. The main drawback here, however, is the relatively high unit price per disk when compared to the unit cost of CD-ROMs. Regardless of the application chosen, there are fundamental structural characteristics that must be considered before effective interactive documents can be created. Additionally, the dynamics of interactive documents employing hypertext links are unique and bear only slight resemblance to those of their traditional hard-copy counterparts. These two considerations form the essential content of this paper.

  17. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment.

    PubMed

    Keller, Mark; Naue, Jana; Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups adapted to forensic standards. For the first time we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main-amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. 92.2% of the performed tests were recognized as fluidically failure-free sample handling and used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis reducing hands-on time, and circumventing the risk of contamination associated with regular nested PCR protocols.

  18. DICOM implementation on online tape library storage system

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Dai, Hailei L.; Elghammer, David; Levine, Betty A.; Mun, Seong K.

    1998-07-01

    The main purpose of this project is to implement a Digital Imaging and Communications in Medicine (DICOM) compliant online tape library system over the Internet. Once finished, the system will be used to store medical exams generated from the U.S. Army Mobile Army Surgical Hospital (MASH) in Tuzla, Bosnia. A modified UC Davis implementation of the DICOM storage class is used for this project. DICOM storage class user and provider are implemented as the system's interface to the Internet. The DICOM software provides flexible configuration options such as types of modalities and trusted remote DICOM hosts. Metadata is extracted from each exam and indexed in a relational database for query and retrieve purposes. The medical images are stored inside the Wolfcreek-9360 tape library system from StorageTek Corporation. The tape library system has nearline access to more than 1000 tapes. Each tape has a capacity of 800 megabytes, giving a total nearline capacity of around 1 terabyte. The tape library uses the Application Storage Manager (ASM) which provides cost-effective file management, storage, archival, and retrieval services. ASM automatically and transparently copies files from expensive magnetic disk to the less expensive nearline tape library, and restores the files back when they are needed. The ASM also provides a crash recovery tool, which enables an entire file system to be restored in a short time. A graphical user interface (GUI) function is used to view the contents of the storage systems. This GUI also allows the user to retrieve the stored exams and send them anywhere on the Internet using DICOM protocols. With the integration of the different components of the system, we have implemented a high capacity online tape library storage system that is flexible and easy to use. Using tape as an alternative storage medium, as opposed to magnetic disk, has great potential for cost savings in terms of dollars per megabyte of storage. As this system matures, the Hospital Information Systems/Radiology Information Systems (HIS/RIS) or other components can potentially be developed as interfaces to the outside world, thus widening the usage of the tape library system.
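
    The sketch below shows the metadata-indexing idea in miniature: pull a few identifying tags from each DICOM file and index them in a relational table so exams can be located without touching tape. It assumes the pydicom package is available; the tag set, schema, directory name, and patient ID are illustrative placeholders, not the project's actual design.

    # DICOM metadata indexing sketch (assumes pydicom; schema and paths are hypothetical).
    import sqlite3
    from pathlib import Path

    import pydicom

    conn = sqlite3.connect("exam_index.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS exams (
                        sop_uid    TEXT PRIMARY KEY,
                        patient_id TEXT,
                        modality   TEXT,
                        study_date TEXT,
                        file_path  TEXT)""")

    def index_exam(path: Path):
        ds = pydicom.dcmread(path, stop_before_pixels=True)   # read metadata only
        conn.execute("INSERT OR REPLACE INTO exams VALUES (?, ?, ?, ?, ?)",
                     (str(ds.SOPInstanceUID), str(ds.PatientID), str(ds.Modality),
                      str(getattr(ds, "StudyDate", "")), str(path)))

    for dcm in Path("incoming").glob("*.dcm"):                # hypothetical staging directory
        index_exam(dcm)
    conn.commit()

    # Query example: all CT exams for a given (hypothetical) patient ID.
    rows = conn.execute("SELECT file_path FROM exams WHERE patient_id=? AND modality='CT'",
                        ("12345",)).fetchall()
    print(rows)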

  19. Technology and the Online Catalog.

    ERIC Educational Resources Information Center

    Graham, Peter S.

    1983-01-01

    Discusses trends in computer technology and their use for library catalogs, noting the concept of bandwidth (describes quantity of information transmitted per given unit of time); computer hardware differences (micros, minis, maxis); distributed processing systems and databases; optical disk storage; networks; transmission media; and terminals.…

  20. A File Archival System

    NASA Technical Reports Server (NTRS)

    Fanselow, J. L.; Vavrus, J. L.

    1984-01-01

    ARCH, a file archival system for the DEC VAX, provides for easy offline storage and retrieval of arbitrary files on a DEC VAX system. The system is designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of the same programs and associated files.

  1. Climate Science Performance, Data and Productivity on Titan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, Benjamin W; Worley, Patrick H; Gaddis, Abigail L

    2015-01-01

    Climate Science models are flagship codes for the largest of high performance computing (HPC) resources, both in visibility, with the newly launched Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) effort, and in terms of significant fractions of system usage. The performance of the DOE ACME model is captured with application level timers and examined through a sizeable run archive. Performance and variability of compute, queue time and ancillary services are examined. As Climate Science advances in the use of HPC resources, there has been an increase in the human and data systems required to achieve program goals. A description of current workflow processes (hardware, software, human) and planned automation of the workflow, along with historical and projected data-in-motion and data-at-rest usage, are detailed. The combination of these two topics motivates a description of future systems requirements for DOE Climate Modeling efforts, focusing on the growth of data storage and the network and disk bandwidth required to handle data at an acceptable rate.

  2. Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D.; Baltzer, T.

    2005-12-01

    The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term on the order of hours to perhaps a week or two. In order to save interesting data in longer term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence is generally based on execution of a sequence of low level operating system specific commands with significant user involvement. The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying. In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back end storage media and interchangeable software components. The TDR interface will provide high level abstractions for long term storage, controlled, fast and reliable access, and data movement capabilities via a variety of technologies such as OPeNDAP and gridftp. The modular structure will allow substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as Replica Locater Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR. References: IDD: http://www.unidata.ucar.edu/content/software/idd/index.html THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html LEAD: http://lead.ou.edu/ RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf

  3. Reducing the Cost of System Administration of a Disk Storage System Built from Commodity Components

    DTIC Science & Technology

    2000-05-01

    quickly by using checkpointing and roll-forward logs. Microsoft Tiger is a video server built from commodity PCs which they call “cubs” [BBD+96, BFD97]... 20 cents per megabyte using street prices of components. 3.2.2 Redundancy: In designing the TD prototype, we have taken care to ensure it does not have... Td /GridPix/, 1999. [ATP99] Satoshi Asami, Nisha Talagala, and David Patterson. Designing a self-maintaining storage system. In Proceedings of the

  4. Careers and people

    NASA Astrophysics Data System (ADS)

    2009-09-01

    IBM scientist wins magnetism prizes Stuart Parkin, an applied physicist at IBM's Almaden Research Center, has won the European Geophysical Society's Néel Medal and the Magnetism Award from the International Union of Pure and Applied Physics (IUPAP) for his fundamental contributions to nanodevices used in information storage. Parkin's research on giant magnetoresistance in the late 1980s led IBM to develop computer hard drives that packed 1000 times more data onto a disk; his recent work focuses on increasing the storage capacity of solid-state electronic devices.

  5. Research Studies on Advanced Optical Module/Head Designs for Optical Data Storage

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Preprints are presented from the recent 1992 Optical Data Storage meeting in San Jose. The papers are divided into the following topical areas: Magneto-optical media (Modeling/design and fabrication/characterization/testing); Optical heads (holographic optical elements); and Optical heads (integrated optics). Some representative titles are as follows: Diffraction analysis and evaluation of several focus and track error detection schemes for magneto-optical disk systems; Proposal for massively parallel data storage system; Transfer function characteristics of super resolving systems; Modeling and measurement of a micro-optic beam deflector; Oxidation processes in magneto-optic and related materials; and A modal analysis of lamellar diffraction gratings in conical mountings.

  6. Safety Aspects of Big Cryogenic Systems Design

    NASA Astrophysics Data System (ADS)

    Chorowski, M.; Fydrych, J.; Poliński, J.

    2010-04-01

    Superconductivity and helium cryogenics are key technologies in the construction of large scientific instruments, like accelerators, fusion reactors or free electron lasers. Such cryogenic systems may contain more than a hundred tons of helium, mostly in cold and high-density phases. In spite of the high reliability of the systems, accidental loss of the insulation vacuum, pipe rupture or rapid energy dissipation in the cold helium cannot be overlooked. To avoid the danger of the pressure rising above the design limits of the cryostats, they need to be equipped with a helium relief system. Such a system comprises safety valves, bursting disks and optionally cold or warm quench lines, collectors and storage tanks. Proper design of the helium safety relief system requires a good understanding of worst case scenarios. Such scenarios will be discussed, taking into account different possible failures of the cryogenic system. In any case it is necessary to estimate heat transfer through degraded vacuum superinsulation and mass flow through the valves and safety disks. Even if the design of the helium relief system does not foresee direct helium venting into the environment, an occasional emergency helium spill may happen. Helium propagation in the atmosphere and the origins of oxygen-deficiency hazards will be discussed.

  7. Twin disk composite flywheel

    NASA Astrophysics Data System (ADS)

    Ginsburg, B. R.

    The design criteria, materials, and initial test results of composite flywheels produced under DOE/Sandia contract are reported. The flywheels were required to store from 1 to 5 kWh with a total energy density of 80 W-h/kg at the maximum operational speed. The maximum diameter was set at 0.6 m, coupled with a maximum thickness of 0.2 m. A maximum running time at full speed of 1000 hr and a 10,000-cycle lifetime were mandated, together with a radial overlap in the material. The unit selected was a circumferentially wound composite rim made of graphite/epoxy mounted on an aluminum mandrel ring connected to an aluminum hub consisting of two constant stress disks. A tangentially wound graphite/epoxy overlap covered the rings. All conditions, i.e., rotation at 22,000 rpm and a measured storage of 1.94 kWh, were verified in the first test series, although a second flywheel failed in subsequent tests when the temperature was inadvertently allowed to rise from 15°F to over 200°F. Retest of the first flywheel again satisfied design goals. The units are considered ideal for coupling with solar energy and wind turbine systems.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
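
    The toy profiler below sketches the character n-gram approach to language classification mentioned above, in plain software with no FPGA involved: build trigram frequency profiles and compare them by cosine similarity. The training snippets and the test phrase are arbitrary examples, not the project's benchmark data.

    # Character n-gram language classification sketch (software-only illustration).
    from collections import Counter
    from math import sqrt

    def ngram_profile(text, n=3):
        """Return a frequency vector of character n-grams for the given text."""
        text = text.lower()
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine(p, q):
        dot = sum(p[g] * q[g] for g in set(p) & set(q))
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    profiles = {
        "english": ngram_profile("the quick brown fox jumps over the lazy dog"),
        "german": ngram_profile("der schnelle braune fuchs springt ueber den faulen hund"),
    }

    sample = ngram_profile("a lazy dog sleeps over there")
    best = max(profiles, key=lambda lang: cosine(sample, profiles[lang]))
    print("classified as:", best)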

  9. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut

    File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  10. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.
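
    The small demonstration below illustrates why the on-disk layout of a disk-resident array matters: reading a column from a row-major file touches widely spaced offsets, while the same column stored column-major is one contiguous run. It is a hand-written illustration of the layout issue, not the paper's compiler analysis; array sizes are arbitrary, and on a warm page cache the measured gap may shrink.

    # Row-major vs. column-major on-disk layout for a disk-resident array (illustrative only).
    import os
    import tempfile
    import time
    import numpy as np

    rows, cols = 2000, 2000

    def make_memmap(order):
        """Create a disk-resident array of ones with the given memory order and reopen it read-only."""
        fd, path = tempfile.mkstemp(suffix=".dat"); os.close(fd)
        m = np.memmap(path, dtype=np.float64, mode="w+", shape=(rows, cols), order=order)
        m[:] = 1.0
        m.flush()
        return np.memmap(path, dtype=np.float64, mode="r", shape=(rows, cols), order=order)

    row_major = make_memmap("C")   # column access hits strided offsets on disk
    col_major = make_memmap("F")   # column access is one contiguous read on disk

    t0 = time.time(); row_major[:, 123].sum(); t1 = time.time()
    col_major[:, 123].sum(); t2 = time.time()
    print(f"strided column read: {t1 - t0:.4f}s, contiguous column read: {t2 - t1:.4f}s")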

  11. Surface-Enhanced Raman Optical Data Storage system

    DOEpatents

    Vo-Dinh, T.

    1991-03-12

    A method and apparatus for a Surface-Enhanced Raman Optical Data Storage (SERODS) System are disclosed. A medium which exhibits the Surface Enhanced Raman Scattering (SERS) phenomenon has data written onto its surface of microenvironment by means of a write-on procedure which disturbs the surface or microenvironment of the medium and results in the medium having a changed SERS emission when excited. The write-on procedure is controlled by a signal that corresponds to the data to be stored so that the disturbed regions on the storage device (e.g., disk) represent the data. After the data is written onto the storage device it is read by exciting the surface of the storage device with an appropriate radiation source and detecting changes in the SERS emission to produce a detection signal. The data is then reproduced from the detection signal. 5 figures.

  12. Surface-enhanced raman optical data storage system

    DOEpatents

    Vo-Dinh, Tuan

    1991-01-01

    A method and apparatus for a Surface-Enhanced Raman Optical Data Storage (SERODS) System is disclosed. A medium which exhibits the Surface Enhanced Raman Scattering (SERS) phenomenon has data written onto its surface of microenvironment by means of a write-on procedure which disturbs the surface or microenvironment of the medium and results in the medium having a changed SERS emission when excited. The write-on procedure is controlled by a signal that corresponds to the data to be stored so that the disturbed regions on the storage device (e.g., disk) represent the data. After the data is written onto the storage device it is read by exciting the surface of the storage device with an appropriate radiation source and detecting changes in the SERS emission to produce a detection signal. The data is then reproduced from the detection signal.

  13. Time-resolved scanning Kerr microscopy of flux beam formation in hard disk write heads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valkass, Robert A. J., E-mail: rajv202@ex.ac.uk; Spicer, Timothy M.; Burgos Parra, Erick

    To meet growing data storage needs, the density of data stored on hard disk drives must increase. In pursuit of this aim, the magnetodynamics of the hard disk write head must be characterized and understood, particularly the process of “flux beaming.” In this study, seven different configurations of perpendicular magnetic recording (PMR) write heads were imaged using time-resolved scanning Kerr microscopy, revealing their detailed dynamic magnetic state during the write process. It was found that the precise position and number of driving coils can significantly alter the formation of flux beams during the write process. These results are applicable to the design and understanding of current PMR and next-generation heat-assisted magnetic recording devices, as well as being relevant to other magnetic devices.

  14. Quiet, Computer at Work.

    ERIC Educational Resources Information Center

    Black, Claudia

    Libraries are becoming information access points, not just book repositories. With greater distribution of printed materials, increased use of optical disks and other compact storage techniques, the emergence of publication on demand, and the proliferation of electronic databases, libraries without large collections will be able to provide prompt…

  15. Geophysical data base

    NASA Technical Reports Server (NTRS)

    Williamson, M. R.; Kirschner, L. R.

    1975-01-01

    A general data-management system that provides a random-access capability for large amounts of data is described. The system operates on a CDC 6400 computer using a combination of magnetic tape and disk storage. A FORTRAN subroutine package is provided to simplify the maintenance and use of the data.

  16. Rotary Drum Separator and Pump for the Sabatier Carbon Dioxide Reduction System

    NASA Technical Reports Server (NTRS)

    Holder, Don; Fort, James; Barone, Michael; Murdoch, Karen

    2005-01-01

    A trade study conducted in 2001 selected a rotary disk separator as the best candidate to meet the requirements for an International Space Station (ISS) Carbon Dioxide Reduction Assembly (CRA). The selected technology must provide micro-gravity gas/liquid separation and pump the liquid from 10 psia at the gas/liquid interface to 18 psia at the wastewater bus storage tank. The rotary disk concept, which has pedigree in other systems currently being built for installation on the ISS, failed to achieve the required pumping head within the allotted power. The separator discussed in this paper is a new design that was tested to determine compliance with performance requirements in the CRA. The drum separator and pump (DSP) design is similar to the Oxygen Generator Assembly (OGA) Rotary Separator Accumulator (RSA) in that it has a rotating assembly inside a stationary housing driven by an integral internal motor. The innovation of the DSP is the drum-shaped rotating assembly that acts as the accumulator and also pumps the liquid with much less power than its predecessors. In the CRA application, the separator will rotate at slow speed while accumulating water. Once full, the separator will increase speed to generate sufficient head to pump the water to the wastewater bus. A proof-of-concept (POC) separator has been designed, fabricated and tested to assess the separation efficiency and pumping head of the design. This proof-of-concept item was flown aboard the KC-135 to evaluate the effectiveness of the separator in a microgravity environment. This separator design has exceeded all of the performance requirements. The next step in the separator development is to integrate it into the Sabatier Carbon Dioxide Reduction System. This will be done with the Sabatier Engineering Development Unit at the Johnson Space Center.

  17. Ultraviolet light treatment for the restoration of age-related degradation of titanium bioactivity.

    PubMed

    Hori, Norio; Ueno, Takeshi; Suzuki, Takeo; Yamada, Masahiro; Att, Wael; Okada, Shunsaku; Ohno, Akinori; Aita, Hideki; Kimoto, Katsuhiko; Ogawa, Takahiro

    2010-01-01

    To examine the bioactivity of differently aged titanium (Ti) disks and to determine whether ultraviolet (UV) light treatment reverses the possible adverse effects of Ti aging. Ti disks with three different surface topographies were prepared: machined, acid-etched, and sandblasted. The disks were divided into three groups: disks tested for biologic capacity immediately after processing (fresh surfaces), disks stored under dark ambient conditions for 4 weeks, and disks stored for 4 weeks and treated with UV light. The protein adsorption capacity of Ti was examined using albumin and fibronectin. Cell attraction to Ti was evaluated by examining migration, attachment, and spreading behaviors of human osteoblasts on Ti disks. Osteoblast differentiation was evaluated by examining alkaline phosphatase activity, the expression of bone-related genes, and mineralized nodule area in the culture. Four-week-old Ti disks showed 50% or less protein adsorption after 6 hours of incubation compared with fresh disks, regardless of surface topography. Total protein adsorption for 4-week-old surfaces did not reach the level of fresh surfaces, even after 24 hours of incubation. Fifty percent fewer human osteoblasts migrated and attached to 4-week-old surfaces compared with fresh surfaces. Alkaline phosphatase activity, gene expression, and mineralized nodule area were substantially reduced on the 4-week-old surfaces. The reduction of these biologic parameters was associated with the conversion of Ti disks from superhydrophilicity to hydrophobicity during storage for 4 weeks. UV-treated 4-week-old disks showed even higher protein adsorption, osteoblast migration, attachment, differentiation, and mineralization than fresh surfaces, and were associated with regenerated superhydrophilicity. Time-related degradation of Ti bioactivity is substantial and impairs the recruitment and function of human osteoblasts as compared to freshly prepared Ti surfaces, suggesting a "biologic aging"-like change of Ti. UV treatment of aged Ti, however, restores and even enhances bioactivity, exceeding its innate levels.

  18. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.
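
    A rough bandwidth estimate helps explain why uncompressed angiographic review is considered the "high-end" case. The sketch below is illustrative only (it is not part of the published system) and assumes 8-bit grayscale pixels, which the abstract does not state.

    ```python
    # Back-of-the-envelope bandwidth estimate for uncompressed cardiac cine display,
    # assuming 8-bit grayscale pixels (an assumption, not stated in the abstract).

    def stream_bandwidth_mb_s(width, height, fps, bytes_per_pixel=1):
        """Raw video bandwidth in megabytes per second."""
        return width * height * fps * bytes_per_pixel / 1e6

    acquisition = stream_bandwidth_mb_s(512, 512, 30)    # frames held in "loop RAM"
    display     = stream_bandwidth_mb_s(1024, 1024, 30)  # after interpolation for display

    print(f"512 x 512 @ 30 fps   : {acquisition:.1f} MB/s")
    print(f"1024 x 1024 @ 30 fps : {display:.1f} MB/s")
    ```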

  19. A high-speed network for cardiac image review.

    PubMed Central

    Elion, J. L.; Petrocelli, R. R.

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964

  20. Online data handling and storage at the CMS experiment

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
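
    The quoted figures can be cross-checked with a short calculation. The sketch below is a sizing estimate only, using the ∼2 GB/s aggregate HLT output rate and the requirement to hold several days of continuous running; it is not part of the STS software.

    ```python
    # Rough sizing of the STS disk buffer from the figures quoted above.
    GB, TB = 1e9, 1e12

    def buffer_tb(rate_gb_per_s, days):
        """Disk space (TB) needed to absorb `days` of running at `rate_gb_per_s`."""
        return rate_gb_per_s * GB * days * 24 * 3600 / TB

    for days in (1, 2):
        print(f"{days} day(s) at 2 GB/s -> {buffer_tb(2, days):.0f} TB")
    # ~173 TB per day, so the quoted 250 TB covers roughly a day and a half of running.
    ```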

  1. Online Data Handling and Storage at the CMS Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J. M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.

  2. Petabyte Class Storage at Jefferson Lab (CEBAF)

    NASA Technical Reports Server (NTRS)

    Chambers, Rita; Davis, Mark

    1996-01-01

    By 1997, the Thomas Jefferson National Accelerator Facility will collect over one Terabyte of raw information per day of Accelerator operation from three concurrently operating Experimental Halls. When post-processing is included, roughly 250 TB of raw and formatted experimental data will be generated each year. By the year 2000, a total of one Petabyte will be stored on-line. Critical to the experimental program at Jefferson Lab (JLab) is the networking and computational capability to collect, store, retrieve, and reconstruct data on this scale. The design criteria include support of a raw data stream of 10-12 MB/second from Experimental Hall B, which will operate the CEBAF (Continuous Electron Beam Accelerator Facility) Large Acceptance Spectrometer (CLAS). Keeping up with this data stream implies design strategies that provide storage guarantees during accelerator operation, minimize the number of times data is buffered, allow seamless access to specific data sets for the researcher, synchronize data retrievals with the scheduling of postprocessing calculations on the data reconstruction CPU farms, as well as support the site capability to perform data reconstruction and reduction at the same overall rate at which new data is being collected. The current implementation employs state-of-the-art StorageTek Redwood tape drives and robotics library integrated with the Open Storage Manager (OSM) Hierarchical Storage Management software (Computer Associates, International), the use of Fibre Channel RAID disks dual-ported between Sun Microsystems SMP servers, and a network-based interface to a 10,000 SPECint92 data processing CPU farm. Issues of efficiency, scalability, and manageability will become critical to meet the year 2000 requirements for a Petabyte of near-line storage interfaced to over 30,000 SPECint92 of data processing power.
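
    The storage figures quoted for Hall B can be checked with a one-line calculation. The sketch below simply converts the 10-12 MB/s raw data stream into a daily volume; it assumes continuous accelerator operation and is illustrative only.

    ```python
    # Daily raw-data volume implied by a sustained 10-12 MB/s stream from Hall B.
    MB, TB = 1e6, 1e12

    def daily_tb(rate_mb_per_s):
        return rate_mb_per_s * MB * 24 * 3600 / TB

    for rate in (10, 12):
        print(f"{rate} MB/s sustained -> {daily_tb(rate):.2f} TB/day")
    # 0.86-1.04 TB/day of raw data, consistent with "over one Terabyte ... per day";
    # post-processing then grows this toward the quoted ~250 TB per year.
    ```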

  3. Possible Rapid Gas Giant Planet Formation in the Solar Nebula and Other Protoplanetary Disks.

    PubMed

    Boss

    2000-06-20

    Gas giant planets have been detected in orbit around an increasing number of nearby stars. Two theories have been advanced for the formation of such planets: core accretion and disk instability. Core accretion, the generally accepted mechanism, requires several million years or more to form a gas giant planet in a protoplanetary disk like the solar nebula. Disk instability, on the other hand, can form a gas giant protoplanet in a few hundred years. However, disk instability has previously been thought to be important only in relatively massive disks. New three-dimensional, "locally isothermal," hydrodynamical models without velocity damping show that a disk instability can form Jupiter-mass clumps, even in a disk with a mass (0.091 M⊙ within 20 AU) low enough to be in the range inferred for the solar nebula. The clumps form with initially eccentric orbits, and their survival will depend on their ability to contract to higher densities before they can be tidally disrupted at successive periastrons. Because the disk mass in these models is comparable to that apparently required for the core accretion mechanism to operate, the models imply that disk instability could obviate the core accretion mechanism in the solar nebula and elsewhere.

  4. Imaging Transitional Disks with TMT: Lessons Learned from the SEEDS Survey

    NASA Technical Reports Server (NTRS)

    Grady, Carol A.; Fukagawa, M.; Muto, T.; Hashimoto, J.

    2014-01-01

    TMT studies of the early phases of giant planet formation will build on studies carried out in this decade using 8-meter class telescopes. One such study is the Strategic Exploration of Exoplanets and Disks with Subaru transitional disk survey. We have found a wealth of indirect signatures of giant planet presence, including spiral arms, pericenter offsets of the outer disk from the star, and changes in disk color at the inner edge of the outer disk in intermediate-mass PMS star disks. T Tauri star transitional disks are less flamboyant, but are also dynamically colder: any spiral arms in these disks will be more tightly wound. Imaging such features at the distance of the nearest star-forming regions requires higher angular resolution than achieved with HiCIAO + AO188. Imaging such disks with extreme AO systems requires the use of laser guide stars, and is infeasible with the extreme AO systems currently being commissioned on 8-meter class telescopes. Similarly, the JWST and AFTA/WFIRST coronagraphs being considered have inner working angles of 0.2 arcsec, and will occult the inner 28 astronomical units of systems at d ~ 140 pc, a region where both high-contrast imagery and ALMA data indicate that giant planets are located in transitional disks. However, studies of transitional disks associated with solar-mass stars and their planet complement are feasible with TMT using NFIRAOS.

  5. PHOTOIONIZATION MODELS OF THE INNER GASEOUS DISK OF THE HERBIG BE STAR BD+65 1637

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, P.; Sigut, T. A. A.; Landstreet, J. D., E-mail: ppatel54@uwo.ca

    2016-01-20

    We attempt to constrain the physical properties of the inner, gaseous disk of the Herbig Be star BD+65 1637 using non-LTE, circumstellar disk codes and observed spectra (3700–10500 Å) from the ESPaDOnS instrument on the Canada–France–Hawaii Telescope. The photoionizing radiation of the central star is assumed to be the sole source of input energy for the disk. We model optical and near-infrared emission lines that are thought to form in this region using standard techniques that have been successful in modeling the spectra of classical Be stars. By comparing synthetic line profiles of hydrogen, helium, iron, and calcium with the observed line profiles, we try to constrain the geometry, density structure, and kinematics of the gaseous disk. Reasonable matches have been found for all line profiles individually; however, no disk density model based on a single power law for the equatorial density was able to simultaneously fit all of the observed emission lines. Among the emission lines, the metal lines, especially the Ca II IR triplet, seem to require higher disk densities than the other lines. Excluding the Ca II lines, a model in which the equatorial disk density falls as 10^-10 (R*/R)^3 g cm^-3 seen at an inclination of 45° for a 50 R* disk provides reasonable matches to the overall line shapes and strengths. The Ca II lines seem to require a shallower drop-off as 10^-10 (R*/R)^2 g cm^-3 to match their strength. More complex disk density models are likely required to refine the match to the BD+65 1637 spectrum.

  6. Photoionization Models of the Inner Gaseous Disk of the Herbig Be Star BD+65 1637

    NASA Astrophysics Data System (ADS)

    Patel, P.; Sigut, T. A. A.; Landstreet, J. D.

    2016-01-01

    We attempt to constrain the physical properties of the inner, gaseous disk of the Herbig Be star BD+65 1637 using non-LTE, circumstellar disk codes and observed spectra (3700-10500 Å) from the ESPaDOnS instrument on the Canada-France-Hawaii Telescope. The photoionizing radiation of the central star is assumed to be the sole source of input energy for the disk. We model optical and near-infrared emission lines that are thought to form in this region using standard techniques that have been successful in modeling the spectra of classical Be stars. By comparing synthetic line profiles of hydrogen, helium, iron, and calcium with the observed line profiles, we try to constrain the geometry, density structure, and kinematics of the gaseous disk. Reasonable matches have been found for all line profiles individually; however, no disk density model based on a single power law for the equatorial density was able to simultaneously fit all of the observed emission lines. Among the emission lines, the metal lines, especially the Ca II IR triplet, seem to require higher disk densities than the other lines. Excluding the Ca II lines, a model in which the equatorial disk density falls as 10^-10 (R*/R)^3 g cm^-3 seen at an inclination of 45° for a 50 R* disk provides reasonable matches to the overall line shapes and strengths. The Ca II lines seem to require a shallower drop-off as 10^-10 (R*/R)^2 g cm^-3 to match their strength. More complex disk density models are likely required to refine the match to the BD+65 1637 spectrum.

  7. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over the copied film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image format. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD-ROM depending on the quality of image required for the finding in question. This reduces the weight of a teaching file from that of a baby elephant to that of a single CD-ROM disc. Thus, with this method of teaching file preparation and storage the following advantages are realized: (1) Technically easier and less time-consuming image reproduction. (2) Considerably less unwieldy and substantially more portable teaching files. (3) Novel ability to index files and then retrieve specific cases of choice based on descriptive text.
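
    The "linking" of images to their dictated reports, together with a text index over the reports, can be illustrated in a few lines of code. The sketch below is hypothetical: the file names, report excerpts, and the dictionary-based index are illustrative, not the authors' implementation.

    ```python
    # Minimal sketch of a linked, indexed teaching file: each digitized image is
    # paired with its dictated report, and a keyword index retrieves matching cases.
    from collections import defaultdict

    teaching_file = {  # hypothetical image files and report excerpts
        "case001.img": "Chest radiograph demonstrating a right apical pneumothorax",
        "case002.img": "Sonographic image showing cholelithiasis with posterior shadowing",
    }

    index = defaultdict(list)
    for image, report in teaching_file.items():
        for word in report.lower().split():
            index[word].append(image)

    # Retrieve every case whose report mentions a user-specified feature.
    print(index["pneumothorax"])   # -> ['case001.img']
    ```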

  8. Efficient micromagnetics for magnetic storage devices

    NASA Astrophysics Data System (ADS)

    Escobar Acevedo, Marco Antonio

    Micromagnetics is an important component for advancing the understanding and design of magnetic nanostructures. Numerous existing and prospective magnetic devices rely on micromagnetic analysis; these include hard disk drives, magnetic sensors, memories, microwave generators, and magnetic logic. The ability to examine, describe, and predict the magnetic behavior and macroscopic properties of nanoscale magnetic systems is essential for improving the existing devices, for progressing in their understanding, and for enabling new technologies. This dissertation describes efficient micromagnetic methods as required for magnetic storage analysis. Their performance and accuracy are demonstrated by studying realistic, complex, and relevant micromagnetic system case studies. An efficient methodology for dynamic micromagnetics in large-scale simulations is used to study the writing process in a full-scale model of a magnetic write head. An efficient scheme, tailored for micromagnetics, to find the minimum energy state of a magnetic system is presented. This scheme can be used to calculate hysteresis loops. An efficient scheme, tailored for micromagnetics, to find the minimum energy path between two stable states of a magnetic system is presented. This minimum energy path is intimately related to the thermal stability.

  9. AIIM '90: Themes and Trends.

    ERIC Educational Resources Information Center

    Cowan, Les

    1990-01-01

    Outlines and analyzes new trends and developments at the Association for Information and Image Management's 1990 spring conference. The growth of imaging and the optical storage industry is emphasized, and new developments that are discussed include hardware; optical disk drives; jukeboxes; local area networks (LANs); bar codes; image displays;…

  10. Document Indexing for Image-Based Optical Information Systems.

    ERIC Educational Resources Information Center

    Thiel, Thomas J.; And Others

    1991-01-01

    Discussion of image-based information retrieval systems focuses on indexing. Highlights include computerized information retrieval; multimedia optical systems; optical mass storage and personal computers; and a case study that describes an optical disk system which was developed to preserve, access, and disseminate military documents. (19…

  11. Digital Audio Tape: Yet Another Archival Media?

    ERIC Educational Resources Information Center

    Vanker, Anthony D.

    1989-01-01

    Provides an introduction to the technical aspects of digital audiotape and compares it to other computer storage devices such as optical data disks and magnetic tape cartridges in terms of capacity, transfer rate, and cost. The current development of digital audiotape standards is also discussed. (five references) (CLB)

  12. Manufacturing Methods and Technology Project Summary Reports

    DTIC Science & Technology

    1983-06-01

    Proposal will be prepared by Solar Turbines, Inc. for introduction of cast titanium impellers into T62T-40 production. Detroit Diesel Allison will...microprocessor control, RS-232 serial communications ports, binary I/O ports, floppy disk mass storage and control panel. A component pickup

  13. Physical principles and current status of emerging non-volatile solid state memories

    NASA Astrophysics Data System (ADS)

    Wang, L.; Yang, C.-H.; Wen, J.

    2015-07-01

    Today the influence of non-volatile solid-state memories on people's lives has become more prominent because of their non-volatility, low data latency, and high robustness. As a pioneering technology that is representative of non-volatile solid-state memories, flash memory has recently seen widespread application in many areas ranging from electronic appliances, such as cell phones and digital cameras, to external storage devices such as universal serial bus (USB) memory. Moreover, owing to its large storage capacity, it is expected that in the near future, flash memory will replace hard-disk drives as a dominant technology in the mass storage market, especially because of recently emerging solid-state drives. However, the rapid growth of global digital data has led to the need for flash memories to have larger storage capacity, thus requiring a further downscaling of the cell size. Such a miniaturization is expected to be extremely difficult because of the well-known scaling limit of flash memories. It is therefore necessary either to explore innovative technologies that can extend the areal density of flash memories beyond the scaling limits, or to vigorously develop alternative non-volatile solid-state memories including ferroelectric random-access memory, magnetoresistive random-access memory, phase-change random-access memory, and resistive random-access memory. In this paper, we review the physical principles of flash memories and their technical challenges that affect our ability to enhance the storage capacity. We then present a detailed discussion of novel technologies that can extend the storage density of flash memories beyond the commonly accepted limits. In each case, we subsequently discuss the physical principles of these new types of non-volatile solid-state memories as well as their respective merits and weaknesses when utilized for data storage applications. Finally, we predict the future prospects for the aforementioned solid-state memories for the next generation of data-storage devices based on a comparison of their performance.

  14. The Photorefractive Effect and its Application in Optical Computing

    NASA Astrophysics Data System (ADS)

    Li, Guo

    This Ph.D. dissertation covers the beam fanning effect, the temperature dependence of the diffraction efficiency and response time using different addressing configurations, and an evaluation of the limitations and capacity of holographic storage in BaTiO3 crystals. We also designed a digital holographic optical disk and built an associative memory. The beam fanning effect in a BaTiO3 crystal was investigated in detail. The effect depends on the crystal faces illuminated. In particular, for +c face illumination we found that the fanning effect strongly depends on the angle of incidence, polarization and wavelength of the incident light, crystal temperature, and laser beam profile, but only weakly depends on input laser power. In the case of -c face and a-face illumination, a dependence of the ring angle on wavelength and input power was observed. We found that the intensity of the reflected beam in NDFWM, the intensity of the self-phase-conjugate beam and the response time of the fanning effect decrease exponentially with temperature, with a major change around 60-80°C. A random bistability and oscillation of the SPPC occur around 80°C. We also present a theoretical analysis for the dependence of the photorefractive effect on temperature. We experimentally evaluate the capacity and limitations of optical storage in BaTiO3 crystals using self-pumped phase conjugation (SPPC) and two-wave mixing. The storage capacity differs with the face of illumination, polarization, beam profile and input power. We demonstrate that, using two-wave mixing, three-dimensional volume holograms can be stored. The information-bearing beam diameter for storage and recall can be about 0.25 mm or less. By these techniques we demonstrate that at least 10^5 holograms can be stored in a 3.5-inch photorefractive disk. We evaluate an optimal optical architecture for exploiting the photorefractive effect for digital holographic disk storage. An image with many pixels was used for this experimental evaluation. By using a ray-tracing program, we traced a beam with a Gaussian profile through our optical system. We also estimated the Seidel aberrations of our optical system in order to determine the quality of the stored digital data.

  15. Inner Disk Structure and Transport Mechanisms in the Transitional Disk around T Cha

    NASA Astrophysics Data System (ADS)

    Brown, Alexander

    2017-08-01

    To better understand how Earth-like planets form around low-mass stars, we propose to study the UV (HST), X-ray (XMM), and optical (LCOGT) variability of the young star T Cha. This variability is caused by obscuration of the star by clumpy material in the rim of its inner disk. Changing sight lines through the disk allow measurement of the temperature and column density of both molecular and atomic gas and the physical properties of the dust grains in the well-mixed inner disk, as well as determining the gas-to-dust ratio. The gas-to-dust ratio affects planetesimal growth and disk stability but is difficult to measure in local regions of disks. Three 5 orbit visits, separated by 3-7 days, are required for use of analysis techniques comprising both differential pair-method comparison of spectra with differing A_v (particularly important for determining the dust extinction curve, A_lambda, where removal of the foreground extinction requires multiple epochs) and detailed spectral fitting of gas absorption features at each epoch. The inner disk of T Cha is particularly interesting, because T Cha has a transitional disk with a large gap at 0.2-15 AU in the dust disk and allows study of the gas and dust structure in the terrestrial planet formation zone during this important rapid phase of protoplanetary disk evolution. Characterizing the high energy (UV/X-ray) radiation field is also essential for in-depth studies of the disk in other spectral regions. Results from these observations will have wide relevance to the modeling and understanding of protoplanetary disk structure and evolution, and the complex gas and dust physics and chemistry in disk surface layers.

  16. Dynamic stability and slider-lubricant interactions in hard disk drives

    NASA Astrophysics Data System (ADS)

    Ambekar, Rohit Pradeep

    2007-12-01

    Hard disk drives (HDD) have played a significant role in the current information age and have become the backbone of storage. The soaring demand for mass data storage drives the necessity for increasing capacity of the drives and hence the areal density on the disks as well as the reliability of the HDD. To achieve greater areal density in hard disk drives, the flying height of the airbearing slider continually decreases. Different proximity forces and interactions influence the air bearing slider resulting in fly height modulation and instability. This poses several challenges to increasing the areal density (current goal is 2Tb/in.2) as well as making the head-disk interface (HDI) more reliable. Identifying and characterizing these forces or interactions has become important for achieving a stable fly height at proximity and realizing the goals of areal density and reliability. Several proximity forces or interactions influencing the slider are identified through the study of touchdown-takeoff hysteresis. Slider-lubricant interaction which causes meniscus force between the slider and disk as well as airbearing surface contamination seems to be the most important factor affecting stability and reliability at proximity. In addition, intermolecular forces and disk topography are identified as important factors. Disk-to-slider lubricant transfer leads to lubricant pickup on the slider and also causes depletion of lubricant on the disk, affecting stability and reliability of the HDI. Experimental and numerical investigation as well as a parametric study of the process of lubricant transfer has been done using a half-delubed disk. In the first part of this parametric study, dependence on the disk lubricant thickness, lubricant type and slider ABS design has been investigated. It is concluded that the lubricant transfer can occur without slider-disk contact and there can be more than one timescale associated with the transfer. Further, the transfer increases non-linearly with increasing disk lubricant thickness. Also, the transfer depends on the type of lubricant used, and is less for Ztetraol than for Zdol. The slider ABS design also plays an important role, and a few suggestions are made to improve the ABS design for better lubricant performance. In the second part of the parametric study, the effect of carbon overcoat, lubricant molecular weight and inclusion of X-1P and A20H on the slider-lubricant interactions is investigated using a half-delubed disk approach. Based on the results, it is concluded that there exists a critical head-disk clearance above which there is negligible slider-lubricant interaction. The interaction starts at this critical clearance and increases in intensity as the head-disk clearance is further decreased below the critical clearance. Using shear stress simulations and previously published work a theory is developed to support the experimental observations. The critical clearance depends on various HDI parameters and hence can be reduced through proper design of the interface. Comparison of critical clearance on CHx and CHxNy media indicates that presence of nitrogen is better for HDI as it reduces the critical clearance, which is found to increase with increasing lubricant molecular weight and in presence of additives X-1P and A20H. Further experiments maintaining a fixed slider-disk clearance suggest that two different mechanisms dominate the disk-to-slider and slider-to-disk lubricant transfer. 
One of the key factors influencing the slider stability at proximity is the disk topography, since it provides dynamic excitation to the low-flying sliders and strongly influences its dynamics. The effect of circumferential as well as radial disk topography is investigated using a new method to measure the 2-D (true) disk topography. Simulations using CMLAir dynamic simulator indicate a strong dependence on the circumferential roughness and waviness features as well as radial features, which have not been studied intensively till now. The simulations with 2-D disk topography are viewed as more realistic than the 1-D simulations. Further, it is also seen that the effect of the radial features can be reduced through effective ABS design. Finally, an attempt has been made to establish correlations between some of the proximity interactions as well as others which may affect the HDI reliability by creating a relational chart. Such an organization serves to give a bigger picture of the various efforts being made in the field of HDI reliability and link them together. From this chart, a causal relationship is suggested between the electrostatic, intermolecular and meniscus forces.

  17. Design and evaluation of a hybrid storage system in HEP environment

    NASA Astrophysics Data System (ADS)

    Xu, Qi; Cheng, Yaodong; Chen, Gang

    2017-10-01

    Nowadays, High Energy Physics experiments produce large amounts of data. These data are stored in mass storage systems which need to balance cost, performance and manageability. In this paper, a hybrid storage system including SSDs (Solid-State Drives) and HDDs (Hard Disk Drives) is designed to accelerate data analysis and maintain a low cost. The performance of accessing files is a decisive factor for the HEP computing system. A new deployment model of the Hybrid Storage System in High Energy Physics is proposed which is shown to have higher I/O performance. The detailed evaluation methods, as well as evaluations of the SSD/HDD ratio and the size of the logic block, are also given. In all evaluations, sequential-read, sequential-write, random-read and random-write are all tested to obtain comprehensive results. The results show the Hybrid Storage System performs well in areas such as accessing large files in HEP.
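
    The kind of measurement behind such an evaluation can be sketched briefly. The code below is illustrative and not the authors' benchmark: it times sequential and random reads of a test file placed on the tier being measured. The path and block sizes are assumptions, and a real benchmark would also bypass or flush the OS page cache.

    ```python
    # Simple sequential vs. random read timing for a file on the SSD or HDD tier.
    import os, random, time

    PATH = "/data/testfile.bin"     # hypothetical test file on the tier under test
    BLOCK = 1024 * 1024             # 1 MiB per read
    N_RANDOM = 256                  # number of random-read samples

    def sequential_read_mb_s(path, block=BLOCK):
        start, total = time.perf_counter(), 0
        with open(path, "rb") as f:
            while chunk := f.read(block):
                total += len(chunk)
        return total / (time.perf_counter() - start) / 1e6

    def random_read_mb_s(path, block=BLOCK, n=N_RANDOM):
        size = os.path.getsize(path)
        start = time.perf_counter()
        with open(path, "rb") as f:
            for _ in range(n):
                f.seek(random.randrange(0, max(1, size - block)))
                f.read(block)
        return n * block / (time.perf_counter() - start) / 1e6

    print(f"sequential read: {sequential_read_mb_s(PATH):.1f} MB/s")
    print(f"random read    : {random_read_mb_s(PATH):.1f} MB/s")
    ```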

  18. Uniform Interfaces for Distributed Systems.

    DTIC Science & Technology

    1980-05-01

    in data structures on stable storage (such as disk). The Virtual Terminals associated with a particular user (i.e., a display terminal) are all...

  19. The Dag Hammarskjold Library Reaches Out to the World.

    ERIC Educational Resources Information Center

    Chepesiuk, Ron

    1998-01-01

    Describes services offered at the Dag Hammarskjold Library at the United Nations (UN). Highlights include adopting new technology for a virtual library; the international law collection which is now accessible through the World Wide Web; UN depository libraries; material available on the Internet; the Optical Disk System, a storage/retrieval…

  20. 48 CFR 1552.215-72 - Instructions for the Preparation of Proposals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of the information, to expedite review of the proposal, submit an IBM-compatible software or storage... offeror used another spreadsheet program, indicate the software program used to create this information... submission of a compatible software or device will expedite review, failure to submit a disk will not affect...

  1. 48 CFR 1552.215-72 - Instructions for the Preparation of Proposals.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of the information, to expedite review of the proposal, submit an IBM-compatible software or storage... offeror used another spreadsheet program, indicate the software program used to create this information... submission of a compatible software or device will expedite review, failure to submit a disk will not affect...

  2. CD-ROMs: Volumes of Books on a Single 4 3/4-Inch Disk.

    ERIC Educational Resources Information Center

    Angle, Melanie

    1992-01-01

    Summarizes the storage capacity, advantages, disadvantages, hardware configurations, and costs of CD-ROMs. Several available titles are described, including "Books in Print," literature study guides, the works of Shakespeare, a historical almanac of "Time Magazine" articles, a scientific dictionary and encyclopedia, and a…

  3. Lubricant depletion under various laser heating conditions in Heat Assisted Magnetic Recording (HAMR)

    NASA Astrophysics Data System (ADS)

    Xiong, Shaomin; Wu, Haoyu; Bogy, David

    2014-09-01

    Heat assisted magnetic recording (HAMR) is expected to increase the storage areal density to more than 1 Tb/in^2 in hard disk drives (HDDs). In this technology, a laser is used to heat the magnetic media to the Curie point (~400-600 °C) during the writing process. The lubricant on the top of a magnetic disk could evaporate and be depleted under the laser heating. The change of the lubricant can lead to instability of the flying slider and failure of the head-disk interface (HDI). In this study, a HAMR test stage is developed to study the lubricant thermal behavior. Various heating conditions are controlled for the study of the lubricant thermal depletion. The effects of laser heating repetitions and power levels on the lubricant depletion are investigated experimentally. The lubricant reflow behavior is discussed as well.

  4. Finite difference model for aquifer simulation in two dimensions with results of numerical experiments

    USGS Publications Warehouse

    Trescott, Peter C.; Pinder, George Francis; Larson, S.P.

    1976-01-01

    The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition the model includes options for reading input data from a disk and writing intermediate results on a disk.
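
    The iterative solution of the discretized flow equation can be illustrated with a toy example. The sketch below is not the USGS program: it solves only the steady-state, homogeneous, isotropic case on a small regular grid with Gauss-Seidel sweeps, whereas the model documented above adds storage, leakage, evapotranspiration, variable grid spacing, and the faster strongly implicit procedure.

    ```python
    # Toy Gauss-Seidel solution of steady-state two-dimensional ground-water flow
    # (Laplace's equation) with fixed heads on the left/right boundaries and
    # no-flow boundaries on the top/bottom rows. All values are illustrative.
    import numpy as np

    nx, ny = 20, 20
    head = np.zeros((ny, nx))
    head[:, 0], head[:, -1] = 100.0, 90.0      # fixed-head boundary columns

    for sweep in range(5000):
        old = head.copy()
        head[0, :], head[-1, :] = head[1, :], head[-2, :]   # no-flow rows
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                head[i, j] = 0.25 * (head[i + 1, j] + head[i - 1, j]
                                     + head[i, j + 1] + head[i, j - 1])
        if np.max(np.abs(head - old)) < 1e-6:   # head-change closure criterion
            break

    print(f"converged after {sweep + 1} sweeps")
    print(head[ny // 2, ::5])                   # sampled heads along the mid-row
    ```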

  5. Permanent-File-Validation Utility Computer Program

    NASA Technical Reports Server (NTRS)

    Derry, Stephen D.

    1988-01-01

    Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.

  6. Vortical structures for nanomagnetic memory induced by dipole-dipole interaction in monolayer disks

    NASA Astrophysics Data System (ADS)

    Liu, Zhaosen; Ciftja, Orion; Zhang, Xichao; Zhou, Yan; Ian, Hou

    2018-05-01

    It is well known that magnetic domains in nanodisks can be used as storage units for computer memory. Using two quantum simulation approaches, we show here that spin vortices on magnetic monolayer nanodisks, which are chirality-free, can be induced by dipole-dipole interaction (DDI) in the disk plane. When DDI is sufficiently strong, vortical and anti-vortical multi-domain textures can be generated simultaneously. In particular, a spin vortex can be easily created and deleted through either external magnetic or electrical signals, making such vortices ideal for use in nanomagnetic memory and logic devices. We demonstrate these properties in our simulations.

  7. Helicity-dependent all-optical switching in hybrid metal-ferromagnet structures for ultrafast magnetic data storage

    NASA Astrophysics Data System (ADS)

    Cheng, Feng

    The emerging Big Data era brings a rapidly increasing need for speed and capacity in storing and processing information. Standalone magnetic recording devices, such as hard disk drives (HDDs), have long played a central role in modern data storage and are continuously advancing. Recognizing the growing capacity gap between demand and production, industry has pushed the bit areal density in HDDs to 900 Gigabit/square inch, a remarkable 450-million-fold increase since the invention of the first hard disk drive in 1956. However, the further development of HDD capacity faces a pressing challenge, the so-called superparamagnetic effect, which leads to the loss of information when a single bit becomes too small to preserve the magnetization. This requires new magnetic recording technologies that can write more stable magnetic bits into hard magnetic materials. Recent research has shown that it is possible to use ultrafast laser pulses to switch the magnetization in certain types of magnetic thin films. Surprisingly, such a process does not require the externally applied magnetic field that is always present in conventional HDDs. Furthermore, the optically induced magnetization switching is extremely fast, down to the sub-picosecond (10^-12 s) level, whereas with traditional recording methods deterministic switching does not take place in less than about 20 ps. It is worth noting that the direction of magnetization is related to the helicity of the incident laser pulses. Namely, right-handed polarized laser pulses generate magnetization pointing in one direction, while left-handed polarized laser pulses generate magnetization pointing in the other direction. This so-called helicity-dependent all-optical switching (HD-AOS) phenomenon can potentially be used in next-generation magnetic storage systems. In this thesis, I explore the HD-AOS phenomenon in hybrid metal-ferromagnet structures, which consist of gold and Co/Pt multilayers. The experimental results show that such CoPtAu hybrid structures exhibit stable HD-AOS over a wide range of repetition rates and peak powers. A macroscopic three-temperature model is developed to explain the experimental results. In order to reduce the magnetic bit size and power consumption to transform future magnetic data storage techniques, I further propose plasmonic-enhanced all-optical switching (PE-AOS) by utilizing the unique properties of the tight field confinement and strong local field enhancement that arise from the excitation of surface plasmons supported by judiciously designed metallic nanostructures. The preliminary results on PE-AOS are presented. Finally, I discuss future work to explore the underlying mechanism of the HD-AOS phenomenon in hybrid metal-ferromagnet thin films. Different materials and plasmonic nanostructures are also proposed as further work.

  8. Automated Forensic Animal Family Identification by Nested PCR and Melt Curve Analysis on an Off-the-Shelf Thermocycler Augmented with a Centrifugal Microfluidic Disk Segment

    PubMed Central

    Zengerle, Roland; von Stetten, Felix; Schmidt, Ulrike

    2015-01-01

    Nested PCR remains a labor-intensive and error-prone biomolecular analysis. Laboratory workflow automation by precise control of minute liquid volumes in centrifugal microfluidic Lab-on-a-Chip systems holds great potential for such applications. However, the majority of these systems require costly custom-made processing devices. Our idea is to augment a standard laboratory device, here a centrifugal real-time PCR thermocycler, with inbuilt liquid handling capabilities for automation. We have developed a microfluidic disk segment enabling an automated nested real-time PCR assay for identification of common European animal groups adapted to forensic standards. For the first time we utilize a novel combination of fluidic elements, including pre-storage of reagents, to automate the assay at constant rotational frequency of an off-the-shelf thermocycler. It provides a universal duplex pre-amplification of short fragments of the mitochondrial 12S rRNA and cytochrome b genes, animal-group-specific main-amplifications, and melting curve analysis for differentiation. The system was characterized with respect to assay sensitivity, specificity, risk of cross-contamination, and detection of minor components in mixtures. 92.2% of the performed tests were recognized as fluidically failure-free sample handling and used for evaluation. Altogether, augmentation of the standard real-time thermocycler with a self-contained centrifugal microfluidic disk segment resulted in an accelerated and automated analysis reducing hands-on time, and circumventing the risk of contamination associated with regular nested PCR protocols. PMID:26147196

  9. The convertible flywheel

    NASA Astrophysics Data System (ADS)

    Ginsburg, B. R.

    The design and testing of a new twin-disk composite flywheel is described. It is the first flywheel to store 2 kW-hr of energy and the first to successfully combine the advantages of composite materials with metal hubs, thus providing a system-ready flywheel with high energy storage and high torque capabilities. The use of flywheels in space for energy storage in satellites and space stations is examined. The convertibility of the present flywheel to provide the next generation Annular Momentum Control Device or Annular Suspension and Pointing System is discussed.

  10. Factors Affecting the Nonlinear Force Versus Distraction Height Curves in an In Vitro C5-C6 Anterior Cervical Distraction Model.

    PubMed

    Wen, Junxiang; Xu, Jianwei; Li, Lijun; Yang, Mingjie; Pan, Jie; Chen, Deyu; Jia, Lianshun; Tan, Jun

    2017-06-01

    In vitro biomechanical study of cervical intervertebral distraction. To investigate the forces required for distraction to different heights in an in vitro C5-C6 anterior cervical distraction model, focusing on the influence of the intervertebral disk, posterior longitudinal ligament (PLL), and ligamentum flavum (LF). No previous studies have reported on the forces required for distraction to various heights or the factors resisting distraction in anterior cervical discectomy and fusion. Anterior cervical distraction at C5-C6 was performed in 6 cadaveric specimens using a biomechanical testing machine, under 4 conditions: A, before disk removal; B, after disk removal; C, after disk and PLL removal; and D, after disk and PLL removal and cutting of the LF. Distraction was performed from 0 to 10 mm at a constant velocity (5 mm/min). Force and distraction height were recorded automatically. The force required increased with distraction height under all 4 conditions. There was a sudden increase in force required at 6-7 mm under conditions B and C, but not D. Under condition A, distraction to 5 mm required a force of 268.3±38.87 N. Under conditions B and C, distraction to 6 mm required <15 N, and further distraction required dramatically increased force, with distraction to 10 mm requiring 115.4±10.67 and 68.4±9.67 N, respectively. Under condition D, no marked increase in force was recorded. Distraction of the intervertebral space was much easier after disk removal. An intact LF caused a sudden marked increase in the force required for distraction, possibly indicating the point at which the LF was fully stretched. This increase in resistance may help to determine the optimal distraction height to avoid stress to the endplate spacer.

  11. Focus on the post-DVD formats

    NASA Astrophysics Data System (ADS)

    He, Hong; Wei, Jingsong

    2005-09-01

    As digital TV (DTV) technologies develop rapidly in terms of the standard system, hardware platforms, software models, and the interfaces between DTV and the home network, worldwide broadcasting of High Definition TV (HDTV) programs is being scheduled. Enjoying high-quality TV programs at home is no longer a far-off dream. As for the main recording media, which optical storage technology will become the mainstream for meeting HDTV requirements has become a major concern. At present, several post-DVD formats are competing on technology, standards and the market. Here we review the coexisting post-DVD formats worldwide and discuss the basic optical disk parameters, video/audio coding strategies and system performance for HDTV programs.

  12. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    NASA Astrophysics Data System (ADS)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013 and achieved, for the first time, a soft landing on the Moon for a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on Stornext sharing software in the Chang'E-3 mission. System performance fully meets the data storage requirements of the Miyun ground station. The Stornext file system is a high-performance shared file system; it supports multiple servers running different operating systems accessing the file system at the same time, and supports access to data over a variety of topologies, such as SAN and LAN. Stornext focuses on data protection and big data management. Quantum has announced that more than 70,000 licenses of the Stornext file system have been sold worldwide and that its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 satellite downlink data and the management of local data storage. The station mainly handles exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control of the data receiving equipment. The ground station applied a SAN storage network system based on Stornext shared software for reliable data receiving and management. The computer system in the Miyun ground station is composed of business servers, application workstations and storage equipment, so the storage system needs a shared file system that supports heterogeneous, multi-operating-system access. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s; thus the network throughput of the file system must be not less than 240 MB/s. At the same time, the maximum size of each data file is up to 810 GB. The planned storage system requires that 10 nodes simultaneously write data to the file system through 16 channels with 240 MB/s network throughput. As integrated, the sharing system can provide 1020 MB/s aggregate write speed. When the master storage server fails, the backup storage server takes over the service; client reads and writes are not affected, and the switchover time is less than 5 s. The designed and integrated storage system meets user requirements. However, an all-fibre approach is expensive in a SAN, and the SCSI hard disk transfer rate may still be the bottleneck in the development of the entire storage system. Stornext can provide users with efficient sharing, management and automatic archiving of large numbers of files together with hardware solutions, and it occupies a leading position in big data management. Stornext is widely used sharing software, but it has drawbacks: first, the software is expensive and licensed per site, so when the network scale is large the purchase cost becomes very high; second, configuring the Stornext software places high demands on the skills of technical staff, and when a problem occurs it is difficult to troubleshoot.
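
    The 240 MB/s requirement quoted above follows directly from the channel figures. The short sketch below just reproduces that sizing arithmetic; the numbers come from the abstract and the script itself is illustrative.

    ```python
    # Sizing check for the Miyun ground station shared file system.
    nodes, channels, per_channel_mb_s = 10, 16, 15

    required_mb_s = channels * per_channel_mb_s
    print(f"{channels} channels x {per_channel_mb_s} MB/s = {required_mb_s} MB/s "
          f"aggregate write load across {nodes} nodes")
    # 240 MB/s required, against the ~1020 MB/s aggregate write speed of the
    # integrated system, leaving headroom for failover and future growth.
    ```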

  13. 75 FR 13045 - Airworthiness Directives; CFM International, S.A. CFM56-5, -5B, and -7B Series Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-18

    ... (LPT) disks, identified by serial number (S/N). This proposed AD results from the discovery of a... discovery of a material nonconformity requiring removal of the disk before the certified disk life of...

  14. Security of patient data when decommissioning ultrasound systems.

    PubMed

    Moggridge, James

    2017-02-01

    Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system's own delete function; full reinstallation of system software; the manufacturer's own disk wiping service; and open source disk wiping software, used for both full-disk and blank-space-only erasure. Attempts were then made to recover data using open source recovery tools. All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system's own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer's wiping service or the open source wiping software. The typical method of reinstalling an ultrasound system's software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system's hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state.
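
    One simple way to spot-check an erased drive, independent of the tools compared above, is to scan a raw image of the disk for residual non-zero data. The sketch below is illustrative only (the image path is hypothetical): it detects whether a zero-fill wipe completed, but a clean result from this check alone does not prove that no PID ever existed on the drive.

    ```python
    # Scan a raw disk image and count bytes that are not zero; a fully
    # zero-filled (wiped) image should report 0 non-zero bytes.
    CHUNK = 4 * 1024 * 1024   # read 4 MiB at a time

    def nonzero_bytes(image_path):
        nonzero = 0
        with open(image_path, "rb") as f:
            while chunk := f.read(CHUNK):
                nonzero += len(chunk) - chunk.count(0)
        return nonzero

    print(nonzero_bytes("/tmp/ultrasound_disk.img"))   # hypothetical image path
    ```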

  15. 77 FR 6859 - Proposed Collection; Comment Request for Revenue Procedure 97-22

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-09

    ... system that either images their paper books and records or transfers their computerized books and records to an electronic storage media, such as an optical disk. The information requested in the revenue... being made to the revenue procedure at this time. Type of Review: Extension of a currently approved...

  16. The State of the Art in Information Handling. Operation PEP/Executive Information Systems.

    ERIC Educational Resources Information Center

    Summers, J. K.; Sullivan, J. E.

    This document explains recent developments in computer science and information systems of interest to the educational manager. A brief history of computers is included, together with an examination of modern computers' capabilities. Various features of card, tape, and disk information storage systems are presented. The importance of time-sharing…

  17. 75 FR 1625 - Privacy Act of 1974; Report of Amended or Altered System; Medical, Health and Billing Records System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-12

    ...., desktop, laptop, handheld or other computer types) containing protected personal identifiers or PHI is... as the National Indian Women's Resource Center, to conduct analytical and evaluation studies. 8... SYSTEM: STORAGE: File folders, ledgers, card files, microfiche, microfilm, computer tapes, disk packs...

  18. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    The Solid State Drive (SSD) is a promising storage technology for High Energy Physics parallel analysis farms. Its combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. It also has lower energy consumption and higher vibration tolerance than the Hard Disk Drive (HDD), which makes it an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which makes it possible to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment, compare the performance of HDDs and SSDs in I/O-intensive analysis scenarios, and, in particular, discuss how PROOF system performance scales with the number of simultaneously running analysis jobs.
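
    The comparison described in this record can be reproduced in spirit with a simple read-pattern micro-benchmark. The sketch below is not the authors' code; the test-file path is a placeholder, and it merely contrasts sequential and random 1 MiB reads, the access patterns that separate SSD from HDD behaviour when many analysis jobs hit the same drive.

    ```python
    # Hypothetical micro-benchmark (not from the paper): time sequential versus
    # random 1 MiB reads on a pre-created test file.  On Linux, drop the page
    # cache first so the drive, not RAM, is being measured.
    import os, random, time

    PATH = "testfile.bin"      # placeholder: a large file on the drive under test
    BLOCK = 1 << 20            # 1 MiB per read

    size = os.path.getsize(PATH)
    n_reads = min(256, size // BLOCK - 1)
    seq_offsets = [i * BLOCK for i in range(n_reads)]
    rnd_offsets = [random.randrange(0, size - BLOCK) for _ in range(n_reads)]

    def throughput(offsets):
        with open(PATH, "rb", buffering=0) as f:
            start = time.perf_counter()
            for off in offsets:
                f.seek(off)
                f.read(BLOCK)
            elapsed = time.perf_counter() - start
        return len(offsets) * BLOCK / elapsed / 1e6   # MB/s

    print(f"sequential reads: {throughput(seq_offsets):7.1f} MB/s")
    print(f"random reads:     {throughput(rnd_offsets):7.1f} MB/s")
    ```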

  19. I/O performance evaluation of a Linux-based network-attached storage device

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoyan; Dong, Yonggui; Wu, Jinglian; Jia, Huibo; Feng, Guanping

    2002-09-01

    In a Local Area Network (LAN), clients are permitted to access files on high-density optical disks via a network server. However, the quality of the read service offered by a conventional server is unsatisfactory because the server performs many other functions and must handle too many concurrent callers. This paper develops a Linux-based Network-Attached Storage (NAS) server. The operating system (OS), composed of an optimized kernel and a miniaturized file system, is stored in flash memory. After initialization, the NAS device is connected to the LAN. The administrator and the users configure and access the server through web pages, respectively. To improve access quality, the buffer-cache management of the file system is optimized. Benchmark programs were run to evaluate the I/O performance of the NAS device. Since data recorded on optical disks are mostly accessed for reading, attention is focused on the read throughput of the device. The experimental results indicate that the I/O performance of our NAS device is excellent.

  20. Planet Formation in Stellar Binaries: How Disk Gravity Can Lower the Fragmentation Barrier

    NASA Astrophysics Data System (ADS)

    Silsbee, Kedron; Rafikov, Roman R.

    2014-11-01

    Binary star systems present a challenge to current theories of planet formation. Perturbations from the companion star dynamically excite the protoplanetary disk, which can lead to destructive collisions between planetesimals and prevent growth from 1 km to 100 km sized planetesimals. Despite this apparent barrier to coagulation, planets have been discovered within several small-separation (<20 AU), eccentric (e_b ≈ 0.4) binaries, such as alpha Cen and gamma Cep. We address this problem by analytically exploring planetesimal dynamics under the simultaneous action of (1) the binary perturbation, (2) gas drag (which tends to align planetesimal orbits), and (3) the gravity of an eccentric protoplanetary disk. We then use our dynamical solutions to assess the outcomes of planetesimal collisions (growth, destruction, erosion) for a variety of disk models. We find that planets in small-separation binaries can form at their present locations if the primordial protoplanetary disks were massive (>0.01 M⊙) and not very eccentric (eccentricity of order several per cent at the location of the planet). This constraint on the disk mass is compatible with the high masses of the giant planets in known gamma Cep-like binaries, which require a large mass reservoir for their formation. We show that for these massive disks, disk gravity is dominant over the gravity of the binary companion at the location of the observed planets. Therefore, planetesimal growth is highly sensitive to disk properties. The requirement of low disk eccentricity is in line with recent hydrodynamic simulations that tend to show gaseous disks in eccentric binaries developing very low eccentricity, at the level of a few percent. A massive, purely axisymmetric disk makes for a friendlier environment for planetesimal growth by driving rapid apsidal precession of planetesimals and averaging out the eccentricity excitation from the binary companion. When the protoplanetary disk is eccentric, we find that the most favorable conditions for planetesimal growth emerge when the disk is non-precessing and apsidally aligned with the orbit of the binary.

  1. Virtual file system for PSDS

    NASA Technical Reports Server (NTRS)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper - protected from disaster, and accumulating to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower-performance optical media based on a least-frequently-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.
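
    To make the migration policy concrete, here is a minimal sketch of one least-frequently-used migration pass. It assumes a hypothetical layout with a fast directory, an optical-volume directory and an access-count table (kept in a database, as the abstract describes); none of the names correspond to the actual PSDS implementation.

    ```python
    # Minimal LFU migration sketch (hypothetical, not the PSDS code): the n
    # least-frequently-used files are moved from fast storage to an optical
    # volume, and a small stub records where each file went.
    import heapq, shutil
    from pathlib import Path

    def migrate_lfu(fast_dir: str, optical_dir: str, access_counts: dict, n: int) -> None:
        files = [p for p in Path(fast_dir).iterdir()
                 if p.is_file() and p.suffix != ".stub"]
        victims = heapq.nsmallest(n, files, key=lambda p: access_counts.get(p.name, 0))
        for path in victims:
            dest = Path(optical_dir) / path.name
            shutil.move(str(path), str(dest))                            # migrate the payload
            path.with_name(path.name + ".stub").write_text(str(dest))    # pointer left behind
            print(f"migrated {path.name} "
                  f"(accesses={access_counts.get(path.name, 0)}) -> {dest}")

    # Example: move the 10 least-used files; counts would come from the
    # vital-statistics database mentioned in the abstract.
    # migrate_lfu("/fast/files", "/optical/vol042", counts_from_db, n=10)
    ```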

  2. Performance of a distributed superscalar storage server

    NASA Technical Reports Server (NTRS)

    Finestead, Arlan; Yeager, Nancy

    1993-01-01

    The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be deployed to provide a cost-effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster across a higher bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck. UniTree sites that would require a high ratio of file creations and deletions to reads and writes would run into this bottleneck. It is possible to improve UniTree Name Server performance by bypassing the UniTree LibUnix library altogether, communicating directly with the UniTree Name Server, and optimizing creations. Although testing was performed in a less than ideal environment, the performance statistics stated in this paper should give end-users a realistic idea of what performance they can expect in this type of setup.

  3. FORMATION OF CLOSE IN SUPER-EARTHS AND MINI-NEPTUNES: REQUIRED DISK MASSES AND THEIR IMPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlichting, Hilke E., E-mail: hilke@mit.edu

    Recent observations by the Kepler space telescope have led to the discovery of more than 4000 exoplanet candidates consisting of many systems with Earth- to Neptune-sized objects that reside well inside the orbit of Mercury around their respective host stars. How and where these close-in planets formed is one of the major unanswered questions in planet formation. Here, we calculate the required disk masses for in situ formation of the Kepler planets. We find that if close-in planets formed as isolation masses, then standard gas-to-dust ratios yield corresponding gas disks that are gravitationally unstable for a significant fraction of systems, ruling out such a scenario. We show that the maximum width of a planet's accretion region in the absence of any migration is 2v_esc/Ω, where v_esc is the escape velocity of the planet and Ω is the Keplerian frequency, and we use it to calculate the required disk masses for in situ formation with giant impacts. Even with giant impacts, formation without migration requires disk surface densities in solids at semi-major axes of less than 0.1 AU of 10^3-10^5 g cm^-2, implying typical enhancements above the minimum-mass solar nebular (MMSN) by at least a factor of 20. Corresponding gas disks are below but not far from the gravitational stability limit. In contrast, formation beyond a few AU is consistent with MMSN disk masses. This suggests that the migration of either solids or fully assembled planets is likely to have played a major role in the formation of close-in super-Earths and mini-Neptunes.

  4. Formation of Close in Super-Earths and Mini-Neptunes: Required Disk Masses and their Implications

    NASA Astrophysics Data System (ADS)

    Schlichting, Hilke E.

    2014-11-01

    Recent observations by the Kepler space telescope have led to the discovery of more than 4000 exoplanet candidates consisting of many systems with Earth- to Neptune-sized objects that reside well inside the orbit of Mercury around their respective host stars. How and where these close-in planets formed is one of the major unanswered questions in planet formation. Here, we calculate the required disk masses for in situ formation of the Kepler planets. We find that if close-in planets formed as isolation masses, then standard gas-to-dust ratios yield corresponding gas disks that are gravitationally unstable for a significant fraction of systems, ruling out such a scenario. We show that the maximum width of a planet's accretion region in the absence of any migration is 2v_esc/Ω, where v_esc is the escape velocity of the planet and Ω is the Keplerian frequency, and we use it to calculate the required disk masses for in situ formation with giant impacts. Even with giant impacts, formation without migration requires disk surface densities in solids at semi-major axes of less than 0.1 AU of 10^3-10^5 g cm^-2, implying typical enhancements above the minimum-mass solar nebular (MMSN) by at least a factor of 20. Corresponding gas disks are below but not far from the gravitational stability limit. In contrast, formation beyond a few AU is consistent with MMSN disk masses. This suggests that the migration of either solids or fully assembled planets is likely to have played a major role in the formation of close-in super-Earths and mini-Neptunes.
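
    As a rough numerical illustration of the accretion-zone width quoted in these two records, the sketch below evaluates 2 v_esc/Ω for an Earth-mass, Earth-radius planet at 0.1 AU around a solar-mass star; the numbers are illustrative only and are not a calculation from the paper.

    ```python
    # Illustrative evaluation of the maximum accretion-zone width 2*v_esc/Omega
    # for an Earth-like planet at 0.1 AU around a 1 M_sun star (not from the paper).
    import math

    G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # kg
    M_E   = 5.972e24     # kg
    R_E   = 6.371e6      # m
    AU    = 1.496e11     # m

    a     = 0.1 * AU
    v_esc = math.sqrt(2 * G * M_E / R_E)     # escape velocity of the planet
    omega = math.sqrt(G * M_SUN / a**3)      # Keplerian frequency at a
    width = 2 * v_esc / omega

    print(f"v_esc = {v_esc / 1e3:.1f} km/s")               # about 11.2 km/s
    print(f"width = {width / AU:.3f} AU at a = 0.1 AU")    # about 0.02 AU
    ```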

  5. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution which permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a Server-to-Server and Server-to-Client access network, giving the supercomputing center the following advantages: highest performance Transport Level connections (to 40 MBytes/sec effective rates); matches the throughput of the emerging high performance disk technologies, such as RAID, parallel head transfer devices and software striping; supports standard network and file system applications using a sockets-based application program interface, such as FTP, rcp, rdump, etc.; supports access to the Network File System (NFS) and large aggregate bandwidth for large NFS usage; provides access to a distributed, hierarchical data server capability using the DISCOS UniTree product; supports file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.

  6. Adaptive Signal Processing Testbed: VME-based DSP board market survey

    NASA Astrophysics Data System (ADS)

    Ingram, Rick E.

    1992-04-01

    The Adaptive Signal Processing Testbed (ASPT) is a real-time multiprocessor system utilizing digital signal processor technology on VMEbus-based printed circuit boards installed on a Sun workstation. The ASPT has specific requirements, particularly as regards the signal excision application, with respect to interfacing with current and planned data generation equipment, processing of the data, storage to disk of final and intermediate results, and the development tools for applications development and integration into the overall EW/COM computing environment. A prototype ASPT was implemented using three VME-C-30 boards from Applied Silicon. Experience gained during the prototype development led to the conclusion that interprocessor communications capability is the most significant contributor to overall ASPT performance, and that host involvement should be minimized. Boards using different processors were evaluated with respect to the ASPT system requirements, pricing, and availability. Specific recommendations based on various priorities are made, as well as recommendations concerning the integration and interaction of various tools developed during the prototype implementation.

  7. Testing an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system

    NASA Astrophysics Data System (ADS)

    Pezzi, M.; Favaro, M.; Gregori, D.; Ricci, P. P.; Sapunenko, V.

    2014-06-01

    In large computing centers, such as the INFN CNAF Tier1 [1], it is essential to be able to configure all the machines, depending on their use, in an automated way. For several years Quattor [2], a server provisioning tool, has been used at the Tier1 and is currently in production. Nevertheless, we have recently started a comparison study involving other tools that provide specific server installation and configuration features and offer a fully customizable solution as an alternative to Quattor. Our choice at the moment fell on the integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for server provisioning and management operations. The tool chain should provide the following properties in order to replicate and gradually improve the current system features: a check for storage-specific constraints, such as a kernel module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrade and downgrade; the ability to set the package provider using yum, rpm or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configuration; and scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify the requirements and the new system's suitability in the INFN-T1 environment.
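
    One of the storage-specific constraints listed above, keeping Fibre Channel HBA drivers out of the installer so SAN volumes stay invisible while local disks are partitioned, boils down to generating a modprobe blacklist. The sketch below only illustrates that idea: the module names are common FC drivers given as examples, and the helper and output path are not taken from the INFN CNAF setup.

    ```python
    # Hypothetical helper (not the INFN CNAF tooling): emit a modprobe blacklist
    # so Fibre Channel HBA drivers are not loaded during installation, keeping
    # SAN volumes out of sight while the local disks are partitioned.
    FC_HBA_MODULES = ["qla2xxx", "lpfc", "bfa"]   # example list of common FC drivers

    def blacklist_conf(modules) -> str:
        lines = [f"blacklist {m}\ninstall {m} /bin/false" for m in modules]
        return "\n".join(lines) + "\n"

    if __name__ == "__main__":
        # A provisioning tool would drop this text into the installer image,
        # e.g. as /etc/modprobe.d/no-san-during-install.conf
        print(blacklist_conf(FC_HBA_MODULES))
    ```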

  8. Glass rupture disk

    DOEpatents

    Glass, S. Jill; Nicolaysen, Scott D.; Beauchamp, Edwin K.

    2002-01-01

    A frangible rupture disk and mounting apparatus for use in blocking fluid flow, generally in a fluid conducting conduit such as a well casing, a well tubing string or other conduits within subterranean boreholes. The disk can also be utilized in above-surface pipes or tanks where temporary and controllable fluid blockage is required. The frangible rupture disk is made from a pre-stressed glass with controllable rupture properties wherein the strength distribution has a standard deviation less than approximately 5% from the mean strength. The frangible rupture disk has controllable operating pressures and rupture pressures.

  9. Dynamic stability of stacked disk type flywheels

    NASA Astrophysics Data System (ADS)

    Younger, F. C.

    1981-04-01

    A flywheel assembly formed from adhesively bonded stacked fiber composite disks was analyzed. The stiffness and rigidity of the assembly required to prevent uncontrolled growth in the deformations due to centrifugal force were determined. It is shown that stacked disk type flywheels become unstable when the speed exceeds a critical value. This critical value of speed depends upon the stiffness of the bonded attachments between the disks. It is found that elastomeric bonds do not provide adequate stiffness to ensure dynamic stability for high speed stacked disk type flywheels.

  10. High-capacity high-speed recording

    NASA Astrophysics Data System (ADS)

    Jamberdino, A. A.

    1981-06-01

    Continuing advances in wideband communications and information handling are leading to extremely large volume digital data systems for which conventional data storage techniques are becoming inadequate. The paper presents an assessment of alternative recording technologies for the extremely wideband, high capacity storage and retrieval systems currently under development. Attention is given to longitudinal and rotary head high density magnetic recording, laser holography in human readable/machine readable devices and a wideband recorder, digital optical disks, and spot recording in microfiche formats. The electro-optical technologies considered are noted to be capable of providing data bandwidths up to 1000 megabits/sec and total data storage capacities in the 10^11 to 10^12 bit range, an order of magnitude improvement over conventional technologies.

  11. The use of the cannibalistic habit and elevated relative humidity to improve the storage and shipment of the predatory mite Neoseiulus californicus (Acari: Phytoseiidae).

    PubMed

    Ghazy, Noureldin Abuelfadl; Amano, Hiroshi

    2016-07-01

    This study investigated the feasibility of using the cannibalistic habits of the mite Neoseiulus californicus (McGregor) and controlling the relative humidity (RH) to prolong the survival time during the storage or shipment of this predatory mite. Three-day-old mated and unmated females were individually kept at 25 ± 1 °C in polypropylene vials (1.5 mL), each containing one of the following items or combinations of items: a kidney bean leaf disk (L), N. californicus eggs (E), and both a leaf disk and the eggs (LE). Because the leaf disk increased the RH in the vials, the RH was 95 ± 2 % under the L and LE treatments and 56 ± 6 % under the E treatment. The median lethal time (LT50) exceeded 50 days for the mated and unmated females under the LE treatment. However, it did not exceed 11 or 3 days for all females under the L or E treatments, respectively. Under the LE treatment, the mated and unmated females showed cannibalistic behavior and consumed an average of 5.2 and 4.6 eggs/female/10 days. Some of the females that survived for LT50 under each treatment were transferred and fed normally with a constant supply of Tetranychus urticae Koch. Unmated females were provided with adult males for 24 h for mating. Only females previously kept at LE treatment produced numbers of eggs equivalent to the control females (no treatment is applied). The results suggested that a supply of predator eggs and leaf material might have furnished nutrition and water vapor, respectively, and that this combination prolonged the survival time of N. californicus during storage. Moreover, this approach poses no risk of pest contamination in commercial products.

  12. Coevolution of Binaries and Circumbinary Gaseous Disks

    NASA Astrophysics Data System (ADS)

    Fleming, David; Quinn, Thomas R.

    2018-04-01

    The recent discoveries of circumbinary planets by Kepler raise questions for contemporary planet formation models. Understanding how these planets form requires characterizing their formation environment, the circumbinary protoplanetary disk, and how the disk and binary interact. The central binary excites resonances in the surrounding protoplanetary disk that drive evolution in both the binary orbital elements and in the disk. To probe how these interactions impact both binary eccentricity and disk structure evolution, we ran N-body smoothed particle hydrodynamics (SPH) simulations of gaseous protoplanetary disks surrounding binaries based on Kepler 38 for 10^4 binary orbital periods for several initial binary eccentricities. We find that nearly circular binaries weakly couple to the disk via a parametric instability and excite disk eccentricity growth. Eccentric binaries strongly couple to the disk, causing eccentricity growth for both the disk and binary. Disks around sufficiently eccentric binaries develop an m = 1 spiral wave launched from the 1:3 eccentric outer Lindblad resonance (EOLR), corresponding to an alignment of the gas particles' longitudes of periastron. We find that in all simulations the binary semi-major axis decays due to dissipation from the viscous disk.

  13. High fold computer disk storage DATABASE for fast extended analysis of γ-rays events

    NASA Astrophysics Data System (ADS)

    Stézowski, O.; Finck, Ch.; Prévost, D.

    1999-03-01

    Recently, spectacular technical developments have been achieved to increase the resolving power of large γ-ray spectrometers. With these new eyes, physicists are able to study the intricate nature of atomic nuclei. Concurrently, more and more complex multidimensional analyses are needed to investigate very weak phenomena. In this article, we first present a software package (DATABASE) that allows high-fold coincidence γ-ray events to be stored on hard disk. Then a non-conventional method of analysis, the anti-gating procedure, is described. Two physical examples are given to explain how it can be used, and Monte Carlo simulations have been performed to test the validity of the method.

  14. 47 CFR 14.45 - Motions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION GENERAL ACCESS TO ADVANCED COMMUNICATIONS SERVICES AND EQUIPMENT BY... a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this subpart... submitted both as a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this...

  15. 47 CFR 14.45 - Motions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION GENERAL ACCESS TO ADVANCED COMMUNICATIONS SERVICES AND EQUIPMENT BY... a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this subpart... submitted both as a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this...

  16. 47 CFR 14.45 - Motions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION GENERAL ACCESS TO ADVANCED COMMUNICATIONS SERVICES AND EQUIPMENT BY... a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this subpart... submitted both as a hard copy and on computer disk in accordance with the requirements of § 14.51(d) of this...

  17. Assessment of disk MHD generators for a base load powerplant

    NASA Technical Reports Server (NTRS)

    Chubb, D. L.; Retallick, F. D.; Lu, C. L.; Stella, M.; Teare, J. D.; Loubsky, W. J.; Louis, J. F.; Misra, B.

    1981-01-01

    Results from a study of the disk MHD generator are presented. Both open and closed cycle disk systems were investigated. Costing of the open cycle disk components (nozzle, channel, diffuser, radiant boiler, magnet and power management) was done. However, no detailed costing was done for the closed cycle systems. Preliminary plant design for the open cycle systems was also completed. Based on the system study results, an economic assessment of the open cycle systems is presented. Costs of the open cycle disk components are less than those of comparable linear generator components. Also, costs of electricity for the open cycle disk systems are competitive with comparable linear systems. Advantages of the disk design simplicity are considered. Improvements in channel availability or a reduction in the channel lifetime requirement are possible as a result of the disk design.

  18. Data storage and retrieval system abstract

    NASA Technical Reports Server (NTRS)

    Matheson, Barbara

    1992-01-01

    The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team led by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.

  19. Data storage and retrieval system abstract

    NASA Astrophysics Data System (ADS)

    Matheson, Barbara

    1992-09-01

    The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team led by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.

  20. Estimation of limit strains in disk-type flywheels made of a compliant elastomeric matrix composite undergoing radial creep

    NASA Astrophysics Data System (ADS)

    Portnov, G. G.; Bakis, Ch. E.

    2000-01-01

    Fiber reinforced elastomeric matrix composites (EMCs) offer several potential advantages for construction of rotors for flywheel energy storage systems. One potential advantage, for safety considerations, is the existence of maximum stresses near the outside radius of thick circumferentially wound EMC disks, which could lead to a desirable self-arresting failure mode at ultimate speeds. Certain unidirectionally reinforced EMCs, however, have been noted to creep readily under the influence of stress transverse to the fibers. In this paper, stress redistribution in a spinning thick disk made of a circumferentially filament wound EMC material on a small rigid hub has been analyzed with the assumption of total radial stress relaxation due to radial creep. It is shown that, following complete relaxation, the circumferential strains and stresses are maximized at the outside radius of the disk. Importantly, the radial tensile strains are three times greater than the circumferential strains at any given radius. Therefore, a unidirectional EMC material system that can safely endure transverse tensile creep strains of at least three times the elastic longitudinal strain capacity of the same material is likely to maintain the theoretically safe failure mode despite complete radial stress relaxation.

  1. Optical media standards for industry

    NASA Technical Reports Server (NTRS)

    Hallam, Kenneth J.

    1993-01-01

    Optical storage is a new and growing area of technology that can serve to meet some of the mass storage needs of the computer industry. Optical storage is characterized by information being stored and retrieved by means of diode lasers. When most people refer to optical storage, they mean rotating disk media, but there are one or two products that use lasers to read and write to tape. Optical media also usually means removable media. Because of its removability, there is a recognized need for standardization, both of the media and of the recording method. Industry standards can come about in one or more different ways. An industry supported body can sanction and publish a formal standard. A company may ship enough of a product that it so dominates an application or industry that it acquires 'standard' status without an official sanction. Such de facto standards are almost always copied by other companies with varying degrees of success. A governmental body can issue a rule or law that requires conformance to a standard. The standard may have been created by the government, or adopted from among many proposed by industry. These are often known as de jure standards. Standards are either open or proprietary. If approved by a government or sanctioning body, the standard is open. A de facto standard may be either open or proprietary. Optical media is too new to have de facto standards accepted by the marketplace yet. The proliferation of non-compatible media types in the last five years of optical market development has convinced many of the need for recognized media standards.

  2. Inner Structure in the TW Hya Circumstellar Disk

    NASA Astrophysics Data System (ADS)

    Akeson, Rachel L.; Millan-Gabet, R.; Ciardi, D.; Boden, A.; Sargent, A.; Monnier, J.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.

    2011-05-01

    TW Hya is a nearby (50 pc) young stellar object with an estimated age of 10 Myr and signs of active accretion. Previous modeling of the circumstellar disk has shown that the inner disk contains optically thin material, placing this object in the class of "transition disks". We present new near-infrared interferometric observations of the disk material and use these data, as well as previously published, spatially resolved data at 10 microns and 7 mm, to constrain disk models based on a standard flared disk structure. Our model demonstrates that the constraints imposed by the spatially resolved data can be met with a physically plausible disk but this requires a disk containing not only an inner gap in the optically thick disk as previously suggested, but also some optically thick material within this gap. Our model is consistent with the suggestion by previous authors of a planet with an orbital radius of a few AU. This work was conducted at the NASA Exoplanet Science Institute, California Institute of Technology.

  3. Development of superconducting magnetic bearing with superconducting coil and bulk superconductor for flywheel energy storage system

    NASA Astrophysics Data System (ADS)

    Arai, Y.; Seino, H.; Yoshizawa, K.; Nagashima, K.

    2013-11-01

    We have been developing superconducting magnetic bearing for flywheel energy storage system to be applied to the railway system. The bearing consists of a superconducting coil as a stator and bulk superconductors as a rotor. A flywheel disk connected to the bulk superconductors is suspended contactless by superconducting magnetic bearings (SMBs). We have manufactured a small scale device equipped with the SMB. The flywheel was rotated contactless over 2000 rpm which was a frequency between its rigid body mode and elastic mode. The feasibility of this SMB structure was demonstrated.

  4. Storage element performance optimization for CMS analysis jobs

    NASA Astrophysics Data System (ADS)

    Behrmann, G.; Dahlblom, J.; Guldmyr, J.; Happonen, K.; Lindén, T.

    2012-12-01

    Tier-2 computing sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data from the Large Hadron Collider (LHC) experiments that needs to be processed requires good and efficient use of the available resources. Achieving good CPU efficiency for the end users' analysis jobs requires that the performance of the storage system scales with the I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on the work to improve the SE performance at the Helsinki Institute of Physics (HIP) Tier-2 used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework, CMS uses the JobRobot, which sends 100 analysis jobs to each site every four hours. CMS also uses the HammerCloud tool for site monitoring and stress testing, and it has replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the effect of site configuration changes, since the analysis workflow is kept the same for all sites and for months at a time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50%, to more than 90%, by tuning the SE and by improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done at other CMS Tier sites, since on average the CPU efficiency for CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I/O performance is good.

  5. Encapsulation of alpha-amylase into starch-based biomaterials: an enzymatic approach to tailor their degradation rate.

    PubMed

    Azevedo, Helena S; Reis, Rui L

    2009-10-01

    This paper reports the effect of alpha-amylase encapsulation on the degradation rate of a starch-based biomaterial. The encapsulation method consisted of mixing a thermostable alpha-amylase with a blend of corn starch and polycaprolactone (SPCL), which was processed by compression moulding to produce circular disks. The presence of water was avoided to keep the water activity low and consequently to minimize the enzyme activity during the encapsulation process. No degradation of the starch matrix occurred during processing and storage (the encapsulated enzyme remained inactive due to the absence of water), since no significant amount of reducing sugars was detected in solution. After the encapsulation process, the enzyme activity released from the SPCL disks after 28 days was found to be 40% of that of the free (unprocessed) enzyme. Degradation studies on SPCL disks, with alpha-amylase encapsulated or free in solution, showed no significant differences in degradation behaviour between the two conditions. This indicates that the alpha-amylase enzyme was successfully encapsulated with almost full retention of its enzymatic activity, and that encapsulation of alpha-amylase clearly accelerates the degradation rate of the SPCL disks compared with enzyme-free disks. The results obtained in this work show that the degradation kinetics of the starch polymer can be controlled by the amount of alpha-amylase encapsulated in the matrix.

  6. Motivation and Design of the Sirocco Storage System Version 1.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew Leon; Ward, H. Lee; Danielson, Geoffrey Charles

    Sirocco is a massively parallel, high performance storage system for the exascale era. It emphasizes client-to-client coordination, low server-side coupling, and free data movement to improve resilience and performance. Its architecture is inspired by peer-to-peer and victim-cache architectures. By leveraging these ideas, Sirocco natively supports several media types, including RAM, flash, disk, and archival storage, with automatic migration between levels. Sirocco also includes storage interfaces and support that are more advanced than typical block storage. Sirocco enables clients to efficiently use key-value storage or block-based storage with the same interface. It also provides several levels of transactional data updates within a single storage command, including full ACID-compliant updates. This transaction support extends to updating several objects within a single transaction. Further support is provided for concurrency control, enabling greater performance for workloads while providing safe concurrent modification. By pioneering these and other technologies and techniques in the storage system, Sirocco is poised to fulfill a need for a massively scalable, write-optimized storage system for exascale systems. This is version 1.0 of a document reflecting the current and planned state of Sirocco. Further versions of this document will be accessible at http://www.cs.sandia.gov/Scalable_IO/sirocco.
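
    To illustrate the idea of serving key-value and block-style access through one interface, a capability the abstract attributes to Sirocco, here is a purely conceptual Python sketch; it is not Sirocco's API, and the class and method names are invented.

    ```python
    # Conceptual sketch only -- not Sirocco's API.  One put/get interface serves
    # both key-value records and fixed-size "blocks", where a block is simply
    # addressed by the key (object_id, block_index).
    class UnifiedStore:
        def __init__(self, block_size: int = 4096):
            self.block_size = block_size
            self._data = {}                    # stands in for the storage servers

        # key-value style access
        def put(self, key, value: bytes) -> None:
            self._data[key] = value

        def get(self, key) -> bytes:
            return self._data[key]

        # block style access expressed through the same primitives
        def write_block(self, obj_id: str, index: int, payload: bytes) -> None:
            assert len(payload) <= self.block_size
            self.put((obj_id, index), payload)

        def read_block(self, obj_id: str, index: int) -> bytes:
            return self.get((obj_id, index))

    store = UnifiedStore()
    store.put("config/owner", b"analysis-team")
    store.write_block("dataset-17", 0, b"\x00" * 4096)
    print(store.get("config/owner"), len(store.read_block("dataset-17", 0)))
    ```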

  7. Fixed-base flywheel storage systems for electric-utility applications: An assessment of economic viability and R and D priorities

    NASA Astrophysics Data System (ADS)

    Olszewski, M.; Steele, R. S.

    1983-02-01

    Electric utility side meter storage options were assessed for the daily 2 h peaking spike application. The storage options considered included compressed air, batteries, and flywheels. The potential role for flywheels in this application was assessed and research and development (R and D) priorities were established for fixed base flywheel systems. Results of the worth cost analysis indicate that where geologic conditions are favorable, compressed air energy storage (CAES) is a strong competitor against combustion turbines. Existing battery and flywheel systems rated about equal, both being, at best, marginally uncompetitive with turbines. Advanced batteries, if existing cost and performance goals are met, could be competitive with CAES. A three task R and D effort for flywheel development appears warranted. The first task, directed at reducing fabrication costs and increasing performance of a chopped fiber, F-glass, solid disk concept, could produce a competitive flywheel system.

  8. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
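
    The "adaptive scheduling of virtual machines on hypervisor hosts" mentioned above can be as simple as placing each new VM on the host with the most free capacity. The sketch below shows only that idea; the host names, the memory-based load metric and the function itself are hypothetical and are not the INFN Napoli management scripts.

    ```python
    # Hypothetical placement helper (not the INFN Napoli scripts): choose the
    # hypervisor with the most free memory that can still hold the new VM.
    def place_vm(hosts: dict, vm_mem_gb: float) -> str:
        """hosts maps hostname -> (total_mem_gb, used_mem_gb)."""
        free = {name: total - used for name, (total, used) in hosts.items()
                if total - used >= vm_mem_gb}
        if not free:
            raise RuntimeError("no hypervisor has enough free memory")
        return max(free, key=free.get)

    hypervisors = {"hv01": (256, 180), "hv02": (256, 96), "hv03": (128, 40)}
    print(place_vm(hypervisors, vm_mem_gb=32))   # -> "hv02" (160 GB free)
    ```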

  9. Rotary-To-Axial Motion Converter For Valve

    NASA Technical Reports Server (NTRS)

    Reinicke, Robert H.; Mohtar, Rafic

    1991-01-01

    Nearly frictionless mechanism converts rotary motion into axial motion. Designed for use in electronically variable pressure-regulator valve. Changes rotary motion imparted by motor into translation that opens and closes valve poppet. Cables spaced equidistantly around edge of fixed disk support movable disk. As movable disk rotated, cables twist, lifting it. When rotated in opposite direction, cables untwist, lowering it. Spider disk helps to prevent cables from tangling. Requires no lubrication and insensitive to contamination in fluid flowing through valve.

  10. Flake storage effects on properties of laboratory-made flakeboards

    Treesearch

    C. G. Carll

    1998-01-01

    Aspen (Populus grandidentata) and loblolly pine (Pinus taeda) flakes were prepared with tangential-grain and radial-grain faces on a laboratory disk flaker. These were gently dried in a steam-heated rotary drum dryer. Approximately 1 week after drying, surface wettability was measured on a large sample of flakes using an aqueous dye solution. Three replicate boards of...

  11. The Computer and Its Functions; How to Communicate with the Computer.

    ERIC Educational Resources Information Center

    Ward, Peggy M.

    A brief discussion of why it is important for students to be familiar with computers and their functions and a list of some practical applications introduce this two-part paper. Focusing on how the computer works, the first part explains the various components of the computer, different kinds of memory storage devices, disk operating systems, and…

  12. Characterization of the Temperature Capabilities of Advanced Disk Alloy ME3

    NASA Technical Reports Server (NTRS)

    Gabb, Timothy P.; Telesman, Jack; Kantzos, Peter T.; O'Connor, Kenneth

    2002-01-01

    The successful development of an advanced powder metallurgy disk alloy, ME3, was initiated in the NASA High Speed Research/Enabling Propulsion Materials (HSR/EPM) Compressor/Turbine Disk program in cooperation with General Electric Engine Company and Pratt & Whitney Aircraft Engines. This alloy was designed using statistical screening and optimization of composition and processing variables to have extended durability at 1200 F in large disks. Disks of this alloy were produced at the conclusion of the program using a realistic scaled-up disk shape and processing to enable demonstration of these properties. The objective of the Ultra-Efficient Engine Technologies disk program was to assess the mechanical properties of these ME3 disks as functions of temperature in order to estimate the maximum temperature capabilities of this advanced alloy. These disks were sectioned, machined into specimens, and extensively tested. Additional sub-scale disks and blanks were processed and selectively tested to explore the effects of several processing variations on mechanical properties. Results indicate the baseline ME3 alloy and process can produce 1300 to 1350 F temperature capabilities, dependent on detailed disk and engine design property requirements.

  13. Can Eccentric Debris Disks Be Long-lived? A First Numerical Investigation and Application to Zeta^2 Reticuli

    NASA Technical Reports Server (NTRS)

    Faramaz, V.; Beust, H.; Thebault, P.; Augereau, J.-C.; Bonsor, A.; delBurgo, C.; Ertel, S.; Marshall, J. P.; Milli, J.; Montesinos, B.

    2014-01-01

    Context. Imaging of debris disks has found evidence for both eccentric and offset disks. One hypothesis is that they provide evidence for massive perturbers, for example, planets or binary companions, which sculpt the observed structures. One such disk was recently observed in the far-IR by the Herschel Space Observatory around Zeta2 Reticuli. In contrast with previously reported systems, the disk is significantly eccentric, and the system is several Gyr old. Aims. We aim to investigate the long-term evolution of eccentric structures in debris disks caused by a perturber on an eccentric orbit around the star. We hypothesise that the observed eccentric disk around Zeta2 Reticuli might be evidence of such a scenario. If so, we are able to constrain the mass and orbit of a potential perturber, either a giant planet or a binary companion. Methods. Analytical techniques were used to predict the effects of a perturber on a debris disk. Numerical N-body simulations were used to verify these results and further investigate the observable structures that may be produced by eccentric perturbers. The long-term evolution of the disk geometry was examined, with particular application to the Zeta2 Reticuli system. In addition, synthetic images of the disk were produced for direct comparison with Herschel observations. Results. We show that an eccentric companion can produce both the observed offsets and eccentric disks. These effects are not immediate, and we characterise the timescale required for the disk to develop to an eccentric state (and any spirals to vanish). For Zeta2 Reticuli, we derive limits on the mass and orbit of the companion required to produce the observations. Synthetic images show that the pattern observed around Zeta2 Reticuli can be produced by an eccentric disk seen close to edge-on, and allow us to bring additional constraints on the disk parameters of our model (disk flux and extent). Conclusions. We conclude that eccentric planets or stellar companions can induce long-lived eccentric structures in debris disks. Observations of such eccentric structures thus provide potential evidence of the presence of such a companion in a planetary system. We considered the specific example of Zeta2 Reticuli, whose observed eccentric disk can be explained by a distant companion (at tens of AU) on an eccentric orbit (ep greater than approx. 0.3).

  14. Composite polymer: Glass edge cladding for laser disks

    DOEpatents

    Powell, H.T.; Wolfe, C.A.; Campbell, J.H.; Murray, J.E.; Riley, M.O.; Lyon, R.E.; Jessop, E.S.

    1987-11-02

    Large neodymium glass laser disks for disk amplifiers such as those used in the Nova laser require an edge cladding which absorbs at 1 micrometer. This cladding prevents edge reflections from causing parasitic oscillations which would otherwise deplete the gain. Nova now utilizes volume-absorbing monolithic-glass claddings which are fused at high temperature to the disks. These perform quite well but are expensive to produce. Absorbing glass strips are adhesively bonded to the edges of polygonal disks using a bonding agent whose index of refraction matches that of both the laser and absorbing glass. Optical finishing occurs after the strips are attached. Laser disks constructed with such claddings have shown identical gain performance to the previous Nova disks and have been tested for hundreds of shots without significant degradation. 18 figs.

  15. Composite polymer-glass edge cladding for laser disks

    DOEpatents

    Powell, Howard T.; Riley, Michael O.; Wolfe, Charles R.; Lyon, Richard E.; Campbell, John H.; Jessop, Edward S.; Murray, James E.

    1989-01-01

    Large neodymium glass laser disks for disk amplifiers such as those used in the Nova laser require an edge cladding which absorbs at 1 micrometer. This cladding prevents edge reflections from causing parasitic oscillations which would otherwise deplete the gain. Nova now utilizes volume-absorbing monolithic-glass claddings which are fused at high temperature to the disks. These perform quite well but are expensive to produce. Absorbing glass strips are adhesively bonded to the edges of polygonal disks using a bonding agent whose index of refraction matches that of both the laser and absorbing glass. Optical finishing occurs after the strips are attached. Laser disks constructed with such claddings have shown identical gain performance to the previous Nova disks and have been tested for hundreds of shots without significant degradation.

  16. Formation of Sharp Eccentric Rings in Debris Disks with Gas but Without Planets

    NASA Technical Reports Server (NTRS)

    Lyra, W.; Kuchner, M.

    2013-01-01

    'Debris disks' around young stars (analogues of the Kuiper Belt in our Solar System) show a variety of non-trivial structures attributed to planetary perturbations and used to constrain the properties of those planets. However, these analyses have largely ignored the fact that some debris disks are found to contain small quantities of gas, a component that all such disks should contain at some level. Several debris disks have been measured with a dust-to-gas ratio of about unity, at which the effect of hydrodynamics on the structure of the disk cannot be ignored. Here we report linear and nonlinear modelling that shows that dust-gas interactions can produce some of the key patterns attributed to planets. We find a robust clumping instability that organizes the dust into narrow, eccentric rings, similar to the Fomalhaut debris disk. The conclusion that such disks might contain planets is not necessarily required to explain these systems.

  17. Security of patient data when decommissioning ultrasound systems

    PubMed Central

    2017-01-01

    Background Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. Methods In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system’s own delete function; full reinstallation of system software; the manufacturer’s own disk wiping service; and open source disk wiping software used for full-disk erasure and for blank-space-only erasure. Attempts were then made to recover data using open source recovery tools. Results All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system’s own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer’s wiping service or the open source wiping software. Conclusion The typical method of reinstalling an ultrasound system’s software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system’s hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state. PMID:28228821

  18. Eurogrid: a new glideinWMS based portal for CDF data analysis

    NASA Astrophysics Data System (ADS)

    Amerio, S.; Benjamin, D.; Dost, J.; Compostella, G.; Lucchesi, D.; Sfiligoi, I.

    2012-12-01

    The CDF experiment at Fermilab ended its Run-II phase on September 2011 after 11 years of operations and 10 fb-1 of collected data. CDF computing model is based on a Central Analysis Farm (CAF) consisting of local computing and storage resources, supported by OSG and LCG resources accessed through dedicated portals. At the beginning of 2011 a new portal, Eurogrid, has been developed to effectively exploit computing and disk resources in Europe: a dedicated farm and storage area at the TIER-1 CNAF computing center in Italy, and additional LCG computing resources at different TIER-2 sites in Italy, Spain, Germany and France, are accessed through a common interface. The goal of this project is to develop a portal easy to integrate in the existing CDF computing model, completely transparent to the user and requiring a minimum amount of maintenance support by the CDF collaboration. In this paper we will review the implementation of this new portal, and its performance in the first months of usage. Eurogrid is based on the glideinWMS software, a glidein based Workload Management System (WMS) that works on top of Condor. As CDF CAF is based on Condor, the choice of the glideinWMS software was natural and the implementation seamless. Thanks to the pilot jobs, user-specific requirements and site resources are matched in a very efficient way, completely transparent to the users. Official since June 2011, Eurogrid effectively complements and supports CDF computing resources offering an optimal solution for the future in terms of required manpower for administration, support and development.

  19. Development of a Remodeled Caspar Retractor and Its Application in the Measurement of Distractive Resistance in an In Vitro Anterior Cervical Distraction Model.

    PubMed

    Wen, Junxiang; Xu, Jianwei; Li, Lijun; Yang, Mingjie; Pan, Jie; Chen, Deyu; Jia, Lianshun; Tan, Jun

    2017-06-01

    An in vitro biomechanical study of cervical intervertebral distraction using a remodeled Caspar retractor. To investigate the torques required for distraction to different heights in an in vitro C3-C4 anterior cervical distraction model using a remodeled Caspar retractor, focusing on the influence of the intervertebral disk, posterior longitudinal ligament (PLL), and ligamentum flavum (LF). No previous studies have reported on the torques required for distraction to various heights or the factors resisting distraction in anterior cervical discectomy and fusion. Anterior cervical distraction at C3-C4 was performed in 6 cadaveric specimens using a remodeled Caspar retractor, under 4 conditions: A, before disk removal; B, after disk removal; C, after disk and PLL removal; and D, after disk and PLL removal and cutting of the LF. Distraction was performed for 5 teeth, and the distractive torque at each tooth was recorded. The torque increased with distraction height under all conditions. There was a sudden increase in torque at the fourth tooth under conditions B and C, but not D. Under condition A, distraction to the third tooth required 84.8±13.3 cN m. Under conditions B and C, distraction to the third tooth required <13 cN m, and further distraction required dramatically increased torque. Under condition D, no marked increase in torque was recorded. Distraction of the intervertebral space was much easier after disk removal. An intact LF caused a sudden marked increase in the force required for distraction, possibly indicating the point at which the LF was fully stretched. This increase in resistance may help to determine the optimal distraction height to avoid excessive stress to the endplate spacer. The remodeled Caspar retractor in the present study may provide a feasible and convenient method for intraoperative measurement of distractive resistance.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nesvold, Erika R.; Naoz, Smadar; Vican, Laura

    The first indication of the presence of a circumstellar debris disk is usually the detection of excess infrared emission from the population of small dust grains orbiting the star. This dust is short-lived, requiring continual replenishment, and indicating that the disk must be excited by an unseen perturber. Previous theoretical studies have demonstrated that an eccentric planet orbiting interior to the disk will stir the larger bodies in the belt and produce dust via interparticle collisions. However, motivated by recent observations, we explore another possible mechanism for heating a debris disk: a stellar-mass perturber orbiting exterior to and inclined to the disk and exciting the disk particles’ eccentricities and inclinations via the Kozai–Lidov mechanism. We explore the consequences of an exterior perturber on the evolution of a debris disk using secular analysis and collisional N-body simulations. We demonstrate that a Kozai–Lidov excited disk can generate a dust disk via collisions and we compare the results of the Kozai–Lidov excited disk with a simulated disk perturbed by an interior eccentric planet. Finally, we propose two observational tests of a dust disk that can distinguish whether the dust was produced by an exterior brown dwarf or stellar companion or an interior eccentric planet.
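
    For orientation, the standard test-particle, quadrupole-order Kozai–Lidov result (a textbook relation, not a result derived in this work) is that the component of a disk particle's orbital angular momentum along the perturber's orbit normal is approximately conserved,

        \[
        \sqrt{1 - e^{2}}\,\cos i \simeq \mathrm{const},
        \]

    so eccentricity and mutual inclination oscillate in antiphase, with large-amplitude oscillations possible only for initial mutual inclinations roughly between 39° and 141°.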

  1. International Ultraviolet Explorer Final Archive

    NASA Technical Reports Server (NTRS)

    1997-01-01

    CSC processed IUE images through the Final Archive Data Processing System. Raw images were obtained from both NDADS and the IUEGTC optical disk platters for processing on the Alpha cluster, and from the IUEGTC optical disk platters for DECstation processing. Input parameters were obtained from the IUE database. Backup tapes of data to send to VILSPA were routinely made on the Alpha cluster. IPC handled more than 263 requests for priority NEWSIPS processing during the contract. Staff members also answered various questions and requests for information and sent copies of IUE documents to requesters. CSC implemented new processing capabilities into the NEWSIPS processing systems as they became available. In addition, steps were taken to improve efficiency and throughput whenever possible. The node TORTE was reconfigured as the I/O server for Alpha processing in May. The number of Alpha nodes used for the NEWSIPS processing queue was increased to a maximum of six in measured fashion in order to understand the dependence of throughput on the number of nodes and to be able to recognize when a point of diminishing returns was reached. With Project approval, generation of the VD FITS files was dropped in July. This action not only saved processing time but, even more significantly, also reduced the archive storage media requirements, and the time required to perform the archiving, drastically. The throughput of images verified through CDIVS and processed through NEWSIPS for the contract period is summarized below. The number of images of a given dispersion type and camera that were processed in any given month reflects several factors, including the availability of the required NEWSIPS software system, the availability of the corresponding required calibrations (e.g., the LWR high-dispersion ripple correction and absolute calibration), and the occurrence of reprocessing efforts such as that conducted to incorporate the updated SWP sensitivity-degradation correction in May.

  2. Disk Diffusion Testing Using Candida sp. Colonies Taken Directly from CHROMagar Candida Medium May Decrease Time Required To Obtain Results

    PubMed Central

    Klevay, Michael; Ebinger, Alex; Diekema, Daniel; Messer, Shawn; Hollis, Richard; Pfaller, Michael

    2005-01-01

    We compared results of disk diffusion antifungal susceptibility testing from Candida sp. strains passaged on CHROMagar and on potato dextrose agar. The overall categorical agreements for fluconazole and voriconazole disk testing were 95% and 98% with 0% and 0.5% very major errors, respectively. Disk diffusion testing by the CLSI (formerly NCCLS) M44-A method can be performed accurately by taking inocula directly from CHROMagar. PMID:16000489

  3. Evaluation of the effectiveness of different brands' disks in antimicrobial disk susceptibility tests.

    PubMed

    Lam, C P; Tsai, W C

    1989-08-01

    A total of 813 routine isolates of aerobic and facultatively anaerobic bacteria were employed to determine the efficacy of different branded (Oxoid, Difco, BBL) antimicrobial disks, using disk antimicrobial susceptibility tests, for a total of 22 kinds of antimicrobial disks and 10,740 antibiotic-organism comparisons. Major positive and major negative discrepancies in results were defined as a change from "susceptible" to "both resistant", and a change from "resistant" to "both susceptible" according to the National Committee for Clinical Laboratory Standards' interpretive standards for zone diameters. Minor positive and minor negative discrepancies were defined as a change from "susceptible" to "both intermediate", or "intermediate" to "both resistant"; and a change from "resistant" to "both intermediate", or "intermediate" to "both susceptible". The overall agreements of Oxoid, Difco, and BBL systems were 98%, 98.7%, and 98.4% respectively, and their differences are not statistically significant. Different kinds of antimicrobial disks' representative patterns of these three brands are further analyzed: (A) In the Oxoid series, there were 220 discrepancies. Minor negative discrepancy is predominant, most frequently related to carbenicillin (25), gentamicin (13) and cephalothin (10). Besides minor negative discrepancy, carbenicillin also had six minor positive discrepancies. Tetracyclin had ten minor positive discrepancies. (B) In the Difco series, there were 137 discrepancies. The majority of them are minor positive discrepancies. Moxalactam (11) and cefotaxime (10) are the most common antibiotics involved. (C) In the BBL series, there were 170 discrepancies. Minor positive discrepancy was the predominant one, which mostly related to carbenicillin (24), amikacin (13), and ceftizoxime (12). In addition, tetracyclin had 24 times minor negative discrepancies. Laboratory workers must pay attention to these different patterns of representation. In order to evaluate the quality of 11 pairs of the give-away and the purchased BBL disks, we also compared the results for these 813 routine isolates (a total of 5,482 antibiotic-organism comparisons). The giveaway disks demonstrated 99.1% overall agreement with the purchased disks. There were 48 minor discrepancies [26 (0.47%) minor positive discrepancies and 22 (0.4%) minor negative discrepancies]. These results allow this study to emphasize the followings in order to raise the awareness of the laboratory workers: (i) alteration of disk efficacy during transportation and storage; (ii) major considerations in choosing different brands' antimicrobial disks, and (iii) the important roles played by salespersons and pharmaceutical companies in achieving sound results.
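
    The overall agreement figures quoted above follow directly from the discrepancy counts and the 10,740 antibiotic-organism comparisons:

        \[
        \mathrm{Oxoid:}\ \frac{10{,}740-220}{10{,}740} \approx 98.0\%,\qquad
        \mathrm{Difco:}\ \frac{10{,}740-137}{10{,}740} \approx 98.7\%,\qquad
        \mathrm{BBL:}\ \frac{10{,}740-170}{10{,}740} \approx 98.4\%.
        \]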

  4. Sharp Eccentric Rings in Planetless Hydrodynamical Models of Debris Disks

    NASA Technical Reports Server (NTRS)

    Lyra, W.; Kuchner, M. J.

    2013-01-01

    Exoplanets are often associated with disks of dust and debris, analogs of the Kuiper Belt in our solar system. These "debris disks" show a variety of non-trivial structures attributed to planetary perturbations and utilized to constrain the properties of the planets. However, analyses of these systems have largely ignored the fact that, increasingly, debris disks are found to contain small quantities of gas, a component all debris disks should contain at some level. Several debris disks have been measured with a dust-to-gas ratio around unity where the effect of hydrodynamics on the structure of the disk cannot be ignored. Here we report that dust-gas interactions can produce some of the key patterns seen in debris disks that were previously attributed to planets. Through linear and nonlinear modeling of the hydrodynamical problem, we find that a robust clumping instability exists in this configuration, organizing the dust into narrow, eccentric rings, similar to the Fomalhaut debris disk. The hypothesis that these disks might contain planets, though thrilling, is not necessarily required to explain these systems.

  5. Embedded optical interconnect technology in data storage systems

    NASA Astrophysics Data System (ADS)

    Pitwon, Richard C. A.; Hopkins, Ken; Milward, Dave; Muggeridge, Malcolm

    2010-05-01

    As both data storage interconnect speeds increase and form factors in hard disk drive technologies continue to shrink, the density of printed channels on the storage array midplane goes up. The dominant interconnect protocol on storage array midplanes is expected to increase to 12 Gb/s by 2012 thereby exacerbating the performance bottleneck in future digital data storage systems. The design challenges inherent to modern data storage systems are discussed and an embedded optical infrastructure proposed to mitigate this bottleneck. The proposed solution is based on the deployment of an electro-optical printed circuit board and active interconnect technology. The connection architecture adopted would allow for electronic line cards with active optical edge connectors to be plugged into and unplugged from a passive electro-optical midplane with embedded polymeric waveguides. A demonstration platform has been developed to assess the viability of embedded electro-optical midplane technology in dense data storage systems and successfully demonstrated at 10.3 Gb/s. Active connectors incorporate optical transceiver interfaces operating at 850 nm and are connected in an in-plane coupling configuration to the embedded waveguides in the midplane. In addition a novel method of passively aligning and assembling passive optical devices to embedded polymer waveguide arrays has also been demonstrated.

  6. Study on compensation algorithm of head skew in hard disk drives

    NASA Astrophysics Data System (ADS)

    Xiao, Yong; Ge, Xiaoyu; Sun, Jingna; Wang, Xiaoyan

    2011-10-01

    In hard disk drives (HDDs), head skew among multiple heads is pre-calibrated during the manufacturing process. In real high-capacity applications, the head stack may tilt due to environmental change, resulting in additional head-skew errors from the outer diameter (OD) to the inner diameter (ID). When these errors are below the preset threshold for power-on recalibration, the current strategy may not detect them, and drive performance in severe environments will be degraded. In this paper, in-the-field compensation of small DC head-skew variation across the stroke is proposed, using a zone table. Test results are provided demonstrating its effectiveness in reducing observer error and enhancing drive performance via accurate prediction of DC head skew.
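
    A minimal sketch of how such a zone table could be applied is shown below: calibrated DC head-skew offsets are stored at a few radial positions and interpolated across the stroke from OD to ID. The zone boundaries, offset values, and linear interpolation are illustrative assumptions, not the compensation algorithm of the paper.

        # Hypothetical zone table: DC head-skew corrections (in servo counts)
        # at normalized radial positions between OD (0.0) and ID (1.0).
        ZONE_TABLE = {1: [(0.0, 0.0), (0.25, 1.5), (0.5, 2.8), (0.75, 3.9), (1.0, 5.0)]}

        def dc_skew_correction(head, radial_pos):
            """Linearly interpolate the DC head-skew correction at a radius."""
            table = ZONE_TABLE[head]
            if radial_pos <= table[0][0]:
                return table[0][1]
            for (r0, s0), (r1, s1) in zip(table, table[1:]):
                if r0 <= radial_pos <= r1:
                    return s0 + (radial_pos - r0) / (r1 - r0) * (s1 - s0)
            return table[-1][1]

        print(dc_skew_correction(1, 0.6))   # correction applied to the servo target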

  7. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    NASA Astrophysics Data System (ADS)

    Bauerdick, L. A. T.; Bloom, K.; Bockelman, B.; Bradley, D. C.; Dasu, S.; Dost, J. M.; Sfiligoi, I.; Tadel, A.; Tadel, M.; Wuerthwein, F.; Yagil, A.; Cms Collaboration

    2014-06-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
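
    The contrast between the two implementations can be sketched schematically; the Python code below illustrates whole-file prefetch versus on-demand block fetching, not XRootd's proxy code, and the block size, file name, and stand-in fetch function are assumptions.

        # Schematic contrast of the two caching behaviours: prefetch the whole
        # file at open time, or fetch only the blocks a client actually reads.
        BLOCK = 4 * 1024 * 1024   # 4 MiB blocks (illustrative)

        def fetch_remote(path, offset, length):
            # Stand-in for the real federation read; pretend the file is zeros.
            return b"\x00" * length

        class CachingProxy:
            def __init__(self, mode="on-demand"):
                self.mode = mode
                self.cache = {}                      # (path, block index) -> bytes

            def open(self, path, remote_size):
                if self.mode == "prefetch":
                    # First implementation: start pulling the whole file on open.
                    for idx in range((remote_size + BLOCK - 1) // BLOCK):
                        self._ensure(path, idx)

            def read(self, path, offset, length):
                first, last = offset // BLOCK, (offset + length - 1) // BLOCK
                for idx in range(first, last + 1):
                    self._ensure(path, idx)          # second implementation: on demand
                data = b"".join(self.cache[(path, i)] for i in range(first, last + 1))
                start = offset - first * BLOCK
                return data[start:start + length]

            def _ensure(self, path, idx):
                if (path, idx) not in self.cache:
                    self.cache[(path, idx)] = fetch_remote(path, idx * BLOCK, BLOCK)

        proxy = CachingProxy(mode="on-demand")
        proxy.open("/store/file.root", remote_size=10 * BLOCK)
        print(len(proxy.read("/store/file.root", 5 * BLOCK + 100, 1024)))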

  8. Effects of higher order aberrations on beam shape in an optical recording system

    NASA Technical Reports Server (NTRS)

    Wang, Mark S.; Milster, Tom D.

    1992-01-01

    An unexpected irradiance pattern in the detector plane of an optical data storage system was observed. Through wavefront measurement and scalar diffraction modeling, it was discovered that the energy redistribution is due to residual third-order and fifth-order spherical aberration of the objective lens and cover-plate assembly. The amount of residual aberration is small, and the beam focused on the disk would be considered diffraction limited by several criteria. Since the detector is not in the focal plane, even this small amount of aberration has a significant effect on the energy distribution. We show that the energy redistribution can adversely affect focus error signals, which are responsible for maintaining sub-micron spot diameters on the spinning disk.
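
    For reference, the residual wavefront error discussed here is commonly written as a power series in the normalized pupil radius ρ, with a defocus term included because the detector sits away from the focal plane (standard aberration notation, not coefficients from the measurement):

        \[
        W(\rho) = W_{020}\,\rho^{2} + W_{040}\,\rho^{4} + W_{060}\,\rho^{6},
        \]

    where W_{040} and W_{060} are the third- and fifth-order spherical aberration terms and W_{020} is the defocus at the detector plane.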

  9. Investigations of Air-cooled Turbine Rotors for Turbojet Engines II : Mechanical Design, Stress Analysis, and Burst Test of Modified J33 Split-disk Rotor / Richard H. Kemp and Merland L. Moseson

    NASA Technical Reports Server (NTRS)

    Kemp, Richard H; Moseson, Merland L

    1952-01-01

    A full-scale J33 air-cooled split turbine rotor was designed and spin-pit tested to destruction. Stress analysis and spin-pit results were obtained; operation of the rotor in a J33 turbojet engine, however, showed that the rear disk of the rotor operated at temperatures substantially higher than the forward disk. An extension of the stress analysis to include the temperature difference between the two disks indicated that engine modifications are required to permit operation of the two disks at more nearly the same temperature level.

  10. The broad applicability of the disk laser principle: from CW to ps

    NASA Astrophysics Data System (ADS)

    Killi, Alexander; Stolzenburg, Christian; Zawischa, Ivo; Sutter, Dirk; Kleinbauer, Jochen; Schad, Sven; Brockmann, Rüdiger; Weiler, Sascha; Neuhaus, Jörg; Kalfhues, Steffen; Mehner, Eva; Bauer, Dominik; Schlueter, Holger; Schmitz, Christian

    2009-02-01

    The quasi two-dimensional geometry of the disk laser results in conceptual advantages over other geometries. Fundamentally, the thin disk laser allows true power scaling by increasing the pump spot diameter on the disk while keeping the power density constant. This scaling procedure keeps optical peak intensity, temperature, stress profile, and optical path differences in the disk nearly unchanged. The required pump beam brightness - a main cost driver of DPSSL systems - also remains constant. We review these fundamental concepts and present results across the wide range of multi-kW-class CW sources, high-power Q-switched sources and ultrashort-pulse sources.
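
    The scaling argument can be written down directly; at fixed pump power density I the extractable power grows with the pumped area, a back-of-the-envelope relation implied by the text rather than a formula quoted from it:

        \[
        P \propto I \cdot A_{\mathrm{pump}} = I\,\frac{\pi d_{\mathrm{pump}}^{2}}{4},
        \]

    so doubling the pump spot diameter at constant power density quadruples the output power while peak intensity, temperature, and stress profiles in the disk remain essentially unchanged.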

  11. Towards a Global Evolutionary Model of Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Bai, Xue-Ning

    2016-04-01

    A global picture of the evolution of protoplanetary disks (PPDs) is key to understanding almost every aspect of planet formation, where standard α-disk models have been continually employed for their simplicity. In the meantime, disk mass loss has been conventionally attributed to photoevaporation, which controls disk dispersal. However, a paradigm shift toward accretion driven by magnetized disk winds has taken place in recent years, thanks to studies of non-ideal magnetohydrodynamic effects in PPDs. I present a framework of global PPD evolution aiming to incorporate these advances, highlighting the role of wind-driven accretion and wind mass loss. Disk evolution is found to be largely dominated by wind-driven processes, and viscous spreading is suppressed. The timescale of disk evolution is controlled primarily by the amount of external magnetic flux threading the disks, and how rapidly the disk loses the flux. Rapid disk dispersal can be achieved if the disk is able to hold most of its magnetic flux during the evolution. In addition, because wind launching requires a sufficient level of ionization at the disk surface (mainly via external far-UV (FUV) radiation), wind kinematics is also affected by the FUV penetration depth and disk geometry. For a typical disk lifetime of a few million years, the disk loses approximately the same amount of mass through the wind as through accretion onto the protostar, and most of the wind mass loss proceeds from the outer disk via a slow wind. Fractional wind mass loss increases with increasing disk lifetime. Significant wind mass loss likely substantially enhances the dust-to-gas mass ratio and promotes planet formation.

  12. 77 FR 6669 - Airworthiness Directives; Honeywell International Inc. TPE331-10 and TPE331-11 Series Turboprop...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-09

    ... failure of a first stage turbine disk that had a metallurgical defect. This AD requires inspecting certain...-1. We are issuing this AD to prevent uncontained failure of the first stage turbine disk and damage... failure of a first stage turbine disk that had a metallurgical defect. We are issuing this AD to prevent...

  13. OT1_ipascucc_1: Understanding the Origin of Transition Disks via Disk Mass Measurements

    NASA Astrophysics Data System (ADS)

    Pascucci, I.

    2010-07-01

    Transition disks are a distinguished group of few Myr-old systems caught in the phase of dispersing their inner dust disk. Three different processes have been proposed to explain this inside-out clearing: grain growth, photoevaporation driven by the central star, and dynamical clearing by a forming giant planet. Which of these processes lead to a transition disk? Distinguishing between them requires the combined knowledge of stellar accretion rates and disk masses. We propose here to use 43.8 hours of PACS spectroscopy to detect the [OI] 63 micron emission line from a sample of 21 well-known transition disks with measured mass accretion rates. We will use this line, in combination with ancillary CO millimeter lines, to measure their gas disk mass. Because gas dominates the mass of protoplanetary disks our approach and choice of lines will enable us to trace the bulk of the disk mass that resides beyond tens of AU from young stars. Our program will quadruple the number of transition disks currently observed with Herschel in this setting and for which disk masses can be measured. We will then place the transition and the ~100 classical/non-transition disks of similar age (from the Herschel KP "Gas in Protoplanetary Systems") in the mass accretion rate-disk mass diagram with two main goals: 1) reveal which gaps have been created by grain growth, photoevaporation, or giant planet formation and 2) from the statistics, determine the main disk dispersal mechanism leading to a transition disk.

  14. State-of-the-Art Applicability of Conventional Densification Techniques to Increase Disposal Area Storage Capacity

    DTIC Science & Technology

    1977-04-01

    Dipper dredge (silt; approaches dry density in coarser material); clamshell or orange-peel bucket dredge; endless-chain bucket dredge; cutterhead ... stage the big disk wheel is pulled by a tractor, making ditches about 0.5 to 0.6 m deep and 10 m apart. When the first layer of mud has been

  15. 78 FR 79481 - Summary of Commission Practice Relating to Administrative Protective Orders

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-30

    ... breach of the Commission's APOs. APO breach inquiries are considered on a case-by- case basis. As part of... suitable container (N.B.: storage of BPI on so-called hard disk computer media is to be avoided, because mere erasure of data from such media may not irrecoverably destroy the BPI and may result in violation...

  16. Computer Modeling of Thin Film Growth.

    DTIC Science & Technology

    1984-12-01

    buried and inactive, the data can be transferred to disk storage, thereby conserving internal memory. As for the accuracy of the results, two...and mobility of the incident particles.

  17. Effect of water storage and surface treatments on the tensile bond strength of IPS Empress 2 ceramic.

    PubMed

    Salvio, Luciana A; Correr-Sobrinho, Lourenço; Consani, Simonides; Sinhoreti, Mário A C; de Goes, Mario F; Knowles, Jonathan C

    2007-01-01

    The aim of this study was to evaluate the effect of water storage (24 hours and 1 year) on the tensile bond strength between the IPS Empress 2 ceramic and Variolink II resin cement under different surface treatments. One hundred and eighty disks with diameters of 5.3 mm at the top and 7.0 mm at the bottom, and a thickness of 2.5 mm, were made, embedded in resin, and randomly divided into six groups: Groups 1 and 4 = 10% hydrofluoric acid for 20 seconds; Groups 2 and 5 = sandblasting for 5 seconds with 50 μm aluminum oxide; and Groups 3 and 6 = sandblasting for 5 seconds with 100 μm aluminum oxide. Silane was applied on the treated ceramic surfaces, and the disks were bonded into pairs with adhesive resin cement. The samples of Groups 1 to 3 were stored in distilled water at 37 °C for 24 hours, and Groups 4 to 6 were stored for 1 year. The samples were subjected to a tensile strength test in an Instron universal testing machine at a crosshead speed of 1.0 mm/min, until failure. The data were submitted to analysis of variance and Tukey's test (5%). The mean tensile bond strengths of Groups 1, 2, and 3 (15.54 ± 4.53, 10.60 ± 3.32, and 7.87 ± 2.26 MPa) for the 24-hour storage time were significantly higher than those observed for the 1-year storage (Groups 4, 5, and 6: 10.10 ± 3.17, 6.34 ± 1.06, and 2.60 ± 0.41 MPa). The surface treatments with 10% hydrofluoric acid (15.54 ± 4.53 and 10.10 ± 3.17 MPa) showed statistically higher tensile bond strengths compared with sandblasting with 50 μm (10.60 ± 3.32 and 6.34 ± 1.06 MPa) and 100 μm (7.87 ± 2.26 and 2.60 ± 0.41 MPa) aluminum oxide for the storage times of 24 hours and 1 year. Storage time significantly decreased the tensile bond strength for both ceramic surface treatments. The application of 10% hydrofluoric acid resulted in stronger tensile bond strength values than those achieved with aluminum oxide.

  18. The inner-disk and stellar properties of the young stellar object WL 16

    NASA Technical Reports Server (NTRS)

    Carr, John S.; Tokunaga, Alan T.; Najita, Joan; Shu, Frank H.; Glassgold, Alfred E.

    1993-01-01

    We present kinematic evidence for a rapidly rotating circumstellar disk around the young stellar object WL 16, based on new high-velocity-resolution data of the v = 2-0 CO bandhead emission. A Keplerian disk provides an excellent fit to the observed profile and requires a projected velocity for the CO-emitting region of roughly 250 km/s at the inner radius and 140 km/s at the outer radius, giving a ratio of the inner to the outer radius of about 0.3. We show that satisfying the constraints imposed by the gas kinematics, the observed CO flux, and the total source luminosity requires the mass of WL 16 to lie between 1.4 and 2.5 solar mass. The inner disk radius for the CO emission must be less than 8 solar radii.
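
    The quoted radius ratio follows directly from Keplerian rotation, for which the orbital speed scales as r^(-1/2):

        \[
        v(r) = \sqrt{\frac{G M_{*}}{r}}
        \;\;\Longrightarrow\;\;
        \frac{r_{\mathrm{in}}}{r_{\mathrm{out}}}
        = \left(\frac{v_{\mathrm{out}}}{v_{\mathrm{in}}}\right)^{2}
        = \left(\frac{140}{250}\right)^{2} \approx 0.3 .
        \]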

  19. Mark 6: A Next-Generation VLBI Data System

    NASA Astrophysics Data System (ADS)

    Whitney, A. R.; Lapsley, D. E.; Taveniku, M.

    2011-07-01

    A new real-time high-data-rate disk-array system based on entirely commercial-off-the-shelf hardware components is being evaluated for possible use as a next-generation VLBI data system. The system, developed by XCube Communications of Nashua, NH, USA, was originally designed for the automotive industry for testing and evaluation of autonomous driving systems that require continuous capture of an array of video cameras and automotive sensors at ~8 Gbps from multiple 10GigE data links and other data sources. In order to sustain the required recording data rate, the system is designed to account for slow and/or failed disks by shifting the load to other disks as necessary in order to maintain the target data rate. The system is based on a Linux OS with some modifications to memory management and drivers in order to guarantee the timely movement of data, and the hardware/software combination is highly tuned to achieve the target data rate; data are stored in standard Linux files. A kit is also being designed that will allow existing Mark 5 disk modules to be modified to be used with the XCube system (though PATA disks will need to be replaced by SATA disks). Demonstrations of the system at Haystack Observatory and NRAO Socorro have proved very encouraging; some modest software upgrades/revisions are being made by XCube in order to meet VLBI-specific requirements. The system is easily expandable, with sustained 16 Gbps likely to be supported before the end of CY2011.
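
    To put the target rates in perspective (the per-disk rate below is an assumed round number, not a figure from the system specification), 16 Gbps of sustained recording corresponds to 2 GB/s of writes, so

        \[
        N_{\mathrm{disks}} \;\gtrsim\; \frac{2\,000\ \mathrm{MB/s}}{100\ \mathrm{MB/s\ per\ disk}} \;=\; 20
        \]

    disks would be needed even before allowing margin for the slow or failed disks that the load-shifting scheme is designed to absorb.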

  20. SANs and Large Scale Data Migration at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2004-01-01

    Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.

  1. Can eccentric debris disks be long-lived?. A first numerical investigation and application to ζ2 Reticuli

    NASA Astrophysics Data System (ADS)

    Faramaz, V.; Beust, H.; Thébault, P.; Augereau, J.-C.; Bonsor, A.; del Burgo, C.; Ertel, S.; Marshall, J. P.; Milli, J.; Montesinos, B.; Mora, A.; Bryden, G.; Danchi, W.; Eiroa, C.; White, G. J.; Wolf, S.

    2014-03-01

    Context. Imaging of debris disks has found evidence for both eccentric and offset disks. One hypothesis is that they provide evidence for massive perturbers, for example, planets or binary companions, which sculpt the observed structures. One such disk was recently observed in the far-IR by the Herschel Space Observatory around ζ2 Reticuli. In contrast with previously reported systems, the disk is significantly eccentric, and the system is several Gyr old. Aims: We aim to investigate the long-term evolution of eccentric structures in debris disks caused by a perturber on an eccentric orbit around the star. We hypothesise that the observed eccentric disk around ζ2 Reticuli might be evidence of such a scenario. If so, we are able to constrain the mass and orbit of a potential perturber, either a giant planet or a binary companion. Methods: Analytical techniques were used to predict the effects of a perturber on a debris disk. Numerical N-body simulations were used to verify these results and further investigate the observable structures that may be produced by eccentric perturbers. The long-term evolution of the disk geometry was examined, with particular application to the ζ2 Reticuli system. In addition, synthetic images of the disk were produced for direct comparison with Herschel observations. Results: We show that an eccentric companion can produce both the observed offsets and eccentric disks. These effects are not immediate, and we characterise the timescale required for the disk to develop to an eccentric state (and any spirals to vanish). For ζ2 Reticuli, we derive limits on the mass and orbit of the companion required to produce the observations. Synthetic images show that the pattern observed around ζ2 Reticuli can be produced by an eccentric disk seen close to edge-on, and allow us to place additional constraints on the disk parameters of our model (disk flux and extent). Conclusions: We conclude that eccentric planets or stellar companions can induce long-lived eccentric structures in debris disks. Observations of such eccentric structures thus provide potential evidence of the presence of such a companion in a planetary system. We considered the specific example of ζ2 Reticuli, whose observed eccentric disk can be explained by a distant companion (at tens of AU) on an eccentric orbit (ep ≳ 0.3). Appendices are available in electronic form at http://www.aanda.org. The Herschel Space Observatory is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  2. Intelligent holographic databases

    NASA Astrophysics Data System (ADS)

    Barbastathis, George

    Memory is a key component of intelligence. In the human brain, physical structure and functionality jointly provide diverse memory modalities at multiple time scales. How could we engineer artificial memories with similar faculties? In this thesis, we attack both hardware and algorithmic aspects of this problem. A good part is devoted to holographic memory architectures, because they meet high capacity and parallelism requirements. We develop and fully characterize shift multiplexing, a novel storage method that simplifies disk head design for holographic disks. We develop and optimize the design of compact refreshable holographic random access memories, showing several ways that 1 Tbit can be stored holographically in volume less than 1 m3, with surface density more than 20 times higher than conventional silicon DRAM integrated circuits. To address the issue of photorefractive volatility, we further develop the two-lambda (dual wavelength) method for shift multiplexing, and combine electrical fixing with angle multiplexing to demonstrate 1,000 multiplexed fixed holograms. Finally, we propose a noise model and an information theoretic metric to optimize the imaging system of a holographic memory, in terms of storage density and error rate. Motivated by the problem of interfacing sensors and memories to a complex system with limited computational resources, we construct a computer game of Desert Survival, built as a high-dimensional non-stationary virtual environment in a competitive setting. The efficacy of episodic learning, implemented as a reinforced Nearest Neighbor scheme, and the probability of winning against a control opponent improve significantly by concentrating the algorithmic effort to the virtual desert neighborhood that emerges as most significant at any time. The generalized computational model combines the autonomous neural network and von Neumann paradigms through a compact, dynamic central representation, which contains the most salient features of the sensory inputs, fused with relevant recollections, reminiscent of the hypothesized cognitive function of awareness. The Declarative Memory is searched both by content and address, suggesting a holographic implementation. The proposed computer architecture may lead to a novel paradigm that solves 'hard' cognitive problems at low cost.

  3. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.

  4. EOS developments

    NASA Astrophysics Data System (ADS)

    Sindrilaru, Elvin A.; Peters, Andreas J.; Adde, Geoffray M.; Duellmann, Dirk

    2017-10-01

    CERN has been developing and operating EOS as a disk storage solution successfully for over 6 years. The CERN deployment provides 135 PB and stores 1.2 billion replicas distributed over two computer centres. The deployment includes four LHC instances, a shared instance for smaller experiments and, since last year, an instance for individual user data as well. The user instance represents the backbone of the CERNBOX service for file sharing. New use cases like synchronisation and sharing, the planned migration to reduce AFS usage at CERN and the continuous growth have brought EOS new challenges. Recent developments include the integration and evaluation of various technologies to make the transition from a single active in-memory namespace to a scale-out implementation distributed over many meta-data servers. The new architecture aims to separate the data from the application logic and user interface code, thus providing flexibility and scalability to the namespace component. Another important goal is to provide EOS as a CERN-wide mounted filesystem with strong authentication, making it a single storage repository accessible via various services and front-ends (/eos initiative). This required new developments in the security infrastructure of the EOS FUSE implementation. Furthermore, there was a series of improvements targeting the end-user experience, such as tighter consistency and latency optimisations. In collaboration with Seagate as an Openlab partner, EOS has a complete integration of the OpenKinetic object drive cluster as a high-throughput, high-availability, low-cost storage solution. This contribution will discuss these three main development projects and present new performance metrics.

  5. Disk Alloy Development

    NASA Technical Reports Server (NTRS)

    Gabb, Tim; Gayda, John; Telesman, Jack

    2001-01-01

    The advanced powder metallurgy disk alloy ME3 was designed using statistical screening and optimization of composition and processing variables in the NASA HSR/EPM disk program to have extended durability at 1150 to 1250 °F in large disks. Scaled-up disks of this alloy were produced at the conclusion of this program to demonstrate these properties in realistic disk shapes. The objective of the UEET disk program was to assess the mechanical properties of these ME3 disks as functions of temperature, in order to estimate the maximum temperature capabilities of this advanced alloy. Scaled-up disks processed in the HSR/EPM Compressor/Turbine Disk program were sectioned, machined into specimens, and tested in tensile, creep, fatigue, and fatigue crack growth tests by NASA Glenn Research Center, in cooperation with General Electric Engine Company and Pratt & Whitney Aircraft Engines. Additional sub-scale disks and blanks were processed and tested to explore the effects of several processing variations on mechanical properties. Scaled-up disks of an advanced regional disk alloy, Alloy 10, were used to evaluate dual microstructure heat treatments. This allowed demonstration of an improved balance of properties in disks with higher strength and fatigue resistance in the bores and higher creep and dwell fatigue crack growth resistance in the rims. Results indicate the baseline ME3 alloy and process has 1300 to 1350 °F temperature capabilities, dependent on detailed disk and engine design property requirements. Chemistry and process enhancements show promise for further increasing temperature capabilities.

  6. Spin Testing of Superalloy Disks With Dual Grain Structure

    NASA Technical Reports Server (NTRS)

    Hefferman, Tab M.

    2006-01-01

    This 24-month program was a joint effort between Allison Advanced Development Company (AADC), General Electric Aircraft (GEAE), and NASA Glenn Research Center (GRC). AADC led the disk and spin hardware design and analysis utilizing existing Rolls-Royce turbine disk forging tooling. Testing focused on spin testing four disks: two supplied by GEAE and two by AADC. The two AADC disks were made of Alloy 10, and each was subjected to a different heat treat process: one produced a dual microstructure with coarse grain size at the rim and fine grain size at the bore, and the other produced a single fine grain structure throughout. The purpose of the spin tests was to provide data for evaluation of the impact of dual grain structure on disk overspeed integrity (yielding) and rotor burst criteria. The program culminated with analysis and correlation of the data to current rotor overspeed criteria and advanced criteria required for dual structure disks.

  7. A DWARF TRANSITIONAL PROTOPLANETARY DISK AROUND XZ TAU B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osorio, Mayra; Macías, Enrique; Anglada, Guillem

    We report the discovery of a dwarf protoplanetary disk around the star XZ Tau B that shows all the features of a classical transitional disk but on a much smaller scale. The disk has been imaged with the Atacama Large Millimeter/submillimeter Array (ALMA), revealing that its dust emission has a quite small radius of ∼3.4 au and presents a central cavity of ∼1.3 au in radius that we attribute to clearing by a compact system of orbiting (proto)planets. Given the very small radii involved, evolution is expected to be much faster in this disk (observable changes in a few months) than in classical disks (observable changes requiring decades) and easy to monitor with observations in the near future. From our modeling we estimate that the mass of the disk is large enough to form a compact planetary system.

  8. Moving mode shape function approach for spinning disk and asymmetric disc brake squeal

    NASA Astrophysics Data System (ADS)

    Kang, Jaeyoung

    2018-06-01

    The solution approach for an asymmetric spinning disk under stationary friction loads requires the mode shape function fixed in the disk in the assumed mode method when the equations of motion are described in the space-fixed frame. This model description will be termed the 'moving mode shape function approach', and it allows us to formulate the stationary contact load problem in both the axisymmetric and asymmetric disk cases. Numerical results show that the eigenvalues of the time-periodic axisymmetric disk system are time-invariant. When the axisymmetry of the disk is broken, the positive real parts of the eigenvalues vary strongly with the rotation of the disk at slow speeds in such applications as disc brake squeal. By using Floquet stability analysis, it is also shown that breaking the axisymmetry of the disc alters the stability boundaries of the system.
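
    One common way to write such a disk-fixed mode expansion in space-fixed coordinates (a generic textbook form, not necessarily the exact expansion used in the paper) is

        \[
        w(r,\theta,t) = \sum_{n} R_{n}(r)\,\big[a_{n}(t)\cos n(\theta - \Omega t) + b_{n}(t)\sin n(\theta - \Omega t)\big],
        \]

    so the mode shape functions rotate with the disk at speed Ω while the friction loads remain stationary in space.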

  9. SAN/CXFS test report to LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, T M; Eldel, A

    2000-01-01

    The primary objectives of this project were to evaluate the performance of the SGI CXFS File System in a Storage Area Network (SAN) and compare/contrast it to the performance of a locally attached XFS file system on the same computer and storage subsystems. The University of Minnesota participants were asked to verify that the performance of the SAN/CXFS configuration did not fall below 85% of the performance of the XFS local configuration. There were two basic hardware test configurations constructed from the following equipment: two Onyx 2 computer systems, each with two Qlogic-based Fibre Channel/XIO Host Bus Adapters (HBAs); one 8-Port Brocade Silkworm 2400 Fibre Channel Switch; and four Ciprico RF7000 RAID Disk Arrays populated with Seagate Barracuda 50GB disk drives. The Operating System on each of the ONYX 2 computer systems was IRIX 6.5.6. The first hardware configuration consisted of directly connecting the Ciprico arrays to the Qlogic controllers without the Brocade switch. The purpose for this configuration was to establish baseline performance data on the Qlogic controllers/Ciprico disk raw subsystem. This baseline performance data would then be used to demonstrate any performance differences arising from the addition of the Brocade Fibre Channel Switch. Furthermore, the performance of the Qlogic controllers could be compared to that of the older, Adaptec-based XIO dual-channel Fibre Channel adapters previously used on these systems. It should be noted that only raw device tests were performed on this configuration. No file system testing was performed on this configuration. The second hardware configuration introduced the Brocade Fibre Channel Switch. Two FC ports from each of the ONYX2 computer systems were attached to four ports of the switch and the four Ciprico arrays were attached to the remaining four. Raw disk subsystem tests were performed on the SAN configuration in order to demonstrate the performance differences between the direct-connect and the switched configurations. After this testing was completed, the Ciprico arrays were formatted with an XFS file system and performance numbers were gathered to establish a File System Performance Baseline. Finally, the disks were formatted with CXFS and further tests were run to demonstrate the performance of the CXFS file system. A summary of the results of these tests is given.

  10. The role of disk self-gravity on gap formation of the HL Tau proto-planetary disk

    DOE PAGES

    Li, Shengtai; Li, Hui

    2016-05-31

    Here, we use extensive global hydrodynamic disk gas+dust simulations with embedded planets to model the dust ring and gap structures in the HL Tau protoplanetary disk observed with the Atacama Large Millimeter/Submillimeter Array (ALMA). Since the HL Tau disk is relatively massive, we find the disk self-gravity (DSG) plays an important role in the gap formation induced by the planets. Our simulation results demonstrate that DSG is necessary to explain the dust rings and gaps in the HL Tau disk. The comparison of simulation results shows that the dust ring and gap structures are more evident when the fully 2D DSG (non-axisymmetric components included) is used than when 1D axisymmetric DSG (only the axisymmetric component included) is used, or when disk self-gravity is not considered. We also find that coupled dust+gas+planet simulations are required because the gap and ring structure differs between the dust and gas surface densities.

  11. Migration of accreting giant planets

    NASA Astrophysics Data System (ADS)

    Robert, C.; Crida, A.; Lega, E.; Méheut, H.

    2017-09-01

    Giant planets forming in protoplanetary disks migrate relative to their host star. By repelling the gas in their vicinity, they form gaps in the disk's structure. If they are effectively locked in their gap, it follows that their migration rate is governed by the accretion of the disk itself onto the star, in a so-called type II fashion. Recent results showed however that a locking mechanism was still lacking, and was required to understand how giant planets may survive their disk. We propose that planetary accretion may play this part, and help reach this slow migration regime.

  12. Low Cost Heat Treatment Process for Production of Dual Microstructure Superalloy Disks

    NASA Technical Reports Server (NTRS)

    Gayda, John; Gabb, Tim; Kantzos, Pete; Furrer, David

    2003-01-01

    There are numerous instances where the operating conditions imposed on a component mandate different and distinct mechanical property requirements from location to location within the component. Examples include a crankshaft in an internal combustion engine, gears for an automotive transmission, and disks for a gas turbine engine. Gas turbine disks are often made from nickel-base superalloys, because these disks need to withstand the temperatures and stresses involved in the gas turbine cycle. In the bore of the disk, where the operating temperature is somewhat lower, the limiting material properties are often tensile and fatigue strength. In the rim of the disk, where the operating temperatures are higher than those of the bore because of the proximity to the combustion gases, resistance to creep and crack growth are often the limiting properties.

  13. Management of Lumbar Conditions in the Elite Athlete.

    PubMed

    Hsu, Wellington K; Jenkins, Tyler James

    2017-07-01

    Lumbar disk herniation, degenerative disk disease, and spondylolysis are the most prevalent lumbar conditions that result in missed playing time. Lumbar disk herniation has a good prognosis. After recovery from injury, professional athletes return to play 82% of the time. Surgical management of lumbar disk herniation has been shown to be a viable option in athletes in whom nonsurgical measures have failed. Degenerative disk disease is predominately genetic but may be accelerated in athletes secondary to increased physiologic loading. Nonsurgical management is the standard of care for lumbar degenerative disk disease in the elite athlete. Spondylolysis is more common in adolescent athletes with back pain than in adult athletes. Nonsurgical management of spondylolysis is typically successful. However, if surgery is required, fusion or direct pars repair can allow the patient to return to sports.

  14. Disk Evolution and the Fate of Water

    NASA Astrophysics Data System (ADS)

    Hartmann, Lee; Ciesla, Fred; Gressel, Oliver; Alexander, Richard

    2017-10-01

    We review the general theoretical concepts and observational constraints on the distribution and evolution of water vapor and ice in protoplanetary disks, with a focus on the Solar System. Water is expected to freeze out at distances greater than 1-3 AU from solar-type central stars; more precise estimates are difficult to obtain due to uncertainties in the complex processes involved in disk evolution, including dust growth, settling, and radial drift, and the level of turbulence and viscous dissipation within disks. Interferometric observations are now providing constraints on the positions of CO snow lines, but extrapolation to the unresolved regions where water ice sublimates will require much better theoretical understanding of mass and angular momentum transport in disks as well as more refined comparison of observations with sophisticated disk models.

  15. An intelligent data model for the storage of structured grids

    NASA Astrophysics Data System (ADS)

    Clyne, John; Norton, Alan

    2013-04-01

    With support from the U.S. National Science Foundation we have developed, and currently maintain, VAPOR: a geosciences-focused, open source visual data analysis package. VAPOR enables highly interactive exploration, as well as qualitative and quantitative analysis of high-resolution simulation outputs using only a commodity, desktop computer. The enabling technology behind VAPOR's ability to interact with a data set, whose size would overwhelm all but the largest analysis computing resources, is a progressive data access file format, called the VAPOR Data Collection (VDC). The VDC is based on the discrete wavelet transform and its information compaction properties. Prior to analysis, raw data undergo a wavelet transform, concentrating the information content into a fraction of the coefficients. The coefficients are then sorted by their information content (magnitude) into a small number of bins. Data are reconstructed by applying an inverse wavelet transform. If all of the coefficient bins are used during reconstruction the process is lossless (up to floating point round-off). If only a subset of the bins is used, an approximation of the original data is produced. A crucial point here is that the principal benefit to reconstruction from a subset of wavelet coefficients is a reduction in I/O. Further, if smaller coefficients are simply discarded, or perhaps stored on more capacious tertiary storage, secondary storage requirements (e.g. disk) can be reduced as well. In practice, these reductions in I/O or storage can be on the order of tens or even hundreds. This talk will briefly describe the VAPOR Data Collection, and will present real world success stories from the geosciences that illustrate how progressive data access enables highly interactive exploration of Big Data.
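
    The coefficient-prioritization idea can be sketched with a standard wavelet library: transform, keep only the largest coefficients (the most informative bin), and reconstruct an approximation. The Python sketch below uses PyWavelets on a 1-D signal purely as an illustration of the principle; VAPOR's own VDC implementation for large 3-D grids differs.

        # Sketch of progressive access via wavelet coefficient prioritization.
        # Requires numpy and PyWavelets (pywt).
        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)

        # Forward transform: list of coefficient arrays, coarse to fine.
        coeffs = pywt.wavedec(signal, "db4", level=5)
        flat = np.concatenate(coeffs)

        # Keep only the largest 10% of coefficients by magnitude.
        threshold = np.quantile(np.abs(flat), 0.90)
        truncated = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]

        # Inverse transform from the truncated coefficient set.
        approx = pywt.waverec(truncated, "db4")[: signal.size]

        kept = sum(int(np.count_nonzero(c)) for c in truncated)
        rms = float(np.sqrt(np.mean((signal - approx) ** 2)))
        print(f"kept {kept}/{flat.size} coefficients, RMS error {rms:.4f}")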

  16. Practical and Secure Recovery of Disk Encryption Key Using Smart Cards

    NASA Astrophysics Data System (ADS)

    Omote, Kazumasa; Kato, Kazuhiko

    In key-recovery methods using smart cards, a user can recover the disk encryption key in cooperation with the system administrator, even if the user has lost the smart card containing the disk encryption key. However, in most key-recovery methods the disk encryption key is known to the system administrator in advance. Hence the user's disk data may be read by the system administrator. Furthermore, if the disk encryption key is not known to the system administrator in advance, it is difficult to achieve key authentication. In this paper, we propose a scheme which enables recovery of the disk encryption key when the user's smart card is lost. In our scheme, the disk encryption key is not stored anywhere, so the system administrator cannot know the key before the key-recovery phase. Only someone who has a user's smart card and knows the user's password can decrypt that user's disk data. Furthermore, we measured the processing time required for user authentication in an experimental environment using a virtual machine monitor. As a result, we found that this processing time is short enough to be practical.
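
    As a generic illustration of the design goal, namely that the key exists only when the card-held secret and the user's password are combined, the Python sketch below derives a disk key from both via PBKDF2 and HMAC. This is not the authors' protocol (which additionally provides administrator-assisted recovery and key authentication); all names and parameters are assumptions.

        # Generic illustration only: derive the disk encryption key on the fly
        # from a card-held secret plus the user's password, so the key itself
        # is never stored anywhere.
        import hashlib
        import hmac
        import os

        def derive_disk_key(card_secret: bytes, password: str, salt: bytes) -> bytes:
            # Stretch the password, then bind it to the card secret with HMAC.
            stretched = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
            return hmac.new(card_secret, stretched, hashlib.sha256).digest()

        salt = os.urandom(16)            # stored on disk in the clear
        card_secret = os.urandom(32)     # written to the smart card at enrolment

        key = derive_disk_key(card_secret, "correct horse battery staple", salt)
        print(key.hex())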

  17. TriageTools: tools for partitioning and prioritizing analysis of high-throughput sequencing data.

    PubMed

    Fimereli, Danai; Detours, Vincent; Konopka, Tomasz

    2013-04-01

    High-throughput sequencing is becoming a popular research tool but carries with it considerable costs in terms of computation time, data storage and bandwidth. Meanwhile, some research applications focusing on individual genes or pathways do not necessitate processing of a full sequencing dataset. Thus, it is desirable to partition a large dataset into smaller, manageable, but relevant pieces. We present a toolkit for partitioning raw sequencing data that includes a method for extracting reads that are likely to map onto pre-defined regions of interest. We show the method can be used to extract information about genes of interest from DNA or RNA sequencing samples in a fraction of the time and disk space required to process and store a full dataset. We report speedup factors between 2.6 and 96, depending on settings and samples used. The software is available at http://www.sourceforge.net/projects/triagetools/.
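
    A minimal sketch of the general idea, keeping only reads that share a k-mer with a region of interest, is shown below; the k value, the set-based index, and the FASTQ handling are simplifications and do not reproduce TriageTools itself.

        # Minimal sketch: retain reads that share at least one k-mer with a
        # reference region of interest. Simplified; not the TriageTools code.
        K = 21

        def kmers(seq, k=K):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def read_fastq(path):
            with open(path) as fh:
                while True:
                    header = fh.readline().rstrip()
                    if not header:
                        return
                    seq = fh.readline().rstrip()
                    fh.readline()            # '+' separator line
                    fh.readline()            # quality line
                    yield header, seq

        def triage(fastq_path, region_seq, out_path):
            index = kmers(region_seq)
            kept = 0
            with open(out_path, "w") as out:
                for header, seq in read_fastq(fastq_path):
                    if not kmers(seq).isdisjoint(index):
                        out.write(f"{header}\n{seq}\n")
                        kept += 1
            return kept

        # Usage (file names are hypothetical):
        # kept = triage("sample.fastq", "ACGT...regionsequence...", "subset.txt")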

  18. Revealing the Structure of a Pre-Transitional Disk: The Case of the Herbig F Star SAO 206462 (HD 135344B)

    NASA Astrophysics Data System (ADS)

    Grady, C. A.; Schneider, G.; Sitko, M. L.; Williger, G. M.; Hamaguchi, K.; Brittain, S. D.; Ablordeppey, K.; Apai, D.; Beerman, L.; Carpenter, W. J.; Collins, K. A.; Fukagawa, M.; Hammel, H. B.; Henning, Th.; Hines, D.; Kimes, R.; Lynch, D. K.; Ménard, F.; Pearson, R.; Russell, R. W.; Silverstone, M.; Smith, P. S.; Troutman, M.; Wilner, D.; Woodgate, B.; Clampin, M.

    2009-07-01

    SAO 206462 (HD 135344B) has previously been identified as a Herbig F star with a circumstellar disk with a dip in its infrared excess near 10 μm. In combination with a low accretion rate estimated from Br γ, it may represent a gapped, but otherwise primordial or "pre-transitional" disk. We test this hypothesis with Hubble Space Telescope coronagraphic imagery, FUV spectroscopy and imagery and archival X-ray data, and spectral energy distribution (SED) modeling constrained by the observed system inclination, disk outer radius, and outer disk radial surface brightness (SB) profile using the Whitney Monte Carlo Radiative Transfer Code. The essentially face-on (i ≲ 20°) disk is detected in scattered light from 0.4″ to 1.15″ (56-160 AU), with a steep (r^-9.6) radial SB profile from 0.6″ to 0.93″. Fitting the SB data requires a concave upward or anti-flared outer disk, indicating substantial dust grain growth and settling by 8 ± 4 Myr. The warm dust component is significantly variable in near to mid-IR excess and in temperature. At its warmest, it appears confined to a narrow belt from 0.08 to 0.2 AU. The steep SED for this dust component is consistent with grains with a ≤ 2.5 μm. For cosmic carbon to silicate dust composition, conspicuous 10 μm silicate emission would be expected and is not observed. This may indicate an elevated carbon to silicate ratio for the warm dust, which is not required to fit the outer disk. At its coolest, the warm dust can be fit with a disk from 0.14 to 0.31 AU, but with a higher inclination than either the outer disk or the gaseous disk, providing confirmation of the high inclination inferred from mid-IR interferometry. In tandem, the compositional and inclination difference between the warm dust and the outer dust disk suggests that the warm dust may be of second-generation origin, rather than a remnant of a primordial disk component. With its near face-on inclination, SAO 206462's disk is a prime location for planet searches. Based in part on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.

  19. Near-infrared structure of fast and slow-rotating disk galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schechtman-Rook, Andrew; Bershady, Matthew A., E-mail: andrew@astro.wisc.edu

    We investigate the stellar disk structure of six nearby edge-on spiral galaxies using high-resolution JHKs-band images and three-dimensional radiative transfer models. To explore how mass and environment shape spiral disks, we selected galaxies with rotational velocities between 69 km s⁻¹ and … Of the fast-rotating (≳150 km s⁻¹) galaxies, only NGC 4013 has the super-thin+thin+thick nested disk structure seen in NGC 891 and the Milky Way, albeit with decreased oblateness, while NGC 1055, a disturbed massive spiral galaxy, contains disks with h_z ≲ 200 pc. NGC 4565, another fast rotator, contains a prominent ring at a radius of ∼5 kpc but no super-thin disk. Despite these differences, all fast-rotating galaxies in our sample have inner truncations in at least one of their disks. These truncations lead to Freeman Type II profiles when projected face-on. Slow-rotating galaxies are less complex, lacking inner disk truncations and requiring fewer disk components to reproduce their light distributions. Super-thin disk components in undisturbed disks contribute ∼25% of the total Ks-band light, up to that of the thin-disk contribution. The presence of super-thin disks correlates with infrared flux ratios; galaxies with super-thin disks have f_Ks/f_60μm ≤ 0.12 for integrated light, consistent with super-thin disks being regions of ongoing star formation. Attenuation-corrected vertical color gradients in (J – Ks) correlate with the observed disk structure and are consistent with population gradients with young-to-intermediate ages closer to the mid-plane, indicating that disk heating—or cooling—is a ubiquitous phenomenon.

  20. Joining the petabyte club with direct attached storage

    NASA Astrophysics Data System (ADS)

    Haupt, Andreas; Leffhalm, Kai; Wegner, Peter; Wiesand, Stephan

    2011-12-01

    Our site successfully runs more than a Petabyte of online disk, using nothing but Direct Attached Storage. The bulk of this capacity is grid-enabled and served by dCache, but sizable amounts are provided by traditional AFS or modern Lustre filesystems as well. While each of these storage flavors has a different purpose, owing to their respective strengths and weaknesses for certain use cases, their instances are all built from the same universal storage bricks. These are managed using the same scale-out techniques used for compute nodes, and run the same operating system as those, thus fully leveraging the existing know-how and infrastructure. As a result, this storage is cost effective especially regarding total cost of ownership. It is also competitive in terms of aggregate performance, performance per capacity, and - due to the possibility to make use of the latest technology early - density and power efficiency. Further advantages include a high degree of flexibility and complete avoidance of vendor lock-in. Availability and reliability in practice turn out to be more than adequate for a HENP site's major tasks. We present details about this Ansatz for online storage, hardware and software used, tweaking and tuning, lessons learned, and the actual result in practice.

  1. Landau-Lifshitz-Bloch equation for exchange-coupled grains

    NASA Astrophysics Data System (ADS)

    Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter

    2014-12-01

    Heat-assisted recording is a promising technique to further increase the storage density in hard disks. Multilayer recording grains with graded Curie temperature are discussed as a way to further assist the write process. Describing the correct magnetization dynamics of these grains, from room temperature to far above the Curie point, during a write process is required for the calculation of bit error rates. We present a coarse-grained approach based on the Landau-Lifshitz-Bloch (LLB) equation to model exchange-coupled grains with low computational effort. The required temperature-dependent material properties, such as the zero-field equilibrium magnetization as well as the parallel and normal susceptibilities, are obtained by atomistic Landau-Lifshitz-Gilbert simulations. Each grain is described with one magnetization vector. In order to mimic the atomistic exchange interaction between the grains, a special treatment of the exchange field in the coarse-grained approach is presented. With the coarse-grained LLB model, the switching probability of a recording grain consisting of two layers with graded Curie temperature is investigated in detail by calculating phase diagrams for different applied heat pulses and external magnetic fields.
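
    For reference, the LLB equation invoked here is commonly written in the following Garanin-type form (quoted as standard background, not taken from the paper, whose coarse-grained variant additionally folds an inter-grain exchange contribution into the effective field; the sign of the precession term depends on the convention adopted for γ):

      \begin{equation}
        \dot{\mathbf{m}} = -\gamma \, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
        + \frac{\gamma \alpha_{\parallel}}{m^{2}}
          \left( \mathbf{m} \cdot \mathbf{H}_{\mathrm{eff}} \right) \mathbf{m}
        - \frac{\gamma \alpha_{\perp}}{m^{2}}
          \, \mathbf{m} \times \left( \mathbf{m} \times \mathbf{H}_{\mathrm{eff}} \right),
      \end{equation}

    with longitudinal and transverse damping parameters
    $\alpha_{\parallel} = \lambda \, \tfrac{2T}{3T_{\mathrm{C}}}$ and
    $\alpha_{\perp} = \lambda \left( 1 - \tfrac{T}{3T_{\mathrm{C}}} \right)$ for $T < T_{\mathrm{C}}$.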

  2. Holographic storage of three-dimensional image and data using photopolymer and polymer dispersed liquid crystal films

    NASA Astrophysics Data System (ADS)

    Gao, Hong-Yue; Liu, Pan; Zeng, Chao; Yao, Qiu-Xiang; Zheng, Zhiqiang; Liu, Jicheng; Zheng, Huadong; Yu, Ying-Jie; Zeng, Zhen-Xiang; Sun, Tao

    2016-09-01

    We present holographic storage of three-dimensional (3D) images and data in a photopolymer film without any applied electric field. Its absorption and diffraction efficiency are measured, and reflective analog holograms of a real object and images of digital information are recorded in the films. The photopolymer is compared with polymer-dispersed liquid crystals as a holographic material. Although the holographic diffraction efficiency of the former is slightly lower than that of the latter, this work demonstrates that the photopolymer is more suitable for analog holograms and permanent storage of big data because of its high definition and because it needs no high-voltage electric field. Therefore, our study proposes a potential holographic storage material for use in large-size static 3D holographic displays, including analog hologram displays, digital hologram prints, and holographic disks. Project supported by the National Natural Science Foundation of China (Grant Nos. 11474194, 11004037, and 61101176) and the Natural Science Foundation of Shanghai, China (Grant No. 14ZR1415500).

  3. Calcine Waste Storage at the Idaho Nuclear Technology and Engineering Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staiger, Merle Daniel; M. C. Swenson

    2005-01-01

    This report documents an inventory of calcined waste produced at the Idaho Nuclear Technology and Engineering Center during the period from December 1963 to May 2000. The report was prepared based on calciner runs, operation of the calcined solids storage facilities, and miscellaneous operational information that establishes the range of chemical compositions of calcined waste stored at the Idaho Nuclear Technology and Engineering Center. The report will be used to support obtaining permits for the calcined solids storage facilities, possible treatment of the calcined waste at the Idaho National Engineering and Environmental Laboratory, and shipment of the waste to an off-site facility, including a geologic repository. The information in this report was compiled from calciner operating data, waste solution analyses and volumes calcined, calciner operating schedules, calcine temperature monitoring records, and the facility design of the calcined solids storage facilities. A compact disk copy of this report is provided to facilitate future data manipulation and analysis.

  4. Numerical and experimental analysis of heat pipes with application in concentrated solar power systems

    NASA Astrophysics Data System (ADS)

    Mahdavi, Mahboobe

    Thermal energy storage systems as an integral part of concentrated solar power plants improve the performance of the system by mitigating the mismatch between the energy supply and the energy demand. Using a phase change material (PCM) to store energy increases the energy density, hence, reduces the size and cost of the system. However, the performance is limited by the low thermal conductivity of the PCM, which decreases the heat transfer rate between the heat source and PCM, which therefore prolongs the melting, or solidification process, and results in overheating the interface wall. To address this issue, heat pipes are embedded in the PCM to enhance the heat transfer from the receiver to the PCM, and from the PCM to the heat sink during charging and discharging processes, respectively. In the current study, the thermal-fluid phenomenon inside a heat pipe was investigated. The heat pipe network is specifically configured to be implemented in a thermal energy storage unit for a concentrated solar power system. The configuration allows for simultaneous power generation and energy storage for later use. The network is composed of a main heat pipe and an array of secondary heat pipes. The primary heat pipe has a disk-shaped evaporator and a disk-shaped condenser, which are connected via an adiabatic section. The secondary heat pipes are attached to the condenser of the primary heat pipe and they are surrounded by PCM. The other side of the condenser is connected to a heat engine and serves as its heat acceptor. The applied thermal energy to the disk-shaped evaporator changes the phase of working fluid in the wick structure from liquid to vapor. The vapor pressure drives it through the adiabatic section to the condenser where the vapor condenses and releases its heat to a heat engine. It should be noted that the condensed working fluid is returned to the evaporator by the capillary forces of the wick. The extra heat is then delivered to the phase change material through the secondary heat pipes. During the discharging process, secondary heat pipes serve as evaporators and transfer the stored energy to the heat engine. (Abstract shortened by ProQuest.).

  5. Magnetically Induced Disk Winds and Transport in the HL Tau Disk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasegawa, Yasuhiro; Flock, Mario; Turner, Neal J.

    2017-08-10

    The mechanism of angular momentum transport in protoplanetary disks is fundamental to understanding the distributions of gas and dust in the disks. The unprecedented ALMA observations taken toward HL Tau at high spatial resolution and subsequent radiative transfer modeling reveal that a high degree of dust settling is currently achieved in the outer part of the HL Tau disk. Previous observations, however, suggest a high disk accretion rate onto the central star. This configuration is not necessarily intuitive in the framework of the conventional viscous disk model, since efficient accretion generally requires a high level of turbulence, which can suppress dust settling considerably. We develop a simplified, semi-analytical disk model to examine under what condition these two properties can be realized in a single model. Recent, non-ideal MHD simulations are utilized to realistically model the angular momentum transport both radially via MHD turbulence and vertically via magnetically induced disk winds. We find that the HL Tau disk configuration can be reproduced well when disk winds are properly taken into account. While the resulting disk properties are likely consistent with other observational results, such an ideal situation can be established only if the plasma β at the disk midplane is β_0 ≃ 2 × 10^4 under the assumption of steady accretion. Equivalently, the vertical magnetic flux at 100 au is about 0.2 mG. More detailed modeling is needed to fully identify the origin of the disk accretion and quantitatively examine plausible mechanisms behind the observed gap structures in the HL Tau disk.

  6. Magnetically Induced Disk Winds and Transport in the HL Tau Disk

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yasuhiro; Okuzumi, Satoshi; Flock, Mario; Turner, Neal J.

    2017-08-01

    The mechanism of angular momentum transport in protoplanetary disks is fundamental to understanding the distributions of gas and dust in the disks. The unprecedented ALMA observations taken toward HL Tau at high spatial resolution and subsequent radiative transfer modeling reveal that a high degree of dust settling is currently achieved in the outer part of the HL Tau disk. Previous observations, however, suggest a high disk accretion rate onto the central star. This configuration is not necessarily intuitive in the framework of the conventional viscous disk model, since efficient accretion generally requires a high level of turbulence, which can suppress dust settling considerably. We develop a simplified, semi-analytical disk model to examine under what condition these two properties can be realized in a single model. Recent, non-ideal MHD simulations are utilized to realistically model the angular momentum transport both radially via MHD turbulence and vertically via magnetically induced disk winds. We find that the HL Tau disk configuration can be reproduced well when disk winds are properly taken into account. While the resulting disk properties are likely consistent with other observational results, such an ideal situation can be established only if the plasma β at the disk midplane is β_0 ≃ 2 × 10^4 under the assumption of steady accretion. Equivalently, the vertical magnetic flux at 100 au is about 0.2 mG. More detailed modeling is needed to fully identify the origin of the disk accretion and quantitatively examine plausible mechanisms behind the observed gap structures in the HL Tau disk.
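
    As a rough back-of-the-envelope check (not from the paper), the quoted plasma β and field strength are related through β = P_gas / (B²/8π) in Gaussian units, i.e. B = √(8πP_gas/β). The midplane number density and temperature used below are assumed, order-of-magnitude placeholder values for r ~ 100 au, not values fitted in the paper.

      # Relates a quoted midplane plasma beta to a vertical field strength via
      # beta = P_gas / (B^2 / 8*pi) in Gaussian units. The midplane number
      # density and temperature are assumed placeholder values.
      import math

      k_B = 1.380649e-16          # Boltzmann constant, erg/K

      def vertical_field_gauss(n, T, beta):
          """Field strength (Gauss) implied by gas pressure n*k_B*T and plasma beta."""
          P_gas = n * k_B * T
          return math.sqrt(8.0 * math.pi * P_gas / beta)

      if __name__ == "__main__":
          B = vertical_field_gauss(n=1e10, T=20.0, beta=2e4)   # assumed midplane values
          print(f"B_z ~ {B * 1e3:.2f} mG")                     # same order as the ~0.2 mG quoted above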

  7. Electrifying the disk: a modular rotating platform for wireless power and data transmission for Lab on a disk application.

    PubMed

    Höfflin, Jens; Torres Delgado, Saraí M; Suárez Sandoval, Fralett; Korvink, Jan G; Mager, Dario

    2015-06-21

    We present a design for wireless power transfer, via inductively coupled coils, to a spinning disk. The rectified and stabilised power feeds an Arduino-compatible microcontroller (μC) on the disc, which in turn drives and monitors various sensors and actuators. The platform, which has been conceived to flexibly prototype such systems, demonstrates the feasibility of a wireless power supply and the use of a μC circuit, for example for Lab-on-a-disk applications, thereby eliminating the need for cumbersome slip rings or batteries, and adding a cogent and new degree of freedom to the setup. The large number of sensors and actuators included demonstrate that a wide range of physical parameters can be easily monitored and altered. All devices are connected to the μC via an I(2)C bus, therefore can be easily exchanged or augmented by other devices in order to perform a specific task on the disk. The wireless power supply takes up little additional physical space and should work in conjunction with most existing Lab-on-a-disk platforms as a straightforward add-on, since it does not require modification of the rotation axis and can be readily adapted to specific geometrical requirements.

  8. Magnetic Recording Media Technology for the Tb/in2 Era

    ScienceCinema

    Bertero, Gerardo [Western Digital]

    2017-12-09

    Magnetic recording has been the technology of choice for mass storage of information. The hard-disk drive industry has recently undergone a major technological transition from longitudinal magnetic recording (LMR) to perpendicular magnetic recording (PMR). However, conventional perpendicular recording can only support a few new product generations before facing insurmountable physical limits. In order to sustain the growth of recording areal density, new technological paradigms, such as energy-assisted recording and bit-patterned media recording, are being contemplated and planned. In this talk, we will briefly discuss the LMR-to-PMR transition, the extendibility of current PMR recording, and the nature and merits of new enabling technologies. We will also discuss a technology roadmap toward recording densities approaching 10 Tb/in2, approximately 40 times higher than in current disk drives.

  9. Integral processing in beyond-Hartree-Fock calculations

    NASA Technical Reports Server (NTRS)

    Taylor, P. R.

    1986-01-01

    The increasing rate at which improvements in processing capacity outstrip improvements in input/output performance of large computers has led to recent attempts to bypass generation of a disk-based integral file. The direct self-consistent field (SCF) method of Almlof and co-workers represents a very successful implementation of this approach. This paper is concerned with the extension of this general approach to configuration interaction (CI) and multiconfiguration-self-consistent field (MCSCF) calculations. After a discussion of the particular types of molecular orbital (MO) integrals for which -- at least for most current generation machines -- disk-based storage seems unavoidable, it is shown how all the necessary integrals can be obtained as matrix elements of Coulomb and exchange operators that can be calculated using a direct approach. Computational implementations of such a scheme are discussed.
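
    As a concrete reminder of what these operators look like, the sketch below forms the Coulomb and exchange matrices from a one-particle density matrix; it is a schematic that holds a randomly generated integral tensor in memory (an assumption for illustration), whereas a direct scheme would instead stream batches of integrals computed on the fly rather than reading them from a disk file.

      # Schematic construction of the Coulomb (J) and exchange (K) operator
      # matrices from two-electron integrals eri[p,q,r,s] = (pq|rs) and a
      # density matrix D. Illustrative only; the integrals here are random
      # placeholders held in memory.
      import numpy as np

      def coulomb_exchange(eri, density):
          """Return Coulomb (J) and exchange (K) matrices for a density matrix."""
          J = np.einsum("pqrs,rs->pq", eri, density)   # J_pq = sum_rs (pq|rs) D_rs
          K = np.einsum("prqs,rs->pq", eri, density)   # K_pq = sum_rs (pr|qs) D_rs
          return J, K

      if __name__ == "__main__":
          n = 4
          rng = np.random.default_rng(0)
          eri = rng.random((n, n, n, n))               # placeholder two-electron integrals
          D = np.eye(n)                                # placeholder density matrix
          J, K = coulomb_exchange(eri, D)
          print(J.shape, K.shape)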

  10. Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters

    PubMed Central

    Bajaj, Chandrajit

    2009-01-01

    Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools to study large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by using parallel isosurface extraction and rendering in conjunction with a new specialized piece of image compositing hardware called the Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks. It statically partitions the volume data to parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission and storage of isosurfaces. PMID:19756231
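
    The selection step that out-of-core extraction relies on can be illustrated with a toy block index; the paper's I/O-optimal external interval tree is replaced here by a simple in-memory scan over assumed per-block (min, max) scalar ranges, which is only meant to show why most blocks never need to be read.

      # Toy version of the selection idea: each on-disk block of the volume is
      # summarized by its (min, max) scalar range, and an isosurface query loads
      # only blocks whose range straddles the isovalue. The block paths and
      # ranges below are made-up examples.
      from dataclasses import dataclass

      @dataclass
      class BlockSummary:
          path: str        # where the block lives on a parallel disk
          vmin: float
          vmax: float

      def blocks_for_isovalue(summaries, isovalue):
          """Return the blocks that may intersect the isosurface."""
          return [b for b in summaries if b.vmin <= isovalue <= b.vmax]

      if __name__ == "__main__":
          index = [BlockSummary("disk0/blk0.raw", 0.0, 0.4),
                   BlockSummary("disk1/blk1.raw", 0.3, 0.9),
                   BlockSummary("disk2/blk2.raw", 0.8, 1.0)]
          for blk in blocks_for_isovalue(index, 0.35):
              print("load", blk.path)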

  11. Experiences with http/WebDAV protocols for data access in high throughput computing

    NASA Astrophysics Data System (ADS)

    Bernabeu, Gerard; Martinez, Francisco; Acción, Esther; Bria, Arnau; Caubet, Marc; Delfino, Manuel; Espinal, Xavier

    2011-12-01

    In the past, access to remote storage was considered to be at least one order of magnitude slower than local disk access. Improvements in network technology provide the alternative of using remote disks. For such accesses one can today reach levels of throughput similar to, or exceeding, those of local disks. Common choices for access protocols in the WLCG collaboration are RFIO, [GSI]DCAP, GRIDFTP, XROOTD and NFS. The HTTP protocol is a promising alternative as it is a simple, lightweight protocol. It also enables the use of standard technologies such as HTTP caching or load balancing, which can be used to improve service resilience and scalability or to boost performance for some use cases seen in HEP, such as "hot files". WebDAV extensions allow writing data, giving it enough functionality to work as a remote access protocol. This paper will show our experiences with the WebDAV door for dCache, in terms of functionality and performance, applied to some of the HEP work flows in the LHC Tier1 at PIC.
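
    Because WebDAV is plain HTTP verbs underneath, the access pattern can be sketched with the widely used requests library; the endpoint, path and credentials below are hypothetical placeholders, not the PIC dCache configuration described in the paper.

      # Minimal HTTP/WebDAV access sketch: PUT a file, then read back a byte
      # range (useful for sparse "hot file" access). Endpoint and credentials
      # are placeholders.
      import requests

      BASE = "https://dcache.example.org:2880"          # hypothetical WebDAV door
      AUTH = ("user", "secret")                          # placeholder credentials

      def put_file(local_path, remote_path):
          with open(local_path, "rb") as fh:
              r = requests.put(f"{BASE}{remote_path}", data=fh, auth=AUTH)
          r.raise_for_status()

      def read_range(remote_path, start, end):
          """Read only bytes [start, end] of the remote file."""
          headers = {"Range": f"bytes={start}-{end}"}
          r = requests.get(f"{BASE}{remote_path}", headers=headers, auth=AUTH)
          r.raise_for_status()
          return r.content

      if __name__ == "__main__":
          put_file("event.root", "/pnfs/example/user/event.root")
          first_kb = read_range("/pnfs/example/user/event.root", 0, 1023)
          print(len(first_kb), "bytes read")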

  12. [PACS: storage and retrieval of digital radiological image data].

    PubMed

    Wirth, S; Treitl, M; Villain, S; Lucke, A; Nissen-Meyer, S; Mittermaier, I; Pfeifer, K-J; Reiser, M

    2005-08-01

    Efficient handling of both picture archiving and retrieval is a crucial factor when new PACS installations as well as technical upgrades are planned. For a large PACS installation, the number, modality, and body region of available priors were evaluated for 200 current studies. In addition, the image access times of 100 CT studies from hard disk (RAID), magneto-optical disk (MOD), and tape (TAPE) archives were assessed. For current examinations, priors existed in 61.1% of cases, with an average of 7.7 studies. Of these, 56.3% were within 0-3 months, 84.9% within 12 months, 91.7% within 24 months, and 96.2% within 36 months. On average, access to images from the hard disk cache was more than 100 times faster than from MOD or TAPE. Since only the PACS RAID provides online image access, at least the imaging of the past 12 months should be available from cache. An accurate prefetching mechanism facilitates effective use of the expensive online cache area. For that, however, close interaction of PACS, RIS, and KIS is an indispensable prerequisite.
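
    The prefetching rule suggested by these numbers can be sketched as follows; the field names and the 12-month window are illustrative assumptions, and a real implementation would be driven by RIS/KIS scheduling messages rather than a hard-coded list.

      # Illustrative prefetching rule: when a new study is scheduled, stage
      # priors acquired within the last 12 months from MOD/TAPE into the RAID
      # cache ahead of time. Field names and the archive layout are assumed.
      from datetime import datetime, timedelta

      def select_priors_to_prefetch(priors, now=None, window_months=12):
          """Return prior studies recent enough to be worth staging to disk."""
          now = now or datetime.now()
          cutoff = now - timedelta(days=30 * window_months)
          return [p for p in priors
                  if p["acquired"] >= cutoff and p["location"] != "RAID"]

      if __name__ == "__main__":
          priors = [
              {"uid": "1.2.3", "acquired": datetime(2005, 1, 10), "location": "TAPE"},
              {"uid": "1.2.4", "acquired": datetime(2001, 6, 2),  "location": "MOD"},
          ]
          for study in select_priors_to_prefetch(priors, now=datetime(2005, 8, 1)):
              print("prefetch", study["uid"])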

  13. Electrochemical Studies of Redox Systems for Energy Storage

    NASA Technical Reports Server (NTRS)

    Wu, C. D.; Calvo, E. J.; Yeager, E.

    1983-01-01

    Particular attention was paid to the Cr(II)/Cr(III) redox couple in aqueous solutions in the presence of Cl(-) ions. The aim of this research has been to unravel the electrode kinetics of this redox couple and the effect of Cl(-) and the electrode substrate. Gold and silver were studied as electrodes and the results show distinctive differences; this is probably due to the role the Cl(-) ion may play as a mediator in the reaction and the difference in the state of electrical charge on these two metals (difference in the potential of zero charge, pzc). The competition of hydrogen evolution with CrCl3 reduction on these surfaces was studied by means of the rotating ring disk electrode (RRDE). The ring downstream measures the flux of chromous ions from the disk, and therefore the Cr(III) reduction and H2 generation contributions can be separated by analyzing the ring and disk currents. The conditions for the quantitative detection of Cr(2+) at the ring electrode were established. Underpotential deposition of Pb on Ag and its effect on the electrokinetics of the Cr(II)/Cr(III) reaction were studied.
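
    The ring/disk bookkeeping implied here follows the standard collection-efficiency relation; the sketch below applies it with hypothetical currents and an assumed collection efficiency N, and is not taken from the paper.

      # Standard rotating ring-disk bookkeeping: if a fraction N (the collection
      # efficiency) of the Cr(2+) produced at the disk is re-oxidized at the
      # ring, the Cr(III)-reduction part of the disk current is i_Cr = -i_ring/N
      # and the remainder is hydrogen evolution. Currents and N are assumed.

      def split_disk_current(i_disk, i_ring, N):
          """Return (i_Cr, i_H2) partial currents from measured RRDE currents."""
          i_cr = -i_ring / N          # cathodic Cr(III) -> Cr(II) contribution
          i_h2 = i_disk - i_cr        # the remainder is H2 evolution
          return i_cr, i_h2

      if __name__ == "__main__":
          # Hypothetical currents in mA (cathodic currents negative), N = 0.25
          i_cr, i_h2 = split_disk_current(i_disk=-1.0, i_ring=0.15, N=0.25)
          print(f"Cr(III) reduction: {i_cr:.2f} mA, H2 evolution: {i_h2:.2f} mA")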

  14. Integration experiences and performance studies of A COTS parallel archive systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially in metadata searching speeds, leading to more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.

  15. Integration experiments and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially in metadata searching speeds, leading to more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of future archival storage systems.
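
    The core idea of moving a single large striped file onto many targets in parallel can be sketched as follows; the stripe size, the process pool, and the use of plain directories standing in for tape drives are illustrative assumptions, not the integrated product described in these two records.

      # Conceptual sketch: split one large file into stripes and copy each
      # stripe with its own worker. Destination directories stand in for tape
      # drives; a real archive would drive tape movers instead.
      import os
      from concurrent.futures import ProcessPoolExecutor

      def copy_stripe(src, dst_dir, stripe_id, offset, length, bufsize=1 << 20):
          dst = os.path.join(dst_dir, f"stripe_{stripe_id:04d}")
          with open(src, "rb") as fin, open(dst, "wb") as fout:
              fin.seek(offset)
              remaining = length
              while remaining > 0:
                  chunk = fin.read(min(bufsize, remaining))
                  if not chunk:
                      break
                  fout.write(chunk)
                  remaining -= len(chunk)
          return dst

      def parallel_archive(src, dst_dirs, stripe_size=256 * 1024 * 1024):
          size = os.path.getsize(src)
          jobs = []
          with ProcessPoolExecutor() as pool:
              for i, offset in enumerate(range(0, size, stripe_size)):
                  length = min(stripe_size, size - offset)
                  jobs.append(pool.submit(copy_stripe, src,
                                          dst_dirs[i % len(dst_dirs)],
                                          i, offset, length))
              return [j.result() for j in jobs]

      if __name__ == "__main__":
          import tempfile
          src = tempfile.NamedTemporaryFile(delete=False)
          src.write(os.urandom(4 * 1024 * 1024))          # 4 MiB test file
          src.close()
          dsts = [tempfile.mkdtemp() for _ in range(2)]   # two "tape drives"
          print(parallel_archive(src.name, dsts, stripe_size=1024 * 1024))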

  16. Heat-Assisted Magnetic Recording: Fundamental Limits to Inverse Electromagnetic Design

    NASA Astrophysics Data System (ADS)

    Bhargava, Samarth

    In this dissertation, we address the burgeoning fields of diffractive optics, metals-optics and plasmonics, and computational inverse problems in the engineering design of electromagnetic structures. We focus on the application of the optical nano-focusing system that will enable Heat-Assisted Magnetic Recording (HAMR), a higher density magnetic recording technology that will fulfill the exploding worldwide demand of digital data storage. The heart of HAMR is a system that focuses light to a nano- sub-diffraction-limit spot with an extremely high power density via an optical antenna. We approach this engineering problem by first discussing the fundamental limits of nano-focusing and the material limits for metal-optics and plasmonics. Then, we use efficient gradient-based optimization algorithms to computationally design shapes of 3D nanostructures that outperform human designs on the basis of mass-market product requirements. In 2014, the world manufactured ˜1 zettabyte (ZB), ie. 1 Billion terabytes (TBs), of data storage devices, including ˜560 million magnetic hard disk drives (HDDs). Global demand of storage will likely increase by 10x in the next 5-10 years, and manufacturing capacity cannot keep up with demand alone. We discuss the state-of-art HDD and why industry invented Heat-Assisted Magnetic Recording (HAMR) to overcome the data density limitations. HAMR leverages the temperature sensitivity of magnets, in which the coercivity suddenly and non-linearly falls at the Curie temperature. Data recording to high-density hard disks can be achieved by locally heating one bit of information while co-applying a magnetic field. The heating can be achieved by focusing 100 microW of light to a 30nm diameter spot on the hard disk. This is an enormous light intensity, roughly ˜100,000,000x the intensity of sunlight on the earth's surface! This power density is ˜1,000x the output of gold-coated tapered optical fibers used in Near-field Scanning Optical Microscopes (NSOM), which is the incumbent technology allowing the focus of light to the nano-scale. Even in these lower power NSOM probe tips, optical self-heating and deformation of the nano- gold tips are significant reliability and performance bottlenecks. Hence, the design and manufacture of the higher power optical nano-focusing system for HAMR must overcome great engineering challenges in optical and thermal performance. There has been much debate about alternative materials for metal-optics and plasmonics to cure the current plague of optical loss and thermal reliability in this burgeoning field. We clear the air. For an application like HAMR, where intense self-heating occurs, refractory metals and metals nitrides with high melting points but low optical and thermal conductivities are inferior to noble metals. This conclusion is contradictory to several claims and may be counter-intuitive to some, but the analysis is simple, evident and relevant to any engineer working on metal-optics and plasmonics. Indeed, the best metals for DC and RF electronics are also the best at optical frequencies. We also argue that the geometric design of electromagnetic structures (especially sub-wavelength devices) is too cumbersome for human designers, because the wave nature of light necessitates that this inverse problem be non-convex and non-linear. When the computation for one forward simulation is extremely demanding (hours on a high-performance computing cluster), typical designers constrain themselves to only 2 or 3 degrees of freedom. 
We attack the inverse electromagnetic design problem using gradient-based optimization after leveraging the adjoint method to efficiently calculate the gradient (i.e., the sensitivity) of an objective function with respect to thousands to millions of parameters. This approach results in creative computational designs of electromagnetic structures that human designers could not have conceived, yet they yield better optical performance. After gaining key insights from the fundamental limits and building our Inverse Electromagnetic Design software, we finally attempt to solve the challenges in enabling HAMR and the future supply of digital data storage hardware. In 2014, the hard disk industry spent ~$200 million on R&D, but poor optical and thermal performance of the metallic nano-transducer continues to prevent a commercial HAMR product. Via our design process, we computationally generated designs for the nano-focusing system that meet specifications for higher data density, lower adjacent track interference, lower laser power requirements and, most notably, lower self-heating of the crucial metallic nano-antenna. We believe that computational design will be a crucial component in commercial HAMR as well as many other commercially significant applications of micro- and nano-optics. If successful in commercializing HAMR, the hard disk industry may sell 1 billion HDDs per year by 2025, with an average of 6 semiconductor diode lasers and 6 optical chips per drive. The key players will become the largest manufacturers of integrated optical chips and nano-antennas in the world. This industry will perform millions of single-mode laser alignments per day. (Abstract shortened by UMI.)
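
    Schematically, the design loop described above can be sketched as follows; the solver calls are toy stand-ins for a forward and an adjoint electromagnetic simulation, and the figure of merit is an arbitrary example, so this is an illustration of the adjoint-gradient idea rather than the dissertation's actual optimizer.

      # Generic adjoint-style shape-optimization loop (schematic): one forward
      # solve and one adjoint solve per iteration give the gradient of the
      # figure of merit with respect to *every* design parameter at once.
      import numpy as np

      def forward_solve(params):
          """Placeholder forward simulation returning 'fields'."""
          return np.sin(params)                    # stand-in for a Maxwell solve

      def adjoint_gradient(params, fields):
          """Placeholder adjoint solve returning dFOM/dparams for all parameters."""
          return 2.0 * fields * np.cos(params)     # exact gradient of the toy FOM

      def figure_of_merit(fields):
          return float(np.sum(fields ** 2))        # toy objective to maximize

      def optimize(n_params=10_000, steps=200, lr=0.05, seed=1):
          params = np.random.default_rng(seed).uniform(0.0, np.pi, n_params)
          for _ in range(steps):
              fields = forward_solve(params)                 # 1 forward solve
              grad = adjoint_gradient(params, fields)        # 1 adjoint solve
              params += lr * grad                            # gradient ascent step
          return params, figure_of_merit(forward_solve(params))

      if __name__ == "__main__":
          _, fom = optimize()
          print(f"final figure of merit: {fom:.1f}")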

  17. Effect of bioactive glass-containing resin composite on dentin remineralization.

    PubMed

    Lee, Myoung Geun; Jang, Ji-Hyun; Ferracane, Jack L; Davis, Harry; Bae, Han Eul; Choi, Dongseok; Kim, Duck-Su

    2018-05-25

    The purpose of this study was to evaluate the effect of bioactive glass (BAG)-containing composite on dentin remineralization. Sixty-six dentin disks of 3 mm thickness were prepared from thirty-three bovine incisors. The following six experimental groups were prepared according to the type of composite (control and experimental) and storage solution (simulated body fluid [SBF] and phosphate-buffered saline [PBS]): 1 (undemineralized); 2 (demineralized); 3 (demineralized with control in SBF); 4 (demineralized with control in PBS); 5 (demineralized with experimental composite in SBF); and 6 (demineralized with experimental composite in PBS). BAG65S (65% Si, 31% Ca, and 4% P) was prepared via the sol-gel method. The control composite was made with a 50:50 Bis-GMA:TEGDMA resin matrix, 57 wt% strontium glass, and 15 wt% aerosol silica. The experimental composite had the same resin and filler, but with 15 wt% BAG65S replacing the aerosol silica. For groups 3-6, composite disks (20 × 10 × 2 mm) were prepared and approximated to the dentin disks and stored in PBS or SBF for 2 weeks. Micro-hardness measurements, attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR), and field-emission scanning electron microscopy (FE-SEM) were performed. The experimental BAG-containing composite significantly increased the micro-hardness of the adjacent demineralized dentin. ATR-FTIR revealed calcium phosphate peaks on the surface of the groups which used the experimental composite. FE-SEM revealed surface deposits partially occluding the dentin surface. No significant difference was found between SBF and PBS storage. BAG-containing composites placed in close proximity can partially remineralize adjacent demineralized dentin. Copyright © 2018. Published by Elsevier Ltd.

  18. Effect of CO2 and Nd:YAG Lasers on Shear Bond Strength of Resin Cement to Zirconia Ceramic.

    PubMed

    Kasraei, Shahin; Rezaei-Soufi, Loghman; Yarmohamadi, Ebrahim; Shabani, Amanj

    2015-09-01

    Because of poor bond between resin cement and zirconia ceramics, laser surface treatments have been suggested to improve adhesion. The present study evaluated the effect of CO2 and Nd:YAG lasers on the shear bond strength (SBS) of resin cement to zirconia ceramic. Ninety zirconia disks (6×2 mm) were randomly divided into six groups of 15. In the control group, no surface treatment was used. In the test groups, laser surface treatment was accomplished using CO2 and Nd:YAG lasers, respectively (groups two and three). Composite resin disks (3×2 mm) were fabricated and cemented to zirconia disks with self-etch resin cement and stored in distilled water for 24 hours. In the test groups four-six, the samples were prepared as in groups one-three and then thermocycled and stored in distilled water for six months. The SBS tests were performed (strain rate of 0.5 mm/min). The fracture modes were observed via stereomicroscopy. Data were analyzed with one and two-way ANOVA, independent t and Tukey's tests. The SBS values of Nd:YAG group (18.95±3.46MPa) was significantly higher than that of the CO2 group (14.00±1.96MPa), but lower than that of controls (23.35±3.12MPa). After thermocycling and six months of water storage, the SBS of the untreated group (1.80±1.23 MPa) was significantly lower than that of the laser groups. In groups stored for 24 hours, 60% of the failures were adhesive; however, after thermocycling and six months of water storage, 100% of failures were adhesive. Bonding durability of resin cement to zirconia improved with CO2 and Nd:YAG laser surface treatment of zirconia ceramic.

  19. Hydrocarbon Emission Rings in Protoplanetary Disks Induced by Dust Evolution

    NASA Astrophysics Data System (ADS)

    Bergin, Edwin A.; Du, Fujun; Cleeves, L. Ilsedore; Blake, G. A.; Schwarz, K.; Visser, R.; Zhang, K.

    2016-11-01

    We report observations of resolved C2H emission rings within the gas-rich protoplanetary disks of TW Hya and DM Tau using the Atacama Large Millimeter Array. In each case the emission ring is found to arise at the edge of the observable disk of millimeter-sized grains (pebbles) traced by submillimeter-wave continuum emission. In addition, we detect a C3H2 emission ring with an identical spatial distribution to C2H in the TW Hya disk. This suggests that these are hydrocarbon rings (I.e., not limited to C2H). Using a detailed thermo-chemical model we show that reproducing the emission from C2H requires a strong UV field and C/O > 1 in the upper disk atmosphere and outer disk, beyond the edge of the pebble disk. This naturally arises in a disk where the ice-coated dust mass is spatially stratified due to the combined effects of coagulation, gravitational settling and drift. This stratification causes the disk surface and outer disk to have a greater permeability to UV photons. Furthermore the concentration of ices that transport key volatile carriers of oxygen and carbon in the midplane, along with photochemical erosion of CO, leads to an elemental C/O ratio that exceeds unity in the UV-dominated disk. Thus the motions of the grains, and not the gas, lead to a rich hydrocarbon chemistry in disk surface layers and in the outer disk midplane.

  20. High-resolution 25 μm Imaging of the Disks around Herbig Ae/Be Stars

    NASA Astrophysics Data System (ADS)

    Honda, M.; Maaskant, K.; Okamoto, Y. K.; Kataza, H.; Yamashita, T.; Miyata, T.; Sako, S.; Fujiyoshi, T.; Sakon, I.; Fujiwara, H.; Kamizuka, T.; Mulders, G. D.; Lopez-Rodriguez, E.; Packham, C.; Onaka, T.

    2015-05-01

    We imaged circumstellar disks around 22 Herbig Ae/Be stars at 25 μm using Subaru/COMICS and Gemini/T-ReCS. Our sample consists of an equal number of objects from each of the two categories defined by Meeus et al.; 11 group I (flaring disk) and II (flat disk) sources. We find that group I sources tend to show more extended emission than group II sources. Previous studies have shown that the continuous disk is difficult to resolve with 8 m class telescopes in the Q band due to the strong emission from the unresolved innermost region of the disk. This indicates that the resolved Q-band sources require a hole or gap in the disk material distribution to suppress the contribution from the innermost region of the disk. As many group I sources are resolved at 25 μm, we suggest that many, but not all, group I Herbig Ae/Be disks have a hole or gap and are (pre-)transitional disks. On the other hand, the unresolved nature of many group II sources at 25 μm supports the idea that group II disks have a continuous flat disk geometry. It has been inferred that group I disks may evolve into group II through the settling of dust grains into the mid-plane of the protoplanetary disk. However, considering the growing evidence for the presence of a hole or gap in the disk of group I sources, such an evolutionary scenario is unlikely. The difference between groups I and II may reflect different evolutionary pathways of protoplanetary disks. Based on data collected at the Subaru Telescope, via the time exchange program between Subaru and the Gemini Observatory. The Subaru Telescope is operated by the National Astronomical Observatory of Japan.

  1. The Transitional Protoplanetary Disk Frequency as a Function of Age: Disk Evolution in the Coronet Cluster, Taurus, and Other 1--8 Myr-old Regions

    NASA Technical Reports Server (NTRS)

    Currie, Thayne; Sicilia-Aguilar, Auora

    2011-01-01

    We present Spitzer 3.6-24 micron photometry and spectroscopy for stars in the 1-3 Myr-old Coronet Cluster, expanding upon the survey of Sicilia-Aguilar et al. (2008). Using sophisticated radiative transfer models, we analyze these new data and those from Sicilia-Aguilar et al. (2008) to identify disks with evidence for substantial dust evolution consistent with disk clearing: transitional disks. We then analyze data in Taurus and other young clusters -- IC 348, NGC 2362, and eta Cha -- to constrain the transitional disk frequency as a function of time. Our analysis confirms previous results finding evidence for two types of transitional disks -- those with inner holes and those that are homologously depleted. The percentage of disks in the transitional phase increases from approx. 15-20% at 1-2 Myr to > 50% at 5-8 Myr; the mean transitional disk lifetime is closer to approx. 1 Myr than 0.1-0.5 Myr, consistent with previous studies by Currie et al. (2009) and Sicilia-Aguilar et al. (2009). In the Coronet Cluster and IC 348, transitional disks are more numerous for very low-mass M3-M6 stars than for more massive K5-M2 stars, while Taurus lacks a strong spectral type-dependent frequency. Assuming standard values for the gas-to-dust ratio and other disk properties, the lower limit for the masses of optically-thick primordial disks is Mdisk approx. 0.001-0.003 M*. We find that single color-color diagrams do not by themselves uniquely identify transitional disks or primordial disks. Full SED modeling is required to accurately assess disk evolution for individual sources and inform statistical estimates of the transitional disk population in large samples using mid-IR colors.

  2. The Transitional Protoplanetary Disk Frequency as a Function of Age: Disk Evolution In the Coronet Cluster, Taurus, and Other 1-8 Myr Old Regions

    NASA Astrophysics Data System (ADS)

    Currie, Thayne; Sicilia-Aguilar, Aurora

    2011-05-01

    We present Spitzer 3.6-24 μm photometry and spectroscopy for stars in the 1-3 Myr old Coronet Cluster, expanding upon the survey of Sicilia-Aguilar et al. Using sophisticated radiative transfer models, we analyze these new data and those from Sicilia-Aguilar et al. to identify disks with evidence for substantial dust evolution consistent with disk clearing: transitional disks. We then analyze data in Taurus and other young clusters—IC 348, NGC 2362, and η Cha—to constrain the transitional disk frequency as a function of time. Our analysis confirms previous results finding evidence for two types of transitional disks—those with inner holes and those that are homologously depleted. The percentage of disks in the transitional phase increases from ~15%-20% at 1-2 Myr to ≥50% at 5-8 Myr; the mean transitional disk lifetime is closer to ~1 Myr than 0.1-0.5 Myr, consistent with previous studies by Currie et al. and Sicilia-Aguilar et al. In the Coronet Cluster and IC 348, transitional disks are more numerous for very low mass M3-M6 stars than for more massive K5-M2 stars, while Taurus lacks a strong spectral-type-dependent frequency. Assuming standard values for the gas-to-dust ratio and other disk properties, the lower limit for the masses of optically thick primordial disks is M_disk ≈ 0.001-0.003 M_*. We find that single color-color diagrams do not by themselves uniquely identify transitional disks or primordial disks. Full spectral energy distribution modeling is required to accurately assess disk evolution for individual sources and inform statistical estimates of the transitional disk population in large samples using mid-IR colors.

  3. Apparatus for controlling fluid flow in a conduit wall

    DOEpatents

    Glass, S. Jill; Nicolaysen, Scott D.; Beauchamp, Edwin K.

    2003-05-13

    A frangible rupture disk and mounting apparatus for use in blocking fluid flow, generally in a fluid conducting conduit such as a well casing, a well tubing string or other conduits within subterranean boreholes. The disk can also be utilized in above-surface pipes or tanks where temporary and controllable fluid blockage is required. The frangible rupture disk is made from a pre-stressed glass with controllable rupture properties wherein the strength distribution has a standard deviation less than approximately 5% from the mean strength. The frangible rupture disk has controllable operating pressures and rupture pressures.

  4. Variable Dynamics in the Inner Disk of HD 135344B Revealed with Multi-epoch Scattered Light Imaging

    NASA Astrophysics Data System (ADS)

    Stolker, Tomas; Sitko, Mike; Lazareff, Bernard; Benisty, Myriam; Dominik, Carsten; Waters, Rens; Min, Michiel; Perez, Sebastian; Milli, Julien; Garufi, Antonio; de Boer, Jozua; Ginski, Christian; Kraus, Stefan; Berger, Jean-Philippe; Avenhaus, Henning

    2017-11-01

    We present multi-epoch Very Large Telescope/Spectro-Polarimetric High-contrast Exoplanet REsearch (VLT/SPHERE) observations of the protoplanetary disk around HD 135344B (SAO 206462). The J-band scattered light imagery reveals, with high spatial resolution (˜41 mas, 6.4 au), the disk surface beyond ˜20 au. Temporal variations are identified in the azimuthal brightness distributions of all epochs, presumably related to the asymmetrically shading dust distribution in the inner disk. These shadows manifest themselves as narrow lanes, cast by localized density enhancements, and broader features which possibly trace the larger scale dynamics of the inner disk. We acquired visible and near-infrared photometry which shows variations up to 10% in the JHK bands, possibly correlated with the presence of the shadows. Analysis of archival Very Large Telescope Interferometer/Precision Integrated-Optics Near-infrared Imaging ExpeRiment (VLTI/PIONIER) H-band visibilities constrains the orientation of the inner disk to i = 18.2° (+3.4/-4.1) and PA = 57.3° ± 5.7°, consistent with an alignment with the outer disk or a minor disk warp of several degrees. The latter scenario could explain the broad, quasi-stationary shadowing in the north-northwest direction in case the inclination of the outer disk is slightly larger. The correlation between the shadowing and the near-infrared excess is quantified with a grid of radiative transfer models. The variability of the scattered light contrast requires extended variations in the inner disk atmosphere (H/r ≲ 0.2). Possible mechanisms that may cause asymmetric variations in the optical depth (Δτ ≲ 1) through the atmosphere of the inner disk include turbulent fluctuations, planetesimal collisions, or a dusty disk wind, possibly enhanced by a minor disk warp. A fine temporal sampling is required to follow day-to-day changes of the shadow patterns which may be a face-on variant of the UX Orionis phenomenon. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 087.C-0702(A,B), 087.C-0458(B,C), 087.C-0703(B), 088.C-0670(B), 088.D-0185(A), 088.C-0763(D), 089.C-0211(A), 091.C-0570(A), 095.C-0273(A), 097.C-0885(A), 097.C-0702(A), and 297.C-5023(A).

  5. Damming the genomic data flood using a comprehensive analysis and storage data structure

    PubMed Central

    Bouffard, Marc; Phillips, Michael S.; Brown, Andrew M.K.; Marsh, Sharon; Tardif, Jean-Claude; van Rooij, Tibor

    2010-01-01

    Data generation, driven by rapid advances in genomic technologies, is fast outpacing our analysis capabilities. Faced with this flood of data, more hardware and software resources are added to accommodate data sets whose structure has not specifically been designed for analysis. This leads to unnecessarily lengthy processing times and excessive data handling and storage costs. Current efforts to address this have centered on developing new indexing schemas and analysis algorithms, whereas the root of the problem lies in the format of the data itself. We have developed a new data structure for storing and analyzing genotype and phenotype data. By leveraging data normalization techniques, database management system capabilities and the use of a novel multi-table, multidimensional database structure we have eliminated the following: (i) unnecessarily large data set size due to high levels of redundancy, (ii) sequential access to these data sets and (iii) common bottlenecks in analysis times. The resulting novel data structure horizontally divides the data to circumvent traditional problems associated with the use of databases for very large genomic data sets. The resulting data set required 86% less disk space and performed analytical calculations 6248 times faster compared to a standard approach without any loss of information. Database URL: http://castor.pharmacogenomics.ca PMID:21159730
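
    The horizontal-division idea can be illustrated with a toy per-chromosome partitioning; the schema below is an assumption made for illustration and is not the authors' actual multi-table, multidimensional data structure.

      # Toy illustration of horizontally dividing genotype data: one table per
      # chromosome, so a regional query touches only the relevant partition
      # instead of scanning one monolithic table. Schema and fields are assumed.
      import sqlite3

      def create_partitions(conn, chromosomes):
          for chrom in chromosomes:
              conn.execute(
                  f"CREATE TABLE IF NOT EXISTS genotypes_{chrom} "
                  "(sample_id TEXT, position INTEGER, genotype TEXT)")
              conn.execute(
                  f"CREATE INDEX IF NOT EXISTS idx_{chrom}_pos "
                  f"ON genotypes_{chrom}(position)")

      def query_region(conn, chrom, start, end):
          cur = conn.execute(
              f"SELECT sample_id, position, genotype FROM genotypes_{chrom} "
              "WHERE position BETWEEN ? AND ?", (start, end))
          return cur.fetchall()

      if __name__ == "__main__":
          conn = sqlite3.connect(":memory:")
          create_partitions(conn, ["chr1", "chr2"])
          conn.execute("INSERT INTO genotypes_chr1 VALUES ('S1', 12345, '0/1')")
          print(query_region(conn, "chr1", 10000, 20000))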

  6. Magnetic bearings for a high-performance optical disk buffer

    NASA Technical Reports Server (NTRS)

    Hockney, Richard; Hawkey, Timothy

    1993-01-01

    An optical disk buffer concept can provide gigabit-per-second data rates and terabit capacity through the use of arrays of solid state lasers applied to a stack of erasable/reusable optical disks. The RCA optical disk buffer has evoked interest from NASA for space applications. The porous graphite air bearings in the rotary spindle, as well as those used in the linear translation of the read/write head, would be replaced by magnetic bearings or mechanical (ball or roller) bearings. Based upon past experience, roller or ball bearings for the translation stages are not feasible. Experience with ball-bearing spindles, although limited, has also been unsatisfactory. Magnetic bearings, however, appear ideally suited for both applications. The use of magnetic bearings is advantageous in the optical disk buffer because of the absence of physical contact between the rotating and stationary members. This frictionless operation leads to extended life and reduced drag. The manufacturing tolerances required to fabricate magnetic bearings would also be relaxed relative to those required for precision ball and gas bearings. Since magnetic bearings require no lubricant, they are inherently compatible with a space (vacuum) environment. Magnetic bearings also allow the dynamics of the rotor/bearing system to be altered through the use of active control. This provides the potential for reduced vibration, extended regions of stable operation, and more precise control of position.

  7. The Evolution of the Accretion Disk Around 4U 1820-30 During a Superburst

    NASA Technical Reports Server (NTRS)

    Ballantyne, D. R.; Strohmayer, T. E.

    2004-01-01

    Accretion from a disk onto a collapsed, relativistic star - a neutron star or black hole - is the mechanism widely believed to be responsible for the emission from compact X-ray binaries. Because of the extreme spatial resolution required, it is not yet possible to directly observe the evolution or dynamics of the inner parts of the accretion disk where general relativistic effects are dominant. Here, we use the bright X-ray emission from a superburst on the surface of the neutron star 4U 1820-30 as a spotlight to illuminate the disk surface. The X-rays cause iron atoms in the disk to fluoresce, allowing a determination of the ionization state, covering factor and inner radius of the disk over the course of the burst. The time-resolved spectral fitting shows that the inner region of the disk is disrupted by the burst, possibly being heated into a thicker, more tenuous flow, before recovering its previous form in approximately 1000 s. This marks the first instance in which the evolution of the inner regions of an accretion disk has been observed in real time.

  8. Radial Surface Density Profiles of Gas and Dust in the Debris Disk Around 49 Ceti

    NASA Technical Reports Server (NTRS)

    Hughes, A. Meredith; Lieman-Sifry, Jesse; Flaherty, Kevin M.; Daley, Cail M.; Roberge, Aki; Kospal, Agnes; Moor, Attila; Kamp, Inga; Wilner, David J.; Andrews, Sean M.

    2017-01-01

    We present approximately 0.″4 resolution images of CO(3-2) and associated continuum emission from the gas-bearing debris disk around the nearby A star 49 Ceti, observed with the Atacama Large Millimeter/Submillimeter Array (ALMA). We analyze the ALMA visibilities in tandem with the broadband spectral energy distribution to measure the radial surface density profiles of dust and gas emission from the system. The dust surface density decreases with radius between approximately 100 and 310 au, with a marginally significant enhancement of surface density at a radius of approximately 110 au. The SED requires an inner disk of small grains in addition to the outer disk of larger grains resolved by ALMA. The gas disk exhibits a surface density profile that increases with radius, contrary to most previous spatially resolved observations of circumstellar gas disks. While approximately 80% of the CO flux is well described by an axisymmetric power-law disk in Keplerian rotation about the central star, residuals at approximately 20% of the peak flux exhibit a departure from axisymmetry suggestive of spiral arms or a warp in the gas disk. The radial extent of the gas disk (approx. 220 au) is smaller than that of the dust disk (approx. 300 au), consistent with recent observations of other gas-bearing debris disks. While there are so far only three broad debris disks with well-characterized radial dust profiles at millimeter wavelengths, 49 Ceti's disk shows a markedly different structure from two radially resolved gas-poor debris disks, implying that the physical processes generating and sculpting the gas and dust are fundamentally different.

  9. MICE data handling on the Grid

    NASA Astrophysics Data System (ADS)

    Martyniak, J.; Mice Collaboration

    2014-06-01

    The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino factory or Muon Collider. The experiment is currently under construction at the ISIS synchrotron at the Rutherford Appleton Laboratory (RAL), UK. In this paper we present a system - the Raw Data Mover, which allows us to store and distribute MICE raw data - and a framework for offline reconstruction and data management. The aim of the Raw Data Mover is to upload raw data files onto a safe tape storage as soon as the data have been written out by the DAQ system and marked as ready to be uploaded. Internal integrity of the files is verified and they are uploaded to the RAL Tier-1 Castor Storage Element (SE) and placed on two tapes for redundancy. We also make another copy at a separate disk-based SE at this stage to make it easier for users to access data quickly. Both copies are check-summed and the replicas are registered with an instance of the LCG File Catalog (LFC). On success a record with basic file properties is added to the MICE Metadata DB. The reconstruction process is triggered by new raw data records filled in by the mover system described above. Off-line reconstruction jobs for new raw files are submitted to RAL Tier-1 and the output is stored on tape. Batch reprocessing is done at multiple MICE enabled Grid sites and output files are shipped to central tape or disk storage at RAL using a custom File Transfer Controller.
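
    The mover's bookkeeping (verify integrity, copy, register) reduces to steps like the following; the checksum choice, the local copy standing in for the Castor/disk SE upload, and the in-memory catalog are placeholder assumptions, not the MICE tools themselves.

      # Sketch of a raw-data mover's bookkeeping: checksum the file, copy it,
      # re-checksum, and register the result in a catalog. Storage and catalog
      # calls are placeholders.
      import hashlib
      import shutil
      from pathlib import Path

      def file_checksum(path, algo="sha256", bufsize=1 << 20):
          """Checksum a raw file before and after transfer to verify integrity."""
          h = hashlib.new(algo)
          with open(path, "rb") as fh:
              for chunk in iter(lambda: fh.read(bufsize), b""):
                  h.update(chunk)
          return h.hexdigest()

      def move_raw_file(src, staging_dir, catalog):
          src = Path(src)
          before = file_checksum(src)
          dst = Path(staging_dir) / src.name
          shutil.copy2(src, dst)                       # stand-in for the SE upload
          after = file_checksum(dst)
          if before != after:
              raise IOError(f"checksum mismatch for {src.name}")
          catalog[src.name] = {"checksum": after, "size": dst.stat().st_size}
          return dst

      if __name__ == "__main__":
          import tempfile
          raw = tempfile.NamedTemporaryFile(suffix=".raw", delete=False)
          raw.write(b"fake DAQ payload")
          raw.close()
          catalog = {}
          staged = move_raw_file(raw.name, tempfile.mkdtemp(), catalog)
          print(staged, catalog)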

  10. Concepts of flywheels for energy storage using autostable high-T(sub c) superconducting magnetic bearings

    NASA Technical Reports Server (NTRS)

    Bornemann, Hans J.; Zabka, R.; Boegler, P.; Urban, C.; Rietschel, H.

    1994-01-01

    A flywheel for energy storage using autostable high-T(sub c) superconducting magnetic bearings has been built. The rotating disk has a total weight of 2.8 kg. The maximum speed is 9240 rpm. A process that allows accelerated, reliable and reproducible production of melt-textured superconducting material used for the bearings has been developed. In order to define optimum configurations for radial and axial bearings, interaction forces in three dimensions and vertical and horizontal stiffness have been measured between superconductors and permanent magnets in different geometries and various shapes. Static as well as dynamic measurements have been performed. Results are being reported and compared to theoretical models.

  11. Some emerging applications of lasers

    NASA Astrophysics Data System (ADS)

    Christensen, C. P.

    1982-10-01

    Applications of lasers in photochemistry, advanced instrumentation, and information storage are discussed. Laser microchemistry offers a number of new methods for altering the morphology of a solid surface with high spatial resolution. Recent experiments in material deposition, material removal, and alloying and doping are reviewed. A basic optical disk storage system is described and the problems faced by this application are discussed, in particular those pertaining to recording media. An advanced erasable system based on the magnetooptic effect is described. Applications of lasers for remote sensing are discussed, including various lidar systems, the use of laser-induced fluorescence for oil spill characterization and uranium exploration, and the use of differential absorption for detection of atmospheric constituents, temperature, and humidity.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Currie, Thayne; Sicilia-Aguilar, Aurora

    We present Spitzer 3.6-24 μm photometry and spectroscopy for stars in the 1-3 Myr old Coronet Cluster, expanding upon the survey of Sicilia-Aguilar et al. Using sophisticated radiative transfer models, we analyze these new data and those from Sicilia-Aguilar et al. to identify disks with evidence for substantial dust evolution consistent with disk clearing: transitional disks. We then analyze data in Taurus and other young clusters-IC 348, NGC 2362, and η Cha-to constrain the transitional disk frequency as a function of time. Our analysis confirms previous results finding evidence for two types of transitional disks-those with inner holes and those that are homologously depleted. The percentage of disks in the transitional phase increases from ∼15%-20% at 1-2 Myr to ≥50% at 5-8 Myr; the mean transitional disk lifetime is closer to ∼1 Myr than 0.1-0.5 Myr, consistent with previous studies by Currie et al. and Sicilia-Aguilar et al. In the Coronet Cluster and IC 348, transitional disks are more numerous for very low mass M3-M6 stars than for more massive K5-M2 stars, while Taurus lacks a strong spectral-type-dependent frequency. Assuming standard values for the gas-to-dust ratio and other disk properties, the lower limit for the masses of optically thick primordial disks is M_disk ∼ 0.001-0.003 M_*. We find that single color-color diagrams do not by themselves uniquely identify transitional disks or primordial disks. Full spectral energy distribution modeling is required to accurately assess disk evolution for individual sources and inform statistical estimates of the transitional disk population in large samples using mid-IR colors.

  13. Thin Disks Gone MAD: Magnetically Arrested Accretion in the Thin Regime

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; McKinney, Jonathan C.; Reynolds, Christopher S.

    2015-01-01

    The collection and concentration of surrounding large scale magnetic fields by black hole accretion disks may be required for production of powerful, spin driven jets. So far, accretion disks have not been shown to grow sufficient poloidal flux via the turbulent dynamo alone to produce such persistent jets. Also, there have been conflicting answers as to how, or even if, an accretion disk can collect enough magnetic flux from the ambient environment. Extending prior numerical studies of magnetically arrested disks (MAD) in the thick (angular height H/R ~ 1) and intermediate (H/R ~ 0.2-0.6) accretion regimes, we present our latest results from fully general relativistic MHD simulations of the thinnest BH (H/R ~ 0.1) accretion disks to date exhibiting the MAD mode of accretion. We explore the significant deviations of this accretion mode from the standard picture of thin, MRI-driven accretion, and demonstrate the accumulation of large-scale magnetic flux.

  14. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE PAGES

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.
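    As an illustration of the object abstraction described above, the sketch below exposes put/get calls that hide which tier holds an object's bytes, plus a demote operation for moving data to a colder layer. Tier names, mount points and the placement rule are hypothetical; the paper's implementation maps VPIC and HACC I/O onto a richer object model than this.

    ```python
    import shutil
    from dataclasses import dataclass
    from pathlib import Path

    # Hypothetical tiers and mount points, used only to illustrate the idea.
    TIERS = {
        "burst_buffer": Path("/mnt/nvme/objects"),    # fastest, smallest tier
        "parallel_fs": Path("/mnt/lustre/objects"),   # large, slower tier
        "campaign": Path("/mnt/campaign/objects"),    # coldest tier
    }

    @dataclass
    class DataObject:
        name: str
        tier: str

        @property
        def path(self) -> Path:
            return TIERS[self.tier] / self.name

    class ObjectStore:
        """put()/get() hide which layer of the hierarchy holds an object's bytes."""

        def __init__(self):
            self.index = {}  # object name -> DataObject

        def put(self, name: str, payload: bytes, hot: bool = True) -> DataObject:
            obj = DataObject(name, "burst_buffer" if hot else "parallel_fs")
            obj.path.parent.mkdir(parents=True, exist_ok=True)
            obj.path.write_bytes(payload)
            self.index[name] = obj
            return obj

        def demote(self, name: str, tier: str = "campaign") -> None:
            """Move an object to a colder tier; its name and interface stay the same."""
            obj = self.index[name]
            dest = TIERS[tier] / name
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(obj.path), str(dest))
            obj.tier = tier

        def get(self, name: str) -> bytes:
            return self.index[name].path.read_bytes()
    ```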

  15. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.

  16. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of generation and analysis. In particular, the rate of DNA sequencing is significantly outstripping the rate of increase in disk storage capacity and may eventually exceed it. It is essential to develop algorithms that handle large data sets via better memory management. This article presents SeqCompress, a DNA sequence compression algorithm that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model together with arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
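    SeqCompress itself couples a statistical model with arithmetic coding; its details are not reproduced here. As a hedged illustration of the underlying idea, the sketch below estimates the size an arithmetic coder driven by a simple adaptive order-k context model would reach, by accumulating -log2 of each base's modelled probability, and compares it with the raw and 2-bit-packed sizes.

    ```python
    import math
    from collections import Counter, defaultdict

    # Not the SeqCompress implementation: a toy estimate of the output size an
    # arithmetic coder driven by an adaptive order-k context model would reach,
    # obtained by summing -log2 of each base's modelled probability.
    def model_bits(seq: str, k: int = 2) -> float:
        context_counts = defaultdict(Counter)
        total_bits = 0.0
        for i, base in enumerate(seq):
            ctx = seq[max(0, i - k):i]
            counts = context_counts[ctx]
            # Laplace-smoothed adaptive probability over the A/C/G/T alphabet
            p = (counts[base] + 1) / (sum(counts.values()) + 4)
            total_bits += -math.log2(p)
            counts[base] += 1              # update the model as we "encode"
        return total_bits

    seq = "ACGT" * 64 + "AAAAACCCCCGGGGGTTTTT"
    print(f"raw: {8 * len(seq)} bits, 2-bit packed: {2 * len(seq)} bits, "
          f"order-2 model: {model_bits(seq):.0f} bits")
    ```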

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergin, Edwin A.; Du, Fujun; Schwarz, K.

    We report observations of resolved C2H emission rings within the gas-rich protoplanetary disks of TW Hya and DM Tau using the Atacama Large Millimeter Array. In each case the emission ring is found to arise at the edge of the observable disk of millimeter-sized grains (pebbles) traced by submillimeter-wave continuum emission. In addition, we detect a C3H2 emission ring with an identical spatial distribution to C2H in the TW Hya disk. This suggests that these are hydrocarbon rings (i.e., not limited to C2H). Using a detailed thermo-chemical model we show that reproducing the emission from C2H requires a strong UV field and C/O > 1 in the upper disk atmosphere and outer disk, beyond the edge of the pebble disk. This naturally arises in a disk where the ice-coated dust mass is spatially stratified due to the combined effects of coagulation, gravitational settling and drift. This stratification causes the disk surface and outer disk to have a greater permeability to UV photons. Furthermore, the concentration of ices that transport key volatile carriers of oxygen and carbon in the midplane, along with photochemical erosion of CO, leads to an elemental C/O ratio that exceeds unity in the UV-dominated disk. Thus the motions of the grains, and not the gas, lead to a rich hydrocarbon chemistry in disk surface layers and in the outer disk midplane.

  18. Use of Optical Storage Devices as Shared Resources in Local Area Networks

    DTIC Science & Technology

    1989-09-01

    Abstract not available; only index fragments were recovered. These cover service calls for the MS-DOS CD-ROM Extensions and their translation to DOS primitives, MS-DOS device drivers, RAM usage for various LAN configurations, and the primitive instruction groups (keyboard, video, disk, serial) through which instructions are directed to I/O devices.

  19. An overview of the education and training component of RICIS

    NASA Technical Reports Server (NTRS)

    Freedman, Glenn B.

    1987-01-01

    Research in education and training under the RICIS (Research Institute for Computing and Information Systems) program focuses on means to disseminate knowledge, skills, and technological advances rapidly, accurately, and effectively. Areas of study include artificial intelligence, hypermedia and full-text retrieval strategies, the use of mass storage and retrieval options such as CD-ROM and laser disks, and interactive video and interactive media presentations.

  20. Recent evolution of the offline computing model of the NOvA experiment

    DOE PAGES

    Habig, Alec; Norman, A.; Group, Craig

    2015-12-23

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. In addition, the current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.

  1. Recent Evolution of the Offline Computing Model of the NOvA Experiment

    NASA Astrophysics Data System (ADS)

    Habig, Alec; Norman, A.

    2015-12-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. The current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.

  2. A two-stage heating scheme for heat assisted magnetic recording

    NASA Astrophysics Data System (ADS)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in² for next-generation magnetic storage. A near field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the NFT heating process into two individual heating stages, from an optical waveguide and an NFT, respectively. As the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface to heat it up to a peak temperature somewhat lower than the Curie temperature of the magnetic material. Then, the NFT works as the second heating stage, further heating a smaller area inside the waveguide-heated region to reach the Curie point. The energy applied to the NFT in the second heating stage is reduced compared with a typical single-stage NFT heating system. With the reduced thermal load provided by the two-stage heating scheme, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  3. 40 CFR 63.1517 - Records

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operating parameter value and corrective action taken. (6) For each continuous monitoring system, records... operator may retain records on microfilm, computer disks, magnetic tape, or microfiche; and (3) The owner or operator may report required information on paper or on a labeled computer disk using commonly...

  4. The evolution of a dead zone in a circumplanetary disk

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Martin, Rebecca; Zhu, Zhaohuan

    2018-01-01

    Studying the evolution of a circumplanetary disk can help us to understand the formation of Jupiter and the four Galilean satellites. With the grid-based hydrodynamic code, FARGO3D, we simulate the evolution of a circumplanetary disk with a dead zone, a region of low turbulence. Tidal torques from the sun constrain the size of the circumplanetary disk to about 0.4 R_H. The dead zone provides a cold environment for icy satellite formation. However, as material builds up there, the temperature of the dead zone may reach the critical temperature required for the magnetorotational instability to drive turbulence. Part of the dead zone accretes on to the planet in an accretion outburst. We explore possible disk parameters that provide a suitable environment for satellite formation.

  5. Grid data access on widely distributed worker nodes using scalla and SRM

    NASA Astrophysics Data System (ADS)

    Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.

    2008-07-01

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning files), storage policies or a uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing the 350 TB Storage Elements, and our experience of making such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and the approach that makes access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare the solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans and status of development will be explained in the area of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  6. Implementation of an Enterprise Information Portal (EIP) in the Loyola University Health System

    PubMed Central

    Price, Ronald N.; Hernandez, Kim

    2001-01-01

    Loyola University Chicago Stritch School of Medicine and Loyola University Medical Center have long histories in the development of applications to support the institutions' missions of education, research and clinical care. In late 1998, the institutions' application development group undertook an ambitious program to re-architect more than 10 years of legacy application development (30+ core applications) into a unified World Wide Web (WWW) environment. The primary project objectives were to construct an environment that would support the rapid development of n-tier, web-based applications while providing standard methods for user authentication/validation, security/access control and definition of a user's organizational context. The project's efforts resulted in Loyola's Enterprise Information Portal (EIP), which meets the aforementioned objectives. This environment: 1) allows access to other vertical Intranet portals (e.g., electronic medical record, patient satisfaction information and faculty effort); 2) supports end-user desktop customization; and 3) provides a means for standardized application “look and feel.” The portal was constructed utilizing readily available hardware and software. Server hardware consists of multiprocessor (Intel Pentium, 500 MHz) Compaq 6500 servers with one gigabyte of random access memory and 75 gigabytes of hard disk storage. Microsoft SQL Server was selected to house the portal's internal or security data structures. Netscape Enterprise Server was selected for the web server component of the environment and Allaire's ColdFusion was chosen for the access and application tiers. Total cost for the portal environment was less than $40,000. User data storage is accomplished through two Microsoft SQL Servers and an existing SUN Microsystems enterprise server with eight processors and 750 gigabytes of disk storage running the Sybase relational database manager. Total storage capacity for all systems exceeds one terabyte. In the past 12 months, the EIP has supported development of more than 88 applications and is utilized by more than 2,200 users.

  7. Improving Block-level Efficiency with scsi-mq

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caldwell, Blake A

    2015-01-01

    Current generation solid-state storage devices are exposing new bottlenecks in the SCSI and block layers of the Linux kernel, where IO throughput is limited by lock contention, inefficient interrupt handling, and poor memory locality. To address these limitations, the Linux kernel block layer underwent a major rewrite with the blk-mq project to move from a single request queue to a multi-queue model. The Linux SCSI subsystem rework to make use of this new model, known as scsi-mq, has been merged into the Linux kernel and work is underway for dm-multipath support in the upcoming Linux 4.0 kernel. These pieces were necessary to make use of the multi-queue block layer in a Lustre parallel filesystem with high availability requirements. We added support for the 3.18 kernel to Lustre with scsi-mq and dm-multipath patches to evaluate the potential of these efficiency improvements. In this paper we evaluate the block-level performance of scsi-mq with backing storage hardware representative of an HPC-targeted Lustre filesystem. Our findings show that SCSI write request latency is reduced by as much as 13.6%. Additionally, when profiling the CPU usage of our prototype Lustre filesystem, we found that CPU idle time increased by a factor of 7 with Linux 3.18 and blk-mq as compared to a standard 2.6.32 Linux kernel. Our findings demonstrate increased efficiency of the multi-queue block layer even with disk-based caching storage arrays used in existing parallel filesystems.
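    The latency numbers above come from in-kernel measurements of the SCSI layer, which are not reproduced here. As a much cruder, user-space illustration of how block-level write latency can be probed, the sketch below times synchronous 4 KiB write+fsync pairs against a file path of your choosing (the path is hypothetical).

    ```python
    import os
    import statistics
    import time

    # A crude user-space probe, not the paper's in-kernel blk-mq instrumentation:
    # time synchronous 4 KiB write+fsync pairs to get a feel for write latency.
    def write_latency_us(path: str, n: int = 1000, block: int = 4096) -> dict:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        buf = os.urandom(block)
        samples = []
        try:
            for _ in range(n):
                t0 = time.perf_counter()
                os.write(fd, buf)
                os.fsync(fd)               # push the request all the way down
                samples.append((time.perf_counter() - t0) * 1e6)
        finally:
            os.close(fd)
        return {"median_us": statistics.median(samples),
                "p99_us": statistics.quantiles(samples, n=100)[98]}

    # Example (hypothetical path): print(write_latency_us("/tmp/latency_probe.bin"))
    ```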

  8. Stress Measurement by Geometrical Optics

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Rossnagel, S. M.

    1986-01-01

    Fast, simple technique measures stresses in thin films. Sample disk bowed by stress into approximately spherical shape. Reflected image of disk magnified by amount related to curvature and, therefore, stress. Method requires sample substrate, such as cheap microscope cover slide, two mirrors, laser light beam, and screen.

  9. Automotive dual-mode hydrogen generation system

    NASA Astrophysics Data System (ADS)

    Kelly, D. A.

    The automotive dual-mode hydrogen generation system is advocated as a supplementary hydrogen fuel source alongside the current metallic hydride hydrogen storage method for vehicles. The system uses conventional electrolysis cells with low-voltage dc electrical power supplied by two electrical generating sources within the vehicle. Since the automobile engine exhaust manifold(s) are presently an untapped useful source of thermal energy, they can be employed as the heat source for a simple heat engine/generator arrangement. The second, minor electrical generating means consists of multiple miniature air disk generators which are mounted directly under the vehicle's hood and at other convenient locations within the engine compartment. The air disk generators are revolved at a speed proportional to the vehicle's forward speed and do not impose a drag on the vehicle's motion.

  10. Blue laser inorganic write-once media

    NASA Astrophysics Data System (ADS)

    Chen, Bing-Mau; Yeh, Ru-Lin

    2004-09-01

    With the advantages of low cost, portability and compatibility with ROM discs, write-once disks have become the most popular storage media for computer and audio/video applications. In addition, write-once media, like CD-R and DVD-/+R, are used to store permanent or nonalterable information, such as financial transactions, legal documentation, and medical data. Several write-once recording materials, such as TeO [1], TeOPd [2] and Si/Cu [3], have been proposed to realize inorganic write-once media. Moreover, we have proposed an AlSi alloy [4] for the recording layer of write-once media. It showed good recording properties in a DVD system, although its reflectivity is too low for DVD-R disks. In this paper, we report further results in a blue laser system, including the static and dynamic characteristics of the write-once media.

  11. Particle Scattering in the Resonance Regime: Full-Wave Solution for Axisymmetric Particles with Large Aspect Ratios

    NASA Technical Reports Server (NTRS)

    Zuffada, Cinzia; Crisp, David

    1997-01-01

    Reliable descriptions of the optical properties of clouds and aerosols are essential for studies of radiative transfer in planetary atmospheres. The scattering algorithms provide accurate estimates of these properties for spherical particles with a wide range of sizes and refractive indices, but these methods are not valid for non-spherical particles (e.g., ice crystals, mineral dust, and smoke). Even though a host of methods exist for deriving the optical properties of nonspherical particles that are very small or very large compared with the wavelength, only a few methods are valid in the resonance regime, where the particle dimensions are comparable with the wavelength. Most such methods are not ideal for particles with sharp edges or large axial ratios. We explore the utility of an integral equation approach for deriving the single-scattering optical properties of axisymmetric particles with large axial ratios. The accuracy of this technique is shown for spheres of increasing size parameters and an ensemble of randomly oriented prolate spheroids of size parameter equal to 10.079368. In this last case our results are compared with published results obtained with the T-matrix approach. Next we derive cross sections, single-scattering albedos, and phase functions for cylinders, disks, and spheroids of ice with dimensions extending from the Rayleigh to the geometric optics regime. Compared with those for a standard surface integral equation method, the storage requirement and the computer time needed by this method are reduced, thus making it attractive for generating databases to be used in multiple-scattering calculations. Our results show that water ice disks and cylinders are more strongly absorbing than equivalent volume spheres at most infrared wavelengths. The geometry of these particles also affects the angular dependence of the scattering. Disks and columns with maximum linear dimensions larger than the wavelength scatter much more radiation in the forward and backward directions and much less radiation at intermediate phase angles than equivalent volume spheres.

  12. Hydrodynamic turbulence cannot transport angular momentum effectively in astrophysical disks.

    PubMed

    Ji, Hantao; Burin, Michael; Schartman, Ethan; Goodman, Jeremy

    2006-11-16

    The most efficient energy sources known in the Universe are accretion disks. Those around black holes convert 5-40 per cent of rest-mass energy to radiation. Like water circling a drain, inflowing mass must lose angular momentum, presumably by vigorous turbulence in disks, which are essentially inviscid. The origin of the turbulence is unclear. Hot disks of electrically conducting plasma can become turbulent by way of the linear magnetorotational instability. Cool disks, such as the planet-forming disks of protostars, may be too poorly ionized for the magnetorotational instability to occur, and therefore essentially unmagnetized and linearly stable. Nonlinear hydrodynamic instability often occurs in linearly stable flows (for example, pipe flows) at sufficiently large Reynolds numbers. Although planet-forming disks have extreme Reynolds numbers, keplerian rotation enhances their linear hydrodynamic stability, so the question of whether they can be turbulent and thereby transport angular momentum effectively is controversial. Here we report a laboratory experiment, demonstrating that non-magnetic quasi-keplerian flows at Reynolds numbers up to millions are essentially steady. Scaled to accretion disks, rates of angular momentum transport lie far below astrophysical requirements. By ruling out purely hydrodynamic turbulence, our results indirectly support the magnetorotational instability as the likely cause of turbulence, even in cool disks.

  13. WL 17: A Young Embedded Transition Disk

    NASA Astrophysics Data System (ADS)

    Sheehan, Patrick D.; Eisner, Josh A.

    2017-05-01

    We present the highest spatial resolution ALMA observations to date of the Class I protostar WL 17 in the ρ Ophiuchus L1688 molecular cloud complex, which show that it has a 12 au hole in the center of its disk. We consider whether WL 17 is actually a Class II disk being extincted by foreground material, but find that such models do not provide a good fit to the broadband spectral energy distribution (SED) and also require such high extinction that it would presumably arise from dense material close to the source, such as a remnant envelope. Self-consistent models of a disk embedded in a rotating collapsing envelope can nicely reproduce both the ALMA 3 mm observations and the broadband SED of WL 17. This suggests that WL 17 is a disk in the early stages of its formation, and yet even at this young age the inner disk has been depleted. Although there are multiple pathways for such a hole to be created in a disk, if this hole was produced by the formation of planets it could place constraints on the timescale for the growth of planets in protoplanetary disks.

  14. Mo100 to Mo99 Target Cooling Enhancements Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woloshun, Keith Albert; Dale, Gregory E.; Olivas, Eric Richard

    2016-02-16

    Target design requirements changed significantly over the past year to a much higher beam current on larger-diameter disks, with beam impingement on both ends of the target. Scaling from the previous design, this required a significantly higher mass flow rate of helium coolant and also thinner disks. A new Aerzen GM12.4 blower was selected that can deliver up to 400 g/s at 400 psi, compared to about 100 g/s possible with the Tuthill blower previously selected. Further, to accommodate the 42 MeV, 2.7 mA beam on each side of the target, the disk thickness and the coolant gaps were halved to create the current baseline design: 0.5 mm disk thickness (at 29 mm diameter) and 0.25 mm coolant gap. Thermal-hydraulic analysis of this target, presented below for reference, gave very good results, suggesting that the target could be improved with fewer, thicker disks and with disk thickness increasing toward the target center. With the total thickness of Mo100 in the target remaining the same, this reduces the number of coolant gaps. This allows the gap width to be increased, increasing the mass flow in each gap and consequently increasing heat transfer. A preliminary geometry was selected and analyzed with variable disk thickness and wider coolant gaps. Analysis of this target shows that the disk thickness increase near the window was too aggressive and further resizing of the disks is necessary, but it does illustrate the potential improvements that are possible. An experimental and analytical study of diffusers on the target exit has been done; it shows a modest improvement in reducing pressure drop, as summarized below. However, the benefit is not significant, and implementation becomes problematic when disk thickness varies. A bull nose at the entrance does offer significant benefit and is relatively easy to incorporate. A bull nose on both ends is now a feature of the baseline design, and will be a feature of any redesign or enhanced designs that follow.

  15. ICI optical data storage tape

    NASA Technical Reports Server (NTRS)

    Mclean, Robert A.; Duffy, Joseph F.

    1991-01-01

    Optical data storage tape is now a commercial reality. The world's first successful development of a digital optical tape system is complete. This is based on the Creo 1003 optical tape recorder with ICI 1012 write-once optical tape media. Several other optical tape drive development programs are underway, including one using the IBM 3480 style cartridge at LaserTape Systems. In order to understand the significance and potential of this step change in recording technology, it is useful to review the historical progress of optical storage. This has been slow to encroach on magnetic storage, and has not made any serious dent on the world's mountains of paper and microfilm. Some of the reasons for this are the long time needed for applications developers, systems integrators, and end users to take advantage of the potential storage capacity; access time and data transfer rate have traditionally been too slow for high-performance applications; and optical disk media has been expensive compared with magnetic tape. ICI's strategy in response to these concerns was to concentrate its efforts on flexible optical media; in particular optical tape. The manufacturing achievements, media characteristics, and media lifetime of optical media are discussed.

  16. Coronagraphic Imaging of Debris Disks from a High Altitude Balloon Platform

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen; Traub, Wesley; Bryden, Geoffrey; Brugarolas, Paul; Chen, Pin; Guyon, Olivier; Hillenbrand, Lynne; Kasdin, Jeremy; Krist, John; Macintosh, Bruce; hide

    2012-01-01

    Debris disks around nearby stars are tracers of the planet formation process, and they are a key element of our understanding of the formation and evolution of extrasolar planetary systems. With multi-color images of a significant number of disks, we can probe important questions: can we learn about planetary system evolution; what materials are the disks made of; and can they reveal the presence of planets? Most disks are known to exist only through their infrared flux excesses as measured by the Spitzer Space Telescope, and through images measured by Herschel. The brightest, most extended disks have been imaged with HST, and a few, such as Fomalhaut, can be observed using ground-based telescopes. But the number of good images is still very small, and there are none of disks with densities as low as the disk associated with the asteroid belt and Edgeworth-Kuiper belt in our own Solar System. Direct imaging of disks is a major observational challenge, demanding high angular resolution and extremely high dynamic range close to the parent star. The ultimate experiment requires a space-based platform, but demonstrating much of the needed technology, mitigating the technical risks of a space-based coronagraph, and performing valuable measurements of circumstellar debris disks, can be done from a high-altitude balloon platform. In this paper we present a balloon-borne telescope experiment based on the Zodiac II design that would undertake compelling studies of a sample of debris disks.

  17. Coronagraphic Imaging of Debris Disks from a High Altitude Balloon Platform

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen; Traub, Wesley; Bryden, Geoffrey; Brugarolas, Paul; Chen, Pin; Guyon, Olivier; Hillenbrand, Lynne; Krist, John; Macintosh, Bruce; Mawet, Dimitri; hide

    2012-01-01

    Debris disks around nearby stars are tracers of the planet formation process, and they are a key element of our understanding of the formation and evolution of extrasolar planetary systems. With multi-color images of a significant number of disks, we can probe important questions: can we learn about planetary system evolution; what materials are the disks made of; and can they reveal the presence of planets? Most disks are known to exist only through their infrared flux excesses as measured by the Spitzer Space Telescope, and through images measured by Herschel. The brightest, most extended disks have been imaged with HST, and a few, such as Fomalhaut, can be observed using ground-based telescopes. But the number of good images is still very small, and there are none of disks with densities as low as the disk associated with the asteroid belt and Edgeworth-Kuiper belt in our own Solar System. Direct imaging of disks is a major observational challenge, demanding high angular resolution and extremely high dynamic range close to the parent star. The ultimate experiment requires a space-based platform, but demonstrating much of the needed technology, mitigating the technical risks of a space-based coronagraph, and performing valuable measurements of circumstellar debris disks, can be done from a high-altitude balloon platform. In this paper we present a balloon-borne telescope concept based on the Zodiac II design that could undertake compelling studies of a sample of debris disks.

  18. Dynamics of binary-disk interaction. 1: Resonances and disk gap sizes

    NASA Technical Reports Server (NTRS)

    Artymowicz, Pawel; Lubow, Stephen H.

    1994-01-01

    We investigate the gravitational interaction of a generally eccentric binary star system with circumbinary and circumstellar gaseous disks. The disks are assumed to be coplanar with the binary, geometrically thin, and primarily governed by gas pressure and (turbulent) viscosity but not self-gravity. Both ordinary and eccentric Lindblad resonances are primarily responsible for truncating the disks in binaries with arbitrary eccentricity and nonextreme mass ratio. Starting from a smooth disk configuration, after the gravitational field of the binary truncates the disk on the dynamical timescale, a quasi-equilibrium is achieved, in which the resonant and viscous torques balance each other and any changes in the structure of the disk (e.g., due to global viscous evolution) occur slowly, preserving the average size of the gap. We analytically compute the approximate sizes of disks (or disk gaps) as a function of binary mass ratio and eccentricity in this quasi-equilibrium. Comparing the gap sizes with results of direct simulations using smoothed particle hydrodynamics (SPH), we obtain a good agreement. As a by-product of the computations, we verify that standard SPH codes can adequately represent the dynamics of disks with moderate viscosity, Reynolds number R approximately 10(exp 3). For typical viscous disk parameters, and with a denoting the binary semimajor axis, the inner edge location of a circumbinary disk varies from 1.8a to 2.6a with binary eccentricity increasing from 0 to 0.25. For eccentricities 0 less than e less than 0.75, the minimum separation between a component star and the circumbinary disk inner edge is greater than a. Our calculations are relevant, among others, to protobinary stars and the recently discovered T Tau pre-main-sequence binaries. We briefly examine the case of a pre-main-sequence spectroscopic binary GW Ori and conclude that circumbinary disk truncation to the size required by one proposed spectroscopic model cannot be due to Lindblad resonances, even if the disk is nonviscous.

  19. Parallel Readout of Optical Disks

    DTIC Science & Technology

    1992-08-01

    Only garbled abstract fragments were recovered. They describe r(x,y) as the apparent reflectance function of the disk surface including the phase error, state that the illuminating optics should be chosen accordingly, give the photodiode area (474 μm²) and the time required to switch the synapses, and describe a holographic readout arrangement in which a reference beam incident from the right records the hologram, after which the input is blocked, the disk is illuminated, and a lens images it.

  20. Disk Density Tuning of a Maximal Random Packing

    PubMed Central

    Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.

    2016-01-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
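    The framework above operates on maximal random packings such as Maximal Poisson-disk Samplings. The sketch below is an illustrative stand-in rather than the authors' algorithm: brute-force dart throwing toward a near-maximal sample in the unit square, the minimum-separation test that every relocate/inject/eject operation must preserve, and one naive density-increasing step in the spirit of the paper's more aggressive local operations. The radius, domain and stopping rules are simplifications.

    ```python
    import math
    import random

    # Illustrative stand-in for a maximal random packing and one tuning step;
    # not the authors' algorithm, and coverage maximality is only approximate.
    def conflict_free(p, points, r):
        """True if disk centre p keeps the minimum separation r from all others."""
        return all(math.dist(p, q) >= r for q in points)

    def dart_throw(r=0.05, max_misses=5000, rng=random.Random(1)):
        points, misses = [], 0
        while misses < max_misses:         # stop once injections keep failing
            p = (rng.random(), rng.random())
            if conflict_free(p, points, r):
                points.append(p)
                misses = 0
            else:
                misses += 1
        return points

    def eject_then_inject(points, r, rng=random.Random(2), attempts=200):
        """Remove one disk, then try to place two disks in the freed space."""
        victim = points.pop(rng.randrange(len(points)))
        added = []
        for _ in range(attempts):
            p = (rng.random(), rng.random())
            if conflict_free(p, points + added, r):
                added.append(p)
                if len(added) == 2:
                    return points + added  # net density gain of one disk
        return points + (added or [victim])  # otherwise keep what we can

    sample = dart_throw()
    print(len(sample), "disks placed;",
          len(eject_then_inject(sample, 0.05)), "after one tuning step")
    ```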

  1. Deciphering Debris Disk Structure with the Submillimeter Array

    NASA Astrophysics Data System (ADS)

    MacGregor, Meredith Ann

    2018-01-01

    More than 20% of nearby main sequence stars are surrounded by dusty disks continually replenished via the collisional erosion of planetesimals, larger bodies similar to asteroids and comets in our own Solar System. The material in these ‘debris disks’ is directly linked to the larger bodies such as planets in the system. As a result, the locations, morphologies, and physical properties of dust in these disks provide important probes of the processes of planet formation and subsequent dynamical evolution. Observations at millimeter wavelengths are especially critical to our understanding of these systems, since they are dominated by larger grains that do not travel far from their origin and therefore reliably trace the underlying planetesimal distribution. The Submillimeter Array (SMA) plays a key role in advancing our understanding of debris disks by providing sensitivity at the short baselines required to determine the structure of wide-field disks, such as the HR 8799 debris disk. Many of these wide-field disks are among the closest systems to us, and will serve as cornerstone templates for the interpretation of more distant, less accessible systems.

  2. Disk Density Tuning of a Maximal Random Packing.

    PubMed

    Ebeida, Mohamed S; Rushdi, Ahmad A; Awad, Muhammad A; Mahmoud, Ahmed H; Yan, Dong-Ming; English, Shawn A; Owens, John D; Bajaj, Chandrajit L; Mitchell, Scott A

    2016-08-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.

  3. A Semiautomatic Pipeline for Be Star Light Curves

    NASA Astrophysics Data System (ADS)

    Rímulo, L. R.; Carciofi, A. C.; Rivinius, T.; Okazaki, A.

    2016-11-01

    Observational and theoretical studies from the last decade have shown that the Viscous Decretion Disk (VDD) scenario, in which turbulent viscosity is the physical mechanism responsible for the transport of material and angular momentum ejected from the star to the outer regions of the disk, is the only viable model for explaining the circumstellar disks of Be stars. In the α-disk approach applied to the VDD, the dimensionless parameter α is a measure of the turbulent viscosity. Recently, combining the time-dependent evolution of a VDD α-disk with non-LTE radiative transfer calculations, the first measurement of the α parameter was made for the disk dissipation of the Be star ω CMa. It was found that α ≍ 1 for that Be disk. The main motivation of the present work is the statistical determination of the α parameter. For this purpose, we present a pipeline that will allow the semiautomatic determination of the α parameter for several dozen light curves of Be stars available from photometric surveys. In this contribution, we describe the pipeline, outlining the main steps required for the semiautomatic analysis of light curves.

  4. Radial Surface Density Profiles of Gas and Dust in the Debris Disk around 49 Ceti

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, A. Meredith; Lieman-Sifry, Jesse; Flaherty, Kevin M.

    We present ∼0.″4 resolution images of CO(3–2) and associated continuum emission from the gas-bearing debris disk around the nearby A star 49 Ceti, observed with the Atacama Large Millimeter/Submillimeter Array (ALMA). We analyze the ALMA visibilities in tandem with the broadband spectral energy distribution to measure the radial surface density profiles of dust and gas emission from the system. The dust surface density decreases with radius between ∼100 and 310 au, with a marginally significant enhancement of surface density at a radius of ∼110 au. The SED requires an inner disk of small grains in addition to the outer disk of larger grains resolved by ALMA. The gas disk exhibits a surface density profile that increases with radius, contrary to most previous spatially resolved observations of circumstellar gas disks. While ∼80% of the CO flux is well described by an axisymmetric power-law disk in Keplerian rotation about the central star, residuals at ∼20% of the peak flux exhibit a departure from axisymmetry suggestive of spiral arms or a warp in the gas disk. The radial extent of the gas disk (∼220 au) is smaller than that of the dust disk (∼300 au), consistent with recent observations of other gas-bearing debris disks. While there are so far only three broad debris disks with well characterized radial dust profiles at millimeter wavelengths, 49 Ceti’s disk shows a markedly different structure from two radially resolved gas-poor debris disks, implying that the physical processes generating and sculpting the gas and dust are fundamentally different.

  5. SMA Continuum Survey of Circumstellar Disks in Serpens

    NASA Astrophysics Data System (ADS)

    Law, Charles; Ricci, Luca; Andrews, Sean M.; Wilner, David J.; Qi, Chunhua

    2017-06-01

    The lifetime of disks surrounding pre-main-sequence stars is closely linked to planet formation and provides information on disk dispersal mechanisms and dissipation timescales. The potential for these optically thick, gas-rich disks to form planets is critically dependent on how much dust is available to be converted into terrestrial planets and rocky cores of giant planets. For this reason, an understanding of how dust mass varies with key properties such as stellar mass, age, and environment is critical for understanding planet formation. Millimeter wavelength observations, in which the dust emission is optically thin, are required to study the colder dust residing in the disk’s outer regions and to measure disk dust masses. Hence, we have obtained SMA 1.3 mm continuum observations of 62 Class II sources with suspected circumstellar disks in the Serpens star-forming region (SFR). Relative to the well-studied Taurus SFR, Serpens allows us to probe the distribution of dust masses for disks in a much denser and more clustered environment. Only 13 disks were detected in the continuum with the SMA. We calculate the total dust masses of these disks and compare their masses to those of disks in Taurus, Lupus, and Upper Scorpius. We do not find evidence of diminished dust masses in Serpens disks relative to those in Taurus despite the fact that disks in denser clusters may be expected to contain less dust mass due to stronger and more frequent tidal interactions that can disrupt the outer regions of disks. However, considering the low detection fraction, we likely detected only bright continuum sources and a more sensitive survey of Serpens would help clarify these results.

  6. DNA MemoChip: Long-Term and High Capacity Information Storage and Select Retrieval.

    PubMed

    Stefano, George B; Wang, Fuzhou; Kream, Richard M

    2018-02-26

    Over the course of history, human beings have never stopped seeking effective methods for information storage. From rocks to paper, and through the past several decades of using computer disks, USB sticks, and on to the thin silicon "chips" and "cloud" storage of today, it would seem that we have reached an era of efficiency for managing innumerable and ever-expanding data. Astonishingly, when tracing this technological path, one realizes that our ancient methods of informational storage far outlast paper (10,000 vs. 1,000 years, respectively), let alone the computer-based memory devices that only last, on average, 5 to 25 years. During this time of fast-paced information generation, it becomes increasingly difficult for current storage methods to retain such massive amounts of data, and to maintain appropriate speeds with which to retrieve it, especially when in demand by a large number of users. Others have proposed that DNA-based information storage provides a way forward for information retention as a result of its temporal stability. It is now evident that DNA represents a potentially economical and sustainable mechanism for storing information, as demonstrated by its decoding from a 700,000 year-old horse genome. The fact that the human genome is present in a cell, containing also the varied mitochondrial genome, indicates DNA's great potential for large data storage in a 'smaller' space.
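    As a toy illustration of the storage idea discussed above, the sketch below maps each pair of bits to one of the four bases and back. Practical DNA storage schemes add addressing, redundancy, error correction and homopolymer avoidance, none of which is attempted here.

    ```python
    # Toy illustration only: map each pair of bits to one of the four bases and
    # back; real DNA storage pipelines add addressing and error correction.
    BITS_TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
    BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

    def bytes_to_dna(data: bytes) -> str:
        bases = []
        for byte in data:
            for shift in (6, 4, 2, 0):                 # four bases per byte
                bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
        return "".join(bases)

    def dna_to_bytes(strand: str) -> bytes:
        out = bytearray()
        for i in range(0, len(strand), 4):
            byte = 0
            for base in strand[i:i + 4]:
                byte = (byte << 2) | BASE_TO_BITS[base]
            out.append(byte)
        return bytes(out)

    message = b"memo"
    strand = bytes_to_dna(message)                     # 4 bytes -> 16 bases
    assert dna_to_bytes(strand) == message             # lossless round trip
    ```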

  7. DNA MemoChip: Long-Term and High Capacity Information Storage and Select Retrieval

    PubMed Central

    Wang, Fuzhou; Kream, Richard M.

    2018-01-01

    Over the course of history, human beings have never stopped seeking effective methods for information storage. From rocks to paper, and through the past several decades of using computer disks, USB sticks, and on to the thin silicon “chips” and “cloud” storage of today, it would seem that we have reached an era of efficiency for managing innumerable and ever-expanding data. Astonishingly, when tracing this technological path, one realizes that our ancient methods of informational storage far outlast paper (10,000 vs. 1,000 years, respectively), let alone the computer-based memory devices that only last, on average, 5 to 25 years. During this time of fast-paced information generation, it becomes increasingly difficult for current storage methods to retain such massive amounts of data, and to maintain appropriate speeds with which to retrieve it, especially when in demand by a large number of users. Others have proposed that DNA-based information storage provides a way forward for information retention as a result of its temporal stability. It is now evident that DNA represents a potentially economical and sustainable mechanism for storing information, as demonstrated by its decoding from a 700,000 year-old horse genome. The fact that the human genome is present in a cell, containing also the varied mitochondrial genome, indicates DNA’s great potential for large data storage in a ‘smaller’ space. PMID:29481548

  8. Feasibility study for rocket ozone measurements in the 50 to 80 km region using a chemiluminescent technique

    NASA Technical Reports Server (NTRS)

    Goodman, P.

    1973-01-01

    A study has been conducted to determine the feasibility of increasing sensitivity for ozone detection. The detection technique employed is the chemiluminescent reaction of ozone with a rhodamine-B impregnated disk. Previously achieved sensitivities are required to be increased by a factor of about 20 to permit measurements at altitudes of 80 km. Sensitivity was increased by using a more sensitive photomultiplier tube, by increasing the gas velocity past the disk, by different disk preparation techniques, and by using reflective coatings in the disk chamber and on the uncoated side of the glass disk. Reflective coatings provided the largest sensitivity increase. The sum of all these changes was a sensitivity increased by an estimated factor of 70, more than sufficient to permit measurement of ambient ozone concentrations at altitudes of 80 km.

  9. The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt

    2014-05-01

    Cloud computing provides enormous opportunities for the research community. The large public cloud providers provide near-limitless scaling capability. However, adapting Cloud to scientific workloads is not without its problems. The commodity nature of the public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models pose additional barriers to more widespread adoption. Alongside the use of public cloud for scientific applications, a number of private cloud initiatives are underway in the research community, of which the JASMIN Cloud is one example. Here, cloud service models are effectively superimposed on more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed by the science community coupled with the benefits of a Cloud service model. The JASMIN facility based at the Rutherford Appleton Laboratory was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled success: peta-scale fast disk connected via low latency networks to compute resources, and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway, funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see significant expansion of the resources available, with a doubling of disk-based storage to 12PB and an increase of compute capacity by a factor of ten to over 3000 processing cores. This expansion is accompanied by a broadening of JASMIN's scope, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but, for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This gives tenants greater liberty - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and external interfaces to public cloud providers, giving users the flexibility to migrate resources between infrastructures as requirements dictate.

  10. Long-term data storage in diamond.

    PubMed

    Dhomkar, Siddharth; Henshaw, Jacob; Jayakumar, Harishankar; Meriles, Carlos A

    2016-10-01

    The negatively charged nitrogen-vacancy (NV-) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV- optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we hereby introduce using NV-rich, type 1b diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with a two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging the singular dynamics of NV- ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center's charge state with the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV- ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed present technologies.
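
    As a rough illustration of why multi-plane encoding matters, the sketch below scales a DVD-like areal density by a plane count. The DVD geometry, plate size and number of planes are assumptions made purely for the arithmetic; none of these figures are taken from the paper.

        # Back-of-the-envelope sketch: capacity of multi-plane storage at a
        # DVD-like areal density. All numbers are illustrative assumptions.
        import math

        DVD_CAPACITY_GB = 4.7              # single-layer DVD
        INNER_R_CM, OUTER_R_CM = 2.4, 5.8  # approximate recordable annulus of a DVD
        areal_gb_per_cm2 = DVD_CAPACITY_GB / (math.pi * (OUTER_R_CM**2 - INNER_R_CM**2))

        def capacity_gb(plate_area_cm2: float, planes: int) -> float:
            """Capacity if each crystal plane holds a DVD-like bit density."""
            return areal_gb_per_cm2 * plate_area_cm2 * planes

        # e.g. a 5 mm x 5 mm plate (0.25 cm^2) written on 10 independent planes
        print(f"{capacity_gb(0.25, 10):.2f} GB")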

  11. An efficient, modular and simple tape archiving solution for LHC Run-3

    NASA Astrophysics Data System (ADS)

    Murray, S.; Bahyl, V.; Cancio, G.; Cano, E.; Kotlyar, V.; Kruse, D. F.; Leduc, J.

    2017-10-01

    The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. Physics Run 3 will start in 2021 and will introduce two major challenges for which the tape archive software must evolve. Firstly, the software will need to make more efficient use of tape drives in order to sustain the predicted data rate of 150 petabytes per year, as opposed to the current 50 petabytes per year. Secondly, the software will need to be seamlessly integrated with EOS, which has become the de facto disk storage system provided by the IT Storage group for physics data. The tape storage software for LHC Run 3 is code-named CTA (the CERN Tape Archive). This paper describes how CTA will introduce a pre-emptive drive scheduler to use tape drives more efficiently, will encapsulate all tape software into a single module that sits behind one or more EOS systems, and will be simpler by dropping support for obsolete backwards compatibility.
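
    The jump from 50 to 150 petabytes per year is easier to appreciate as a drive-count estimate. The sketch below does that arithmetic; the per-drive throughput (360 MB/s) and the efficiency factor are assumptions for illustration, not figures from the paper.

        # Rough sizing sketch for the archival rates quoted above. The drive
        # throughput and efficiency factor are assumed values.
        import math

        SECONDS_PER_YEAR = 365 * 24 * 3600

        def drives_needed(pb_per_year: float, drive_mb_s: float, efficiency: float) -> int:
            """Minimum tape drives needed to sustain an average archival rate."""
            avg_bytes_s = pb_per_year * 1e15 / SECONDS_PER_YEAR
            usable_bytes_s = drive_mb_s * 1e6 * efficiency  # mounts, seeks, repacks, etc.
            return math.ceil(avg_bytes_s / usable_bytes_s)

        print(drives_needed(50, 360, 0.4))   # current rate -> ~12 drives
        print(drives_needed(150, 360, 0.4))  # Run-3 rate   -> ~34 drives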

  12. Method and apparatus for offloading compute resources to a flash co-processing appliance

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing -bung

    2015-10-13

    Solid-State Drive (SSD) burst buffer nodes are interposed into a parallel supercomputing cluster to enable fast burst checkpointing of cluster memory to or from nearby interconnected solid-state storage, with asynchronous migration between the burst buffer nodes and slower, more distant disk storage. The SSD nodes also perform tasks offloaded from the compute nodes or associated with the checkpoint data. For example, the data for the next job is preloaded into the SSD node and rapidly uploaded to the respective compute node just before the next job starts. During a job, the SSD nodes perform fast visualization and statistical analysis on the checkpoint data. The SSD nodes can also perform data reduction and encryption of the checkpoint data.
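
    The core pattern here, a fast synchronous write to nearby SSD followed by asynchronous migration to slower disk, can be sketched in a few lines of Python. The directory names, queue-and-thread approach and file layout below are hypothetical illustrations of that pattern, not the patented implementation.

        # Minimal sketch of the burst-buffer pattern: the compute side blocks only
        # for the fast SSD write; a background thread drains copies to slower disk.
        # Directory names are hypothetical stand-ins for the two storage tiers.
        import queue
        import shutil
        import threading
        from pathlib import Path

        SSD_DIR = Path("ssd_burst")    # fast, nearby solid-state tier (stand-in)
        DISK_DIR = Path("slow_disk")   # distant, slower disk tier (stand-in)
        for d in (SSD_DIR, DISK_DIR):
            d.mkdir(exist_ok=True)

        _pending: "queue.Queue[Path]" = queue.Queue()

        def checkpoint(job_id: str, data: bytes) -> None:
            """Synchronous part: block only for the SSD write, then return to compute."""
            target = SSD_DIR / f"{job_id}.ckpt"
            target.write_bytes(data)
            _pending.put(target)       # hand off to the asynchronous migrator

        def _migrate_forever() -> None:
            """Asynchronous part: drain SSD checkpoints to the slower disk tier."""
            while True:
                src = _pending.get()
                shutil.copy2(src, DISK_DIR / src.name)
                src.unlink()           # free burst-buffer space for the next burst
                _pending.task_done()

        threading.Thread(target=_migrate_forever, daemon=True).start()
        checkpoint("job42", b"checkpoint bytes")
        _pending.join()                # wait for the background migration to finish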

  13. Ethics in electronic image manipulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weckert, J.; Adeney, D.

    1994-12-31

    It is commonplace now to store images digitally on disk. What does this have to do with ethics? Quite a lot, because digitally stored images can be copied and altered with an ease that has not previously been possible. The moral issues raised by this new technology are not new in themselves, but are given new urgency by both the ease and the undetectability afforded by digital storage. It would be silly to argue that all uses of digital technology for image storage give cause for concern, but not all applications are beneficial or even benign. Two categories of potential moral problems will be outlined here: questions of ownership, and questions of the uses to which manipulated images are put.

  14. Dual Microstructure Heat Treatment of a Nickel-Base Disk Alloy Assessed

    NASA Technical Reports Server (NTRS)

    Gayda, John

    2002-01-01

    Gas turbine engines for future subsonic aircraft will require nickel-base disk alloys that can be used at temperatures in excess of 1300 F. Smaller turbine engines, with higher rotational speeds, also require disk alloys with high strength. To address these challenges, NASA funded a series of disk programs in the 1990s. Under these initiatives, Honeywell and Allison focused their attention on Alloy 10, a high-strength, nickel-base disk alloy developed by Honeywell for application in the small turbine engines used in regional jet aircraft. Since tensile, creep, and fatigue properties are strongly influenced by alloy grain size, the effect of heat treatment on grain size and the attendant properties was studied in detail. It was observed that a fine-grain microstructure offered the best tensile and fatigue properties, whereas a coarse-grain microstructure offered the best creep resistance at high temperatures. Therefore, a disk with a dual microstructure, consisting of a fine-grained bore and a coarse-grained rim, should have a high potential for optimal performance. Under NASA's Ultra-Safe Propulsion Project and Ultra-Efficient Engine Technology (UEET) Program, a disk program was initiated at the NASA Glenn Research Center to assess the feasibility of using Alloy 10 to produce a dual-microstructure disk. The objectives of this program were twofold. First, existing dual-microstructure heat treatment (DMHT) technology would be applied and refined as necessary for Alloy 10 to yield the desired grain structure in full-scale forgings appropriate for use in regional gas turbine engines. Second, key mechanical properties from the bore and rim of a DMHT Alloy 10 disk would be measured and compared with conventional heat treatments to assess the benefits of DMHT technology. At Wyman Gordon and Honeywell, an active-cooling DMHT process was used to convert four full-scale Alloy 10 disks to a dual-grain microstructure. The resulting microstructures are illustrated in the photomicrographs: the fine grain size in the bore can be contrasted with the coarse grain size in the rim. Testing at NASA Glenn of coupons machined from these disks showed that the DMHT approach did indeed produce a high-strength, fatigue-resistant bore and a creep-resistant rim. This combination of properties was previously unobtainable using conventional heat treatments, which produced disks with a uniform grain size. Plans are in place to spin test a DMHT disk under the Ultra-Safe Propulsion Project to assess the viability of this technology at the component level. This testing will include measurements of disk growth at high temperature as well as the determination of burst speed at an intermediate temperature.

  15. 40 CFR 63.1192 - What recordkeeping requirements must I meet?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... detection system alarms. Include the date and time of the alarm, when corrective actions were initiated, the... operating temperature and results of incinerator inspections. For all periods when the average temperature... microfilm, on a computer, on computer disks, on magnetic tape disks, or on microfiche. (e) Report the...

  16. 40 CFR 63.1517 - Records

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) If a bag leak detection system is used, the number of total operating hours for the affected source...) The owner or operator may retain records on microfilm, computer disks, magnetic tape, or microfiche; and (3) The owner or operator may report required information on paper or on a labeled computer disk...

  17. 40 CFR 63.1192 - What recordkeeping requirements must I meet?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... detection system alarms. Include the date and time of the alarm, when corrective actions were initiated, the... operating temperature and results of incinerator inspections. For all periods when the average temperature... microfilm, on a computer, on computer disks, on magnetic tape disks, or on microfiche. (e) Report the...

  18. 40 CFR 63.1517 - Records

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) If a bag leak detection system is used, the number of total operating hours for the affected source...) The owner or operator may retain records on microfilm, computer disks, magnetic tape, or microfiche; and (3) The owner or operator may report required information on paper or on a labeled computer disk...

  19. 40 CFR 63.1517 - Records

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) If a bag leak detection system is used, the number of total operating hours for the affected source...) The owner or operator may retain records on microfilm, computer disks, magnetic tape, or microfiche; and (3) The owner or operator may report required information on paper or on a labeled computer disk...

  20. 40 CFR 63.1192 - What recordkeeping requirements must I meet?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... detection system alarms. Include the date and time of the alarm, when corrective actions were initiated, the... operating temperature and results of incinerator inspections. For all periods when the average temperature... microfilm, on a computer, on computer disks, on magnetic tape disks, or on microfiche. (e) Report the...
