Sample records for disk file system

  1. Sawmill: A Logging File System for a High-Performance RAID Disk Array

    DTIC Science & Technology

    1995-01-01

    from limiting disk performance, new controller architectures connect the disks directly to the network so that data movement bypasses the file server...These developments raise two questions for file systems: how to get the best performance from a RAID, and how to use such a controller architecture ...the RAID-II storage system; this architecture provides a fast data path that moves data rapidly among the disks, high-speed controller memory, and the

  2. RAMA: A file system for massively parallel computers

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.

  3. Permanent-File-Validation Utility Computer Program

    NASA Technical Reports Server (NTRS)

    Derry, Stephen D.

    1988-01-01

    Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.

  4. Attaching IBM-compatible 3380 disks to Cray X-MP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Midlock, J.L.

    1989-01-01

    A method of attaching IBM-compatible 3380 disks directly to a Cray X-MP via the XIOP with a BMC is described. The IBM 3380 disks appear to the UNICOS operating system as DD-29 disks with UNICOS file systems. IBM 3380 disks provide cheap, reliable, large-capacity disk storage. Combined with a small number of high-speed Cray disks, the IBM disks provide the bulk of the storage for small files and infrequently used files. Cray Research designed the BMC and its supporting software in the XIOP to allow IBM tapes and other devices to be attached to the X-MP. No hardware changes were necessary, and we added less than 2000 lines of code to the XIOP to accomplish this project. This system has been in operation for over eight months. Future enhancements such as the use of a cache controller and attachment to a Y-MP are also described.

  5. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  6. ECFS: A decentralized, distributed and fault-tolerant FUSE filesystem for the LHCb online farm

    NASA Astrophysics Data System (ADS)

    Rybczynski, Tomasz; Bonaccorsi, Enrico; Neufeld, Niko

    2014-06-01

    The LHCb experiment records millions of proton collisions every second, but only a fraction of them are useful for LHCb physics. In order to filter out the "bad events" a large farm of x86 servers (~2000 nodes) has been put in place. These servers boot from and run from NFS; however, they use their local disks to temporarily store data that cannot be processed in real time ("data deferring"). These events are subsequently processed when no live data are coming in. The effective CPU power is thus greatly increased. This gain in CPU power depends critically on the availability of the local disks. For cost and power reasons, mirroring (RAID-1) is not used, leading to considerable operational headaches with failing disks, disk errors, and server failures induced by faulty disks. To mitigate these problems and increase the reliability of the LHCb farm, while at the same time keeping cost and power consumption low, an extensive study of existing highly available and distributed file systems has been carried out. While many distributed file systems provide reliability by "file replication", none of the evaluated ones supports erasure algorithms. A decentralised, distributed and fault-tolerant "write once read many" file system has been designed and implemented as a proof of concept, with the main goals of providing fault tolerance without resorting to file replication techniques, which are expensive in terms of disk space, and of providing a unique namespace. This paper describes the design and the implementation of the Erasure Codes File System (ECFS) and presents the specialised FUSE interface for Linux. Depending on the encoding algorithm, ECFS will use a certain number of target directories as a backend to store the segments that compose the encoded data. When the target directories are mounted via nfs/autofs, ECFS acts as a file system over a network/block-level RAID spanning multiple servers.
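
    ECFS's actual encoding parameters are not given in the abstract; the sketch below, in Python, only illustrates the general idea of splitting a file into segments across target directories with redundancy. Simple XOR parity (tolerating the loss of any one target) stands in for a real erasure code, and the directory names and segment size are invented.

```python
# Sketch of erasure-coded segment placement across backend target
# directories, in the spirit of ECFS. XOR parity stands in for a real
# erasure code; TARGETS and SEG are hypothetical values.
import os
from functools import reduce

TARGETS = ["/mnt/target0", "/mnt/target1", "/mnt/target2", "/mnt/target3"]
SEG = 64 * 1024  # segment size in bytes

def write_encoded(name: str, data: bytes) -> None:
    """Stripe data over the first N-1 targets; parity goes to the Nth."""
    k = len(TARGETS) - 1
    for stripe, off in enumerate(range(0, len(data), SEG * k)):
        chunks = [data[off + i * SEG: off + (i + 1) * SEG].ljust(SEG, b"\0")
                  for i in range(k)]
        parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
        for target, chunk in zip(TARGETS, chunks + [parity]):
            with open(os.path.join(target, f"{name}.{stripe}"), "wb") as f:
                f.write(chunk)
```

    With this layout, the contents of any one lost target directory can be reconstructed by XOR-ing the remaining three, which is the disk-space saving over full replication that the paper is after; real erasure codes generalize this to multiple failures.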

  7. Jefferson Lab Mass Storage and File Replication Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Bird; Ying Chen; Bryan Hess

    Jefferson Lab has implemented a scalable, distributed, high performance mass storage system - JASMine. The system is entirely implemented in Java, provides access to robotic tape storage and includes disk cache and stage manager components. The disk manager subsystem may be used independently to manage stand-alone disk pools. The system includes a scheduler to provide policy-based access to the storage systems. Security is provided by pluggable authentication modules and is implemented at the network socket level. The tape and disk cache systems have well defined interfaces in order to provide integration with grid-based services. The system is in production and being used to archive 1 TB per day from the experiments, and currently moves over 2 TB per day total. This paper will describe the architecture of JASMine; discuss the rationale for building the system; and present a transparent 3rd party file replication service to move data to collaborating institutes using JASMine, XML, and servlet technology interfacing to grid-based file transfer mechanisms.

  8. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  9. SAM-FS: LSC's New Solaris-Based Storage Management Product

    NASA Technical Reports Server (NTRS)

    Angell, Kent

    1996-01-01

    SAM-FS is a full-featured hierarchical storage management (HSM) product that operates as a file system on Solaris-based machines. The SAM-FS file system provides the user with all of the standard UNIX system utilities and calls, and adds some new commands, e.g., archive, release, stage, sls, sfind, and a family of maintenance commands. The system also offers enhancements such as high performance virtual disk read and write, control of the disk through an extent array, and the ability to dynamically allocate block size. SAM-FS provides 'archive sets', which are groupings of data to be copied to secondary storage. In practice, as soon as a file is written to disk, SAM-FS will make copies onto secondary media. SAM-FS is a scalable storage management system. The system can manage millions of files per system, though this is limited today by the speed of UNIX and its utilities. In the future, a new search algorithm will be implemented that will remove logical and performance restrictions on the number of files managed.

  10. A File Archival System

    NASA Technical Reports Server (NTRS)

    Fanselow, J. L.; Vavrus, J. L.

    1984-01-01

    ARCH, file archival system for DEC VAX, provides for easy offline storage and retrieval of arbitrary files on DEC VAX system. System designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of same programs and associated files.

  11. Floppy disk utility user's guide

    NASA Technical Reports Server (NTRS)

    Akers, J. W.

    1981-01-01

    The Floppy Disk Utility Program transfers programs between files on the hard disk and floppy disk. It also copies the data on one floppy disk onto another floppy disk and compares the data. The program operates on the Data General NOVA-4X under the Real Time Disk Operating System (RDOS).

  12. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  13. ZFS on RBODs - Leveraging RAID Controllers for Metrics and Enclosure Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stearman, D. M.

    2015-03-30

    Traditionally, the Lustre file system has relied on the ldiskfs file system with reliable RAID (Redundant Array of Independent Disks) storage underneath. As of Lustre 2.4, ZFS was added as a backend file system, with built-in software RAID, thereby removing the need for expensive RAID controllers. ZFS was designed to work with JBOD (Just a Bunch Of Disks) storage enclosures under the Solaris Operating System, which provided a rich device management system. Long-time users of the Lustre file system have relied on the RAID controllers to provide metrics and enclosure monitoring and management services, with rich APIs and command-line interfaces. This paper will study a hybrid approach using an advanced, full-featured RAID enclosure which is presented to the host as a JBOD. This RBOD (RAIDed Bunch Of Disks) allows ZFS to do the RAID protection and error correction, while the RAID controller handles management of the disks and monitors the enclosure. It was hoped that the value of the RAID controller features would offset the additional cost, and that performance would not suffer in this mode. The test results revealed that the hybrid RBOD approach did suffer reduced performance.

  14. Floppy disk utility user's guide

    NASA Technical Reports Server (NTRS)

    Akers, J. W.

    1980-01-01

    A floppy disk utility program is described which transfers programs between files on a hard disk and floppy disk. It also copies the data on one floppy disk onto another floppy disk and compares the data. The program operates on the Data General NOVA-4X under the Real Time Disk Operating System. Sample operations are given.

  15. Design of a steganographic virtual operating system

    NASA Astrophysics Data System (ADS)

    Ashendorf, Elan; Craver, Scott

    2015-03-01

    A steganographic file system is a secure file system whose very existence on a disk is concealed. Customarily, these systems hide an encrypted volume within unused disk blocks, slack space, or atop conventional encrypted volumes. These file systems are far from undetectable, however: aside from their ciphertext footprint, they require a software or driver installation whose presence can attract attention and then targeted surveillance. We describe a new steganographic operating environment that requires no visible software installation, launching instead from a concealed bootstrap program that can be extracted and invoked with a chain of common Unix commands. Our system conceals its payload within innocuous files that typically contain high-entropy data, producing a footprint that is far less conspicuous than existing methods. The system uses a local web server to provide a file system, user interface and applications through a web architecture.

  16. Using Solid State Disk Array as a Cache for LHC ATLAS Data Analysis

    NASA Astrophysics Data System (ADS)

    Yang, W.; Hanushevsky, A. B.; Mount, R. P.; Atlas Collaboration

    2014-06-01

    User data analysis in high energy physics presents a challenge to spinning-disk-based storage systems. The analysis is data-intensive, yet reads are small, sparse, and cover a large volume of data files. It is also unpredictable, because users react to storage performance. We describe here a system with an array of Solid State Disks as a non-conventional, standalone file-level cache in front of the spinning-disk storage to help improve the performance of LHC ATLAS user analysis at SLAC. The system uses several days of data access records to make caching decisions. It can also use information from other sources such as a workflow management system. We evaluate the performance of the system both in terms of caching and in terms of its impact on user analysis jobs. The system currently uses Xrootd technology, but the technique can be applied to any storage system.
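
    The abstract says caching decisions are driven by several days of access records but does not spell out the policy, so the following Python fragment is only a plausible sketch: cache the files accessed at least a threshold number of times within a recent window (both values invented).

```python
# Hypothetical file-level caching decision driven by recent access
# records; WINDOW and THRESHOLD are invented parameters.
from collections import Counter
from datetime import timedelta

WINDOW = timedelta(days=3)   # "several days of data access records"
THRESHOLD = 5                # assumed popularity cutoff

def files_to_cache(access_log, now):
    """access_log: iterable of (timestamp, path); returns paths to cache."""
    recent = Counter(path for ts, path in access_log if now - ts <= WINDOW)
    return [path for path, hits in recent.items() if hits >= THRESHOLD]
```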

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Kevin

    The software provides a simple web API to allow users to request a time window during which a file will not be removed from cache. HPSS provides the concept of a "purge lock". When a purge lock is set on a file, the file will not be removed from disk, entering tape-only state. Many network file protocols assume a file is on disk, so it is good to purge lock a file before transferring it using one of those protocols. HPSS's purge lock system is very coarse grained, though. A file is either purge locked or not. Nothing enforces quotas, timely unlocking of purge locks, or managing the races inherent in multiple users wanting to lock/unlock the same file. The Purge Lock Server lets you, through a simple REST API, specify a list of files to purge lock and an expire time, and the system will ensure things happen properly.
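
    As a rough illustration of the described interface, here is a hypothetical Python client; the endpoint path and JSON field names are assumptions, since the abstract only says the REST API accepts a list of files and an expire time.

```python
# Hypothetical client for a purge-lock REST API; the /purgelock path
# and the JSON fields are assumed, not taken from the actual service.
import json
import urllib.request

def request_purge_lock(server: str, files: list[str], expires: int) -> dict:
    """Ask the server to keep `files` on disk until epoch time `expires`."""
    body = json.dumps({"files": files, "expires": expires}).encode()
    req = urllib.request.Request(f"{server}/purgelock", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```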

  18. Performance of the Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  19. Emulation Aid System II (EASY II) System Programmer’s Guide.

    DTIC Science & Technology

    1981-03-01

    DISK-SAVE,PASSWD=SSSS,MTUNIT=17,MTFILE=99,DSKUNIT=7. RESTORE-DISK,PASSWD=SSSS,MTUNIT=17,MTFILE=99,DSKUNIT=7. where PASSWD = a system disk... DISK-SAVE,PASSWD=SSSS,MTUNIT=17,MTFILE=99,DSKUNIT=7. SAVE A DISK FILE ON TAPE. HELP,0,0,0. DSKSV. EDIT. CRT-BASED EDITOR (COMMANDS EXPLAINED AS... BE EXPLICITLY TURNED ON. QCNTRL,LOCKED. RDTAPE,UNIT=17. READING TAPE FOR USE WITH 6000 AND PRINT. 0. RDTAPE. RESTORE-DISK,PASSWD=SSSS,MTUNIT=17

  20. Mass storage technology in networks

    NASA Astrophysics Data System (ADS)

    Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo

    1990-08-01

    Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has brought the performance of on-line external memories based on optical disk drives to the point where they compete with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic, random-access file storage of multimedia data that requires large capacity, such as archival use and information distribution by ROM disks. Finally, image-coded document file servers for local area network use, employing 130mm rewritable magneto-optical disk subsystems, are demonstrated.

  1. The Scalable Checkpoint/Restart Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
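
    The pattern SCR implements, writing each process's checkpoint to node-local storage instead of the shared parallel file system, can be sketched as below. This mimics the idea only and is not the actual SCR API; the RAM-disk path is an assumption.

```python
# Sketch of node-local checkpointing in the style SCR describes: each
# rank writes to fast local storage (RAM disk here) rather than the
# shared parallel file system. Not the real SCR API; path is assumed.
import os

LOCAL_CACHE = "/dev/shm/scr_cache"   # hypothetical RAM-disk location

def write_checkpoint(rank: int, step: int, payload: bytes) -> str:
    path = os.path.join(LOCAL_CACHE, f"ckpt_{step}", f"rank_{rank}.ckpt")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())         # ensure the data is on the local device
    os.replace(tmp, path)            # atomic rename marks the checkpoint done
    return path
```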

  2. SAN/CXFS test report to LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, T M; Eldel, A

    2000-01-01

    The primary objectives of this project were to evaluate the performance of the SGI CXFS File System in a Storage Area Network (SAN) and compare/contrast it to the performance of a locally attached XFS file system on the same computer and storage subsystems. The University of Minnesota participants were asked to verify that the performance of the SAN/CXFS configuration did not fall below 85% of the performance of the XFS local configuration. There were two basic hardware test configurations constructed from the following equipment: two Onyx 2 computer systems, each with two Qlogic-based Fibre Channel/XIO Host Bus Adapters (HBA); one 8-Port Brocade Silkworm 2400 Fibre Channel Switch; and four Ciprico RF7000 RAID Disk Arrays populated with Seagate Barracuda 50GB disk drives. The Operating System on each of the Onyx 2 computer systems was IRIX 6.5.6. The first hardware configuration consisted of directly connecting the Ciprico arrays to the Qlogic controllers without the Brocade switch. The purpose of this configuration was to establish baseline performance data on the Qlogic controller / Ciprico raw disk subsystem. This baseline performance data would then be used to demonstrate any performance differences arising from the addition of the Brocade Fibre Channel Switch. Furthermore, the performance of the Qlogic controllers could be compared to that of the older, Adaptec-based XIO dual-channel Fibre Channel adapters previously used on these systems. It should be noted that only raw device tests were performed on this configuration; no file system testing was performed. The second hardware configuration introduced the Brocade Fibre Channel Switch. Two FC ports from each of the Onyx 2 computer systems were attached to four ports of the switch, and the four Ciprico arrays were attached to the remaining four. Raw disk subsystem tests were performed on the SAN configuration in order to demonstrate the performance differences between the direct-connect and the switched configurations. After this testing was completed, the Ciprico arrays were formatted with an XFS file system and performance numbers were gathered to establish a file system performance baseline. Finally, the disks were formatted with CXFS and further tests were run to demonstrate the performance of the CXFS file system. A summary of the results of these tests is given.

  3. Striped tertiary storage arrays

    NASA Technical Reports Server (NTRS)

    Drapeau, Ann L.

    1993-01-01

    Data striping is a technique for increasing the throughput and reducing the response time of large accesses to a storage system. In striped magnetic or optical disk arrays, a single file is striped or interleaved across several disks; in a striped tape system, files are interleaved across tape cartridges. Because a striped file can be accessed by several disk drives or tape recorders in parallel, the sustained bandwidth to the file is greater than in non-striped systems, where accesses to the file are restricted to a single device. It is argued that applying striping to tertiary storage systems will provide needed performance and reliability benefits. The performance benefits of striping for applications using large tertiary storage systems are discussed. The paper introduces commonly available tape drives and libraries and discusses their performance limitations, focusing especially on the long latency of tape accesses. An event-driven tertiary storage array simulator that is being used to understand the best ways of configuring these storage arrays is also described. The reliability problems of magnetic tape devices are discussed, and plans for modeling the overall reliability of striped tertiary storage arrays to identify the amount of error correction required are described. Finally, work being done by other members of the Sequoia group to address latency of accesses, optimizing tertiary storage arrays that perform mostly writes, and compression is discussed.
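
    The address arithmetic behind striping is simple enough to state exactly; the helper below shows the usual round-robin mapping of a file's logical blocks onto N devices, which applies equally to disks and tape cartridges.

```python
# Round-robin striping: logical block i of a file lives on device
# i mod N at local block i div N, where N is the number of devices.
def stripe_location(block: int, n_devices: int) -> tuple[int, int]:
    """Return (device index, block offset on that device)."""
    return block % n_devices, block // n_devices

# e.g. with 4 tape drives, logical block 10 -> drive 2, local block 2
assert stripe_location(10, 4) == (2, 2)
```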

  4. XRootd, disk-based, caching proxy for optimization of data access, data placement and data replication

    NASA Astrophysics Data System (ADS)

    Bauerdick, L. A. T.; Bloom, K.; Bockelman, B.; Bradley, D. C.; Dasu, S.; Dost, J. M.; Sfiligoi, I.; Tadel, A.; Tadel, M.; Wuerthwein, F.; Yagil, A.; Cms Collaboration

    2014-06-01

    Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.

  5. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
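
    The strided shared-file layout the patent describes can be illustrated with plain file offsets: rank r writes its j-th chunk at offset (j * nprocs + r) * chunk_size, so the ranks interleave in the shared checkpoint file. The sketch below uses POSIX-style seeks in place of the real MPI-IO calls, and the chunk size is an assumption.

```python
# Strided writes by multiple ranks into one shared checkpoint file.
# Plain seek/write stands in for MPI-IO; CHUNK is a made-up size, and
# each chunk is assumed to fit within CHUNK bytes.
CHUNK = 1 << 20  # 1 MiB per write slot

def write_strided(shared_file, rank: int, nprocs: int, chunks: list[bytes]):
    """shared_file: binary file object opened for random-access writing."""
    for j, data in enumerate(chunks):
        shared_file.seek((j * nprocs + rank) * CHUNK)
        shared_file.write(data.ljust(CHUNK, b"\0"))
```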

  6. Interfacing a high performance disk array file server to a Gigabit LAN

    NASA Technical Reports Server (NTRS)

    Seshan, Srinivasan; Katz, Randy H.

    1993-01-01

    Our previous prototype, RAID-1, identified several bottlenecks in typical file server architectures. The most important bottleneck was the lack of a high-bandwidth path between disk, memory, and the network. Workstation servers, such as the Sun-4/280, have very slow access to peripherals on busses far from the CPU. For the RAID-2 system, we addressed this problem by designing a crossbar interconnect, Xbus board, that provides a 40MB/s path between disk, memory, and the network interfaces. However, this interconnect does not provide the system CPU with low latency access to control the various interfaces. To provide a high data rate to clients on the network, we were forced to carefully and efficiently design the network software. A block diagram of the system hardware architecture is given. In the following subsections, we describe pieces of the RAID-2 file server hardware that had a significant impact on the design of the network interface.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoopman, J. D.

    This report documents Livermore Computing (LC) activities in support of ASC L2 milestone 5589: Modernization and Expansion of LLNL Archive Disk Cache, due March 31, 2016. The full text of the milestone is included in Attachment 1. The description of the milestone is: Description: Configuration of archival disk cache systems will be modernized to reduce fragmentation, and new, higher capacity disk subsystems will be deployed. This will enhance archival disk cache capability for ASC archive users, enabling files written to the archives to remain resident on disk for many (6–12) months, regardless of file size. The milestone was completed in three phases. On August 26, 2015, subsystems with 6 PB of disk cache were deployed for production use in LLNL’s unclassified HPSS environment. Following that, on September 23, 2015, subsystems with 9 PB of disk cache were deployed for production use in LLNL’s classified HPSS environment. On January 31, 2016, the milestone was fully satisfied when the legacy Data Direct Networks (DDN) archive disk cache subsystems were fully retired from production use in both LLNL’s unclassified and classified HPSS environments, and only the newly deployed systems were in use.

  8. Virtual file system for PSDS

    NASA Technical Reports Server (NTRS)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper, protected from disaster, and accumulative to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower performance optical media based on a least-frequency-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.

  9. Documentation of model input and output values for simulation of pumping effects in Paradise Valley, a basin tributary to the Humboldt River, Humboldt County, Nevada

    USGS Publications Warehouse

    Carey, A.E.; Prudic, David E.

    1996-01-01

    Documentation is provided of model input and sample output used in a previous report for analysis of ground-water flow and simulated pumping scenarios in Paradise Valley, Humboldt County, Nevada. Documentation includes files containing input values and listings of sample output. The files, in American Standard Code for Information Interchange (ASCII) or binary format, are compressed and put on a 3-1/2-inch diskette. The decompressed files require approximately 8.4 megabytes of disk space on an International Business Machine (IBM)-compatible microcomputer using the Microsoft Disk Operating System (MS-DOS) version 5.0 or greater.

  10. Security of patient data when decommissioning ultrasound systems.

    PubMed

    Moggridge, James

    2017-02-01

    Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system's own delete function; full reinstallation of system software; the manufacturer's own disk wiping service; and open source disk wiping software, for both full-disk and blank-space-only erasure. Attempts were then made to recover data using open source recovery tools. All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system's own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer's wiping service or the open source wiping software. The typical method of reinstalling an ultrasound system's software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system's hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state.
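
    Of the methods tested, blank-space-only erasure is the easiest to illustrate: fill the file system's free space with zeros, then delete the filler, so previously deleted records can no longer be carved out of unallocated blocks. The sketch below is a minimal version of that idea, not any of the tested tools, which typically use multiple overwrite patterns.

```python
# Minimal blank-space wipe: exhaust free space with zeros, then remove
# the filler file. Real wiping tools use multiple patterns and also
# handle file slack space; this is only an illustration.
import os

def wipe_free_space(mountpoint: str, block: int = 1 << 20) -> None:
    filler = os.path.join(mountpoint, "_wipe.tmp")
    try:
        with open(filler, "wb") as f:
            zeros = b"\0" * block
            while True:
                f.write(zeros)       # stops with OSError (ENOSPC) when full
    except OSError:
        pass
    finally:
        if os.path.exists(filler):
            os.remove(filler)
```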

  11. KNBD: A Remote Kernel Block Server for Linux

    NASA Technical Reports Server (NTRS)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  12. 29 CFR 4000.28 - What if I send a computer disk?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...

  13. 29 CFR 4000.28 - What if I send a computer disk?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...

  14. 29 CFR 4000.28 - What if I send a computer disk?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...

  15. 29 CFR 4000.28 - What if I send a computer disk?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...

  16. 29 CFR 4000.28 - What if I send a computer disk?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...

  17. A performance analysis of advanced I/O architectures for PC-based network file servers

    NASA Astrophysics Data System (ADS)

    Huynh, K. D.; Khoshgoftaar, T. M.

    1994-12-01

    In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of the same type, data capacity, and cost) operating independently, not in parallel as in a disk array.

  18. Data Partitioning and Load Balancing in Parallel Disk Systems

    NASA Technical Reports Server (NTRS)

    Scheuermann, Peter; Weikum, Gerhard; Zabback, Peter

    1997-01-01

    Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur little overhead. We present performance experiments based on synthetic workloads and real-life traces.

  19. Experimental Analysis of File Transfer Rates over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata

    2016-12-01

    File transfers over dedicated connections, supported by large parallel file systems, have become increasingly important in high-performance computing and big data workflows. It remains a challenge to achieve peak rates for such transfers due to the complexities of file I/O, host, and network transport subsystems, and equally importantly, their interactions. We present extensive measurements of disk-to-disk file transfers using Lustre and XFS file systems mounted on multi-core servers over a suite of 10 Gbps emulated connections with 0-366 ms round trip times. Our results indicate that large buffer sizes and many parallel flows do not always guarantee high transfer rates. Furthermore, large variations in the measured rates necessitate repeated measurements to ensure confidence in inferences based on them. We propose a new method to efficiently identify the optimal joint file I/O and network transport parameters using a small number of measurements. We show that for XFS and Lustre with direct I/O, this method identifies configurations achieving 97% of the peak transfer rate while probing only 12% of the parameter space.
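
    The abstract does not reveal the paper's search method, so the fragment below is just a generic coordinate-descent sketch of the task: find a good (buffer size, parallel flows) pair while measuring as few configurations as possible, with memoization keeping each configuration to a single measurement.

```python
# Generic coordinate-descent tuner over (buffer size, flow count);
# not the paper's actual method. measure() is assumed to return an
# observed transfer rate for a given configuration.
from functools import lru_cache

def tune(measure, buffers, flows):
    m = lru_cache(maxsize=None)(measure)   # never repeat a measurement
    best = (buffers[0], flows[0])
    improved = True
    while improved:
        improved = False
        candidates = [(b, best[1]) for b in buffers] + \
                     [(best[0], n) for n in flows]
        for cand in candidates:
            if m(*cand) > m(*best):
                best, improved = cand, True
    return best
```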

  20. Experiences From NASA/Langley's DMSS Project

    NASA Technical Reports Server (NTRS)

    1996-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at the NASA Langley Research Center (LaRC) has placed such a system into production use. This paper will present the experiences, both good and bad, we have had with this system since putting it into production usage. The system is comprised of: 1) National Storage Laboratory (NSL)/UniTree 2.1, 2) IBM 9570 HIPPI attached disk arrays (both RAID 3 and RAID 5), 3) IBM RS6000 server, 4) HIPPI/IPI3 third party transfers between the disk array systems and the supercomputer clients, a CRAY Y-MP and a CRAY 2, 5) a "warm spare" file server, 6) transition software to convert from CRAY's Data Migration Facility (DMF) based system to DMSS, 7) an NSC PS32 HIPPI switch, and 8) a STK 4490 robotic library accessed from the IBM RS6000 block mux interface. This paper will cover: the performance of the DMSS in the following areas: file transfer rates, migration and recall, and file manipulation (listing, deleting, etc.); the appropriateness of a workstation class of file server for NSL/UniTree with LaRC's present storage requirements in mind the role of the third party transfers between the supercomputers and the DMSS disk array systems in DMSS; a detailed comparison (both in performance and functionality) between the DMF and DMSS systems LaRC's enhancements to the NSL/UniTree system administration environment the mechanism for DMSS to provide file server redundancy the statistics on the availability of DMSS the design and experiences with the locally developed transparent transition software which allowed us to make over 1.5 million DMF files available to NSL/UniTree with minimal system outage

  1. Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.

  2. Database Reorganization in Parallel Disk Arrays with I/O Service Stealing

    NASA Technical Reports Server (NTRS)

    Zabback, Peter; Onyuksel, Ibrahim; Scheuermann, Peter; Weikum, Gerhard

    1996-01-01

    We present a model for data reorganization in parallel disk systems that is geared towards load balancing in an environment with periodic access patterns. Data reorganization is performed by disk cooling, i.e. migrating files or extents from the hottest disks to the coldest ones. We develop an approximate queueing model for determining the effective arrival rates of cooling requests and discuss its use in assessing the costs versus benefits of cooling.
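
    Disk cooling as described, migrating the hottest extents from the hottest disk to the coldest one, can be sketched directly; heat values would come from the access statistics, and the tolerance is an invented stopping parameter.

```python
# Greedy disk-cooling sketch: repeatedly migrate a hot extent from the
# most-loaded disk to the least-loaded one until the load imbalance
# falls within a tolerance. Heats would come from access statistics.
def cool(disks: dict[str, dict[str, float]], tolerance: float = 0.1):
    """disks maps disk -> {extent: heat}; returns list of (extent, src, dst)."""
    moves = []
    while True:
        load = {d: sum(exts.values()) for d, exts in disks.items()}
        hot = max(load, key=load.get)
        cold = min(load, key=load.get)
        gap = load[hot] - load[cold]
        if gap <= tolerance:
            return moves
        # only move extents that strictly shrink the imbalance
        movable = [x for x, h in disks[hot].items() if h < gap]
        if not movable:
            return moves
        ext = max(movable, key=disks[hot].get)      # hottest movable extent
        disks[cold][ext] = disks[hot].pop(ext)
        moves.append((ext, hot, cold))
```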

  3. 75 FR 1625 - Privacy Act of 1974; Report of Amended or Altered System; Medical, Health and Billing Records System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-12

    ...., desktop, laptop, handheld or other computer types) containing protected personal identifiers or PHI is... as the National Indian Women's Resource Center, to conduct analytical and evaluation studies. 8... SYSTEM: STORAGE: File folders, ledgers, card files, microfiche, microfilm, computer tapes, disk packs...

  4. Securing Sensitive Flight and Engine Simulation Data Using Smart Card Technology

    NASA Technical Reports Server (NTRS)

    Blaser, Tammy M.

    2003-01-01

    NASA Glenn Research Center has developed a smart card prototype capable of encrypting and decrypting disk files required to run a distributed aerospace propulsion simulation. Triple Data Encryption Standard (3DES) encryption is used to secure the sensitive intellectual property on disk before, during, and after simulation execution. The prototype operates as a secure system and maintains its authorized state by safely storing and permanently retaining the encryption keys only on the smart card. The prototype is capable of authenticating a single smart card user and includes pre-simulation and post-simulation tools for analysis and training purposes. The prototype's design is highly generic and can be used to protect any sensitive disk files, with growth capability to run multiple simulations. The NASA computer engineer developed the prototype on an interoperable programming environment to enable porting to other Numerical Propulsion System Simulation (NPSS)-capable operating system environments.
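
    For flavor, here is a minimal 3DES file-encryption sketch using the pyca/cryptography package; in the actual prototype the key never leaves the smart card, and 3DES is a legacy algorithm shown only because the prototype used it. The key argument and the output naming are assumptions.

```python
# Minimal sketch of 3DES file encryption with pyca/cryptography.
# The real prototype keeps the key on the smart card; here it is an
# ordinary 24-byte argument, purely for illustration.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_file_3des(path: str, key: bytes) -> str:
    """key: 24 bytes; writes <path>.enc with the IV prepended."""
    iv = os.urandom(8)                          # 3DES block size is 64 bits
    padder = padding.PKCS7(64).padder()
    encryptor = Cipher(algorithms.TripleDES(key), modes.CBC(iv)).encryptor()
    with open(path, "rb") as src, open(path + ".enc", "wb") as dst:
        dst.write(iv)
        data = padder.update(src.read()) + padder.finalize()
        dst.write(encryptor.update(data) + encryptor.finalize())
    return path + ".enc"
```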

  5. Security of patient data when decommissioning ultrasound systems

    PubMed Central

    2017-01-01

    Background Although ultrasound systems generally archive to Picture Archiving and Communication Systems (PACS), their archiving workflow typically involves storage to an internal hard disk before data are transferred onwards. Deleting records from the local system will delete entries in the database and from the file allocation table or equivalent but, as with a PC, files can be recovered. Great care is taken with disposal of media from a healthcare organisation to prevent data breaches, but ultrasound systems are routinely returned to lease companies, sold on or donated to third parties without such controls. Methods In this project, five methods of hard disk erasure were tested on nine ultrasound systems being decommissioned: the system’s own delete function; full reinstallation of system software; the manufacturer’s own disk wiping service; open source disk wiping software for full and just blank space erasure. Attempts were then made to recover data using open source recovery tools. Results All methods deleted patient data as viewable from the ultrasound system and from browsing the disk from a PC. However, patient identifiable data (PID) could be recovered following the system’s own deletion and the reinstallation methods. No PID could be recovered after using the manufacturer’s wiping service or the open source wiping software. Conclusion The typical method of reinstalling an ultrasound system’s software may not prevent PID from being recovered. When transferring ownership, care should be taken that an ultrasound system’s hard disk has been wiped to a sufficient level, particularly if the scanner is to be returned with approved parts and in a fully working state. PMID:28228821

  6. 76 FR 5973 - Privacy Act of 1974; Notice; Publication of the Systems of Records Managed by the Commodity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-02

    ...: Paper records are stored in file folders, binders, computer files (eLaw) and computer disks. Electronic records, including computer files, are stored on the Commission's network and other electronic media as... physical security measures. Technical security measures within CFTC include restrictions on computer access...

  7. nem_spread Ver. 5.10

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HENNIGAN, GARY; SHADID, JOHN; SJAARDEMA, GREGORY

    2009-06-08

    Nem_spread reads its input command file (default name nem_spread.inp), takes the named ExodusII geometry definition, and spreads the geometry (and optionally results) contained in that file out to a parallel disk system. The decomposition is taken from a scalar Nemesis load balance file generated by the companion utility nem_slice.

  8. Dataset for forensic analysis of B-tree file system.

    PubMed

    Wani, Mohamad Ahtisham; Bhat, Wasim Ahmad

    2018-06-01

    Since the B-tree file system (Btrfs) is set to become the de facto standard file system on Linux (and Linux-based) operating systems, a Btrfs dataset for forensic analysis is of great interest and immense value to the forensic community. This article presents a novel dataset for forensic analysis of Btrfs that was collected using a proposed data-recovery procedure. The dataset identifies various generalized and common file system layouts and operations, specific node-balancing mechanisms triggered, logical addresses of various data structures, on-disk records, recovered data as directory entries and extent data from leaf and internal nodes, and the percentage of data recovered.

  9. Optical Digital Image Storage System

    DTIC Science & Technology

    1991-03-18

    ...retaining a master negative copy of the microfilm. The Sony Corporation, the supplier of the optical disk media used in the ODISS project, claims... disk." During the ODISS project, several CMSR files stored on the Sony optical disks were read several thousand times with no loss of information

  10. Recent evolution of the offline computing model of the NOvA experiment

    DOE PAGES

    Habig, Alec; Norman, A.; Group, Craig

    2015-12-23

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. In addition, the current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.

  11. Recent Evolution of the Offline Computing Model of the NOvA Experiment

    NASA Astrophysics Data System (ADS)

    Habig, Alec; Norman, A.

    2015-12-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. The current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.

  12. Beyond a Terabyte File System

    NASA Technical Reports Server (NTRS)

    Powers, Alan K.

    1994-01-01

    The Numerical Aerodynamics Simulation Facility's (NAS) CRAY C916/1024 accesses a "virtual" on-line file system, which is expanding beyond a terabyte of information. This paper will present some options for fine-tuning the Data Migration Facility (DMF) to stretch the on-line disk capacity and explore the transitions to newer devices (STK 4490, ER90, RAID).

  13. Evaluating the effect of online data compression on the disk cache of a mass storage system

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Yesha, Yelena

    1994-01-01

    A trace-driven simulation of the disk cache of a mass storage system was used to evaluate the effect of an online compression algorithm on various performance measures. Traces from the system at NASA's Center for Computational Sciences were used to run the simulation, and disk cache hit ratios and the numbers of files and bytes migrating to tertiary storage were measured. The measurements were performed for both an LRU and a size-based migration algorithm. In addition to showing the effect of online data compression on the disk cache performance measures, the simulation provided insight into the characteristics of the interactive references, suggesting that hint-based prefetching algorithms are the only alternative for any future improvement of the disk cache hit ratio.
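
    The mechanism being measured is easy to mimic: in a trace-driven LRU cache, compressing each file by some factor lets more files fit, which raises the hit ratio. The toy simulator below illustrates this; sizes, capacity, and the compression factor are made-up inputs, not the paper's.

```python
# Toy trace-driven LRU disk-cache simulator: compress < 1.0 shrinks
# each file, so more of the trace fits in the cache and the hit ratio
# rises. All parameters are illustrative.
from collections import OrderedDict

def hit_ratio(trace, capacity: int, compress: float = 1.0) -> float:
    """trace: iterable of (file_id, size_bytes) access records."""
    cache: OrderedDict = OrderedDict()
    used = hits = total = 0
    for fid, size in trace:
        total += 1
        size = int(size * compress)
        if fid in cache:
            hits += 1
            cache.move_to_end(fid)               # refresh LRU position
        else:
            cache[fid] = size
            used += size
            while used > capacity and cache:     # evict least recently used
                _, evicted = cache.popitem(last=False)
                used -= evicted
    return hits / total if total else 0.0
```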

  14. Long-Term file activity patterns in a UNIX workstation environment

    NASA Technical Reports Server (NTRS)

    Gibson, Timothy J.; Miller, Ethan L.

    1998-01-01

    As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.

  15. Launching large computing applications on a disk-less cluster

    NASA Astrophysics Data System (ADS)

    Schwemmer, Rainer; Caicedo Carvajal, Juan Manuel; Neufeld, Niko

    2011-12-01

    The LHCb Event Filter Farm system is based on a cluster of on the order of 1,500 disk-less Linux nodes. Each node runs one instance of the filtering application per core. The number of cores is 8 per machine for the old cluster and 12 per machine on the extension of the cluster. Each instance has to load about 1,000 shared libraries, weighing 200 MB, from several directory locations in a central repository. The repository is currently hosted on a SAN and exported via NFS. The libraries are all available in the local file system cache on every node. Loading a library still causes a huge number of requests to the server, though, because the loader will probe every available path. Measurements show there are between 100,000 and 200,000 calls per application instance start-up. Multiplied by the number of cores in the farm, this translates into a veritable DDoS attack on the servers, which lasts several minutes. Since the application is restarted frequently, a better solution had to be found. Rolling out the software to the nodes is out of the question, because they have no disks and the software in its entirety is too large to put into a RAM disk. To solve this problem we developed a FUSE-based file system which acts as a permanent, controllable cache that keeps the essential files in stock.

  16. 28 CFR 51.20 - Form of submissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... megabyte MS-DOS formatted diskettes; 5 1/4″ 1.2 megabyte MS-DOS formatted floppy disks; nine-track tape... provided in hard copy. (c) All magnetic media shall be clearly labeled with the following information: (1... a disk operating system (DOS) file, it shall be formatted in a standard American Standard Code for...

  17. a Cache Design Method for Spatial Information Visualization in 3d Real-Time Rendering Engine

    NASA Astrophysics Data System (ADS)

    Dai, X.; Xiong, H.; Zheng, X.

    2012-07-01

    A well-designed cache system has a positive impact on a 3D real-time rendering engine, and the effect becomes more pronounced as the amount of visualization data grows: the caches are what allow the engine to browse smoothly through data that is out of core memory or comes from the internet. In this article a new kind of cache, based on multiple threads and large files, is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data currently being rendered by the engine; the pre-rendering cache stores the data dispatched according to the position of the viewpoint in the horizontal and vertical directions; the elimination cache stores the data evicted from the other two caches, which is then written to the disk cache. The disk cache uses multiple large files. When a disk cache file reaches the size limit (128 MB in the experiment), no items are evicted from it; instead, a new large cache file is created. If the number of large files exceeds a pre-set maximum, the earliest file is deleted from the disk. In this way only one file is open for both writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads update the cache data: threads load data into the rendering cache as soon as possible for rendering, load data into the pre-rendering cache for the next few frames, and load data that is not needed for the moment into the elimination cache. In our experiment two threads are used. The first organizes the memory cache according to the viewpoint and maintains two lists: an adding list that indexes the data to be loaded into the pre-rendering cache immediately, and a deleting list that indexes the data no longer visible in the rendered scene, which should be moved to the elimination cache. The second thread moves data between the memory and disk caches according to the adding and deleting lists, creates download requests when data indexed in the adding list is found in neither the memory cache nor the disk cache, and moves elimination cache data to the disk cache when the adding and deleting lists are empty. The cache designed as described above proved reliable and efficient in our experiment, and data loading and file I/O times decreased sharply, especially as the rendering data grew larger.
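
    The rotation of full, read-only large files can be shown in a few lines. The sketch below is a toy under assumed names and sizes, not the engine's code:

      # Toy three-part memory cache whose evicted items pass through an
      # elimination cache and are appended to the current large disk file.
      import os

      LIMIT = 128 * 2**20                      # 128 MB per large file, as in the paper

      class TileCache:
          def __init__(self, directory, max_files=4):
              self.rendering, self.prerendering, self.elimination = {}, {}, {}
              self.dir, self.max_files, self.seq = directory, max_files, 0
              self.current = open(self._name(0), "ab")   # sole writable file

          def _name(self, n):
              return os.path.join(self.dir, "cache_%04d.bin" % n)

          def evict(self, key):
              data = self.rendering.pop(key, None)
              if data is None:
                  data = self.prerendering.pop(key, None)
              if data is not None:
                  self.elimination[key] = data

          def flush_elimination(self):         # run when add/delete lists are empty
              for key, data in self.elimination.items():
                  if self.current.tell() + len(data) > LIMIT:
                      self.current.close()     # full file becomes read-only
                      self.seq += 1
                      if self.seq >= self.max_files:     # drop the earliest file
                          os.remove(self._name(self.seq - self.max_files))
                      self.current = open(self._name(self.seq), "ab")
                  self.current.write(data)
              self.elimination.clear()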

  18. Interactive cutting path analysis programs

    NASA Technical Reports Server (NTRS)

    Weiner, J. M.; Williams, D. S.; Colley, S. R.

    1975-01-01

    The operation of numerically controlled machine tools is interactively simulated. Four programs were developed to graphically display the cutting paths for a Monarch lathe, Cintimatic mill, Strippit sheet metal punch, and the wiring path for a Standard wire wrap machine. These programs are run on an IMLAC PDS-ID graphic display system under the DOS-3 disk operating system. The cutting path analysis programs accept input via both paper tape and disk file.

  19. AFTOMS Technology Issues and Alternatives Report

    DTIC Science & Technology

    1989-12-01

    (Excerpt; the scanned pages interleave two columns. Recoverable fragments: a list of display/hardware selection criteria -- color, resolution; memory, processor speed; LAN interfaces, etc.; power requirements; physical and weather ruggedness -- and part of the report's acronym list: ...Telephone and Telegraph; CD-I Compact Disk - Interactive; CD-ROM Compact Disk - Read Only Memory; CGM Computer Graphics Metafile; CNWDI Critical Nuclear...; ...Database Management System; RFP Request For Proposal; RFS Remote File System; ROM Read Only Memory; SA-ALC San Antonio Air Logistics Center; SAC...)

  20. Using Purpose-Built Functions and Block Hashes to Enable Small Block and Sub-file Forensics

    DTIC Science & Technology

    2010-01-01

    (Excerpt.) ...JPEGs. We tested precarve using the nps-2009-canon2-gen6 (Garfinkel et al., 2009) disk image. The disk image was created with a 32 MB SD card and a... analysis of n-grams in the fragment. [Fig. 1: usage of a 160 GB iPod as reported by iTunes 8.2.1 (6) (top), as reported by the file system (bottom center), and as computed with random sampling (bottom right); note that the iTunes figure is actually in GiB, even though the program displays a "GB" label.]

  1. Flexibility and Performance of Parallel File Systems

    NASA Technical Reports Server (NTRS)

    Kotz, David; Nieuwejaar, Nils

    1996-01-01

    As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions makes applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (APIs). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.
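
    Of the features named, the strided interfaces are the easiest to picture in code. A generic sketch of strided reading, not the authors' API:

      # Read `count` fixed-size records that are `stride` bytes apart,
      # starting at `offset` -- the access pattern a strided interface
      # expresses in one call instead of many seek/read pairs.
      def strided_read(f, offset, record_size, stride, count):
          out = []
          for i in range(count):
              f.seek(offset + i * stride)
              out.append(f.read(record_size))
          return b"".join(out)

    The point of pushing the whole pattern across the interface at once is that the file system can then schedule all the underlying disk accesses together instead of seeing them one at a time.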

  2. RAID-2: Design and implementation of a large scale disk array controller

    NASA Technical Reports Server (NTRS)

    Katz, R. H.; Chen, P. M.; Drapeau, A. L.; Lee, E. K.; Lutz, K.; Miller, E. L.; Seshan, S.; Patterson, D. A.

    1992-01-01

    We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5 inch disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19 inch racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance. We review the design decisions, history, and lessons learned from this three year university implementation effort to construct a truly large scale system assembly.
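
    The XOR calculation at the heart of such a controller is compact enough to show inline; the snippet below is a generic illustration of RAID parity, not the controller's firmware:

      # Parity is the XOR of all data blocks; any single lost block is
      # recovered by XOR-ing the parity with the surviving blocks.
      from functools import reduce

      def xor_blocks(blocks):
          return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)],
                              blocks, [0] * len(blocks[0])))

      data = [b"\x01\x02", b"\x0f\x0f", b"\xa0\x0a"]
      parity = xor_blocks(data)
      assert xor_blocks([parity, data[1], data[2]]) == data[0]   # rebuild block 0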

  3. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The total speed-ups from all improvements are significant: mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so are easily used and are available for download as open source software at http://mutil.sourceforge.net.
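
    The hash-tree trick that parallelizes the inherently serial checksum can be sketched directly: hash fixed-size chunks concurrently, then hash the concatenation of the chunk digests. This illustrates the technique only; the chunk size and digest layout are assumptions, not msum's format.

      # One-level hash tree: per-chunk MD5s computed in a thread pool,
      # then combined into a single root digest.
      import hashlib, os
      from concurrent.futures import ThreadPoolExecutor

      CHUNK = 64 * 2**20                       # 64 MB chunks (assumed)

      def chunk_digest(path, offset):
          with open(path, "rb") as f:
              f.seek(offset)
              return hashlib.md5(f.read(CHUNK)).digest()

      def tree_md5(path):
          offsets = range(0, os.path.getsize(path), CHUNK)
          with ThreadPoolExecutor() as pool:
              digests = pool.map(lambda o: chunk_digest(path, o), offsets)
          return hashlib.md5(b"".join(digests)).hexdigest()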

  4. The Computer: An Effective Research Assistant

    PubMed Central

    Gancher, Wendy

    1984-01-01

    The development of software packages such as data management systems and statistical packages has made it possible to process large amounts of research data. Data management systems make the organization and manipulation of such data easier. Floppy disks ease the problem of storing and retrieving records. Patient information can be kept confidential by limiting access to computer passwords linked with research files, or by using floppy disks. These attributes make the microcomputer essential to modern primary care research. PMID:21279042

  5. Resident Information Management System of Shibuya

    NASA Astrophysics Data System (ADS)

    Kokubo, Shoji

    An inhabitant-record image processing system using optical disks and a LAN was introduced at the Shibuya Ward Office and has been fully operational since March 1985. Inhabitant forms that have been filled in by hand are recorded on the optical disks and retrieved when necessary, so that residents' moving-in and moving-out business can be handled at any branch office, and the waiting time for issuance of an inhabitant form is markedly reduced. The optical file system is outlined first; then the system at the Ward Office and its operation are described.

  6. Modifications to the accuracy assessment analysis routine SPATL to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    SPATL is an analysis program in the Accuracy Assessment Software System which makes comparisons between ground truth information and dot labeling for an individual segment. In order to facilitate the aggregation of this information, SPATL was modified to produce a disk output file containing the necessary information about each segment.

  7. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval. The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree software will keep track of all files in the system, will automatically migrate the lesser-used files to archive media, and will stage the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost the system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).
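
    The migration policy at the heart of such a hierarchy can be sketched in a few lines; the age threshold, tiers and catalog below are assumptions for illustration, not UniTree's mechanism:

      # Hierarchical storage in miniature: files unused for more than
      # AGE_LIMIT seconds migrate from fast disk to the archive tier,
      # with their new location recorded in a catalog.
      import os, shutil, time

      AGE_LIMIT = 30 * 24 * 3600               # 30 days (assumed policy)

      def migrate(disk_dir, archive_dir, catalog):
          now = time.time()
          for name in os.listdir(disk_dir):
              src = os.path.join(disk_dir, name)
              if os.path.isfile(src) and now - os.path.getatime(src) > AGE_LIMIT:
                  dst = os.path.join(archive_dir, name)
                  shutil.move(src, dst)        # stage down to the archive tier
                  catalog[name] = dst          # users locate files via the catalog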

  8. Storage Media for Microcomputers.

    ERIC Educational Resources Information Center

    Trautman, Rodes

    1983-01-01

    Reviews computer storage devices designed to provide additional memory for microcomputers--chips, floppy disks, hard disks, optical disks--and describes how secondary storage is used (file transfer, formatting, ingredients of incompatibility); disk/controller/software triplet; magnetic tape backup; storage volatility; disk emulator; and…

  9. I/O performance evaluation of a Linux-based network-attached storage device

    NASA Astrophysics Data System (ADS)

    Sun, Zhaoyan; Dong, Yonggui; Wu, Jinglian; Jia, Huibo; Feng, Guanping

    2002-09-01

    In a Local Area Network (LAN), clients are permitted to access the files on high-density optical disks via a network server, but the quality of the read service offered by a conventional server is unsatisfactory because the server performs multiple functions and serves too many callers. This paper develops a Linux-based Network-Attached Storage (NAS) server. The Operating System (OS), composed of an optimized kernel and a miniaturized file system, is stored in a flash memory. After initialization, the NAS device is connected to the LAN, and the administrator and users can configure and access the server through web pages, respectively. In order to enhance the quality of access, the management of the buffer cache in the file system is optimized. Several benchmark programs are run to evaluate the I/O performance of the NAS device. Since data recorded on optical disks are usually accessed for reading, our attention is focused on the reading throughput of the device. The experimental results indicate that the I/O performance of our NAS device is excellent.

  10. The Cheetah Data Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunz, P.F.; Word, G.B.

    1991-03-01

    Cheetah is a data management system based on the C programming language. The premise of Cheetah is that the 'banks' of FORTRAN-based systems should be 'structures' as defined by the C language. Cheetah is a system to manage these structures while preserving the use of the C language in its native form. For C structures managed by Cheetah, the user can employ Cheetah utilities such as reading and writing, in a machine-independent form, both binary and text files to disk or over a network. Files written by Cheetah also contain a dictionary describing in detail the data contained in the file; such information is intended to be used by interactive programs for presenting the contents of the file. Cheetah has been ported to many different operating systems with no operating-system-dependent switches.

  11. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network-like fiber channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.

  12. Efficient proof of ownership for cloud storage systems

    NASA Astrophysics Data System (ADS)

    Zhong, Weiwei; Liu, Zhusong

    2017-08-01

    Cloud storage systems use deduplication technology to save disk space and bandwidth, but the use of this technology has attracted targeted security attacks: an attacker can deceive the server into granting ownership of a file merely by obtaining the hash value of the original file. In order to solve the above security problems and meet the different security requirements of files in a cloud storage system, an efficient and information-theoretically secure proof-of-ownership scheme supporting file rating is proposed. File rating is implemented with the K-means algorithm, and random-seed techniques and pre-computation are used to make the proof of ownership safe and efficient. The scheme is information-theoretically secure and achieves better performance in the most sensitive areas, client-side I/O and computation.

  13. MICE data handling on the Grid

    NASA Astrophysics Data System (ADS)

    Martyniak, J.; Mice Collaboration

    2014-06-01

    The international Muon Ionisation Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time, for application to a future Neutrino factory or Muon Collider. The experiment is currently under construction at the ISIS synchrotron at the Rutherford Appleton Laboratory (RAL), UK. In this paper we present a system - the Raw Data Mover, which allows us to store and distribute MICE raw data - and a framework for offline reconstruction and data management. The aim of the Raw Data Mover is to upload raw data files onto a safe tape storage as soon as the data have been written out by the DAQ system and marked as ready to be uploaded. Internal integrity of the files is verified and they are uploaded to the RAL Tier-1 Castor Storage Element (SE) and placed on two tapes for redundancy. We also make another copy at a separate disk-based SE at this stage to make it easier for users to access data quickly. Both copies are check-summed and the replicas are registered with an instance of the LCG File Catalog (LFC). On success a record with basic file properties is added to the MICE Metadata DB. The reconstruction process is triggered by new raw data records filled in by the mover system described above. Off-line reconstruction jobs for new raw files are submitted to RAL Tier-1 and the output is stored on tape. Batch reprocessing is done at multiple MICE enabled Grid sites and output files are shipped to central tape or disk storage at RAL using a custom File Transfer Controller.
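
    In outline, the mover's per-file cycle is verify, replicate, check, register. The sketch below captures that cycle generically; the function names, destinations and metadata callback are placeholders, not MICE code.

      # Schematic raw-data-mover step: checksum the file, copy it to two
      # destinations, verify each replica, then record basic metadata.
      import hashlib, os, shutil

      def md5sum(path):
          h = hashlib.md5()
          with open(path, "rb") as f:
              for block in iter(lambda: f.read(1 << 20), b""):
                  h.update(block)
          return h.hexdigest()

      def move_raw_file(path, tape_dir, disk_dir, register):
          digest = md5sum(path)                     # integrity reference
          replicas = []
          for dest in (tape_dir, disk_dir):         # tape SE and disk SE stand-ins
              target = os.path.join(dest, os.path.basename(path))
              shutil.copy2(path, target)
              assert md5sum(target) == digest       # verify the replica
              replicas.append(target)
          register(os.path.basename(path), os.path.getsize(path), digest, replicas)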

  14. Environmental Containment Property Estimation Using QSARs in an Expert System

    DTIC Science & Technology

    1993-01-15

    (Excerpt.) ...2 megabytes of memory (RAM), with 1000 kBytes of memory allocated for HyperCard. PEP overview: the PEP system currently consists of four HyperCard... [Figure 6: TSA module card from PEP.] The TSA module is also designed to accept files generated by other hardware/software... allocated to 1500 MB. Installation of PEP: PEP is typically shipped on one 3.5 inch 1.44 Megabyte floppy disk. To install PEP: 1. Insert the PEP disk into...

  15. An Improved B+ Tree for Flash File Systems

    NASA Astrophysics Data System (ADS)

    Havasi, Ferenc

    Nowadays mobile devices such as mobile phones, mp3 players and PDAs are becoming ever more common. Most of them use flash chips as storage. To store data efficiently on flash, it is necessary to adapt ordinary file systems because they are designed for use on hard disks. Most file systems use some kind of search tree to store index information, which is very important from a performance aspect. Here we improved the B+ search tree algorithm so as to make flash devices more efficient. Our implementation of this solution saves 98%-99% of the flash operations, and is now part of the Linux kernel.
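
    Why batching index updates saves flash operations can be seen in a small sketch. The buffering scheme below is a generic illustration under assumed parameters, not the kernel implementation:

      # Buffer index insertions in RAM and write whole pages to flash, so
      # k insertions cost about k/PAGE_ENTRIES page writes instead of k.
      PAGE_ENTRIES = 128                       # entries per flash page (assumed)

      class BatchedIndex:
          def __init__(self, flash_write):
              self.dirty = []                  # pending (key, value) updates
              self.flash_write = flash_write   # callable persisting one page
              self.page_writes = 0

          def insert(self, key, value):
              self.dirty.append((key, value))
              if len(self.dirty) >= PAGE_ENTRIES:
                  self.flush()

          def flush(self):                     # also called on commit/unmount
              if self.dirty:
                  self.flash_write(sorted(self.dirty))
                  self.page_writes += 1
                  self.dirty = []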

  16. On-Line Data Reconstruction in Redundant Disk Arrays.

    DTIC Science & Technology

    1994-05-01

    (Excerpt; the scanned text mixes prose with a parameter table. Recoverable prose: '...file servers that support a large number of clients with differing work schedules, and automated teller networks in banking systems...'; '...package and a set of scheduling and queueing routines.' Recoverable rows of Table 2.2, default array parameters: ...24KB; head scheduling: FIFO; user data layout: sequential in address space of array; disk spindles: synchronized. Section 2.3.3, Default workload: this dissertation reports on many performance evaluations; in order to...)

  17. [PVFS 2000: An operational parallel file system for Beowulf

    NASA Technical Reports Server (NTRS)

    Ligon, Walt

    2004-01-01

    The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises server and client components. BMI - BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking and provides mechanisms for optimizations, including pinning user buffers. Currently TCP/IP and GM (Myrinet) modules have been implemented. Trove - Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms, including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.
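
    Trove's split between bytestream data spaces and name/value pairs can be mimicked in a few lines. A rough sketch, with Python's dbm module standing in for Berkeley DB; none of the names below are PVFS2 API:

      # Trove-like storage abstraction: bytestream data spaces live in
      # native files, keyed attributes in a dbm database.
      import dbm, os

      class Store:
          def __init__(self, root):
              os.makedirs(root, exist_ok=True)
              self.root = root
              self.kv = dbm.open(os.path.join(root, "attrs"), "c")

          def write_dataspace(self, handle, offset, data):
              path = os.path.join(self.root, "ds_%d" % handle)
              mode = "r+b" if os.path.exists(path) else "wb"
              with open(path, mode) as f:      # write at an arbitrary offset
                  f.seek(offset)
                  f.write(data)

          def set_attr(self, key, value):
              self.kv[key] = value             # name/value pair

          def get_attr(self, key):
              return self.kv[key]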

  18. 32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Commercial Activities Management Information... to Part 169a—Commercial Activities Management Information System (CAMIS) Each DoD Component shall... American Standard Code Information Interchange text file format on a MicroSoft-Disk Operating System...

  19. VizieR Online Data Catalog: FARGO_THORIN 1.0 hydrodynamic code (Chrenko+, 2017)

    NASA Astrophysics Data System (ADS)

    Chrenko, O.; Broz, M.; Lambrechts, M.

    2017-07-01

    This archive contains the source files, documentation and example simulation setups of the FARGO_THORIN 1.0 hydrodynamic code. The program was introduced, described and used for simulations in the paper. It is built on top of the FARGO code (Masset, 2000A&AS..141..165M; Baruteau & Masset, 2008ApJ...672.1054B) and it is also interfaced with the REBOUND integrator package (Rein & Liu, 2012A&A...537A.128R). THORIN stands for Two-fluid HydrOdynamics, the Rebound integrator Interface and Non-isothermal gas physics. The program is designed for self-consistent investigations of protoplanetary systems consisting of a gas disk, a disk of small solid particles (pebbles) and embedded protoplanets. Code features: I) Non-isothermal gas disk with implicit numerical solution of the energy equation. The implemented energy source terms are: compressional heating, viscous heating, stellar irradiation, vertical escape of radiation, radiative diffusion in the midplane and radiative feedback to accretion heating of protoplanets. II) Planets evolved in 3D, with close encounters allowed. The orbits are integrated using the IAS15 integrator (Rein & Spiegel, 2015MNRAS.446.1424R). The code detects collisions among planets and resolves them as mergers. III) Refined treatment of the planet-disk gravitational interaction. The code uses a vertical averaging of the gravitational potential, as outlined in Muller & Kley (2012A&A...539A..18M). IV) Pebble disk represented by an Eulerian, pressureless and inviscid fluid. The pebble dynamics is affected by the Epstein gas drag and optionally by diffusive effects. We also implemented the drag back-reaction term in the Navier-Stokes equation for the gas. Archive summary:
    /in_relax - setup of the first example simulation
    /in_wplanet - setup of the second example simulation
    /src_main - source files of FARGO_THORIN
    /src_reb - source files of the REBOUND integrator package, linked with THORIN
    GNUGPL3 - GNU General Public License, version 3
    LICENSE - license agreement
    README - simple user's guide
    UserGuide.pdf - extended user's guide
    refman.pdf - programmer's guide
    (1 data file)

  20. Reliable file sharing in distributed operating system using web RTC

    NASA Astrophysics Data System (ADS)

    Dukiya, Rajesh

    2017-12-01

    Since the evolution of distributed operating systems, the distributed file system has become an important part of the operating system. P2P is a reliable way to share files in a distributed operating system. Introduced in 1999, it later became a topic of high research interest. A peer-to-peer network is a type of network in which peers share the network workload and related tasks. A P2P network can be as temporary as a bunch of computers connected by a USB (Universal Serial Bus) port to transfer files or enable disk sharing. Currently P2P requires a special network designed in a P2P way. Nowadays browsers have a big influence on our lives. In this project we study file-sharing mechanisms for distributed operating systems in web browsers, where we try to find performance bottlenecks; our research aims to improve the performance and scalability of file sharing in distributed file systems. Additionally, we discuss the scope of WebTorrent file sharing and free-riding in peer-to-peer networks.

  1. Development and evaluation of oral reporting system for PACS.

    PubMed

    Umeda, T; Inamura, K; Inamoto, K; Ikezoe, J; Kozuka, T; Kawase, I; Fujii, Y; Karasawa, H

    1994-05-01

    Experimental workstations for oral reporting and synchronized image filing have been developed and evaluated by radiologists and referring physicians. The file medium is a 5.25-inch rewritable magneto-optical disk of 600-MB capacity whose file format is in accordance with the IS&C specification. The results of the evaluation show that this system is superior to other existing methods of the same kind, such as transcribing, dictating, handwriting, typewriting and key selection. The most significant advantage of the system is that images and their interpretation are never separated. The first practical application, to the teaching file and the teaching conference, is contemplated at the Osaka University Hospital. This system is completely digital in terms of images, voices and demographic data, so that on-line transmission, off-line communication or filing to any database can easily be realized in a PACS environment. We are developing an integrated system with a speech recognizer connected to this digitized oral system.

  2. Archiving and Distributing Seismic Data at the Southern California Earthquake Data Center (SCEDC)

    NASA Astrophysics Data System (ADS)

    Appel, V. L.

    2002-12-01

    The Southern California Earthquake Data Center (SCEDC) archives and provides public access to earthquake parametric and waveform data gathered by the Southern California Seismic Network and since January 1, 2001, the TriNet seismic network, southern California's earthquake monitoring network. The parametric data in the archive includes earthquake locations, magnitudes, moment-tensor solutions and phase picks. The SCEDC waveform archive prior to TriNet consists primarily of short-period, 100-samples-per-second waveforms from the SCSN. The addition of the TriNet array added continuous recordings of 155 broadband stations (20 samples per second or less), and triggered seismograms from 200 accelerometers and 200 short-period instruments. Since the Data Center and TriNet use the same Oracle database system, new earthquake data are available to the seismological community in near real-time. Primary access to the database and waveforms is through the Seismogram Transfer Program (STP) interface. The interface enables users to search the database for earthquake information, phase picks, and continuous and triggered waveform data. Output is available in SAC, miniSEED, and other formats. Both the raw counts format (V0) and the gain-corrected format (V1) of COSMOS (Consortium of Organizations for Strong-Motion Observation Systems) are now supported by STP. EQQuest is an interface to prepackaged waveform data sets for select earthquakes in Southern California stored at the SCEDC. Waveform data for large-magnitude events have been prepared and new data sets will be available for download in near real-time following major events. The parametric data from 1981 to present has been loaded into the Oracle 9.2.0.1 database system and the waveforms for that time period have been converted to mSEED format and are accessible through the STP interface. The DISC optical-disk system (the "jukebox") that currently serves as the mass-storage for the SCEDC is in the process of being replaced with a series of inexpensive high-capacity (1.6 Tbyte) magnetic-disk RAIDs. These systems are built with PC-technology components, using 16 120-Gbyte IDE disks, hot-swappable disk trays, two RAID controllers, dual redundant power supplies and a Linux operating system. The system is configured over a private gigabit network that connects to the two Data Center servers and spans between the Seismological Lab and the USGS. To ensure data integrity, each RAID disk system constantly checks itself against its twin and verifies file integrity using 128-bit MD5 file checksums that are stored separate from the system. The final level of data protection is a Sony AIT-3 tape backup of the files. The primary advantage of the magnetic-disk approach is faster data access because magnetic disk drives have almost no latency. This means that the SCEDC can provide better "on-demand" interactive delivery of the seismograms in the archive.
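
    The twin-RAID integrity check comes down to comparing files against externally stored digests. A generic sketch of such a manifest check (the manifest format is an assumption):

      # Verify files against a separately stored manifest of MD5 sums,
      # one "<hex digest>  <path>" entry per line (md5sum style).
      import hashlib

      def md5_of(path):
          h = hashlib.md5()
          with open(path, "rb") as f:
              for block in iter(lambda: f.read(1 << 20), b""):
                  h.update(block)
          return h.hexdigest()

      def verify(manifest_path):
          bad = []
          with open(manifest_path) as manifest:
              for line in manifest:
                  digest, path = line.strip().split(None, 1)
                  try:
                      ok = md5_of(path) == digest
                  except OSError:              # missing or unreadable file
                      ok = False
                  if not ok:
                      bad.append(path)
          return bad                           # corrupted or missing files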

  3. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  4. Rootkit Detection Using a Cross-View Clean Boot Method

    DTIC Science & Technology

    2013-03-01

    (Excerpt; the scanned text fragments a list of rootkit hooking techniques -- IAT hooks, SSDT hooks on calls such as NtQueryDirectoryFile, code patching, layered drivers, and hooks down the storage stack: NTFS driver, volume manager, disk driver.) IAT hooks take advantage of function calls in applications [13]. When an... Appendix D, Windows source code -- the Windows batch file reads:
      @echo off
      python walk.py
      pause
      shutdown -r -t 0
    (followed by Walk.py, truncated).

  5. Development of the Large-Scale Statistical Analysis System of Satellites Observations Data with Grid Datafarm Architecture

    NASA Astrophysics Data System (ADS)

    Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.

    2006-12-01

    In the Solar-Terrestrial Physics (STP) field, the amount of satellite observation data has been increasing every year, and three problems must be solved to achieve large-scale statistical analyses of such data. (i) More CPU power and larger memory and disk sizes are required; personal computers are not powerful enough to analyze this amount of data, while super-computers provide high-performance CPUs and rich memory but are usually separated from the Internet or connected only for programming or data file transfer. (ii) Most of the observation data files are managed at data sites distributed over the Internet, so users have to know where the data files are located. (iii) Since no common data format is available in the STP field, users have to prepare a reading program for each data set themselves. To overcome problems (i) and (ii), we constructed a parallel and distributed data analysis environment based on the Gfarm reference implementation of the Grid Datafarm architecture. The Gfarm shares computational resources and performs parallel distributed processing. In addition, the Gfarm provides the Gfarm file system, which appears as a virtual directory tree spanning the nodes. The Gfarm environment is composed of three parts: a metadata server that manages information about the distributed files, filesystem nodes that provide computational resources, and a client that submits jobs to the metadata server and manages data-processing scheduling. In the present study, both data files and data processing were parallelized on a Gfarm with 6 filesystem nodes (each node: Pentium V 1 GHz CPU, 256 MB memory, 40 GB disk). To evaluate the performance of the Gfarm system, we scanned many data files, each about 300 MB in size, with three processing methods: sequential processing on one node, sequential processing on each node, and parallel processing on each node. Comparing the number of files against the elapsed time, parallel and distributed processing shortened the elapsed time to 1/5 of that of sequential processing. On the other hand, sequential processing was faster in another experiment in which each file was smaller than 100 KB; there the elapsed time to scan one file is within one second, which suggests that disk swapping took place with parallel processing on each node. We note that operation became unstable when the number of files exceeded 1000. To overcome problem (iii), we developed an original data class. This class supports reading data files in various formats by converting them into a common internal format: it defines schemata for every type of data and encapsulates the structure of the data files. In addition, since the class provides time re-sampling, users can easily convert multiple data arrays with different time resolutions to the same time resolution. Using the Gfarm, we achieved a high-performance environment for large-scale statistical data analyses, noting that the present method is effective only when individual data files are large enough. At present we are building a new Gfarm environment with 8 nodes (each: Athlon 64 X2 dual-core 2 GHz, 2 GB memory, 1.2 TB disk using RAID0), on which our original class is to be implemented.
    In the present talk, we show the latest results of applying this system to data analyses with a huge number of satellite observation data files.

  6. Designing for Peta-Scale in the LSST Database

    NASA Astrophysics Data System (ADS)

    Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.

    2007-10-01

    The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.

  7. Proof of cipher text ownership based on convergence encryption

    NASA Astrophysics Data System (ADS)

    Zhong, Weiwei; Liu, Zhusong

    2017-08-01

    Cloud storage systems save disk space and bandwidth through deduplication technology, but the use of this technology has attracted targeted security attacks: an attacker can obtain the original file by using only its hash value to deceive the server into granting file ownership. In order to solve the above security problems and meet the different security requirements of files in a cloud storage system, an efficient, information-theoretically secure proof-of-ownership scheme is proposed. The scheme protects the data through convergent encryption and uses an improved block-level proof-of-ownership protocol, enabling block-level client-side deduplication for an efficient and secure cloud storage deduplication scheme.
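
    The convergent-encryption building block is small enough to show. A sketch of the idea, assuming the third-party pycryptodome package; the parameter choices here are illustrative, not the paper's scheme:

      # Convergent encryption: the key is derived from the content, so
      # identical plaintexts yield identical ciphertexts -- which is what
      # lets the server deduplicate data it cannot read.
      import hashlib
      from Crypto.Cipher import AES

      def convergent_encrypt(data):
          key = hashlib.sha256(data).digest()        # content-derived key
          nonce = hashlib.sha256(key).digest()[:12]  # deterministic nonce
          ct, tag = AES.new(key, AES.MODE_GCM, nonce=nonce).encrypt_and_digest(data)
          return key, nonce + tag + ct               # keep `key` client-side

      def convergent_decrypt(key, blob):
          nonce, tag, ct = blob[:12], blob[12:28], blob[28:]
          return AES.new(key, AES.MODE_GCM, nonce=nonce).decrypt_and_verify(ct, tag)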

  8. LVFS: A Scalable Petabyte/Exabyte Data Storage System

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.

    2013-12-01

    Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. First, is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset, and, second, minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system. The LVFS system replaces the NFS disk mounting approach of LAADS and utilizes the already existing highly optimized metadata database server, which is applicable to most scientific big data intensive compute systems. Thus, LVFS ties the existing storage system with the existing metadata infrastructure system which we believe leads to a scalable exabyte virtual file system. The uniqueness of the implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing the Filesystem In Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument or product without the need for duplication, symbolic links, or any other replication methods. We proposed here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata file.

  9. Cardiopulmonary data acquisition system. Version 2.0, volume 2: Detailed software/hardware documentation

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Detailed software and hardware documentation for the Cardiopulmonary Data Acquisition System is presented. General wiring and timing diagrams are given including those for the LSI-11 computer control panel and interface cables. Flowcharts and complete listings of system programs are provided along with the format of the floppy disk file.

  10. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades such projects need to adjust their tools, systems, and expertise to properly integrate such new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructure resulting in more complex tool development to encompass all possible storage architectures used for the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) is an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS) which ensures a seamless integration of all the different storage architectures, including standard block based POSIX compliant storage disks, to object based architectures such as the S3 compliant HGST Active Archive System, and the Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools to utilize such new systems. Most file systems are designed as a single application responsible for using metadata to organizing the data into a tree, determine the location for data storage, and a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.

  11. Multisensory Public Access Catalogs on CD-ROM.

    ERIC Educational Resources Information Center

    Harrison, Nancy; Murphy, Brower

    1987-01-01

    BiblioFile Intelligent Catalog is a CD-ROM-based public access catalog system which incorporates graphics and sound to provide a multisensory interface and artificial intelligence techniques to increase search precision. The system can be updated frequently and inexpensively by linking hard disk drives to CD-ROM optical drives. (MES)

  12. Mark 6: A Next-Generation VLBI Data System

    NASA Astrophysics Data System (ADS)

    Whitney, A. R.; Lapsley, D. E.; Taveniku, M.

    2011-07-01

    A new real-time high-data-rate disk-array system based on entirely commercial-off-the-shelf hardware components is being evaluated for possible use as a next-generation VLBI data system. The system, developed by XCube Communications of Nashua, NH, USA, was originally designed for the automotive industry for testing/evaluation of autonomous driving systems that require continuous capture of an array of video cameras and automotive sensors at ~8Gbps from multiple 10GigE data links and other data sources. In order to sustain the required recording data rate, the system is designed to account for slow and/or failed disks by shifting the load to other disks as necessary in order to maintain the target data rate. The system is based on a Linux OS with some modifications to memory management and drivers in order to guarantee the timely movement of data, and the hardware/software combination is highly tuned to achieve the target data rate; data are stored in standard Linux files. A kit is also being designed that will allow existing Mark 5 disk modules to be modified to be used with the XCube system (though PATA disks will need to be replaced by SATA disks). Demonstrations of the system at Haystack Observatory and NRAO Socorro have proved very encouraging; some modest software upgrades/revisions are being made by XCube in order to meet VLBI-specific requirements. The system is easily expandable, with sustained 16 Gbps likely to be supported before the end of CY2011.

  13. The medium is NOT the message or Indefinitely long-term file storage at Leeds University

    NASA Technical Reports Server (NTRS)

    Holdsworth, David

    1996-01-01

    Approximately 3 years ago we implemented an archive file storage system which embodies experiences gained over more than 25 years of using and writing file storage systems. It is the third in-house system that we have written, and all three systems have been adopted by other institutions. This paper discusses the requirements for long-term data storage in a university environment, and describes how our present system is designed to meet these requirements indefinitely. Particular emphasis is laid on experiences from past systems, and their influence on current system design. We also look at the influence of the IEEE-MSS standard. We currently have the system operating in five UK universities. The system operates in a multi-server environment, and is currently operational with UNIX (SunOS4, Solaris2, SGI-IRIX, HP-UX), NetWare3 and NetWare4. PCs logged on to NetWare can also archive and recover files that live on their hard disks.

  14. Using compressed images in multimedia education

    NASA Astrophysics Data System (ADS)

    Guy, William L.; Hefner, Lance V.

    1996-04-01

    The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over the copied film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image format. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD-ROM depending on the quality of image required for the finding in question. This reduces the weight of a teaching file from that of a baby elephant to that of a single CD-ROM disc. Thus, with this method of teaching file preparation and storage the following advantages are realized: (1) Technically easier and less time consuming image reproduction. (2) Considerably less unwieldy and substantially more portable teaching files. (3) Novel ability to index files and then retrieve specific cases of choice based on descriptive text.

  15. 18 CFR 3b.223 - Fees.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...

  16. 18 CFR 3b.223 - Fees.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...

  17. 18 CFR 3b.223 - Fees.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...

  18. 18 CFR 3b.223 - Fees.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...

  19. 18 CFR 3b.223 - Fees.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...

  20. Incorporating the APS Catalog of the POSS I and Image Archive in ADS

    NASA Technical Reports Server (NTRS)

    Humphreys, Roberta M.

    1998-01-01

    The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data from the scans. The digitized images are processed into a set of coordinate-reference index and pixel files that are stored in run-length files, thus achieving an efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query into the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk with byte-offsets taken from the index files. These are assembled on-the-fly into a GIF image file for browser display, and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds. The FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 Gb of data. A set of web-page query forms is available on-line, as well as an on-line tutorial and documentation. The database is distributed to the Internet by a high-speed SGI server and a high-bandwidth disk system. URL is http://aps.umn.edu/IDB/. The image database software is written in perl and C and has been compiled on SGI computers under IRIX 5.3. A copy of the written documentation is included and the software is on the accompanying Exabyte tape.

  1. IBM NJE protocol emulator for VAX/VMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.

    1981-01-01

    Communications software has been written at Argonne National Laboratory to enable a VAX/VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE is actually a collection of programs that support job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any node in the network for printing, punching, or job submission, as well as to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously to allow users to perform other work while files are awaiting transmission. No changes are required to the IBM software.

  2. Nemesis I: Parallel Enhancements to ExodusII

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennigan, Gary L.; John, Matthew S.; Shadid, John N.

    2006-03-28

    NEMESIS I is an enhancement to the EXODUS II finite element database model used to store and retrieve data for unstructured parallel finite element analyses. NEMESIS I adds data structures which facilitate the partitioning of a scalar (standard serial) EXODUS II file onto parallel disk systems found on many parallel computers. Since the NEMESIS I application programming interface (API) can be used to append information to an existing EXODUS II file, existing software that reads EXODUS II files can be used on files which contain NEMESIS I information. The NEMESIS I information is written and read via C or C++ callable functions which comprise the NEMESIS I API.
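
    Because EXODUS II is built on top of netCDF, the compatibility claim above can be seen with any generic netCDF reader: the standard EXODUS II objects and the appended NEMESIS I data live side by side in the same file. A minimal sketch, with a placeholder file name:

        from netCDF4 import Dataset  # pip install netCDF4

        # List everything in the file, EXODUS II and NEMESIS I entries alike.
        with Dataset("mesh.par.4.0", "r") as exo:
            print("dimensions:", list(exo.dimensions))
            print("variables: ", list(exo.variables))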

  3. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach that instigates the lock contention problems on parallel file systems and having one file per process, which results in generating a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature on parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show performance advantages of 1.2X to 6X with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets used to store files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of the subfiling feature.
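
    A minimal sketch of the subfiling idea itself, independent of the HDF5 implementation evaluated in the paper: P writers are grouped onto S subfiles, a middle ground between one contended shared file and one file per process. The rank-to-subfile mapping and all names are invented for illustration.

        import numpy as np
        import h5py

        P, S = 8, 2                     # writers and subfiles (assumed values)
        rows_per_proc = 4
        data = np.arange(P * rows_per_proc * 3).reshape(P * rows_per_proc, 3)

        for rank in range(P):
            subfile = rank * S // P     # contiguous ranks share a subfile
            block = data[rank * rows_per_proc:(rank + 1) * rows_per_proc]
            with h5py.File("part_%d.h5" % subfile, "a") as f:
                f.create_dataset("rank_%d" % rank, data=block)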

  4. Experiments and Analyses of Data Transfers Over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata

    Dedicated wide-area network connections are increasingly employed in high-performance computing and big data scenarios. One might expect the performance and dynamics of data transfers over such connections to be easy to analyze due to the lack of competing traffic. However, non-linear transport dynamics and end-system complexities (e.g., multi-core hosts and distributed filesystems) can in fact make analysis surprisingly challenging. We present extensive measurements of memory-to-memory and disk-to-disk file transfers over 10 Gbps physical and emulated connections with 0–366 ms round trip times (RTTs). For memory-to-memory transfers, profiles of both TCP and UDT throughput as a function of RTT show concave and convex regions; large buffer sizes and more parallel flows lead to wider concave regions, which are highly desirable. TCP and UDT both also display complex throughput dynamics, as indicated by their Poincaré maps and Lyapunov exponents. For disk-to-disk transfers, we determine that high throughput can be achieved via a combination of parallel I/O threads, parallel network threads, and direct I/O mode. Our measurements also show that Lustre filesystems can be mounted over long-haul connections using LNet routers, although challenges remain in jointly optimizing file I/O and transport method parameters to achieve peak throughput.
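
    The concave and convex RTT profiles can be pictured with a back-of-the-envelope model in which per-flow throughput is window-limited at B/RTT and the aggregate is capped by link capacity; the numbers below are illustrative, not the paper's measurements.

        C = 10e9 / 8                  # 10 Gbps link capacity, bytes/s
        B = 64 * 1024 * 1024          # assumed 64 MiB buffer (window) per flow

        def aggregate_throughput(rtt_s, flows):
            # Each flow delivers at most B/RTT; the link caps the total.
            return min(flows * B / rtt_s, C)

        for rtt_ms in (1, 10, 50, 100, 200, 366):
            gbps = [aggregate_throughput(rtt_ms / 1e3, n) * 8 / 1e9 for n in (1, 4, 16)]
            print("RTT %4d ms -> 1/4/16 flows: %.2f / %.2f / %.2f Gbps"
                  % (rtt_ms, *gbps))

    Larger buffers and more parallel flows keep the aggregate pinned at capacity out to longer RTTs, which corresponds to the widening concave region the measurements show.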

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Lee H.; Laros, James H., III

    This paper describes a methodology for implementing disk-less cluster systems using the Network File System (NFS) that scales to thousands of nodes. This method has been successfully deployed and is currently in use on several production systems at Sandia National Labs. This paper will outline our methodology and implementation, discuss hardware and software considerations in detail and present cluster configurations with performance numbers for various management operations like booting.
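
    As an illustration of the general technique (not Sandia's specific configuration), a diskless Linux node typically mounts its root file system over NFS via kernel boot parameters along these lines, where the server address and export path are placeholders:

        root=/dev/nfs nfsroot=192.168.0.1:/srv/nfsroot,ro,vers=3 ip=dhcp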

  6. Disks and Outflows Around Young Stars

    NASA Astrophysics Data System (ADS)

    Beckwith, Steven; Staude, Jakob; Quetz, Axel; Natta, Antonella

    The subject of the book, the ubiquitous circumstellar disks around very young stars and the corresponding jets of outflowing matter, has recently become one of the hottest areas in astrophysics. The disks are thought to be precursors to planetary systems, and the outflows are thought to be a necessary phase in the formation of a young star, helping the star to get rid of angular momentum and energy as it makes its way onto the main sequence. The possible connections to planetary systems and stellar astrophysics make these topics especially broad, appealing to generalists and specialists alike. The CD not only contains papers that could not be printed in the book but allows the authors to include a fair amount of data, often displayed as color images. The CD-ROM contains all the contributions printed in the corresponding book (Lecture Notes in Physics Vol. 465) and, in addition, those presented exclusively in digital form. Each contribution consists of a file in portable document format (PDF). The electronic version allows full-text searching within each file using Adobe's Acrobat Reader; instructions are provided for installation on Unix (Sun), PC, and Macintosh computers. All contributions can be printed out; the color diagrams and color frames, which are printed in black and white in the book, can be viewed in color on screen.

  7. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut

    File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.
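
    The effect being optimized can be seen in miniature with a memory-mapped file: the same 2-D array traversed along its stored dimension is one contiguous run of bytes, while the cross traversal is strided, so a layout matched to the code's access pattern keeps the storage cache effective. A toy sketch with arbitrary names and sizes:

        import numpy as np

        n = 2048
        a = np.memmap("array.bin", dtype=np.float32, mode="w+", shape=(n, n))
        a[:] = 1.0
        a.flush()

        b = np.memmap("array.bin", dtype=np.float32, mode="r", shape=(n, n))
        row_sum = b[0, :].sum()   # sequential: one contiguous run of bytes
        col_sum = b[:, 0].sum()   # strided: touches every stored row of the file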

  8. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  9. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
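
    The flavor of estimate such a queueing model produces can be sketched by treating the disk cache and the tape robot as M/M/1 servers and weighting their response times by the cache hit ratio; all rates and service times below are invented.

        lam = 0.02                    # file requests per second (assumed)
        hit = 0.9                     # fraction served from the disk cache
        s_disk, s_tape = 0.05, 30.0   # assumed mean service times, seconds

        def mm1_response(arrival_rate, service_time):
            rho = arrival_rate * service_time    # utilization, must stay < 1
            return service_time / (1.0 - rho)

        t = (hit * mm1_response(lam * hit, s_disk)
             + (1 - hit) * mm1_response(lam * (1 - hit), s_tape))
        print("mean retrieval time: %.1f s" % t)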

  10. Is the bang worth the buck? A RAID performance study

    NASA Technical Reports Server (NTRS)

    Hauser, Susan E.; Berman, Lewis E.; Thoma, George R.

    1996-01-01

    Expecting a high data delivery rate as well as data protection, the Lister Hill National Center for Biomedical Communications procured a RAID system to house image files for image delivery applications. A study was undertaken to determine the configuration of the RAID system that would provide for the fastest retrieval of image files. Average retrieval times with single and with concurrent users were measured for several stripe widths and several numbers of disks for RAID levels 0, 0+1 and 5. These are compared to each other and to average retrieval times for non-RAID configurations of the same hardware. Although the study is ongoing, a few conclusions have emerged regarding the tradeoffs among the different configurations with respect to file retrieval speed and cost.

  11. Input/output behavior of supercomputing applications

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
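
    The write-behind result can be mimicked with a toy simulation: bursts of blocks arrive, the solid-state buffer drains to disk at a fixed rate, and the CPU stalls only when the buffer is full. All sizes below are invented.

        BUF = 8              # buffer capacity in blocks (assumed)
        DRAIN = 2            # blocks drained to disk per time step (assumed)
        bursts = [10, 0, 0, 6, 0, 0, 0, 12, 0, 0, 0, 0]

        level, stalled = 0, 0
        for arriving in bursts:
            accepted = min(arriving, BUF - level)
            stalled += arriving - accepted   # blocks the CPU must wait to submit
            level = max(0, level + accepted - DRAIN)
        print("blocks that stalled the CPU:", stalled)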

  12. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 2

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include the following: magnetic disk and tape technologies; optical disk and tape; software storage and file management systems; and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  13. NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 3

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)

    1992-01-01

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the National Space Science Data Center (NSSDC) Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

  14. Clementine High Resolution Camera Mosaicking Project. Volume 14; CL 6014; 0 deg N to 80 deg N Latitude, 270 deg E to 300 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".
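
    PDS labels of the kind attached to these image files are largely "keyword = value" text; a minimal reader, enough to pull simple fields from a tile's label, might look like the sketch below. Real PDS labels also contain nested OBJECT groups, units, and continuation lines, which this deliberately ignores.

        def read_pds_label(path):
            label = {}
            with open(path, "r", errors="replace") as f:
                for line in f:
                    line = line.strip()
                    if line == "END":          # labels end with a bare END
                        break
                    if "=" in line:
                        key, _, value = line.partition("=")
                        label[key.strip()] = value.strip().strip('"')
            return label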

  15. Clementine High Resolution Camera Mosaicking Project. Volume 17; CL 6017; 0 deg to 80 deg S Latitude, 330 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  16. Clementine High Resolution Camera Mosaicking Project. Volume 15; CL 6015; 0 deg S to 80 deg S Latitude, 270 deg E to 300 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  17. Clementine High Resolution Camera Mosaicking Project. Volume 13; CL 6013; 0 deg S to 80 deg S Latitude, 240 deg to 270 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  18. Clementine High Resolution Camera Mosaicking Project. Volume 18; CL 6018; 80 deg N to 80 deg S Latitude, 330 deg E to 360 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  19. Clementine High Resolution Camera Mosaicking Project. Volume 12; CL 6012; 0 deg N to 80 deg N Latitude, 240 deg to 270 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  20. Clementine High Resolution Camera Mosaicking Project. Volume 10; CL 6010; 0 deg N to 80 deg N Latitude, 210 deg E to 240 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  1. Clementine High Resolution Camera Mosaicking Project. Volume 16; CL 6016; 0 deg N to 80 deg N Latitude, 300 deg E to 330 deg E Longitude; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Malin Space Science Systems (MSSS) effort to mosaic Clementine I high resolution (HiRes) camera lunar images. These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. These mosaics are spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel. The geometric control is provided by the 100 m/pixel U.S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD were compiled from sub-polar data (latitudes 80 degrees South to 80 degrees North; -80 to +80) within the longitude range 0-30 deg E. The mosaics are divided into tiles that cover approximately 1.75 degrees of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. This CD contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  2. Simple, Script-Based Science Processing Archive

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Hegde, Mahabaleshwara; Barth, C. Wrandle

    2007-01-01

    The Simple, Scalable, Script-based Science Processing (S4P) Archive (S4PA) is a disk-based archival system for remote sensing data. It is based on the data-driven framework of S4P and is used for data transfer, data preprocessing, metadata generation, data archiving, and data distribution. New data are automatically detected by the system. S4P provides services such as data access control, data subscription, metadata publication, data replication, and data recovery. It comprises scripts that control the data flow. The system detects the availability of data on an FTP (file transfer protocol) server, initiates data transfer, preprocesses data if necessary, and archives it on readily available disk drives with FTP and HTTP (Hypertext Transfer Protocol) access, allowing instantaneous data access. There are options for plug-ins for data preprocessing before storage. Publication of metadata to external applications such as the Earth Observing System Clearinghouse (ECHO) is also supported. S4PA includes a graphical user interface for monitoring the system operation and a tool for deploying the system. To ensure reliability, S4P continuously checks stored data for integrity. Further reliability is provided by tape backups of disks made once a disk partition is full and closed. The system is designed for low maintenance, requiring minimal operator oversight.
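
    The data-detection step can be sketched with the standard library's FTP client: poll a directory, report files not seen before, and hand them to the transfer and preprocessing scripts. The host, directory, and anonymous login are placeholders, not S4PA's actual configuration.

        from ftplib import FTP

        def poll_new_files(seen, host="ftp.example.org", directory="/incoming"):
            ftp = FTP(host)
            ftp.login()                    # anonymous login
            current = set(ftp.nlst(directory))
            ftp.quit()
            new = current - seen           # files not seen on earlier polls
            seen |= current
            return sorted(new)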

  3. High-Speed Recording of Test Data on Hard Disks

    NASA Technical Reports Server (NTRS)

    Lagarde, Paul M., Jr.; Newnan, Bruce

    2003-01-01

    Disk Recording System (DRS) is a systems-integration computer program for a direct-to-disk (DTD) high-speed data acquisition system (HDAS) that records rocket-engine test data. The HDAS consists partly of equipment originally designed for recording the data on tapes. The tape recorders were replaced with hard-disk drives, necessitating the development of DRS to provide an operating environment that ties two computers, a set of five DTD recorders, and signal-processing circuits from the original tape-recording version of the HDAS into one working system. DRS includes three subsystems: (1) one that generates a graphical user interface (GUI), on one of the computers, that serves as a main control panel; (2) one that generates a GUI, on the other computer, that serves as a remote control panel; and (3) a data-processing subsystem that performs tasks on the DTD recorders according to instructions sent from the main control panel. The software affords capabilities for dynamic configuration to record single or multiple channels from a remote source, remote starting and stopping of the recorders, indexing to prevent overwriting of data, and production of filtered frequency data from an original time-series data file.

  4. On Data Transfers Over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang

    Dedicated wide-area network connections are employed in big data and high-performance computing scenarios, since the absence of cross-traffic promises to make it easier to analyze and optimize data transfers over them. However, nonlinear transport dynamics and end-system complexity due to multi-core hosts and distributed file systems make these tasks surprisingly challenging. We present an overview of methods to analyze memory and disk file transfers using extensive measurements over 10 Gbps physical and emulated connections with 0–366 ms round trip times (RTTs). For memory transfers, we derive performance profiles of TCP and UDT throughput as a function of RTT, which show concave regions in contrast to entirely convex regions predicted by previous models. These highly desirable concave regions can be expanded by utilizing large buffers and more parallel flows. We also present Poincaré maps and Lyapunov exponents of TCP and UDT throughput traces that indicate complex throughput dynamics. For disk file transfers, we show that throughput can be optimized using a combination of parallel I/O and network threads under direct I/O mode. Our initial throughput measurements of Lustre filesystems mounted over long-haul connections using LNet routers show convex profiles indicative of I/O limits.

  5. The Mark 3 data base handler

    NASA Technical Reports Server (NTRS)

    Ryan, J. W.; Ma, C.; Schupler, B. R.

    1980-01-01

    A data base handler which would act to tie Mark 3 system programs together is discussed. The data base handler is written in FORTRAN and is implemented on the Hewlett-Packard 21MX and the IBM 360/91. The system design objectives were to (1) provide for an easily specified method of data interchange among programs, (2) provide for a high level of data integrity, (3) accommodate changing requirements, (4) promote program accountability, (5) provide a single source of program constants, and (6) provide a central point for data archiving. The system consists of two distinct parts: a set of files existing on disk packs and tapes; and a set of utility subroutines which allow users to access the information in these files. Users never directly read or write the files and need not know the details of how the data are formatted in the files. To the users, the storage medium is format free. A user does need to know something about the sequencing of his data in the files but nothing about data in which he has no interest.

  6. Reducing I/O variability using dynamic I/O path characterization in petascale storage systems

    DOE PAGES

    Son, Seung Woo; Sehrish, Saba; Liao, Wei-keng; ...

    2016-11-01

    In petascale systems with a million CPU cores, scalable and consistent I/O performance is becoming increasingly difficult to sustain, mainly because of I/O variability. This variability is caused by concurrently running processes/jobs competing for I/O, or by a RAID rebuild when a disk drive fails. We present a mechanism that stripes across a selected subset of I/O nodes with the lightest workload at runtime to achieve the highest I/O bandwidth available in the system. In this paper, we propose a probing mechanism to enable application-level dynamic file striping to mitigate I/O variability. We also implement the proposed mechanism in the high-level I/O library that enables memory-to-file data layout transformation and allows transparent file partitioning using subfiling. Subfiling is a technique that partitions data into a set of smaller files and manages access to them, so that the data can still be treated as a single, normal file by users. Here, we demonstrate that our bandwidth probing mechanism can successfully identify temporally slower I/O nodes without noticeable runtime overhead. Experimental results on NERSC's systems also show that our approach isolates I/O variability effectively on shared systems and improves overall collective I/O performance with less variation.
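
    A crude version of the probing idea, sketched below under the assumption of a Lustre scratch directory: set the default stripe count with the standard lfs tool, time a small synced write, and keep the fastest setting. Paths and the probe size are placeholders, and a real implementation probes individual I/O nodes rather than whole stripe settings.

        import os, subprocess, time

        def probe(directory, stripe_count, nbytes=64 * 1024 * 1024):
            subprocess.run(["lfs", "setstripe", "-c", str(stripe_count), directory],
                           check=True)       # default striping for new files here
            path = os.path.join(directory, "probe.dat")
            if os.path.exists(path):
                os.remove(path)              # striping only applies to new files
            t0 = time.time()
            with open(path, "wb") as f:
                f.write(os.urandom(nbytes))
                f.flush()
                os.fsync(f.fileno())         # include the flush to disk in the timing
            return nbytes / (time.time() - t0)   # bytes per second

        best = max((1, 4, 8, 16), key=lambda c: probe("/lustre/scratch/probe", c))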

  7. DUST DISK AROUND A BLACK HOLE IN GALAXY NGC 4261

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This is a Hubble Space Telescope image of an 800-light-year-wide spiral-shaped disk of dust fueling a massive black hole in the center of the galaxy NGC 4261, located 100 million light-years away in the direction of the constellation Virgo. By measuring the speed of gas swirling around the black hole, astronomers calculate that the object at the center of the disk is 1.2 billion times the mass of our Sun, yet concentrated into a region of space not much larger than our solar system. The strikingly geometric disk -- which contains enough mass to make 100,000 stars like our Sun -- was first identified in Hubble observations made in 1992. These new Hubble images reveal for the first time structure in the disk, which may be produced by waves or instabilities in the disk. Hubble also reveals that the disk and black hole are offset from the center of NGC 4261, implying that some sort of dynamical interaction is taking place that has yet to be fully explained. Credit: L. Ferrarese (Johns Hopkins University) and NASA. Image files in GIF and JPEG format, captions, and press release text may be accessed on the Internet via anonymous ftp from oposite.stsci.edu in /pubinfo:

  8. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive size locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing, since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache™ Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache™ Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data-ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URLs (e.g., OPeNDAP model.nc?varname URLs, or local files). The specified variables are partitioned by time/space, and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several N-dimensional array libraries (Scala Breeze, Java jblas & netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the PySpark side, many of our science codes already use the numpy and SciPy ecosystems. The talk will cover: the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory & disk usage.
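
    The parallel-ingest pattern reads naturally as a Spark pipeline; the sketch below partitions a list of dataset URLs across the cluster, loads one variable per element, and reduces to a global statistic. The URLs and variable name are placeholders, and opening OPeNDAP URLs requires a netCDF4 build with DAP support.

        import numpy as np
        from netCDF4 import Dataset
        from pyspark import SparkContext

        sc = SparkContext(appName="scispark-sketch")
        urls = ["http://example.org/model_%02d.nc" % i for i in range(12)]

        def mean_of(url, varname="tas"):
            # Each cluster element opens one dataset and reduces it locally.
            with Dataset(url) as ds:
                return float(np.mean(ds.variables[varname][:]))

        global_mean = sc.parallelize(urls).map(mean_of).mean()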

  9. Radiotherapy supporting system based on the image database using IS&C magneto-optical disk

    NASA Astrophysics Data System (ADS)

    Ando, Yutaka; Tsukamoto, Nobuhiro; Kunieda, Etsuo; Kubo, Atsushi

    1994-05-01

    Since radiation oncologists make treatment plans from prior experience, information about previous cases is helpful in planning radiation treatment. We have developed a supporting system for radiation therapy. The case-based reasoning method was implemented in order to search the treatments and images of past cases. This system evaluates similarities between the current case and all stored cases (the case base). The portal images of similar cases can be retrieved as reference images, as well as treatment records which show examples of the radiation treatment. With this system radiotherapists can easily make suitable radiation therapy plans. The system is useful for preventing inaccurate planning due to preconceptions and/or lack of knowledge. Images are stored on magneto-optical disks, and the demographic data is recorded on the hard disk of the personal computer. Images can be displayed quickly on the radiotherapist's demand. The radiation oncologist can refer to past cases recorded in the case base and decide the radiation treatment of the current case. The file and data format of the magneto-optical disk is the IS&C format. This format provides interchangeability and reproducibility of medical information, including images and other demographic data.
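
    The similarity step can be pictured as a weighted distance over a few treatment-relevant features, with the closest stored cases surfaced for reference; the features, weights, and values below are invented for illustration and are not the system's actual case schema.

        weights = {"tumor_size_cm": 1.0, "depth_cm": 0.5, "age": 0.1}

        def similarity(case, query):
            d = sum(w * abs(case[k] - query[k]) for k, w in weights.items())
            return 1.0 / (1.0 + d)      # 1.0 means an exact feature match

        case_base = [
            {"id": 1, "tumor_size_cm": 3.0, "depth_cm": 5.0, "age": 62},
            {"id": 2, "tumor_size_cm": 1.5, "depth_cm": 2.0, "age": 45},
        ]
        query = {"tumor_size_cm": 2.8, "depth_cm": 4.5, "age": 60}
        ranked = sorted(case_base, key=lambda c: similarity(c, query), reverse=True)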

  10. The CDF Run II disk inventory manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul Hubbard and Stephan Lammel

    2001-11-02

    The Collider Detector at Fermilab (CDF) experiment records and analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year. The duration of the run is expected to be over two years. One of the main data handling strategies of CDF for Run II is to hide all tape access from the user and to facilitate sharing of data and thus disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, user and administrator command-line interfaces, and a library with the routines of the client API. Data are managed in filesets, which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF the main staging method is based on the mt-tools package, as tapes are written according to the ANSI standard.
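
    The core policy (keep frequently accessed filesets on disk, stage from tape on a miss, evict the least recently used when space runs out) can be sketched in a few lines; the class name and capacity are stand-ins, and the real manager also coordinates concurrent access and in-flight stages.

        from collections import OrderedDict

        class DiskInventory:
            def __init__(self, capacity_filesets):
                self.capacity = capacity_filesets
                self.on_disk = OrderedDict()           # fileset -> None, LRU order

            def access(self, fileset):
                if fileset in self.on_disk:
                    self.on_disk.move_to_end(fileset)  # hit: mark recently used
                    return "hit"
                if len(self.on_disk) >= self.capacity:
                    self.on_disk.popitem(last=False)   # evict LRU fileset from disk
                self.on_disk[fileset] = None           # miss: stage back from tape
                return "staged"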

  11. Towards more stable operation of the Tokyo Tier2 center

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Mashimo, T.; Matsui, N.; Sakamoto, H.; Ueda, I.

    2014-06-01

    The Tokyo Tier2 center, which is located at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo, was established as a regional analysis center in Japan for the ATLAS experiment. Official operation with WLCG started in 2007, after several years of development beginning in 2002. In December 2012, we replaced almost all hardware in the third system upgrade to handle analysis of the ATLAS experiment's ever-growing data. The number of CPU cores was doubled (9984 cores in total), and the performance of each CPU core improved by 20% according to the HEPSPEC06 benchmark in 32-bit compile mode; the score is estimated as 18.03 (SL6) per core using the Intel Xeon E5-2680 at 2.70 GHz. Since all worker nodes use a 16-core configuration, we deployed 624 blade servers in total. They are connected to 6.7 PB of disk storage through a non-blocking 10 Gbps internal network backbone built on two central network switches (NetIron MLXe-32). The disk storage consists of 102 RAID6 disk arrays (Infortrend DS S24F-G2840-4C16DO0) served by an equal number of 1U file servers with 8G-FC connections to maximize file transfer throughput per unit of storage capacity. As of February 2013, 2560 CPU cores and 2.00 PB of disk storage had been deployed for WLCG. The remaining non-grid CPU and disk resources are currently dedicated to data analysis by the ATLAS Japan collaborators. Since all hardware in the non-grid resources has the same architecture as the Tier2 resources, it can be migrated as extra Tier2 capacity on demand of the ATLAS experiment in the future. In addition to the upgrade of computing resources, we expect improved wide-area network connectivity. Thanks to the Japanese NREN (NII), another 10 Gbps trans-Pacific line from Japan to Washington will become available in addition to the existing two 10 Gbps lines (Tokyo to New York and Tokyo to Los Angeles). The new line will be connected to LHCONE to further improve connectivity. In these circumstances, we are working toward further stable operation. For instance, we have newly introduced GPFS (IBM) for the non-grid disk storage, while Disk Pool Manager (DPM) continues to be used for the Tier2 disk storage, as in the previous system. Since the number of files stored in a DPM pool grows with the total amount of data, a stable database configuration is one of the crucial issues, as is scalability. We have started studies of asynchronous database replication performance so that we can take a daily full backup. In this report, we introduce several improvements in the performance and stability of the new system, the possibility of further improving local I/O performance on the multi-core worker nodes, and the status of wide-area network connectivity from Japan to the US and/or EU with LHCONE.

  12. DICOM implementation on online tape library storage system

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Dai, Hailei L.; Elghammer, David; Levine, Betty A.; Mun, Seong K.

    1998-07-01

    The main purpose of this project is to implement a Digital Imaging and Communications in Medicine (DICOM) compliant online tape library system over the Internet. Once finished, the system will be used to store medical exams generated from the U.S. Army Mobile Army Surgical Hospital (MASH) in Tuzla, Bosnia. A modified UC Davis implementation of the DICOM storage class is used for this project. DICOM storage class user and provider are implemented as the system's interface to the Internet. The DICOM software provides flexible configuration options such as types of modalities and trusted remote DICOM hosts. Metadata is extracted from each exam and indexed in a relational database for query and retrieve purposes. The medical images are stored inside the Wolfcreek-9360 tape library system from StorageTek Corporation. The tape library system has nearline access to more than 1000 tapes. Each tape has a capacity of 800 megabytes, making the total nearline tape capacity around 1 terabyte. The tape library uses the Application Storage Manager (ASM), which provides cost-effective file management, storage, archival, and retrieval services. ASM automatically and transparently copies files from expensive magnetic disk to the less expensive nearline tape library, and restores the files when they are needed. The ASM also provides a crash recovery tool, which enables an entire file system restore in a short time. A graphical user interface (GUI) function is used to view the contents of the storage systems. This GUI also allows the user to retrieve the stored exams and send them anywhere on the Internet using DICOM protocols. With the integration of the different components of the system, we have implemented a high capacity online tape library storage system that is flexible and easy to use. Using tape as an alternative storage medium, as opposed to magnetic disk, has great potential for cost savings in terms of dollars per megabyte of storage. As this system matures, the Hospital Information Systems/Radiology Information Systems (HIS/RIS) or other components can be developed as interfaces to the outside world, thus widening the usage of the tape library system.
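
    The metadata-indexing step might look like the sketch below: read a few identifying fields from each received DICOM file and insert them into a relational table for query/retrieve. The table layout is an assumption, and a real archive would also record each exam's tape location.

        import sqlite3
        import pydicom   # pip install pydicom

        db = sqlite3.connect("index.db")
        db.execute("CREATE TABLE IF NOT EXISTS exams "
                   "(patient_id TEXT, study_uid TEXT, modality TEXT, path TEXT)")

        def index_file(path):
            # Header fields are enough for indexing; skip the pixel data.
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            db.execute("INSERT INTO exams VALUES (?, ?, ?, ?)",
                       (ds.PatientID, ds.StudyInstanceUID, ds.Modality, path))
            db.commit()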

  13. Computer assisted audit techniques for UNIX (UNIX-CAATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polk, W.T.

    1991-12-31

    Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found that improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.
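
    One file system check in this spirit is a scan for world-writable regular files, a common weak-permission finding; the root path below is a placeholder, and a real audit also examines ownership, setuid/setgid bits, and more.

        import os
        import stat

        def world_writable(root="/usr/local"):
            findings = []
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        mode = os.lstat(path).st_mode
                    except OSError:
                        continue               # unreadable entries are skipped
                    if stat.S_ISREG(mode) and mode & stat.S_IWOTH:
                        findings.append(path)
            return findings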

  14. Computer assisted audit techniques for UNIX (UNIX-CAATS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polk, W.T.

    1991-01-01

    Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found that improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.

  15. WriteShield: A Pseudo Thin Client for Prevention of Information Leakage

    NASA Astrophysics Data System (ADS)

    Kirihata, Yasuhiro; Sameshima, Yoshiki; Onoyama, Takashi; Komoda, Norihisa

    While thin-client systems are diffusing as an effective security measure in enterprises and organizations, there is a new approach called the pseudo thin-client system. In this system, the local disks of clients are write-protected and user data must be saved on the central file server, realizing the same security effect as conventional thin-client systems. Since it takes a purely software-based approach, it does not require hardware enhancement of the network and servers, which reduces the installation cost. However, there are several problems, such as the lack of write control for external media, the possibility of memory depletion, and lower security because system processes retain exceptional write permission. In this paper, we propose WriteShield, a pseudo thin-client system which solves these issues. In this system, the local disks are write-protected with a volume filter driver, and a virtual cache mechanism extends the memory cache size for the write protection. This paper presents the design and implementation details of WriteShield. We also describe a security analysis and a simulation evaluation of paging algorithms for the virtual cache mechanism, and measure the disk I/O performance to verify its feasibility in an actual environment.
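
    The kind of paging evaluation mentioned above can be reproduced in miniature by replaying a page-reference string against FIFO and LRU replacement and comparing fault counts; the trace and cache size below are invented.

        from collections import OrderedDict, deque

        def faults_fifo(trace, frames):
            mem, q, faults = set(), deque(), 0
            for page in trace:
                if page not in mem:
                    faults += 1
                    if len(mem) >= frames:
                        mem.discard(q.popleft())   # evict the oldest page
                    mem.add(page)
                    q.append(page)
            return faults

        def faults_lru(trace, frames):
            mem, faults = OrderedDict(), 0
            for page in trace:
                if page in mem:
                    mem.move_to_end(page)          # hit: mark recently used
                else:
                    faults += 1
                    if len(mem) >= frames:
                        mem.popitem(last=False)    # evict least recently used
                    mem[page] = None
            return faults

        trace = [1, 2, 3, 1, 4, 1, 2, 5, 1, 2, 3, 4, 5]
        print(faults_fifo(trace, 3), faults_lru(trace, 3))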

  16. Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Blackwell, Kim; Blasso, Len (Editor); Lipscomb, Ann (Editor)

    1991-01-01

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

  17. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning fast Big Data technology called SciSpark based on Apache™ Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache™ Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This 2nd generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) decrease the time to compute comparison statistics and plots from minutes to seconds; (2) allow for interactive exploration of time-series properties over seasons and years; (3) decrease the time for satellite data ingestion into RCMES to hours; (4) allow for Level-2 comparisons with higher-order statistics or PDFs in minutes to hours; and (5) move RCMES into a near-real-time decision-making platform. We will report on: the architecture and design of SciSpark, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory and disk usage.
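
    To make the in-memory point concrete, here is a minimal PySpark sketch (with hypothetical data; this is not SciSpark's code): caching an RDD lets several analysis passes reuse it without re-reading from disk, which is the property the abstract credits for Spark's speedup over Hadoop.

      # Minimal PySpark sketch of in-memory iterative computation.
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("grid-stats-sketch").getOrCreate()
      sc = spark.sparkContext

      # Pretend each record is (grid_cell_id, temperature) from a partitioned read.
      obs = sc.parallelize([(i % 100, 250.0 + (i % 37) * 0.5) for i in range(10000)])
      obs.cache()  # keep in memory across the iterative passes below

      # Pass 1: per-cell mean temperature.
      sums = obs.mapValues(lambda t: (t, 1)).reduceByKey(
          lambda a, b: (a[0] + b[0], a[1] + b[1]))
      means = sums.mapValues(lambda s: s[0] / s[1])

      # Pass 2 (reuses the cached obs): per-cell count above a threshold.
      hot = obs.filter(lambda kv: kv[1] > 260.0).countByKey()

      print(means.take(3), list(hot.items())[:3])
      spark.stop()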

  18. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high-performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution, permitting both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); throughput that matches the emerging high-performance disk technologies, such as RAID, parallel head transfer devices, and software striping; support for standard network and file system applications using a sockets-based application program interface, such as FTP, rcp, rdump, etc.; access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.

  19. NJE; VAX-VMS IBM NJE network protocol emulator. [DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.; Raffenetti, C.

    NJE is communications software developed to enable a VAX VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE supports job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any network node for printing, punching, or job submission, or to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously. No changes are required to the IBM software. DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%); VMS 2.5; VAX11/780 with DUP-11 UNIBUS interface and 9600 baud synchronous modem.

  20. Engine structures analysis software: Component Specific Modeling (COSMO)

    NASA Astrophysics Data System (ADS)

    McKnight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-08-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  1. Engine Structures Analysis Software: Component Specific Modeling (COSMO)

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Schwartz, S.

    1994-01-01

    A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.

  2. Sharing lattice QCD data over a widely distributed file system

    NASA Astrophysics Data System (ADS)

    Amagasa, T.; Aoki, S.; Aoki, Y.; Aoyama, T.; Doi, T.; Fukumura, K.; Ishii, N.; Ishikawa, K.-I.; Jitsumoto, H.; Kamano, H.; Konno, Y.; Matsufuru, H.; Mikami, Y.; Miura, K.; Sato, M.; Takeda, S.; Tatebe, O.; Togawa, H.; Ukawa, A.; Ukita, N.; Watanabe, Y.; Yamazaki, T.; Yoshie, T.

    2015-12-01

    JLDG is a data-grid for the lattice QCD (LQCD) community in Japan. Several large research groups in Japan have been working on lattice QCD simulations using supercomputers distributed over distant sites. The JLDG provides such collaborations with an efficient method of data management and sharing. File servers installed at 9 sites are connected to the NII SINET VPN and are bound into a single file system with Gfarm. The file system looks the same from any site, so that users can run analyses on a supercomputer at one site using data generated and stored in the JLDG at a different site. We present a brief description of the hardware and software of the JLDG, including a recently developed subsystem for cooperating with the HPCI shared storage, and report performance and statistics of the JLDG. As of April 2015, 15 research groups (61 users) store their daily research data, amounting to 4.7PB including replicas and 68 million files in total. The number of publications from work that used the JLDG is 98. The large number of publications and the recent rapid increase of disk usage convince us that the JLDG has grown into a useful infrastructure for the LQCD community in Japan.

  3. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  4. Biological Investigations of Adaptive Networks: Neuronal Control of Conditioned Responses

    DTIC Science & Technology

    1989-07-01

    The program also controls A/D sampling of the voltage trace from the NMR transducer and disk files for NMR, neural spikes, and synchronization. • HSAD: Basic...format which ANALYZE (by John Desmond) can read. • FIG.HIRES: Reads C-64 HSAD files and EVENT NMR files and generates oscilloscope-like figures showing

  5. An analysis of file migration in a UNIX supercomputing environment

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1992-01-01

    The supercomputer center at the National Center for Atmospheric Research (NCAR) migrates large numbers of files to and from its mass storage system (MSS) because there is insufficient space to store them on the Cray supercomputer's local disks. This paper presents an analysis of file migration data collected over two years. The analysis shows that requests to the MSS are periodic, with one-day and one-week periods. Read requests to the MSS account for the majority of the periodicity, as write requests are relatively constant over the course of a week. Additionally, reads show a far greater fluctuation than writes over a day and a week, since reads are driven by human users while writes are machine-driven.
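
    The daily periodicity described above can be detected with a simple autocorrelation. The sketch below uses synthetic hourly request counts rather than the NCAR traces, which are not reproduced here.

      # Sketch: detecting daily periodicity in file-request counts via autocorrelation.
      import numpy as np

      hours = np.arange(24 * 28)                          # four weeks of hourly bins
      daily = 100 + 40 * np.sin(2 * np.pi * hours / 24)   # daily read cycle
      counts = np.random.poisson(daily)                   # noisy request counts

      x = counts - counts.mean()
      ac = np.correlate(x, x, mode="full")[x.size - 1:]   # non-negative lags only
      ac /= ac[0]

      lag = 1 + np.argmax(ac[1:24 * 8])                   # strongest lag within 8 days
      print(f"dominant period ~ {lag} hours")             # expect ~24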

  6. Online data handling and storage at the CMS experiment

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gómez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, RK; Morovic, S.; Nuñez-Barranco-Fernández, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ∼62 sources produced with an aggregate rate of ∼2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
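
    As an illustration of the file-based bookkeeping described above, here is a toy merger in Python. The file names and the JSON metadata schema ("events", the ".jsn" suffix) are invented for illustration; this is not the CMS implementation.

      # Toy sketch: concatenate per-source data files and sum their JSON event counts.
      import json
      from pathlib import Path

      def merge(sources: list[Path], out: Path) -> None:
          total_events = 0
          with out.open("wb") as merged:
              for src in sources:
                  # each data file is paired with a small JSON metadata document
                  meta = json.loads(src.with_suffix(".jsn").read_text())
                  total_events += meta["events"]
                  merged.write(src.read_bytes())
          out.with_suffix(".jsn").write_text(json.dumps({"events": total_events}))

      # Usage: merge([Path("hlt_01.dat"), Path("hlt_02.dat")], Path("merged.dat"))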

  7. Online Data Handling and Storage at the CMS Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J. M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2GB/s. An estimated bandwidth of 7GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.

  8. NASA Langley Research Center's distributed mass storage system

    NASA Technical Reports Server (NTRS)

    Pao, Juliet Z.; Humes, D. Creig

    1993-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existent mass storage system to the DMSS.

  9. The USL NASA PC R and D development environment standards

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Moreau, Dennis R.

    1984-01-01

    The development environment standards which have been established in order to control usage of the IBM PC/XT development systems and to prevent interference between projects being currently developed on the PC's are discussed. The standards address the following areas: scheduling PC resources; login/logout procedures; training; file naming conventions; hard disk organization; diskette care; backup procedures; and copying policies.

  10. Logic Design of a Shared Disk System in a Multi-Micro Computer Environment.

    DTIC Science & Technology

    1983-06-01

    overall system, is given. An exhaustive description of each device can be found in the cited references. A. INTEL 8086 The INTEL 8086 is a high...either could be accomplished, it was necessary to understand both the existing system architecture and software. The last chapter addressed that...to be adapted: the loader program and the boot ROM program. The loader program is a simplified version of CP/M-86 and contains only enough file

  11. Interactive display of molecular models using a microcomputer system

    NASA Technical Reports Server (NTRS)

    Egan, J. T.; Macelroy, R. D.

    1980-01-01

    A simple, microcomputer-based, interactive graphics display system has been developed for the presentation of perspective views of wire frame molecular models. The display system is based on a TERAK 8510a graphics computer system with a display unit consisting of microprocessor, television display and keyboard subsystems. The operating system includes a screen editor, file manager, PASCAL and BASIC compilers and command options for linking and executing programs. The graphics program, written in UCSD PASCAL, involves the centering of the coordinate system, the transformation of centered model coordinates into homogeneous coordinates, the construction of a viewing transformation matrix to operate on the coordinates, clipping invisible points, perspective transformation and scaling to screen coordinates; commands available include ZOOM, ROTATE, RESET, and CHANGEVIEW. Data file structure was chosen to minimize the amount of disk storage space. Despite the inherent slowness of the system, its low cost and flexibility suggest general applicability.
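
    The projection pipeline described above (centering, homogeneous viewing transform, perspective division, scaling to screen coordinates) can be sketched in a few lines of Python, with numpy standing in for the original UCSD PASCAL; the viewing distance and screen size below are illustrative values.

      # Sketch of the wire-frame display pipeline: center, transform, project, scale.
      import numpy as np

      def project(points: np.ndarray, view: np.ndarray,
                  d: float = 2.0, screen: int = 512) -> np.ndarray:
          centered = points - points.mean(axis=0)            # center the model
          homo = np.hstack([centered, np.ones((len(points), 1))])
          cam = homo @ view.T                                # viewing transform
          z = cam[:, 2] + d                                  # push in front of eye
          persp = cam[:, :2] / z[:, None]                    # perspective divide
          return ((persp + 1) * 0.5 * screen).astype(int)    # scale to screen

      atoms = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, -0.3], [-0.7, 0.2, 0.9]])
      print(project(atoms, np.eye(4)))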

  12. Storage resource manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Bakken, J.; Petravick, D.

    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing for uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing clogging of storage systems with "orphan" files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.
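
    A minimal sketch of the lifetime-limited reservation idea follows. It is illustrative only (real SRMs negotiate reservations over a Grid protocol), and all class and field names are invented.

      # Sketch: reservations expire, and their space is reclaimed on collection,
      # so "orphan" files cannot clog the store indefinitely.
      import time
      from dataclasses import dataclass, field

      @dataclass
      class Reservation:
          space_bytes: int
          expires_at: float

      @dataclass
      class StorageElement:
          capacity: int
          reservations: dict = field(default_factory=dict)

          def reserve(self, rid: str, space: int, lifetime_s: float) -> bool:
              used = sum(r.space_bytes for r in self.reservations.values())
              if used + space > self.capacity:
                  return False
              self.reservations[rid] = Reservation(space, time.time() + lifetime_s)
              return True

          def collect_expired(self) -> None:
              now = time.time()
              self.reservations = {k: r for k, r in self.reservations.items()
                                   if r.expires_at > now}

      se = StorageElement(capacity=10**12)
      se.reserve("job42", 10**9, lifetime_s=3600)
      se.collect_expired()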

  13. The NSO FTS database program and archive (FTSDBM)

    NASA Technical Reports Server (NTRS)

    Lytle, D. M.

    1992-01-01

    Data from the NSO Fourier transform spectrometer is being re-archived from half-inch tape onto write-once compact disk. In the process, information about each spectrum and a low-resolution copy of each spectrum are being saved into an on-line database. FTSDBM is a simple database management program in the NSO external package for IRAF. A command language allows the FTSDBM user to add entries to the database, delete entries, select subsets from the database based on keyword values including ranges of values, create new database files based on these subsets, make keyword lists, examine low-resolution spectra graphically, and make disk number/file number lists. Once the archive is complete, FTSDBM will allow the database to be efficiently searched for data of interest to the user, and the compact disk format will allow random access to that data.

  14. Design and evaluation of a hybrid storage system in HEP environment

    NASA Astrophysics Data System (ADS)

    Xu, Qi; Cheng, Yaodong; Chen, Gang

    2017-10-01

    Nowadays, High Energy Physics experiments produce a large amount of data. These data are stored in mass storage systems which must balance cost, performance, and manageability. In this paper, a hybrid storage system combining SSDs (Solid-State Drives) and HDDs (Hard Disk Drives) is designed to accelerate data analysis while maintaining low cost. The performance of accessing files is a decisive factor for the HEP computing system. A new deployment model of a hybrid storage system in High Energy Physics is proposed and shown to deliver higher I/O performance. Detailed evaluation methods, along with evaluations of the SSD/HDD ratio and the size of the logical block, are also given. In all evaluations, sequential read, sequential write, random read, and random write are tested to obtain comprehensive results. The results show the hybrid storage system performs well in areas such as accessing big files in HEP.
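
    A placement policy of the kind such a hybrid system implies might look like the following sketch; the function and its thresholds are invented for illustration and are not taken from the paper.

      # Sketch: route hot, randomly accessed files to SSD; stream big files from HDD.
      def place(file_size: int, reads_per_day: float, random_access: bool) -> str:
          if random_access and reads_per_day > 10:
              return "ssd"          # random reads benefit most from flash
          if file_size > 1 << 30:
              return "hdd"          # big sequential files stream well from disk
          return "ssd" if reads_per_day > 100 else "hdd"

      print(place(4 << 20, 50, True))     # -> ssd
      print(place(8 << 30, 2, False))     # -> hdd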

  15. VizieR Online Data Catalog: SDSS bulge, disk and total stellar mass estimates (Mendel+, 2014)

    NASA Astrophysics Data System (ADS)

    Mendel, J. T.; Simard, L.; Palmer, M.; Ellison, S. L.; Patton, D. R.

    2014-01-01

    We present a catalog of bulge, disk, and total stellar mass estimates for ~660000 galaxies in the Legacy area of the Sloan Digital Sky Survey (SDSS) Data Release 7. These masses are based on a homogeneous catalog of g- and r-band photometry described by Simard et al. (2011, Cat. J/ApJS/196/11), which we extend here with bulge+disk and Sersic profile photometric decompositions in the SDSS u, i, and z bands. We discuss the methodology used to derive stellar masses from these data via fitting to broadband spectral energy distributions (SEDs), and show that the typical statistical uncertainty on total, bulge, and disk stellar mass is ~0.15 dex. Despite relatively small formal uncertainties, we argue that SED modeling assumptions, including the choice of synthesis model, extinction law, initial mass function, and details of stellar evolution, likely contribute an additional 60% systematic uncertainty in any mass estimate based on broadband SED fitting. We discuss several approaches for identifying genuine bulge+disk systems based on both their statistical likelihood and an analysis of their one-dimensional surface-brightness profiles, and include these metrics in the catalogs. Estimates of the total, bulge, and disk stellar masses for both normal and dust-free models and their uncertainties are made publicly available here. (4 data files).

  16. Progress In Optical Memory Technology

    NASA Astrophysics Data System (ADS)

    Tsunoda, Yoshito

    1987-01-01

    More than 20 years have passed since the concept of optical memory was first proposed in 1966. Since then considerable progress has been made in this area, together with the creation of completely new markets for optical memory in consumer and computer applications. The first generation of optical memory was developed mainly with holographic recording technology in the late 1960s and early 1970s. A considerable number of developments were made in both analog and digital memory applications. Unfortunately, these technologies did not get the chance to become commercial products. The second generation of optical memory started at the beginning of the 1970s with bit-by-bit recording technology. Read-only optical memories such as video disks and compact audio disks have been extensively investigated. Since laser diodes were first applied to optical video disk readout in 1976, there have been extensive developments of laser diode pick-ups for optical disk memory systems. The third generation of optical memory started in 1978 with bit-by-bit read/write technology using laser diodes. Development of recording materials, both write-once and erasable, has been actively pursued at several research institutes. These technologies are mainly focused on optical memory systems for computer applications. Such practical applications of optical memory technology have resulted in the creation of new products such as compact audio disks and computer file memories.

  17. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    1999-01-01

    I have recently developed dmfs, a Data Migration File System, for NetBSD. This file system is based on the overlay file system, which is discussed in a separate paper, and provides kernel support for the data migration system being developed by my research group here at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal meta data in a flat file, which resides on a separate file system. Our data migration system provides archiving and file migration services. System utilities scan the dmfs file system for recently modified files, and archive them to two separate tape stores. Once a file has been doubly archived, files larger than a specified size will be truncated to that size, potentially freeing up large amounts of the underlying file store. Some sites will choose to retain none of the file (deleting its contents entirely from the file system) while others may choose to retain a portion, for instance a preamble describing the remainder of the file. The dmfs layer coordinates access to the file, retaining user-perceived access and modification times and file size, and restricting access to partially migrated files to the portion actually resident. When a user process attempts to read from the non-resident portion of a file, it is blocked and the dmfs layer sends a request to a system daemon to restore the file. As more of the file becomes resident, the user process is permitted to begin accessing the now-resident portions of the file. For simplicity, our data migration system divides a file into two portions, a resident portion followed by an optional non-resident portion. Also, a file is in one of three states: fully resident; fully resident and archived; and (partially) non-resident and archived. For a file which is only partially resident, any attempt to write or truncate the file, or to read a non-resident portion, will trigger a file restoration. Truncations and writes are blocked until the file is fully restored, so that a restoration which only partially succeeds does not leave the file in an indeterminate state with portions existing only on tape and other portions only in the disk file system. We chose layered file system technology as it permits us to focus on the data migration functionality, and permits end system administrators to choose the underlying file store technology. We chose the overlay layered file system instead of the null layer for two reasons: first, to permit our layer to better preserve meta data integrity, and second, to prevent even root processes from accessing migrated files. This is achieved as the underlying file store becomes inaccessible once the dmfs layer is mounted. We are quite pleased with how the layered file system has turned out. Of the 45 vnode operations in NetBSD, 20 (forty-four percent) required no intervention by our file layer - they are passed directly to the underlying file store. Of the twenty-five we do intercept, nine (such as vop_create()) are intercepted only to ensure meta data integrity. Most of the functionality was concentrated in five operations: vop_read, vop_write, vop_getattr, vop_setattr, and vop_fcntl. The first four are the core operations for controlling access to migrated files and preserving the user experience. vop_fcntl, a call generated for a certain class of fcntl codes, provides the command channel used by privileged user programs to communicate with the dmfs layer.
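
    The residency logic described above can be sketched as follows. This is an illustrative Python model of the behavior, not the NetBSD kernel code, and the restore call is a stub for the request the dmfs layer sends to the system daemon.

      # Sketch: reads inside the resident prefix are served from disk; reads past
      # it block until a (stubbed) restore completes.
      class MigratedFile:
          def __init__(self, data: bytes, resident: int, archived: bool):
              self.data = data          # full contents live on tape in reality
              self.resident = resident  # bytes actually on disk
              self.archived = archived

          def _restore(self) -> None:
              # stand-in for the restore request sent to the daemon
              self.resident = len(self.data)

          def read(self, off: int, n: int) -> bytes:
              if off + n > self.resident:   # touches the non-resident portion
                  self._restore()           # block until restored
              return self.data[off:off + n]

      f = MigratedFile(b"preamble" + b"\0" * 100, resident=8, archived=True)
      print(f.read(0, 8))          # served from the resident prefix
      print(len(f.read(50, 10)))   # triggers a restore first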

  18. Education Statistics on Disk. [CD-ROM.

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    This CD-ROM disk contains a computer program developed by the Office of Educational Research and Improvement to provide convenient access to the wealth of education statistics published by the National Center for Education Statistics (NCES). The program contains over 1,800 tables, charts, and text files from the following NCES publications,…

  19. Automating Disk Forensic Processing with SleuthKit, XML and Python

    DTIC Science & Technology

    2009-05-01

    We have developed a program called fiwalk which...files themselves. We show how it is relatively simple to create automated disk forensic applications using a Python module we have written that reads...software that the portable device may contain. Keywords: Computer Forensics; XML; Sleuth Kit; Python

  20. Scheduler software for tracking and data relay satellite system loading analysis: User manual and programmer guide

    NASA Technical Reports Server (NTRS)

    Craft, R.; Dunn, C.; Mccord, J.; Simeone, L.

    1980-01-01

    A user guide and programmer documentation are provided for a system of PRIME 400 minicomputer programs. The system was designed to support loading analyses of the Tracking and Data Relay Satellite System (TDRSS). The system is a scheduler for various types of data relays (including tape recorder dumps and real-time relays) from orbiting payloads to the TDRSS. Several model options are available to statistically generate data relay requirements. TDRSS time lines (representing resources available for scheduling) and payload/TDRSS acquisition and loss-of-sight time lines are input to the scheduler from disk. Tabulated output from the interactive system includes a summary of the scheduler activities over time intervals specified by the user and an overall summary of scheduler input and output information. A history file, which records every event generated by the scheduler, is written to disk to allow further scheduling on remaining resources and to provide data for graphic displays or additional statistical analysis.

  1. A Future Accelerated Cognitive Distributed Hybrid Testbed for Big Data Science Analytics

    NASA Astrophysics Data System (ADS)

    Halem, M.; Prathapan, S.; Golpayegani, N.; Huang, Y.; Blattner, T.; Dorband, J. E.

    2016-12-01

    As increased sensor spectral data volumes from current and future Earth Observing satellites are assimilated into high-resolution climate models, intensive cognitive machine learning technologies are needed to data mine, extract, and intercompare model outputs. It is clear today that the next generation of computers and storage, beyond petascale cluster architectures, will be data centric. They will manage data movement and process data in place. Future cluster nodes have been announced that integrate multiple CPUs with high-speed links to GPUs and MICs on their backplanes, with massive non-volatile RAM and access to active flash RAM disk storage. Active Ethernet-connected key-value store disk drives with 10Ge or higher are now available through the Kinetic Open Storage Alliance. At the UMBC Center for Hybrid Multicore Productivity Research, a future state-of-the-art Accelerated Cognitive Computer System (ACCS) for Big Data science is being integrated into the current IBM iDataplex computational system 'bluewave'. Based on the next-gen IBM 200 PF Sierra processor, an interim two-node IBM Power S822 testbed is being integrated with dual Power 8 processors with 10 cores, 1TB RAM, a PCIe-attached K80 GPU, and an FPGA Coherent Accelerator Processor Interface card to 20TB of flash RAM. This system is to be updated to the Power 8+ with NVLink 1.0 and the Pascal GPU late in 2016. Moreover, the Seagate 96TB Kinetic disk system with 24 Ethernet-connected active disks is integrated into the ACCS storage system. A Lightweight Virtual File System developed at NASA GSFC is installed on bluewave. Since remote access to publicly available quantum annealing computers is offered at several government labs, the ACCS will provide an in-line Restricted Boltzmann Machine optimization capability on the D-Wave 2X quantum annealing processor over the campus high-speed 100 Gb network to Internet2 for large files. As an evaluation test of the cognitive functionality of the architecture, the following studies utilizing all the system components will be presented: (i) a near-real-time climate change study generating CO2 fluxes; (ii) a deep-dive capability into an 8000 x 8000 pixel image pyramid display; and (iii) large dense and sparse eigenvalue decompositions.

  2. 78 FR 13222 - Procedures for the Handling of Retaliation Complaints Under Section 1558 of the Affordable Care Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-27

    ...: Large print, electronic file on computer disk (Word Perfect, ASCII, Mates with Duxbury Braille System... the Library of Congress, or in the Government Printing Office. 29 U.S.C. 203(e)(2)(A). An employee... Corp., 649 F.3d at 230 n.2 (section 15(a)(3) of the FLSA protects former employees); cf. Robinson v...

  3. Sharing digital micrographs and other data files between computers.

    PubMed

    Entwistle, A

    2004-01-01

    It ought to be easy to exchange digital micrographs and other computer data files with a colleague, even one on another continent. In practice, this often is not the case. The advantages and disadvantages of various methods that are available for exchanging data files between computers are discussed. When possible, data should be transferred through computer networking. When data are to be exchanged locally between computers with similar operating systems, the use of a local area network is recommended. For computers in commercial or academic environments that have dissimilar operating systems or are more widely spaced, the use of FTP is recommended. Failing this, posting the data on a website and transferring by hypertext transfer protocol is suggested. If peer-to-peer exchange between computers in domestic environments is needed, the use of messenger services such as Microsoft Messenger or Yahoo Messenger is the method of choice. When it is not possible to transfer the data files over the internet, single-use writable CD-ROMs are the best media for transferring data. If for some reason this is not possible, DVD-R/RW, DVD+R/RW, 100 MB ZIP disks, and USB flash media are potentially useful media for exchanging data files.
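
    For the FTP route recommended above, Python's standard ftplib is one concrete option; the host, credentials, and file names below are placeholders.

      # Minimal FTP upload/download using the standard library.
      from ftplib import FTP

      with FTP("ftp.example.org") as ftp:
          ftp.login("user", "password")          # or ftp.login() for anonymous
          with open("micrograph.tif", "rb") as fh:
              ftp.storbinary("STOR micrograph.tif", fh)             # upload
          with open("copy.tif", "wb") as out:
              ftp.retrbinary("RETR micrograph.tif", out.write)      # download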

  4. Implementation of relational data base management systems on micro-computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, C.L.

    1982-01-01

    This dissertation describes an implementation of a relational data base management system on a microcomputer. A specific floppy-disk-based computer called TERAK is used, and a high-level query interface similar to a subset of the SEQUEL language is provided. The system contains subsystems for I/O, file management, virtual memory management, the query system, B-tree management, the scanner, the command interpreter, the expression compiler, garbage collection, linked-list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) it is highly modularized; (2) the system is physically segmented into 16 logically independent, overlayable segments, in a way such that a minimal amount of memory is needed at execution time; (3) a virtual memory system is simulated that provides the system with seemingly unlimited memory space; (4) a language translator is applied to recognize user requests in the query language, and its code generation produces compact code for the execution of UPDATE, DELETE, and QUERY commands; (5) a complete set of basic functions needed for on-line data base manipulations is provided through a friendly query interface; (6) dependency on the environment (both software and hardware) is eliminated as much as possible, so that it would be easy to transplant the system to other computers; (7) each relation is simulated as a sequential file. It is intended to be a highly efficient, single-user system suited for use by small or medium-sized organizations for, say, administrative purposes. Experiments show that quite satisfying results have indeed been achieved.

  5. Workload Characterization and Performance Implications of Large-Scale Blog Servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Myeongjae; Kim, Youngjae; Hwang, Jeaho

    With the ever-increasing popularity of social network services (SNSs), an understanding of the characteristics of these services and their effects on the behavior of their host servers is critical. However, there has been a lack of research on the workload characterization of servers running SNS applications such as blog services. To fill this void, we empirically characterized real-world web server logs collected from one of the largest South Korean blog hosting sites for 12 consecutive days. The logs consist of more than 96 million HTTP requests and 4.7 TB of network traffic. Our analysis reveals the following: (i) the transfer size of non-multimedia files and blog articles can be modeled using a truncated Pareto distribution and a log-normal distribution, respectively; (ii) user access to blog articles does not show temporal locality, but is strongly biased towards those posted with image or audio files. We additionally discuss the potential performance improvement through clustering of small files on a blog page into contiguous disk blocks, which benefits from the observed file access patterns. Trace-driven simulations show that, on average, the suggested approach achieves 60.6% better system throughput and reduces the processing time for file access by 30.8% compared to the best performance of the Ext4 file system.
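
    Fitting the log-normal model of article transfer sizes mentioned in finding (i) is straightforward with scipy; the sketch below uses synthetic sizes rather than the actual trace.

      # Sketch: fit a log-normal distribution to transfer sizes.
      import numpy as np
      from scipy.stats import lognorm

      sizes = lognorm.rvs(s=1.2, scale=20_000, size=5000)   # fake transfer sizes
      shape, loc, scale = lognorm.fit(sizes, floc=0)        # fix location at zero
      print(f"fitted sigma={shape:.2f}, median={scale:.0f} bytes")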

  6. How to Get from Cupertino to Boca Raton.

    ERIC Educational Resources Information Center

    Troxel, Duane K.; Chiavacci, Jim

    1985-01-01

    Describes seven methods to transfer data from Apple computer disks to IBM computer disks and vice versa: print out data and retype; use a commercial software package, optical-character reader, homemade cable, or modem to pass or transfer data directly; pay commercial data-transfer service; or store files on mainframe and download. (MBR)

  7. Evaluating Non-In-Place Update Techniques for Flash-Based Transaction Processing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yongkun; Goda, Kazuo; Kitsuregawa, Masaru

    Recently, flash memory has been emerging as a mainstream storage device. With its price sliding fast, the cost per capacity is approaching that of SATA disk drives. So far flash memory has been widely deployed in consumer electronics and partly in mobile computing environments. For enterprise systems, deployment has been studied by many researchers and developers. In terms of access performance characteristics, flash memory is quite different from disk drives. Having no mechanical components, flash memory has very high random read performance, whereas its random write performance is limited by the erase-before-write design and is comparable with, or even worse than, that of disk drives. Due to such performance asymmetry, naive deployment in enterprise systems may not exploit the potential performance of flash memory at full blast. This paper studies the effectiveness of using non-in-place-update (NIPU) techniques through the I/O path of flash-based transaction processing systems. Our deliberate experiments using both an open-source DBMS and a commercial DBMS validated the potential benefits: a 3.0x to 6.6x performance improvement was confirmed by incorporating non-in-place-update techniques into the file system without any modification of applications or storage devices.
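
    A toy log-structured store makes the non-in-place-update idea concrete: writes are appended sequentially and an index tracks the latest version of each logical block. This illustrates the general technique, not the paper's implementation.

      # Sketch: logical block writes become sequential appends (erase-friendly).
      class LogStore:
          BLOCK = 4096

          def __init__(self):
              self.log = bytearray()
              self.index: dict[int, int] = {}     # logical block -> log offset

          def write(self, block: int, data: bytes) -> None:
              assert len(data) == self.BLOCK
              self.index[block] = len(self.log)   # newest version wins
              self.log += data                    # sequential append, no overwrite

          def read(self, block: int) -> bytes:
              off = self.index[block]
              return bytes(self.log[off:off + self.BLOCK])

      s = LogStore()
      s.write(7, b"a" * 4096)
      s.write(7, b"b" * 4096)          # supersedes, does not overwrite in place
      assert s.read(7) == b"b" * 4096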

  8. Storage media pipelining: Making good use of fine-grained media

    NASA Technical Reports Server (NTRS)

    Vanmeter, Rodney

    1993-01-01

    This paper proposes a new high-performance paradigm for accessing removable media such as tapes and especially magneto-optical disks. In high-performance computing, the striping of data across multiple devices is a common means of improving data transfer rates. Striping has been used very successfully for fixed magnetic disks, improving overall system reliability as well as throughput. It has also been proposed as a solution for providing improved bandwidth for tape and magneto-optical subsystems. However, striping of removable media has shortcomings, particularly in the areas of latency to data and restricted system configurations, and is suitable primarily for very large I/Os. We propose that for fine-grained media, an alternative access method, media pipelining, may be used to provide high bandwidth for large requests while retaining the flexibility to support concurrent small requests and different system configurations. Its principal drawback is high buffering requirements in the host computer or file server. This paper discusses the possible organization of such a system, including the hardware conditions under which it may be effective, and the flexibility of configuration. Its expected performance is discussed under varying workloads, including large single I/Os and numerous smaller ones. Finally, a specific system incorporating a high-transfer-rate magneto-optical disk drive and autochanger is discussed.

  9. A mass spectrometry proteomics data management platform.

    PubMed

    Sharma, Vagisha; Eng, Jimmy K; Maccoss, Michael J; Riffle, Michael

    2012-09-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) files formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.
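
    The "unified core tables" idea can be sketched with sqlite3; the schema below is invented for illustration and is far simpler than the platform's actual schema.

      # Sketch: disparate pipeline outputs land in one queryable schema.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE experiment (id INTEGER PRIMARY KEY, name TEXT, run_date TEXT);
      CREATE TABLE search_result (
          id INTEGER PRIMARY KEY,
          experiment_id INTEGER REFERENCES experiment(id),
          pipeline TEXT, peptide TEXT, score REAL);
      """)
      con.execute("INSERT INTO experiment VALUES (1, 'yeast lysate', '2012-01-15')")
      con.execute("INSERT INTO search_result VALUES (1, 1, 'SEQUEST', 'PEPTIDEK', 3.1)")

      # Cross-pipeline comparison becomes a plain SQL query:
      for row in con.execute("""SELECT e.name, r.pipeline, r.peptide, r.score
                                FROM search_result r JOIN experiment e
                                ON r.experiment_id = e.id"""):
          print(row)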

  10. VizieR Online Data Catalog: Spitzer obs. of warm dust in 83 debris disks (Ballering+, 2017)

    NASA Astrophysics Data System (ADS)

    Ballering, N. P.; Rieke, G. H.; Su, K. Y. L.; Gaspar, A.

    2018-04-01

    For our sample, we used the systems with a warm component found by Ballering+ (2013, J/ApJ/775/55), where "warm" was defined as warmer than 130K. All of these systems have data available from the Multiband Imaging Photometer for Spitzer (MIPS) at 24 and 70um and from the Spitzer Infrared Spectrograph (IRS). The selected 83 targets used for our analysis are listed in Table 1. (5 data files).

  11. LVFS: A Big Data File Storage Bridge for the HPC Community

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.

    2015-12-01

    Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures, and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed by the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and nearly double the storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevents any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts to GeoTIFF, for visualization, the HDF data produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite to derive CO2 surface fluxes.
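
    A storage-backend plugin registry of the kind the LVFS extension implies might be sketched as follows; the interface and scheme names are assumptions for illustration, not LVFS's actual API.

      # Sketch: new backends slot in behind one interface, leaving workflows intact.
      from abc import ABC, abstractmethod

      class StorageBackend(ABC):
          @abstractmethod
          def read(self, path: str) -> bytes: ...

      BACKENDS: dict[str, type[StorageBackend]] = {}

      def register(scheme: str):
          def wrap(cls):
              BACKENDS[scheme] = cls
              return cls
          return wrap

      @register("posix")
      class PosixBackend(StorageBackend):
          def read(self, path: str) -> bytes:
              with open(path, "rb") as fh:
                  return fh.read()

      @register("kinetic")
      class KineticBackend(StorageBackend):   # placeholder for an Ethernet drive
          def read(self, path: str) -> bytes:
              raise NotImplementedError("would issue a key-value GET to the drive")

      def open_backend(scheme: str) -> StorageBackend:
          return BACKENDS[scheme]()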

  12. Incorporating Oracle on-line space management with long-term archival technology

    NASA Technical Reports Server (NTRS)

    Moran, Steven M.; Zak, Victor J.

    1996-01-01

    The storage requirements of today's organizations are exploding. As computers continue to escalate in processing power, applications grow in complexity and data files grow in size and in number. As a result, organizations are forced to procure more and more megabytes of storage space. This paper focuses on how to expand the storage capacity of a Very Large Database (VLDB) cost-effectively within an Oracle7 data warehouse system by integrating long-term archival storage subsystems with traditional magnetic media. The Oracle architecture described in this paper was based on an actual proof of concept for a customer looking to store archived data on optical disks yet still have access to these data without user intervention. The customer had a requirement to maintain 10 years' worth of data on-line. Data less than a year old still had the potential to be updated and thus reside on conventional magnetic disks. Data older than a year are considered archived and are placed on optical disks. The ability to archive data to optical disk and still have access to those data provides the system with a means to retain large amounts of readily accessible data while significantly reducing the cost of total system storage. Therefore, the cost benefits of archival storage devices can be incorporated into the Oracle storage medium and I/O subsystem without losing any of the functionality of transaction processing, yet at the same time providing an organization access to all of its data.

  13. A Systematic Approach for Assessing Workforce Readiness

    DTIC Science & Technology

    2014-08-01

    goals: (1) to collect data from designated information technology equipment and (2) to analyze the collected data. The two goals are typically...office, home, or information technology (IT) department. The purpose of data collection is to gather images of disks, file servers, etc., from a site...naissance to identify which technologies are deployed at a site and determine which data need to be collected from the organization's systems and

  14. Reducing Backups by Utilizing DMF

    NASA Technical Reports Server (NTRS)

    Cardo, Nicholas P.; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    Although a filesystem may be migratable, for a period of time the data blocks are on disk only. When performing system dumps, these data blocks are backed up to tape. If the data blocks are offline or dual-resident, then only the inode is backed up. If all online files are made dual-resident prior to performing system dumps, the dump time and the amount of resources required can be significantly reduced. The High Speed Processors group at the Numerical Aerodynamic Simulation (NAS) Facility at NASA Ames Research Center developed a tool to make all online files dual-resident. The result is a file whose data blocks are on DMF tape but are still assigned to the original inode. Our 150GB filesystem used to take 8 to 12 hours to back up and used 50 to 60 tapes. Now the backup typically takes under 10 tapes and completes in under 2 hours. This paper discusses this new tool and the advantages gained by using it.
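
    The dump-size saving can be illustrated with a toy calculation: only files whose blocks exist solely on disk contribute their data blocks to the dump. The states and sizes below are illustrative, not from the paper.

      # Sketch: offline and dual-resident files contribute only their inodes.
      def dump_size(files) -> int:
          total = 0
          for state, data_bytes, inode_bytes in files:
              total += inode_bytes
              if state == "online":        # data blocks exist on disk only
                  total += data_bytes
          return total

      files = [("dual-resident", 10**9, 512), ("offline", 10**9, 512),
               ("online", 10**8, 512)]
      print(dump_size(files))   # only the online file contributes data blocks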

  15. Virus Alert: Ten Steps to Safe Computing.

    ERIC Educational Resources Information Center

    Gunter, Glenda A.

    1997-01-01

    Discusses computer viruses and explains how to detect them; discusses virus protection and the need to update antivirus software; and offers 10 safe computing tips, including scanning floppy disks and commercial software, how to safely download files from the Internet, avoiding pirated software copies, and backing up files. (LRW)

  16. Integration experiences and performance studies of A COTS parallel archive systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds; this means more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.

  17. Integration experiments and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds; this means more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of future archival storage systems.

  18. Pilot climate data system user's guide

    NASA Technical Reports Server (NTRS)

    Reph, M. G.; Treinish, L. A.; Bloch, L.

    1984-01-01

    Instructions for using the Pilot Climate Data System (PCDS), an interactive, scientific data management system for locating, obtaining, manipulating, and displaying climate-research data, are presented. The PCDS currently provides this support for approximately twenty data sets. Figures that illustrate the terminal displays which a user sees when he/she runs the PCDS and some examples of the output from this system are included. The capabilities, which are described in detail, allow a user to perform the following: (1) obtain comprehensive descriptions of a number of climate parameter data sets and the associated sensor measurements from which they were derived; (2) obtain detailed information about the temporal coverage and data volume of data sets which are readily accessible via the PCDS; (3) extract portions of a data set using criteria such as time range and geographic location, and output the data to tape, user terminal, system printer, or online disk files in a special data-set-independent format; (4) access and manipulate the data in these data-set-independent files, performing such functions as combining the data, subsetting the data, and averaging the data; and (5) create various graphical representations of the data stored in the data-set-independent files.

  19. Segy-change: The swiss army knife for the SEG-Y files

    NASA Astrophysics Data System (ADS)

    Stanghellini, Giuseppe; Carrara, Gabriela

    Data collected during active and passive seismic surveys can be stored in many different, more or less standard, formats. One of the most popular is the SEG-Y format, developed in 1975 to store single-line seismic digital data on tapes, and since evolved to store data on hard disks and other media as well. Unfortunately, files that are claimed to be recorded in the SEG-Y format sometimes cannot be processed using available free or industrial packages. Aiming to solve this impasse we present segy-change, a pre-processing software program to view, analyze, change and fix errors present in SEG-Y data files. It is written in the C language, can also be used as a software library, and is compatible with most operating systems. Segy-change allows the user to display and optionally change the values inside all parts of a SEG-Y file: the file header, the trace headers and the data blocks. In addition, it allows the user to do a quality check on the data by plotting the traces. We provide instructions and examples on how to use the software.
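
    As a minimal illustration of the kind of header access segy-change provides (an independent Python sketch, not code from segy-change itself), the following reads three standard fields from the 400-byte binary header that follows the 3200-byte textual header, using SEG-Y rev 1 byte offsets:

        import struct

        def read_binary_header(path):
            """Read a few standard fields from a SEG-Y binary file header."""
            with open(path, "rb") as f:
                f.seek(3200)          # skip the 3200-byte textual (EBCDIC) header
                binhdr = f.read(400)  # the 400-byte binary header follows
            # Fields are big-endian 16-bit integers at fixed offsets (SEG-Y rev 1):
            sample_interval, = struct.unpack(">h", binhdr[16:18])    # bytes 3217-3218
            samples_per_trace, = struct.unpack(">h", binhdr[20:22])  # bytes 3221-3222
            format_code, = struct.unpack(">h", binhdr[24:26])        # bytes 3225-3226
            return {"sample_interval_us": sample_interval,
                    "samples_per_trace": samples_per_trace,
                    "data_format_code": format_code}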

  20. Derived virtual devices: a secure distributed file system mechanism

    NASA Technical Reports Server (NTRS)

    VanMeter, Rodney; Hotz, Steve; Finn, Gregory

    1996-01-01

    This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve Input/Output efficiency by allowing user processes to perform I/O operations directly from devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide for efficient data transfer in a typical NAP environment. We show how DVDs facilitate third-party transfer, and provide the security required in a more open network environment.

  1. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Kato, Toshio

    Osaka Gas Co., Ltd. constructed the Osaka Gas Technical Information System (OGTIS) in 1979, which stores and retrieves in-house technical information and provides even primary materials by unifying optical disk files, a facsimile system, and so on. The major information sources are technical materials, survey materials, planning documents, design materials, research reports, and business tour reports, all generated inside the Company. At present it amounts to 25,000 items in total, adding about 1,000 items annually. The data file is updated once a month, and the abstract journal OGTIS Report is also published monthly. In 1983 the Company constructed the System for International Exchange of Personal Information (SIP) as a subsystem of OGTIS, in order to compile the SIP database, which covers exchange activities with overseas enterprises and organizations. The database holds 2,600 records in total, adding about 500 annually, with monthly updates.

  2. A Mass Spectrometry Proteomics Data Management Platform*

    PubMed Central

    Sharma, Vagisha; Eng, Jimmy K.; MacCoss, Michael J.; Riffle, Michael

    2012-01-01

    Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are “organically” distributed across laboratory file systems in an ad hoc manner, (3) file formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/. PMID:22611296
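
    A minimal sketch of the core-plus-extension table layout described above, using SQLite and invented table and column names (the actual MSDaPl schema is richer):

        import sqlite3

        con = sqlite3.connect(":memory:")
        cur = con.cursor()
        # Core tables shared by all pipelines:
        cur.execute("""CREATE TABLE experiment (
                           id INTEGER PRIMARY KEY, date TEXT, instrument TEXT)""")
        cur.execute("""CREATE TABLE search (
                           id INTEGER PRIMARY KEY,
                           experiment_id INTEGER REFERENCES experiment(id),
                           program TEXT)""")
        # Pipeline-specific extension table keyed to the core row it extends:
        cur.execute("""CREATE TABLE sequest_search (
                           search_id INTEGER PRIMARY KEY REFERENCES search(id),
                           fragment_tolerance REAL)""")
        cur.execute("INSERT INTO experiment (date, instrument) VALUES ('2012-01-01', 'LTQ')")
        cur.execute("INSERT INTO search (experiment_id, program) VALUES (1, 'SEQUEST')")
        # Because the core tables are unified, one query spans all pipelines:
        for row in cur.execute("""SELECT e.date, s.program FROM experiment e
                                  JOIN search s ON s.experiment_id = e.id"""):
            print(row)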

  3. Data Processing Factory for the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Stoughton, Christopher; Adelman, Jennifer; Annis, James T.; Hendry, John; Inkmann, John; Jester, Sebastian; Kent, Steven M.; Kuropatkin, Nickolai; Lee, Brian; Lin, Huan; Peoples, John, Jr.; Sparks, Robert; Tucker, Douglas; Vanden Berk, Dan; Yanny, Brian; Yocum, Dan

    2002-12-01

    The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data with more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.

  4. Agentless Cloud-Wide Monitoring of Virtual Disk State

    DTIC Science & Technology

    2015-10-01

    packages include Apache, MySQL, PHP, Ruby on Rails, Java Application Servers, and many others. Figure 2.12 shows the results of a run of the Software...Linux, Apache, MySQL, PHP (LAMP) set of applications. Thus, many file-level update logs will contain the same versions of files repeated across many

  5. Bin-Carver: Automatic Recovery of Binary Executable Files

    DTIC Science & Technology

    2012-05-01

    Texas A&M University, Department of Computer Science and Engineering, College Station, TX 77840 ...least 23 4K data blocks) and observed how this binary file gets organized in a brand new disk. We found that this simple ls file actually gets

  6. NASA ARCH- A FILE ARCHIVAL SYSTEM FOR THE DEC VAX

    NASA Technical Reports Server (NTRS)

    Scott, P. J.

    1994-01-01

    The function of the NASA ARCH system is to provide a permanent storage area for files that are infrequently accessed. The NASA ARCH routines were designed to provide a simple mechanism by which users can easily store and retrieve files. The user treats NASA ARCH as the interface to a black box where files are stored. There are only five NASA ARCH user commands, even though NASA ARCH employs standard VMS directives and the VAX BACKUP utility. Special care is taken to provide the security needed to ensure file integrity over a period of years. The archived files may exist in any of three storage areas: a temporary buffer, the main buffer, and a magnetic tape library. When the main buffer fills up, it is transferred to permanent magnetic tape storage and deleted from disk. Files may be restored from any of the three storage areas. A single file, multiple files, or entire directories can be stored and retrieved. Archived entities retain the same name, extension, version number, and VMS file protection scheme as they had in the user's account prior to archival. NASA ARCH is capable of handling up to 7 directory levels. Wildcards are supported. User commands include TEMPCOPY, DISKCOPY, DELETE, RESTORE, and DIRECTORY. The DIRECTORY command searches a directory of savesets covering all three archival areas, listing matches according to area, date, filename, or other criteria supplied by the user. The system manager commands include 1) ARCHIVE, to transfer the main buffer to duplicate magnetic tapes, 2) REPORT, to determine when the main buffer is full enough to archive, 3) INCREMENT, to back up the partially filled main buffer, and 4) FULLBACKUP, to back up the entire main buffer. On-line help files are provided for all NASA ARCH commands. NASA ARCH is written in DEC VAX DCL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.X. This program was developed in 1985.

  7. New capabilities in the HENP grand challenge storage access systemand its application at RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, L.; Gibbard, B.; Malon, D.

    2000-04-25

    The High Energy and Nuclear Physics Data Access Grand Challenge project has developed an optimizing storage access software system that was prototyped at RHIC. It is currently undergoing integration with the STAR experiment in preparation for data taking that starts in mid-2000. The behavior and lessons learned in the RHIC Mock Data Challenge exercises are described as well as the observed performance under conditions designed to characterize scalability. Up to 250 simultaneous queries were tested and up to 10 million events across 7 event components were involved in these queries. The system coordinates the staging of "bundles" of files from the HPSS tape system, so that all the needed components of each event are in disk cache when accessed by the application software. The caching policy algorithm for the coordinated bundle staging is described in the paper. The initial prototype implementation interfaced to Objectivity/DB. In this latest version, it evolved to work with arbitrary files and use CORBA interfaces to the tag database and file catalog services. The interface to the tag database and the MySQL-based file catalog services used by STAR are described along with the planned usage scenarios.
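
    A toy version of coordinated bundle staging, assuming capacity is measured in whole bundles and eviction is plain least-recently-used; the project's actual caching policy algorithm is the one described in the paper:

        from collections import OrderedDict

        class BundleCache:
            """Stage whole bundles so every component of an event is on disk
            before the application touches it."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.staged = OrderedDict()  # bundle id -> file list, LRU order

            def request(self, bundle_id, files, stage_fn):
                if bundle_id in self.staged:           # hit: bundle already on disk
                    self.staged.move_to_end(bundle_id)
                    return
                while len(self.staged) >= self.capacity:
                    self.staged.popitem(last=False)    # evict least recently used
                stage_fn(files)                        # e.g. issue HPSS staging calls
                self.staged[bundle_id] = files

        cache = BundleCache(capacity=2)
        cache.request("evt-0001", ["hits.0001", "tracks.0001"], stage_fn=print)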

  8. Efficient management and promotion of utilization of the video information acquired by observation

    NASA Astrophysics Data System (ADS)

    Kitayama, T.; Tanaka, K.; Shimabukuro, R.; Hase, H.; Ogido, M.; Nakamura, M.; Saito, H.; Hanafusa, Y.; Sonoda, A.

    2012-12-01

    At the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), deep-sea videos have been recorded during research dives by JAMSTEC submersibles since 1982, and the resulting archive, which now exceeds 4,000 dives (ca. 24,700 tapes), has been open to the public via the Internet since 2002. The deep-sea videos are important because they record the time variation of a deep-sea environment that is difficult to investigate and sample, and the growth of organisms in extreme environments. Moreover, with the development of video technology, advanced analysis of survey imagery has become possible, so the value of these images for understanding the deep-sea environment is especially high. JAMSTEC's Data Research Center for Marine-Earth Sciences (DrC) collects the videos obtained by dive surveys and handles their preservation, quality control, and public release. Our major challenges are to manage this huge body of video information efficiently and to promote its use; this presentation introduces our current measures. Videos recorded on tape or other media onboard are collected, then backed up and encoded to prevent loss and degradation. Because video files on hard disk are large, we use the Linear Tape File System (LTFS), which has recently attracted attention in image management. It costs less than conventional disk backup, can preserve video data for many years, and offers file operability no different from a disk. Videos transcoded for distribution are archived on disk storage and can be offered according to use. To promote utilization of the videos, the video publication system was completely renewed in November 2011 as the "JAMSTEC E-library of Deep Sea Images (http://www.godac.jamstec.go.jp/jedi/)". The new system provides various searches (e.g., search by map, tree, icon, and keyword). Video annotation is possible through the same interface, improving usability for both users and managers. Moreover, the "Biological Information System for Marine Life: BISMaL (http://www.godac.jamstec.go.jp/bismal/e/index.html)", a data system for biodiversity information, particularly biogeographic data of marine organisms, uses the deep-sea videos together with their positional information to visualize the distribution of organisms and to compile species lists of deep-sea life, aiming to contribute to the understanding of biodiversity. In the future, we aim to improve the accuracy of the information attached to the videos by supporting comment registration through automatic image recognition and by developing an onboard comment registration tool, so that higher quality information can be offered.

  9. AMPS/PC - AUTOMATIC MANUFACTURING PROGRAMMING SYSTEM

    NASA Technical Reports Server (NTRS)

    Schroer, B. J.

    1994-01-01

    The AMPS/PC system is a simulation tool designed to aid the user in defining the specifications of a manufacturing environment and then automatically writing code for the target simulation language, GPSS/PC. The domain of problems that AMPS/PC can simulate is manufacturing assembly lines with subassembly lines and manufacturing cells. The user defines the problem domain by responding to the questions from the interface program. Based on the responses, the interface program creates an internal problem specification file. This file includes the manufacturing process network flow and the attributes for all stations, cells, and stock points. AMPS then uses the problem specification file as input for the automatic code generator program to produce a simulation program in the target language GPSS. The output of the generator program is the source code of the corresponding GPSS/PC simulation program. The system runs entirely on an IBM PC running PC DOS Version 2.0 or higher and is written in Turbo Pascal Version 4, requiring 640K memory and one 360K disk drive. To execute the GPSS program, the PC must have resident the GPSS/PC System Version 2.0 from Minuteman Software. The AMPS/PC program was developed in 1988.

  10. A Lightweight, High-performance I/O Management Package for Data-intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun Wang

    2007-07-17

    File storage systems are playing an increasingly important role in high-performance computing as the performance gap between CPU and disk increases. It could take a long time to develop an entire system from scratch, so solutions will have to be built as extensions to existing systems. If new portable, customized software components are plugged into these systems, better sustained I/O performance and higher scalability will be achieved, and the development cycle of the next generation of parallel file systems will be shortened. The overall research objective of this ECPI development plan is to develop a lightweight, customized, high-performance I/O management package named LightI/O to extend and leverage current parallel file systems used by DOE. During this period, we developed a novel component of LightI/O, prototyped it in PVFS2, and evaluated the resulting extended PVFS2 system on data-intensive applications. The preliminary results indicate the extended PVFS2 delivers better performance and reliability to users. A strong collaborative effort between the PI at the University of Nebraska-Lincoln and the DOE collaborators, Drs. Rob Ross and Rajeev Thakur at Argonne National Laboratory, who lead the PVFS2 group, makes the project more promising.

  11. Ultrasonic scanning system for imaging flaw growth in composites

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.; Meyn, E. H.

    1982-01-01

    A system for measuring and visually representing damage in composite specimens while they are being loaded was demonstrated. It uses a hobbyist-grade microcomputer system to control data taking and image processing. The system scans operator selected regions of the specimen while it is under load in a tensile test machine and measures internal damage by the attenuation of a 2.5 MHz ultrasonic beam passed through the specimen. The microcomputer dynamically controls the position of ultrasonic transducers mounted on a two axis motor driven carriage. As many as 65,536 samples can be taken and filed on a floppy disk system in less than four minutes.

  12. Instrumentation for Airwake Measurements on the Flight Deck of a FFG-7

    DTIC Science & Technology

    1991-11-01

    volatile RAM to the computer hard disk with a unique file name based on time and date. At an opportune time the data file(s) are manually transferred... VADAR was developed by the Instrumentation and Trials...

  13. Non-volatile main memory management methods based on a file system.

    PubMed

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because of its data persistency without power supply, while it can be used as main memory because of its high performance that matches up with DRAM. A number of studies have investigated its use for main memory and for storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration allows NV memory to be utilized simultaneously as both main memory and storage. The presented methods use a file system as their basis for the NV memory management. We implemented the proposed methods in the Linux kernel, and performed the evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than the page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) the data structures appropriate for traditional hard disk drives do not always work effectively for byte addressable NV memory. We also performed the evaluation of the effects caused by the longer access latency of NV memory by cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase of latency is moderate.

  14. “Superluminal” FITS File Processing on Multiprocessors: Zero Time Endian Conversion Technique

    NASA Astrophysics Data System (ADS)

    Eguchi, Satoshi

    2013-05-01

    FITS is the standard file format in astronomy, and it has been extended to meet the astronomical needs of the day. However, astronomical datasets have been inflating year by year. In the case of the ALMA telescope, a ~TB-scale four-dimensional data cube may be produced for one target. Considering that typical Internet bandwidth is tens of MB/s at most, the original data cubes in FITS format are hosted on a VO server, and the region which a user is interested in should be cut out and transferred to the user (Eguchi et al. 2012). The system will be equipped with a very high-speed disk array to process a TB-scale data cube in 10 s, so disk I/O speed, endian conversion, and data processing speeds will be comparable. Hence, reducing the endian conversion time is one of the issues to solve in our system. In this article, I introduce a technique named “just-in-time endian conversion”, which delays the endian conversion of each pixel until just before it is really needed, to hide the endian conversion time; by applying this method, the FITS processing speed increases 20% for single threading and 40% for multi-threading compared to CFITSIO. The speedup is tightly related to modern CPU architecture: deferring the conversion improves the efficiency of instruction pipelines by breaking the “causality” of the programmed instruction code sequence.
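
    The effect can be imitated at a high level with NumPy, which can interpret a foreign byte order lazily on each access instead of swapping every pixel up front (an analogy only; the paper's technique works at the C level inside the FITS reader):

        import numpy as np

        # Stand-in for the bytes of a big-endian FITS data cube read from disk.
        raw = np.arange(1_000_000, dtype=">f4").tobytes()

        eager = np.frombuffer(raw, dtype=">f4").astype("<f4")  # convert all pixels now
        lazy = np.frombuffer(raw, dtype=">f4")                 # keep the big-endian view

        # The lazy view byte-swaps only the pixels actually touched, analogous
        # to just-in-time conversion; untouched pixels cost nothing.
        subtotal = lazy[:1000].sum()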

  15. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as an interactive graphic device. The input language, in the form of character strings or attentions from keys and light pen, is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and the characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  16. Mass storage at NSA

    NASA Technical Reports Server (NTRS)

    Shields, Michael F.

    1993-01-01

    The need to manage large amounts of data on robotically controlled devices has been critical to the mission of this Agency for many years. In many respects this Agency has helped pioneer, with its industry counterparts, the development of a number of products long before these systems became commercially available. Numerous attempts have been made to field both robotically controlled tape and optical disk technology and systems to satisfy our tertiary storage needs. Custom developed products were architected, designed, and developed without vendor partners over the past two decades to field workable systems to handle our ever increasing storage requirements. Many of the attendees of this symposium are familiar with some of the older products, such as the Braegen Automated Tape Libraries (ATLs), the IBM 3850, and the Ampex TeraStore, just to name a few. In addition, we embarked on an in-house development of a shared disk input/output support processor to manage our ever increasing tape storage needs. For all intents and purposes, this system was a file server by current definitions, which used CDC Cyber computers as the control processors. It served us well and was just recently removed from production usage.

  17. Analysis of the access patterns at GSFC distributed active archive center

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore; Bedet, Jean-Jacques

    1996-01-01

    The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been operational for more than two years. Its mission is to support existing and pre-Earth Observing System (EOS) Earth science datasets, facilitate scientific research, and test Earth Observing System Data and Information System (EOSDIS) concepts. Over 550,000 files and documents have been archived, and more than six Terabytes have been distributed to the scientific community. Information about user requests and file access patterns, and their impact on system loading, is needed to optimize current operations and to plan for future archives. To facilitate the management of daily activities, the GSFC DAAC has developed a data base system to track correspondence, requests, ingestion and distribution. In addition, several log files which record transactions on Unitree are maintained and periodically examined. This study identifies some of the users' requests and file access patterns at the GSFC DAAC during 1995. The analysis is limited to the subset of orders for which the data files are under the control of the Hierarchical Storage Management (HSM) system Unitree. The results show that most of the data volume ordered was for two data products. The volume was also mostly made up of level 3 and 4 data, and most of the volume was distributed on 8 mm and 4 mm tapes. In addition, most of the volume ordered was for deliveries in North America, although there was significant world-wide use. There was a wide range of request sizes in terms of volume and number of files ordered. On average, 78.6 files were ordered per request. Using the data managed by Unitree, several caching algorithms have been evaluated for both hit rate and the overhead ('cost') associated with the movement of data from near-line devices to disks. The algorithm called LRU/2-bin was found to be the best for this workload, but the STbin algorithm also worked well.
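
    Caching policies of this kind are typically evaluated by replaying a request trace against a simulated cache. The sketch below does so for a plain LRU baseline, reporting hit rate and staging cost; the LRU/2-bin and STbin policies from the study additionally bin files, which is not reproduced here:

        from collections import OrderedDict

        def replay(trace, cache_size):
            """Replay (file name, size) requests against an LRU cache, returning
            the hit rate and the bytes staged from near-line storage to disk."""
            cache, hits, cost = OrderedDict(), 0, 0
            for name, size in trace:
                if name in cache:
                    hits += 1
                    cache.move_to_end(name)
                    continue
                cost += size                   # stage the file from tape to disk
                cache[name] = size
                while sum(cache.values()) > cache_size:
                    cache.popitem(last=False)  # evict the least recently used file
            return hits / len(trace), cost

        rate, staged = replay([("a", 5), ("b", 3), ("a", 5)], cache_size=8)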

  18. Beating the tyranny of scale with a private cloud configured for Big Data

    NASA Astrophysics Data System (ADS)

    Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag

    2015-04-01

    The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks, and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment, ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively, even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high performance disk environment is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are also load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.
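
    A data management task like checksumming one hundred million files is embarrassingly parallel. A minimal sketch of the pattern, assuming MD5 and a local process pool (the abstract does not describe JASMIN's actual tooling):

        import hashlib
        from concurrent.futures import ProcessPoolExecutor
        from pathlib import Path

        def checksum(path):
            """MD5 one file in fixed-size blocks so memory use stays flat."""
            h = hashlib.md5()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(1 << 20), b""):
                    h.update(block)
            return str(path), h.hexdigest()

        def checksum_tree(root, workers=32):
            files = [p for p in Path(root).rglob("*") if p.is_file()]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return dict(pool.map(checksum, files))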

  19. Development and Implementation of Kumamoto Technopolis Regional Database T-KIND

    NASA Astrophysics Data System (ADS)

    Onoue, Noriaki

    T-KIND (Techno-Kumamoto Information Network for Data-Base) is a system for effectively searching information on the technology, human resources, and industries necessary to realize Kumamoto Technopolis. It is composed of a coded database, an image database, and a LAN inside the techno-research park, which is the center of R&D in the Technopolis. The on-line system is built by networking general-purpose computers, minicomputers, optical disk file systems, and so on, and the service is provided through public telephone lines. Two databases are now available, on enterprise information and on human resource information. The former covers about 4,000 enterprises and the latter about 2,000 persons.

  20. Automated Camouflage Pattern Generation Technology Survey.

    DTIC Science & Technology

    1985-08-07

    supported by high speed data communications? Costs: 9. What are your rates? $/CPU hour: $/MB disk storage/day: $/connect hour: other charges: What are your... data to the workstation, tape drives are needed for backing up and archiving completed patterns, 256 megabytes of on-line hard disk space as a minimum...is needed to support multiple processes and data files, and 4 megabytes of actual or virtual memory is needed to process the largest expected single

  1. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.

  2. A high-speed network for cardiac image review.

    PubMed Central

    Elion, J. L.; Petrocelli, R. R.

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964

  3. Integrating new Storage Technologies into EOS

    NASA Astrophysics Data System (ADS)

    Peters, Andreas J.; van der Ster, Dan C.; Rocha, Joaquim; Lensing, Paul

    2015-12-01

    The EOS[1] storage software was designed to cover CERN disk-only storage use cases in the medium term, trading scalability against latency. To cover and prepare for long-term requirements, the CERN IT data and storage services group (DSS) is actively conducting R&D and open source contributions to experiment with a next generation storage software based on CEPH[3] and ethernet-enabled disk drives. CEPH provides a scale-out object storage system, RADOS, and additionally various optional high-level services like an S3 gateway, RADOS block devices and a POSIX compliant file system, CephFS. The acquisition of CEPH by Red Hat underlines the promising role of CEPH as the open source storage platform of the future. CERN IT is running a CEPH service in the context of OpenStack on a moderate scale of 1 PB replicated storage. Building a 100+ PB storage system based on CEPH will require software and hardware tuning, and it is of capital importance to demonstrate the feasibility and possibly iron out bottlenecks and blocking issues beforehand. The main idea behind this R&D is to leverage and contribute to existing building blocks in the CEPH storage stack and implement a few CERN specific requirements in a thin, customisable storage layer. A second research topic is the integration of ethernet-enabled disks. This paper introduces various ongoing open source developments, their status and applicability.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan

    MapReduce is increasingly becoming a popular framework and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
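
    The simplification MARIANE exploits, namely that map tasks can read input splits straight off a shared POSIX file system with no HDFS-style block layer in between, can be sketched with a toy word count (illustrative, not MARIANE's code):

        import os
        from collections import Counter
        from concurrent.futures import ProcessPoolExecutor
        from functools import reduce

        def map_split(path):
            """Map task: count words in one input split read directly from the
            shared file system (e.g. NFS or GPFS)."""
            with open(path) as f:
                return Counter(f.read().split())

        def run_job(split_dir, workers=8):
            splits = [os.path.join(split_dir, p) for p in os.listdir(split_dir)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                partials = pool.map(map_split, splits)
            return reduce(lambda a, b: a + b, partials, Counter())  # reduce phase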

  5. Performance of the engineering analysis and data system 2 common file system

    NASA Technical Reports Server (NTRS)

    Debrunner, Linda S.

    1993-01-01

    The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in an RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users had migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II, including both disk and tape storage. UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs which have been corrected; other problems were associated with hardware. However, the use of NFS in the UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper will first describe EADS II with emphasis on the CFS. Then, a discussion of mass storage systems will be presented, and methods of measuring the performance of the Common File System will be outlined. Finally, areas for further study will be identified and conclusions will be drawn.

  6. VizieR Online Data Catalog: 340GHz SMA obs. of 50 nearby protoplanetary disks (Tripathi+, 2017)

    NASA Astrophysics Data System (ADS)

    Tripathi, A.; Andrews, S. M.; Birnstiel, T.; Wilner, D. J.

    2018-03-01

    A sample of 50 nearby (d<=200pc) disk targets was collated from the archived catalog of ~340GHz (880um) continuum measurements made with the Submillimeter Array (SMA), since the start of science operations in 2004. Of the 50 disks in our survey, 10 were recently observed by us expressly for the purposes of the present study. To our knowledge, the SMA observations of 18 targets have not yet been published elsewhere. Table 1 is a brief SMA observation log, with references for where the data originally appeared (observations span 2005 jun 12 to 2015 Jan 19). (3 data files).

  7. LAS - LAND ANALYSIS SYSTEM, VERSION 5.0

    NASA Technical Reports Server (NTRS)

    Pease, P. B.

    1994-01-01

    The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image-to-image and map-to-map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, Fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management, among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems. While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive modes of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition, TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS, and 44,000 blocks (22Mb) of disk storage space are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9-track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format.
This program was developed in 1986 and last updated in 1992.

  8. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  9. Central Satellite Data Repository Supporting Research and Development

    NASA Astrophysics Data System (ADS)

    Han, W.; Brust, J.

    2015-12-01

    Near real-time satellite data is critical to many research and development activities of atmosphere, land, and ocean processes. Acquiring and managing huge volumes of satellite data without (or with less) latency in an organization is always a challenge in the big data age. An organization level data repository is a practical solution to meeting this challenge. The STAR (Center for Satellite Applications and Research of NOAA) Central Data Repository (SCDR) is a scalable, stable, and reliable repository to acquire, manipulate, and disseminate various types of satellite data in an effective and efficient manner. SCDR collects more than 200 data products, which are commonly used by multiple groups in STAR, from NOAA, GOES, Metop, Suomi NPP, Sentinel, Himawari, and other satellites. The processes of acquisition, recording, retrieval, organization, and dissemination are performed in parallel. Multiple data access interfaces, like FTP, FTPS, HTTP, HTTPS, and RESTful, are supported in the SCDR to obtain satellite data from their providers through high speed internet. The original satellite data in various raster formats can be parsed in the respective adapter to retrieve data information. The data information is ingested to the corresponding partitioned tables in the central database. All files are distributed equally on the Network File System (NFS) disks to balance the disk load. SCDR provides consistent interfaces (including Perl utility, portal, and RESTful Web service) to locate files of interest easily and quickly and access them directly by over 200 compute servers via NFS. SCDR greatly improves collection and integration of near real-time satellite data, addresses satellite data requirements of scientists and researchers, and facilitates their primary research and development activities.

  10. The NetLogger Toolkit V2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunter, Dan; Lee, Jason; Stoufer, Martin

    2003-03-28

    The NetLogger Toolkit is designed to monitor, under actual operating conditions, the behavior of all the elements of the application-to-application communication path in order to determine exactly where time is spent within a complex system. Using NetLogger, distributed application components are modified to produce timestamped logs of "interesting" events at all the critical points of the distributed system. Events from each component are correlated, which allows one to characterize the performance of all aspects of the system and network in detail. The NetLogger Toolkit itself consists of four components: an API and library of functions to simplify the generation of application-level event logs, a set of tools for collecting and sorting log files, an event archive system, and a tool for visualization and analysis of the log files. In order to instrument an application to produce event logs, the application developer inserts calls to the NetLogger API at all the critical points in the code, then links the application with the NetLogger library. All the tools in the NetLogger Toolkit share a common log format, and assume the existence of accurate and synchronized system clocks. NetLogger messages can be logged using an easy-to-read text format based on the IETF-proposed ULM format, or a binary format that can still be used through the same API but that is several times faster and smaller, with performance comparable to or better than binary message formats such as MPI, XDR, SDDF-Binary, and PBIO. The NetLogger binary format is both highly efficient and self-describing, and thus optimized for the dynamic message construction and parsing of application instrumentation. NetLogger includes an "activation" API that allows NetLogger logging to be turned on, off, or modified by changing an external file. This is useful for activating logging in daemons/services (e.g. the GridFTP server). The NetLogger reliability API provides the ability to specify backup logging locations and to periodically try to reconnect a broken TCP pipe; a typical use for this is to store data on local disk while the network is down. An event archiver can log one or more incoming NetLogger streams to a local disk file (netlogd) or to a MySQL database (netarchd). We have found exploratory, visual analysis of the log event data to be the most useful means of determining the causes of performance anomalies. The NetLogger Visualization tool, nlv, has been developed to provide a flexible and interactive graphical representation of system-level and application-level events.
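
    A minimal sketch of the kind of timestamped key=value event record the text format produces; this imitates the spirit of the ULM-style format and is not the real NetLogger API:

        import datetime
        import socket
        import sys

        def log_event(stream, event, **fields):
            """Emit one timestamped key=value event record (illustrative only)."""
            record = {"DATE": datetime.datetime.now(datetime.timezone.utc)
                              .strftime("%Y%m%d%H%M%S.%f"),
                      "HOST": socket.gethostname(),
                      "NL.EVNT": event}
            record.update(fields)
            stream.write(" ".join("%s=%s" % kv for kv in record.items()) + "\n")

        log_event(sys.stdout, "TransferStart", FILE="data.bin", SIZE=1048576)
        log_event(sys.stdout, "TransferEnd", FILE="data.bin")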

  11. CD-ROM source data uploaded to the operating and storage devices of an IBM 3090 mainframe through a PC terminal.

    PubMed

    Boros, L G; Lepow, C; Ruland, F; Starbuck, V; Jones, S; Flancbaum, L; Townsend, M C

    1992-07-01

    A powerful method of processing MEDLINE and CINAHL source data uploaded to the IBM 3090 mainframe computer through an IBM/PC is described. Data are first downloaded from the CD-ROM's PC devices to floppy disks. These disks are then uploaded to the mainframe computer through an IBM/PC equipped with the WordPerfect text editor and a computer network connection (SONNGATE). Before downloading, keywords specifying the information to be accessed are typed at the FIND prompt of the CD-ROM station. The resulting abstracts are downloaded into a file called DOWNLOAD.DOC. The floppy disks containing the information are simply carried to an IBM/PC which has a terminal emulation (TELNET) connection to the university-wide computer network (SONNET) at the Ohio State University Academic Computing Services (OSU ACS). WordPerfect (5.1) processes and saves the text in DOS format. Using the File Transfer Protocol (FTP, 130,000 bytes/s) of SONNET, the entire text containing the information obtained through the MEDLINE and CINAHL search is transferred to the remote mainframe computer for further processing. At this point, abstracts in the specified area are ready for immediate access and multiple retrieval by any PC having a network switch or dial-in connection, after the USER ID, PASSWORD and ACCOUNT NUMBER are specified by the user. The system provides the user an on-line, very powerful and quick method of searching for words specifying diseases, agents, experimental methods, animals, authors, and journals in the research area downloaded. The user can also copy the TItles, AUthors and SOurce, with optional parts of abstracts, into papers being edited. This arrangement serves the special demands of a research laboratory by handling MEDLINE and CINAHL source data resulting after a search is performed with keywords specified for ongoing projects. Since the Ohio State University has a centrally funded mainframe system, the data upload, storage and mainframe operations are free.
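
    The upload step amounts to a routine FTP transfer. A sketch using Python's ftplib, with hypothetical host, credentials, and file names (the original workflow used WordPerfect and FTP over SONNET):

        from ftplib import FTP

        # Host and credentials are placeholders for the site-specific values.
        with FTP("mainframe.example.edu") as ftp:
            ftp.login(user="USERID", passwd="PASSWORD")
            with open("DOWNLOAD.DOC", "rb") as f:
                ftp.storbinary("STOR SEARCH.RESULTS", f)  # upload the search output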

  12. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
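
    The benefit of compressing each column separately, rather than gzipping the whole row-interleaved file, can be demonstrated on synthetic data (a sketch of the principle only, not the convention's actual tiling machinery):

        import gzip
        import zlib
        import numpy as np

        # Stand-in for a FITS binary table: heterogeneous columns stored row by
        # row on disk, but far more compressible when taken column by column.
        n = 100_000
        time_col = np.arange(n, dtype=">f8")          # smooth and very repetitive
        flux_col = np.random.default_rng(0).normal(size=n).astype(">f4")
        rows = np.zeros(n, dtype=[("time", ">f8"), ("flux", ">f4")])
        rows["time"], rows["flux"] = time_col, flux_col

        whole = len(gzip.compress(rows.tobytes()))    # gzip the interleaved table
        by_column = sum(len(zlib.compress(col.tobytes()))
                        for col in (time_col, flux_col))
        print(whole, by_column)  # column-wise output is typically much smaller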

  13. NORTHWOODS Wildlife Habitat Data Base

    Treesearch

    Mark D. Nelson; Janine M. Benyus; Richard R. Buech

    1992-01-01

    Wildlife habitat data from seven Great Lakes National Forests were combined into a wildlife-habitat matrix named NORTHWOODS. Several electronic file formats of NORTHWOODS data base and documentation are available on floppy disks for microcomputers.

  14. Building Parts Inventory Files Using the AppleWorks Data Base Subprogram and Apple IIe or GS Computers.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.

    This manual is a "how to" training device for building database files using the AppleWorks program with an Apple IIe or Apple IIGS Computer with Duodisk or two disk drives and an 80-column card. The manual provides step-by-step directions, and includes 25 figures depicting the computer screen at the various stages of the database file…

  15. Clementine High Resolution Camera Mosaicking Project. Volume 21; CL 6021; 80 deg S to 90 deg S Latitude, North Periapsis; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Clementine I high resolution (HiRes) camera lunar image mosaics developed by Malin Space Science Systems (MSSS). These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. The geometric control is provided by the U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD are compiled from polar data (latitudes greater than 80 degrees), and are presented in the stereographic projection at a scale of 30 m/pixel at the pole, a resolution 5 times greater than that (150 m/pixel) of the corresponding UV/Vis polar basemap. This 5:1 scale ratio is in keeping with the sub-polar mosaic, in which the HiRes and UV/Vis mosaics had scales of 20 m/pixel and 100 m/pixel, respectively. The equal-area property of the stereographic projection made this preferable for the HiRes polar mosaic rather than the basemap's orthographic projection. Thus, a necessary first step in constructing the mosaic was the reprojection of the UV/Vis basemap to the stereographic projection. The HiRes polar data can be naturally grouped according to the orbital periapsis, which was in the south during the first half of the mapping mission and in the north during the second half. Images in each group have generally uniform intrinsic resolution, illumination, exposure and gain. Rather than mingle data from the two periapsis epochs, separate mosaics are provided for each, a total of 4 polar mosaics. The mosaics are divided into 100 square tiles of 2250 pixels (approximately 2.2 deg near the pole) on a side. Not all squares of this grid contain HiRes mosaic data, some inevitably since a square is not a perfect representation of a (latitude) circle, others due to the lack of HiRes data. This CD also contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  16. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.

  17. 45 CFR 286.260 - May Tribes use sampling and electronic filing?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... quarterly reports electronically, based on format specifications that we will provide. Tribes who do not have the capacity to submit reports electronically may submit quarterly reports on a disk or in hard...

  18. The Seven Deadly Sins of Online Microcomputing.

    ERIC Educational Resources Information Center

    King, Alan

    1989-01-01

    Offers suggestions for avoiding common errors in online microcomputer use. Areas discussed include learning the basics; hardware protection; backup options; hard disk organization; software selection; file security; and the use of dedicated communications lines. (CLB)

  19. HECWRC, Flood Flow Frequency Analysis Computer Program 723-X6-L7550

    DTIC Science & Technology

    1989-02-14

    AGENCY NAME AND ADDRESS, ORDER NO., ETC. (If NTIS sells, leave blank) 11. PRICE INFORMATION. Price includes documentation. Price code: D01, $50.00. 12 ...required is 256 K. A math coprocessor (8087/80287/80387) is highly recommended but not required. 16. DATA FILE TECHNICAL DESCRIPTION. The software is...disk drive (360 KB or 1.2 MB). A 10 MB or larger hard disk is recommended. A math coprocessor (8087/80287/80387) is highly recommended but not required.

  20. The Cheetah data management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunz, P.F.; Word, G.B.

    1992-09-01

    Cheetah is a data management system based on the C programming language, with support for other languages. Its main goal is to transfer data between memory and I/O streams in a general way. The streams are either associated with disk files or are network data streams. Cheetah provides optional convenience functions to assist in the management of C structures. Cheetah streams are self-describing, so that general-purpose applications can fully understand an incoming stream. This information can be used to display the data in an incoming stream to the user of an interactive general application, complete with variable names and optional comments.

  1. An EXCEL macro for importing log ASCII standard (LAS) files into EXCEL worksheets

    NASA Astrophysics Data System (ADS)

    Özkaya, Sait Ismail

    1996-02-01

    An EXCEL 5.0 macro is presented for converting a LAS text file into an EXCEL worksheet. Although EXCEL has commands for importing text files and parsing text lines, LAS files must be decoded line-by-line because three different delimiters are used to separate fields of differing length. The macro is intended to eliminate manual decoding of LAS version 2.0. LAS is a floppy disk format for storage and transfer of log data as text files. LAS was proposed by the Canadian Well Logging Society. The present EXCEL macro decodes different sections of a LAS file, separates, and places the fields into different columns of an EXCEL worksheet. To import a LAS file into EXCEL without errors, the file must not contain any unrecognized symbols, and the data section must be the last section. The program does not check for the presence of mandatory sections or fields as required by LAS rules. Once a file is incorporated into EXCEL, mandatory sections and fields may be inspected visually.
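
    As an illustration of the line-by-line decoding described above, the following is a minimal sketch (in Python rather than the author's EXCEL macro) of LAS 2.0 parsing: detect "~" section flags, split header lines on the "." and ":" delimiters plus whitespace, and collect the space-delimited data section. Field handling is deliberately simplified.

      def parse_las(path):
          """Return (header_records, curve_data) from a LAS 2.0 text file."""
          headers, data, in_data = [], [], False
          with open(path) as f:
              for line in f:
                  line = line.rstrip("\n")
                  if not line or line.startswith("#"):      # skip blanks and comments
                      continue
                  if line.startswith("~"):                  # section flag (~V, ~W, ~C, ~A)
                      in_data = line[:2].upper() == "~A"    # data section must come last
                      continue
                  if in_data:
                      data.append([float(v) for v in line.split()])
                  elif "." in line:
                      # header line: MNEM.UNIT  VALUE : DESCRIPTION
                      mnem, rest = line.split(".", 1)
                      unit = rest.split(None, 1)[0] if rest[:1].strip() else ""
                      value, _, descr = rest[len(unit):].partition(":")
                      headers.append((mnem.strip(), unit, value.strip(), descr.strip()))
          return headers, data

    A real decoder would also honor the wrap flag and the mandatory-section rules that, as the abstract notes, the macro leaves to visual inspection.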

  2. Exploring the use of I/O nodes for computation in a MIMD multiprocessor

    NASA Technical Reports Server (NTRS)

    Kotz, David; Cai, Ting

    1995-01-01

    As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between 'compute' and 'I/O' nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O. Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.

  3. Virus Information Update CIAC-2301

    DTIC Science & Technology

    1998-05-21

    a tune through a sound card. Byway is reported to be in the wild internationally, especially in Venezuela, Mexico, Bulgaria, the UK, and the USA. REMOVAL NOTE...1482, Varicella Type: Program. Disk Location: Features: Damage: Size: See Also: Notes: v6-146: This virus was written to hurt users of the TBCLEAN...antivirus package. If you have a file infected with the Varicella virus, and if you tried to clean this virus infected file with tbclean, what would

  4. Framework for Integrating Science Data Processing Algorithms Into Process Control Systems

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.

    2011-01-01

    A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This innovative framework is really the unifying bridge between the execution of a step in the overall processing pipeline, and the available PCS component services as well as the information that they collectively manage.
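
    The lifecycle just described (stage the files and metadata a PGE needs, execute it, then hand its products back for downstream processing and pedigree) can be sketched as follows. This is a hypothetical outline, not the PCS API: the file_manager.stage and crawler.ingest names are invented for illustration.

      import pathlib, subprocess, tempfile

      class TaskWrapper:
          def __init__(self, file_manager, crawler):
              self.fm, self.crawler = file_manager, crawler

          def run(self, pge_cmd, input_ids, workdir=None):
              work = pathlib.Path(workdir or tempfile.mkdtemp(prefix="pge-"))
              # 1. deliver the required files/metadata to the PGE's working directory
              staged = [self.fm.stage(fid, work) for fid in input_ids]
              # 2. execute the PGE in its prepared, stable environment
              subprocess.run(pge_cmd, cwd=work, check=True)
              # 3. crawl the output products and metadata back to file management
              return self.crawler.ingest(work)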

  5. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.

  6. ATLAS Data Management Accounting with Hadoop Pig and HBase

    NASA Astrophysics Data System (ADS)

    Lassnig, Mario; Garonne, Vincent; Dimitrov, Gancho; Canali, Luca

    2012-12-01

    The ATLAS Distributed Data Management system requires accounting of its contents at the metadata layer. This presents a hard problem due to the large scale of the system, the high dimensionality of attributes, and the high rate of concurrent modifications of data. The system must efficiently account for more than 90 PB of disk and tape that store upwards of 500 million files across 100 sites globally. In this work a generic accounting system is presented, which is able to scale to the requirements of ATLAS. The design and architecture are presented, and the implementation is discussed. An emphasis is placed on the design choices such that the underlying data models are generally applicable to different kinds of accounting, reporting and monitoring.
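
    As a toy illustration of metadata-layer accounting, the sketch below rolls file-level metadata up into per-(site, datatype) totals. The field names are invented for the example; the actual ATLAS schema and its Pig/HBase implementation are not reproduced here.

      from collections import Counter

      files = [
          {"site": "CERN-PROD", "datatype": "AOD", "bytes": 2_000_000_000},
          {"site": "BNL-OSG2",  "datatype": "RAW", "bytes": 5_000_000_000},
          {"site": "CERN-PROD", "datatype": "AOD", "bytes": 1_500_000_000},
      ]

      usage = Counter()
      for f in files:
          usage[(f["site"], f["datatype"])] += f["bytes"]   # one cell per dimension pair

      for (site, dtype), nbytes in sorted(usage.items()):
          print(f"{site:10s} {dtype:4s} {nbytes / 1e9:6.1f} GB")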

  7. File-based data flow in the CMS Filter Farm

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Andronidis, A.; Bawej, T.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.

    2015-12-01

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
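
    The small JSON bookkeeping "documents" mentioned above might look like the following sketch: a record emitted per processing unit that can stay memory-resident or be flushed to disk for aggregation. The field names are hypothetical; the actual CMS schema is not reproduced here.

      import json

      doc = {
          "run": 123456,
          "lumisection": 42,
          "events_in": 18000,
          "events_accepted": 310,
          "output_files": ["stream_A_ls0042.dat"],
      }

      record = json.dumps(doc)            # may remain memory-resident...
      with open("stream_A_ls0042.jsn", "w") as f:
          f.write(record)                 # ...or be written out for the aggregation step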

  8. Publication of science data on CD-ROM: A guide and example

    NASA Technical Reports Server (NTRS)

    Angelici, Gary; Skiles, J. W.

    1993-01-01

    CD-ROM (Compact Disk-Read Only Memory) is becoming the standard media not only in audio recording, but also in the publication of data and information accessible on many computer platforms. Little has been written about the complicated process involved in creating easy-to-use, high quality, and useful CD-ROM's containing scientific data. This document is a manual designed to aid those who are responsible for the publication of scientific data on CD-ROM. All aspects and steps of the procedure are covered, from feasibility assessment through disk design, data preparation, disc mastering, and CD-ROM distribution. General advice and actual examples are based on lessons learned from the publication of scientific data for an interdisciplinary field experiment. Appendices include actual files from a CD-ROM, a purchase request for CD-ROM mastering services, and the disk art for the first disk published for the project.

  9. Digitized molecular diagnostics: reading disk-based bioassays with standard computer drives.

    PubMed

    Li, Yunchao; Ou, Lily M L; Yu, Hua-Zhong

    2008-11-01

    We report herein a digital signal readout protocol for screening disk-based bioassays with standard optical drives of ordinary desktop/notebook computers. Three different types of biochemical recognition reactions (biotin-streptavidin binding, DNA hybridization, and protein-protein interaction) were performed directly on a compact disk in a line array format with the help of microfluidic channel plates. Being well-correlated with the optical darkness of the binding sites (after signal enhancement by gold nanoparticle-promoted autometallography), the reading error levels of prerecorded audio files can serve as a quantitative measure of biochemical interaction. This novel readout protocol is about 1 order of magnitude more sensitive than fluorescence labeling/scanning and has the capability of examining multiplex microassays on the same disk. Because no modification to either hardware or software is needed, it promises a platform technology for rapid, low-cost, and high-throughput point-of-care biomedical diagnostics.

  10. 77 FR 281 - Proposed Consent Decree, Clean Air Act Citizen Suit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-04

    ... holidays. Comments on a disk or CD-ROM should be formatted in Word or ASCII file, avoiding the use of... U.S. Virgin Islands had failed to submit CAA SIPs for improving visibility in mandatory Federal...

  11. Detecting Hardware-assisted Hypervisor Rootkits within Nested Virtualized Environments

    DTIC Science & Technology

    2012-06-14

    least the minimum required for the guest OS and click “Next”. For 64-bit Windows 7 the minimum required is 2048 MB (Figure 66)...When prompted for memory, allocate at least the minimum required for the guest OS; for 64-bit Windows 7 the minimum required is 2048 MB (Figure 79)...21. Within the virtual disk creation wizard, select VDI for the file type (Figure 81). 22. Select Dynamically

  12. HECLIB. Volume 2: HECDSS Subroutines Programmer’s Manual

    DTIC Science & Technology

    1991-05-01

    algorithm and hierarchical design for database accesses. This algorithm provides quick access to data sets and an efficient means of adding new data set...Description of How DSS Works DSS version 6 utilizes a modified hash algorithm based upon the pathname to store and retrieve data. This structure allows...balancing disk space and record access times. A variation in this algorithm is for "stable" files. In a stable file, a hash table is not utilized
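
    The pathname-hashing idea described above (hash the pathname, use it to select a bin, then search that bin for the record address) can be sketched as follows. The bin layout and hash choice are illustrative only, not the actual DSS file format.

      import zlib

      N_BINS = 1024
      bins = [[] for _ in range(N_BINS)]        # each bin: list of (pathname, address)

      def put(pathname, address):
          bins[zlib.crc32(pathname.encode()) % N_BINS].append((pathname, address))

      def get(pathname):
          for name, addr in bins[zlib.crc32(pathname.encode()) % N_BINS]:
              if name == pathname:
                  return addr
          return None

      put("/BASIN/GAGE1/FLOW/01JAN1989/1HOUR/OBS/", 40960)
      print(get("/BASIN/GAGE1/FLOW/01JAN1989/1HOUR/OBS/"))   # -> 40960

    A "stable" file, as described above, would skip the hash table and address records directly.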

  13. Up-to-date state of storage techniques used for large numerical data files

    NASA Technical Reports Server (NTRS)

    Chlouba, V.

    1975-01-01

    Methods for data storage and output in data banks and memory files are discussed along with a survey of equipment available for this. Topics discussed include magnetic tapes, magnetic disks, Terabit magnetic tape memory, Unicon 690 laser memory, IBM 1360 photostore, microfilm recording equipment, holographic recording, film readers, optical character readers, digital data storage techniques, and photographic recording. The individual types of equipment are summarized in tables giving the basic technical parameters.

  14. RANS Simulation (Actuator Disk Model[ADM]) of the NREL Phase VI wind turbine modeled as MHK Turbine

    DOE Data Explorer

    Javaherchi, Teymour

    2016-06-08

    Attached are the .cas and .dat files for the Reynolds Averaged Navier-Stokes (RANS) simulation of a single lab-scaled DOE RM1 turbine implemented in the ANSYS FLUENT CFD package. In this case study, the flow field around and in the wake of the NREL Phase VI wind turbine, modeled as an MHK turbine, is simulated using the Actuator Disk Model (a.k.a. porous media) by solving the RANS equations coupled with a turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled. The effect of the rotating turbine blades is modeled using actuator disk theory (see the stated section of the attached M.Sc. thesis for more details).
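
    For orientation, a back-of-the-envelope actuator-disk source term per standard one-dimensional momentum theory is sketched below; the numbers are assumed placeholders, not the authors' FLUENT setup. The rotor is replaced by a momentum sink S_x = T / (A * dx) distributed over the disk volume.

      import math

      rho, U_inf = 1000.0, 1.5        # water density [kg/m^3], inflow speed [m/s] (assumed)
      D, dx, C_T = 0.5, 0.01, 0.8     # disk diameter/thickness [m], thrust coefficient (assumed)

      A = math.pi * D**2 / 4.0
      T = 0.5 * rho * A * U_inf**2 * C_T      # rotor thrust from the thrust coefficient
      S_x = T / (A * dx)                      # momentum sink per unit volume [N/m^3]
      print(f"thrust = {T:.1f} N, source term = {S_x:.0f} N/m^3")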

  15. IDG - INTERACTIVE DIF GENERATOR

    NASA Technical Reports Server (NTRS)

    Preheim, L. E.

    1994-01-01

    The Interactive DIF Generator (IDG) utility is a tool used to generate and manipulate Directory Interchange Format files (DIF). Its purpose as a specialized text editor is to create and update DIF files which can be sent to NASA's Master Directory, also referred to as the International Global Change Directory at Goddard. Many government and university data systems use the Master Directory to advertise the availability of research data. The IDG interface consists of a set of four windows: (1) the IDG main window; (2) a text editing window; (3) a text formatting and validation window; and (4) a file viewing window. The IDG main window starts up the other windows and contains a list of valid keywords. The keywords are loaded from a user-designated file and selected keywords can be copied into any active editing window. Once activated, the editing window designates the file to be edited. Upon switching from the editing window to the formatting and validation window, the user has options for making simple changes to one or more files such as inserting tabs, aligning fields, and indenting groups. The viewing window is a scrollable read-only window that allows fast viewing of any text file. IDG is an interactive tool and requires a mouse or a trackball to operate. IDG uses the X Window System to build and manage its interactive forms, and also uses the Motif widget set and runs under Sun UNIX. IDG is written in C-language for Sun computers running SunOS. This package requires the X Window System, Version 11 Revision 4, with OSF/Motif 1.1. IDG requires 1.8Mb of hard disk space. The standard distribution medium for IDG is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The program was developed in 1991 and is a copyrighted work with all copyright vested in NASA. SunOS is a trademark of Sun Microsystems, Inc. X Window System is a trademark of Massachusetts Institute of Technology. OSF/Motif is a trademark of the Open Software Foundation, Inc. UNIX is a trademark of Bell Laboratories.

  16. High signal intensity of intervertebral calcified disks on T1-weighted MR images resulting from fat content.

    PubMed

    Malghem, Jacques; Lecouvet, Frédéric E; François, Robert; Vande Berg, Bruno C; Duprez, Thierry; Cosnard, Guy; Maldague, Baudouin E

    2005-02-01

    To explain a cause of high signal intensity on T1-weighted MR images in calcified intervertebral disks associated with spinal fusion, magnetic resonance and radiological examinations of 13 patients were reviewed, each presenting one or several intervertebral disks with high signal intensity on T1-weighted MR images, associated both with the presence of calcifications in the disks and with peripheral fusion of the corresponding spinal segments. Fusion was due to ligament ossifications (n=8), ankylosing spondylitis (n=4), or posterior arthrodesis (n=1). Imaging files included X-rays and T1-weighted MR images in all cases, T2-weighted MR images in 12 cases, MR images with fat signal suppression in 7 cases, and a CT scan in 1 case. A calcified disk from an anatomical specimen of a lumbar spine ankylosed by ankylosing spondylitis was also examined histologically. The signal intensity of the disks was similar to that of bone marrow or perivertebral fat on all sequences, including those with fat signal suppression. In one of these disks, a strongly negative absorption coefficient was focally measured by CT scan, suggesting a fatty content. The histological examination of the ankylosed calcified disk revealed the presence of well-differentiated bone tissue and fatty marrow within the disk. The high signal intensity of some calcified intervertebral disks on T1-weighted MR images can result from the presence of fatty marrow, probably related to a disk ossification process in ankylosed spines.

  17. Multiple rings in the transition disk and companion candidates around RX J1615.3-3255. High contrast imaging with VLT/SPHERE

    NASA Astrophysics Data System (ADS)

    de Boer, J.; Salter, G.; Benisty, M.; Vigan, A.; Boccaletti, A.; Pinilla, P.; Ginski, C.; Juhasz, A.; Maire, A.-L.; Messina, S.; Desidera, S.; Cheetham, A.; Girard, J. H.; Wahhaj, Z.; Langlois, M.; Bonnefoy, M.; Beuzit, J.-L.; Buenzli, E.; Chauvin, G.; Dominik, C.; Feldt, M.; Gratton, R.; Hagelberg, J.; Isella, A.; Janson, M.; Keller, C. U.; Lagrange, A.-M.; Lannier, J.; Menard, F.; Mesa, D.; Mouillet, D.; Mugrauer, M.; Peretti, S.; Perrot, C.; Sissa, E.; Snik, F.; Vogt, N.; Zurlo, A.; SPHERE Consortium

    2016-11-01

    Context. The effects of a planet sculpting the disk from which it formed are most likely to be found in disks that are in transition between being classical protoplanetary and debris disks. Recent direct imaging of transition disks has revealed structures such as dust rings, gaps, and spiral arms, but an unambiguous link between these structures and sculpting planets is yet to be found. Aims: We aim to find signs of ongoing planet-disk interaction and to study the distribution of small grains at the surface of the transition disk around RX J1615.3-3255 (RX J1615). Methods: We observed RX J1615 with VLT/SPHERE. From these observations, we obtained polarimetric imaging with ZIMPOL (R'-band) and IRDIS (J), and IRDIS (H2H3) dual-band imaging with simultaneous spatially resolved spectra with the IFS (YJ). Results: We image the disk for the first time in scattered light and detect two arcs, two rings, a gap, and an inner disk with marginal evidence for an inner cavity. The shapes of the arcs suggest that they are probably segments of full rings. Ellipse fitting for the two rings and the inner disk yields a disk inclination i = 47 ± 2° and semi-major axes of 1.50 ± 0.01'' (278 au), 1.06 ± 0.01'' (196 au), and 0.30 ± 0.01'' (56 au), respectively. We determine the scattering surface height above the midplane based on the projected ring center offsets. Nine point sources are detected between 2.1'' and 8.0'' separation and are considered companion candidates. With NACO data we recover four of the nine point sources, which we determine to be not co-moving and therefore not bound to the system. Conclusions: We present the first detection of the transition disk of RX J1615 in scattered light. The height of the rings indicates limited flaring of the disk surface, which enables partial self-shadowing in the disk. The outermost arc either traces the bottom of the disk or is another ring with semi-major axis ≳ 2.35'' (435 au). We explore both scenarios, extrapolating the complete shape of the feature, which will allow us to distinguish between the two in future observations. The most attractive scenario, in which the arc traces the bottom of the outer ring, requires the disk to be truncated at r ≈ 360 au. If the closest companion candidate is indeed orbiting the disk at 540 au, it would be the most likely cause of such truncation. This companion candidate, as well as the remaining four, requires follow-up observations to determine whether they are bound to the system. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 095.C-0298(A), 095.C-0298(B), and 095.C-0693(A) during guaranteed and open time observations of the SPHERE consortium, and on NACO observations: program IDs 085.C-0012(A), 087.C-0111(A), and 089.C-0133(A). The reduced images as FITS files are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/595/A114

  18. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    NASA Astrophysics Data System (ADS)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon by a Chinese lunar probe. In the Chang'E-3 mission, the Miyun satellite ground station used, for the first time, a SAN storage network system based on the StorNext shared file system; the system's performance fully meets the data storage requirements of the Miyun ground station. The StorNext file system is a high-performance shared file system that allows multiple servers running different operating systems to access the file system at the same time, and it supports access to data over a variety of topologies, such as SAN and LAN. StorNext focuses on data protection and big data management; Quantum has reportedly sold more than 70,000 StorNext file system licenses worldwide, and its growing customer base marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 downlink data and the management of local data storage. The station mainly carries out exploration mission management and the receiving and management of observation data, and it provides comprehensive, centralized monitoring and control of the data receiving equipment. The ground station applied the StorNext-based SAN storage network system to receive and manage data reliably. The computer system at the Miyun ground station is composed of operational servers, application workstations, and storage equipment, so the storage system needs a shared file system that supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s; thus the network throughput of the file system must be no less than 240 MB/s. At the same time, the maximum size of each data file is up to 810 GB. The storage system as planned requires that 10 nodes simultaneously write data to the file system through 16 channels with 240 MB/s aggregate throughput. As integrated, the shared system can provide a simultaneous write speed of 1020 MB/s. When the master storage server fails, the backup storage server takes over the service; client reads and writes are not affected, and the switchover time is less than 5 s. The design and the integrated storage system meet the users' requirements. Nevertheless, an all-fibre SAN is expensive, and the SCSI hard disk transfer rate may still be the bottleneck in the development of the entire storage system. StorNext can provide users with efficient sharing, management, and automatic archiving of large numbers of files, together with hardware solutions, and it occupies a leading position in big data management. StorNext is a widely used shared-storage product, but it has drawbacks: first, the software is expensive and is licensed per site, so when the network scale is large the purchase cost is very high; second, configuring StorNext places high demands on the skills of the technical staff, and when a problem occurs it is difficult to diagnose.

  19. VizieR Online Data Catalog: Transits observed in OGLE 2001-2003 (Udalski+, 2002-2004)

    NASA Astrophysics Data System (ADS)

    Udalski, A.; Paczynski, B.; Zebrun, K.; Szymanski, M.; Kubiak, M.; Soszinski, I.; Szewczyk, O.; Wyrzykowski, L.; Pietrzynski, G.

    2003-11-01

    We present results of an extensive photometric search for planetary and low-luminosity object transits in the Galactic disk stars commencing the third phase of the Optical Gravitational Lensing Experiment - OGLE-III. (1 data file).

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB parallel NAND Flash disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon dual-socket Blackford server motherboard; 2 Intel Xeon dual-core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80 GB hard drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual-Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA system was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that spent more than 50% of its time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling and language classification benchmarks showed an order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  1. Data oriented job submission scheme for the PHENIX user analysis in CCJ

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; En'yo, H.; Ichihara, T.; Watanabe, Y.; Yokkaichi, S.

    2011-12-01

    The RIKEN Computing Center in Japan (CCJ) has been developed to make it possible to analyze the huge amount of data collected by the PHENIX experiment at RHIC. The collected raw data or reconstructed data are transferred via SINET3 with 10 Gbps bandwidth from Brookhaven National Laboratory (BNL) by using GridFTP. The transferred data are first stored in the hierarchical storage management system (HPSS) prior to user analysis. Since the size of the data grows steadily year by year, the concentration of access requests at the data servers became one of the serious bottlenecks. To eliminate this I/O-bound problem, 18 compute nodes with a total of 180 TB of local disk were introduced to store the data a priori. We added some setup to the batch job scheduler (LSF) so that users can specify the required data, already distributed to the local disks. The locations of the data are automatically obtained from a database, and jobs are dispatched to the appropriate node holding the required data. To avoid multiple accesses to a local disk from several jobs on a node, lock files and access control lists are employed. As a result, each job can handle a local disk exclusively. Indeed, the total throughput improved drastically compared to the preexisting nodes in CCJ, and users can analyze about 150 TB of data within 9 hours. We report this successful job submission scheme and the features of the PC cluster.
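
    The dispatch-and-lock idea described above might be sketched as follows: look up which node holds the requested data set, submit the job to that node, and take a per-disk lock so one job uses the local disk exclusively. The catalog and function names are hypothetical, not the CCJ/LSF configuration.

      import fcntl, subprocess

      def node_holding(dataset, catalog):
          return catalog[dataset]                  # e.g. {"run7_auau_part3": "ccj042"}

      def submit(dataset, script, catalog):
          node = node_holding(dataset, catalog)
          # dispatch to the node that already holds the data (LSF host selection)
          subprocess.run(["bsub", "-m", node, script], check=True)

      def with_exclusive_disk(lockpath, work):
          with open(lockpath, "w") as lock:
              fcntl.flock(lock, fcntl.LOCK_EX)     # one job per local disk at a time
              try:
                  work()
              finally:
                  fcntl.flock(lock, fcntl.LOCK_UN)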

  2. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  3. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  4. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  5. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  6. 42 CFR 93.508 - Filing, forms, and service.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... HEALTH EFFECTS STUDIES OF HAZARDOUS SUBSTANCES RELEASES AND FACILITIES PUBLIC HEALTH SERVICE POLICIES ON RESEARCH MISCONDUCT Opportunity To Contest ORI Findings of Research Misconduct and HHS Administrative... nondocumentary materials such as videotapes, computer disks, or physical evidence. This provision does not apply...

  7. DPM — efficient storage in diverse environments

    NASA Astrophysics Data System (ADS)

    Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio

    2014-06-01

    Recent developments, including low-power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as they need, and optimizing their computing center's energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server (the head node), largely reducing the head node's hard disk requirements. Since version 1.8.6, DPM has been released in EPEL and Fedora, simplifying distribution and maintenance, but also supporting the ARM architecture besides i386 and x86_64, allowing it to run on the smallest low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the possibility of scaling horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage, and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup both in terms of namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the lowest scale, using Raspberry Pis with only 700 MHz single-core CPUs and 100 Mbps network connections, over conventional multi-core servers, to typical virtual machine instances in cloud settings. We evaluate combinations of different name server setups, for example load-balanced clusters, with different storage setups, from a classic local configuration to private and public clouds.
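
    A cost metric of the kind the paper develops could take the following shape: total cost of ownership per unit of delivered throughput over a fixed service life. All numbers below are placeholders, not measurements from the paper.

      def tco_per_MBps(hardware_eur, watts, years, eur_per_kWh, throughput_MBps):
          energy_eur = watts / 1000.0 * 24 * 365 * years * eur_per_kWh
          return (hardware_eur + energy_eur) / throughput_MBps

      # e.g. a Raspberry Pi class node vs. a conventional server (assumed figures)
      print(tco_per_MBps(50.0, 5.0, 3, 0.20, 10.0))       # low-power node
      print(tco_per_MBps(3000.0, 250.0, 3, 0.20, 900.0))  # multi-core server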

  8. Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Han, Fang; Scott, Stephen L

    Cloud computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization, including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers, and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R with respect to virtual machines (VM), providing excellent solutions to reduce either checkpoint latency or checkpoint file size. However, these approaches do not provide both. This paper presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the usage of VMI, we are able to determine which pages of memory within the guest are used or free, and we are better able to reduce the amount of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM's default method.

  9. Jet creation in post-AGB binaries: the circum-companion accretion disk around BD+46°442

    NASA Astrophysics Data System (ADS)

    Bollen, Dylan; Van Winckel, Hans; Kamath, Devika

    2017-11-01

    Aims: We aim at describing and understanding binary interaction processes in systems with very evolved companions. Here, we focus on understanding the origin and determining the properties of the high-velocity outflow observed in one such system. Methods: We present a quantitative analysis of BD+46°442, a post-AGB binary that shows active mass transfer leading to the creation of a disk-driven outflow or jet. We obtained high-resolution optical spectra from the HERMES spectrograph, mounted on the 1.2 m Flemish Mercator Telescope. By performing a time-series analysis of the Hα profile, we identified the different components of the system. We deduced the jet geometry by comparing the orbital phased data with our jet model. In order to image the accretion disk around the companion of BD+46°442, we applied the technique of Doppler tomography. Results: The orbital phase-dependent variations in the Hα profile can be related to an accretion disk around the companion, from which a high-velocity outflow or jet is launched. Our model shows that there is a clear correlation between the inclination angle and the jet opening angle. The latitudinally dependent velocity structure of our jet model shows a good correspondence to the data, with outflow velocities higher than at least 400 km s-1. The intensity peak in the Doppler map might be partly caused by a hot spot in the disk, or by a larger asymmetrical structure in the disk. Conclusions: We show that BD+46°442 is a result of a binary interaction channel. The origin of the fast outflow in this system might be a gaseous disk around the secondary component, which is most likely a main-sequence star. Our analysis suggests that the outflow has a rather wide opening angle and is not strongly collimated. Our time-resolved spectral monitoring reveals the launching site of the jet in the binary BD+46°442. Similar orbital phase-dependent Hα profiles are commonly observed in post-AGB binaries, which provide ideal test beds to study jet formation and launching mechanisms over a wide range of orbital conditions. Based on observations made with the Mercator Telescope, operated on the island of La Palma by the Flemish Community, at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. The reduced spectra (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A60

  10. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.

  11. 76 FR 12106 - Lead-Based Paint Renovation, Repair and Painting Activities in Target Housing and Child Occupied...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-04

    ... will also be accepted on standard disks in Microsoft Word or ASCII file format. D. How should I handle... hazards of lead-based paint and where to receive more information about health protection. The poster also...

  12. Investigation of selected disk systems

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The large disk systems offered by IBM, UNIVAC, Digital Equipment Corporation, and Data General were examined. In particular, these disk systems were analyzed in terms of how well available operating systems take advantage of the respective disk controller's transfer rates, and to what degree all available data for optimizing disk usage is effectively employed. In the course of this analysis, generic functions and components of disk systems were defined and the capabilities of the surveyed disk system were investigated.

  13. Advanced Satellite Workstation - An integrated workstation environment for operational support of satellite system planning and analysis

    NASA Astrophysics Data System (ADS)

    Hamilton, Marvin J.; Sutton, Stewart A.

    A prototype integrated environment, the Advanced Satellite Workstation (ASW), which was developed and delivered for evaluation and operator feedback in an operational satellite control center, is described. The current ASW hardware consists of a Sun workstation and a Macintosh II workstation connected via Ethernet, network hardware and software, a laser disk system, an optical storage system, and a telemetry data file interface. The central objective of ASW is to provide an intelligent decision support and training environment for operators/analysts of complex systems such as satellites. Compared to the many recent workstation implementations that incorporate graphical telemetry displays and expert systems, ASW provides a considerably broader look at intelligent, integrated environments for decision support, based on the premise that the central features of such an environment are intelligent data access and integrated toolsets.

  14. Flow prediction for propfan engine installation effects on transport aircraft at transonic speeds

    NASA Technical Reports Server (NTRS)

    Samant, S. S.; Yu, N. J.

    1986-01-01

    An Euler-based method for aerodynamic analysis of turboprop transport aircraft at transonic speeds has been developed. In this method, inviscid Euler equations are solved over surface-fitted grids constructed about aircraft configurations. Propeller effects are simulated by specifying sources of momentum and energy on an actuator disc located in place of the propeller. A stripwise boundary layer procedure is included to account for the viscous effects. A preliminary version of an approach to embed the exhaust plume within the global Euler solution has also been developed for more accurate treatment of the exhaust flow. The resulting system of programs is capable of handling wing-body-nacelle-propeller configurations. The propeller disks may be tractors or pushers and may represent single or counterrotation propellers. Results from analyses of three test cases of interest (a wing alone, a wing-body-nacelle model, and a wing-nacelle-endplate model) are presented. A user's manual for executing the system of computer programs with formats of various input files, sample job decks, and sample input files is provided in appendices.

  15. Data Elevator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BYNA, SUNRENDRA; DONG, BIN; WU, KESHENG

    Data Elevator: Efficient Asynchronous Data Movement in Hierarchical Storage Systems. Multi-layer storage subsystems, including SSD-based burst buffers and disk-based parallel file systems (PFS), are becoming part of HPC systems. However, software for this storage hierarchy is still in its infancy, and applications may have to explicitly move data among the storage layers. We propose Data Elevator for transparently and efficiently moving data between a burst buffer and a PFS. Users specify the final destination for their data, typically on the PFS; Data Elevator intercepts the I/O calls, stages data on the burst buffer, and then asynchronously transfers the data to their final destination in the background. This system allows extensive optimizations, such as overlapping read and write operations, choosing I/O modes, and aligning buffer boundaries. In tests with large-scale scientific applications, Data Elevator is as much as 4.2X faster than Cray DataWarp, the state-of-the-art software for burst buffers, and 4X faster than directly writing to the PFS. The Data Elevator library uses HDF5's Virtual Object Layer (VOL) to intercept parallel I/O calls that write data to the PFS. The intercepted calls are redirected to the Data Elevator, which provides a handle to write the file to a faster, intermediate burst buffer system. Once the application finishes writing the data to the burst buffer, the Data Elevator job uses HDF5 to move the data to the final destination in an asynchronous manner. Hence, the Data Elevator library is currently useful for applications that call HDF5 for writing data files. Also, the Data Elevator depends on the HDF5 VOL functionality.
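
    The staging pattern described above (writes land on the fast burst buffer, and a background worker drains finished files to their final parallel-file-system destination) can be sketched conceptually as follows. This is not the HDF5 VOL-based implementation; the /bb and /pfs paths are placeholders.

      import pathlib, queue, shutil, threading

      BURST, PFS = pathlib.Path("/bb"), pathlib.Path("/pfs")   # placeholder mount points
      pending = queue.Queue()

      def write(name, payload: bytes):
          staged = BURST / name
          staged.write_bytes(payload)        # fast write to the burst buffer
          pending.put((staged, PFS / name))  # schedule the asynchronous move

      def drain():
          while True:
              src, dst = pending.get()
              shutil.move(str(src), str(dst))  # background transfer to the destination
              pending.task_done()

      threading.Thread(target=drain, daemon=True).start()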

  16. Data transfer nodes and demonstration of 100-400 Gbps wide area throughput using the Caltech SDN testbed

    NASA Astrophysics Data System (ADS)

    Mughal, A.; Newman, H.

    2017-10-01

    We review and demonstrate the design of efficient data transfer nodes (DTNs), from the perspective of the highest throughput over both local and wide area networks, as well as the highest performance per unit cost. A careful system-level design is required for the hardware, firmware, OS and software components. Furthermore, additional tuning of these components, and the identification and elimination of any remaining bottlenecks is needed once the system is assembled and commissioned, in order to obtain optimal performance. For high throughput data transfers, specialized software is used to overcome the traditional limits in performance caused by the OS, file system, file structures used, etc. Concretely, we will discuss and present the latest results using Fast Data Transfer (FDT), developed by Caltech. We present and discuss the design choices for three generations of Caltech DTNs. Their transfer capabilities range from 40 Gbps to 400 Gbps. Disk throughput is still the biggest challenge in the current generation of available hardware. However, new NVME drives combined with RDMA and a new NVME network fabric are expected to improve the overall data-transfer throughput and simultaneously reduce the CPU load on the end nodes.

  17. Small Aircraft Data Distribution System

    NASA Technical Reports Server (NTRS)

    Chazanoff, Seth L.; Dinardo, Steven J.

    2012-01-01

    The CARVE Small Aircraft Data Distribution System acquires the aircraft location and attitude data that is required by the various programs running on a distributed network. This system distributes the data it acquires to the data acquisition programs for inclusion in their data files. It uses UDP (User Datagram Protocol) to broadcast data over a LAN (Local Area Network) to any programs that might have a use for the data. The program is easily adaptable to acquire additional data and log that data to disk. The current version also drives displays using precision pitch and roll information to aid the pilot in maintaining a level-level attitude for radar/radiometer mapping beyond the degree available by flying visually or using a standard gyro-driven attitude indicator. The software is designed to acquire an array of data to help the mission manager make real-time decisions as to the effectiveness of the flight. This data is displayed for the mission manager and broadcast to the other experiments on the aircraft for inclusion in their data files. The program also drives real-time precision pitch and roll displays for the pilot and copilot to aid them in maintaining the desired attitude, when required, during data acquisition on mapping lines.

  18. File-Based Data Flow in the CMS Filter Farm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J.M.; et al.

    2015-12-23

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.

  19. Industrial-Strength Streaming Video.

    ERIC Educational Resources Information Center

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  20. Limited Area Coverage/High Resolution Picture Transmission (LAC/HRPT) data vegetative index calculation processor user's manual

    NASA Technical Reports Server (NTRS)

    Obrien, S. O. (Principal Investigator)

    1980-01-01

    The program, LACVIN, calculates vegetative index numbers from limited area coverage/high resolution picture transmission data for selected IJ grid sections. The IJ grid sections were previously extracted from the full resolution data tapes and stored on disk files.

  1. System and Method for High-Speed Data Recording

    NASA Technical Reports Server (NTRS)

    Taveniku, Mikael B. (Inventor)

    2017-01-01

    A system and method for high-speed data recording includes a control computer and a disk pack unit. Each disk pack is housed within a shell that provides handling and protection, and the disk pack unit provides cooling of the disks and connections for power and disk signaling. A standard connection is provided between the control computer and the disk pack unit. The disk pack units are self-sufficient and able to connect to any computer. Multiple disk packs can be connected to the system simultaneously, so that one disk pack can be active while one or more others are inactive. To protect against power surges, power to each disk pack is controlled programmatically for the group of disks in the pack.

  2. User's guide for a large signal computer model of the helical traveling wave tube

    NASA Technical Reports Server (NTRS)

    Palmer, Raymond W.

    1992-01-01

    We describe the use of a successful large-signal, two-dimensional (axisymmetric), deformable-disk computer model of the helical traveling wave tube amplifier, in an extensively revised and operationally simplified version. We also discuss program input and output and the auxiliary files necessary for operation. A sample problem is included with its input data and output results. Interested parties may obtain from the author the FORTRAN source code, auxiliary files, and sample input data on a standard floppy diskette, the contents of which are described herein.

  3. Accretion Discs Around Black Holes: Development of Theory

    NASA Astrophysics Data System (ADS)

    Bisnovatyi-Kogan, G. S.

    Standard accretion disk theory, based on local heat balance, is formulated: the energy produced by turbulent viscous heating is assumed to be emitted from the sides of the disc. Sources of turbulence in the accretion disc are connected with nonlinear hydrodynamic instability, convection, and magnetic fields. In the standard theory there are two branches of solution: optically thick and optically thin. Advection in accretion disks is described by differential equations, which makes the theory nonlocal. Under some assumptions, a low-luminosity, optically thin accretion disc model with advection may become advection dominated, carrying almost all of the energy into the black hole. A proper account of the magnetic field in the accretion process limits the energy advected into the black hole; the efficiency of accretion should exceed ˜1/4 of the standard accretion disk model efficiency.

  4. Effects of Disk Warping on the Inclination Evolution of Star-Disk-Binary Systems

    NASA Astrophysics Data System (ADS)

    Zanazzi, J. J.; Lai, Dong

    2018-04-01

    Several recent studies have suggested that circumstellar disks in young stellar binaries may be driven into misalignment with their host stars due to secular gravitational interactions between the star, the disk, and the binary companion. The disk in such systems is twisted/warped due to the gravitational torques from the oblate central star and the external companion. We calculate the disk warp profile, taking into account bending wave propagation and viscosity in the disk. We show that for typical protostellar disk parameters, the disk warp is small, thereby justifying the "flat-disk" approximation adopted in previous theoretical studies. However, the viscous dissipation associated with the small disk warp/twist tends to drive the disk toward alignment with the binary or the central star. We calculate the relevant timescales for this alignment. We find the alignment is effective for sufficiently cold disks with strong external torques, especially in systems with rapidly rotating stars, but is ineffective for the majority of star-disk-binary systems. Viscous warp-driven alignment may be necessary to account for the observed spin-orbit alignment in multi-planet systems if these systems are accompanied by an inclined binary companion.

  5. The American Indian: A Multimedia Encyclopedia.

    ERIC Educational Resources Information Center

    Carter, Christina E.

    1993-01-01

    Reviews "The American Indian: A Multimedia Encyclopedia," Version 1.0 (New York, Facts on File, Inc., 1993). This electronic product (compact disk) presents a great amount of material on American Indians from various formats, but its effectiveness is limited by the dated nature of some materials. Software design and searching features are…

  6. Limited Area Coverage/High Resolution Picture Transmission (LAC/HRPT) tape IJ grid pixel extraction processor user's manual

    NASA Technical Reports Server (NTRS)

    Obrien, S. O. (Principal Investigator)

    1980-01-01

    The program, LACREG, extracts all pixels contained in a specific IJ grid section. The pixels, along with a header record, are stored in a disk file defined by the user. The program can extract up to 99 IJ grid sections.

  7. UNDELETE; a program to recover deleted RSX-11 disk files; program logic manual

    USGS Publications Warehouse

    Baker, L.M.

    1986-01-01

    This report presents a list of selected publications pertaining to the water resources in Virginia. The report includes a source-agency listing by publication type, which is arranged in alphabetical order by author. Information concerning the availability of the publications also is provided. (USGS)

  8. PCVLF User’s Guide

    DTIC Science & Technology

    1991-03-01

    [Garbled table-of-contents extraction; only section titles are recoverable: 3.3.2 Manual Frequency List Measurement; 3.3.3 Manual 200-kHz Spectrum Measurement; 4.2.1 Frequency List Measurements; 4.2.2 Calibration Measurements; Manual Frequency List Measurements; 4.3 Disk Files; 4.3.1 Program Disk.]

  9. A high-speed, large-capacity, 'jukebox' optical disk system

    NASA Technical Reports Server (NTRS)

    Ammon, G. J.; Calabria, J. A.; Thomas, D. T.

    1985-01-01

    Two optical disk 'jukebox' mass storage systems which provide access to any data in a store of 10^13 bits (1250 Gbytes) within six seconds have been developed. The optical disk jukebox system is divided into two units: a hardware/software controller and a disk drive. The controller provides flexibility and adaptability through a ROM-based microcode-driven data processor and a ROM-based software-driven control processor. The cartridge storage module contains 125 optical disks housed in protective cartridges. Attention is given to a conceptual view of the disk drive unit, the NASA optical disk system, the NASA database management system configuration, the NASA optical disk system interface, and an open systems interconnect reference model.

  10. I/O Performance Characterization of Lustre and NASA Applications on Pleiades

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Rappleye, Jason; Chang, Johnny; Barker, David Peter; Biswas, Rupak; Mehrotra, Piyush

    2012-01-01

    In this paper we study the performance of the Lustre file system using five scientific and engineering applications representative of the NASA workload on large-scale supercomputing systems such as NASA's Pleiades. In order to facilitate the collection of Lustre performance metrics, we have developed a software tool that exports a wide variety of client and server-side metrics using SGI's Performance Co-Pilot (PCP), and generates a human-readable report on key metrics at the end of a batch job. These performance metrics are (a) amount of data read and written, (b) number of files opened and closed, and (c) remote procedure call (RPC) size distribution (4 KB to 1024 KB, in powers of 2) for I/O operations. RPC size distribution measures the efficiency of the Lustre client and can pinpoint problems such as small write sizes, disk fragmentation, etc. These extracted statistics are useful in determining the I/O pattern of the application and can assist in identifying possible improvements for users' applications. Information on the number of file operations enables a scientist to optimize the I/O performance of their applications. The amount of I/O data helps users choose the optimal stripe size and stripe count to enhance I/O performance. In this paper, we demonstrate the usefulness of this tool on Pleiades for five production-quality NASA scientific and engineering applications. We compare the latency of read and write operations under Lustre to that with NFS by tracing system calls and signals. We also investigate the read and write policies and study the effect of page cache size on I/O operations. We examine the performance impact of Lustre stripe size and stripe count along with performance evaluation of file-per-process and single-shared-file access by all the processes for the NASA workload using the parameterized IOR benchmark.
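    The RPC size distribution described above is essentially a power-of-two histogram of I/O request sizes. A minimal sketch of that binning (not the PCP-based tool itself) might look like:

        def rpc_size_histogram(sizes_bytes):
            # Bin request sizes into power-of-two buckets from 4 KB to 1024 KB.
            buckets = {1 << k: 0 for k in range(2, 11)}   # keys in KB: 4, 8, ..., 1024
            for size in sizes_bytes:
                kb = size / 1024.0
                for b in sorted(buckets):
                    if kb <= b:
                        buckets[b] += 1
                        break
                else:
                    buckets[1024] += 1   # clamp anything larger into the top bucket
            return buckets

        # A spike in the 4 KB bucket would flag inefficient small writes.
        print(rpc_size_histogram([4096, 8192, 1048576, 2048]))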

  11. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters.

    PubMed

    Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr

    2010-10-28

    Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina in bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina job distribution, and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any available computer, regardless of platform, can be added to the cluster without ever using the computer's hard disk drive and without interfering with the installed operating system. With a cluster of 10 processors and a potential maximum speed-up of 10×, the parallel algorithm of MOLA achieved a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
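    The job-distribution step MOLA automates can be pictured with a generic Python sketch; the ligand and config file names and the use of a local process pool are placeholders, not MOLA's actual mechanism.

        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        def dock(ligand):
            # Run one Vina docking job; receptor config and ligand names
            # are hypothetical placeholders.
            out = ligand.replace(".pdbqt", "_out.pdbqt")
            subprocess.run(["vina", "--config", "receptor.conf",
                            "--ligand", ligand, "--out", out], check=True)
            return out

        ligands = ["lig0001.pdbqt", "lig0002.pdbqt", "lig0003.pdbqt"]
        with ProcessPoolExecutor(max_workers=10) as pool:
            results = list(pool.map(dock, ligands))

    With ten workers and near-perfect load balance, a speed-up approaching 10× is the ceiling, consistent with the 8.6× figures reported above.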

  12. Interactive computer methods for generating mineral-resource maps

    USGS Publications Warehouse

    Calkins, James Alfred; Crosby, A.S.; Huffman, T.E.; Clark, A.L.; Mason, G.T.; Bascle, R.J.

    1980-01-01

    Inasmuch as maps are a basic tool of geologists, the U.S. Geological Survey's CRIB (Computerized Resources Information Bank) was constructed so that the data it contains can be used to generate mineral-resource maps. However, by the standard methods used (batch processing and off-line plotting), the production of a finished map commonly takes 2-3 weeks. To produce computer-generated maps more rapidly, cheaply, and easily, and also to provide an effective demonstration tool, we have devised two related methods for plotting maps as alternatives to conventional batch methods. These methods are: 1. Quick-Plot, an interactive program whose output appears on a CRT (cathode-ray-tube) device, and 2. The Interactive CAM (Cartographic Automatic Mapping system), which combines batch and interactive runs. The output of the Interactive CAM system is final compilation (not camera-ready) paper copy. Both methods are designed to use data from the CRIB file in conjunction with a map-plotting program. Quick-Plot retrieves a user-selected subset of data from the CRIB file, immediately produces an image of the desired area on a CRT device, and plots data points according to a limited set of user-selected symbols. This method is useful for immediate evaluation of the map and for demonstrating how trial maps can be made quickly. The Interactive CAM system links the output of an interactive CRIB retrieval to a modified version of the CAM program, which runs in the batch mode and stores plotting instructions on a disk, rather than on a tape. The disk can be accessed by a CRT, and, thus, the user can view and evaluate the map output on a CRT immediately after a batch run, without waiting 1-3 days for an off-line plot. The user can, therefore, do most of the layout and design work in a relatively short time by use of the CRT, before generating a plot tape and having the map plotted on an off-line plotter.

  13. PCACE- PERSONAL COMPUTER AIDED CABLING ENGINEERING

    NASA Technical Reports Server (NTRS)

    Billitti, J. W.

    1994-01-01

    A computerized interactive harness engineering program has been developed to provide an inexpensive, interactive system which is designed for learning and using an engineering approach to interconnection systems. PCACE is basically a database system that stores information as files of individual connectors and handles wiring information in circuit groups stored as records. This directly emulates the typical manual engineering methods of data handling, thus making the user interface to the program very natural. Data files can be created, viewed, manipulated, or printed in real time. The printed output is in a form ready for use by fabrication and engineering personnel. PCACE also contains a wide variety of error-checking routines including connector contact checks during hardcopy generation. The user may edit existing harness data files or create new files. In creating a new file, the user is given the opportunity to insert all the connector and harness boilerplate data which would be part of a normal connector wiring diagram. This data includes the following: 1) connector reference designator, 2) connector part number, 3) backshell part number, 4) cable reference designator, 5) cable part number, 6) drawing revision, 7) relevant notes, 8) standard wire gauge, and 9) maximum circuit count. Any item except the maximum circuit count may be left blank, and any item may be changed at a later time. Once a file is created and organized, the user is directed to the main menu and has access to the file boilerplate, the circuit wiring records, and the wiring records index list. The organization of a file is such that record zero contains the connector/cable boilerplate, and all other records contain circuit wiring data. Each wiring record will handle a circuit with as many as nine wires in the interface. The record stores the circuit name and wire count and the following data for each wire: 1) wire identifier, 2) contact, 3) splice, 4) wire gauge if different from standard, 5) wire/group type, 6) wire destination, and 7) note number. The PCACE record structure allows for a wide variety of wiring forms using splices and shields, yet retains sufficient structure to maintain ease of use. PCACE is written in TURBO Pascal 3.0 and has been implemented on IBM PC, XT, and AT systems under DOS 3.1 with a memory of 512K 8-bit bytes, two floppy disk drives, an RGB monitor, and a printer with ASCII control characters. PCACE was originally developed in 1983, and the IBM version was released in 1986.

  14. Assessment of disk MHD generators for a base load powerplant

    NASA Technical Reports Server (NTRS)

    Chubb, D. L.; Retallick, F. D.; Lu, C. L.; Stella, M.; Teare, J. D.; Loubsky, W. J.; Louis, J. F.; Misra, B.

    1981-01-01

    Results from a study of the disk MHD generator are presented. Both open and closed cycle disk systems were investigated. Costing of the open cycle disk components (nozzle, channel, diffuser, radiant boiler, magnet and power management) was done. However, no detailed costing was done for the closed cycle systems. Preliminary plant design for the open cycle systems was also completed. Based on the system study results, an economic assessment of the open cycle systems is presented. Costs of the open cycle disk components are less than those of comparable linear generator components. Also, costs of electricity for the open cycle disk systems are competitive with comparable linear systems. Advantages of the disk design simplicity are considered. Improvements in the channel availability or a reduction in the channel lifetime requirement are possible as a result of the disk design.

  15. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.

  16. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE PAGES

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason; ...

    2018-04-05

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.
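    A sketch of the sparse-coding idea, using scikit-learn's dictionary learner as a stand-in for the authors' implementation; the n-gram size, dictionary size, pooling, and training file name are illustrative choices.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        def byte_ngrams(fragment, n=8):
            # Slide an n-byte window over the fragment; scale bytes to [0, 1].
            arr = np.frombuffer(fragment, dtype=np.uint8).astype(float) / 255.0
            return np.array([arr[i:i + n] for i in range(len(arr) - n + 1)])

        # Learn an overcomplete sparse dictionary over n-grams from training data.
        X = byte_ngrams(open("train.bin", "rb").read())
        learner = DictionaryLearning(n_components=64,
                                     transform_algorithm="lasso_lars")
        codes = learner.fit_transform(X)

        # Pool activations into a fixed-length feature vector for a classifier
        # such as an SVM.
        features = np.abs(codes).mean(axis=0)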

  17. 77 FR 39493 - Proposed Consent Decree, Clean Air Act Citizen Suit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-03

    ... Friday, excluding legal holidays. Comments on a disk or CD-ROM should be formatted in Word or ASCII file... question. EPA or the Department of Justice may withdraw or withhold consent to the proposed consent decree... Justice determines, based on any comment submitted, that consent to this consent decree should be...

  18. 78 FR 30919 - Proposed Consent Decree, Clean Air Act Citizen Suit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-23

    ..., excluding legal holidays. Comments on a disk or CD-ROM should be formatted in Word or ASCII file, avoiding... to the litigation in question. EPA or the Department of Justice may withdraw or withhold consent to... EPA or the Department of Justice determines that consent to this consent decree should be withdrawn...

  19. 34 CFR 668.24 - Record retention and examinations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... representative. (3) An institution may keep required records in hard copy or in microform, computer file, optical disk, CD-ROM, or other media formats, provided that— (i) Except for the records described in paragraph (d)(3)(ii) of this section, all record information must be retrievable in a coherent hard copy format...

  20. Electronic Imaging in Admissions, Records & Financial Aid Offices.

    ERIC Educational Resources Information Center

    Perkins, Helen L.

    Over the years, efforts have been made to work more efficiently with the ever increasing number of records and paper documents that cross workers' desks. Filing records on optical disk through electronic imaging is an alternative that many feel is the answer to successful document management. The pioneering efforts in electronic imaging in…

  1. 77 FR 64514 - Sunshine Act Meeting; Open Commission Meeting; Wednesday, October 17, 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-22

    .../Video coverage of the meeting will be broadcast live with open captioning over the Internet from the FCC... format and alternative media, including large print/ type; digital disk; and audio and video tape. Best.... 2012-26060 Filed 10-18-12; 4:15 pm] BILLING CODE 6712-01-P ...

  2. Clementine High Resolution Camera Mosaicking Project

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This report constitutes the final report for NASA Contract NASW-5054. This project processed Clementine I high resolution images of the Moon, mosaicked these images together, and created a 22-disk set of compact disk read-only memory (CD-ROM) volumes. The mosaics were produced through semi-automated registration and calibration of the high resolution (HiRes) camera's data against the geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic produced by the US Geological Survey (USGS). The HiRes mosaics were compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution nadir-looking observations. The images were spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel for sub-polar mosaics (below 80 deg. latitude) and using the stereographic projection at a scale of 30 m/pixel for polar mosaics. Only images with emission angles less than approximately 50 deg. were used. Images from non-mapping cross-track slews, which tended to have large SPICE errors, were generally omitted. The locations of the resulting image population were found to be offset from the UV/Vis basemap by up to 13 km (0.4 deg.). Geometric control was taken from the 100 m/pixel global and 150 m/pixel polar USGS Clementine Basemap Mosaics compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Radiometric calibration was achieved by removing the image nonuniformity dominated by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, that approximately transform the 8-bit HiRes data to photometric units. The sub-polar mosaics are divided into tiles that cover approximately 1.75 deg. of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. Polar mosaics are tiled into squares 2250 pixels on a side, which spans approximately 2.2 deg. Two mosaics are provided for each pole: one corresponding to data acquired while periapsis was in the south, the other while periapsis was in the north. The CD-ROMs also contain ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files.

  3. An Xrootd Italian Federation

    NASA Astrophysics Data System (ADS)

    Boccali, T.; Donvito, G.; Diacono, D.; Marzulli, G.; Pompili, A.; Della Ricca, G.; Mazzoni, E.; Argiro, S.; Gregori, D.; Grandi, C.; Bonacorsi, D.; Lista, L.; Fabozzi, F.; Barone, L. M.; Santocchia, A.; Riahi, H.; Tricomi, A.; Sgaravatto, M.; Maron, G.

    2014-06-01

    The Italian community in CMS has built a geographically distributed network in which all the data stored in the Italian region are available to all the users for their everyday work. This activity involves, at different levels, all the CMS centers: the Tier1 at CNAF, all four Tier2s (Bari, Rome, Legnaro and Pisa), and a few Tier3s (Trieste, Perugia, Torino, Catania, Napoli, ...). The federation uses the new network connections provided by GARR, our NREN (National Research and Education Network), which provides a minimum of 10 Gbit/s to all the sites via the GARR-X[2] project. The federation is currently based on Xrootd[1] technology and on a Redirector aimed at seamlessly connecting all the sites, giving the logical view of a single entity. A special configuration has been put in place for the Tier1, CNAF, where ad-hoc Xrootd changes have been implemented in order to protect the tape system from excessive stress, by not allowing WAN connections to access tape-only files, on a file-by-file basis. In order to improve the overall performance while reading files, both in terms of bandwidth and latency, a hierarchy of Xrootd redirectors has been implemented: a dedicated Redirector where all the INFN sites are registered, without considering their status (T1, T2, or T3 sites). An interesting use case covered by the federation is disk-less Tier3s. The caching solution allows a local storage system to be operated with minimal human intervention: transfers are done automatically on a single-file basis, and the cache is kept operational by automatic removal of old files.

  4. A materials accounting system for an IBM PC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearse, R.C.; Thomas, R.J.; Henslee, S.P.

    1986-01-01

    The authors have adapted the Los Alamos MASS accounting system for use on an IBM PC/AT at the Fuels Manufacturing Facility (FMF) at Argonne National Laboratory West (ANL-WEST). Cost of hardware and proprietary software was less than $10,000 per station. The system consists of three stations between which accounting information is transferred using floppy disks accompanying special nuclear material shipments. The programs were implemented in dBASEIII and were compiled using the proprietary software CLIPPER. Modifications to the inventory can be posted in just a few minutes, and operator/computer interaction is nearly instantaneous. After the records are built by the user, it takes 4-5 seconds to post the results to the database files. A version of this system was specially adapted and is currently in use at the FMF facility at Argonne National Laboratory. Initial satisfaction is adequate, and software and hardware problems are minimal.

  5. Digital ultrasonics signal processing: Flaw data post processing use and description

    NASA Technical Reports Server (NTRS)

    Buel, V. E.

    1981-01-01

    A modular system composed of two sets of tasks, which interprets the flaw data and allows compensation of the data for transducer characteristics, is described. The hardware configuration consists of two main units. A DEC LSI-11 processor, running under the RT-11 single-job, version 2C-02 operating system, controls the scanner hardware and the ultrasonic unit. A DEC PDP-11/45 processor, also running under the RT-11, version 2C-02, operating system, stores, processes, and displays the flaw data. The software developed, the Ultrasonics Evaluation System, is divided into two categories: transducer characterization and flaw classification. Each category is divided further into two functional tasks: a data acquisition task and a postprocessor task. The flaw characterization task collects data, compresses it, and writes it to a disk file. The data is then processed by the flaw classification postprocessing task. The use and operation of the flaw data postprocessor are described.

  6. Building an organic block storage service at CERN with Ceph

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel; Wiebalck, Arne

    2014-06-01

    Emerging storage requirements, such as the need for block storage for both OpenStack VMs and file services like AFS and NFS, have motivated the development of a generic backend storage service for CERN IT. The goals for such a service include (a) vendor neutrality, (b) horizontal scalability with commodity hardware, (c) fault tolerance at the disk, host, and network levels, and (d) support for geo-replication. Ceph is an attractive option due to its native block device layer RBD which is built upon its scalable, reliable, and performant object storage system, RADOS. It can be considered an "organic" storage solution because of its ability to balance and heal itself while living on an ever-changing set of heterogeneous disk servers. This work will present the outcome of a petabyte-scale test deployment of Ceph by CERN IT. We will first present the architecture and configuration of our cluster, including a summary of best practices learned from the community and discovered internally. Next the results of various functionality and performance tests will be shown: the cluster has been used as a backend block storage system for AFS and NFS servers as well as a large OpenStack cluster at CERN. Finally, we will discuss the next steps and future possibilities for Ceph at CERN.
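    The RBD layer mentioned above is scriptable through Ceph's Python bindings; below is a minimal sketch (pool and image names are placeholders) that creates and writes to a block image.

        import rados
        import rbd

        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
        cluster.connect()
        ioctx = cluster.open_ioctx("volumes")                 # placeholder pool name

        rbd.RBD().create(ioctx, "test-image", 10 * 1024**3)   # 10 GiB image
        with rbd.Image(ioctx, "test-image") as image:
            image.write(b"hello", 0)                          # write at offset 0

        ioctx.close()
        cluster.shutdown()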

  7. Medical image digital archive: a comparison of storage technologies

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy; Hutchings, Matt

    1998-07-01

    A cost effective, high capacity digital archive system is one of the remaining key factors that will enable a radiology department to eliminate film as an archive medium. The ever increasing amount of digital image data is creating the need for huge archive systems that can reliably store and retrieve millions of images and hold from a few terabytes of data to possibly hundreds of terabytes. Selecting the right archive solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, conformance to open standards, archive availability and reliability, security, cost, achievable benefits and cost savings, investment protection, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive mediums today. New technologies will be discussed, such as DVD and high performance tape. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on random and pre-fetch retrieval time will be analyzed. The concept of automated migration of images from high performance, RAID disk storage devices to high capacity, Nearline storage devices will be introduced as a viable way to minimize overall storage costs for an archive.

  8. VizieR Online Data Catalog: Spitzer solar-type stars list (Meyer+, 2006)

    NASA Astrophysics Data System (ADS)

    Meyer, M. R.; Hillenbrand, L. A.; Backman, D.; Beckwith, S.; Bouwman, J.; Brooke, T.; Carpenter, J.; Cohen, M.; Cortes, S.; Crockett, N.; Gorti, U.; Henning, T.; Hines, D.; Hollenbach, D.; Kim, J. S.; Lunine, J.; Malhotra, R.; Mamajek, E.; Metchev, S.; Moro-Martin, A.; Morris, P.; Najita, J.; Padgett, D.; Pascucci, I.; Rodmann, J.; Schlingman, W.; Silverstone, M.; Soderblom, D.; Stauffer, J.; Stobie, E.; Strom, S.; Watson, D.; Weidenschilling, S.; Wolf, S.; Young, E.

    2008-01-01

    We provide an overview of the Spitzer Legacy Program, Formation and Evolution of Planetary Systems, that was proposed in 2000, begun in 2001, and executed aboard the Spitzer Space Telescope between 2003 and 2006. This program exploits the sensitivity of Spitzer to carry out mid-infrared spectrophotometric observations of solar-type stars. With a sample of 328 stars ranging in age from 3Myr to 3Gyr, we trace the evolution of circumstellar gas and dust from primordial planet-building stages in young circumstellar disks through to older collisionally generated debris disks. When completed, our program will help define the timescales over which terrestrial and gas giant planets are built, constrain the frequency of planetesimal collisions as a function of time, and establish the diversity of mature planetary architectures. In addition to the observational program, we have coordinated a concomitant theoretical effort aimed at understanding the dynamics of circumstellar dust with and without the effects of embedded planets, dust spectral energy distributions, and atomic and molecular gas line emission. Together with the observations, these efforts will provide an astronomical context for understanding whether our solar system and its habitable planets are a common or a rare circumstance. Additional information about the FEPS project can be found on the team Web site. (4 data files).

  9. Astrochem: Abundances of chemical species in the interstellar medium

    NASA Astrophysics Data System (ADS)

    Maret, Sébastien; Bergin, Edwin A.

    2015-07-01

    Astrochem computes the abundances of chemical species in the interstellar medium as a function of time. It studies the chemistry in a variety of astronomical objects, including diffuse clouds, dense clouds, photodissociation regions, prestellar cores, protostars, and protostellar disks. Astrochem reads a network of chemical reactions from a text file, builds up a system of kinetic rate equations, and solves it using a state-of-the-art stiff ordinary differential equation (ODE) solver. The Jacobian matrix of the system is computed implicitly, so the resolution of the system is extremely fast: large networks containing several thousands of reactions are usually solved in a few seconds. A variety of gas-phase processes are considered, as well as simple gas-grain interactions, such as freeze-out and desorption via several mechanisms (thermal desorption, cosmic-ray desorption and photo-desorption). The computed abundances are written in an HDF5 file, and can be plotted in different ways with the tools provided with Astrochem. Chemical reactions and their rates are written in a format which is meant to be easy to read and to edit. A tool to convert chemical networks from the OSU and KIDA databases into this format is also provided. Astrochem is written in C, and its source code is distributed under the terms of the GNU General Public License (GPL).
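    The kinetic-rate system Astrochem builds can be pictured with a toy two-reaction network; the sketch below substitutes SciPy's stiff BDF solver for Astrochem's own ODE solver, and the rate coefficients are purely illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy network: A + A -> B (rate k1), B -> A + A (rate k2).
        k1, k2 = 1e-9, 1e-12     # illustrative rate coefficients

        def rates(t, y):
            a, b = y
            return [-2 * k1 * a * a + 2 * k2 * b,
                    k1 * a * a - k2 * b]

        # Integrate for ~1 Myr of seconds from an initial abundance of A only.
        sol = solve_ivp(rates, (0.0, 1e6 * 3.15e7), [1e4, 0.0], method="BDF")
        print(sol.y[:, -1])      # final abundances of A and B

    A stiff method such as BDF matters here because astrochemical rate coefficients span many orders of magnitude.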

  10. Visualization of Unsteady Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The current compute environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPPs), such as the 160-node IBM SP2 at NAS, and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster) provide the required computation bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snap-shot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words; Navier-Stokes solutions with turbulence models may contain 7 state variables. (2) Disk speed vs. computational speed. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements of the last decade or two, it is easy to see that depending on disk performance (versus CPU improvement) may not be the best method for enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read a stored volume of data; the solver is running, and it outputs the solution through the same traditional network interfaces that must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work on a single snapshot of the data, but some visualization tools for transient problems require dealing with time.
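    The storage arithmetic behind point (1) is easy to make concrete; the mesh size and precision below are assumed values for illustration.

        # Assumed: 10 million mesh nodes, 5 primitive variables (Euler),
        # 4-byte floats.
        nodes, variables, bytes_per_float = 10_000_000, 5, 4
        snapshot_bytes = nodes * variables * bytes_per_float
        print(snapshot_bytes / 2**20, "MiB per snapshot")   # ~190.7 MiB

        # A 1000-step unsteady run then needs roughly:
        print(1000 * snapshot_bytes / 2**30, "GiB")         # ~186 GiB

    At these volumes, reading a frame back from disk can rival or exceed the solver's per-step compute time, which is the crux of point (2).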

  11. Solar heat collection with suspended metal roofing and whole house ventilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maynard, T.

    1996-10-01

    A south-pitched roof is employed for solar collection directly onto a structural metal roofing with a chocolate brown color. The roofing is suspended over plywood decking so as to create an air space, which receives input from the coolest and lowest basement air of the house interior. Air heated beneath the metal roofing is returned to a basement storage wall. Full-length plenum cavities are formed into the ordinary rafter truss framing, at the knee wall and collar tie spaces. Preliminary data on BTU gain at known air flows are acquired with a microprocessor system that continuously collects input and output temperatures at the roof collector into disk data files.

  12. User's Guide for Computer Program that Routes Signal Traces

    NASA Technical Reports Server (NTRS)

    Hedgley, David R., Jr.

    2000-01-01

    This disk contains a FORTRAN computer program and the corresponding user's guide, which facilitates both the program's incorporation into your system and its use. The computer program implements an efficient algorithm that routes signal traces on layers of a printed circuit with both through-pins and surface mounts. It is an implementation of the ideas presented in the theoretical paper titled "A Formal Algorithm for Routing Signal Traces on a Printed Circuit Board", NASA TP-3639, published in 1996. The computer program in the "connects" file can be read with a FORTRAN compiler and readily integrated into software unique to each particular environment where it might be used.

  13. THE KOZAI–LIDOV MECHANISM IN HYDRODYNAMICAL DISKS. II. EFFECTS OF BINARY AND DISK PARAMETERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Wen; Lubow, Stephen H.; Martin, Rebecca G., E-mail: wf5@rice.edu

    2015-07-01

    Martin et al. showed that a substantially misaligned accretion disk around one component of a binary system can undergo global damped Kozai–Lidov (KL) oscillations. During these oscillations, the inclination and eccentricity of the disk are periodically exchanged. However, the robustness of this mechanism and its dependence on the system parameters were unexplored. In this paper, we use three-dimensional hydrodynamical simulations to analyze how various binary and disk parameters affect the KL mechanism in hydrodynamical disks. The simulations include the effect of gas pressure and viscosity, but ignore the effects of disk self-gravity. We describe results for different numerical resolutions, binary mass ratios and orbital eccentricities, initial disk sizes, initial disk surface density profiles, disk sound speeds, and disk viscosities. We show that the KL mechanism can operate for a wide range of binary-disk parameters. We discuss the applications of our results to astrophysical disks in various accreting systems.

  14. The Kozai-Lidov mechanism in hydrodynamical disks. II. Effects of binary and disk parameters

    DOE PAGES

    Fu, Wen; Lubow, Stephen H.; Martin, Rebecca G.

    2015-07-01

    Martin et al. (2014b) showed that a substantially misaligned accretion disk around one component of a binary system can undergo global damped Kozai–Lidov (KL) oscillations. During these oscillations, the inclination and eccentricity of the disk are periodically exchanged. However, the robustness of this mechanism and its dependence on the system parameters were unexplored. In this paper, we use three-dimensional hydrodynamical simulations to analyze how various binary and disk parameters affect the KL mechanism in hydrodynamical disks. The simulations include the effect of gas pressure and viscosity, but ignore the effects of disk self-gravity. We describe results for different numerical resolutions, binary mass ratios and orbital eccentricities, initial disk sizes, initial disk surface density profiles, disk sound speeds, and disk viscosities. We show that the KL mechanism can operate for a wide range of binary-disk parameters. We discuss the applications of our results to astrophysical disks in various accreting systems.

  15. Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitus, B.R.; Goddard, J.S.; Jatko, W.B.

    1993-06-01

    The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a database of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.

  16. FORTRAN Based Linear Programming for Microcomputers.

    DTIC Science & Technology

    1982-12-01

    [Garbled OCR of the program listing and comments; recoverable fragments mention objective-function coefficients, the maximization/minimization choice of the current model, the model being written to disk, and the name of the last-saved model being written to a transfer file.]

  17. 78 FR 26028 - Proposed Consent Decree, Clean Air Act Citizen Suit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-03

    ... Friday, excluding legal holidays. Comments on a disk or CD-ROM should be formatted in Word or ASCII file... question. EPA or the Department of Justice may withdraw or withhold consent to the proposed consent decree... Justice determines that consent to this consent decree should be withdrawn, the terms of the decree will...

  18. 78 FR 23560 - Proposed Consent Decree, Clean Air Act Citizen Suit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-19

    ... Friday, excluding legal holidays. Comments on a disk or CD-ROM should be formatted in Word or ASCII file... Justice may withdraw or withhold consent to the proposed consent decree if the comments disclose facts or... requirements of the Act. Unless EPA or the Department of Justice determines that consent to this consent decree...

  19. 77 FR 46759 - Proposed Consent Decree, Clean Air Act Citizen Suit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-06

    ... holidays. Comments on a disk or CD-ROM should be formatted in Word or ASCII file, avoiding the use of... U.S. Virgin Islands had failed to submit CAA SIPs for improving visibility in mandatory Federal... from 8:30 a.m. to 4:30 p.m., Monday through Friday, excluding legal holidays. The telephone number for...

  20. 76 FR 37686 - Wage Methodology for the Temporary Non-Agricultural Employment H-2B Program; Amendment of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-28

    ... electronic file on computer disk. The Department will consider providing the proposed rule in other formats... format, contact the Office of Policy Development and Research at (202) 693-3700 (VOICE) (this is not a... August 30, 2010 order,\\1\\ arguing that the Wage Rule violated the Administrative Procedure Act (APA...

  1. The Electronic Library: The Student/Scholar Workstation, CD-ROM and Hypertext.

    ERIC Educational Resources Information Center

    Triebwasser, Marc A.

    Predicting that a large component of the library of the not so distant future will be an electronic network of file servers where information is stored for access by personal computer workstations in remote locations as well as the library, this paper discusses innovative computer technologies--particularly CD-ROM (Compact Disk-Read Only Memory)…

  2. Enhancement of real-time EPICS IOC PV management for the data archiving system

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Ha

    2015-10-01

    For the operation of a 100-MeV linear proton accelerator, the major driving values and experimental data need to be archived. Depending on the experimental conditions, different data are required, so functions that can add new data and delete data in real time need to be implemented. In an Experimental Physics and Industrial Control System (EPICS) input output controller (IOC), the values of process variables (PVs) are matched with the driving values and data. The PV values are archived in text file format by using the channel archiver; there is no need to create a database (DB) server, just a large hard disk. Through the web, the archived data can be loaded, and new PV values can be archived without stopping the archive engine. The details of the implementation of a data archiving system with the channel archiver are presented, and some preliminary results are reported.
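    In the same spirit as the channel archiver, PV sampling into a plain text file can be sketched with the pyepics client library; the PV names and sampling rate below are placeholders.

        import time
        from epics import caget

        PVS = ["LINAC:BEAM:CURRENT", "LINAC:RF:PHASE"]   # placeholder PV names

        with open("archive.txt", "a") as log:
            for _ in range(3600):                        # one hour at 1 Hz
                values = [caget(pv) for pv in PVS]
                row = " ".join(str(v) for v in values)
                log.write("%.3f %s\n" % (time.time(), row))
                log.flush()
                time.sleep(1.0)

    Because the archive is just a text file, PVs can be added or dropped by editing the list, without rebuilding a database server.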

  3. The performance of disk arrays in shared-memory database machines

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared-memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
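    Data temperature is usually defined as the sustained access rate per unit of stored data; here is a hedged worked example (the definition and numbers are illustrative, not taken from the paper).

        # Data temperature = I/Os per second divided by gigabytes stored.
        ios_per_second = 800.0
        capacity_gb = 100.0
        print(ios_per_second / capacity_gb, "I/Os per second per GB")   # 8.0

    An array of many small form-factor drives raises the sustainable temperature by putting more spindles, and thus more concurrent I/Os, behind the same capacity.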

  4. Practical and Secure Recovery of Disk Encryption Key Using Smart Cards

    NASA Astrophysics Data System (ADS)

    Omote, Kazumasa; Kato, Kazuhiko

    In key-recovery methods using smart cards, a user can recover the disk encryption key in cooperation with the system administrator, even if the user has lost the smart card holding the disk encryption key. However, in most key-recovery methods the disk encryption key is known to the system administrator in advance, so a user's disk data may be read by the system administrator. Furthermore, if the disk encryption key is not known to the system administrator in advance, it is difficult to achieve key authentication. In this paper, we propose a scheme that enables recovery of the disk encryption key when the user's smart card is lost. In our scheme, the disk encryption key is not preserved anywhere, so the system administrator cannot know the key before the key-recovery phase. Only someone who has a user's smart card and knows the user's password can decrypt that user's disk data. Furthermore, we measured the processing time required for user authentication in an experimental environment using a virtual machine monitor. As a result, we found that this processing time is short enough to be practical.
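    One way to realize a key that is preserved nowhere is to derive it on the fly from a card-held secret and the user's password; the sketch below is a generic KDF construction for illustration, not the authors' exact protocol.

        import hashlib

        def derive_disk_key(card_secret, password):
            # The key exists only transiently: it is recomputed from the
            # smart-card secret plus the password and is never stored on
            # disk or held by the administrator.
            return hashlib.pbkdf2_hmac("sha256", password.encode(),
                                       card_secret, 200_000)

        key = derive_disk_key(card_secret=b"\x13" * 32,
                              password="correct horse")
        print(len(key), "byte disk encryption key")   # 32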

  5. Modeling and Observations of Debris Disks

    NASA Astrophysics Data System (ADS)

    Moro-Martín, Amaya

    2009-08-01

    Debris disks are disks of dust observed around mature main sequence stars (generally A to K2 type). They are evidence that these stars harbor a reservoir of dust-producing planetesimals on spatial scales that are similar to those found for the small-body population of our solar system. Debris disks present a wide range of sizes and structural features (inner cavities, warps, offsets, rings, clumps) and there is growing evidence that, in some cases, they might be the result of the dynamical perturbations of a massive planet. Our solar system also harbors a debris disk and some of its properties resemble those of extra-solar debris disks. The study of these disks can shed light on the diversity of planetary systems and can help us place our solar system into context. This contribution is an introduction to the debris disk phenomenon, including a summary of debris disks' main properties (§1, based mostly on results from extensive surveys carried out with Spitzer), and a discussion of what they can teach us about the diversity of planetary systems (§2).

  6. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    NASA Astrophysics Data System (ADS)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options such as ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory overhead needed to store the virtual files before they are flushed to disk.
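    ZFP's fixed-accuracy mode is exposed through the zfpy Python binding; the sketch below assumes that binding's compress_numpy/decompress_numpy interface and uses a random array as a stand-in for a model field.

        import numpy as np
        import zfpy

        field = np.random.rand(128, 128, 128).astype(np.float32)

        compressed = zfpy.compress_numpy(field, tolerance=1e-3)  # max abs error
        restored = zfpy.decompress_numpy(compressed)

        print(len(compressed) / field.nbytes)   # compressed fraction of original
        assert np.max(np.abs(restored - field)) <= 1e-3

    Random data is nearly incompressible, so real model fields with spatial coherence compress far better, which is consistent with the 20:1 ratios quoted above.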

  7. Protoplanetary Disks in Multiple Star Systems

    NASA Astrophysics Data System (ADS)

    Harris, Robert J.

    Most stars are born in multiple systems, so the presence of a stellar companion may commonly influence planet formation. Theory indicates that companions may inhibit planet formation in two ways. First, dynamical interactions can tidally truncate circumstellar disks. Truncation reduces disk lifetimes and masses, leaving less time and material for planet formation. Second, these interactions might reduce grain-coagulation efficiency, slowing planet formation in its earliest stages. I present three observational studies investigating these issues. First is a spatially resolved Submillimeter Array (SMA) census of disks in young multiple systems in the Taurus-Auriga star-forming region to study their bulk properties. With this survey, I confirmed that disk lifetimes are preferentially decreased in multiples: single stars have detectable millimeter-wave continuum emission twice as often as components of multiples. I also verified that millimeter luminosity (proportional to disk mass) declines with decreasing stellar separation. Furthermore, by measuring resolved-disk radii, I quantitatively tested tidal-truncation theories: results were mixed, with a few disks much larger than expected. I then switch focus to the grain-growth properties of disks in multiple star systems. By combining SMA, Combined Array for Research in Millimeter Astronomy (CARMA), and Jansky Very Large Array (VLA) observations of the circumbinary disk in the UZ Tau quadruple system, I detected radial variations in the grain-size distribution: large particles preferentially inhabit the inner disk. Detections of these theoretically predicted variations have been rare. I related this to models of grain coagulation in gas disks and find that our results are consistent with growth limited by radial drift. I then present a study of grain growth in the disks of the AS 205 and UX Tau multiple systems. By combining SMA, Atacama Large Millimeter/submillimeter Array (ALMA), and VLA observations, I detected radial variations of the grain-size distribution in the AS 205 A disk, but not in the UX Tau A disk. I find that some combination of radial drift and fragmentation limits growth in the AS 205 A disk. In the final chapter, I summarize my findings that, while multiplicity clearly influences bulk disk properties, it does not obviously inhibit grain growth. Other investigations are suggested.

  8. General consumer communication tools for improved image management and communication in medicine.

    PubMed

    Rosset, Chantal; Rosset, Antoine; Ratib, Osman

    2005-12-01

    We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies, developed for communication and storage of data, can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, an MP3 portable player with hard disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, videoconferencing and instant-messaging software, to transmit DICOM images in real time to a distant computer for conferencing and teleradiology. (3) Finally, we developed a direct secure interface to the iDisk service, a file-sharing service based on WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated into a new open-source image navigation and display software called OsiriX, allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features such as teleradiology, secure and convenient image and data communication, and the use of external data storage services open the door to a much broader extension of our imaging infrastructure to the outside world.
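
    As a rough illustration of the WebDAV mechanism behind the iDisk service described above, an upload is essentially an authenticated HTTP PUT. The sketch below uses the Python requests library; the server URL, credentials, and file name are hypothetical, and this is not the OsiriX implementation.

        # Sketch: push a local DICOM file onto a WebDAV share via HTTP PUT.
        import requests

        def upload_dicom(path: str, base_url: str, auth: tuple) -> None:
            """PUT a local file onto a (hypothetical) WebDAV endpoint."""
            name = path.rsplit("/", 1)[-1]
            with open(path, "rb") as f:
                resp = requests.put(f"{base_url}/{name}", data=f, auth=auth)
            resp.raise_for_status()  # fail loudly on auth or transfer errors

        # upload_dicom("study001.dcm", "https://dav.example.org/shared", ("user", "pw"))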

  9. NGC 5523: An isolated product of soft galaxy mergers?

    NASA Astrophysics Data System (ADS)

    Fulmer, Leah M.; Gallagher, John S.; Kotulla, Ralf

    2017-02-01

    Multi-band images of the very isolated spiral galaxy NGC 5523 show a number of unusual features consistent with NGC 5523 having experienced a significant merger. (1) Near-infrared images from the Spitzer Space Telescope (SST) and the WIYN 3.5-m telescope reveal a nucleated bulge-like structure embedded in a spiral disk; (2) the bulge is offset by 1.8 kpc from a brightness minimum at the center of the optically bright inner disk; (3) a tidal stream, possibly associated with an ongoing satellite interaction, extends from the nucleated bulge along the disk. We interpret these properties as the results of one or more non-disruptive mergers between NGC 5523 and companion galaxies or satellites, raising the possibility that some galaxies become isolated because they have merged with former companions. The reduced images (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A119

  10. Computer programs to characterize alloys and predict cyclic life using the total strain version of strainrange partitioning: Tutorial and users manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Saltsman, James F.

    1992-01-01

    This manual presents computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of Strainrange Partitioning (TS-SRP). An extensive database has also been developed in a parallel effort. This database is probably the largest source of high-temperature, creep-fatigue test data available in the public domain and can be used with other life prediction methods as well. This users manual, software, and database are all in the public domain and are available through COSMIC (382 East Broad Street, Athens, GA 30602; (404) 542-3265, FAX (404) 542-4807). Two disks accompany this manual. The first disk contains the source code, executable files, and sample output from these programs. The second disk contains the creep-fatigue data in a format compatible with these programs.

  11. The Mac Internet Tour Guide: Cruising the Internet the Easy Way. [First Edition].

    ERIC Educational Resources Information Center

    Fraase, Michael

    Published exclusively for Macintosh computer users, this guide provides an overview of Internet resources for new and experienced users. E-mail, file transfer, and decompression software used to access the resources are included on an 800K, 3.5-inch disk. The following chapters are included: (1) "What Is the Internet" covers finding your…

  12. Head-Disk Interface Technology: Challenges and Approaches

    NASA Astrophysics Data System (ADS)

    Liu, Bo

    Magnetic hard disk drive (HDD) technology is believed to be one of the most successful examples of modern mechatronics systems. The mechanical beauty of the magnetic HDD includes simple but extremely accurate head-positioning technology, high-speed and high-stability spindle motor technology, and head-disk interface technology, which keeps the millimeter-sized slider flying over the disk surface at nanometer-level slider-disk spacing. This paper addresses the challenges of, and possible approaches to, further reducing the slider-disk spacing whilst retaining the stability and robustness of head-disk systems for future advanced magnetic disk drives.

  13. Optimising LAN access to grid enabled storage elements

    NASA Astrophysics Data System (ADS)

    Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.

    2008-07-01

    When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP; this data movement will principally involve writing ESD and AOD files into Tier-2 storage. The second arises once datasets are stored at a Tier-2, when physics analysis jobs read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.
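
    The POSIX-like access those analysis jobs require amounts to seeking to an event's offset inside a large file and reading only that record, rather than staging the entire file. A schematic sketch, with an invented path, offset, and record size:

        # Sketch: extract a single event from a large file via POSIX calls.
        def read_event(path: str, offset: int, size: int) -> bytes:
            with open(path, "rb") as f:
                f.seek(offset)       # jump directly to the event of interest
                return f.read(size)  # read only that event's bytes

        # event = read_event("/dpm/example.ch/atlas/aod_0001.root", 1_048_576, 65_536)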

  14. An integrated solution for remote data access

    NASA Astrophysics Data System (ADS)

    Sapunenko, Vladimir; D'Urso, Domenico; dell'Agnello, Luca; Vagnoni, Vincenzo; Duranti, Matteo

    2015-12-01

    Data management constitutes one of the major challenges that a geographically-distributed e-Infrastructure has to face, especially when remote data access is involved. We discuss an integrated solution which enables transparent and efficient access to on-line and near-line data through high-latency networks. The solution is based on the joint use of the General Parallel File System (GPFS) and of the Tivoli Storage Manager (TSM). Both products, developed by IBM, are well known and extensively used in the HEP computing community. A feature introduced in GPFS 3.5, Active File Management (AFM), makes it possible to define a single, geographically-distributed namespace with automated data flow management between different locations. As a practical example, we present the implementation of AFM-based remote data access between two data centres located in Bologna and Rome, demonstrating the validity of the solution for the use case of the AMS experiment, an astro-particle experiment supported by the INFN CNAF data centre with large disk space requirements (more than 1.5 PB).

  15. PCDAQ, A Windows Based DAQ System

    NASA Astrophysics Data System (ADS)

    Hogan, Gary

    1998-10-01

    PCDAQ is a Windows NT-based general DAQ/Analysis/Monte Carlo shell developed as part of the Proton Radiography project at LANL (Los Alamos National Laboratory). It has been adopted by experiments outside of the Proton Radiography project at Brookhaven National Laboratory (BNL) and at LANL. The program provides DAQ, Monte Carlo, and replay (disk file input) modes. Data can be read from hardware (CAMAC) or other programs (ActiveX servers). Future versions will read VME. User-supplied data analysis routines can be written in Fortran, C++, or Visual Basic. Histogramming, testing, and plotting packages are provided. Histogram data can be exported to spreadsheets or analyzed in user-supplied programs. Plots can be copied and pasted as bitmap objects into other Windows programs or printed. A text database keyed by the run number is provided. Extensive software control flags are provided so that the user can control the flow of data through the program. Control flags can be set either in script command files or interactively. The program can be remotely controlled and data accessed over the Internet through its ActiveX DCOM interface.

  16. A self-defining hierarchical data system

    NASA Technical Reports Server (NTRS)

    Bailey, J.

    1992-01-01

    The Self-Defining Data System (SDS) is a system which allows the creation of self-defining hierarchical data structures in a form which allows the data to be moved between different machine architectures. Because the structures are self-defining they can be used for communication between independent modules in a distributed system. Unlike disk-based hierarchical data systems such as Starlink's HDS, SDS works entirely in memory and is very fast. Data structures are created and manipulated as internal dynamic structures in memory managed by SDS itself. A structure may then be exported into a caller-supplied memory buffer in a defined external format. This structure can be written as a file or sent as a message to another machine. It remains static in structure until it is reimported into SDS. SDS is written in portable C and has been run on a number of different machine architectures. Structures are portable between machines with SDS looking after conversion of byte order, floating point format, and alignment. A Fortran callable version is also available for some machines.
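
    The central idea, a structure that carries its own type description in a fixed external byte order so that any architecture can re-import it, can be caricatured in a few lines. This toy tags each value and packs it little-endian; it is a conceptual stand-in only, not SDS's actual external format.

        # Toy "self-defining" export: a one-byte type tag precedes each value,
        # and all payloads use an explicit (little-endian) byte order.
        import struct

        TAGS = {int: (b"i", "<q"), float: (b"d", "<d")}
        FMTS = {b"i": "<q", b"d": "<d"}

        def export(values):
            buf = bytearray()
            for v in values:
                tag, fmt = TAGS[type(v)]
                buf += tag + struct.pack(fmt, v)  # tag byte, then payload
            return bytes(buf)

        def reimport(buf):
            out, i = [], 0
            while i < len(buf):
                fmt = FMTS[buf[i:i + 1]]
                out.append(struct.unpack_from(fmt, buf, i + 1)[0])
                i += 1 + struct.calcsize(fmt)
            return out

        assert reimport(export([42, 3.14])) == [42, 3.14]  # lossless round trip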

  17. Identifying Likely Disk-hosting M dwarfs with Disk Detective

    NASA Astrophysics Data System (ADS)

    Silverberg, Steven; Wisniewski, John; Kuchner, Marc J.; Disk Detective Collaboration

    2018-01-01

    M dwarfs are critical targets for exoplanet searches. Debris disks often provide key information on the formation and evolution of planetary systems around higher-mass stars, alongside the planets themselves. However, fewer than 300 M dwarf debris disks are known, despite M dwarfs making up 70% of stars in the local neighborhood. The Disk Detective citizen science project has identified over 6000 new potential disk host stars from the AllWISE catalog over the past three years. Here, we present preliminary results of our search for new disk-hosting M dwarfs in the survey. Based on near-infrared color cuts and fitting stellar models to photometry, we have identified over 500 potential new M dwarf disk hosts, nearly doubling the known number of such systems. In this talk, we present our methodology and outline our ongoing work to confirm these systems as M dwarf disks.

  18. Gas in the Terrestrial Planet Region of Disks: CO Fundamental Emission from T Tauri Stars

    DTIC Science & Technology

    2003-06-01

    planetary systems: protoplanetary disks — stars: variables: other 1. INTRODUCTION As the likely birthplaces of planets, the inner regions of young...both low column density regions, such as disk gaps, and temperature inversion regions in disk atmospheres can produce significant emission. The esti...which planetary systems form. The motivation to study inner disks is all the more intense today given the discovery of planets outside the solar system

  19. Evidence for dust grain growth in young circumstellar disks.

    PubMed

    Throop, H B; Bally, J; Esposito, L W; McCaughrean, M J

    2001-06-01

    Hundreds of circumstellar disks in the Orion nebula are being rapidly destroyed by the intense ultraviolet radiation produced by nearby bright stars. These young, million-year-old disks may not survive long enough to form planetary systems. Nevertheless, the first stage of planet formation, the growth of dust grains into larger particles, may have begun in these systems. Observational evidence for these large particles in Orion's disks is presented. A model of grain evolution in externally irradiated protoplanetary disks is developed and predicts rapid particle-size evolution and sharp outer disk boundaries. We discuss implications for the formation rates of planetary systems.

  20. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high-performance small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
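
    The N+1 parity encoding highlighted above can be demonstrated in miniature: the parity block is the XOR of the data blocks, so the contents of any single failed disk are recovered by XOR-ing the surviving blocks with the parity. The block contents below are arbitrary stand-ins.

        # Sketch: N+1 parity and single-failure reconstruction via XOR.
        def xor_blocks(blocks):
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, byte in enumerate(block):
                    out[i] ^= byte
            return bytes(out)

        data = [b"AAAA", b"BBBB", b"CCCC"]  # three data disks
        parity = xor_blocks(data)           # the "+1" parity disk

        # Disk 1 fails; rebuild its block from the survivors plus parity.
        rebuilt = xor_blocks([data[0], data[2], parity])
        assert rebuilt == data[1]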

  1. Stagger angle dependence of inertial and elastic coupling in bladed disks

    NASA Technical Reports Server (NTRS)

    Crawley, E. F.; Mokadam, D. R.

    1984-01-01

    Conditions which necessitate the inclusion of disk and shaft flexibility in the analysis of blade response in rotating blade-disk-shaft systems are derived in terms of nondimensional parameters. A simple semianalytical Rayleigh-Ritz model is derived in which the disk possesses all six rigid body degrees of freedom, which are elastically constrained by the shaft. Inertial coupling by the rigid body motion of the disk on a flexible shaft and out-of-plane elastic coupling due to disk flexure are included. Frequency ratios and mass ratios, which depend on the stagger angle, are determined for three typical rotors: a first stage high-pressure core compressor, a high bypass ratio fan, and an advanced turboprop. The stagger angle controls the degree of coupling in the blade-disk system. In the blade-disk-shaft system, the stagger angle determines whether blade-disk motion couples principally to the out-of-plane or in-plane motion of the disk on the shaft. The Ritz analysis shows excellent agreement with experimental results.

  2. Introduction to Data Acquisition 3. Let’s Acquire Data!

    NASA Astrophysics Data System (ADS)

    Nakanishi, Hideya; Okumura, Haruhiko

    In fusion experiments, diagnostic control and logging devices are usually connected through a field bus, e.g., GP-IB, and Internet technologies are often applied for their remote operation. All equipment and digitizers are driven by pre-programmed sequences, in which clocks and triggers give the essential timing for data acquisition. The data production rate and volume must be checked against the transfer and storage rates. To store binary raw data safely, journaling file systems are preferably used together with redundant disks (RAID) or a mirroring mechanism such as "rsync". A proper choice of data compression method not only reduces the storage size but also improves I/O throughput. A DBMS is even applicable for quick searches of, and security around, the table data.
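
    As a back-of-envelope version of the rate check mentioned above (all figures are invented for illustration):

        # Sketch: compare the data production rate to the store rate.
        channels = 64
        sample_rate_hz = 1_000_000        # 1 MS/s per channel (hypothetical)
        bytes_per_sample = 2

        production = channels * sample_rate_hz * bytes_per_sample  # bytes/s
        store_bandwidth = 200 * 1024**2                            # 200 MiB/s RAID

        print(f"produce {production / 1e6:.0f} MB/s, "
              f"store {store_bandwidth / 1e6:.0f} MB/s")
        assert production <= store_bandwidth, "need buffering or compression"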

  3. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    NASA Astrophysics Data System (ADS)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for the management and processing of such datasets: binary large objects (BLOBs) in database systems versus files in the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as for bandwidth and response-time performance. This requires partitioning larger files into sets of smaller files, and is accompanied by the concomitant requirement of managing large numbers of files. Storing these sub-files as BLOBs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these BLOBs and would run in parallel across the set of nodes in the cluster. On the other hand, such an implementation brings both storage overheads and constraints, as well as software licensing dependencies. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin, hash, or range partitioning methods. Each has different characteristics in terms of spatial locality of data and the resultant degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of the different approaches to deal effectively with such hotspots.
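
    The three partitioning methods named above can be sketched for tiles identified by an integer key; the node count and range boundaries are arbitrary illustrative choices.

        # Sketch: round-robin, hash, and range partitioning of tile keys.
        NODES = 4

        def round_robin(key: int) -> int:
            return key % NODES              # even spread, ignores locality

        def hash_partition(key: int) -> int:
            return hash(key) % NODES        # even spread, scatters hotspots

        def range_partition(key: int, bounds=(100, 200, 300)) -> int:
            for node, upper in enumerate(bounds):
                if key < upper:
                    return node             # preserves spatial locality...
            return NODES - 1                # ...but a hotspot can pile onto one node

        for k in (42, 150, 275, 999):
            print(k, round_robin(k), hash_partition(k), range_partition(k))

    Range partitioning keeps spatially adjacent tiles together, which helps subsetting, while round-robin and hashing spread a hotspot across all nodes: exactly the trade-off at issue in the hotspot discussion.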

  4. Disks, Young Stars, and Radio Waves: The Quest for Forming Planetary Systems

    NASA Astrophysics Data System (ADS)

    Chandler, C. J.; Shepherd, D. S.

    2008-08-01

    Kant and Laplace suggested the Solar System formed from a rotating gaseous disk in the 18th century, but convincing evidence that young stars are indeed surrounded by such disks was not presented for another 200 years. As we move into the 21st century the emphasis is now on disk formation, the role of disks in star formation, and on how planets form in those disks. Radio wavelengths play a key role in these studies, currently providing some of the highest-spatial-resolution images of disks, along with evidence of the growth of dust grains into planetesimals. The future capabilities of EVLA and ALMA provide extremely exciting prospects for resolving disk structure and kinematics, studying disk chemistry, directly detecting protoplanets, and imaging disks in formation.

  5. HST/WFC3 Imaging and Multi-Wavelength Characterization of Edge-On Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Gould, Carolina; Williams, Hayley; Duchene, Gaspard

    2017-10-01

    In recent years, the imaging detail in resolved protoplanetary disks has vastly improved and created a critical mass of objects whose properties can be surveyed and compared, leading us to better understandings of system formation. In particular, disks with an edge-on inclination offer an important perspective: not only is imaging convenient because the disk blocks the stellar light, but an edge-on disk also provides an otherwise impossible opportunity to observe the vertical dust structure of a protoplanetary system. In this contribution, we compare seven HST-imaged edge-on protoplanetary disks in the Taurus, Chamaeleon and Ophiuchus star-forming regions, noting the variation in morphology (settled vs flared), the dust properties revealed by multiwavelength color mapping, brightness variability over multi-year timescales, and the presence in some systems of a blue-colored atmosphere far above the disk midplane. By using a uniform approach for their analysis, together these seven edge-on protoplanetary disk systems can give insights on evolutionary processes and inform future projects that explore this critical stage of planet formation.

  6. On the role of disks in the formation of stellar systems: A numerical parameter study of rapid accretion

    DOE PAGES

    Kratter, Kaitlin M.; Matzner, Christopher D.; Krumholz, Mark R.; ...

    2009-12-23

    We study rapidly accreting, gravitationally unstable disks with a series of idealized global, numerical experiments using the code ORION. Our numerical parameter study focuses on protostellar disks, showing that one can predict disk behavior and the multiplicity of the accreting star system as a function of two dimensionless parameters which compare the infall rate to the disk sound speed and orbital period. Although gravitational instabilities become strong, we find that fragmentation into binary or multiple systems occurs only when material falls in several times more rapidly than the canonical isothermal limit. The disk-to-star accretion rate is proportional to the infall rate and governed by gravitational torques generated by low-m spiral modes. We also confirm the existence of a maximum stable disk mass: disks that exceed ~50% of the total system mass are subject to fragmentation and the subsequent formation of binary companions.

  7. Modifying the Standard Disk Model for the Ultraviolet Spectral Analysis of Disk-dominated Cataclysmic Variables. I. The Novalikes MV Lyrae, BZ Camelopardalis, and V592 Cassiopeiae.

    PubMed

    Godon, Patrick; Sion, Edward M; Balman, Şölen; Blair, William P

    2017-09-01

    The standard disk is often inadequate to model disk-dominated cataclysmic variables (CVs) and generates a spectrum that is bluer than the observed UV spectra. X-ray observations of these systems reveal an optically thin boundary layer (BL) expected to appear as an inner hole in the disk. Consequently, we truncate the inner disk. However, instead of removing the inner disk, we impose the no-shear boundary condition at the truncation radius, thereby lowering the disk temperature and generating a spectrum that better fits the UV data. With our modified disk, we analyze the archival UV spectra of three novalikes that cannot be fitted with standard disks. For the VY Scl systems MV Lyr and BZ Cam, we fit a hot inflated white dwarf (WD) with a cold modified disk (Ṁ ~ a few × 10⁻⁹ M⊙ yr⁻¹). For V592 Cas, the slightly modified disk (Ṁ ~ 6 × 10⁻⁹ M⊙ yr⁻¹) completely dominates the UV. These results are consistent with Swift X-ray observations of these systems, revealing BLs merged with ADAF-like flows and/or hot coronae, where the advection of energy is likely launching an outflow and heating the WD, thereby explaining the high WD temperature in VY Scl systems. This is further supported by the fact that the X-ray hardness ratio increases with the shallowness of the UV slope in a small CV sample we examine. Furthermore, for 105 disk-dominated systems, the UV slope of the International Ultraviolet Explorer spectra decreases in the same order as the ratio of the X-ray flux to optical/UV flux: from SU UMa's, to U Gem's, Z Cam's, UX UMa's, and VY Scl's.

  8. Millimeter Studies of Nearby Debris Disks

    NASA Astrophysics Data System (ADS)

    MacGregor, Meredith Ann

    2017-03-01

    At least 20% of nearby main sequence stars are known to be surrounded by disks of dusty material resulting from the collisional erosion of planetesimals, similar to asteroids and comets in our own Solar System. The material in these ‘debris disks’ is directly linked to the larger bodies, like planets, in the system through collisions and gravitational perturbations. Observations at millimeter wavelengths are especially critical to our understanding of these systems, since the large grains that dominate emission at these long wavelengths reliably trace the underlying planetesimal distribution. In this thesis, I have used state-of-the-art observations at millimeter wavelengths to address three related questions concerning debris disks and planetary system evolution: 1) How are wide-separation, substellar companions formed? 2) What is the physical nature of the collisional process in debris disks? And, 3) Can the structure and morphology of debris disks provide probes of planet formation and subsequent dynamical evolution? Using ALMA observations of GQ Lup, a pre-main sequence system with a wide-separation, substellar companion, I have placed constraints on the mass of a circumplanetary disk around the companion, informing formation scenarios for this and other similar systems (Chapter 2). I obtained observations of a sample of fifteen debris disks with both the VLA and ATCA at centimeter wavelengths, and robustly determined the millimeter spectral index of each disk and thus the slope of the grain size distribution, providing the first observational test of collision models of debris disks (Chapter 3). By applying an MCMC modeling framework to resolved millimeter observations with ALMA and SMA, I have placed the first constraints on the position, width, surface density gradient, and any asymmetric structure of the AU Mic, HD 15115, Epsilon Eridani, Tau Ceti, and Fomalhaut debris disks (Chapters 4–8). These observations of individual systems hint at trends in disk structure and dynamics, which can be explored further with a comparative study of a sample of the eight brightest debris disks around Sun-like stars within 20 pc (Chapter 9). This body of work has yielded the first resolved images of notable debris disks at millimeter wavelengths, and complements other ground- and space-based observations by providing constraints on these systems with uniquely high angular resolution and wavelength coverage. Together these results provide a foundation to investigate the dynamical evolution of planetary systems through multi-wavelength observations of debris disks.

  9. Materials accounting system for an IBM PC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearse, R.C.; Thomas, R.J.; Henslee, S.P.

    1986-01-01

    We have adapted the Los Alamos MASS accounting system for use on an IBM PC/AT at the Fuels Manufacturing Facility (FMF) at Argonne National Laboratory-West (ANL-WEST) in Idaho Falls, Idaho. The cost of hardware and proprietary software was less than $10,000 per station. The system consists of three stations between which accounting information is transferred using floppy disks accompanying special nuclear material shipments. The programs were implemented in dBASE III and were compiled using the proprietary software CLIPPER. Modifications to the inventory can be posted in just a few minutes, and operator/computer interaction is nearly instantaneous. After the records are built by the user, it takes 4 to 5 seconds to post the results to the database files. A version of this system was specially adapted and is currently in use at the FMF facility at Argonne National Laboratory in Idaho Falls. Initial satisfaction is adequate, and software and hardware problems are minimal.

  10. From stars to dust: looking into a circumstellar disk through chondritic meteorites.

    PubMed

    Connolly, Harold C

    2005-01-07

    One of the most fundamental questions in planetary science is, How did the solar system form? In this special issue, astronomical observations and theories constraining circumstellar disks, their lifetimes, and the formation of planetary to subplanetary objects are reviewed. At present, it is difficult to observe what is happening within disks and to determine if another disk environment is comparable to the early solar system disk environment (called the protoplanetary disk). Fortunately, we have chondritic meteorites, which provide a record of the processes that operated and materials present within the protoplanetary disk.

  11. OPMILL - MICRO COMPUTER PROGRAMMING ENVIRONMENT FOR CNC MILLING MACHINES THREE AXIS EQUATION PLOTTING CAPABILITIES

    NASA Technical Reports Server (NTRS)

    Ray, R. B.

    1994-01-01

    OPMILL is a computer operating system for a Kearney and Trecker milling machine that provides a fast and easy way to program machine part manufacture with an IBM compatible PC. The program gives the machinist an "equation plotter" feature which plots any set of equations that define axis moves (up to three axes simultaneously) and converts those equations to a machine milling program that will move a cutter along a defined path. Other supported functions include: drill with peck, bolt circle, tap, mill arc, quarter circle, circle, circle 2 pass, frame, frame 2 pass, rotary frame, pocket, loop and repeat, and copy blocks. The system includes a tool manager that can handle up to 25 tools and automatically adjusts tool length for each tool. It will display all tool information and stop the milling machine at the appropriate time. Information for the program is entered via a series of menus and compiled to the Kearney and Trecker format. The program can then be loaded into the milling machine, the tool path graphically displayed, and tool change information or the program in Kearney and Trecker format viewed. The program has a complete file handling utility that allows the user to load the program into memory from the hard disk, save the program to the disk with comments, view directories, merge a program on the disk with one in memory, save a portion of a program in memory, and change directories. OPMILL was developed on an IBM PS/2 running DOS 3.3 with 1 MB of RAM. OPMILL was written for an IBM PC or compatible 8088 or 80286 machine connected via an RS-232 port to a Kearney and Trecker Data Mill 700/C Control milling machine. It requires a "D:" drive (fixed-disk or virtual), a browse or text display utility, and an EGA or better display. Users wishing to modify and recompile the source code will also need Turbo BASIC, Turbo C, and Crescent Software's QuickPak for Turbo BASIC. IBM PC and IBM PS/2 are registered trademarks of International Business Machines. Turbo BASIC and Turbo C are trademarks of Borland International.
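
    The "equation plotter" idea, sampling user-supplied axis equations and emitting straight-line moves, can be sketched as follows; the G-code-like G01 output and the helix example are generic illustrations, not the Kearney and Trecker format that OPMILL actually compiles to.

        # Sketch: sample three axis equations and emit line moves.
        import math

        def equations_to_moves(fx, fy, fz, t0=0.0, t1=2 * math.pi, steps=100):
            moves = []
            for i in range(steps + 1):
                t = t0 + (t1 - t0) * i / steps
                moves.append(f"G01 X{fx(t):.3f} Y{fy(t):.3f} Z{fz(t):.3f}")
            return moves

        # Example: a helical path (circle in XY, linear plunge in Z).
        moves = equations_to_moves(lambda t: 10 * math.cos(t),
                                   lambda t: 10 * math.sin(t),
                                   lambda t: -0.5 * t)
        print("\n".join(moves[:3]))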

  12. ALMA continuum observations of the protoplanetary disk AS 209. Evidence of multiple gaps opened by a single planet

    NASA Astrophysics Data System (ADS)

    Fedele, D.; Tazzari, M.; Booth, R.; Testi, L.; Clarke, C. J.; Pascucci, I.; Kospal, A.; Semenov, D.; Bruderer, S.; Henning, Th.; Teague, R.

    2018-02-01

    This paper presents new high angular resolution ALMA 1.3 mm dust continuum observations of the protoplanetary system AS 209 in the Ophiuchus star forming region. The dust continuum emission is characterized by a main central core and two prominent rings at r = 75 au and r = 130 au, separated by two gaps at r = 62 au and r = 103 au. The two gaps have different widths and depths, with the inner one being narrower and shallower. We determined the surface density of the millimeter dust grains using the 3D radiative transfer disk code DALI. According to our fiducial model, the inner gap is partially filled with millimeter grains while the outer gap is largely devoid of dust. The inferred surface density is compared to 3D hydrodynamical simulations (FARGO-3D) of planet-disk interaction. The outer dust gap is consistent with the presence of a giant planet (Mplanet ≈ 0.7 MSaturn); the planet is responsible for the gap opening and for the pile-up of dust at the outer edge of the planet orbit. The simulations also show that the same planet could be the origin of the inner gap at r = 62 au. The relative position of the two dust gaps is close to the 2:1 resonance and we have investigated the possibility of a second planet inside the inner gap. The resulting surface density (including the location, width and depth of the two dust gaps) is in agreement with the observations. The properties of the inner gap pose a strong constraint on the mass of the inner planet (Mplanet < 0.1 MJ). In both scenarios (single or pair of planets), the hydrodynamical simulations suggest a very low disk viscosity (α < 10⁻⁴). Given the young age of the system (0.5-1 Myr), this result implies that the formation of giant planets occurs on a timescale of ≲1 Myr. The reduced image (FITS file) is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/610/A24

  13. Millimeter observations of the disk around GW Orionis

    NASA Astrophysics Data System (ADS)

    Fang, M.; Sicilia-Aguilar, A.; Wilner, D.; Wang, Y.; Roccatagliata, V.; Fedele, D.; Wang, J. Z.

    2017-07-01

    The GW Ori system is a pre-main sequence triple system (GW Ori A/B/C) with companions (GW Ori B/C) at 1 AU and 8 AU, respectively, from the primary (GW Ori A). The primary of the system has a mass of 3.9 M⊙, but shows a spectral type of G8. Thus, GW Ori A could be a precursor of a B star, but it is still at an earlier evolutionary stage than Herbig Be stars. GW Ori provides an ideal target for experiments and observations (being a "blown-up" solar system with a very massive sun and at least two upscaled planets). We present the first spatially resolved millimeter interferometric observations of the disk around the triple pre-main sequence system GW Ori, obtained with the Submillimeter Array, both in continuum and in the 12CO J = 2-1, 13CO J = 2-1, and C18O J = 2-1 lines. These new data reveal a huge, massive, and bright disk in the GW Ori system. The dust continuum emission suggests a disk radius of around 400 AU, but the 12CO J = 2-1 emission shows a much more extended disk with a size around 1300 AU. Owing to the spatial resolution (~1''), we cannot detect the gap in the disk that is inferred from spectral energy distribution (SED) modeling. We characterize the dust and gas properties in the disk by comparing the observations with the predictions from disk models with various parameters calculated with the Monte Carlo radiative transfer code RADMC-3D. The disk mass is around 0.12 M⊙, and the disk inclination with respect to the line of sight is around 35°. The kinematics in the disk traced by the CO line emission strongly suggest that the circumstellar material in the disk is in Keplerian rotation around GW Ori. Tentatively, substantial C18O depletion in the gas phase is required to explain the characteristics of the line emission from the disk.

  14. 78 FR 44054 - Wage Methodology for the Temporary Non-Agricultural Employment H-2B Program; Proposed Delay of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-23

    ... will provide you with appropriate aids such as readers or print magnifiers. The Department will make copies of the notice available, upon request, in large print and as an electronic file on computer disk.... v. Sec'y of Labor, 713 F.3d 1080 (11th Cir. 2013) (holding that the Department of Labor lacks...

  15. Sharp Eccentric Rings in Planetless Hydrodynamical Models of Debris Disks

    NASA Technical Reports Server (NTRS)

    Lyra, W.; Kuchner, M. J.

    2013-01-01

    Exoplanets are often associated with disks of dust and debris, analogs of the Kuiper Belt in our solar system. These "debris disks" show a variety of non-trivial structures attributed to planetary perturbations and utilized to constrain the properties of the planets. However, analyses of these systems have largely ignored the fact that, increasingly, debris disks are found to contain small quantities of gas, a component all debris disks should contain at some level. Several debris disks have been measured with a dust-to-gas ratio around unity where the effect of hydrodynamics on the structure of the disk cannot be ignored. Here we report that dust-gas interactions can produce some of the key patterns seen in debris disks that were previously attributed to planets. Through linear and nonlinear modeling of the hydrodynamical problem, we find that a robust clumping instability exists in this configuration, organizing the dust into narrow, eccentric rings, similar to the Fomalhaut debris disk. The hypothesis that these disks might contain planets, though thrilling, is not necessarily required to explain these systems.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneses, Esteban; Ni, Xiang; Jones, Terry R

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
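
    The cross-correlation step can be pictured as an interval join between failure timestamps and job lifetimes: a failure affects a job when its timestamp falls inside the job's run interval. The records below are invented stand-ins, not Titan log data.

        # Sketch: match failures to the jobs that were running when they hit.
        failures = [{"t": 120, "kind": "GPU_XID"}, {"t": 480, "kind": "LUSTRE"}]
        jobs = [{"id": "j1", "start": 100, "end": 300},
                {"id": "j2", "start": 350, "end": 500}]

        def affected_jobs(failures, jobs):
            for f in failures:
                for j in jobs:
                    if j["start"] <= f["t"] <= j["end"]:
                        yield f["kind"], j["id"]

        print(list(affected_jobs(failures, jobs)))
        # -> [('GPU_XID', 'j1'), ('LUSTRE', 'j2')]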

  17. You’re Cut Off: HD and MHD Simulations of Truncated Accretion Disks

    NASA Astrophysics Data System (ADS)

    Hogg, J. Drew; Reynolds, Christopher S.

    2017-01-01

    Truncated accretion disks are commonly invoked to explain the spectro-temporal variability from accreting black holes in both small systems, i.e. state transitions in galactic black hole binaries (GBHBs), and large systems, i.e. low-luminosity active galactic nuclei (LLAGNs). In the canonical truncated disk model of moderately low accretion rate systems, gas in the inner region of the accretion disk occupies a hot, radiatively inefficient phase, which leads to a geometrically thick disk, while the gas in the outer region occupies a cooler, radiatively efficient phase that resides in the standard geometrically thin disk. Observationally, there is strong empirical evidence to support this phenomenological model, but a detailed understanding of the disk behavior is lacking. We present well-resolved hydrodynamic (HD) and magnetohydrodynamic (MHD) numerical models that use a toy cooling prescription to produce the first sustained truncated accretion disks. Using these simulations, we study the dynamics, angular momentum transport, and energetics of a truncated disk in the two different regimes. We compare the behaviors of the HD and MHD disks and emphasize the need to incorporate a full MHD treatment in any discussion of truncated accretion disk evolution.

  18. ALMA Observations of a Misaligned Binary Protoplanetary Disk System in Orion

    NASA Astrophysics Data System (ADS)

    Williams, Jonathan P.; Mann, Rita K.; Di Francesco, James; Andrews, Sean M.; Hughes, A. Meredith; Ricci, Luca; Bally, John; Johnstone, Doug; Matthews, Brenda

    2014-12-01

    We present Atacama Large Millimeter/Submillimeter Array (ALMA) observations of a wide binary system in Orion, with projected separation 440 AU, in which we detect submillimeter emission from the protoplanetary disks around each star. Both disks appear moderately massive and have strong line emission in CO 3-2, HCO+ 4-3, and HCN 3-2. In addition, CS 7-6 is detected in one disk. The line-to-continuum ratios are similar for the two disks in each of the lines. From the resolved velocity gradients across each disk, we constrain the masses of the central stars, and show consistency with optical-infrared spectroscopy, both indicative of a high mass ratio ~9. The small difference between the systemic velocities indicates that the binary orbital plane is close to face-on. The angle between the projected disk rotation axes is very high, ~72°, showing that the system did not form from a single massive disk or a rigidly rotating cloud core. This finding, which adds to related evidence from disk geometries in other systems, protostellar outflows, stellar rotation, and similar recent ALMA results, demonstrates that turbulence or dynamical interactions act on small scales well below that of molecular cores during the early stages of star formation.

  19. Probing for Exoplanets Hiding in Dusty Debris Disks: Disk Imaging, Characterization, and Exploration with HST-STIS Multi-roll Coronagraphy

    NASA Technical Reports Server (NTRS)

    Schneider, Glenn; Grady, Carol A.; Hines, Dean C.; Stark, Christopher C.; Debes, John; Carson, Joe; Kuchner, Marc J.; Perrin, Marshall; Weinberger, Alycia; Wisniewski, John P.; hide

    2014-01-01

    Spatially resolved scattered-light images of circumstellar debris in exoplanetary systems constrain the physical properties and orbits of the dust particles in these systems. They also inform on co-orbiting (but unseen) planets, the systemic architectures, and forces perturbing the starlight-scattering circumstellar material. Using HST/STIS broadband optical coronagraphy, we have completed the observational phase of a program to study the spatial distribution of dust in a sample of ten circumstellar debris systems, and one "mature" protoplanetary disk, all with HST pedigree, using PSF-subtracted multi-roll coronagraphy. These observations probe stellocentric distances greater than or equal to 5 AU for the nearest systems, and simultaneously resolve disk substructures well beyond those corresponding to the giant planet and Kuiper belt regions within our own Solar System. They also disclose diffuse very low-surface brightness dust at larger stellocentric distances. Herein we present new results inclusive of fainter disks such as HD92945 (F_disk/F_star = 5 × 10⁻⁵), confirming, and better revealing, the existence of a narrow inner debris ring within a larger diffuse dust disk. Other disks with ring-like sub-structures and significant asymmetries and complex morphologies include: HD181327, for which we posit a spray of ejecta from a recent massive collision in an exo-Kuiper belt; HD61005, suggested to be interacting with the local ISM; and HD15115 and HD32297, discussed also in the context of putative environmental interactions. These disks, and HD15745, suggest that debris system evolution cannot be treated in isolation. For AU Mic's edge-on disk we find out-of-plane surface brightness asymmetries at greater than or equal to 5 AU that may implicate the existence of one or more planetary perturbers. Time-resolved images of the MP Mus proto-planetary disk provide spatially resolved temporal variability in the disk illumination. These and other new images from our HST/STIS GO/12228 program enable direct inter-comparison of the architectures of these exoplanetary debris systems in the context of our own Solar System.

  20. Probing for exoplanets hiding in dusty debris disks: Disk imaging, characterization, and exploration with HST/STIS multi-roll coronagraphy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Glenn; Hinz, Phillip M.; Grady, Carol A.

    Spatially resolved scattered-light images of circumstellar debris in exoplanetary systems constrain the physical properties and orbits of the dust particles in these systems. They also inform on co-orbiting (but unseen) planets, the systemic architectures, and forces perturbing the starlight-scattering circumstellar material. Using Hubble Space Telescope (HST)/Space Telescope Imaging Spectrograph (STIS) broadband optical coronagraphy, we have completed the observational phase of a program to study the spatial distribution of dust in a sample of 10 circumstellar debris systems and 1 'mature' protoplanetary disk, all with HST pedigree, using point-spread-function-subtracted multi-roll coronagraphy. These observations probe stellocentric distances ≥5 AU for the nearest systems, and simultaneously resolve disk substructures well beyond those corresponding to the giant planet and Kuiper Belt regions within our own solar system. They also disclose diffuse very low-surface-brightness dust at larger stellocentric distances. Herein we present new results inclusive of fainter disks such as HD 92945 (F_disk/F_star = 5 × 10⁻⁵), confirming, and better revealing, the existence of a narrow inner debris ring within a larger diffuse dust disk. Other disks with ring-like substructures and significant asymmetries and complex morphologies include HD 181327, for which we posit a spray of ejecta from a recent massive collision in an exo-Kuiper Belt; HD 61005, suggested to be interacting with the local interstellar medium; and HD 15115 and HD 32297, also discussed in the context of putative environmental interactions. These disks and HD 15745 suggest that debris system evolution cannot be treated in isolation. For AU Mic's edge-on disk, we find out-of-plane surface brightness asymmetries at ≥5 AU that may implicate the existence of one or more planetary perturbers. Time-resolved images of the MP Mus protoplanetary disk provide spatially resolved temporal variability in the disk illumination. These and other new images from our HST/STIS GO/12228 program enable direct inter-comparison of the architectures of these exoplanetary debris systems in the context of our own solar system.

  1. Optoelectronic associative recall using motionless-head parallel readout optical disk

    NASA Astrophysics Data System (ADS)

    Marchand, P. J.; Krishnamoorthy, A. V.; Ambs, P.; Esener, S. C.

    1990-12-01

    High data rates, low retrieval times, and simple implementation are presently shown to be obtainable by means of a motionless-head 2D parallel-readout system for optical disks. Since the optical disk obviates mechanical head motions for access, focusing, and tracking, addressing is performed exclusively through the disk's rotation. Attention is given to a high-performance associative memory system configuration which employs a parallel readout disk.

  2. Planet Formation in Binary Star Systems

    NASA Astrophysics Data System (ADS)

    Martin, Rebecca

    About half of observed exoplanets are estimated to be in binary systems. Understanding planet formation and evolution in binaries is therefore essential for explaining observed exoplanet properties. Recently, we discovered that a highly misaligned circumstellar disk in a binary system can undergo global Kozai-Lidov (KL) oscillations of the disk inclination and eccentricity. These oscillations likely have a significant impact on the formation and orbital evolution of planets in binary star systems. Planet formation by core accretion cannot operate during KL oscillations of the disk. First, we propose to consider the process of disk mass transfer between the binary members. Second, we will investigate the possibility of planet formation by disk fragmentation; for a narrow range of parameters, disk self-gravity can weaken or suppress the oscillations during the early disk evolution, when the disk mass is relatively high. Third, we will investigate the evolution of a planet whose orbit is initially aligned with respect to the disk, but misaligned with respect to the orbit of the binary. We will study how these processes relate to observations of star-spin and planet-orbit misalignment and to observations of planets that appear to be undergoing KL oscillations. Finally, we will analyze the evolution of misaligned multi-planet systems. This theoretical work will involve a combination of analytic and numerical techniques. The aim of this research is to shed some light on the formation of planets in binary star systems and to contribute to NASA's goal of understanding the origins of exoplanetary systems.

  3. The Fermilab Accelerator control system

    NASA Astrophysics Data System (ADS)

    Bogert, Dixon

    1986-06-01

    With the advent of the Tevatron, considerable upgrades have been made to the controls of all the Fermilab Accelerators. The current system is based on making as much data as possible available to many operators or end-users. Specifically, there are about 100 000 separate readings, settings, and status and control registers in the various machines, all of which can be accessed by seventeen consoles, some in the Main Control Room and others distributed throughout the complex. A "Host" computer network of approximately eighteen PDP-11/34's, seven PDP-11/44's, and three VAX-11/785's supports a distributed data acquisition system including Lockheed MAC-16's left from the original Main Ring and Booster instrumentation and upwards of 1000 Z80, Z8002, and M68000 microprocessors in dozens of configurations. Interaction of the various parts of the system is via a central data base stored on the disk of one of the VAXes. The primary computer-hardware communication is via CAMAC for the new Tevatron and Antiproton Source; certain subsystems, among them vacuum, refrigeration, and quench protection, reside in the distributed microprocessors and communicate via GAS, an in-house protocol. An important hardware feature is an accurate clock system making a large number of encoded "events" in the accelerator supercycle available for both hardware modules and computers. System software features include the ability to save the current state of the machine or any subsystem and later restore it or compare it with the state at another time, a general logging facility to keep track of specific variables over long periods of time, detection of "exception conditions" and the posting of alarms, and a central filesharing capability in which files on VAX disks are available for access by any of the "Host" processors.

  4. Biomass Production System (BPS) plant growth unit.

    PubMed

    Morrow, R C; Crabb, T M

    2000-01-01

    The Biomass Production System (BPS) was developed under the Small Business Innovative Research (SBIR) program to meet science, biotechnology and commercial plant growth needs in the Space Station era. The BPS is equivalent in size to a double middeck locker, but uses its own custom enclosure with a slide out structure to which internal components mount. The BPS contains four internal growth chambers, each with a growing volume of more than 4 liters. Each of the growth chambers has active nutrient delivery, and independent control of temperature, humidity, lighting, and CO2 set-points. Temperature control is achieved using a thermoelectric heat exchanger system. Humidity control is achieved using a heat exchanger with a porous interface which can both humidify and dehumidify. The control software utilizes fuzzy logic for nonlinear, coupled temperature and humidity control. The fluorescent lighting system can be dimmed to provide a range of light levels. CO2 levels are controlled by injecting pure CO2 to the system based on input from an infrared gas analyzer. The unit currently does not scrub CO2, but has been designed to accept scrubber cartridges. In addition to providing environmental control, a number of features are included to facilitate science. The BPS chambers are sealed to allow CO2 and water vapor exchange measurements. The plant chambers can be removed to allow manipulation or sampling of specimens, and each chamber has gas/fluid sample ports. A video camera is provided for each chamber, and frame-grabs and complete environmental data for all science and hardware system sensors are stored on an internal hard drive. Data files can also be transferred to 3.5-inch disks using the front panel disk drive.

  5. Biomass Production System (BPS) Plant Growth Unit

    NASA Astrophysics Data System (ADS)

    Morrow, R. C.; Crabb, T. M.

    The Biomass Production System (BPS) was developed under the Small Business Innovative Research (SBIR) program to meet science, biotechnology and commercial plant growth needs in the Space Station era. The BPS is equivalent in size to a double middeck locker, but uses its own custom enclosure with a slide out structure to which internal components mount. The BPS contains four internal growth chambers, each with a growing volume of more than 4 liters. Each of the growth chambers has active nutrient delivery, and independent control of temperature, humidity, lighting, and CO2 set-points. Temperature control is achieved using a thermoelectric heat exchanger system. Humidity control is achieved using a heat exchanger with a porous interface which can both humidify and dehumidify. The control software utilizes fuzzy logic for nonlinear, coupled temperature and humidity control. The fluorescent lighting system can be dimmed to provide a range of light levels. CO2 levels are controlled by injecting pure CO2 to the system based on input from an infrared gas analyzer. The unit currently does not scrub CO2, but has been designed to accept scrubber cartridges. In addition to providing environmental control, a number of features are included to facilitate science. The BPS chambers are sealed to allow CO2 and water vapor exchange measurements. The plant chambers can be removed to allow manipulation or sampling of specimens, and each chamber has gas/fluid sample ports. A video camera is provided for each chamber, and frame-grabs and complete environmental data for all science and hardware system sensors are stored on an internal hard drive. Data files can also be transferred to 3.5-inch disks using the front panel disk drive.

  6. A Triple Protostar System in L1448 IRS3B Formed via Fragmentation of a Gravitationally Unstable Disk

    NASA Astrophysics Data System (ADS)

    Tobin, John J.; Kratter, Kaitlin M.; Persson, Magnus; Looney, Leslie; Dunham, Michael; Segura-Cox, Dominique; Li, Zhi-Yun; Chandler, Claire J.; Sadavoy, Sarah; Harris, Robert J.; Melis, Carl; Perez, Laura M.

    2017-01-01

    Binary and multiple star systems are a frequent outcome of the star formation process; most stars form as part of a binary/multiple protostar system. A possible pathway to the formation of close (< 500 AU) binary/multiple star systems is fragmentation of a massive protostellar disk due to gravitational instability. We observed the triple protostar system L1448 IRS3B with ALMA at 1.3 mm in dust continuum and molecular lines to determine if this triple protostar system, where all companions are separated by < 200 AU, is likely to have formed via disk fragmentation. From the dust continuum emission, we find a massive, 0.39 solar-mass disk surrounding the three protostars with spiral structure. The disk is centered on two protostars that are separated by 61 AU and the third protostar is located in the outer disk at 183 AU. The tertiary companion is coincident with a spiral arm, and it is the brightest source of emission in the disk, surrounded by ~0.09 solar masses of disk material. Molecular line observations from 13CO and C18O confirm that the kinematic center of mass is coincident with the two central protostars and that the disk is consistent with being in Keplerian rotation; the combined mass of the two close protostars is ~1 solar mass. We demonstrate that the disk around L1448 IRS3B remains marginally unstable at radii between 150 AU and 320 AU, overlapping with the location of the tertiary protostar. This is consistent with models for a protostellar disk that has recently undergone gravitational instability, spawning the companion stars.

  7. Aerodynamic and torque characteristics of enclosed Co/counter rotating disks

    NASA Astrophysics Data System (ADS)

    Daniels, W. A.; Johnson, B. V.; Graber, D. J.

    1989-06-01

    Experiments were conducted to determine the aerodynamic and torque characteristics of adjacent rotating disks enclosed in a shroud, in order to obtain an extended data base for advanced turbine designs such as the counterrotating turbine. Torque measurements were obtained on both disks in the rotating frame of reference for corotating, counterrotating and one-rotating/one-static disk conditions. The disk models used in the experiments included disks with typical smooth turbine geometry, disks with bolts, disks with bolts and partial bolt covers, and flat disks. A windage diaphragm was installed at mid-cavity for some experiments. The experiments were conducted with various amounts of coolant throughflow injected into the disk cavity from the disk hub or from the disk OD with swirl. The experiments were conducted at disk tangential Reynolds numbers up to 1.6 × 10^7 with air as the working fluid. The results of this investigation indicated that the static shroud contributes a significant amount to the total friction within the disk system; the torque on counterrotating disks is essentially independent of coolant flow total rate, flow direction, and tangential Reynolds number over the range of conditions tested; and a static windage diaphragm reduces disk friction in counterrotating disk systems.

  8. Transitional Disks Associated with Intermediate-Mass Stars: Results of the SEEDS YSO Survey

    NASA Technical Reports Server (NTRS)

    Grady, C.; Fukagawa, M.; Maruta, Y.; Ohta, Y.; Wisniewski, J.; Hashimoto, J.; Okamoto, Y.; Momose, M.; Currie, T.; McElwain, M.; hide

    2014-01-01

    Protoplanetary disks are where planets form, grow, and migrate to produce the diversity of exoplanet systems we observe in mature systems. Disks where this process has advanced to the stage of gap opening, and in some cases central cavity formation, have been termed pre-transitional and transitional disks in the hope that they represent intermediate steps toward planetary system formation. Recent reviews have focussed on disks where the star is of solar or sub-solar mass. In contrast to the sub-millimeter where cleared central cavities predominate, at H-band some T Tauri star transitional disks resemble primordial disks in having no indication of clearing, some show a break in the radial surface brightness profile at the inner edge of the outer disk, while others have partially to fully cleared gaps or central cavities. Recently, the Meeus Group I Herbig stars, intermediate-mass PMS stars with IR spectral energy distributions often interpreted as flared disks, have been proposed to have transitional and pre-transitional disks similar to those associated with solar-mass PMS stars, based on thermal-IR imaging and sub-millimeter interferometry. We have investigated their appearance in scattered light as part of the Strategic Exploration of Exoplanets and Disks with Subaru (SEEDS), obtaining H-band polarimetric imagery of 10 intermediate-mass stars with Meeus Group I disks. Augmented by other disks with imagery in the literature, the sample is now sufficiently large to explore how these disks are similar to and differ from T Tauri star disks. The disk morphologies seen in the T Tauri disks are also found for the intermediate-mass star disks, but additional phenomena are found; a hallmark of these disks is a remarkable individuality and diversity which does not simply correlate with disk mass or stellar properties (including age): spiral arms in remnant envelopes, arms in the disk, asymmetrically and potentially variably shadowed outer disks, gaps, and one disk where only half of the disk is seen in scattered light at H. We will discuss our survey results in terms of spiral arm theory, dust trapping vortices, and systematic differences in the relative scale height of these disks compared to those around solar-mass stars. For the disks with spiral arms we discuss the planet-hosting potential, and limits on where giant planets can be located. We also discuss the implications for imaging with extreme adaptive optics instruments. Grady is supported under NSF AST 1008440 and through the NASA Origins of Solar Systems program on NNG13PB64P. JPW is supported under NSF AST 100314. In summary: 0) in marked contrast to protoplanetary disks, transitional disks exhibit a wide range of structural features; 1) arm visibility is correlated with relative scale height in the disk; 2) the outer portions of some transitional disks are asymmetrically, and possibly variably, shadowed; 3) we confirm the pre-transitional disk nature of Oph IRS 48, MWC 758, HD 169142, etc.

  9. Development of a set of equations for incorporating disk flexibility effects in rotordynamical analyses

    NASA Technical Reports Server (NTRS)

    Flowers, George T.; Ryan, Stephen G.

    1991-01-01

    Rotordynamical equations that account for disk flexibility are developed. These equations employ free-free rotor modes to model the rotor system. Only transverse vibrations of the disks are considered, with the shaft/disk system considered to be torsionally rigid. Second order elastic foreshortening effects that couple with the rotor speed to produce first order terms in the equations of motion are included. The approach developed in this study is readily adaptable for use in many of the codes that are currently used in rotordynamical simulations. The equations are similar to those used in standard rigid disk analyses but with additional terms that include the effects of disk flexibility. An example case is presented to demonstrate the use of the equations and to show the influence of disk flexibility on the rotordynamical behavior of a sample system.

  10. Integral processing in beyond-Hartree-Fock calculations

    NASA Technical Reports Server (NTRS)

    Taylor, P. R.

    1986-01-01

    The increasing rate at which improvements in processing capacity outstrip improvements in input/output performance of large computers has led to recent attempts to bypass generation of a disk-based integral file. The direct self-consistent field (SCF) method of Almlof and co-workers represents a very successful implementation of this approach. This paper is concerned with the extension of this general approach to configuration interaction (CI) and multiconfiguration-self-consistent field (MCSCF) calculations. After a discussion of the particular types of molecular orbital (MO) integrals for which -- at least for most current generation machines -- disk-based storage seems unavoidable, it is shown how all the necessary integrals can be obtained as matrix elements of Coulomb and exchange operators that can be calculated using a direct approach. Computational implementations of such a scheme are discussed.
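
    As a hedged illustration of the idea in standard notation (not the paper's exact equations): for a fixed pair of molecular orbitals k and l, a generalized Coulomb matrix can be assembled directly from atomic-orbital integrals as they are generated, and the required MO integrals then follow by two further index transformations:

      J^{kl}_{\mu\nu} = \sum_{\lambda\sigma} (\mu\nu|\lambda\sigma)\, C_{\lambda k} C_{\sigma l},
      \qquad
      (ij|kl) = \sum_{\mu\nu} C_{\mu i} C_{\nu j}\, J^{kl}_{\mu\nu}

    Because the AO integrals (μν|λσ) are consumed on the fly, no disk-based integral file is needed for this class of terms; exchange-operator matrix elements can be handled analogously.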

  11. Debris disks as signposts of terrestrial planet formation. II. Dependence of exoplanet architectures on giant planet and disk properties

    NASA Astrophysics Data System (ADS)

    Raymond, S. N.; Armitage, P. J.; Moro-Martín, A.; Booth, M.; Wyatt, M. C.; Armstrong, J. C.; Mandell, A. M.; Selsis, F.; West, A. A.

    2012-05-01

    We present models for the formation of terrestrial planets, and the collisional evolution of debris disks, in planetary systems that contain multiple marginally unstable gas giants. We previously showed that in such systems, the dynamics of the giant planets introduces a correlation between the presence of terrestrial planets and cold dust, i.e., debris disks, which is particularly pronounced at λ ~ 70 μm. Here we present new simulations that show that this connection is qualitatively robust to a range of parameters: the mass distribution of the giant planets, the width and mass distribution of the outer planetesimal disk, and the presence of gas in the disk when the giant planets become unstable. We discuss how variations in these parameters affect the evolution. We find that systems with equal-mass giant planets undergo the most violent instabilities, and that these destroy both terrestrial planets and the outer planetesimal disks that produce debris disks. In contrast, systems with low-mass giant planets efficiently produce both terrestrial planets and debris disks. A large fraction of systems with low-mass (M ≲ 30 M⊕) outermost giant planets have final planetary separations that, scaled to the planets' masses, are as large as or larger than the Saturn-Uranus and Uranus-Neptune separations in the solar system. We find that the gaps between these planets are not only dynamically stable to test particles, but are frequently populated by planetesimals. The possibility of planetesimal belts between outer giant planets should be taken into account when interpreting debris disk SEDs. In addition, the presence of ~ Earth-mass "seeds" in outer planetesimal disks causes the disks to radially spread to colder temperatures, and leads to a slow depletion of the outer planetesimal disk from the inside out. We argue that this may explain the very low frequency of >1 Gyr-old solar-type stars with observed 24 μm excesses. Our simulations do not sample the full range of plausible initial conditions for planetary systems. However, among the configurations explored, the best candidates for hosting terrestrial planets at ~1 AU are stars older than 0.1-1 Gyr with bright debris disks at 70 μm but with no currently-known giant planets. These systems combine evidence for the presence of ample rocky building blocks, with giant planet properties that are least likely to undergo destructive dynamical evolution. Thus, we predict two correlations that should be detected by upcoming surveys: an anti-correlation between debris disks and eccentric giant planets and a positive correlation between debris disks and terrestrial planets. Three movies associated with Figs. 1, 3, and 7 are available in electronic form at http://www.aanda.org

  12. Modifying the Standard Disk Model for the Ultraviolet Spectral Analysis of Disk-dominated Cataclysmic Variables. I. The Novalikes MV Lyrae, BZ Camelopardalis, and V592 Cassiopeiae

    NASA Astrophysics Data System (ADS)

    Godon, Patrick; Sion, Edward M.; Balman, Şölen; Blair, William P.

    2017-09-01

    The standard disk is often inadequate to model disk-dominated cataclysmic variables (CVs) and generates a spectrum that is bluer than the observed UV spectra. X-ray observations of these systems reveal an optically thin boundary layer (BL) expected to appear as an inner hole in the disk. Consequently, we truncate the inner disk. However, instead of removing the inner disk, we impose the no-shear boundary condition at the truncation radius, thereby lowering the disk temperature and generating a spectrum that better fits the UV data. With our modified disk, we analyze the archival UV spectra of three novalikes that cannot be fitted with standard disks. For the VY Scl systems MV Lyr and BZ Cam, we fit a hot inflated white dwarf (WD) with a cold modified disk (Ṁ ~ a few × 10^-9 M⊙ yr^-1). For V592 Cas, the slightly modified disk (Ṁ ~ 6 × 10^-9 M⊙ yr^-1) completely dominates the UV. These results are consistent with Swift X-ray observations of these systems, revealing BLs merged with ADAF-like flows and/or hot coronae, where the advection of energy is likely launching an outflow and heating the WD, thereby explaining the high WD temperature in VY Scl systems. This is further supported by the fact that the X-ray hardness ratio increases with the shallowness of the UV slope in a small CV sample we examine. Furthermore, for 105 disk-dominated systems, the UV slope of the International Ultraviolet Explorer spectra decreases in the same order as the ratio of the X-ray flux to optical/UV flux: from SU UMa’s, to U Gem’s, Z Cam’s, UX UMa’s, and VY Scl’s.

  13. A near-infrared imaging survey of interacting galaxies - The disk-disk merger candidates subset

    NASA Technical Reports Server (NTRS)

    Stanford, S. A.; Bushouse, H. A.

    1991-01-01

    Near-infrared imaging obtained for systems believed to be advanced disk-disk mergers is presented and discussed. These systems were chosen from a sample of approximately 170 objects from the Arp Atlas of Peculiar Galaxies which have been imaged in the JHK bands as part of an investigation into the stellar component of interacting galaxies. Of the eight remnants which show optical signs of a disk-disk merger, the near-infrared surface brightness profiles are well-fitted by an r^(1/4) law over all measured radii in four systems, and out to radii of about 3 kpc in three systems. These K band profiles indicate that most of the remnants in the sample either have finished or are in the process of relaxing into a mass distribution like that of normal elliptical galaxies.

  14. Free Vibration Analysis of a Spinning Flexible DISK-SPINDLE System Supported by Ball Bearing and Flexible Shaft Using the Finite Element Method and Substructure Synthesis

    NASA Astrophysics Data System (ADS)

    JANG, G. H.; LEE, S. H.; JUNG, M. S.

    2002-03-01

    Free vibration of a spinning flexible disk-spindle system supported by ball bearing and flexible shaft is analyzed by using Hamilton's principle, FEM and substructure synthesis. The spinning disk is described by using the Kirchhoff plate theory and von Karman non-linear strain. The rotating spindle and stationary shaft are modelled by Rayleigh beam and Euler beam respectively. Using Hamilton's principle and including the rigid body translation and tilting motion, partial differential equations of motion of the spinning flexible disk and spindle are derived consistently to satisfy the geometric compatibility in the internal boundary between substructures. FEM is used to discretize the derived governing equations, and substructure synthesis is introduced to assemble each component of the disk-spindle-bearing-shaft system. The developed method is applied to the spindle system of a computer hard disk drive with three disks, and modal testing is performed to verify the simulation results. The simulation result agrees very well with the experimental one. This research investigates critical design parameters in an HDD spindle system, i.e., the non-linearity of a spinning disk and the flexibility and boundary condition of a stationary shaft, to predict the free vibration characteristics accurately. The proposed method may be effectively applied to predict the vibration characteristics of a spinning flexible disk-spindle system supported by ball bearing and flexible shaft in the various forms of computer storage device, i.e., FDD, CD, HDD and DVD.

  15. The Evolution of a Planet-Forming Disk Artist Concept Animation

    NASA Image and Video Library

    2004-12-09

    This frame from an animation shows the evolution of a planet-forming disk around a star. Initially, the young disk is bright and thick with dust, providing raw materials for building planets. In the first 10 million years or so, gaps appear within the disk as newborn planets coalesce out of the dust, clearing out a path. In time, this planetary "debris disk" thins out as gravitational interactions with numerous planets slowly sweep away the dust. Steady pressure from the starlight and solar winds also blows out the dust. After a few billion years, only a thin ring remains in the outermost reaches of the system, a faint echo of the once-brilliant disk. Our own solar system has a similar debris disk -- a ring of comets called the Kuiper Belt. Leftover dust in the inner portion of the solar system is known as "zodiacal dust." Bright, young disks can be imaged directly by visible-light telescopes, such as NASA's Hubble Space Telescope. Older, fainter debris disks can be detected only by infrared telescopes like NASA's Spitzer Space Telescope, which sense the disks' dim heat. http://photojournal.jpl.nasa.gov/catalog/PIA07099

  16. Deciphering Debris Disk Structure with the Submillimeter Array

    NASA Astrophysics Data System (ADS)

    MacGregor, Meredith Ann

    2018-01-01

    More than 20% of nearby main sequence stars are surrounded by dusty disks continually replenished via the collisional erosion of planetesimals, larger bodies similar to asteroids and comets in our own Solar System. The material in these ‘debris disks’ is directly linked to the larger bodies such as planets in the system. As a result, the locations, morphologies, and physical properties of dust in these disks provide important probes of the processes of planet formation and subsequent dynamical evolution. Observations at millimeter wavelengths are especially critical to our understanding of these systems, since they are dominated by larger grains that do not travel far from their origin and therefore reliably trace the underlying planetesimal distribution. The Submillimeter Array (SMA) plays a key role in advancing our understanding of debris disks by providing sensitivity at the short baselines required to determine the structure of wide-field disks, such as the HR 8799 debris disk. Many of these wide-field disks are among the closest systems to us, and will serve as cornerstone templates for the interpretation of more distant, less accessible systems.

  17. Model input and output files for the simulation of time of arrival of landfill leachate at the water table, Municipal Solid Waste Landfill Facility, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso County, Texas

    USGS Publications Warehouse

    Abeyta, Cynthia G.; Frenzel, Peter F.

    1999-01-01

    This report contains listings of model input and output files for the simulation of the time of arrival of landfill leachate at the water table from the Municipal Solid Waste Landfill Facility (MSWLF), about 10 miles northeast of downtown El Paso, Texas. This simulation was done by the U.S. Geological Survey in cooperation with the U.S. Department of the Army, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso, Texas. The U.S. Environmental Protection Agency-developed Hydrologic Evaluation of Landfill Performance (HELP) and Multimedia Exposure Assessment (MULTIMED) computer models were used to simulate the production of leachate by a landfill and transport of landfill leachate to the water table. Model input data files used with and output files generated by the HELP and MULTIMED models are provided in ASCII format on a 3.5-inch 1.44-megabyte IBM-PC compatible floppy disk.

  18. VizieR Online Data Catalog: AGN torus models. SED library (Siebenmorgen+, 2015)

    NASA Astrophysics Data System (ADS)

    Siebenmorgen, R.; Heymann, F.; Efstathiou, A.

    2015-08-01

    There are 3600 ASCII table files in two-column format: the first column is the wavelength in microns, the second is the flux in Jy. SEDs are computed for AGNs at a distance of 50 Mpc and a luminosity of 10^11 L⊙. The file names encode the five basic model parameters: a) th: the viewing angle, corresponding to bins at 86, 80, 73, 67, 60, 52, 43, 33, and 19 degrees measured from the pole (z-axis); thx = th1, ..., th9. b) R: the inner radius of the dusty torus; R = 300, 514, 772, 1000, 1545 (in units of 10^15 cm). c) Vc: the cloud volume filling factor; Vc = 1.5, 7.7, 38.5, 77.7 (%). d) Ac: the optical depth (in V) of the individual clouds; Ac = 0, 4.5, 13.5, 45. e) Ad: the optical depth (in V) of the disk midplane; Ad = 0, 30, 100, 300, 1000. Example of the file notation RxxxxVcxxxAcxxxx_Adxxxx.thx: R1545Vc777Ac0135_Ad1000.th9 (2 data files).
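
    For readers scripting against the library, a minimal Python sketch (a hypothetical helper, not shipped with the catalog) of decoding the parameters packed into a file name such as R1545Vc777Ac0135_Ad1000.th9; the digit-to-value scaling (e.g., Vc777 meaning 77.7%) is inferred from the parameter grids listed above:

      import re

      # Hypothetical helper: decode the five model parameters encoded in a
      # SED-library file name such as "R1545Vc777Ac0135_Ad1000.th9".
      NAME_RE = re.compile(
          r"R(?P<R>\d+)Vc(?P<Vc>\d+)Ac(?P<Ac>\d+)_Ad(?P<Ad>\d+)\.th(?P<th>\d)"
      )

      # Viewing-angle bins (degrees from the pole) for th1..th9, as listed above.
      THETA_BINS = [86, 80, 73, 67, 60, 52, 43, 33, 19]

      def parse_sed_name(name: str) -> dict:
          m = NAME_RE.fullmatch(name)
          if m is None:
              raise ValueError(f"not a SED-library file name: {name!r}")
          return {
              "R_1e15cm": int(m.group("R")),          # inner torus radius, 10^15 cm
              "Vc_percent": int(m.group("Vc")) / 10,  # assumed: 777 encodes 77.7%
              "Ac": int(m.group("Ac")) / 10,          # assumed: 0135 encodes 13.5
              "Ad": int(m.group("Ad")),               # disk midplane optical depth
              "theta_deg": THETA_BINS[int(m.group("th")) - 1],
          }

      print(parse_sed_name("R1545Vc777Ac0135_Ad1000.th9"))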

  19. Analysis Report for Exascale Storage Requirements for Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, Thomas M.

    Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale Computing, resulting in data storage, networking, and infrastructure requirements increasing by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how these will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report is focused on the most concerning scaling issues with data storage systems as they relate to High Performance Computing, and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.

  20. Experiences with http/WebDAV protocols for data access in high throughput computing

    NASA Astrophysics Data System (ADS)

    Bernabeu, Gerard; Martinez, Francisco; Acción, Esther; Bria, Arnau; Caubet, Marc; Delfino, Manuel; Espinal, Xavier

    2011-12-01

    In the past, access to remote storage was considered to be at least one order of magnitude slower than local disk access. Improvements in network technology provide the alternative of using remote disks; such accesses can today reach throughput levels similar to, or exceeding, those of local disks. Common choices as access protocols in the WLCG collaboration are RFIO, [GSI]DCAP, GRIDFTP, XROOTD and NFS. The HTTP protocol is a promising alternative, as it is simple and lightweight. It also enables the use of standard technologies such as HTTP caching or load balancing, which can be used to improve service resilience and scalability or to boost performance for some use cases seen in HEP such as the "hot files". WebDAV extensions allow writing data, giving it enough functionality to work as a remote access protocol. This paper will show our experiences with the WebDAV door for dCache, in terms of functionality and performance, applied to some of the HEP workflows in the LHC Tier1 at PIC.
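
    As an illustration of why HTTP/WebDAV is attractive for data access, a minimal Python sketch using the requests library; the dCache door URL and password auth below are placeholders (production WLCG setups typically use X.509 proxies or tokens):

      import requests

      # Hypothetical WebDAV endpoint; real dCache doors differ per site.
      BASE = "https://dcache.example.org:2880/pnfs/example.org/data"
      AUTH = ("user", "password")  # placeholder; sites often use X.509/tokens

      # Write (WebDAV uses HTTP PUT to create or replace a file).
      with open("local.dat", "rb") as f:
          r = requests.put(f"{BASE}/remote.dat", data=f, auth=AUTH)
          r.raise_for_status()

      # Read (plain HTTP GET; Range headers allow partial reads of "hot files").
      r = requests.get(f"{BASE}/remote.dat", auth=AUTH,
                       headers={"Range": "bytes=0-1048575"})  # first 1 MiB
      r.raise_for_status()
      chunk = r.content

    Because this is plain HTTP, off-the-shelf caches and load balancers can be placed in front of the door with no client-side changes.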

  1. VizieR Online Data Catalog: VVV Survey RR Lyr stars in Southern Galactic plane (Minniti+, 2017)

    NASA Astrophysics Data System (ADS)

    Minniti, D.; Dekany, I.; Majaess, D.; Palma, T.; Pullen, J.; Rejkuba, M.; Alonso-Garcia, J.; Catelan, M.; Contreras Ramos, R.; Gonzalez, O. A.; Hempel, M.; Irwin, M.; Lucas, P. W.; Saito, R. K.; Tissera, P.; Valenti, E.; Zoccali, M.

    2017-08-01

    The NIR VISTA Variables in the Via Lactea (VVV) Survey observations were acquired with the VIRCAM camera at the VISTA 4.1m telescope at ESO Paranal Observatory. In the disk fields typically 70 epochs of observations were acquired in the Ks-band between the years 2010 and 2015, in addition to complementary single-epoch observations in the ZYJH bands. The 16 NIR detectors of VIRCAM produce an image of 11.6' x 11.6' with a pixel scale of 0.34''/pixel. The deep multi-epoch Ks band photometry allows us to unveil faint variable sources deep in the disk regions of our Galaxy. A search for RRab stars was made throughout tiles d001 to d038 of the VVV survey's disk field, which is a thin slice through the Galactic plane spanning 295

  2. VizieR Online Data Catalog: Stars associated to Eagle Nebula (M16=NGC6611) (Guarcello+ 2010)

    NASA Astrophysics Data System (ADS)

    Guarcello, M. G.; Micela, G.; Peres, G.; Prisinzano, L.; Sciortino, S.

    2010-08-01

    This catalog contains coordinates and both optical and infrared photometry, plus useful tags, of the candidate stars associated with the Eagle Nebula (M16), both disk-less and disk-bearing, selected in Guarcello et al. 2010: "Chronology of star formation and disks evolution in the Eagle Nebula". The optical photometry in BVI bands comes from observations with WFI@ESO (Guarcello et al. 2007, Cat. J/A+A/462/245); JHK photometry has been obtained from the 2MASS/PSC (Bonatto et al. 2006A&A...445..567B, Guarcello et al. 2007, Cat. J/A+A/462/245) and UKIDSS/GPS catalogs (Guarcello et al., 2010, in prep.); IRAC data are from the GLIMPSE public survey (Indebetouw 2007ApJ...666..321I, Guarcello et al., 2009, Cat. J/A+A/496/453); X-ray data are from three observations with Chandra/ACIS-I (Linsky et al., 2007, Cat. J/ApJ/654/347, Guarcello et al., 2007, J/A+A/462/245, Guarcello et al. 2010, in prep.). (1 data file).

  3. Mineral Resources Data System (MRDS)

    USGS Publications Warehouse

    Mason, G.T.; Arndt, R.E.

    1996-01-01

    The U.S. Geological Survey (USGS) operates the Mineral Resources Data System (MRDS), a digital system that contained 111,955 records on Sept. 1, 1995. Records describe metallic and industrial commodity deposits, mines, prospects, and occurrences in the United States and selected other countries. These records have been created over the years by USGS commodity specialists and through cooperative agreements with geological surveys of U.S. States and other countries. This CD-ROM contains the complete MRDS database, several subsets of it, and software to allow data retrieval and display. Data retrievals are made by using GSSEARCH, a program that is included on this CD-ROM. Retrievals are made by specifying fields or any combination of the fields that provide information on deposit name, location, commodity, deposit model type, geology, mineral production, reserves, and references. A tutorial is included. Retrieved records may be printed or written to a hard disk file in four different formats: ASCII, fixed, comma-delimited, and dBASE-compatible.
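
    For illustration only (GSSEARCH itself is a DOS-era program with its own query interface), a short Python sketch of the same idea: retrieve records by specifying a field value and write the hits to a disk file in one of the listed formats, here comma-delimited. The record fields and values are invented:

      import csv

      # Invented records standing in for MRDS entries; field names are examples.
      records = [
          {"deposit_name": "Example Mine", "state": "NV", "commodity": "Au"},
          {"deposit_name": "Sample Prospect", "state": "AZ", "commodity": "Cu"},
      ]

      # "Retrieval" by specifying a field value, here commodity == "Au".
      hits = [r for r in records if r["commodity"] == "Au"]

      # Write the hits to disk in comma-delimited format.
      with open("mrds_hits.csv", "w", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=["deposit_name", "state", "commodity"])
          writer.writeheader()
          writer.writerows(hits)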

  4. Observational studies of the clearing phase in proto-planetary disk systems

    NASA Technical Reports Server (NTRS)

    Grady, Carol A.

    1994-01-01

    A summary of the work completed during the first year of a 5 year program to observationally study the clearing phase of proto-planetary disks is presented. Analysis of archival and current IUE data, together with supporting optical observations has resulted in the identification of 6 new proto-planetary disk systems associated with Herbig Ae/Be stars, the evolutionary precursors of the beta Pictoris system. These systems exhibit large amplitude light and optical color variations which enable us to identify additional systems which are viewed through their circumstellar disks including a number of classical T Tauri stars. On-going IUE observations of Herbig Ae/Be and T Tauri stars with this orientation have enabled us to detect bipolar emission plausibly associated with disk winds. Preliminary circumstellar extinction studies were completed for one star, UX Ori. Intercomparison of the available sample of edge-on systems, with stars ranging from 1-6 solar masses, suggests that the signatures of accreting gas, disk winds, and bipolar flows and the prominence of a dust-scattered light contribution to the integrated light of the system decreases with decreasing IR excess.

  5. Accretion Disks and the Formation of Stellar Systems

    NASA Astrophysics Data System (ADS)

    Kratter, Kaitlin Michelle

    2011-02-01

    In this thesis, we examine the role of accretion disks in the formation of stellar systems, focusing on young massive disks which regulate the flow of material from the parent molecular core down to the star. We study the evolution of disks with high infall rates that develop strong gravitational instabilities. We begin in chapter 1 with a review of the observations and theory which underpin models for the earliest phases of star formation and provide a brief review of basic accretion disk physics, and the numerical methods that we employ. In chapter 2 we outline the current models of binary and multiple star formation, and review their successes and shortcomings from a theoretical and observational perspective. In chapter 3 we begin with a relatively simple analytic model for disks around young, high mass stars, showing that instability in these disks may be responsible for the higher multiplicity fraction of massive stars, and perhaps the upper mass to which they grow. We extend these models in chapter 4 to explore the properties of disks and the formation of binary companions across a broad range of stellar masses. In particular, we model the role of global and local mechanisms for angular momentum transport in regulating the relative masses of disks and stars. We follow the evolution of these disks throughout the main accretion phase of the system, and predict the trajectory of disks through parameter space. We follow up on the predictions made in our analytic models with a series of high resolution, global numerical experiments in chapter 5. Here we propose and test a new parameterization for describing rapidly accreting, gravitationally unstable disks. We find that disk properties and system multiplicity can be mapped out well in this parameter space. Finally, in chapter 6, we address whether our studies of unstable disks are relevant to recently detected massive planets on wide orbits around their central stars.

  6. Numerical Simulations of Naturally Tilted, Retrogradely Precessing, Nodal Superhumping Accretion Disks

    NASA Astrophysics Data System (ADS)

    Montgomery, M. M.

    2012-02-01

    Accretion disks around black hole, neutron star, and white dwarf systems are thought to sometimes tilt, retrogradely precess, and produce hump-shaped modulations in light curves that have a period shorter than the orbital period. Although artificially rotating numerically simulated accretion disks out of the orbital plane and around the line of nodes generates these short-period superhumps and retrograde precession of the disk, no numerical code to date has been shown to produce a disk tilt naturally. In this work, we report the first naturally tilted disk in non-magnetic cataclysmic variables using three-dimensional smoothed particle hydrodynamics. Our simulations show that after many hundreds of orbital periods, the disk has tilted on its own, and this disk tilt arises without the aid of radiation sources or magnetic fields. As the system orbits, the accretion stream strikes the bright spot (which is on the rim of the tilted disk) and flows over and under the disk on different flow paths. These different flow paths suggest the lift force as a source of disk tilt. Our results confirm the disk shape, disk structure, and negative superhump period, and support the suggested source of disk tilt, source of retrograde precession, and location associated with X-ray and He II emission from the disk, as proposed in previous works. Our results identify the fundamental negative superhump frequency as the indicator of disk tilt around the line of nodes.

  7. Dynamics of binary and planetary-system interaction with disks - Eccentricity changes

    NASA Technical Reports Server (NTRS)

    Artymowicz, Pawel

    1992-01-01

    Protostellar and protoplanetary systems, as well as merging galactic nuclei, often interact tidally and resonantly with the astrophysical disks via gravity. Underlying our understanding of the formation processes of stars, planets, and some galaxies is a dynamical theory of such interactions. Its main goals are to determine the geometry of the binary-disk system and, through the torque calculations, the rate of change of orbital elements of the components. We present some recent developments in this field concentrating on eccentricity driving mechanisms in protoplanetary and protobinary systems. In those two types of systems the result of the interaction is opposite. A small body embedded in a disk suffers a decrease of orbital eccentricity, whereas newly formed binary stars surrounded by protostellar disks may undergo a significant orbital evolution increasing their eccentricities.

  8. Online performance evaluation of RAID 5 using CPU utilization

    NASA Astrophysics Data System (ADS)

    Jin, Hai; Yang, Hua; Zhang, Jiangling

    1998-09-01

    Redundant arrays of independent disks (RAID) technology is an efficient way to ease the bottleneck between CPU processing capability and the I/O subsystem. From the system point of view, the most important metric of online performance is CPU utilization. This paper first calculates the CPU utilization of a system attached to a RAID level 5 subsystem using a statistical averaging method. The simulation results for a system connected to a RAID level 5 subsystem show that using multiple disks as an array for parallel data access is an efficient way to enhance the online performance of a disk storage system. Using high-end disk drives to compose the disk array is key to enhancing the online performance of the system.
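
    The paper's statistical model is not reproduced in the abstract; as a rough stand-in, the utilization law from queueing theory (U = X · S, throughput times mean CPU service demand per request) gives the flavor of such an averaged estimate. All numbers below are invented for illustration:

      # Rough illustration (not the paper's model): estimate CPU utilization for
      # a RAID 5 workload from the utilization law U = X * S, where X is the I/O
      # throughput and S the mean CPU service demand per request.

      def cpu_utilization(iops: float, cpu_cost_read: float, cpu_cost_write: float,
                          read_fraction: float) -> float:
          # RAID 5 small writes incur extra work for parity (read-modify-write:
          # two reads + two writes), approximated here by a 4x factor on writes.
          s_mean = (read_fraction * cpu_cost_read
                    + (1.0 - read_fraction) * 4 * cpu_cost_write)
          return iops * s_mean

      # Example: 800 IOPS, 50 us CPU per read, 60 us CPU per write, 70% reads.
      u = cpu_utilization(800, 50e-6, 60e-6, 0.70)
      print(f"estimated CPU utilization: {u:.1%}")  # ~8.6%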

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballering, Nicholas P.; Rieke, George H.; Gáspár, András, E-mail: ballerin@email.arizona.edu

    Observations of debris disks allow for the study of planetary systems, even where planets have not been detected. However, debris disks are often only characterized by unresolved infrared excesses that resemble featureless blackbodies, and the location of the emitting dust is uncertain due to a degeneracy with the dust grain properties. Here, we characterize the Spitzer Infrared Spectrograph spectra of 22 debris disks exhibiting 10 μm silicate emission features. Such features arise from small warm dust grains, and their presence can significantly constrain the orbital location of the emitting debris. We find that these features can be explained by the presence of an additional dust component in the terrestrial zones of the planetary systems, i.e., an exozodiacal belt. Aside from possessing exozodiacal dust, these debris disks are not particularly unique; their minimum grain sizes are consistent with the blowout sizes of their systems, and their brightnesses are comparable to those of featureless warm debris disks. These disks are in systems of a range of ages, though the older systems with features are found only around A-type stars. The features in young systems may be signatures of terrestrial planet formation. Analyzing the spectra of unresolved debris disks with emission features may be one of the simplest and most accessible ways to study the terrestrial regions of planetary systems.

  10. Selected Conference Proceedings from the 1985 Videodisc, Optical Disk, and CD-ROM Conference and Exposition (Philadelphia, PA, December 10-12, 1985).

    ERIC Educational Resources Information Center

    Cerva, John R.; And Others

    1986-01-01

    Eight papers cover: optical storage technology; cross-cultural videodisc design; optical disk technology use at the Library of Congress Research Service and National Library of Medicine; Internal Revenue Service image storage and retrieval system; solving business problems with CD-ROM; a laser disk operating system; and an optical disk for…

  11. Electron beam diagnostic for profiling high power beams

    DOEpatents

    Elmer, John W [Danville, CA; Palmer, Todd A [Livermore, CA; Teruya, Alan T [Livermore, CA

    2008-03-25

    A system for characterizing high power electron beams at power levels of 10 kW and above is described. This system is comprised of a slit disk assembly having a multitude of radial slits, a conducting disk with the same number of radial slits located below the slit disk assembly, a Faraday cup assembly located below the conducting disk, and a start-stop target located proximate the slit disk assembly. In order to keep the system from over-heating during use, a heat sink is placed in close proximity to the components discussed above, and an active cooling system, using water, for example, can be integrated into the heat sink. During use, the high power beam is initially directed onto a start-stop target and after reaching its full power is translated around the slit disk assembly, wherein the beam enters the radial slits and the conducting disk radial slits and is detected at the Faraday cup assembly. A trigger probe assembly can also be integrated into the system in order to aid in the determination of the proper orientation of the beam during reconstruction. After passing over each of the slits, the beam is then rapidly translated back to the start-stop target to minimize the amount of time that the high power beam comes in contact with the slit disk assembly. The data obtained by the system is then transferred into a computer system, where a computer tomography algorithm is used to reconstruct the power density distribution of the beam.
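
    The patent names only "a computer tomography algorithm"; filtered back-projection (the inverse Radon transform) is one standard choice for this reconstruction step. A sketch on synthetic data using scikit-image, where each radial slit crossing supplies one projection angle:

      import numpy as np
      from skimage.transform import radon, iradon

      # Sketch only: the record does not specify the algorithm beyond "computer
      # tomography"; filtered back-projection is one common option.

      # Synthetic Gaussian beam as a stand-in power-density distribution.
      n = 128
      y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      beam = np.exp(-((x - 0.1) ** 2 + y ** 2) / 0.05)

      # Each radial slit yields one projection angle; simulate the slit profiles.
      theta = np.linspace(0.0, 180.0, 17, endpoint=False)  # assumed 17 slit angles
      profiles = radon(beam, theta=theta)  # Faraday-cup current vs. position

      # Filtered back-projection reconstructs the power-density map.
      recon = iradon(profiles, theta=theta, filter_name="ramp")
      print("peak reconstruction error:", np.abs(recon - beam).max())

    With only a modest number of slit angles the reconstruction is approximate; more slits sharpen the recovered power-density distribution.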

  12. Characterizing the scientific potential of satellite sensors. [San Francisco, California

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Eleven thematic mapper (TM) radiometric calibration programs were tested and evaluated in support of the task to characterize the potential of LANDSAT TM digital imagery for scientific investigations in the Earth sciences and terrestrial physics. Three software errors related to integer overflow, divide by zero, and a nonexistent file group were found and resolved. Raw, calibrated, and corrected image groups that were created and stored on the Barker2 disk are enumerated. Black and white pixel print files were created for various subscenes of a San Francisco scene (ID 40392-18152). The development of linear regression software is discussed. The output of the software and its function are described. Future work in TM radiometric calibration, image processing, and software development is outlined.

  13. Imaging Transitional Disks with TMT: Lessons Learned from the SEEDS Survey

    NASA Technical Reports Server (NTRS)

    Grady, Carol A.; Fukagawa, M.; Muto, T.; Hashimoto, J.

    2014-01-01

    TMT studies of the early phases of giant planet formation will build on studies carried out in this decade using 8-meter class telescopes. One such study is the Strategic Exploration of Exoplanets and Disks with Subaru transitional disk survey. We have found a wealth of indirect signatures of giant planet presence, including spiral arms, pericenter offsets of the outer disk from the star, and changes in disk color at the inner edge of the outer disk in intermediate-mass PMS star disks. T Tauri star transitional disks are less flamboyant, but are also dynamically colder: any spiral arms in these disks will be more tightly wound. Imaging such features at the distance of the nearest star-forming regions requires higher angular resolution than achieved with HiCIAO + AO188. Imaging such disks with extreme AO systems requires the use of laser guide stars, and is infeasible with the extreme AO systems currently being commissioned on 8-meter class telescopes. Similarly, the JWST and AFTA/WFIRST coronagraphs being considered have inner working angles of about 0.2 arcsec, and will occult the inner 28 astronomical units of systems at d ~ 140 pc, a region where both high-contrast imagery and ALMA data indicate that giant planets are located in transitional disks. However, studies of transitional disks associated with solar-mass stars and their planet complement are feasible with TMT using NFIRAOS.

  14. Collective transport for active matter run-and-tumble disk systems on a traveling-wave substrate

    DOE PAGES

    Sándor, Csand; Libál, Andras; Reichhardt, Charles; ...

    2017-01-17

    Here, we examine numerically the transport of an assembly of active run-and-tumble disks interacting with a traveling-wave substrate. We show that as a function of substrate strength, wave speed, disk activity, and disk density, a variety of dynamical phases arise that are correlated with the structure and net flux of disks. We find that there is a sharp transition into a state in which the disks are only partially coupled to the substrate and form a phase-separated cluster state. This transition is associated with a drop in the net disk flux, and it can occur as a function of the substrate speed, maximum substrate force, disk run time, and disk density. Since variation of the disk activity parameters produces different disk drift rates for a fixed traveling-wave speed on the substrate, the system we consider could be used as an efficient method for active matter species separation. Within the cluster phase, we find that in some regimes the motion of the cluster center of mass is in the opposite direction to that of the traveling wave, while when the maximum substrate force is increased, the cluster drifts in the direction of the traveling wave. This suggests that swarming or clustering motion can serve as a method by which an active system can collectively move against an external drift.

  15. Collective transport for active matter run-and-tumble disk systems on a traveling-wave substrate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sándor, Csand; Libál, Andras; Reichhardt, Charles

    Here, we examine numerically the transport of an assembly of active run-and-tumble disks interacting with a traveling-wave substrate. We show that as a function of substrate strength, wave speed, disk activity, and disk density, a variety of dynamical phases arise that are correlated with the structure and net flux of disks. We find that there is a sharp transition into a state in which the disks are only partially coupled to the substrate and form a phase-separated cluster state. This transition is associated with a drop in the net disk flux, and it can occur as a function of the substrate speed, maximum substrate force, disk run time, and disk density. Since variation of the disk activity parameters produces different disk drift rates for a fixed traveling-wave speed on the substrate, the system we consider could be used as an efficient method for active matter species separation. Within the cluster phase, we find that in some regimes the motion of the cluster center of mass is in the opposite direction to that of the traveling wave, while when the maximum substrate force is increased, the cluster drifts in the direction of the traveling wave. This suggests that swarming or clustering motion can serve as a method by which an active system can collectively move against an external drift.

  16. The properties of the disk system of globular clusters

    NASA Technical Reports Server (NTRS)

    Armandroff, Taft E.

    1989-01-01

    A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 ± 29 km/s and a line-of-sight velocity dispersion of 59 ± 14 km/s have been found for the metal-rich clusters.

  17. Method and system for managing power grid data

    DOEpatents

    Yin, Jian; Akyol, Bora A.; Gorton, Ian

    2015-11-10

    A system and method of managing time-series data for smart grids is disclosed. Data is collected from a plurality of sensors. An index is modified for a newly created block. Each read or write requires a single disk operation. The one disk operation per read includes accessing and looking up the index to locate the data without movement of an arm of the disk, and obtaining the data. The one disk operation per write includes searching the disk for free space, calculating an offset, modifying the index, and writing the data contiguously into a block of the disk the index points to.
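
    A minimal sketch of the idea (illustrative only, not the patented implementation): keep the block index in memory so a read costs one seek-and-read and a write costs one contiguous append plus an index update. The class and key scheme below are invented:

      import os

      # Hypothetical sketch of one-disk-operation-per-read/write storage: an
      # in-memory index maps (sensor_id, hour) -> (offset, length), so a read
      # is a single seek+read and a write is a single contiguous append.
      class BlockStore:
          def __init__(self, path: str):
              self.f = open(path, "a+b")
              self.index = {}  # (sensor_id, hour) -> (offset, length)

          def write_block(self, sensor_id: str, hour: int, data: bytes) -> None:
              self.f.seek(0, os.SEEK_END)          # free space is at end of file
              offset = self.f.tell()
              self.f.write(data)                   # one contiguous disk write
              self.index[(sensor_id, hour)] = (offset, len(data))  # modify index

          def read_block(self, sensor_id: str, hour: int) -> bytes:
              offset, length = self.index[(sensor_id, hour)]  # lookup, no disk I/O
              self.f.seek(offset)
              return self.f.read(length)           # one disk read

      store = BlockStore("grid.dat")
      store.write_block("meter-42", 13, b"\x01\x02\x03")
      print(store.read_block("meter-42", 13))

    Keeping the index in RAM is what lets each read avoid extra index-page fetches from disk.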

  18. Studies of extra-solar Oort Clouds and the Kuiper Disk

    NASA Technical Reports Server (NTRS)

    Stern, S. Alan

    1994-01-01

    The March 1994 Semi-Annual report for Studies of Extra-Solar Oort Clouds and the Kuiper Disk is presented. We are conducting research designed to enhance our understanding of the evolution and detectability of comet clouds and disks. This area also holds promise for improving our understanding of outer solar system formation, the bombardment history of the planets, the transport of volatiles and organics from the outer solar system to the inner planets, and the ultimate fate of comet clouds around the Sun and other stars. According to 'standard' theory, both the Kuiper Disk and Oort Cloud are (at least in part) natural products of the planetary accumulation stage of solar system formation. One expects such assemblages to be a common attribute of other solar systems. Therefore, searches for comet disks and clouds orbiting other stars offer a new method for inferring the presence of planetary systems. Our three-year effort consists of two major efforts: observational work to predict and search for the signatures of Oort Clouds and comet disks around other stars; and modeling studies of the formation and evolution of the Kuiper Disk (KD) and similar assemblages that may reside around other stars, including beta Pic.

  19. Performance of a distributed superscalar storage server

    NASA Technical Reports Server (NTRS)

    Finestead, Arlan; Yeager, Nancy

    1993-01-01

    The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be developed that would allow for a cost effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster if asked to perform across a higher bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck. UniTree sites that would require a high ratio of file creations and deletions to reads and writes would run into this bottleneck. It is possible to improve the UniTree Name Server performance by bypassing the UniTree LibUnix library altogether, communicating directly with the UniTree Name Server, and optimizing creations. Although testing was performed in a less than ideal environment, hopefully the performance statistics stated in this paper will give end-users a realistic idea as to what performance they can expect in this type of setup.

  20. Optical Disk for Digital Storage and Retrieval Systems.

    ERIC Educational Resources Information Center

    Rose, Denis A.

    1983-01-01

    Availability of low-cost digital optical disks will revolutionize storage and retrieval systems over next decade. Three major factors will effect this change: availability of disks and controllers at low-cost and in plentiful supply; availability of low-cost and better output means for system users; and more flexible, less expensive communication…

  1. Modeling circumbinary planets: The case of Kepler-38

    NASA Astrophysics Data System (ADS)

    Kley, Wilhelm; Haghighipour, Nader

    2014-04-01

    Context. Recently, a number of planets orbiting binary stars have been discovered by the Kepler space telescope. In a few systems the planets reside close to the dynamical stability limit. Owing to the difficulty of forming planets in such close orbits, it is believed that they have formed farther out in the disk and migrated to their present locations. Aims: Our goal is to construct more realistic models of planet migration in circumbinary disks and to determine the final position of these planets more accurately. In our work, we focus on the system Kepler-38 where the planet is close to the stability limit. Methods: The evolution of the circumbinary disk is studied using two-dimensional hydrodynamical simulations. We study locally isothermal disks as well as more realistic models that include full viscous heating, radiative cooling from the disk surfaces, and radiative diffusion in the disk midplane. After the disk has been brought into a quasi-equilibrium state, a 115 Earth-mass planet is embedded and its evolution is followed. Results: In all cases the planets stop inward migration near the inner edge of the disk. In isothermal disks with a typical disk scale height of H/r = 0.05, the final outcome agrees very well with the observed location of planet Kepler-38b. For the radiative models, the disk thickness and location of the inner edge is determined by the mass in the system. For surface densities on the order of 3000 g/cm2 at 1 AU, the inner gap lies close to the binary and planets stop in the region between the 5:1 and 4:1 mean-motion resonances with the binary. A model with a disk with approximately a quarter of the mass yields a final position very close to the observed one. Conclusions: For planets migrating in circumbinary disks, the final position is dictated by the structure of the disk. Knowing the observed orbits of circumbinary planets, radiative disk simulations with embedded planets can provide important information on the physical state of the system during the final stages of its evolution. Movies are available in electronic form at http://www.aanda.org

  2. Evolution of protoplanetary disks with dynamo magnetic fields

    NASA Technical Reports Server (NTRS)

    Reyes-Ruiz, M.; Stepinski, Tomasz F.

    1994-01-01

    The notion that planetary systems are formed within dusty disks is certainly not a new one; the modern planet formation paradigm is based on suggestions made by Laplace more than 200 years ago. More recently, the foundations of accretion disk theory were initially developed with this problem in mind, and in the last decade astronomical observations have indicated that many young stars have disks around them. Such observations support the generally accepted model of a viscous Keplerian accretion disk for the early stages of planetary system formation. However, one of the major uncertainties remaining in understanding the dynamical evolution of protoplanetary disks is the mechanism responsible for the transport of angular momentum and subsequent mass accretion through the disk. This is a fundamental piece of the planetary system genesis problem since such mechanisms will determine the environment in which planets are formed. Among the mechanisms suggested for this effect is the Maxwell stress associated with a magnetic field threading the disk. Due to the low internal temperatures through most of the disk, even the question of the existence of a magnetic field must be seriously studied before including magnetic effects in the disk dynamics. On the other hand, from meteoritic evidence it is believed that magnetic fields of significant magnitude existed in the earliest, PP-disk-like, stage of our own solar system's evolution. Hence, the hypothesis that PP disks are magnetized is not made solely on the basis of theory. Previous studies have addressed the problem of the existence of a magnetic field in a steady-state disk and have found that the low conductivity results in a fast diffusion of the magnetic field on timescales much shorter than the evolutionary timescale. Hence the only way for a magnetic field to exist in PP disks for a considerable portion of their lifetimes is for it to be continuously regenerated. In the present work, we present results on the self-consistent evolution of a turbulent PP disk including the effects of a dynamo-generated magnetic field.

  3. Can Eccentric Debris Disks Be Long-lived? A First Numerical Investigation and Application to Zeta^2 Reticuli

    NASA Technical Reports Server (NTRS)

    Faramaz, V.; Beust, H.; Thebault, P.; Augereau, J.-C.; Bonsor, A.; delBurgo, C.; Ertel, S.; Marshall, J. P.; Milli, J.; Montesinos, B.; hide

    2014-01-01

    Context. Imaging of debris disks has found evidence for both eccentric and offset disks. One hypothesis is that they provide evidence for massive perturbers, for example, planets or binary companions, which sculpt the observed structures. One such disk was recently observed in the far-IR by the Herschel Space Observatory around Zeta2 Reticuli. In contrast with previously reported systems, the disk is significantly eccentric, and the system is several Gyr old. Aims. We aim to investigate the long-term evolution of eccentric structures in debris disks caused by a perturber on an eccentric orbit around the star. We hypothesise that the observed eccentric disk around Zeta2 Reticuli might be evidence of such a scenario. If so, we are able to constrain the mass and orbit of a potential perturber, either a giant planet or a binary companion. Methods. Analytical techniques were used to predict the effects of a perturber on a debris disk. Numerical N-body simulations were used to verify these results and further investigate the observable structures that may be produced by eccentric perturbers. The long-term evolution of the disk geometry was examined, with particular application to the Zeta2 Reticuli system. In addition, synthetic images of the disk were produced for direct comparison with Herschel observations. Results. We show that an eccentric companion can produce both the observed offsets and eccentric disks. These effects are not immediate, and we characterise the timescale required for the disk to develop to an eccentric state (and any spirals to vanish). For Zeta2 Reticuli, we derive limits on the mass and orbit of the companion required to produce the observations. Synthetic images show that the pattern observed around Zeta2 Reticuli can be produced by an eccentric disk seen close to edge-on, and allow us to bring additional constraints on the disk parameters of our model (disk flux and extent). Conclusions. We conclude that eccentric planets or stellar companions can induce long-lived eccentric structures in debris disks. Observations of such eccentric structures thus provide potential evidence of the presence of such a companion in a planetary system. We considered the specific example of Zeta2 Reticuli, whose observed eccentric disk can be explained by a distant companion (at tens of AU) on an eccentric orbit (ep greater than approx. 0.3).

  4. Procedure for Tooth Contact Analysis of a Face Gear Meshing With a Spur Gear Using Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Bibel, George; Lewicki, David G. (Technical Monitor)

    2002-01-01

    A procedure was developed to perform tooth contact analysis between a face gear meshing with a spur pinion using finite element analysis. The face gear surface points from a previous analysis were used to create a connected tooth solid model without gaps or overlaps. The face gear surface points were used to create a five tooth face gear Patran model (with rim) using Patran PCL commands. These commands were saved in a series of session files suitable for Patran input. A four tooth spur gear that meshes with the face gear was designed and constructed with Patran PCL commands. These commands were also saved in session files suitable for Patran input. The orientation of the spur gear required for meshing with the face gear was determined. The required rotations and translations are described and built into the session file for the spur gear. The Abaqus commands for three-dimensional meshing were determined and verified for a simplified model containing one spur tooth and one face gear tooth. The boundary conditions, loads, and weak spring constraints were determined to make the simplified model work. The load steps and load increments to establish contact and obtain a realistic load were determined for the simplified two tooth model. Contact patterns give some insight into required mesh density. Building the two gears in two different local coordinate systems and rotating the local coordinate systems was verified as an easy way to roll the gearset through mesh. Due to limitations on swap space and disk space, and the time constraints of the summer period, the larger model was not completed.

  5. Force Network of a 2D Frictionless Emulsion System

    NASA Astrophysics Data System (ADS)

    Desmond, Kenneth; Weeks, Eric R.

    2010-03-01

    We use a quasi-two-dimensional emulsion as a new experimental system to measure various jamming transition properties. Our system consists of oil-in-water emulsion droplets confined between two parallel plates, so that the droplets are squeezed into quasi-two-dimensional disks, analogous to granular photoelastic disks. By varying the droplet area fraction, we investigate the force network of this system as we cross through the jamming transition. At a critical area fraction, the composition of the system is no longer characterized primarily by circular disks, but by disks deformed to varying degrees. Quantifying the deformation provides information about the forces acting upon each droplet, and ultimately the force network. The probability distribution of forces is similar to that found for photoelastic disks, with the width of the force distribution narrowing with increasing packing fraction.
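
    Once a force has been assigned to each droplet contact from its measured deformation, the force-network statistic quoted above is just the distribution of forces normalized by their mean. A minimal sketch of that last step (binning choices are arbitrary; extracting the forces themselves is the hard, experiment-specific part):

    ```python
    import numpy as np

    def force_pdf(forces, bins=30):
        """Histogram of contact forces normalized by the mean force,
        P(f/<f>), the usual way jamming force networks are reported."""
        f = np.asarray(forces, dtype=float)
        f = f / f.mean()
        density, edges = np.histogram(f, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, density
    ```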

  6. Planetary Systems Dynamics: Eccentric patterns in debris disks & Planetary migration in binary systems

    NASA Astrophysics Data System (ADS)

    Faramaz, V.; Beust, H.; Augereau, J.-C.; Bonsor, A.; Thébault, P.; Wu, Y.; Marshall, J. P.; del Burgo, C.; Ertel, S.; Eiroa, C.; Montesinos, B.; Mora, A.

    2014-01-01

    We present some highlights of two ongoing investigations that deal with the dynamics of planetary systems. Firstly, until recently, observed eccentric patterns in debris disks had been found only in young systems. However, recent observations of Gyr-old eccentric debris disks lead one to question the survival timescale of this type of asymmetry. One such disk was recently observed in the far-IR by the Herschel Space Observatory around ζ2 Reticuli. Secondly, as a binary companion orbits a circumprimary disk, it creates regions where planet formation is strongly handicapped. However, some planets have been detected in this zone in tight binary systems (γ Cep, HD 196885). We aim to determine whether a binary companion can affect migration such that planets are brought into these regions, and we focus in particular on the planetesimal-driven migration mechanism.

  7. NanoRocks: A Long-Term Microgravity Experiment to Study Planet Formation and Planetary Ring Particles

    NASA Astrophysics Data System (ADS)

    Brisset, J.; Colwell, J. E.; Dove, A.; Maukonen, D.; Brown, N.; Lai, K.; Hoover, B.

    2015-12-01

    We report on the results of the NanoRocks experiment on the International Space Station (ISS), which simulates collisions that occur in protoplanetary disks and planetary ring systems. A critical stage of the process of early planet formation is the growth of solid bodies from mm-sized chondrules and aggregates to km-sized planetesimals. To characterize the collision behavior of dust in protoplanetary conditions, experimental data are required, working hand in hand with models and numerical simulations. In addition, the collisional evolution of planetary rings takes place in the same collisional regime. The objective of the NanoRocks experiment is to study low-energy collisions of mm-sized particles of different shapes and materials. An aluminum tray (~8x8x2 cm) divided into eight sample cells holding different types of particles is shaken every 60 s, providing the particles with initial velocities of a few cm/s. In September 2014, NanoRocks reached the ISS, and 220 video files, each covering one shaking cycle, have already been downloaded from the station. The data analysis is focused on the dynamical evolution of the multi-particle systems and on the formation of clusters. We track the particles down to mean relative velocities of less than 1 mm/s, where we observe cluster formation. The mean velocity evolution after each shaking event allows for a determination of the mean coefficient of restitution for each particle set. These values can be used as input for protoplanetary disk and planetary ring simulations. In addition, the cluster analysis allows for a determination of the mean final cluster size and the average particle velocity at the onset of clustering. The size and shape of these particle clumps are crucial for understanding the first stages of planet formation inside protoplanetary disks, as well as many features of Saturn's rings. We report on the results from the ensemble of these collision experiments and discuss applications to planetesimal formation and planetary ring evolution.
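
    The coefficient-of-restitution estimate described above reduces to a ratio of tracked speeds across each impact. A minimal sketch, assuming the pre- and post-collision relative speeds have already been extracted from the video tracking (the numbers below are made up):

    ```python
    import numpy as np

    def mean_restitution(v_before, v_after):
        """Mean and scatter of the coefficient of restitution over a set
        of tracked collisions, eps_i = v_after_i / v_before_i."""
        eps = np.asarray(v_after, dtype=float) / np.asarray(v_before, dtype=float)
        return eps.mean(), eps.std()

    # Illustrative speeds (cm/s) for five tracked impacts.
    print(mean_restitution([2.0, 1.4, 1.1, 0.8, 0.5],
                           [1.6, 1.2, 0.9, 0.7, 0.4]))
    ```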

  8. Discovery of a point-like source and a third spiral arm in the transition disk around the Herbig Ae star MWC 758

    NASA Astrophysics Data System (ADS)

    Reggiani, M.; Christiaens, V.; Absil, O.; Mawet, D.; Huby, E.; Choquet, E.; Gomez Gonzalez, C. A.; Ruane, G.; Femenia, B.; Serabyn, E.; Matthews, K.; Barraza, M.; Carlomagno, B.; Defrère, D.; Delacroix, C.; Habraken, S.; Jolivet, A.; Karlsson, M.; Orban de Xivry, G.; Piron, P.; Surdej, J.; Vargas Catalan, E.; Wertz, O.

    2018-03-01

    Context. Transition disks offer the extraordinary opportunity to look for newly born planets and to investigate the early stages of planet formation. Aims: In this context we observed the Herbig A5 star MWC 758 with the L'-band vector vortex coronagraph installed in the near-infrared camera and spectrograph NIRC2 at the Keck II telescope, with the aim of unveiling the nature of the spiral structure by constraining the presence of planetary companions in the system. Methods: Our high-contrast imaging observations show a bright (ΔL' = 7.0 ± 0.3 mag) point-like emission south of MWC 758 at a deprojected separation of 20 au (r = 0.''111 ± 0.''004) from the central star. We also recover the two spiral arms (southeast and northwest), already imaged by previous studies in polarized light, and discover a third arm to the southwest of the star. No additional companions were detected in the system down to 5 Jupiter masses beyond 0.''6 from the star. Results: We propose that the bright L'-band emission could be caused by the presence of an embedded and accreting protoplanet, although the possibility of it being an asymmetric disk feature cannot be excluded. The spiral structure is probably not related to the protoplanet candidate, unless on an inclined and eccentric orbit, and it could be due to one (or more) yet undetected planetary companions at the edge of or outside the spiral pattern. Future observations and additional simulations will be needed to shed light on the true nature of the point-like source and its link with the spiral arms. The reduced images (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A74

  9. The mass storage testing laboratory at GSFC

    NASA Technical Reports Server (NTRS)

    Venkataraman, Ravi; Williams, Joel; Michaud, David; Gu, Heng; Kalluri, Atri; Hariharan, P. C.; Kobler, Ben; Behnke, Jeanne; Peavey, Bernard

    1998-01-01

    Industry-wide benchmarks exist for measuring the performance of processors (SPECmarks), and of database systems (Transaction Processing Council). Despite storage having become the dominant item in computing and IT (Information Technology) budgets, no such common benchmark is available in the mass storage field. Vendors and consultants provide services and tools for capacity planning and sizing, but these do not account for the complete set of metrics needed in today's archives. The availability of automated tape libraries, high-capacity RAID systems, and high-bandwidth interconnectivity between processor and peripherals has led to demands for services which traditional file systems cannot provide. File Storage and Management Systems (FSMS), which began to be marketed in the late '80s, have helped to some extent with large tape libraries, but their use has introduced additional parameters affecting performance. The aim of the Mass Storage Test Laboratory (MSTL) at Goddard Space Flight Center is to develop a test suite that includes not only a comprehensive check list to document a mass storage environment but also benchmark code. Benchmark code is being tested that will provide measurements for both baseline systems, i.e., applications interacting with peripherals through the operating system services, and for combinations involving an FSMS. The benchmarks are written in C, and are easily portable. They are initially aimed at the UNIX Open Systems world. Measurements are being made using a Sun Ultra 170 Sparc with 256 MB memory running Solaris 2.5.1 with the following configuration: 4 mm tape stacker on SCSI 2 Fast/Wide; 4 GB disk device on SCSI 2 Fast/Wide; and Sony Petaserve on Fast/Wide differential SCSI 2.
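
    The "baseline" measurements described above, applications driving peripherals through ordinary operating system services, reduce to timed streaming I/O. The real suite is written in C; the following is a minimal sketch of the same idea in Python (path and sizes arbitrary):

    ```python
    import os
    import time

    def sequential_write_mbps(path, total_mb=256, block_kb=64):
        """Stream fixed-size blocks through the normal file interface,
        fsync at the end, and report the effective throughput in MB/s."""
        block = b"\0" * (block_kb * 1024)
        n_blocks = (total_mb * 1024) // block_kb
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(n_blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        return total_mb / (time.time() - start)

    print(sequential_write_mbps("/tmp/mstl_baseline.dat"))
    ```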

  10. OT1_ipascucc_1: Understanding the Origin of Transition Disks via Disk Mass Measurements

    NASA Astrophysics Data System (ADS)

    Pascucci, I.

    2010-07-01

    Transition disks are a distinguished group of few-Myr-old systems caught in the phase of dispersing their inner dust disk. Three different processes have been proposed to explain this inside-out clearing: grain growth, photoevaporation driven by the central star, and dynamical clearing by a forming giant planet. Which of these processes leads to a transition disk? Distinguishing between them requires the combined knowledge of stellar accretion rates and disk masses. We propose here to use 43.8 hours of PACS spectroscopy to detect the [OI] 63 micron emission line from a sample of 21 well-known transition disks with measured mass accretion rates. We will use this line, in combination with ancillary CO millimeter lines, to measure their gas disk mass. Because gas dominates the mass of protoplanetary disks, our approach and choice of lines will enable us to trace the bulk of the disk mass that resides beyond tens of AU from young stars. Our program will quadruple the number of transition disks currently observed with Herschel in this setting and for which disk masses can be measured. We will then place the transition and the ~100 classical/non-transition disks of similar age (from the Herschel KP "Gas in Protoplanetary Systems") in the mass accretion rate-disk mass diagram with two main goals: 1) reveal which gaps have been created by grain growth, photoevaporation, or giant planet formation and 2) from the statistics, determine the main disk dispersal mechanism leading to a transition disk.

  11. UBVR observation of V1357 Cyg = Cyg X-1. Search of the optical radiation of the accretion disk

    NASA Technical Reports Server (NTRS)

    Shevchenko, V. S.

    1979-01-01

    Data from 30 nights of V 1357 Cyg observations in July, August, and September of 1977 are presented. The contribution of the disk to the optical brightness of the system is computed, taking into account the heating of its surface by ultraviolet radiation from V 1357 Cyg and X-ray radiation from Cyg X-1. The disk radiation explains the irregular variability in the system brightness. The possibility of the eclipse of the star by the disk and of the disk by the star is discussed.

  12. Compact laser amplifier system

    DOEpatents

    Carr, R.B.

    1974-02-26

    A compact laser amplifier system is described in which a plurality of face-pumped annular disks, aligned along a common axis, independently radially amplify a stimulating light pulse. Partially reflective or lasing means, coaxially positioned at the center of each annular disk, radially deflects a stimulating light directed down the common axis uniformly into each disk for amplification, such that the light is amplified by the disks in a parallel manner. Circumferential reflecting means coaxially disposed around each disk directs the amplified light emission either toward a common point or in a common direction. (Official Gazette)

  13. VizieR Online Data Catalog: APOGEE kinematics. I. Galactic bulge overview (Ness+, 2016)

    NASA Astrophysics Data System (ADS)

    Ness, M.; Zasowski, G.; Johnson, J. A.; Athanassoula, E.; Majewski, S. R.; Garcia Perez, A. E.; Bird, J.; Nidever, D.; Schneider, D. P.; Sobeck, J.; Frinchaboy, P.; Pan, K.; Bizyaev, D.; Oravetz, D.; Simmons, A.

    2016-05-01

    We use the APOGEE spectra (R=22500) from the SDSS-III Data Release 12 (DR12; Ahn et al. 2014ApJS..211...17A) for about 20000 stars toward the Galactic bulge and surrounding disk. The APOGEE survey, part of the SDSS-III project (Eisenstein et al. 2011AJ....142...72E), operates at the 2.5m telescope of the Apache Point Observatory. (1 data file).

  14. A program to compute three-dimensional subsonic unsteady aerodynamic characteristics using the doublet lattice method, L216 (DUBFLX). Volume 1: Engineering and usage

    NASA Technical Reports Server (NTRS)

    Richard, M.; Harrison, B. A.

    1979-01-01

    The program input presented consists of configuration geometry, aerodynamic parameters, and modal data; output includes element geometry, pressure difference distributions, integrated aerodynamic coefficients, stability derivatives, generalized aerodynamic forces, and aerodynamic influence coefficient matrices. Optionally, modal data may be input on magnetic file (tape or disk), and certain geometric and aerodynamic output may be saved for subsequent use.

  15. Real-Time Processing of Pressure-Sensitive Paint Images

    DTIC Science & Technology

    2006-12-01

    intermediate or final data to the hard disk in 3D grid format. In addition to the pressure or pressure coefficient at every grid point, the saved file may...occurs. Nevertheless, to achieve an accurate mapping between 2D image coordinates and 3D spatial coordinates, additional parameters must be introduced. A...improved mapping between the 2D and 3D coordinates. In a more sophisticated approach, additional terms corresponding to specific deformation modes

  16. Steamy Solar System

    NASA Technical Reports Server (NTRS)

    2007-01-01

    [figure removed for brevity, see original site]

    This diagram illustrates the earliest journeys of water in a young, forming star system. Stars are born out of icy cocoons of gas and dust. As the cocoon collapses under its own weight in an inside-out fashion, a stellar embryo forms at the center surrounded by a dense, dusty disk. The stellar embryo 'feeds' from the disk for a few million years, while material in the disk begins to clump together to form planets.

    NASA's Spitzer Space Telescope was able to probe a crucial phase of this stellar evolution - a time when the cocoon is vigorously falling onto the pre-planetary disk. The infrared telescope detected water vapor as it smacks down on a disk circling a forming star called NGC 1333-IRAS 4B. This vapor started out as ice in the outer envelope, but vaporized upon its arrival at the disk.

    By analyzing the water in the system, astronomers were also able to learn about other characteristics of the disk, such as its size, density, and temperature.

    How did Spitzer see the water vapor deep in the NGC 1333-IRAS 4B system? This is most likely because the system is oriented in just the right way, such that its thicker disk is seen face-on from our Earthly perspective. In this 'face-on' orientation, Spitzer can peer through a window carved by an outflow of material from the embryonic star. The system in this drawing is shown in the opposite, 'edge-on' configuration.

  17. VizieR Online Data Catalog: CGS. V. Statistical study of bars and buckled bars (Li+, 2017)

    NASA Astrophysics Data System (ADS)

    Li, Z.-Y.; Ho, L. C.; Barth, A. J.

    2018-04-01

    Images in B-, V-, R-, and I-band filters were taken with the du Pont 2.5m telescope at Las Campanas Observatory, with a field of view (FOV) of 8.9'x8.9'. The typical depths of the B-, V-, R-, and I-band images are 27.5, 26.9, 26.4, and 25.3mag/arcsec2, respectively. More information about the Carnegie-Irvine Galaxy Survey (CGS) design, data reduction, and photometric measurements can be found in Papers I (Ho+, 2011, J/ApJS/197/21) and II (Li+, 2011, J/ApJS/197/22). In this work, we use the CGS I-band images to minimize the effect of dust extinction. The selected sample contains 376 disk galaxies, 264 of which host bars. (1 data file).

  18. Stokes Profile Compression Applied to VSM Data

    NASA Astrophysics Data System (ADS)

    Toussaint, W. A.; Henney, C. J.; Harvey, J. W.

    2012-02-01

    The practical details of applying the Expansion in Hermite Functions (EHF) method to compression of full-disk full-Stokes solar spectroscopic data from the SOLIS/VSM instrument are discussed in this paper. The algorithm developed and discussed here preserves the 630.15 and 630.25 nm Fe i lines, along with the local continuum and telluric lines. This compression greatly reduces the amount of space required to store these data sets while maintaining the quality of the data, allowing these observations to be archived and made publicly available with limited bandwidth. Applying EHF to the full-Stokes profiles and saving the coefficient files with Rice compression reduces the disk space required to store these observations by a factor of 20, while maintaining the quality of the data and with a total compression time only 35% slower than the standard gzip (GNU zip) compression.
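
    The EHF idea is an orthogonal-function projection: each suitably centred and scaled line profile is stored as a short vector of Hermite-function coefficients instead of a full spectrum. A minimal sketch of the projection and reconstruction, leaving out the line centring, scaling, and telluric handling that the production pipeline performs:

    ```python
    import numpy as np
    from math import factorial, pi, sqrt
    from scipy.special import eval_hermite

    def hermite_function(n, x):
        """Orthonormal Hermite function psi_n(x)."""
        norm = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
        return norm * eval_hermite(n, x) * np.exp(-0.5 * x**2)

    def ehf_coefficients(x, profile, n_terms=20):
        """Project a sampled profile onto the first n_terms Hermite
        functions; the coefficients are what would be stored on disk."""
        dx = x[1] - x[0]
        return np.array([np.sum(profile * hermite_function(n, x)) * dx
                         for n in range(n_terms)])

    def ehf_reconstruct(x, coeffs):
        return sum(c * hermite_function(n, x) for n, c in enumerate(coeffs))

    x = np.linspace(-6.0, 6.0, 512)
    dip = 0.6 * np.exp(-x**2 / 0.8)        # toy continuum-subtracted line
    c = ehf_coefficients(x, dip)
    print(np.max(np.abs(dip - ehf_reconstruct(x, c))))  # small residual
    ```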

  19. VizieR Online Data Catalog: PHAT. XIX. Formation history of M31 disk (Williams+, 2017)

    NASA Astrophysics Data System (ADS)

    Williams, B. F.; Dolphin, A. E.; Dalcanton, J. J.; Weisz, D. R.; Bell, E. F.; Lewis, A. R.; Rosenfield, P.; Choi, Y.; Skillman, E.; Monachesi, A.

    2018-05-01

    The data for this study come from the Panchromatic Hubble Andromeda Treasury (PHAT) survey (Dalcanton+ 2012ApJS..200...18D ; Williams+ 2014, J/ApJS/215/9). Briefly, PHAT is a multiwavelength HST survey mapping 414 contiguous HST fields of the northern M31 disk and bulge in six broad wavelength bands from the near-ultraviolet to the near-infrared. The survey obtained data in the F275W and F336W bands with the UVIS detectors of the Wide-Field Camera 3 (WFC3) camera, the F475W and F814W bands in the WFC detectors of the Advanced Camera for Surveys (ACS) camera, and the F110W and F160W bands in the IR detectors of the WFC3 camera. (4 data files).

  20. Optimizing a tandem disk model

    NASA Astrophysics Data System (ADS)

    Healey, J. V.

    1983-08-01

    The optimum values of the solidity ratio, tip speed ratio (TSR), and the preset angle of attack, the corresponding distribution, and the breakdown mechanism for a tandem disk model for a crosswind machine such as a Darrieus are examined analytically. Equations are formulated for thin blades with zero drag in consideration of two plane rectangular disks, both perpendicular to the wind flow. Power coefficients are obtained for both disks and comparisons are made between a single-disk system and a two-disk system. The power coefficient for the tandem disk model is shown to be a sum of the coefficients of the individual disks, with a maximum value of twice the Betz limit at an angle of attack of -1 deg and the TSR between 4-7. The model, applied to the NACA 0012 profile, gives a maximum power coefficient of 0.967 with a solidity ratio of 0.275 and highly limited ranges for the angle of attack and TSR.
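
    For reference, the single-disk baseline against which the tandem result is compared follows from one-dimensional momentum (actuator-disk) theory, which is where the Betz limit arises. A quick numerical check of that baseline (this is not the paper's tandem-disk calculation):

    ```python
    import numpy as np

    def power_coefficient(a):
        """Actuator-disk theory: Cp = 4a(1 - a)^2, with a the axial
        induction factor; the maximum at a = 1/3 is the Betz limit 16/27."""
        return 4.0 * a * (1.0 - a) ** 2

    a = np.linspace(0.0, 0.5, 501)
    cp = power_coefficient(a)
    print(a[cp.argmax()], cp.max())  # ~0.333, ~0.593
    ```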

  1. Formation of Sharp Eccentric Rings in Debris Disks with Gas but Without Planets

    NASA Technical Reports Server (NTRS)

    Lyra, W.; Kuchner, M.

    2013-01-01

    'Debris disks' around young stars (analogues of the Kuiper Belt in our Solar System) show a variety of non-trivial structures attributed to planetary perturbations and used to constrain the properties of those planets. However, these analyses have largely ignored the fact that some debris disks are found to contain small quantities of gas, a component that all such disks should contain at some level. Several debris disks have been measured with a dust-to-gas ratio of about unity, at which the effect of hydrodynamics on the structure of the disk cannot be ignored. Here we report linear and nonlinear modelling that shows that dust-gas interactions can produce some of the key patterns attributed to planets. We find a robust clumping instability that organizes the dust into narrow, eccentric rings, similar to the Fomalhaut debris disk. Thus, the presence of planets is not necessarily required to explain these systems.

  2. A disk of scattered icy objects and the origin of Jupiter-family comets.

    PubMed

    Duncan, M J; Levison, H F

    1997-06-13

    Orbital integrations carried out for 4 billion years produced a disk of scattered objects beyond the orbit of Neptune. Objects in this disk can be distinguished from Kuiper belt objects by a greater range of eccentricities and inclinations. This disk was formed in the simulations by encounters with Neptune during the early evolution of the outer solar system. After particles first encountered Neptune, the simulations show that about 1 percent of the particles survive in this disk for the age of the solar system. A disk currently containing as few as approximately 6 × 10^8 objects could supply all of the observed Jupiter-family comets. Two recently discovered objects, 1996 RQ20 and 1996 TL66, have orbital elements similar to those predicted for objects in this disk, suggesting that they are thus far the only members of this disk to be identified.

  3. MESHMAKER (MM) V1.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MORIDIS, GEORGE

    2016-05-02

    MeshMaker v1.5 is a code that describes the system geometry and discretizes the domain in problems of flow and transport through porous and fractured media that are simulated using the TOUGH+ [Moridis and Pruess, 2014] or TOUGH2 [Pruess et al., 1999; 2012] families of codes. It is a significantly modified and drastically enhanced version of an earlier simpler facility that was embedded in the TOUGH2 codes [Pruess et al., 1999; 2012], from which it could not be separated. The code (MeshMaker.f90) is a stand-alone product written in FORTRAN 95/2003, is written according to the tenets of Object-Oriented Programming, has a modular structure, and can perform a number of mesh generation and processing operations. It can generate two-dimensional radially symmetric (r,z) meshes, and one-, two-, and three-dimensional rectilinear (Cartesian) grids in (x,y,z). The code generates the file MESH, which includes all the elements and connections that describe the discretized simulation domain and conforms to the requirements of the TOUGH+ and TOUGH2 codes. Multiple-porosity processing for simulation of flow in naturally fractured reservoirs can be invoked by means of a keyword MINC, which stands for Multiple INteracting Continua. The MINC process operates on the data of the primary (porous medium) mesh as provided on disk file MESH, and generates a secondary mesh containing fracture and matrix elements with identical data formats on file MINC.
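
    In spirit, the rectilinear part of the mesh generation reduces to numbering cells and listing face-sharing neighbour pairs with their interface areas, the two ingredients a MESH file encodes. A toy sketch under that reading (this is not the fixed-width TOUGH MESH record format, which carries more data per element and connection):

    ```python
    def cartesian_mesh(nx, ny, nz, dx, dy, dz):
        """Toy (x, y, z) grid: return cell centres and the list of
        face-sharing neighbour pairs with their interface areas."""
        def idx(i, j, k):
            return i + nx * (j + ny * k)

        elements = [((i + 0.5) * dx, (j + 0.5) * dy, (k + 0.5) * dz)
                    for k in range(nz) for j in range(ny) for i in range(nx)]
        connections = []
        for k in range(nz):
            for j in range(ny):
                for i in range(nx):
                    if i + 1 < nx:
                        connections.append((idx(i, j, k), idx(i + 1, j, k), dy * dz))
                    if j + 1 < ny:
                        connections.append((idx(i, j, k), idx(i, j + 1, k), dx * dz))
                    if k + 1 < nz:
                        connections.append((idx(i, j, k), idx(i, j, k + 1), dx * dy))
        return elements, connections

    elems, conns = cartesian_mesh(3, 2, 1, 10.0, 10.0, 5.0)
    print(len(elems), len(conns))  # 6 cells, 7 connections
    ```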

  4. Molecular Gas in Young Debris Disks

    NASA Technical Reports Server (NTRS)

    Moor, A.; Abraham, P.; Juhasz, A.; Kiss, Cs.; Pascucci, I.; Kospal, A.; Apai, D.; Henning, T.; Csengeri, T.; Grady, C.

    2011-01-01

    Gas-rich primordial disks and tenuous gas-poor debris disks are usually considered as two distinct evolutionary phases of the circumstellar matter. Interestingly, the debris disk around the young main-sequence star 49 Ceti possesses a substantial amount of molecular gas and possibly represents the missing link between the two phases. Motivated to understand the evolution of the gas component in circumstellar disks via finding more 49 Ceti-like systems, we carried out a CO J = 3-2 survey with the Atacama Pathfinder EXperiment, targeting 20 infrared-luminous debris disks. These systems fill the gap between primordial and old tenuous debris disks in terms of fractional luminosity. Here we report on the discovery of a second 49 Ceti-like disk around the 30 Myr old A3-type star HD21997, a member of the Columba Association. This system was also detected in the CO(2-1) transition, and the reliable age determination makes it an even clearer example of an old gas-bearing disk than 49 Ceti. While the fractional luminosities of HD21997 and 49 Ceti are not particularly high, these objects seem to harbor the most extended disks within our sample. The double-peaked profiles of HD21997 were reproduced by a Keplerian disk model combined with the LIME radiative transfer code. Based on their similarities, 49 Ceti and HD21997 may be the first representatives of a so far undefined new class of relatively old (≳8 Myr), gaseous dust disks. From our results, neither primordial origin nor steady secondary production from icy planetesimals can unequivocally explain the presence of CO gas in the disk of HD21997.

  5. The Dynamics of Truncated Black Hole Accretion Disks. I. Viscous Hydrodynamic Case

    NASA Astrophysics Data System (ADS)

    Hogg, J. Drew; Reynolds, Christopher S.

    2017-07-01

    Truncated accretion disks are commonly invoked to explain the spectro-temporal variability in accreting black holes in both small systems, i.e., state transitions in galactic black hole binaries (GBHBs), and large systems, i.e., low-luminosity active galactic nuclei (LLAGNs). In the canonical truncated disk model of moderately low accretion rate systems, gas in the inner region of the accretion disk occupies a hot, radiatively inefficient phase, which leads to a geometrically thick disk, while the gas in the outer region occupies a cooler, radiatively efficient phase that resides in the standard geometrically thin disk. Observationally, there is strong empirical evidence to support this phenomenological model, but a detailed understanding of the dynamics of truncated disks is lacking. We present a well-resolved viscous, hydrodynamic simulation that uses an ad hoc cooling prescription to drive a thermal instability and, hence, produce the first sustained truncated accretion disk. With this simulation, we perform a study of the dynamics, angular momentum transport, and energetics of a truncated disk. We find that the time variability introduced by the quasi-periodic transition of gas from efficient cooling to inefficient cooling impacts the evolution of the simulated disk. A consequence of the thermal instability is that an outflow is launched from the hot/cold gas interface, which drives large, sub-Keplerian convective cells into the disk atmosphere. The convective cells introduce a viscous θ - ϕ stress that is less than the generic r - ϕ viscous stress component, but greatly influences the evolution of the disk. In the truncated disk, we find that the bulk of the accreted gas is in the hot phase.

  6. The Dynamics of Truncated Black Hole Accretion Disks. I. Viscous Hydrodynamic Case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogg, J. Drew; Reynolds, Christopher S.

    Truncated accretion disks are commonly invoked to explain the spectro-temporal variability in accreting black holes in both small systems, i.e., state transitions in galactic black hole binaries (GBHBs), and large systems, i.e., low-luminosity active galactic nuclei (LLAGNs). In the canonical truncated disk model of moderately low accretion rate systems, gas in the inner region of the accretion disk occupies a hot, radiatively inefficient phase, which leads to a geometrically thick disk, while the gas in the outer region occupies a cooler, radiatively efficient phase that resides in the standard geometrically thin disk. Observationally, there is strong empirical evidence to support this phenomenological model, but a detailed understanding of the dynamics of truncated disks is lacking. We present a well-resolved viscous, hydrodynamic simulation that uses an ad hoc cooling prescription to drive a thermal instability and, hence, produce the first sustained truncated accretion disk. With this simulation, we perform a study of the dynamics, angular momentum transport, and energetics of a truncated disk. We find that the time variability introduced by the quasi-periodic transition of gas from efficient cooling to inefficient cooling impacts the evolution of the simulated disk. A consequence of the thermal instability is that an outflow is launched from the hot/cold gas interface, which drives large, sub-Keplerian convective cells into the disk atmosphere. The convective cells introduce a viscous θ − ϕ stress that is less than the generic r − ϕ viscous stress component, but greatly influences the evolution of the disk. In the truncated disk, we find that the bulk of the accreted gas is in the hot phase.

  7. Coronagraphic Imaging of Debris Disks from a High Altitude Balloon Platform

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen; Traub, Wesley; Bryden, Geoffrey; Brugarolas, Paul; Chen, Pin; Guyon, Olivier; Hillenbrand, Lynne; Kasdin, Jeremy; Krist, John; Macintosh, Bruce; et al.

    2012-01-01

    Debris disks around nearby stars are tracers of the planet formation process, and they are a key element of our understanding of the formation and evolution of extrasolar planetary systems. With multi-color images of a significant number of disks, we can probe important questions: can we learn about planetary system evolution; what materials are the disks made of; and can they reveal the presence of planets? Most disks are known to exist only through their infrared flux excesses as measured by the Spitzer Space Telescope, and through images measured by Herschel. The brightest, most extended disks have been imaged with HST, and a few, such as Fomalhaut, can be observed using ground-based telescopes. But the number of good images is still very small, and there are none of disks with densities as low as the disk associated with the asteroid belt and Edgeworth-Kuiper belt in our own Solar System. Direct imaging of disks is a major observational challenge, demanding high angular resolution and extremely high dynamic range close to the parent star. The ultimate experiment requires a space-based platform, but demonstrating much of the needed technology, mitigating the technical risks of a space-based coronagraph, and performing valuable measurements of circumstellar debris disks, can be done from a high-altitude balloon platform. In this paper we present a balloon-borne telescope experiment based on the Zodiac II design that would undertake compelling studies of a sample of debris disks.

  8. Coronagraphic Imaging of Debris Disks from a High Altitude Balloon Platform

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen; Traub, Wesley; Bryden, Geoffrey; Brugarolas, Paul; Chen, Pin; Guyon, Olivier; Hillenbrand, Lynne; Krist, John; Macintosh, Bruce; Mawet, Dimitri; et al.

    2012-01-01

    Debris disks around nearby stars are tracers of the planet formation process, and they are a key element of our understanding of the formation and evolution of extrasolar planetary systems. With multi-color images of a significant number of disks, we can probe important questions: can we learn about planetary system evolution; what materials are the disks made of; and can they reveal the presence of planets? Most disks are known to exist only through their infrared flux excesses as measured by the Spitzer Space Telescope, and through images measured by Herschel. The brightest, most extended disks have been imaged with HST, and a few, such as Fomalhaut, can be observed using ground-based telescopes. But the number of good images is still very small, and there are none of disks with densities as low as the disk associated with the asteroid belt and Edgeworth-Kuiper belt in our own Solar System. Direct imaging of disks is a major observational challenge, demanding high angular resolution and extremely high dynamic range close to the parent star. The ultimate experiment requires a space-based platform, but demonstrating much of the needed technology, mitigating the technical risks of a space-based coronagraph, and performing valuable measurements of circumstellar debris disks, can be done from a high-altitude balloon platform. In this paper we present a balloon-borne telescope concept based on the Zodiac II design that could undertake compelling studies of a sample of debris disks.

  9. Finite Element Analysis of Flexural Vibrations in Hard Disk Drive Spindle Systems

    NASA Astrophysics Data System (ADS)

    LIM, SEUNGCHUL

    2000-06-01

    This paper is concerned with the flexural vibration analysis of the hard disk drive (HDD) spindle system by means of the finite element method. In contrast to previous research, every system component is analytically modelled here, taking into account its structural flexibility and the centrifugal effect, particularly on the disk. To prove the effectiveness and accuracy of the formulated models, commercial HDD systems with two and three identical disks are selected as examples. Their major natural modes are computed with only a small number of element meshes as the shaft rotational speed is varied, and subsequently compared with existing numerical results obtained using other methods and with newly acquired experimental ones. Based on this series of studies, the proposed method can be concluded to be a very promising tool for the design of HDDs and various other high-performance computer disk drives, such as floppy disk drives, CD-ROM drives, and their variations having spindle mechanisms similar to those of HDDs.

  10. The influence of disk's flexibility on coupling vibration of shaft disk blades systems

    NASA Astrophysics Data System (ADS)

    Yang, Chia-Hao; Huang, Shyh-Chin

    2007-03-01

    The coupling vibrations among shaft torsion, disk transverse motion, and blade bending in a shaft-disk-blades unit are investigated. The equations of motion for the shaft-disk-blades unit are first derived from the energy approach in conjunction with the assumed-modes method. The effects of disk flexibility, blade stagger angle, and rotational speed upon the natural frequencies and mode shapes are particularly studied. Previous studies have shown that there are four types of coupling modes in such a unit: shaft-blade (SB), shaft-disk-blades (SDB), disk-blades (DB), and blade-blade (BB). The present research focuses on the influence of disk flexibility on the coupling behavior and discovers that the disk's flexibility strongly affects mode bifurcation and the transition of modes. For a slightly flexible disk, the BB modes bifurcate into BB and DB modes. As the disk becomes more flexible, SB modes shift into SDB modes; with still greater flexibility, additional disk-predominant modes are generated and DB modes appear before the SDB mode. Examination of the stagger angle β reveals two extreme cases: at β=0° the shaft and blades couple but not the disk, and at β=90° the disk and blades couple but not the shaft. In between, coupling exists among all three components. Increasing β may raise or lower the SB mode frequencies, depending on whether the disk's or the shaft's first mode is more rigid. The natural frequencies of DB modes usually decrease with increasing β. Rotation effects show that bifurcation, veering, and merging phenomena occur due to disk flexibility. Disk flexibility is also observed to induce more critical speeds in SDB systems.

  11. The Dynamics and Implications of Gap Clearing via Planets in Planetesimal (Debris) Disks

    NASA Astrophysics Data System (ADS)

    Morrison, Sarah Jane

    Exoplanets and debris disks are examples of solar systems other than our own. As the dusty reservoirs of colliding planetesimals, debris disks provide indicators of planetary system evolution on orbital distance scales beyond those probed by the most prolific exoplanet detection methods, and on timescales of ~10 Myr to 10 Gyr. The Solar System possesses both planets and small bodies, and through studying the gravitational interactions between both, we gain insight into the Solar System's past. As we enter the era of resolved observations of debris disks residing around other stars, I add to our theoretical understanding of the dynamical interactions between debris, planets, and combinations thereof. I quantify how single planets clear material in their vicinity and how long this process takes for the entire planetary mass regime. I use these relationships to assess the lowest-mass planet that could clear a gap in observed debris disks over the system's lifetime. In the distant outer reaches of gaps in young debris systems, this minimum planet mass can exceed Neptune's. To complement the discoveries of wide-orbit, massive exoplanets by direct imaging surveys, I assess the dynamical stability of high-mass multi-planet systems to estimate how many high-mass planets could be packed into young, gapped debris disks. I compare these expectations to the planet detection rates of direct imaging surveys and find that high-mass planets are not the primary culprits for forming gaps in young debris disk systems. As an alternative model for forming gaps in planetesimal disks with planets, I assess the efficacy of creating gaps with divergently migrating pairs of planets. I find that migrating planets could produce observed gaps and elude detection. Moreover, the inferred planet masses when neglecting migration for such gaps could be expected to be observable by direct imaging surveys for young, nearby systems. Wide gaps in young systems would likely still require more than two planets even with planetesimal-driven migration. These efforts begin to probe the types of potential planets carving gaps in disks of different evolutionary stages and at wide orbit separations on scales similar to our outer Solar System.
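
    A useful back-of-the-envelope anchor for "how much a single planet clears" is Wisdom's (1980) resonance-overlap scaling for the chaotic zone around a planet's orbit. The thesis derives mass- and time-dependent refinements of this picture; the sketch below is only the classic scaling:

    ```python
    def chaotic_zone_half_width(a_p, m_planet, m_star):
        """Wisdom (1980) overlap criterion: delta_a ~ 1.3 mu^(2/7) a_p,
        with mu the planet-to-star mass ratio."""
        mu = m_planet / m_star
        return 1.3 * mu ** (2.0 / 7.0) * a_p

    # A Neptune-mass planet (~5.1e-5 M_sun) at 30 AU around a solar-mass star
    print(chaotic_zone_half_width(30.0, 5.1e-5, 1.0))  # ~2.3 AU
    ```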

  12. Self-gravity, Resonances, and Orbital Diffusion in Stellar Disks

    NASA Astrophysics Data System (ADS)

    Fouvry, Jean-Baptiste; Binney, James; Pichon, Christophe

    2015-06-01

    Fluctuations in a stellar system's gravitational field cause the orbits of stars to evolve. The resulting evolution of the system can be computed with the orbit-averaged Fokker-Planck equation once the diffusion tensor is known. We present the formalism that enables one to compute the diffusion tensor from a given source of noise in the gravitational field when the system's dynamical response to that noise is included. In the case of a cool stellar disk we are able to reduce the computation of the diffusion tensor to a one-dimensional integral. We implement this formula for a tapered Mestel disk that is exposed to shot noise and find that we are able to explain analytically the principal features of a numerical simulation of such a disk. In particular the formation of narrow ridges of enhanced density in action space is recovered. As the disk's value of Toomre's Q is reduced and the disk becomes more responsive, there is a transition from a regime of heating in the inner regions of the disk through the inner Lindblad resonance to one of radial migration of near-circular orbits via the corotation resonance in the intermediate regions of the disk. The formalism developed here provides the ideal framework in which to study the long-term evolution of all kinds of stellar disks.
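
    The orbit-averaged Fokker-Planck equation referred to above takes, in action space, the standard anisotropic diffusion form (generic notation, not copied from the paper):

    ```latex
    \frac{\partial F}{\partial t}
      = \frac{\partial}{\partial \mathbf{J}} \cdot
        \left[ \mathsf{D}(\mathbf{J}) \, \frac{\partial F}{\partial \mathbf{J}} \right]
    ```

    Here F(J, t) is the distribution function in action space and D(J) is the diffusion tensor computed from the noise power, dressed by the disk's self-gravitating response; loosely speaking, the narrow ridges mentioned above reflect strongly anisotropic diffusion near resonances.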

  13. Studies of extra-solar OORT clouds and the Kuiper disk

    NASA Technical Reports Server (NTRS)

    Stern, S. Alan

    1993-01-01

    This is the second report for NAGW-3023, Studies of Extra-Solar Oort Clouds and the Kuiper Disk. We are conducting research designed to enhance our understanding of the evolution and detectability of comet clouds and disks. This area holds promise for also improving our understanding of outer solar system formation, the bombardment history of the planets, the transport of volatiles and organics from the outer solar system to the inner planets, and the ultimate fate of comet clouds around the Sun and other stars. According to 'standard' theory, both the Kuiper Disk and Oort Cloud are (at least in part) natural products of the planetary accumulation stage of solar system formation. One expects such assemblages to be a common attribute of other solar systems. Therefore, searches for comet disks and clouds orbiting other stars offer a new method for inferring the presence of planetary systems. Our three-year effort consists of two major efforts: (1) observational work to predict and search for the signatures of Oort Clouds and comet disks around other stars; and (2) modelling studies of the formation and evolution of the Kuiper Disk (KD) and similar assemblages that may reside around other stars, including Beta Pic. These efforts are referred to as Tasks 1 and 2, respectively.

  14. Studies of extra-solar Oort Clouds and the Kuiper Disk

    NASA Technical Reports Server (NTRS)

    Stern, Alan

    1995-01-01

    This is the September 1995 Semi-Annual report for Studies of Extra-Solar Oort Clouds and the Kuiper Disk. We are conducting research designed to enhance our understanding of the evolution and detectability of comet clouds and disks. This area holds promise for also improving our understanding of outer solar system formation, the bombardment history of the planets, the transport of volatiles and organics from the outer solar system to the inner planets, and the ultimate fate of comet clouds around the Sun and other stars. According to 'standard' theory, both the Kuiper Disk and the Oort Cloud are (at least in part) natural products of the planetary accumulation stage of solar system formation. One expects such assemblages to be a common attribute of other solar systems. Therefore, searches for comet disks and clouds orbiting other stars offer a new method for inferring the presence of planetary systems. This project consists of two major efforts: (1) observational work to predict and search for the signatures of Oort Clouds and comet disks around other stars; and (2) modelling studies of the formation and evolution of the Kuiper Disk (KD) and similar assemblages that may reside around other stars, including beta Pic. These efforts are referred to as Tasks 1 and 2, respectively.

  15. [Development and evaluation of the medical imaging distribution system with dynamic web application and clustering technology].

    PubMed

    Yokohama, Noriya; Tsuchimoto, Tadashi; Oishi, Masamichi; Itou, Katsuya

    2007-01-20

    It has been noted that the downtime of medical informatics systems is often long. Many systems encounter downtimes of hours or even days, which can have a critical effect on daily operations. Such systems remain especially weak in the areas of databases and medical imaging data. The schematic design shows the three-layer architecture of the system: application, database, and storage layers. The application layer uses the DICOM protocol (Digital Imaging and Communications in Medicine) and HTTP (Hypertext Transfer Protocol) with AJAX (Asynchronous JavaScript + XML). The database is designed to be decentralized in parallel using cluster technology. Consequently, restoration of the database can be done not only with ease but also with improved retrieval speed. In the storage layer, a network RAID (Redundant Array of Independent Disks) system makes it possible to construct exabyte-scale parallel file systems that exploit distributed storage. Development and evaluation of the test-bed has been successful for medical information data backup and recovery in a network environment. This paper presents a schematic design of the new medical informatics system, covering recovery and the dynamic Web application for medical imaging distribution using AJAX.

  16. Managing People's Data

    NASA Technical Reports Server (NTRS)

    Le, Diana; Cooper, David M. (Technical Monitor)

    1994-01-01

    Just imagine a mass storage system that consists of a machine with 2 CPUs, 1 gigabyte (GB) of memory, 400 GB of disk space, 16,800 cartridge tapes in the automated tape silos, 88,000 tapes located in the vault, and the software to manage the system. This system is designed to be a data repository; it will always have disk space to store all the incoming data. Currently 9.14 GB of new data per day enters the system, with this rate doubling each year. To assure there is always disk space available for new data, the system has to move data from the expensive disk to a much less expensive medium such as 3480 cartridge tapes. Once the data is archived to tape, it should be able to move back to disk when someone wants to access it, and the data movement should be transparent to the user. Now imagine all the tasks that a system administrator must perform to keep this system running 24 hours a day, 7 days a week. Since the filesystem maintains the illusion of unlimited disk space, data that comes into the system must be moved to tape in an efficient manner. This paper will describe the mass storage system running at the Numerical Aerodynamic Simulation (NAS) facility at NASA Ames Research Center in both its software and hardware aspects, and then describe all of the tasks the system administrator has to perform on this system.
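
    The migration policy sketched above, keep disk free by pushing large files that have not been touched recently out to tape, can be illustrated with a toy candidate selector (thresholds hypothetical; in the real system the FSMS migrates and recalls files transparently):

    ```python
    import os
    import time

    def migration_candidates(root, min_idle_days=30, min_bytes=1 << 20):
        """Walk a file tree and list large files not read recently,
        biggest first -- candidates to move from disk to tape."""
        cutoff = time.time() - min_idle_days * 86400
        out = []
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                if st.st_size >= min_bytes and st.st_atime < cutoff:
                    out.append((path, st.st_size))
        return sorted(out, key=lambda item: -item[1])
    ```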

  17. Shifting of the resonance location for planets embedded in circumstellar disks

    NASA Astrophysics Data System (ADS)

    Marzari, F.

    2018-03-01

    Context. In the early evolution of a planetary system, a pair of planets may be captured in a mean motion resonance while still embedded in their nesting circumstellar disk. Aims: The goal is to estimate the direction and amount of the shift in the semimajor axis of the resonance location due to the disk gravity, as a function of the gas density and the mass of the planets. The stability of the resonance lock when the disk dissipates is also tested. Methods: The orbital evolution of a large number of systems is numerically integrated within a three-body problem in which the disk potential is computed as a series expansion. This is a good approximation, at least over a limited amount of time. Results: Two different resonances are studied: the 2:1 and the 3:2. In both cases the shift is inwards, even if by a different amount, when the planets are massive and carve a gap in the disk. For super-Earths, the shift is instead outwards. Different disk densities, Σ, are considered and the resonance shift depends almost linearly on Σ. The gas dissipation leads to destabilization of a significant number of resonant systems, in particular if it is fast. Conclusions: The presence of a massive circumstellar disk may significantly affect the resonant behavior of a pair of planets by shifting the resonant location and by decreasing the size of the stability region. The disk dissipation may explain some systems found close to a resonance but not locked in it.
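
    For orientation, the unperturbed (disk-free) resonance locations follow directly from Kepler's third law; the paper's point is that disk gravity displaces the actual resonance from these nominal values. A quick sketch (the planet at 5.2 AU is just an example):

    ```python
    def nominal_resonance_location(a_p, p, q):
        """Keplerian location of the interior p:q mean-motion resonance
        with a planet at semimajor axis a_p: a = a_p * (q/p)**(2/3)."""
        return a_p * (q / p) ** (2.0 / 3.0)

    # Interior 2:1 and 3:2 resonances of a planet at 5.2 AU
    print(nominal_resonance_location(5.2, 2, 1))  # ~3.28 AU
    print(nominal_resonance_location(5.2, 3, 2))  # ~3.97 AU
    ```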

  18. The Mass Evolution of Protostellar Disks and Envelopes in the Perseus Molecular Cloud

    NASA Astrophysics Data System (ADS)

    Andersen, Bridget; Stephens, Ian; Dunham, Michael; Pokhrel, Riwaj; Jørgensen, Jes; Frimann, Søren

    2018-01-01

    In the standard picture for low-mass star formation, a dense molecular cloud undergoes gravitational collapse to form a protostellar system consisting of a new central star, a circumstellar disk, and a surrounding envelope of remaining material. The mass distribution of the system evolves as matter accretes from the large-scale envelope through the disk and onto the protostar. While this general picture is supported by simulations and indirect observational measurements, the specific timescales related to disk growth and envelope dissipation remain poorly constrained. We present a rigorous test of a method introduced by Jørgensen et al. (2009) to obtain observational mass measurements of disks and envelopes around embedded protostars from unresolved (resolution of ~1000 AU) observations. Using data from the recent Mass Assembly of Stellar Systems and their Evolution with the SMA (MASSES) survey, we derive disk and envelope mass estimates for 59 protostellar systems in the Perseus molecular cloud. We compare our results to independent disk mass measurements from the VLA Nascent Disk and Multiplicity (VANDAM) survey and find a strong linear correlation. Then, leveraging the size and uniformity of our sample, we find no significant trend in protostellar mass distribution as a function of age, as approximated from bolometric temperatures. These results may indicate that the disk mass of a protostar is set near the onset of the Class 0 protostellar stage and remains roughly constant throughout the Class I protostellar stage.
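
    Mass estimates of this kind rest on optically thin dust continuum emission, M = F_nu d^2 / (kappa_nu B_nu(T)). A minimal sketch with illustrative constants (the survey's adopted opacity law, temperature, and calibration differ in detail):

    ```python
    import numpy as np

    H, KB, C = 6.626e-27, 1.381e-16, 2.998e10   # cgs: erg s, erg/K, cm/s
    JY, PC, MSUN = 1.0e-23, 3.086e18, 1.989e33  # unit conversions

    def planck_nu(nu_hz, t_k):
        return 2.0 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (KB * t_k))

    def dust_mass_msun(flux_jy, d_pc, nu_hz=2.25e11, t_dust=30.0, kappa=0.899):
        """Optically thin dust mass; kappa in cm^2 per gram of dust
        (an often-used 1.3 mm value, illustrative only)."""
        d_cm = d_pc * PC
        return flux_jy * JY * d_cm**2 / (kappa * planck_nu(nu_hz, t_dust)) / MSUN

    # 0.1 Jy at 1.3 mm for a Perseus-like distance of ~235 pc
    print(dust_mass_msun(0.1, 235.0))  # ~8e-4 M_sun of dust
    ```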

  19. Protoplanetary disk formation and evolution models: DM Tau and GM Aur

    NASA Astrophysics Data System (ADS)

    Hueso, R.; Guillot, T.

    2002-09-01

    We study the formation and evolution of protoplanetary disks using an axisymmetric turbulent disk model. We compare model results with observational parameters derived for the DM Tau and GM Aur systems. These are relatively old T Tauri stars with large and massive protoplanetary disks. Early disk formation is studied in the standard scenario of slowly rotating isothermal collapsing spheres and is strongly dependent on the initial angular momentum and the collapse accretion rate. The viscous evolution of the disk is integrated in time using the classical Alpha prescription of turbulence. We follow the temporal evolution of the disks until their characteristics fit the observed characteristics of DM Tau and GM Aur. We therefore obtain the set of model parameters that are able to explain the present state of these disks. We also study the disk evolution under the Beta parameterization of turbulence, recently proposed for sheared flows on protoplanetary disks. Both parameterizations can explain the present state of both DM Tau and GM Aur. We infer a value of Alpha between 5×10^-3 and 0.02 for DM Tau, and one order of magnitude smaller for GM Aur. Values of the Beta parameter are in accordance with theoretical predictions of Beta around 2×10^-5, but with a larger dispersion in the other model parameters, which makes us favor the Alpha parameterization of turbulence. Implications for planetary system development in these systems are presented. In particular, GM Aur is a massive and slowly evolving disk where conditions are very favorable for planetesimal growth. The large value of the present disk mass and the relatively small observed accretion rate of this system may also be indicative of the presence of an inner gas giant planet. Acknowledgements: This work has been supported by Programme Nationale de Planetologie. R. Hueso acknowledges a post-doctoral fellowship from Gobierno Vasco.

  20. Disk-loss and disk-renewal phases in classical Be stars. II. Contrasting with stable and variable disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draper, Zachary H.; Wisniewski, John P.; Bjorkman, Karen S.

    2014-05-10

    Recent observational and theoretical studies of classical Be stars have established the utility of polarization color diagrams (PCDs) in helping to constrain the time-dependent mass decretion rates of these systems. We expand on our pilot observational study of this phenomenon, and report the detailed analysis of a long-term (1989-2004) spectropolarimetric survey of nine additional classical Be stars, including systems exhibiting evidence of partial disk-loss/disk-growth episodes as well as systems exhibiting long-term stable disks. After carefully characterizing and removing the interstellar polarization along the line of sight to each of these targets, we analyze their intrinsic polarization behavior. We find that many steady-state Be disks pause at the top of the PCD, as predicted by theory. We also observe sharp declines in the Balmer jump polarization for later spectral type, near edge-on steady-state disks, again as recently predicted by theory, likely caused when the base density of the disk is very high, and the outer region of the edge-on disk starts to self absorb a significant number of Balmer jump photons. The intrinsic V-band polarization and polarization position angle of γ Cas exhibits variations that seem to phase with the orbital period of a known one-armed density structure in this disk, similar to the theoretical predictions of Halonen and Jones. We also observe stochastic jumps in the intrinsic polarization across the Balmer jump of several known Be+sdO systems, and speculate that the thermal inflation of part of the outer region of these disks could be responsible for producing this observational phenomenon. Finally, we estimate the base densities of this sample of stars to be between ≈8 × 10^-11 and ≈4 × 10^-12 g cm^-3 during quasi-steady-state periods, given their maximum observed polarization.

  1. Tracker: Image-Processing and Object-Tracking System Developed

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Theodore W.

    1999-01-01

    Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, thus every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. This software automates the analysis of the flame or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR). This feature is useful in extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their office for desktop analysis of digital image sequences, which can be digitized by the Tracking System or some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.
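
    The simplest of the tracking algorithms mentioned, conventional thresholding, amounts to taking the centroid of above-threshold pixels in each frame. A minimal sketch of that one mode (this is not Tracker's actual code):

    ```python
    import numpy as np

    def track_centroid(frame, threshold):
        """Return the (x, y) centroid of pixels brighter than threshold,
        or None if nothing in the frame exceeds it."""
        ys, xs = np.nonzero(frame > threshold)
        if xs.size == 0:
            return None
        return xs.mean(), ys.mean()

    # Position-versus-frame track over an image sequence:
    # track = [track_centroid(frame, 128) for frame in frames]
    ```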

  2. EARTH, MOON, SUN, AND CV ACCRETION DISKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montgomery, M. M.

    2009-11-01

    Net tidal torque by the secondary on a misaligned accretion disk, like the net tidal torque by the Moon and the Sun on the equatorial bulge of the spinning and tilted Earth, is suggested by others to be a source of retrograde precession in non-magnetic, accreting cataclysmic variable (CV) dwarf novae (DN) systems that show negative superhumps in their light curves. We investigate this idea in this work. We generate a generic theoretical expression for retrograde precession in spinning disks that are misaligned with the orbital plane. Our generic theoretical expression matches that which describes the retrograde precession of Earth's equinoxes. By making appropriate assumptions, we reduce our generic theoretical expression to those generated by others, or to those used by others, to describe retrograde precession in protostellar, protoplanetary, X-ray binary, non-magnetic CV DN, quasar, and black hole systems. We find that spinning, tilted CV DN systems cannot be described by a precessing ring or by a precessing rigid disk. We find that differential rotation and effects on the disk by the accretion stream must be addressed. Our analysis indicates that the best description of a retrogradely precessing spinning, tilted, CV DN accretion disk is a differentially rotating, tilted disk with an attached rotating, tilted ring located near the innermost disk annuli. In agreement with the observations and numerical simulations by others, we find that our numerically simulated CV DN accretion disks retrogradely precess as a unit. Our final, reduced expression for retrograde precession agrees well with our numerical simulation results and with selected observational systems that seem to have main-sequence secondaries. Our results suggest that a major source of retrograde precession is tidal torques like those exerted by the Moon and the Sun on the Earth. In addition, these tidal torques should be common to a variety of systems where one member is spinning and tilted, regardless of whether accretion disks are present or not. Our results suggest that the accretion disk's geometric shape directly affects the disk's precession rate.

  3. NSSDC activities with 12-inch optical disk drives

    NASA Technical Reports Server (NTRS)

    Lowrey, Barbara E.; Lopez-Swafford, Brian

    1986-01-01

    The development status of optical-disk data transfer and storage technology at the National Space Science Data Center (NSSDC) is surveyed. The aim of the R&D program is to facilitate the exchange of large volumes of data. Current efforts focus on a 12-inch 1-Gbyte write-once/read-many disk and a disk drive which interfaces with VAX/VMS computer systems. The history of disk development at NSSDC is traced; the results of integration and performance tests are summarized; the operating principles of the 12-inch system are explained and illustrated with diagrams; and the need for greater standardization is indicated.

  4. HERschel Observations of Edge-on Spirals (HEROES). II. Tilted-ring modelling of the atomic gas disks

    NASA Astrophysics Data System (ADS)

    Allaert, F.; Gentile, G.; Baes, M.; De Geyter, G.; Hughes, T. M.; Lewis, F.; Bianchi, S.; De Looze, I.; Fritz, J.; Holwerda, B. W.; Verstappen, J.; Viaene, S.

    2015-10-01

    Context. Edge-on galaxies can offer important insight into galaxy evolution because they are the only systems where the distribution of the different components can be studied both radially and vertically. The HEROES project was designed to investigate the interplay between the gas, dust, stars, and dark matter (DM) in a sample of 7 massive edge-on spiral galaxies. Aims: In this second HEROES paper, we present an analysis of the atomic gas content of 6 out of 7 galaxies in our sample. The remaining galaxy was recently analysed according to the same strategy. The primary aim of this work is to constrain the surface density distribution, the rotation curve, and the geometry of the gas disks in a homogeneous way. In addition we identify peculiar features and signs of recent interactions. Methods: We have constructed detailed tilted-ring models of the atomic gas disks based on new GMRT 21-cm observations of NGC 973 and UGC 4277 and re-reduced archival H i data of NGC 5907, NGC 5529, IC 2531, and NGC 4217. Potential degeneracies between different models were resolved by requiring good agreement with the data in various representations of the data cubes. Results: From our modelling we find that all but one galaxy are warped along the major axis. In addition, we identify warps along the line of sight in three galaxies. A flaring gas layer is required to reproduce the data for only one galaxy, but (moderate) flares cannot be ruled out for the other galaxies either. A coplanar ring-like structure is detected outside the main disk of NGC 4217, which we suggest could be the remnant of a recent minor merger event. We also find evidence of a radial inflow of 15 ± 5 km s^-1 in the disk of NGC 5529, which might be related to the ongoing interaction with two nearby companions. For NGC 5907, the extended, asymmetric, and strongly warped outer regions of the H i disk also suggest a recent interaction. In contrast, the inner disks of these three galaxies (NGC 4217, NGC 5529, and NGC 5907) show regular behaviour and seem largely unaffected by the interactions. Our models further support earlier claims of prominent spiral arms in the disks of IC 2531 and NGC 5529. Finally, we detect a dwarf companion galaxy at a projected distance of 36 kpc from the centre of NGC 973. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. Appendices are available in electronic form at http://www.aanda.org. The H i cleaned data cubes as FITS files are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/582/A18

  5. AN ADA NAMELIST PACKAGE

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    The Ada Namelist Package, developed for the Ada programming language, enables a calling program to read and write FORTRAN-style namelist files. A namelist file consists of any number of assignment statements in any order. Features of the Ada Namelist Package are: the handling of any combination of user-defined types; the ability to read vectors, matrices, and slices of vectors and matrices; the handling of mismatches between variables in the namelist file and those in the programmed list of namelist variables; and the ability to avoid searching the entire input file for each variable. The principal user benefits of this software are the following: the ability to write namelist-readable files, the ability to detect most file errors in the initialization phase, a package organization that reduces the number of instantiated units to a few packages rather than many subprograms, a reduced number of restrictions, and an increased execution speed. The Ada Namelist Package reads data from an input file into variables declared within a user program. It then writes data from the user program to an output file, printer, or display. The input file contains a sequence of assignment statements in arbitrary order. The output is in namelist-readable form. There is a one-to-one correspondence between namelist I/O statements executed in the user program and variables read or written. Nevertheless, in the input file, mismatches are allowed between assignment statements in the file and the namelist read procedure statements in the user program. The Ada Namelist Package itself is non-generic. However, it has a group of nested generic packages following the non-generic opening portion. The opening portion declares a variety of user-accessible constants, variables, and subprograms. The subprograms include procedures for initializing namelists for reading and for reading and writing strings, as well as functions for analyzing the content of the current dataset and diagnosing errors. Two nested generic packages follow the opening portion. The first generic package contains procedures that read and write objects of scalar type. The second contains subprograms that read and write one- and two-dimensional arrays whose components are of scalar type and whose indices are of either of the two discrete types (integer or enumeration). Subprograms in the second package also read and write vector and matrix slices. The Ada Namelist ASCII text files are available on a 360K 5.25" floppy disk written on an IBM PC/AT running under the PC DOS operating system. The largest subprogram in the package requires 150K of memory. The package was developed using VAX Ada v. 1.5 under DEC VMS v. 4.5. It should be portable to any validated Ada compiler. The software was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
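
    To make the file format concrete, here is a minimal reader for the kind of FORTRAN-style namelist described above, sketched in Python rather than Ada since the package's Ada source is not reproduced here; the &group/"/" delimiters and one-assignment-per-line layout are assumptions about a typical namelist:

        def read_namelist(path):
            """Parse a simple FORTRAN-style namelist file into a dict.

            Handles files of the form
                &inputs
                  nsteps = 100
                  dt     = 0.25
                /
            Values are kept as strings; callers convert as needed.
            """
            values = {}
            with open(path) as fh:
                for line in fh:
                    line = line.strip().rstrip(",")
                    if not line or line.startswith("&") or line == "/":
                        continue                  # skip group delimiters
                    name, sep, value = line.partition("=")
                    if sep:                       # keep only real assignments
                        values[name.strip()] = value.strip()
            return values

    As in the Ada package, the order of assignments does not matter, and variables missing from the file simply never appear in the result.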

  6. Teleradiology system using a magneto-optical disk and N-ISDN

    NASA Astrophysics Data System (ADS)

    Ban, Hideyuki; Osaki, Takanobu; Matsuo, Hitoshi; Okabe, Akifumi; Nakajima, Kotaro; Ohyama, Nagaaki

    1997-05-01

    We have developed a new teleradiology system that provides fast response and secure data transmission using N-ISDN communication and an ISC magneto-optical disk that is specialized for medical use. The system consists of PC-based terminals connected to an N-ISDN line and the ISC disk. The system uses two types of data: the control data needed for various operational functions and the image data. For quick response, only the much smaller quantity of control data is sent through the N-ISDN during the actual conference. The bulk of the image data is sent to each site on duplicate ISC disks before the conference. The displaying and processing of images are executed using the local data on the ISC disk. We used this system for a trial teleconsultation between two hospitals. The response time needed to display a 2-Mbyte image was 4 seconds. The telepointer could be controlled with no noticeable delay by sending only the pointer's coordinates. Also, since the patient images were exchanged via the ISC disks only, unauthorized access to the patient images through the N-ISDN was prevented. Thus, this trial provides a preliminary demonstration of the usefulness of this system for clinical use.
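
    The design choice of pre-distributing bulk image data on disk and sending only control data over the line is easy to motivate with back-of-the-envelope numbers. A sketch follows; the 128 kbit/s rate is an assumption (basic-rate N-ISDN with both B channels bonded), not a figure from the paper:

        # Rough transfer-time comparison for one 2-Mbyte radiology image.
        image_bytes = 2 * 1024 * 1024        # 2-Mbyte image from the trial
        isdn_bps = 128_000                   # assumed 2 x 64 kbit/s B channels

        isdn_seconds = image_bytes * 8 / isdn_bps
        print(f"over N-ISDN: ~{isdn_seconds:.0f} s")   # roughly 131 s

        # versus the ~4 s the trial observed when the same image was read
        # from the locally pre-distributed ISC magneto-optical disk.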

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodigas, Timothy J.; Hinz, Philip M.; Malhotra, Renu, E-mail: rodigas@as.arizona.edu

    Planets can affect debris disk structure by creating gaps, sharp edges, warps, and other potentially observable signatures. However, there is currently no simple way for observers to deduce a disk-shepherding planet's properties from the observed features of the disk. Here we present a single equation that relates a shepherding planet's maximum mass to the debris ring's observed width in scattered light, along with a procedure to estimate the planet's eccentricity and minimum semimajor axis. We accomplish this by performing dynamical N-body simulations of model systems containing a star, a single planet, and an exterior disk of parent bodies and dust grains to determine the resulting debris disk properties over a wide range of input parameters. We find that the relationship between planet mass and debris disk width is linear, with increasing planet mass producing broader debris rings. We apply our methods to five imaged debris rings to constrain the putative planet masses and orbits in each system. Observers can use our empirically derived equation as a guide for future direct imaging searches for planets in debris disk systems. In the fortuitous case of an imaged planet orbiting interior to an imaged disk, the planet's maximum mass can be estimated independent of atmospheric models.
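
    The abstract reports a linear relation between ring width and maximum planet mass; a sketch of how an observer might apply such a relation is below. The coefficients are placeholders, not the fitted values from the paper, which must be taken from the publication itself:

        def max_planet_mass(ring_width_au, slope, intercept):
            """Maximum shepherding-planet mass implied by an observed
            scattered-light ring width, assuming the linear form the
            abstract reports.  slope and intercept are placeholders
            standing in for the paper's fitted coefficients.
            """
            return slope * ring_width_au + intercept

        # Hypothetical usage with placeholder coefficients:
        print(max_planet_mass(ring_width_au=15.0, slope=0.2, intercept=0.0))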

  8. Dynamical simulations of the interacting galaxies in the NGC 520/UGC 957 system

    NASA Technical Reports Server (NTRS)

    Stanford, S. A.; Balcells, Marc

    1991-01-01

    Numerical simulations of the interacting galaxies in the NGC 520/UGC 957 system are presented. Two sets of models were produced to investigate the postulated three-galaxy system of two colliding disk galaxies within NGC 520 and the dwarf galaxy UGC 957. The first set of models simulated a dwarf perturbing a single disk galaxy, which tested the possibility that NGC 520 contains only one galaxy disturbed by the passage of UGC 957. The resulting morphology of the perturbed single disk in the simulation fails to reproduce the observed tidal tails and northwest mass condensation of NGC 520. A second set of models simulated two colliding disks, which tested the hypothesis that NGC 520 itself contains two galaxies in a strong collision and UGC 957 is unimportant to the interaction. These disk-disk models produced a good match to the morphology of the present NGC 520. It is concluded that (1) NGC 520 contains two colliding disk galaxies which have produced the brighter southern half of the long tidal tail and (2) UGC 957, which may originally have been a satellite of one of the disk galaxies, formed the diffuse northern tail as it orbited NGC 520.

  9. Studies of Disks Around the Sun and Other Stars

    NASA Technical Reports Server (NTRS)

    Stern, S. Alan (Principal Investigator)

    1996-01-01

    We are conducting research designed to enhance our understanding of the evolution and detectability of comet clouds and disks. This area holds promise for also improving our understanding of outer solar system formation, the bombardment history of the planets, the transport of volatiles and organics from the outer solar system to the inner planets, and the ultimate fate of comet clouds around the Sun and other stars. According to 'standard' theory, both the Kuiper Disk and the Oort Cloud are (at least in part) natural products of the planetary accumulation stage of solar system formation. One expects such assemblages to be a common attribute of other solar systems. Therefore, searches for comet disks and clouds orbiting other stars offer a new method for inferring the presence of planetary systems. This two-element program consists of modeling collisions in the Kuiper Disk and the dust disks around other stars. The modeling effort focuses on moving from our simple, first-generation, Kuiper disk collision rate model, to a time-dependent, second-generation model that incorporates physical collisions, velocity evolution, dynamical erosion, and various dust transport mechanisms. This second generation model will be used to study the evolution of surface mass density and the object-size spectrum in the disk. The observational effort focuses on obtaining submm/mm-wave flux density measurements of 25-30 IR excess stars in order to better constrain the masses, spatial extents, and structure of their dust ensembles.

  10. The thermomagnetic instability in hot viscous plasmas

    NASA Astrophysics Data System (ADS)

    Haghani, A.; Khosravi, A.; Khesali, A.

    2017-10-01

    Magnetorotational instability (MRI) cannot operate effectively in accretion disks with strong magnetic fields. Studies have indicated a new type of instability, called thermomagnetic instability (TMI), in systems where the Nernst coefficient and the temperature gradient are taken into account. The Nernst coefficient appears when the Boltzmann equation is expanded in ω_{Be} (the cyclotron frequency). However, the growth rate of this instability was two orders of magnitude below the MRI growth rate (Ω_k), so it could not act in the same way as the MRI. Therefore, a higher growth rate of the unstable modes was needed. In this paper, a rotating, viscous, hot plasma with a strong magnetic field is studied: first with a constant alpha viscosity, and then with a temperature-sensitive viscosity. The results show that a temperature-sensitive viscosity can increase the growth rate of the TMI modes significantly, making them capable of acting similarly to the MRI.

  11. Integrating Micro-computers with a Centralized DBMS: ORACLE, SEED AND INGRES

    NASA Technical Reports Server (NTRS)

    Hoerger, J.

    1984-01-01

    Users of ADABAS, a relational-like data base management system with its own data base programming language (NATURAL), are acquiring microcomputers with hopes of solving their individual word processing, office automation, decision support, and simple data processing problems. As processor speeds, memory sizes, and disk storage capacities increase, individual departments begin to maintain "their own" data base on "their own" micro-computer. This situation can adversely affect several of the primary goals set for implementing a centralized DBMS. In order to avoid this potential problem, these micro-computers must be integrated with the centralized DBMS. An easy to use and flexible means for transferring logical data base files between the central data base machine and micro-computers must be provided. Some of the problems encountered in the effort to accomplish this integration, and possible solutions, are discussed.

  12. Can eccentric debris disks be long-lived?. A first numerical investigation and application to ζ2 Reticuli

    NASA Astrophysics Data System (ADS)

    Faramaz, V.; Beust, H.; Thébault, P.; Augereau, J.-C.; Bonsor, A.; del Burgo, C.; Ertel, S.; Marshall, J. P.; Milli, J.; Montesinos, B.; Mora, A.; Bryden, G.; Danchi, W.; Eiroa, C.; White, G. J.; Wolf, S.

    2014-03-01

    Context. Imaging of debris disks has found evidence for both eccentric and offset disks. One hypothesis is that they provide evidence for massive perturbers, for example, planets or binary companions, which sculpt the observed structures. One such disk was recently observed in the far-IR by the Herschel Space Observatory around ζ2 Reticuli. In contrast with previously reported systems, the disk is significantly eccentric, and the system is several Gyr old. Aims: We aim to investigate the long-term evolution of eccentric structures in debris disks caused by a perturber on an eccentric orbit around the star. We hypothesise that the observed eccentric disk around ζ2 Reticuli might be evidence of such a scenario. If so, we are able to constrain the mass and orbit of a potential perturber, either a giant planet or a binary companion. Methods: Analytical techniques were used to predict the effects of a perturber on a debris disk. Numerical N-body simulations were used to verify these results and further investigate the observable structures that may be produced by eccentric perturbers. The long-term evolution of the disk geometry was examined, with particular application to the ζ2 Reticuli system. In addition, synthetic images of the disk were produced for direct comparison with Herschel observations. Results: We show that an eccentric companion can produce both the observed offsets and eccentric disks. These effects are not immediate, and we characterise the timescale required for the disk to develop to an eccentric state (and any spirals to vanish). For ζ2 Reticuli, we derive limits on the mass and orbit of the companion required to produce the observations. Synthetic images show that the pattern observed around ζ2 Reticuli can be produced by an eccentric disk seen close to edge-on, and allow us to bring additional constraints on the disk parameters of our model (disk flux and extent). Conclusions: We conclude that eccentric planets or stellar companions can induce long-lived eccentric structures in debris disks. Observations of such eccentric structures thus provide potential evidence of the presence of such a companion in a planetary system. We considered the specific example of ζ2 Reticuli, whose observed eccentric disk can be explained by a distant companion (at tens of AU) on an eccentric orbit (e_p ≳ 0.3). Appendices are available in electronic form at http://www.aanda.org. Herschel Space Observatory is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  13. Optical Disk Technology.

    ERIC Educational Resources Information Center

    Abbott, George L.; And Others

    1987-01-01

    This special feature focuses on recent developments in optical disk technology. Nine articles discuss current trends, large scale image processing, data structures for optical disks, the use of computer simulators to create optical disks, videodisk use in training, interactive audio video systems, impacts on federal information policy, and…

  14. The Mercury System: Embedding Computation into Disk Drives

    DTIC Science & Technology

    2004-08-20

    enabling technologies to build extremely fast data search engines. We do this by moving the search closer to the data, and performing it in hardware... engine searches in parallel across a disk or disk surface. 2. System Parallelism: searching is off-loaded to search engines and the main processor can

  15. Educational use of World Wide Web pages on CD-ROM.

    PubMed

    Engel, Thomas P; Smith, Michael

    2002-01-01

    The World Wide Web is increasingly important for medical education. Internet-served pages may also be used from a local hard disk or CD-ROM without a network or server. This allows authors to reuse existing content and provide access to users without a network connection. CD-ROM offers several advantages over network delivery of Web pages for certain applications. However, creating Web pages for CD-ROM requires careful planning. Issues include file names, relative links, directory names, default pages, server-created content, image maps, other file types, and embedded programming. With care, it is possible to create server-based pages that can be copied directly to CD-ROM. In addition, Web pages on CD-ROM may reference Internet-served pages to provide the best features of both methods.
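
    One of the pitfalls named above, links that resolve on a server but break on a CD-ROM file tree, is mechanical to check for. A minimal scanner sketch follows (the flagged prefixes and the file-extension filter are assumptions for illustration, not part of the paper):

        import os
        import re

        # href/src values that start with a scheme or a root slash will not
        # resolve from a CD-ROM directory tree; relative links will.
        ABSOLUTE = re.compile(r'(?:href|src)\s*=\s*["\'](?:[a-z]+:)?/', re.I)

        def find_absolute_links(root):
            """Yield (path, line_no) for every absolute link under root."""
            for dirpath, _, files in os.walk(root):
                for name in files:
                    if not name.lower().endswith((".htm", ".html")):
                        continue
                    path = os.path.join(dirpath, name)
                    with open(path, errors="replace") as fh:
                        for n, line in enumerate(fh, 1):
                            if ABSOLUTE.search(line):
                                yield path, n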

  16. Software for Managing Parametric Studies

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian

    2003-01-01

    The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control-language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
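
    The experiment-object workflow described above can be sketched generically. This illustration is in Python rather than ILab's PERL, and the field names and script template are invented for the example rather than taken from ILab:

        import json
        import pathlib

        def save_experiment(exp, path):
            """Serialize the experiment (parameters, file paths) to disk."""
            pathlib.Path(path).write_text(json.dumps(exp, indent=2))

        def generate_scripts(exp, outdir):
            """Emit one shell script per parameter value."""
            out = pathlib.Path(outdir)
            out.mkdir(exist_ok=True)
            for i, value in enumerate(exp["param_values"]):
                script = (
                    "#!/bin/sh\n"
                    f"mkdir -p run{i} && cd run{i}\n"
                    f"cp ../{exp['input_file']} .\n"
                    f"{exp['solver']} --{exp['param_name']}={value}\n"
                )
                (out / f"run{i}.sh").write_text(script)

        # Hypothetical usage: three runs of an assumed flow solver.
        exp = {"param_name": "mach", "param_values": [0.6, 0.8, 1.2],
               "input_file": "grid.dat", "solver": "./flow_solver"}
        save_experiment(exp, "experiment.json")
        generate_scripts(exp, "scripts")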

  17. Dynamics of circumstellar disks. III. The case of GG Tau A

    DOE PAGES

    Nelson, Andrew F.; Marzari, Francesco

    2016-08-11

    Here, we present two-dimensional hydrodynamic simulations using the Smoothed Particle Hydrodynamic code, VINE, to model a self-gravitating binary system. We model configurations in which a circumbinary torus+disk surrounds a pair of stars in orbit around each other and a circumstellar disk surrounds each star, similar to that observed for the GG Tau A system. We assume that the disks cool as blackbodies, using rates determined independently at each location in the disk by the time dependent temperature of the photosphere there. We assume heating due to hydrodynamical processes and to radiation from the two stars, using rates approximated from a measure of the radiation intercepted by the disk at its photosphere.

  18. Formatting scripts with computers and Extended BASIC.

    PubMed

    Menning, C B

    1984-02-01

    A computer program, written in the language of Extended BASIC, is presented which enables scripts for educational media to be quickly written in a nearly unformatted style. From the resulting script file, stored on magnetic tape or disk, the computer program formats the script into either a storyboard, a presentation, or a narrator's script. Script headings and page and paragraph numbers are automatic features in the word processing. Suggestions are given for making personal modifications to the computer program.

  19. Free-Field Spatialized Aural Cues for Synthetic Environments

    DTIC Science & Technology

    1994-09-01

    any of the references previously listed. B. MIDI Other than electronic musicians and a few hobbyists, the Musical Instrument Digital Interface (MIDI)... developed in 1983 and still has a long way to go in improving its capabilities, but the advantages are numerous. An entire musical score can be stored... the same musical file on a computer in one of the various digital sound formats could easily occupy 90 megabytes of disk space.

  20. Transition from lab to flight demo for model-based FLIR ATR and SAR-FLIR fusion

    NASA Astrophysics Data System (ADS)

    Childs, Martin B.; Carlson, Karen M.; Pujara, Neeraj

    2000-08-01

    Model-based automatic target recognition (ATR) using forward- looking infrared (FLIR) imagery, and using FLIR imagery combined with cues from a synthetic aperture radar (SAR) system, has been successfully demonstrated in the laboratory. For the laboratory demonstration, FLIR images, platform location, sensor data, and SAR cues were read in from files stored on computer disk. This ATR system, however, was intended to ultimately be flown in a fighter aircraft. We discuss the transition from laboratory demonstration to flight demonstration for this system. The obvious changes required were in the interfaces: the flight system must get live FLIR imagery from a sensor; it must get platform location, sensor data, and controls from the avionics computer in the aircraft via 1553 bus; and it must get SAR cues from the on-board SAR system, also via 1553 bus. Other changes included the transition to rugged hardware that would withstand the fighter aircraft environment, and the need for the system to be compact and self-contained. Unexpected as well as expected challenges were encountered. We discuss some of these challenges, how they were met, and the performance of the flight-demonstration system.

  1. Digital aeromagnetic anomaly data from eastern-most Guyana

    USGS Publications Warehouse

    Pierce, Herbert A.; Backjinski, Natalka; Manes, John-James

    1995-01-01

    The Center for Inter-American Mineral Resource Investigations (CIMRI) supported distribution and analysis of geoscientific and mineral resource related information concerning Latin America. CIMRI staff digitized aeromagnetic data for eastern-most Guyana as part of a preliminary regional assessment of minerals in the Guyana Shield, South America. The data were digitized from 145 aeromagnetic contour maps at a scale of 1:50,000 and merged into a single digital data set. The data were used to examine the Precambrian shield, greenstone belts, and other tectonic boundaries as well as explore ideas concerning mineral deposits within the area. A subset of these digital data were presented to the Guyanan government during early 1995 (Pierce, 1994). This Open-File report, consisting of this text and seven (7) 3.5" IBM-PC compatible ASCII magnetic disks, makes the digital data available to the public. Information regarding the source of data and subsequent processing is included below. The data were collected in Guyana by two contractors at different times. The first data were collected from 1962 to 1963. These data are several aeromagnetic surveys covering parts of 12 quadrangles funded by the United Nations and flown by Aero Service Corporation. The second and more extensive data set was collected from 1971 to 1972 by the Canadian International Development Agency and flown by Terra Surveys Ltd. under a contract with the Geological Survey of Guyana. The Guyana Government published the data as contour maps that are available in Georgetown through the Guyana Government. Coverage extends from about 2°45′N to 8°30′N latitude and from 60°0′W to 57°0′W longitude (see Figure 1). The contour maps were digitized at points where the magnetic contours intersect the flight lines. The data files include XYZ ASCII files, XYZ binary files, ASCII grids, and binary "standard USGS" grids. There are four grids consisting of the following data types: (1) an unprojected raw data grid; (2) an unprojected residual (International Geomagnetic Reference Field, IGRF, removed) grid; (3) a UTM projected residual (IGRF removed) grid; and (4) a UTM projected residual grid with a second-order surface removed. These data files were transferred to 3.5" 1.44 megabyte floppy disks readable on IBM-compatible personal computers. These data are also available from the Department of Commerce National Geophysical Data Center.

  2. Performance of redundant disk array organizations in transaction processing environments

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    A performance evaluation is conducted for two redundant disk-array organizations in a transaction-processing environment, relative to the performance of both mirrored disk organizations and organizations using neither striping nor redundancy. The proposed parity-striping alternative to striping with rotated parity is shown to furnish rapid recovery from failure at the same low storage cost without interleaving the data over multiple disks. Both noncached systems and systems using a nonvolatile cache in the controller are considered.
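
    Both organizations evaluated above protect data with the same primitive: a parity block formed as the bytewise XOR of the data blocks in a stripe, from which any one lost block can be rebuilt. A minimal sketch of the computation (illustrative only, not either paper's on-disk layout):

        from functools import reduce

        def parity(blocks):
            """Bytewise XOR of equal-length data blocks -> parity block."""
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        # Reconstruction after losing any one block: XOR the survivors
        # with the parity block to recover the missing data.
        data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
        p = parity(data)
        assert parity([data[0], data[2], p]) == data[1]   # rebuild block 1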

  3. Planet Formation in Disks with Inclined Binary Companions: Can Primordial Spin-Orbit Misalignment be Produced?

    NASA Astrophysics Data System (ADS)

    Zanazzi, J. J.; Lai, Dong

    2018-04-01

    Many hot Jupiter (HJ) systems have been observed to have their stellar spin axis misaligned with the planet's orbital angular momentum axis. The origin of this spin-orbit misalignment and the formation mechanism of HJs remain poorly understood. A number of recent works have suggested that gravitational interactions between host stars, protoplanetary disks, and inclined binary companions may tilt the stellar spin axis with respect to the disk's angular momentum axis, producing planetary systems with misaligned orbits. These previous works considered idealized disk evolution models and neglected the gravitational influence of newly formed planets. In this paper, we explore how disk photoevaporation and planet formation and migration affect the inclination evolution of planet-star-disk-binary systems. We take into account planet-disk interactions and the gravitational spin-orbit coupling between the host star and the planet. We find that the rapid depletion of the inner disk via photoevaporation reduces the excitation of stellar obliquities. Depending on the formation and migration history of HJs, the spin-orbit coupling between the star and the planet may reduce and even completely suppress the excitation of stellar obliquities. Our work constrains the formation/migration history of HJs. On the other hand, planetary systems with "cold" Jupiters or close-in super-Earths may experience excitation of stellar obliquities in the presence of distant inclined companions.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moór, A.; Ábrahám, P.; Kóspál, Á.

    Debris disks are considered to be gas-poor, but recent observations revealed molecular or atomic gas in several 10–40 Myr old systems. We used the APEX and IRAM 30 m radio telescopes to search for CO gas in 20 bright debris disks. In one case, around the 16 Myr old A-type star HD 131835, we discovered a new gas-bearing debris disk, where the CO 3–2 transition was successfully detected. No other individual system exhibited a measurable CO signal. Our Herschel Space Observatory far-infrared images of HD 131835 marginally resolved the disk at both 70 and 100 μm, with a characteristic radius of ∼170 AU. While in stellar properties HD 131835 resembles β Pic, its dust disk properties are similar to those of the most massive young debris disks. With the detection of gas in HD 131835 the number of known debris disks with CO content has increased to four, all of them encircling young (≤40 Myr) A-type stars. Based on statistics within 125 pc, we suggest that the presence of a detectable amount of gas in the most massive debris disks around young A-type stars is a common phenomenon. Our current data do not allow us to conclude on the origin of the gas in HD 131835. If the gas is secondary, arising from the disruption of planetesimals, then HD 131835 is a comparably young, and in terms of its disk, more massive analog of the β Pic system. However, it is also possible that this system, similar to HD 21997, possesses a hybrid disk, where the gas material is predominantly primordial, while the dust grains are mostly derived from planetesimals.

  5. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.
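
    The alien-cache idea, checking a shared site-local store before going out over the wide-area network, can be illustrated with a minimal read-through cache. This is a sketch of the architecture only; CVMFS itself is content-addressed and its real client configuration is not shown here, and the cache path below is an assumption:

        import hashlib
        import pathlib
        import urllib.request

        CACHE = pathlib.Path("/shared/alien-cache")   # assumed site-wide mount

        def fetch(url):
            """Read-through cache: hit the shared store first, else HTTP."""
            key = hashlib.sha1(url.encode()).hexdigest()
            cached = CACHE / key[:2] / key
            if cached.exists():
                return cached.read_bytes()             # served from shared cache
            data = urllib.request.urlopen(url).read()  # fall back to the server
            cached.parent.mkdir(parents=True, exist_ok=True)
            cached.write_bytes(data)                   # populate for other nodes
            return data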

  6. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Bockelman, B.; Blomer, J.

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  7. Near-Infrared Polarimetry of the GG Tauri A Binary System

    NASA Technical Reports Server (NTRS)

    Itoh, Yoichi; Oasa, Yumiko; Kudo, Tomoyuki; Kusakabe, Nobuhiko; Hashimoto, Jun; Abe, Lyu; Brandner, Wolfgang; Brandt, Timothy D.; Carson, Joseph C.; Egner, Sebastian; hide

    2014-01-01

    A high angular resolution near-infrared image that shows the intensity of polarization for the GG Tau A binary system was obtained with the Subaru Telescope. The image shows a circumbinary disk scattering the light from the central binary. The azimuthal profile of the intensity of polarization for the circumbinary disk is roughly reproduced by a simple disk model with the Henyey-Greenstein phase function and the Rayleigh function, indicating there are small dust grains at the surface of the disk. Combined with a previous observation of the circumbinary disk, our image indicates that the gap structure in the circumbinary disk orbits counterclockwise, but material in the disk orbits clockwise. We propose that there is a shadow caused by material located between the central binary and the circumbinary disk. The separations and position angles of the stellar components of the binary in the past 20 yr are consistent with the binary orbit with a = 33.4 AU and e = 0.34.

  8. DOE Fire Protection Handbook, Volume I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The Department of Energy (DOE) Fire Protection Program is delineated in a number of source documents including: the Code of Federal Regulations (CFR), DOE Policy Statements and Orders, DOE and national consensus standards (such as those promulgated by the National Fire Protection Association), and supplementary guidance. This Handbook is intended to bring together in one location as much of this material as possible to facilitate understanding and ease of use. The applicability of any of these directives to individual Maintenance and Operating Contractors or to given facilities and operations is governed by existing contracts. Questions regarding applicability should be directed to the DOE Authority Having Jurisdiction for fire safety. The information provided within includes copies of those DOE directives that are directly applicable to the implementation of a comprehensive fire protection program. They are delineated in the Table of Contents. The items marked with an asterisk (*) are included on the disks in WordPerfect 5.1 format, with the filename noted below. The items marked with double asterisks are provided as hard copies as well as on the disk. For those using MAC disks, the files are in WordPerfect 2.1 for MAC.

  9. System Identification of Mistuned Bladed Disks from Traveling Wave Response Measurements

    NASA Technical Reports Server (NTRS)

    Feiner, D. M.; Griffin, J. H.; Jones, K. W.; Kenyon, J. A.; Mehmed, O.; Kurkov, A. P.

    2003-01-01

    A new approach to modal analysis is presented. By applying this technique to bladed disk system identification methods, one can determine the mistuning in a rotor based on its response to a traveling wave excitation. This allows system identification to be performed under rotating conditions, and thus expands the applicability of existing mistuning identification techniques from integrally bladed rotors to conventional bladed disks.

  10. The Space Infrared Interferometric Telescope (SPIRIT): Mission Study Results

    DTIC Science & Technology

    2006-01-01

    how planetary systems form it is essential to obtain spatially-resolved far-IR observations of protostars and protoplanetary disks. At the distance... accomplish three primary scientific objectives: (1) Learn how planetary systems form from protostellar disks, and how they acquire their chemical organization; (2) Characterize the family of extrasolar planetary systems by imaging the structure in debris disks to understand how and where planets

  11. THE EVOLUTION OF INNER DISK GAS IN TRANSITION DISKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoadley, K.; France, K.; McJunkin, M.

    2015-10-10

    Investigating the molecular gas in the inner regions of protoplanetary disks (PPDs) provides insight into how the molecular disk environment changes during the transition from primordial to debris disk systems. We conduct a small survey of molecular hydrogen (H2) fluorescent emission, using 14 well-studied Classical T Tauri stars at two distinct dust disk evolutionary stages, to explore how the structure of the inner molecular disk changes as the optically thick warm dust dissipates. We simulate the observed H I Lyman-α-pumped H2 disk fluorescence by creating a 2D radiative transfer model that describes the radial distributions of H2 emission in the disk atmosphere and compare these to observations from the Hubble Space Telescope. We find that the radial distributions that best describe the observed H2 FUV emission arising in primordial disk targets (full dust disk) are demonstrably different from those of transition disks (little-to-no warm dust observed). For each best-fit model, we estimate inner and outer disk emission boundaries (r_in and r_out), describing where the bulk of the observed H2 emission arises in each disk, and we examine correlations between these and several observational disk evolution indicators, such as n_13–31, r_in,CO, and the mass accretion rate. We find strong, positive correlations between the H2 radial distributions and the slope of the dust spectral energy distribution, implying that the behavior of the molecular disk atmosphere changes as the inner dust clears in evolving PPDs. Overall, we find that H2 inner radii are ∼4 times larger in transition systems, while the bulk of the H2 emission originates inside the dust gap radius for all transitional sources.

  12. A DWARF TRANSITIONAL PROTOPLANETARY DISK AROUND XZ TAU B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osorio, Mayra; Macías, Enrique; Anglada, Guillem

    We report the discovery of a dwarf protoplanetary disk around the star XZ Tau B that shows all the features of a classical transitional disk but on a much smaller scale. The disk has been imaged with the Atacama Large Millimeter/submillimeter Array (ALMA), revealing that its dust emission has a quite small radius of ∼3.4 au and presents a central cavity of ∼1.3 au in radius that we attribute to clearing by a compact system of orbiting (proto)planets. Given the very small radii involved, evolution is expected to be much faster in this disk (observable changes in a few months) than in classical disks (observable changes requiring decades) and easy to monitor with observations in the near future. From our modeling we estimate that the mass of the disk is large enough to form a compact planetary system.

  13. Moving mode shape function approach for spinning disk and asymmetric disc brake squeal

    NASA Astrophysics Data System (ADS)

    Kang, Jaeyoung

    2018-06-01

    The solution approach for an asymmetric spinning disk under stationary friction loads requires a mode shape function fixed in the disk in the assumed mode method when the equations of motion are described in the space-fixed frame. This model description will be termed the 'moving mode shape function approach', and it allows us to formulate the stationary contact load problem in both the axisymmetric and asymmetric disk cases. Numerical results show that the eigenvalues of the time-periodic axisymmetric disk system are time-invariant. When the axisymmetry of the disk is broken, the positive real parts of the eigenvalues vary strongly with the rotation of the disk at slow speeds in such applications as disc brake squeal. By using Floquet stability analysis, it is also shown that breaking the axisymmetry of the disc alters the stability boundaries of the system.

  14. The End of Protoplanetary Disk Evolution: An ALMA Survey of Upper Scorpius

    NASA Astrophysics Data System (ADS)

    Barenfeld, Scott A.; Carpenter, John M.; Sargent, Anneila I.; Ricci, Luca; Isella, Andrea

    2017-01-01

    The evolution of the mass of solids in circumstellar disks is a key factor in determining how planets form. Infrared observations have established that the dust in primordial disks vanishes around the majority of stars by an age of 5-10 Myr. However, how this disappearance proceeds is poorly constrained. Only with longer wavelength observations, where the dust emission is optically thin, is it possible to measure disk dust mass and how it varies as a function of age. To this end, we have obtained ALMA 0.88 mm observations of over 100 sources with suspected circumstellar disks in the Upper Scorpius OB Association (Upper Sco). The 5-11 Myr age of Upper Sco suggests that any such disks will be quite evolved, making this association an ideal target to compare to systems of younger disks in order to study evolution. With ALMA, we achieve an order of magnitude improvement in sensitivity over previous (sub)millimeter surveys of Upper Sco and detect 58 disks in the continuum. We calculate the total dust masses of these disks and compare their masses to those of younger disks in Taurus, Lupus, and Chamaeleon. We find strong evidence for a decline in disk dust mass between these 1-3 Myr old systems and the 5-11 Myr old Upper Sco. Our results represent the first definitive measurement of a decline in disk dust mass with age.
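
    For context, dust masses in surveys of this kind follow from the standard optically thin relation (the textbook formula; the opacity and dust temperature are assumed inputs rather than measured quantities):

        M_{\rm dust} = \frac{F_\nu\, d^{2}}{\kappa_\nu\, B_\nu(T_{\rm dust})},

    where F_\nu is the measured 0.88 mm flux density, d the distance to the source, \kappa_\nu the dust opacity, and B_\nu(T_{\rm dust}) the Planck function at the assumed dust temperature.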

  15. ACS Imaging of beta Pic: Searching for the origin of rings and asymmetry in planetesimal disks

    NASA Astrophysics Data System (ADS)

    Kalas, Paul

    2003-07-01

    The emerging picture for planetesimal disks around main sequence stars is that their radial and azimuthal symmetries are significantly deformed by the dynamical effects of either planets interior to the disk, or stellar objects exterior to the disk. The cause of these structures, such as the 50 AU cutoff of our Kuiper Belt, remains mysterious. Structure in the beta Pic planetesimal disk could be due to dynamics controlled by an extrasolar planet, or by the tidal influence of a more massive object exterior to the disk. The hypothesis of an extrasolar planet causing the vertical deformation in the disk predicts a blue color to the disk perpendicular to the disk midplane. The hypothesis that a stellar perturber deforms the disk predicts a globally uniform color and the existence of ring-like structure beyond 800 AU radius. We propose to obtain deep, multi-color images of the beta Pic disk ansae in the region 15"-220" {200-4000 AU} radius with the ACS WFC. The unparalleled stability of the HST PSF means that these data are uniquely capable of delivering the color sensitivity that can distinguish between the two theories of beta Pic's disk structure. Ascertaining the cause of such structure provide a meaningful context for understanding the dynamical history of our early solar system, as well as other planetesimal systems imaged around main sequence stars.

  16. IOPA: I/O-aware parallelism adaption for parallel programs

    PubMed Central

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236

  17. IOPA: I/O-aware parallelism adaption for parallel programs.

    PubMed

    Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei

    2017-01-01

    With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads.
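
    The feedback idea behind IOPA, growing the I/O thread pool while aggregate throughput improves and stopping once the I/O subsystem saturates, can be sketched as a simple control loop. The interface below is invented for illustration; IOPA's real programming interface and policy are those described in the paper:

        import time

        def tune_io_threads(run_with_threads, lo=1, hi=64):
            """Find an I/O thread count near the throughput knee.

            run_with_threads(n) must run one work quantum with n I/O
            threads and return the number of bytes it moved.
            """
            best_n, best_rate, n = lo, 0.0, lo
            while n <= hi:
                t0 = time.perf_counter()
                moved = run_with_threads(n)
                rate = moved / (time.perf_counter() - t0)
                if rate <= best_rate * 1.05:   # <5% gain: I/O is saturating
                    break
                best_n, best_rate = n, rate
                n *= 2                         # try more threads
            return best_n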

  18. Debris Disks as Tracers of Nearby Planetary Systems

    NASA Technical Reports Server (NTRS)

    Stapelfeldt, Karl

    2012-01-01

    Many main-sequence stars possess tenuous circumstellar dust clouds believed to trace extrasolar analogs of the Sun's asteroid and Kuiper Belts. While most of these "debris disks" are known only from far-infrared photometry, dozens are now spatially resolved. In this talk, I'll review the observed structural properties of debris disks as revealed by imaging with the Hubble, Spitzer, and Herschel Space Telescopes. I will show how modeling of the far-infrared spectral energy distributions of resolved disks can be used to constrain their dust particle sizes and albedos. I will review cases of disks whose substructures suggest planetary perturbations, including a newly-discovered eccentric ring system. I'll conclude with thoughts on the potential of upcoming and proposed facilities to resolve similar structures around a greatly expanded sample of nearby debris systems.

  19. Planetary astronomy program

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A program was developed in which asteroids and two planets, namely, Saturn and Uranus, were investigated. This included: (1) asteroid spectrophotometry; (2) the nature of the Trojan asteroids; (3) an investigation to determine asteroid masses; (4) the photometry, structure, and dynamics of the rings surrounding the planet Saturn; and (5) aerosol distribution in the atmosphere of Uranus. Plans were finalized to obtain observations of the nucleus of the dying comet P/Arend-Rigaux. Further work was accomplished in asteroid data reduction. Data were entered into the TRIAD data file and a program generated classifications for over 560 different asteroids. A photoelectric area scanner was used to obtain UBV scans of the disk of the planet Saturn on several winter and spring nights in 1977. Intensity profiles show pronounced limb brightening in U, moderate limb brightening in B, and limb darkening in V. Narrow band photoelectric area-scanning photometry of the Uranus disk is also reported. Results are given.

  20. VizieR Online Data Catalog: Black hole masses in megamaser disk galaxies (Greene+, 2016)

    NASA Astrophysics Data System (ADS)

    Greene, J. E.; Seth, A.; Kim, M.; Lasker, R.; Goulding, A.; Gao, F.; Braatz, J. A.; Henkel, C.; Condon, J.; Lo, K. Y.; Zhao, W.

    2016-11-01

    The velocity dispersions (σ*) presented here for megamaser disk galaxies are measured from three data sets. Two galaxies (NGC1320, NGC5495) were observed with the B&C spectrograph on the Dupont telescope at the Las Campanas Observatory. These spectra have an instrumental resolution of σr~120 km/s and a wavelength range of 3400-6000Å. Two galaxies (Mrk1029, ESO558-G009) have σ* measurements from the cross-dispersed near-infrared spectrograph Triplespec on the 3.5 m telescope at Apache Point. Triplespec has a wavelength range of 0.9-2.4 μm with a spectral resolution of σr~37 km/s. Finally, three galaxies (J0437+2456, NGC5765b, UGC6093) have spectra from the SDSS. They have a spectral resolution of σr~65 km/s and cover a range of 3800-9200Å. (1 data file).

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Andrew F.; Marzari, Francesco

    Here, we present two-dimensional hydrodynamic simulations using the Smoothed Particle Hydrodynamic code, VINE, to model a self-gravitating binary system. We model configurations in which a circumbinary torus+disk surrounds a pair of stars in orbit around each other and a circumstellar disk surrounds each star, similar to that observed for the GG Tau A system. We assume that the disks cool as blackbodies, using rates determined independently at each location in the disk by the time dependent temperature of the photosphere there. We assume heating due to hydrodynamical processes and to radiation from the two stars, using rates approximated from a measure of the radiation intercepted by the disk at its photosphere.

  2. The structure of protostellar accretion disks and the origin of bipolar flows

    NASA Technical Reports Server (NTRS)

    Wardle, Mark; Koenigl, Arieh

    1993-01-01

    Equations are obtained which govern the disk-wind structure and identify the physical parameters relevant to circumstellar disks. The system of equations is analyzed in the thin-disk approximation, and it is shown that the system can be consistently reduced to a set of ordinary differential equations in z. Representative solutions are presented, and it is shown that the apparent paradox discussed by Shu (1991) is resolved when the finite thickness of the disk is taken into account. Implications of the results for the origin of bipolar flows in young stellar objects and possible application to active galactic nuclei are discussed.

  3. Performance measurements of the first RAID prototype

    NASA Technical Reports Server (NTRS)

    Chervenak, Ann L.

    1990-01-01

    The performance of RAID the First, a prototype Redundant Arrays of Inexpensive Disks (RAID) disk array, is examined. A hierarchy of bottlenecks was discovered that limits overall system performance. The most serious is memory system contention on the Sun 4/280 host CPU, which limits array bandwidth to 2.3 MBytes/sec. The array performs more successfully on small random operations, achieving nearly 300 I/Os per second before the Sun 4/280 becomes CPU limited. Other bottlenecks in the system are the VME backplane, bandwidth on the disk controller, and overheads associated with the SCSI protocol. All are examined in detail. The main conclusion is that to achieve the potential bandwidth of arrays, more powerful CPUs alone will not suffice. Just as important are adequate host memory bandwidth and support for high bandwidth on disk controllers. Current disk controllers are more often designed to achieve large numbers of small random operations rather than high bandwidth. Operating systems also need to change to support high bandwidth from disk arrays. In particular, they should transfer data in larger blocks, and should support asynchronous I/O to improve sequential write performance.
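
    The closing recommendation, that operating systems should move data in larger blocks to realize array bandwidth, is easy to demonstrate with a sequential-read benchmark over varying block sizes. A sketch follows; the test path is an assumption, and a rigorous measurement would also have to defeat the operating system's page cache:

        import time

        def read_throughput(path, block_size):
            """Sequentially read path in block_size chunks; return MB/s."""
            total, t0 = 0, time.perf_counter()
            with open(path, "rb", buffering=0) as fh:  # unbuffered reads
                while chunk := fh.read(block_size):
                    total += len(chunk)
            return total / (time.perf_counter() - t0) / 1e6

        for bs in (4_096, 65_536, 1_048_576):          # 4 KiB .. 1 MiB
            print(bs, round(read_throughput("/tmp/testfile", bs), 1), "MB/s")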

  4. Studying Notable Debris Disks In L-band with the Vortex Coronagraph

    NASA Astrophysics Data System (ADS)

    Patel, Rahul; Beichman, Charles; Choquet, Elodie; Mawet, Dimitri; Meshkat, Tiffany; Ygouf, Marie

    2018-01-01

    Resolved images of circumstellar disks are integral to our understanding of planetary systems, as the micron-sized dust grains that comprise the disk are born from the collisional grinding of planetesimals by larger planets in the system. Resolved images are essential to determining grain properties that might otherwise be degenerate when analyzing the star's spectral energy distribution. Though the majority of scattered light images of disks are obtained at optical and near-IR wavelengths, only a few have been imaged in the thermal IR at L-band. Probing the spatial features of disks at L-band opens up the possibility of constraining additional grain properties, such as water/ice features. Here, we present the results of our effort to image the disks of a few notable systems at L-band using the NIRC2 imager at Keck, in conjunction with the newly commissioned vector vortex coronagraph. The vortex, along with the QACITS fine guiding program installed at Keck, enables us to probe the small ~λ/D angular separations of these systems, and reach contrasts of 1/100,000. We will discuss the systems that have been imaged, and lessons learned while imaging in L-band. Our analysis of these disks reveals features previously unseen, and will lay the foundation for follow-up studies by missions such as JWST at similar wavelengths from space.

  5. OUTWARD MIGRATION OF JUPITER AND SATURN IN 3:2 OR 2:1 RESONANCE IN RADIATIVE DISKS: IMPLICATIONS FOR THE GRAND TACK AND NICE MODELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierens, Arnaud; Raymond, Sean N.; Nesvorny, David

    Embedded in the gaseous protoplanetary disk, Jupiter and Saturn naturally become trapped in 3:2 resonance and migrate outward. This serves as the basis of the Grand Tack model. However, previous hydrodynamical simulations were restricted to isothermal disks with moderate aspect ratio and viscosity. Here we simulate the orbital evolution of the gas giants in disks with viscous heating and radiative cooling. We find that Jupiter and Saturn migrate outward in 3:2 resonance in modest-mass (M_disk ≈ M_MMSN, where MMSN is the minimum-mass solar nebula) disks with viscous stress parameter α between 10⁻³ and 10⁻². In disks with relatively low mass (M_disk ≲ M_MMSN), Jupiter and Saturn get captured in 2:1 resonance and can even migrate outward in low-viscosity disks (α ≤ 10⁻⁴). Such disks have a very small aspect ratio (h ∼ 0.02-0.03) that favors outward migration after capture in 2:1 resonance, as confirmed by isothermal runs which resulted in a similar outcome for h ∼ 0.02 and α ≤ 10⁻⁴. We also performed N-body runs of the outer solar system starting from the results of our hydrodynamical simulations and including 2-3 ice giants. After dispersal of the gaseous disk, a Nice model instability starting with Jupiter and Saturn in 2:1 resonance results in good solar system analogs. We conclude that in a cold solar nebula, the 2:1 resonance between Jupiter and Saturn can lead to outward migration of the system, and this may represent an alternative scenario for the evolution of the solar system.

  6. Disks around stars and the growth of planetary systems.

    PubMed

    Greaves, Jane S

    2005-01-07

    Circumstellar disks play a vital evolutionary role, providing a way to move gas inward and onto a young star. The outward transfer of angular momentum allows the star to contract without breaking up, and the remnant disk of gas and particles is the reservoir for forming planets. High-resolution spectroscopy is uncovering planetary dynamics and motion within the remnant disk, and imaging at infrared to millimeter wavelengths resolves disk structure over billions of years of evolution. Most stars are born with a disk, and models of planet formation need to form such bodies from the disk material within the disk's 10-million-year life-span.

  7. Eighth Goddard Conference on Mass Storage Systems and Technologies in Cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    2000-01-01

    This document contains copies of those technical papers received in time for publication prior to the Eighth Goddard Conference on Mass Storage Systems and Technologies, held in cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center, March 27-30, 2000. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, the future of current technology, new technology with a special emphasis on holographic storage, performance, standards, site reports, and vendor solutions. Tutorials will be available on stability of optical media, disk subsystem performance evaluation, I/O and storage tuning, and functionality and performance evaluation of file systems for storage area networks.

  8. The HD 163296 Circumstellar Disk in Scattered Light: Evidence of Time-Variable Self-Shadowing

    NASA Technical Reports Server (NTRS)

    Wisniewski, John P.; Clampin, Mark; Grady, Carol A.; Ardila, David R.; Ford, Holland C.; Golimowski, David A.; Illingworth, Garth D.; Krist, John E.

    2008-01-01

    We present the first multi-color view of the scattered-light disk of the Herbig Ae star HD 163296, based on coronagraphic observations from the Hubble Space Telescope Advanced Camera for Surveys (ACS). Radial profile fits of the surface brightness along the disk's semi-major axis indicate that the disk is not continuously flared and extends to approximately 540 AU. The disk's color (V-I)=1.1 at a radial distance of 3.5" is redder than the observed stellar color (V-I)=0.15. This red disk color might be indicative of either an evolution in the grain size distribution (i.e., grain growth) and/or composition, both of which would be consistent with the observed non-flared geometry of the outer disk. We also identify a single ansa morphological structure in our F435W ACS data, which is absent from earlier-epoch F606W and F814W ACS data but corresponds to one of the two ansae observed in archival HST STIS coronagraphic data. Following transformation to similar band-passes, we find that the scattered-light disk of HD 163296 is 1 mag arcsec⁻² fainter at 3.5" in the STIS data than in the ACS data. Moreover, variations are seen in (i) the visibility of the ansa(e) structures, (ii) the relative surface brightness of the ansa(e) structures, and (iii) the (known) intrinsic polarization of the system. These results indicate that the scattered light from the HD 163296 disk is variable. We speculate that the inner disk wall, which Sitko et al. suggest has a variable scale height as diagnosed by near-IR SED variability, induces variable self-shadowing of the outer disk. We further speculate that the observed surface brightness variability of the ansa(e) structures may indicate that the inner disk wall is azimuthally asymmetric.

  9. Formation of the terrestrial planets in the solar system around 1 au via radial concentration of planetesimals

    NASA Astrophysics Data System (ADS)

    Ogihara, Masahiro; Kokubo, Eiichiro; Suzuki, Takeru K.; Morbidelli, Alessandro

    2018-05-01

    Context. No planets exist inside the orbit of Mercury and the terrestrial planets of the solar system exhibit a localized configuration. According to thermal structure calculations of protoplanetary disks, a silicate condensation line (~1300 K) is located around 0.1 au from the Sun except for the early phase of disk evolution, and planetesimals could have formed inside the orbit of Mercury. A recent study of disk evolution that includes magnetically driven disk winds showed that the gas disk obtains a positive surface density slope inside 1 au from the central star. In a region with positive midplane pressure gradient, planetesimals undergo outward radial drift. Aims: We investigate the radial drift of planetesimals and type I migration of planetary embryos in a disk that viscously evolves with magnetically driven disk winds. We show a case in which no planets remain in the close-in region. Methods: Radial drifts of planetesimals are simulated using a recent disk evolution model that includes effects of disk winds. The late stage of planet formation is also examined by performing N-body simulations of planetary embryos. Results: We demonstrate that in the middle stage of disk evolution, planetesimals can undergo convergent radial drift in a magnetorotational instability (MRI)-inactive disk, in which the pressure maximum is created, and accumulate in a narrow ring-like region with an inner edge at 0.7 au from the Sun. We also show that planetary embryos that may grow from the narrow planetesimal ring do not exhibit significant type I migration in the late stage of disk evolution. Conclusions: The origin of the localized configuration of the terrestrial planets of the solar system, in particular the deficit of close-in planets, can be explained by the convergent radial drift of planetesimals in disks with a positive pressure gradient in the close-in region.
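
    The direction of planetesimal drift invoked here follows from the standard aerodynamic-drag relation, quoted for context (the paper's own equations may differ in detail). With the gas orbiting at a fraction η away from the Keplerian speed v_K,

        \eta \;=\; -\frac{1}{2}\left(\frac{c_s}{v_K}\right)^{2} \frac{\partial \ln P}{\partial \ln r},
        \qquad
        v_{r,\,\mathrm{drift}} \;\simeq\; -\,\frac{2\,\eta\, v_K\, \mathrm{St}}{1 + \mathrm{St}^{2}},

    where c_s is the sound speed, P the midplane pressure, and St the particle Stokes number. Where the pressure gradient is positive, η < 0 and the drift is outward, so solids converge on the pressure maximum from both sides.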

  10. Photonic content-addressable memory system that uses a parallel-readout optical disk

    NASA Astrophysics Data System (ADS)

    Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.

    1995-11-01

    We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 × 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.
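
    As a software analogue of the associative recall described above (assumed for illustration; the actual system does the comparison optically and in silicon), a vectorized XOR can stand in for the parallel bit-plane comparison, with the best match being the stored page at minimum Hamming distance from the query:

        # Software stand-in for content-addressable recall on binary images.
        import numpy as np

        rng = np.random.default_rng(0)
        stored = rng.integers(0, 2, size=(1000, 128, 128), dtype=np.uint8)  # stored pages
        query = stored[42].copy()
        query[:8, :8] ^= 1  # corrupt an 8x8 corner to simulate a noisy query

        # Hamming distance of the query to every stored image at once
        dists = np.bitwise_xor(stored, query).sum(axis=(1, 2))
        best = int(np.argmin(dists))
        print(best, dists[best])   # -> 42, 64 (the corrupted bits)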

  11. Disk Memories: What You Should Know before You Buy Them.

    ERIC Educational Resources Information Center

    Bursky, Dave

    1981-01-01

    Explains the basic features of floppy disk and hard disk computer storage systems and the purchasing decisions which must be made, particularly in relation to certain popular microcomputers. A disk vendors directory is included. Journal availability: Hayden Publishing Company, 50 Essex Street, Rochelle Park, NJ 07662. (SJL)

  12. The Space Infrared Interferometric Telescope (SPIRIT): High-Resolution Imaging and Spectroscopy in the Far-Infrared (Preprint)

    DTIC Science & Technology

    2007-01-01

    primary scientific objectives: (1) Learn how planetary systems form from protostellar disks, and how they acquire their inhomogeneous composition; (2) characterize the family of extrasolar planetary systems by imaging the structure in debris disks to understand how and where planets of different ...

  13. Characterizing Protoplanetary Disks in a Young Binary in Orion

    NASA Astrophysics Data System (ADS)

    Powell, Jonas; Hughes, A. Meredith; Mann, Rita; Flaherty, Kevin; Di Francesco, James; Williams, Jonathan

    2018-01-01

    Planetary systems form in circumstellar disks of gas and dust surrounding young stars. One open question in the study of planet formation involves understanding how different environments affect the properties of the disks and planets they generate. Understanding the properties of disks in high-mass star forming regions (SFRs) is critical since most stars - probably including our Sun - form in those regions. By comparing the disks in high-mass SFRs to those in better-studied low-mass SFRs we can learn about the role environment plays in planet formation. Here we present 0.5" resolution observations of the young two-disk binary system V2434 Ori in the Orion Nebula from the Atacama Large Millimeter/submillimeter Array (ALMA) in molecular line tracers of CO(3-2), HCN(4-3), HCO+(4-3) and CS(7-6). We model each disk’s mass, radius, temperature structure, and molecular abundances, by creating synthetic images using an LTE ray-tracing code and comparing simulated observations with the ALMA data in the visibility domain. We then compare our results to a previous study of molecular line emission from a single Orion proplyd, modeled using similar methods, and to previously characterized disks in low-mass SFRs to investigate the role of environment in disk chemistry and planetary system formation.
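
    The visibility-domain comparison mentioned above amounts to a weighted least-squares fit, stated here in the standard interferometric form (our notation, not the authors'): for each sampled (u,v) point,

        \chi^2 \;=\; \sum_{i} w_i \,\bigl| V_{\mathrm{obs}}(u_i, v_i) - V_{\mathrm{mod}}(u_i, v_i) \bigr|^2,

    where V_mod is the Fourier transform of the synthetic image sampled at the observed baselines and w_i = 1/σ_i² are the visibility weights. Fitting in the (u,v) plane avoids the correlated noise that imaging and deconvolution would introduce.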

  14. Characterizing the Disk of a Recent Massive Collisional Event

    NASA Astrophysics Data System (ADS)

    Song, Inseok

    2015-10-01

    Debris disks play a key role in the formation and evolution of planetary systems. On rare occasions, circumstellar material appears as strictly warm infrared excess in regions of expected terrestrial planet formation, presenting an interesting opportunity for the study of terrestrial planetary regions. There are only a few known cases of extreme, warm, dusty disks which lack any colder outer component, including BD+20 307, HD 172555, EF Cha, and HD 23514. We have recently found a new system, TYC 8830-410-1, belonging to this rare group. Warm dust grains are extremely short-lived, and the extraordinary amount of warm dust near these stars can be plausibly explained only by a recent (or ongoing) massive transient event such as the Late Heavy Bombardment (LHB) or planetary collisions. LHB-like events are generally seen in systems with a dominant cold disk; warm-dust-only systems, however, show no hint of a massive cold disk. Planetary collisions leave a telltale sign in the form of unusual mid-IR spectral features such as silica, and we want to fully characterize the spectral shape of the newly found system with SOFIA/FORCAST. With SOFIA/FORCAST, we propose to obtain two narrow-band photometric measurements between 6 and 9 microns. These FORCAST photometric measurements will constrain the amount and temperature of the warm dust in the system. There are fewer than a handful of systems with a strong hint of recent planetary collisions. With the warm disk around TYC 8830-410-1 firmly constrained, we will publish the discovery in a leading astronomical journal, accompanied by a potential press release through SOFIA.

  15. Outbursts and Disk Variability in Be Stars

    NASA Astrophysics Data System (ADS)

    Labadie-Bartz, Jonathan; Chojnowski, S. Drew; Whelan, David G.; Pepper, Joshua; McSwain, M. Virginia; Borges Fernandes, Marcelo; Wisniewski, John P.; Stringfellow, Guy S.; Carciofi, Alex C.; Siverd, Robert J.; Glazier, Amy L.; Anderson, Sophie G.; Caravello, Anthoni J.; Stassun, Keivan G.; Lund, Michael B.; Stevens, Daniel J.; Rodriguez, Joseph E.; James, David J.; Kuhn, Rudolf B.

    2018-02-01

    In order to study the growth and evolution of circumstellar disks around classical Be stars, we analyze optical time-series photometry from the KELT survey with simultaneous infrared and visible spectroscopy from the Apache Point Observatory Galactic Evolution Experiment survey and Be Star Spectra database for a sample of 160 Galactic classical Be stars. The systems studied here show variability including transitions from a diskless to a disk-possessing state (and vice versa), and persistent disks that vary in strength, being replenished at either regularly or irregularly occurring intervals. We detect disk-building events (outbursts) in the light curves of 28% of our sample. Outbursts are more commonly observed in early- (57%), compared to mid- (27%) and late-type (8%) systems. A given system may show anywhere between 0 and 40 individual outbursts in its light curve, with amplitudes ranging up to ∼0.5 mag and event durations between ∼2 and 1000 days. We study how both the photometry and spectroscopy change together during active episodes of disk growth or dissipation, revealing details about the evolution of the circumstellar environment. We demonstrate that photometric activity is linked to changes in the inner disk, and show that, at least in some cases, the disk growth process is asymmetrical. Observational evidence of Be star disks both growing and clearing from the inside out is presented. The durations of disk buildup and dissipation phases are measured for 70 outbursts, and we find that the average outburst takes about twice as long to dissipate as it does to build up in optical photometry. Our analysis hints that dissipation of the inner disk occurs relatively slowly for late-type Be stars.

  16. A Method to Constrain the Size of the Protosolar Nebula

    NASA Astrophysics Data System (ADS)

    Kretke, K. A.; Levison, H. F.; Buie, M. W.; Morbidelli, A.

    2012-04-01

    Observations indicate that the gaseous circumstellar disks around young stars vary significantly in size, ranging from tens to thousands of AU. Models of planet formation depend critically upon the properties of these primordial disks, yet in general it is impossible to connect an existing planetary system with an observed disk. We present a method by which we can constrain the size of our own protosolar nebula using the properties of the small body reservoirs in the solar system. In standard planet formation theory, after Jupiter and Saturn formed they scattered a significant number of remnant planetesimals into highly eccentric orbits. In this paper, we show that if there had been a massive, extended protoplanetary disk at that time, then the disk would have excited Kozai oscillations in some of the scattered objects, driving them into high-inclination (i ≳ 50°), low-eccentricity orbits (q ≳ 30 AU). The dissipation of the gaseous disk would strand a subset of objects in these high-inclination orbits; orbits that are stable on Gyr timescales. To date, surveys have not detected any Kuiper-belt objects with orbits consistent with this dynamical mechanism. Using these non-detections by the Deep Ecliptic Survey and the Palomar Distant Solar System Survey, we are able to rule out an extended gaseous protoplanetary disk (R_D ≳ 80 AU) in our solar system at the time of Jupiter's formation. Future deep all-sky surveys such as the Large Synoptic Survey Telescope will allow us to further constrain the size of the protoplanetary disk.
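
    For reference, the Kozai mechanism invoked above conserves, to quadrupole order, the component of orbital angular momentum normal to the perturbing disk's plane; this standard relation is not from the paper itself but clarifies how scattered objects trade eccentricity for inclination:

        \Theta \;=\; \sqrt{1 - e^{2}}\,\cos i \;\approx\; \mathrm{const.}

    A planetesimal scattered to e ≈ 0.95 on a nearly disk-plane orbit has Θ ≈ 0.31; if the oscillation carries it to e ≈ 0, its inclination must rise to arccos(0.31) ≈ 72°, comfortably above the i ≳ 50° threshold quoted above.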

  17. An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction

    ERIC Educational Resources Information Center

    Bhasin, Harpreet

    2011-01-01

    Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…
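
    The record above is truncated, but the title describes an evolutionary search over feature subsets (e.g., SMART attributes) for failure prediction. A minimal genetic-algorithm sketch of that idea follows (Python; everything here, including the placeholder fitness function, is an assumption for illustration, not the paper's method).

        # Genetic algorithm for feature-subset selection: an individual is a bit
        # mask over candidate features; fitness would normally be the held-out
        # accuracy of a failure-prediction model trained on the selected columns.
        import random

        N_FEATURES = 20          # e.g., number of SMART attributes (assumed)
        POP, GENS, MUT = 30, 40, 0.05

        def fitness(mask):
            # Placeholder: a real fitness would train and score a classifier.
            selected = [i for i, bit in enumerate(mask) if bit]
            if not selected:
                return 0.0
            return random.random() + 1.0 / len(selected)  # mildly favors small subsets

        def mutate(mask):
            return tuple(b ^ (random.random() < MUT) for b in mask)

        def crossover(a, b):
            cut = random.randrange(1, N_FEATURES)  # single-point crossover
            return a[:cut] + b[cut:]

        pop = [tuple(random.randint(0, 1) for _ in range(N_FEATURES)) for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=fitness, reverse=True)
            elite = pop[: POP // 2]                # keep the better half
            pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]

        best = max(pop, key=fitness)
        print("selected features:", [i for i, bit in enumerate(best) if bit])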

  18. Ceramic blade attachment system

    DOEpatents

    Boyd, Gary L.

    1995-01-01

    A retainer ring is arranged to mount turbine blades to a turbine disk so that aerodynamic forces produced by a gas turbine engine are transferred from the turbine blades to the turbine disk to cause the turbine blades and turbine disk to rotate, but so that centrifugal forces of the turbine blades resulting from the rotation of the turbine blades and turbine disk are not transferred from the turbine blades to the turbine disk.

  19. Gaseous Inner Disks

    DTIC Science & Technology

    2007-01-01

    planetary systems (i.e., planetary masses, orbital radii, and eccentricities). For example, the lifetime of gas in the inner disk (limited by accretion onto the star; ... 2002). Thus, understanding how inner disks dissipate may impact our understanding of the origin of planetary orbital radii. Similarly, residual gas ... in which the orbiting giant planet carves out a "gap" in the disk. Low column densities would also be characteristic of a dissipating disk. Thus, we should ...

  20. HUBBLE UNCOVERS DUST DISK AROUND A MASSIVE BLACK HOLE

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Resembling a gigantic hubcap in space, a 3,700 light-year-diameter dust disk encircles a 300 million solar-mass black hole in the center of the elliptical galaxy NGC 7052. The disk, possibly a remnant of an ancient galaxy collision, will be swallowed up by the black hole in several billion years. Because the front end of the disk eclipses more stars than the back, it appears darker. Also, because dust absorbs blue light more effectively than red light, the disk is redder than the rest of the galaxy (this same phenomenon causes the Sun to appear red when it sets in a smoggy afternoon). This NASA Hubble Space Telescope image was taken with the Wide Field and Planetary Camera 2, in visible light. Details as small as 50 light-years across can be seen. Hubble's Faint Object Spectrograph (replaced by the STIS spectrograph in 1997) was used to observe hydrogen and nitrogen emission lines from gas in the disk. Hubble measurements show that the disk rotates like an enormous carousel, 341,000 miles per hour (155 kilometers per second) at 186 light-years from the center. The rotation velocity provides a direct measure of the gravitational force acting on the gas by the black hole. Though 300 million times the mass of our Sun, the black hole is still only 0.05 per cent of the total mass of the NGC 7052 galaxy. Despite its size, the disk is 100 times less massive than the black hole. Still, it contains enough raw material to make three million sun-like stars. The bright spot in the center of the disk is the combined light of stars that have crowded around the black hole due to its strong gravitational pull. This stellar concentration matches theoretical models linking stellar density to a central black hole's mass. NGC 7052 is a strong source of radio emission and has two oppositely directed `jets' emanating from the nucleus. (The jets are streams of energetic electrons moving in a strong magnetic field and unleashing radio energy). Because the jets in NGC 7052 are not perpendicular to the disk, it may indicate that the black hole and the dust disk in NGC 7052 do not have a common origin. One possibility is that the dust was acquired from a collision with a small neighboring galaxy, after the black hole had already formed. NGC 7052 is located in the constellation of Vulpecula, 191 million light-years from Earth. Credit: Roeland P. van der Marel (STScI), Frank C. van den Bosch (Univ. of Washington), and NASA. A caption and image files are available via the Internet at http://oposite.stsci.edu/pubinfo/1998/22.html.
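
    The quoted 300-million-solar-mass figure follows directly from Keplerian dynamics, M ≈ v²r/G, using the rotation speed and radius given in the caption; a quick check in Python with standard constants:

        # Enclosed mass from the disk rotation curve: M = v^2 r / G
        G = 6.674e-11            # m^3 kg^-1 s^-2
        M_SUN = 1.989e30         # kg
        LY = 9.461e15            # m
        v = 155e3                # m/s, rotation speed from the caption
        r = 186 * LY             # m, radius of the measurement
        M = v**2 * r / G
        print(f"{M / M_SUN:.1e} solar masses")   # ≈ 3.2e+08, matching the ~300 million quoted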

  1. Nitrogen Fractionation in Protoplanetary Disks from the H13CN/HC15N Ratio

    NASA Astrophysics Data System (ADS)

    Guzmán, V. V.; Öberg, K. I.; Huang, J.; Loomis, R.; Qi, C.

    2017-02-01

    Nitrogen fractionation is commonly used to assess the thermal history of solar system volatiles. With ALMA it is for the first time possible to directly measure ¹⁴N/¹⁵N ratios in common molecules during the assembly of planetary systems. We present ALMA observations of the H¹³CN and HC¹⁵N J=3-2 lines at 0.5″ angular resolution, toward a sample of six protoplanetary disks, selected to span a range of stellar and disk structure properties. Adopting a typical ¹²C/¹³C ratio of 70, we find comet-like ¹⁴N/¹⁵N ratios of 80-160 in five of the disks (3 T Tauri and 2 Herbig Ae disks) and lack constraints for one of the T Tauri disks (IM Lup). There are no systematic differences between T Tauri and Herbig Ae disks, or between full and transition disks within the sample. In addition, no correlation is observed between disk-averaged D/H and ¹⁴N/¹⁵N ratios in the sample. One of the disks, V4046 Sgr, presents unusually bright HCN isotopologue emission, enabling us to model the radial profiles of H¹³CN and HC¹⁵N. We find tentative evidence of an increasing ¹⁴N/¹⁵N ratio with radius, indicating that selective photodissociation in the inner disk is important in setting the ¹⁴N/¹⁵N ratio during planet formation.
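
    The ratios above come from the standard double-isotopologue method, stated here in generic form assuming optically thin emission (our restatement, not the paper's equations): because HCN itself is optically thick, ¹⁴N/¹⁵N is inferred from the rarer species via

        \frac{^{14}\mathrm{N}}{^{15}\mathrm{N}} \;\approx\; \frac{N(\mathrm{HCN})}{N(\mathrm{HC^{15}N})} \;=\; \left(\frac{^{12}\mathrm{C}}{^{13}\mathrm{C}}\right) \frac{N(\mathrm{H^{13}CN})}{N(\mathrm{HC^{15}N})} \;=\; 70\,\frac{N(\mathrm{H^{13}CN})}{N(\mathrm{HC^{15}N})},

    so an observed column ratio N(H¹³CN)/N(HC¹⁵N) of roughly 1.1-2.3 maps onto the quoted ¹⁴N/¹⁵N range of 80-160.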

  2. Orbiter Flying Qualities (OFQ) Workstation user's guide

    NASA Technical Reports Server (NTRS)

    Myers, Thomas T.; Parseghian, Zareh; Hogue, Jeffrey R.

    1988-01-01

    This project was devoted to the development of a software package, called the Orbiter Flying Qualities (OFQ) Workstation, for working with the OFQ Archives which are specially selected sets of space shuttle entry flight data relevant to flight control and flying qualities. The basic approach to creation of the workstation software was to federate and extend commercial software products to create a low cost package that operates on personal computers. Provision was made to link the workstation to large computers, but the OFQ Archive files were also converted to personal computer diskettes and can be stored on workstation hard disk drives. The primary element of the workstation developed in the project is the Interactive Data Handler (IDH) which allows the user to select data subsets from the archives and pass them to specialized analysis programs. The IDH was developed as an application in a relational database management system product. The specialized analysis programs linked to the workstation include a spreadsheet program, FREDA for spectral analysis, MFP for frequency domain system identification, and NIPIP for pilot-vehicle system parameter identification. The workstation also includes capability for ensemble analysis over groups of missions.

  3. The Evolution of Dust in the Multiphase ISM: Grain Destruction Processes

    NASA Technical Reports Server (NTRS)

    Wolfire, Mark

    1999-01-01

    This proposal covered year one of a long term project in which we acquired the necessary hardware and software needed to calculate grain destruction processes in the interstellar medium (ISM). The long term goal of this research is to develop a model for the dust evolution in the ISM capable of explaining observations of elemental depletions, the grain size distribution, and the emission characteristics of the ISM from the X-ray through the FIR. We purchased a SUN Ultra 10 workstation and peripheral devices including an Exabyte Tape drive, HP Laser Printer, and Seagate External Hard Disk. The PI installed the hardware and Solaris operating system on the workstation and integrated the hardware into the network. Software was also purchased to enable connections to the workstation from a PC (Hummingbird Exceed). Additional freeware required to carry out the proposed program was installed on the system including compilers (g77, gcc, g++), editors (emacs), a markup language (LaTeX), and display programs (WIP, XV, SAOtng). We have also successfully modified the required plot files to work with our system which display the results of grain processing.

  4. Experience in Using a Finite Element Stress and Vibration Package on a Minicomputer,

    DTIC Science & Technology

    1982-01-01

    as the Graphics Orientated Interactive Finite Element Time Sharing Package (GIFTS). This package has been running on a PDP11/60 minicomputer ... Unlike many other FEM packages, GIFTS consists of a collection of fully compatible special-purpose programs operating on a set of files on disk known ... matrix is initiated by running the appropriate program from the GIFTS library. The following is a list of the major GIFTS library programs with a

  5. Printer Multiplexing Among Multiple Z-100 Microcomputers.

    DTIC Science & Technology

    1985-12-01

    allows the printer to be used by any one of multiple Z-100's at a time. The SPOOL process sends the data through the CONTROL process to the printer or saves the data on the disk file.

  6. The Future is Hera: Analyzing Astronomical Data Over the Internet

    NASA Astrophysics Data System (ADS)

    Valencic, Lynne A.; Snowden, S.; Chai, P.; Shafer, R.

    2009-01-01

    Hera is the new data processing facility provided by the HEASARC at the NASA Goddard Space Flight Center for analyzing astronomical data. Hera provides all the preinstalled software packages, local disk space, and computing resources needed to do general processing of FITS format data files residing on the user's local computer, and to do advanced research using the publicly available data from High Energy Astrophysics missions. Qualified students, educators, and researchers may freely use the Hera services over the internet for research and educational purposes.

  7. A mysterious dust clump in a disk around an evolved binary star system.

    PubMed

    Jura, M; Turner, J

    1998-09-10

    The discovery of planets in orbit around the pulsar PSR1257+12 shows that planets may form around post-main-sequence stars. Other evolved stars, such as HD44179 (an evolved star which is part of the binary system that has expelled the gas and dust that make the Red Rectangle nebula), possess gravitationally bound orbiting dust disks. It is possible that planets might form from gravitational collapse in such disks. Here we report high-angular-resolution observations at millimetre and submillimetre wavelengths of the dust disk associated with the Red Rectangle. We find a dust clump with an estimated mass near that of Jupiter in the outer region of the disk. The clump is larger than our Solar System, and far beyond where planet formation would normally be expected, so its nature is at present unclear.

  8. Spitzer c2d Legacy, Circumstellar Disks around wTT Stars

    NASA Astrophysics Data System (ADS)

    Wahhaj, Zahed; c2d Legacy Team

    2007-05-01

    The Spitzer Legacy Project "From Molecular Cores to Planet-forming Disks" conducted a 3.6 to 70 μm photometric survey of roughly 160 weak-line T Tauri stars (wTTs) and 20 classical T Tauri stars (cTTs) in the nearby star-forming regions Chamaeleon, Lupus, Ophiuchus, and Taurus. WTTs are so named because they possess weaker H-alpha emission lines, signifying weaker disk accretion onto the star than in cTTs. The evolution of dust disks around these young stars (age ≲ 10 Myr) is key to understanding planet formation. From the observed infrared excesses, we infer the presence of circumstellar disks around 12% of wTTs and 75% of cTTs. However, when considering on-cloud sources only, the wTTs disk fraction is 22%, while it is only 6% for off-cloud sources, suggesting an older age for the latter. WTTs, while not discernibly younger than cTTs in age diagnostics, in general have disks which exhibit lower fractional luminosities and larger inner clearings. However, quite a few wTTs systems have fractional disk luminosities as high as cTTs systems. In light of these findings, wTTs seem to be transitional objects between cTTs and debris disks.

  9. HD 100453: An evolutionary link between protoplanetary disks and debris disks

    NASA Astrophysics Data System (ADS)

    Collins, Karen

    2008-12-01

    Herbig Ae stars are young stars usually surrounded by gas and dust in the form of a disk and are thought to evolve into planetary systems similar to our own. We present a multi-wavelength examination of the disk and environment of the Herbig Ae star HD 100453A, focusing on the determination of accretion rate, system age, and disk evolution. We characterize the accretion rate, showing that Chandra X-ray imagery is inconsistent with strongly accreting early F stars, that the disk lacks the conspicuous Fe II emission and continuum seen in FUV spectra of actively accreting Herbig Ae stars, and that FUSE, HST, and FEROS data suggest an accretion rate below ~2.5×10⁻¹⁰ M⊙ yr⁻¹. We confirm that HD 100453B is a common proper motion companion to HD 100453A, with spectral type M4.0V - M4.5V, and derive an age of 14 ± 4 Myr. We examine the Meeus et al. (2001) hypothesis that Meeus Group I sources, which have a mid-IR bump that can be fitted by a blackbody component, evolve into Meeus Group II sources, which have no such mid-IR bump. By considering stellar age and accretion rate evidence, we find the hypothesis to be invalid. Furthermore, we find that the disk characteristics of HD 100453A do not fit the traditional definition of a protoplanetary disk, a transitional disk, or a debris disk, and they may suggest a new class of disks linking gas-rich protoplanetary disks and gas-poor debris disks.

  10. Experience with procuring, deploying and maintaining hardware at remote co-location centre

    NASA Astrophysics Data System (ADS)

    Bärring, O.; Bonfillou, E.; Clement, B.; Coelho Dos Santos, M.; Dore, V.; Gentit, A.; Grossir, A.; Salter, W.; Valsan, L.; Xafi, A.

    2014-05-01

    In May 2012 CERN signed a contract with the Wigner Data Centre in Budapest for an extension of CERN's central computing facility beyond its current boundaries, set by the electrical power and cooling available for computing. The centre is operated as a remote co-location site providing rack space, electrical power, and cooling for server, storage, and networking equipment acquired by CERN. The contract includes a 'remote-hands' service for physical handling of hardware (rack mounting, cabling, pushing power buttons, ...) and maintenance repairs (swapping disks, memory modules, ...). However, only CERN personnel have network and console access to the equipment for system administration. This report gives an insight into the adaptations of hardware architecture and of procurement and delivery procedures undertaken to enable remote physical handling of the hardware. We also describe tools and procedures developed for automating the registration, burn-in testing, acceptance, and maintenance of the equipment, as well as an independent but important change to IT asset management (ITAM) developed in parallel as part of the CERN IT Agile Infrastructure project. Finally, we report on experience from the first large delivery of 400 servers and 80 SAS JBOD expansion units (24 drive bays) to Wigner in March 2013.

  11. Shaft flexibility effects on the forced response of a bladed-disk assembly

    NASA Technical Reports Server (NTRS)

    Khader, N.; Loewy, R. G.

    1990-01-01

    A modal analysis approach is used to study the forced response of an actual flexible bladed-disk-shaft system. Both in-plane and out-of-plane flexible deformations of the bladed-disk assembly are considered, in addition to its rigid-body translations and rotations, resulting from the bending of the supporting flexible shaft in two orthogonal planes. The effects of Coriolis forces and structural coupling between flexible and rigid disk motions on the system's response are investigated. Aerodynamic loads acting on the rotating and vibrating bladed-disk assembly are accounted for through a simple quasi-steady representation, to evaluate their influence, combined with shaft flexibility and Coriolis effects.

  12. CO Fundamental Emission from V836 Tauri

    DTIC Science & Technology

    2008-11-10

    Subject headings: circumstellar matter — planetary systems: formation — planetary systems: protoplanetary disks — stars: individual (V836 Tauri) — stars: pre-main-sequence. Online material: color. Only fragments of the abstract survive in this record: disks "that can be modeled as an optically thick disk that has an optically thin region (a hole or a gap) at smaller radii, have been suggested to be in the ..." and "... how either of these hypotheses may bear on our understanding of disk dissipation in this system."

  13. Designing a scalable video-on-demand server with data sharing

    NASA Astrophysics Data System (ADS)

    Lim, Hyeran; Du, David H.

    2000-12-01

    As current disk space and transfer speeds increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, our intention is to propose an algorithm that finds an initial condition and places videos on the disks in the system successfully. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in the demand for videos. Although this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) time at best, where M is the number of movies.
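
    A plausible reading of the placement step, consistent with the O(M log M) claim, is a greedy pass over videos in decreasing demand with a heap of disks; the sketch below (Python) is an assumed illustration, not necessarily the paper's actual algorithm.

        # Greedy video placement sketch: put each copy of the most-demanded
        # videos on the disk with the most bandwidth remaining. Sorting the M
        # videos dominates, giving the O(M log M) behavior mentioned above.
        import heapq

        def place(videos, disks):
            """videos: list of (name, demand, size, copies);
               disks: list of dicts with remaining 'bw' and 'cap'."""
            heap = [(-d["bw"], -d["cap"], i) for i, d in enumerate(disks)]
            heapq.heapify(heap)
            layout = {}
            for name, demand, size, copies in sorted(videos, key=lambda v: -v[1]):
                for _ in range(copies):
                    neg_bw, neg_cap, i = heapq.heappop(heap)
                    if disks[i]["cap"] < size:
                        return None          # out of resources: add servers/disks
                    disks[i]["cap"] -= size
                    disks[i]["bw"] -= demand / copies   # demand split across copies
                    layout.setdefault(name, []).append(i)
                    heapq.heappush(heap, (-disks[i]["bw"], -disks[i]["cap"], i))
            return layout

        disks = [{"bw": 100.0, "cap": 50.0} for _ in range(4)]
        videos = [("A", 60.0, 4.0, 2), ("B", 30.0, 4.0, 1), ("C", 10.0, 4.0, 1)]
        print(place(videos, disks))   # e.g. {'A': [0, 1], 'B': [2], 'C': [3]}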

  15. Resolved Dual-Frequency Observations of the Debris Disk Around AU Mic: Strengths of Bodies in the Collisional Cascade

    NASA Astrophysics Data System (ADS)

    Carter, Evan; Hughes, A. Meredith; Daley, Cail; Flaherty, Kevin; Pan, Margaret; Schlichting, Hilke; Chiang, Eugene; MacGregor, Meredith Ann; Wilner, David; Dent, Bill; Carpenter, John; Andrews, Sean; Moor, Attila; Kospal, Agnes

    2018-01-01

    Debris disks are hallmarks of mature planetary systems, with second-generation dust produced via collisions between Pluto-like planetesimals. The vertical structure of a debris disk encodes unique information about the dynamical state of the system, particularly at millimeter wavelengths where gravitational effects dominate over the effects of stellar radiation. We present 450 μm Atacama Large Millimeter/sub-millimeter Array (ALMA) observations of the edge-on debris disk around AU Mic, a nearby (d = 9.91 ± 0.10 pc) M1-type star. The 0.3'' angular resolution of the data allows us to spatially resolve the scale height of the disk, complementing previous observations at a wavelength of 1.3 mm. By resolving the vertical structure of the disk at these two widely-separated frequencies, we are able to spatially resolve the spectral index and study variations in the grain size distribution as a function of disk radius. The comparison of scale heights for two different wavelengths and therefore particle sizes also constrains the velocity dispersion as a function of grain size, which allows us to probe the strengths of bodies in the collisional cascade for the first time outside the Solar System.
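
    The dual-frequency spectral index mentioned here follows the standard definition (our notation, not the authors'): for flux densities F_ν ∝ ν^α measured at the two ALMA bands,

        \alpha_{\mathrm{mm}} \;=\; \frac{\ln\!\left(F_{450\,\mu\mathrm{m}}/F_{1.3\,\mathrm{mm}}\right)}{\ln\!\left(\nu_{450\,\mu\mathrm{m}}/\nu_{1.3\,\mathrm{mm}}\right)},

    so resolving both maps at matched resolution yields α_mm as a function of disk radius, which in turn traces the grain size distribution.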

  16. Gravitational Instabilities in Protostellar and Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Durisen, R. H.; Mejia, A. C.; Pickett, B. K.

    Self-gravity in fluid and particle systems is the primary mechanism for the creation of structure in the Universe on astronomical scales. The rapidly rotating Solar System-sized disks which orbit stars during the early phases of star and planet formation can be massive and thus susceptible to spontaneous growth of spiral distortions driven by disk self-gravity. These are called gravitational instabilities (GI's). They can be important sources of mass and angular momentum transport due to the long-range torques they generate; and, if strong enough, they may fragment the disk into bound lumps with masses in the range of gas giant planets and brown dwarfs. My research group has been using numerical 3D hydrodynamics techniques to study the growth and nonlinear behavior of GI's in disks around young stars. Our simulations have demonstrated the sensitivity of outcomes to the thermal physics of the disks and have helped to delineate conditions conducive to the formation of dense clumps. We are currently concentrating our efforts on determining how GI's affect the long-term evolution and appearance of young stellar disks, with the hope of finding characteristic GI signatures by which we may recognize their occurrence in real systems.

  17. RESOLVED CO GAS INTERIOR TO THE DUST RINGS OF THE HD 141569 DISK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flaherty, Kevin M.; Hughes, A. Meredith; Zachary, Julia

    2016-02-10

    The disk around HD 141569 is one of a handful of systems whose weak infrared emission is consistent with a debris disk, but still has a significant reservoir of gas. Here we report spatially resolved millimeter observations of the CO(3-2) and CO(1-0) emission as seen with the Submillimeter Array and CARMA. We find that the excitation temperature for CO is lower than expected from cospatial blackbody grains, similar to previous observations of analogous systems, and derive a gas mass that lies between that of gas-rich primordial disks and gas-poor debris disks. The data also indicate a large inner hole in the CO gas distribution and an outer radius that lies interior to the outer scattered-light rings. This spatial distribution, with the dust rings just outside the gaseous disk, is consistent with the expected interactions between gas and dust in an optically thin disk. This indicates that gas can have a significant effect on the location of the dust within debris disks.

  18. Out-of-Core Streamline Visualization on Large Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu

    1997-01-01

    It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and the operating system's paging algorithms.
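
    The memory-management policy described above (octree leaves in disk files, a small resident set fetched on demand) can be sketched with an LRU cache; the code below (Python) is an assumed illustration of the pattern, with a stand-in loader in place of the real leaf files.

        # LRU-cached, on-demand loading of octree leaves during streamline tracing.
        from collections import OrderedDict

        class LeafCache:
            """Keep at most `max_leaves` octree leaves in memory, loading on demand."""
            def __init__(self, load, max_leaves=8):
                self.load = load            # callable: leaf key -> leaf data
                self.max_leaves = max_leaves
                self.cache = OrderedDict()

            def get(self, key):
                if key in self.cache:
                    self.cache.move_to_end(key)         # mark most recently used
                else:
                    if len(self.cache) >= self.max_leaves:
                        self.cache.popitem(last=False)  # evict least recently used
                    self.cache[key] = self.load(key)
                return self.cache[key]

        def leaf_key(point, leaf_size=1.0):
            # Uniform-grid stand-in for the real octree lookup
            return tuple(int(c // leaf_size) for c in point)

        # Stand-in loader; a real one would read "leaf_<key>.bin" files written
        # by the preprocessing pass that restructures the raw mesh.
        cache = LeafCache(load=lambda key: {"key": key, "cells": []})
        for p in [(0.2, 0.1, 0.9), (1.4, 0.1, 0.9), (0.3, 0.2, 1.0)]:
            leaf = cache.get(leaf_key(p))   # only leaves the streamline touches are resident
        print(list(cache.cache))            # keys of the leaves currently in memory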

  19. A Study of Inner Disk Gas around Young Stars in the Lupus Complex

    NASA Astrophysics Data System (ADS)

    Arulanantham, Nicole Annemarie; France, Kevin; Hoadley, Keri

    2018-06-01

    We present a study of molecular hydrogen at the surfaces of the disks around five young stars in the Lupus complex: RY Lupi, RU Lupi, MY Lupi, Sz 68, and TYC 7851. Each system was observed with the Cosmic Origins Spectrograph (COS) onboard the Hubble Space Telescope (HST), and we detect a population of fluorescent H2 in all five sources. The temperatures required for Lyα fluorescence to proceed (T ~ 1500-2500 K) place the gas within ~15 AU of the central stars. We have used these features to extract the radial distribution of H2 in the inner disk, where planet formation may already be taking place. The objects presented here have very different outer disk morphologies, as seen by ALMA via 890 micron dust continuum emission, ranging from full disks with no signs of cavities to systems with large regions that are clearly depleted (e.g. TYC 7851, with a cavity extending to 75 and 60 AU in dust and gas, respectively). Our results are interpreted in conjunction with sub-mm data from the five systems in an effort to piece together a more complete picture of the overall disk structure. We have previously applied this multi-wavelength approach to RY Lupi, including 4.7 micron IR-CO emission in our analysis. These IR-CO and UV-H2 observations were combined with 10 micron silicate emission, the 890 micron dust continuum, and 1.3 mm CO observations from the literature to infer a gapped structure in the inner disk. This single system has served as a testing ground for the larger Lupus complex sample, which we compare here to examine any trends between the outer disk morphology and inner disk gas distributions.

  20. Debris Disk Dust Characterization through Spectral Types: Deep Visible-Light Imaging of Nine Systems

    NASA Astrophysics Data System (ADS)

    Choquet, Elodie

    2017-08-01

    We propose STIS coronagraphy of 9 debris disks recently seen in the near-infrared from our re-analysis of archival NICMOS data. STIS coronagraphy will provide complementary visible-light images that will let us characterize the disk colors needed to place constraints on dust grain sizes, albedos, and anisotropy of scattering of these disks. With 3 times finer angular resolution and much better sensitivity, our STIS images will dramatically surpass the NICMOS discovery images, and will more clearly reveal disk local structures, cleared inner regions, and test for large-scale asymmetries in the dust distributions possibly triggered by associated planets in these systems. The exquisite sensitivity to visible-light scattering by submicron particles uniquely offered by STIS coronagraphy will let us detect and spatially characterize the diffuse halo of dust blown out of the systems by the host star radiative pressure. Our sample includes disks around 3 low-mass stars, 3 solar-type stars, and 3 massive A stars; together with our STIS+NICMOS imaging of 6 additional disks around F and G stars, our sample covers the full range of spectral types and will let us perform a comparative study of dust distribution properties as a function of stellar mass and luminosity. Our sample makes up more than 1/3 of all debris disks imaged in scattered light to date, and will offer the first homogeneous characterization of the visible-light to near-IR properties of debris disk systems over a large range of spectral types. Our program will let us analyze how the dynamical balance is affected by initial conditions and star properties, and how it may be perturbed by gas drag or planet perturbations.
