Note: This page contains sample records for the topic underlying file system from Science.gov. While these samples are representative of the content of Science.gov, they are not comprehensive nor are they the most current set. We encourage you to perform a real-time search of Science.gov to obtain the most current and comprehensive results. Last update: August 15, 2014.
1

Serverless network file systems  

Microsoft Academic Search

We propose a new paradigm for network file system design: serverless network file systems. While traditional network file systems rely on a central server machine, a serverless system utilizes workstations cooperating as peers to provide all file system services. Any machine in the system can store, cache, or control any block of data. Our approach uses this location independence, in

Thomas E. Anderson; Michael D. Dahlin; Jeanna M. Neefe; David A. Patterson; Drew S. Roselli; Randolph Y. Wang

1996-01-01

2

Serverless Network File Systems  

Microsoft Academic Search

In this paper, we propose a new paradigm for network file system design, serverless network file systems. While traditional network file systems rely on a central server machine, a serverless system utilizes workstations cooperating as peers to provide all file system services. Any machine in the system can store, cache, or control any block of data. Our approach uses this

Thomas E. Anderson; Michael Dahlin; Jeanna M. Neefe; David A. Patterson; Drew S. Roselli; Randolph Y. Wang

1995-01-01

3

In Search of an API for Scalable File Systems: Under the Table or Above it.  

National Technical Information Service (NTIS)

Cluster file systems have been used by the high performance computing (HPC) community at ever larger scales for more than a decade. These cluster file systems, including IBM GPFS, Panasas PanFS, PVFS and Lustre, are required to meet the scalability demand...

G. A. Gibson G. R. Ganger J. Lopez M. Polte S. Patil

2009-01-01

4

A File Archival System  

NASA Technical Reports Server (NTRS)

ARCH, file archival system for DEC VAX, provides for easy offline storage and retrieval of arbitrary files on DEC VAX system. System designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of same programs and associated files.

Fanselow, J. L.; Vavrus, J. L.

1984-01-01

5

Accessing files in an Internet: The Jade file system  

NASA Technical Reports Server (NTRS)

Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
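
To make the private-name-space idea above concrete, here is a minimal Python sketch (not Jade's actual code; all class and method names are invented) of a per-user mount table that maps path prefixes onto heterogeneous backends, allows several backends under one directory, and lets one logical name space be mounted inside another.

    class LocalFS:
        """Toy backend standing in for one underlying file system."""
        def __init__(self, root):
            self.root = root
        def read(self, path):
            with open(self.root + path, "rb") as f:
                return f.read()

    class NameSpace:
        """A private, per-user logical name space (hypothetical sketch)."""
        def __init__(self):
            self.mounts = {}          # mount point -> list of backends

        def mount(self, prefix, backend):
            # Several file systems may share one directory, as Jade allows.
            self.mounts.setdefault(prefix, []).append(backend)

        def resolve(self, path):
            # Longest-prefix match against the private mount table.
            for prefix in sorted(self.mounts, key=len, reverse=True):
                if path.startswith(prefix):
                    rest = path[len(prefix):] or "/"
                    for backend in self.mounts[prefix]:
                        try:
                            return backend.read(rest)
                        except FileNotFoundError:
                            continue
            raise FileNotFoundError(path)

    class MountedNameSpace:
        """Wraps a NameSpace so one logical name space can mount another."""
        def __init__(self, ns):
            self.ns = ns
        def read(self, path):
            return self.ns.resolve(path)

A real implementation would dispatch to NFS, AFS, or FTP protocol clients rather than to the local disk, but the longest-prefix lookup in a per-user mount table is the essential mechanism the abstract describes.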

Peterson, Larry L.; Rao, Herman C.

1991-01-01

6

Accessing files in an internet - The Jade file system  

NASA Technical Reports Server (NTRS)

Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

Rao, Herman C.; Peterson, Larry L.

1993-01-01

7

The Google file system  

Microsoft Academic Search

We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While sharing many of the same goals as previous distributed file systems, our design has been driven

Sanjay Ghemawat; Howard Gobioff; Shun-Tak Leung

2003-01-01

8

Network file storage system  

SciTech Connect

The Common File System (CFS) is a large, online centralized storage system for the Los Alamos National Laboratory's computer network. The CFS provides Los Alamos computer users a relatively simple set of primitives with which they can store and retrieve files. A tree-structured directory allows the users to organize their data in a logical and reasonable manner. Eighteen months of operational experience and statistics have provided considerable insight into the best methods of providing optimum service and response to CFS users. Automatically moving, or migrating, files between storage devices based on usage characteristics has provided a cost-effective storage system.

Christman, R.D.; Collins, M.W.; Devaney, M.A.; Willbanks, E.W.

1981-07-01

9

Virtual file system for PSDS  

NASA Technical Reports Server (NTRS)

This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper, protected from disaster, and accumulative to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower performance optical media based on a least-frequency-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.
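
As an illustration of the least-frequently-used migration pass described above, here is a small Python sketch; the catalog structure, the space threshold, and the move_to_slow_tier callback are hypothetical, not PSDS internals.

    from dataclasses import dataclass

    @dataclass
    class FileRecord:
        path: str
        size_bytes: int
        access_count: int      # how often the file was used in the last period
        resident: bool = True  # True = on fast disk, False = on optical media

    def select_migration_victims(catalog, bytes_needed):
        """Pick the least-frequently-used resident files until enough space
        on the fast tier would be freed."""
        victims, freed = [], 0
        for rec in sorted(catalog, key=lambda r: r.access_count):
            if not rec.resident:
                continue
            victims.append(rec)
            freed += rec.size_bytes
            if freed >= bytes_needed:
                break
        return victims

    def migrate(catalog, bytes_needed, move_to_slow_tier):
        for rec in select_migration_victims(catalog, bytes_needed):
            move_to_slow_tier(rec.path)   # copy to optical cartridge, then free disk
            rec.resident = False

    # Example run with a toy catalog:
    catalog = [FileRecord("/psds/a.doc", 40_000_000, 2),
               FileRecord("/psds/b.doc", 10_000_000, 57),
               FileRecord("/psds/c.doc", 25_000_000, 1)]
    migrate(catalog, 50_000_000, move_to_slow_tier=lambda p: None)
    print([r.path for r in catalog if not r.resident])   # a.doc and c.doc migrate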

Runnels, Tyson D.

1993-01-01

10

The Global File System  

NASA Technical Reports Server (NTRS)

The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
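
The device-maintained lock that GFS relies on for atomic read-modify-write can be illustrated, very loosely, with an advisory POSIX lock standing in for the device lock. This is only a sketch of the idea (GFS implements the lock on the storage device itself, not in the host OS), it assumes a Unix host, and the counter file path is made up.

    import fcntl, os, struct

    PATH = "/tmp/shared_counter.bin"   # stands in for a shared disk block

    def atomic_increment(path=PATH):
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX)          # acquire the "device" lock
            raw = os.read(fd, 8)
            value = struct.unpack("<Q", raw)[0] if len(raw) == 8 else 0
            os.lseek(fd, 0, os.SEEK_SET)
            os.write(fd, struct.pack("<Q", value + 1))   # modify and write back
            return value + 1
        finally:
            fcntl.flock(fd, fcntl.LOCK_UN)          # release so other nodes proceed
            os.close(fd)

    if __name__ == "__main__":
        print(atomic_increment())

The point of the sketch is only the ordering: lock, read, modify, write, unlock, which is what the storage-device lock guarantees across cluster nodes in the GFS design.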

Soltis, Steven R.; Ruwart, Thomas M.; O'Keefe, Matthew T.

1996-01-01

11

File Type Classification for Adaptive Object File System  

Microsoft Academic Search

This paper gives an overview of a novel storage management concept, called adaptive object file system (AOFS). The design of the file type classification module in AOFS is emphasized. The design attempts to increase the efficiency through the dynamic tuning technique, which automatically classifies files using attributes and access pattern. The file classification, thus, allows files to be stored in

Phond Phunchongharn; S. Pornnapa; T. Achalakul

2006-01-01

12

Automatic-adaptive File Striping of Parallel File System  

Microsoft Academic Search

This paper studies a fuzzy logic rule base for adaptive striping of files across multiple disks. The rule base is built on an analytical model of disk contention that includes disk physical parameters and file request sizes, and on automatic classification of access patterns and real-time monitored data from the file system. As the file system load is low, the

WEI Wenguo

2006-01-01

13

The Galley Parallel File System  

NASA Technical Reports Server (NTRS)

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

Nieuwejaar, Nils; Kotz, David

1996-01-01

14

The Galley Parallel File System  

NASA Technical Reports Server (NTRS)

As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

Nieuwejaar, Nils; Kotz, David

1996-01-01

15

DMFS: A Data Migration File System for NetBSD  

NASA Technical Reports Server (NTRS)

I have recently developed dmfs, a Data Migration File System, for NetBSD. This file system is based on the overlay file system, which is discussed in a separate paper, and provides kernel support for the data migration system being developed by my research group here at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. Our data migration system provides archiving and file migration services. System utilities scan the dmfs file system for recently modified files, and archive them to two separate tape stores. Once a file has been doubly archived, files larger than a specified size will be truncated to that size, potentially freeing up large amounts of the underlying file store. Some sites will choose to retain none of the file (deleting its contents entirely from the file system) while others may choose to retain a portion, for instance a preamble describing the remainder of the file. The dmfs layer coordinates access to the file, retaining user-perceived access and modification times, file size, and restricting access to partially migrated files to the portion actually resident. When a user process attempts to read from the non-resident portion of a file, it is blocked and the dmfs layer sends a request to a system daemon to restore the file. As more of the file becomes resident, the user process is permitted to begin accessing the now-resident portions of the file. For simplicity, our data migration system divides a file into two portions, a resident portion followed by an optional non-resident portion. Also, a file is in one of three states: fully resident, fully resident and archived, and (partially) non-resident and archived. For a file which is only partially resident, any attempt to write or truncate the file, or to read a non-resident portion, will trigger a file restoration. Truncations and writes are blocked until the file is fully restored so that a restoration which only partially succeeds does not leave the file in an indeterminate state with portions existing only on tape and other portions only in the disk file system. We chose layered file system technology as it permits us to focus on the data migration functionality, and permits end system administrators to choose the underlying file store technology. We chose the overlay layered file system instead of the null layer for two reasons: first to permit our layer to better preserve metadata integrity and second to prevent even root processes from accessing migrated files. This is achieved as the underlying file store becomes inaccessible once the dmfs layer is mounted. We are quite pleased with how the layered file system has turned out. Of the 45 vnode operations in NetBSD, 20 (forty-four percent) required no intervention by our file layer - they are passed directly to the underlying file store. Of the twenty-five we do intercept, nine (such as vop_create()) are intercepted only to ensure metadata integrity. Most of the functionality was concentrated in five operations: vop_read, vop_write, vop_getattr, vop_setattr, and vop_fcntl. The first four are the core operations for controlling access to migrated files and preserving the user experience. vop_fcntl, a call generated for a certain class of fcntl codes, provides the command channel used by privileged user programs to communicate with the dmfs layer.
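
The resident-prefix/non-resident-tail behavior described above can be sketched in a few lines of Python. This is illustrative only (the real dmfs layer works at the vnode level inside the NetBSD kernel), and the class name and restore callback are invented.

    class MigratedFile:
        """Toy model: a resident prefix plus an optional archived tail."""
        def __init__(self, resident_bytes, full_size, restore_from_tape):
            self.data = bytearray(resident_bytes)
            self.full_size = full_size
            self.restore_from_tape = restore_from_tape   # hypothetical daemon call

        def _ensure_resident(self):
            # In dmfs the user process blocks while a system daemon restores
            # the file; here we simply call the daemon synchronously.
            if len(self.data) < self.full_size:
                self.data += self.restore_from_tape()

        def read(self, offset, length):
            end = min(offset + length, self.full_size)
            if end > len(self.data):          # touches the non-resident portion
                self._ensure_resident()
            return bytes(self.data[offset:end])

        def write(self, offset, buf):
            self._ensure_resident()           # writes trigger a full restore first
            self.data[offset:offset + len(buf)] = buf

    # Example: 4 of 10 bytes resident; reading offset 6 forces a restore.
    f = MigratedFile(b"HEAD", 10, restore_from_tape=lambda: b"TAIL42")
    print(f.read(6, 4))                       # b'IL42' after restoration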

Studenmund, William

1999-01-01

16

Alternatives of Implementing a Cluster File Systems  

Microsoft Academic Search

With the emergence of Storage Networking, distributed file systems that allow data sharing through shared disks will become vital. We refer to Cluster File Systems as distributed file systems optimized for environments of clustered servers. The requirement for such file systems is that they guarantee file system consistency while allowing shared access from multiple nodes in a shared-disk environment. In

Yoshitake Shinkai; Yoshihiro Tsuchiya; Takeo Murakami

17

Accessing Files in an Internet: The Jade File System.  

National Technical Information Service (NTIS)

Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file syste...

L. L. Peterson H. C. Rao

1991-01-01

18

Intra-file Security for a Distributed File System  

Microsoft Academic Search

Cryptographic file systems typically provide security by encrypting entire files or directories. This has the advantage of simplicity, but does not allow for fine-grained protection of data within very large files. This is not an issue in most general-purpose systems, but can be very important in scientific applications where some but not all of the output data is sensitive

Scott A. Banachowski; Zachary N. J. Peterson; Ethan L. Miller; Scott A. Brandt

2002-01-01

19

DMFS: A Data Migration File System for NetBSD  

NASA Technical Reports Server (NTRS)

I have recently developed DMFS, a Data Migration File System, for NetBSD. This file system provides kernel support for the data migration system being developed by my research group at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. This paper will first describe our data migration system to provide a context for DMFS, then it will describe DMFS. It also will describe the changes to NetBSD needed to make DMFS work. Then it will give an overview of the file archival and restoration procedures, and describe how some typical user actions are modified by DMFS. Lastly, the paper will present simple performance measurements which indicate that there is little performance loss due to the use of the DMFS layer.

Studenmund, William

2000-01-01

20

A File System for Information Management  

Microsoft Academic Search

Nebula is a file system that explicitly supports information management. It differs from traditional systems in three important ways. First, Nebula implements files as sets of attributes. Each attribute describes some property of the file such as owner, protection, functions defined, sections specified, project, or file type. The content of the file is represented by a special text attribute. Second, Nebula supports associative access of files within

A Dharap; Bill Camargo; C. Mic Bowman; Mrinal Baruah; Sunil Potti

1994-01-01

21

DFS: A De-Fragmented File System  

Microsoft Academic Search

Small file accesses are still limited by disk head movement on modern disk drives with the high disk bandwidth. Small file performance can be improved by grouping and clustering, each of which places multiple files in a directory and places blocks of the same file on disks contiguously. These schemes make it possible for file systems to

Woo Hyun Ahn; Kyungbaek Kim; Yongjin Choi

2002-01-01

22

Zebra: A Striped Network File System.  

National Technical Information Service (NTIS)

The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers effic...

J. H. Hartman J. K. Ousterhout

1992-01-01

23

Fail-safe WORM file system  

NASA Astrophysics Data System (ADS)

Most operating systems have no fail-safe features built into their file systems. They rely on recovery programs to repair the file systems after failures have occurred. These actions often result in loss of files, and sometimes the files are not recoverable. The loss of data in high capacity storage devices is extremely costly. We illustrate how robust file systems may be built for the WORM optical disks.

Ooi, B. C.

1991-03-01

24

Tuning HDF5 for Lustre File Systems  

SciTech Connect

HDF5 is a cross-platform parallel I/O library that is used by a wide variety of HPC applications for the flexibility of its hierarchical object-database representation of scientific data. We describe our recent work to optimize the performance of the HDF5 and MPI-IO libraries for the Lustre parallel file system. We selected three different HPC applications to represent the diverse range of I/O requirements, and measured their performance on three different systems to demonstrate the robustness of our optimizations across different file system configurations and to validate our optimization strategy. We demonstrate that the combined optimizations improve HDF5 parallel I/O performance by up to 33 times in some cases running close to the achievable peak performance of the underlying file system and demonstrate scalable performance up to 40,960-way concurrency.
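
A small sketch of the kind of tuning the abstract refers to: collective MPI-IO writes through HDF5, sized to line up with the Lustre stripe size. This assumes an MPI-enabled HDF5/h5py build; the 1 MiB stripe size, the file name, and the dataset layout are example values chosen for illustration, not recommendations from the paper.

    from mpi4py import MPI
    import h5py
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, nprocs = comm.Get_rank(), comm.Get_size()

    stripe_size = 1 << 20                 # assume the directory uses 1 MiB stripes
    per_rank = stripe_size // 8           # 8-byte floats: one stripe's worth per rank

    with h5py.File("tuned.h5", "w", driver="mpio", comm=comm) as f:
        dset = f.create_dataset("data", shape=(nprocs * per_rank,), dtype="f8",
                                chunks=(per_rank,))     # chunk == one stripe's worth
        start = rank * per_rank
        with dset.collective:             # collective MPI-IO write
            dset[start:start + per_rank] = np.full(per_rank, rank, dtype="f8")

Run under MPI (for example, mpiexec -n 4 python on this script); on a Lustre system the target directory would typically be striped beforehand with the lfs setstripe tool so that the aligned writes fall on stripe boundaries.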

Howison, Mark; Koziol, Quincey; Knaak, David; Mainzer, John; Shalf, John

2010-09-24

25

Remote file inquiry (RFI) system  

NASA Technical Reports Server (NTRS)

System interrogates and maintains user-definable data files from remote terminals, using English-like, free-form query language easily learned by persons not proficient in computer programming. System operates in asynchronous mode, allowing any number of inquiries within limitation of available core to be active concurrently.

1975-01-01

26

Measurements of a Distributed File System  

Microsoft Academic Search

We analyzed the user-level file access patterns and caching behavior of the Sprite distributed file system. The first part of our analysis repeated a study done in 1985 of the BSD UNIX file system. We found that file throughput has increased by a factor of 20 to an average of 8 Kbytes per second per active user over 10-minute intervals,

Mary G. Baker; John H. Hartman; Michael D. Kupfer; Ken W. Shirriff; John K. Ousterhout

1991-01-01

27

The Future of the Andrew File System  

ScienceCinema

The talk will discuss the ten operational capabilities that have made AFS unique in the distributed file system space and how these capabilities are being expanded upon to meet the needs of the 21st century. Derrick Brashear and Jeffrey Altman will present a technical road map of new features and technical innovations that are under development by the OpenAFS community and Your File System, Inc. funded by a U.S. Department of Energy Small Business Innovation Research grant. The talk will end with a comparison of AFS to its modern-day competitors.

28

The Future of the Andrew File System  

ScienceCinema

The talk will discuss the ten operational capabilities that have made AFS unique in the distributed file system space and how these capabilities are being expanded upon to meet the needs of the 21st century. Derrick Brashear and Jeffrey Altman will present a technical road map of new features and technical innovations that are under development by the OpenAFS community and Your File System, Inc. funded by a U.S. Department of Energy Small Business Innovation Research grant. The talk will end with a comparison of AFS to its modern-day competitors.

None

2011-04-25

29

The Future of the Andrew File System  

SciTech Connect

The talk will discuss the ten operational capabilities that have made AFS unique in the distributed file system space and how these capabilities are being expanded upon to meet the needs of the 21st century. Derrick Brashear and Jeffrey Altman will present a technical road map of new features and technical innovations that are under development by the OpenAFS community and Your File System, Inc. funded by a U.S. Department of Energy Small Business Innovation Research grant. The talk will end with a comparison of AFS to its modern-day competitors.

None

2011-02-23

30

The Jade File System. Ph.D. Thesis  

NASA Technical Reports Server (NTRS)

File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its design. The prototype consists of interfaces to the Unix File System, the Sun Network File System, and the File Transfer Protocol.

Rao, Herman Chung-Hwa

1991-01-01

31

The Hadoop Distributed File System  

Microsoft Academic Search

The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications. In a large cluster, thousands of servers both host directly attached storage and execute user application tasks. By distributing storage and computation across many servers, the resource can grow with demand while remaining

Konstantin Shvachko; Hairong Kuang; Sanjay Radia; Robert Chansler

2010-01-01

32

Beyond a Terabyte File System  

NASA Technical Reports Server (NTRS)

The Numerical Aerodynamics Simulation Facility's (NAS) CRAY C916/1024 accesses a "virtual" on-line file system, which is expanding beyond a terabyte of information. This paper will present some options to fine tuning Data Migration Facility (DMF) to stretch the online disk capacity and explore the transitions to newer devices (STK 4490, ER90, RAID).

Powers, Alan K.

1994-01-01

33

Operation Shipping for Mobile File Systems  

Microsoft Academic Search

This paper addresses a bottleneck problem in mobile file systems: the propagation of updated large files from a weakly-connected client to its servers. It proposes an efficient mechanism called operation shipping or operation-based update propagation. In the new mechanism, the client ships the user operation that updated the large files, rather than the files themselves, across the weak

Yui-wah Lee; Kwong-sak Leung; Mahadev Satyanarayanan

2002-01-01

34

Separating key management from file system security  

Microsoft Academic Search

No secure network file system has ever grown to span the Internet. Existing systems all lack adequate key management for security at a global scale. Given the diversity of the Internet, any particular mechanism a file system employs to manage keys will fail to support many types of use. We propose separating key management from file system security, letting the world

David Mazières; Michael Kaminsky; M. Frans Kaashoek; Emmett Witchel

2000-01-01

35

Separating key management from file system security  

Microsoft Academic Search

No secure network file system has ever grown to span the Internet. Existing systems all lack adequate key management for security at a global scale. Given the diversity of the Internet, any particular mechanism a file system employs to manage keys will fail to support many types of use. We propose separating key management from file system security, letting the world

David Mazières; Michael Kaminsky; M. Frans Kaashoek; Emmett Witchel

1999-01-01

36

UsiFe: a user space file system with support for intra-file encryption  

NASA Astrophysics Data System (ADS)

This paper proposes a new paradigm for the design of cryptographic filesystems. Traditionally, cryptographic file systems have mainly focused on encrypting entire files or directories. In this paper, we envisage encryption at a finer granularity, i.e. encrypting parts of files. Such an approach is useful for protecting parts of large files that typically feature in novel applications focused on handling a large amount of scientific data, GIS, and XML data. We extend prior work by implementing a user level file system on Linux, UsiFe, which supports fine grained encryption by extending the popular ext2 file system. We further explore two paradigms in which the user is agnostic to encryption in the underlying filesystem, and the user is aware that a file contains encrypted content. Popular file formats like XML, PDF, and PostScript can leverage both of these models to form the basis of interactive applications that use fine grained access control to selectively hide data. Lastly, we measure the performance of UsiFe, and observe that we can support file access for partially encrypted files with less than 15% overhead.

Sharma, Rohan; Kallurkar, Prathmesh; Kumar, Saurabh; Sarangi, Smruti R.

2011-12-01

37

Improving File System Performance by Striping  

NASA Technical Reports Server (NTRS)

This document discusses the performance and advantages of striped file systems on the SGI AD workstations. Performance of several striped file system configurations are compared and guidelines for optimal striping are recommended.

Lam, Terance L.; Kutler, Paul (Technical Monitor)

1998-01-01

38

Zebra: A striped network file system  

NASA Technical Reports Server (NTRS)

The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity update.
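
A tiny Python sketch of the parity idea described above (fragment size and helper names are invented for illustration): each stripe's parity fragment is the XOR of its data fragments, so any single lost fragment can be rebuilt from the survivors plus parity.

    FRAG = 4   # fragment size in bytes (tiny, for illustration)

    def make_stripe(data, nservers):
        """Split a client's write stream into stripe fragments plus XOR parity."""
        frags = [data[i*FRAG:(i+1)*FRAG].ljust(FRAG, b"\0") for i in range(nservers)]
        parity = bytearray(FRAG)
        for frag in frags:
            parity = bytearray(a ^ b for a, b in zip(parity, frag))
        return frags, bytes(parity)

    def rebuild(frags, parity, lost_index):
        """Recover the fragment on a failed server from the survivors and parity."""
        acc = bytearray(parity)
        for i, frag in enumerate(frags):
            if i != lost_index:
                acc = bytearray(a ^ b for a, b in zip(acc, frag))
        return bytes(acc)

    frags, parity = make_stripe(b"log-structured!!", nservers=4)
    assert rebuild(frags, parity, lost_index=2) == frags[2]
    print("fragment 2 recovered:", rebuild(frags, parity, lost_index=2))

Because Zebra stripes each client's write stream rather than individual files, a full stripe of fresh data is normally available when parity is computed, which is what makes the parity update efficient.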

Hartman, John H.; Ousterhout, John K.

1992-01-01

39

Metadata Efficiency in Versioning File Systems  

Microsoft Academic Search

Versioning file systems retain earlier versions of modified files, allowing recovery from user mistakes or system corruption. Unfortunately, conventional versioning systems do not efficiently record large numbers of versions. In particular, versioned metadata can consume as much space as versioned data. This paper examines two space-efficient metadata structures for versioning file systems and describes their integration into

Craig A. N. Soules; Garth R. Goodson; John D. Strunk; Gregory R. Ganger

2003-01-01

40

A Stackable File System Interface For Linux  

Microsoft Academic Search

Linux is a popular operating system that is rapidly evolving due to being Open Source and having many developers. The Linux kernel comes with more than two dozen file systems, all of which are native: they access device drivers directly. Native file systems are harder to develop. Stackable file systems, however, are easier to develop because they

Erez Zadok; Ion Badulescu

1999-01-01

41

Integrating mass storage and file systems  

Microsoft Academic Search

The authors describe current and anticipated work at the Center for Information Technology Integration at the University of Michigan in developing and integrating mass storage with distributed file systems, specifically with the Andrew File System (AFS). They present a specific approach to integrating AFS with mass storage: they consider the mass store itself to be the file system, not a

C. J. Antonelli; P. Honeyman

1993-01-01

42

Texas Natural Resources Information System. File Description Report.  

ERIC Educational Resources Information Center

Descriptions are given for the 164 computerized files that comprise the Texas Natural Resources Information System (TNRIS). The system provides natural resources information to federal, state, regional, and local and private entities. File descriptions are organized under the following data and information content areas: (1) base data, (2)…

Interagency Council on Natural Resources and the Environment, Austin, TX. Texas Natural Resources Information System.

43

Self-Similarity in File Systems  

Microsoft Academic Search

We demonstrate that high-level file system events exhibit self-similar behaviour, but only for short-term time scales of approximately under a day. We do so through the analysis of four sets of traces that span time scales of milliseconds through months, and that differ in the trace collection method, the filesystems being traced, and the chronological times of the tracing. Two

Steven D. Gribble; Gurmeet Singh Manku; Drew S. Roselli; Eric A. Brewer; Timothy J. Gibson; Ethan L. Miller

1998-01-01

44

Caching in the Sprite network file system  

Microsoft Academic Search

The Sprite network operating system uses large main-memory disk block caches to achieve high performance in its file system. It provides non-write-through file caching on both client and server machines. A simple cache consistency mechanism permits files to be shared by multiple clients without danger of stale data. In order to allow the file cache to occupy as much memory

Michael N. Nelson; Brent B. Welch; John K. Ousterhout

1988-01-01

45

18 CFR 300.14 - Filings under section 7(k).  

Code of Federal Regulations, 2013 CFR

...2013-04-01 false Filings under section 7(k). 300.14 Section 300.14 Conservation... § 300.14 Filings under section 7(k). Any application for Commission review...Power Administration pursuant to section 7(k) of the Pacific Northwest...

2013-04-01

46

A Metadata-Rich File System  

SciTech Connect

Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.

Ames, S; Gokhale, M B; Maltzahn, C

2009-01-07

47

A Predicate-Driven Document Filing System  

Microsoft Academic Search

This paper presents a predicate-driven document filing system for organizing and automatically filing documents. A document model consists of two basic elements: frame templates representing document classes, and folders which are repositories of frame instances. The frame templates can be organized to form a document type hierarchy, which helps classify and file documents. Frame instances are grouped into a folder

Zhijian Zhu; Qianhong Liu; James A. Mchugh; Peter A. Ng

1996-01-01

48

Deceit: A flexible distributed file system  

NASA Technical Reports Server (NTRS)

Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.

Siegel, Alex; Birman, Kenneth; Marzullo, Keith

1989-01-01

49

Efficient and Effective File Replication in Structured P2P File Sharing Systems  

Microsoft Academic Search

In peer-to-peer file sharing systems, file replication helps to avoid overloading file owners and improve file query efficiency. Aiming to achieve high replica utilization and efficient file query with low overhead, this paper presents a file replication mechanism based on swarm intelligence, namely SWARM. Recognizing the power of collective behaviors, SWARM identifies node swarms with common node interests and close

Haiying Shen

2009-01-01

50

Flexibility and Performance of Parallel File Systems  

NASA Technical Reports Server (NTRS)

As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions make applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (API's). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.
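
As a concrete picture of the strided interfaces mentioned above, here is a hypothetical strided-read helper in Python (the function name and signature are invented for illustration); the point of such an interface is that one call describes a regular pattern that the file system could service as a single operation instead of many small requests.

    def read_strided(f, start, record_size, stride, count):
        """Read `count` records of `record_size` bytes, the k-th one starting at
        byte offset start + k*stride, and return them concatenated."""
        out = bytearray()
        for k in range(count):
            f.seek(start + k * stride)
            out += f.read(record_size)
        return bytes(out)

    # Example: every other 4-byte record of a 32-byte file.
    import io
    f = io.BytesIO(bytes(range(32)))
    print(list(read_strided(f, start=0, record_size=4, stride=8, count=4)))
    # -> [0, 1, 2, 3, 8, 9, 10, 11, 16, 17, 18, 19, 24, 25, 26, 27]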

Kotz, David; Nieuwejaar, Nils

1996-01-01

51

Distributed file systems: concepts and examples  

Microsoft Academic Search

The purpose of a distributed file system (DFS) is to allow users of physically distributed computers to share data and storage resources by using a common file system. A typical configuration for a DFS is a collection of workstations and mainframes connected by a local area network (LAN). A DFS is implemented as part of the operating system of each

Eliezer Levy; Abraham Silberschatz

1990-01-01

52

Group Sharing and Random Access in Cryptographic Storage File Systems  

Microsoft Academic Search

Traditional cryptographic storage uses encryption to ensure confidentiality of file data. However, encryption can prevent efficient random access to file data. Moreover, no cryptographic storage file system allows file sharing with similar semantics to UNIX group sharing. The Cryptographic Storage File System (Cepheus) provides confidentiality and integrity of data while enabling efficient random access and file sharing using mechanisms similar

Kevin E. Fu

1999-01-01

53

Hadoop distributed file system for the Grid  

Microsoft Academic Search

Data distribution, storage and access are essential to CPU-intensive and data-intensive high performance Grid computing. A newly emerged file system, Hadoop distributed file system (HDFS), is deployed and tested within the Open Science Grid (OSG) middleware stack. Efforts have been taken to integrate HDFS with other Grid tools to build a complete service framework for the Storage Element (SE). Scalability

G. Attebury; A. Baranovski; K. Bloom; B. Bockelman; D. Kcira; J. Letts; T. Levshina; C. Lundestedt; T. Martin; W. Maier; Haifeng Pi; A. Rana; I. Sfiligoi; A. Sim; M. Thomas; F. Wuerthwein

2009-01-01

54

Pastime--A System for File Compression.  

ERIC Educational Resources Information Center

An interactive search and editing system, 3RIP, is being developed at the library of the Royal Institute of Technology in Stockholm for large files of textual and numeric data. A substantial part (on the order of 10^9 characters) of the primary file of the search system will consist of bibliographic references from a wide range of sources. If the…

Hultgren, Jan; Larsson, Rolf

55

A fast file system for UNIX  

Microsoft Academic Search

A reimplementation of the UNIX file system is described. The reimplementation provides substantially higher throughput rates by using more flexible allocation policies that allow better locality of reference and can be adapted to a wide range of peripheral and processor characteristics. The new file system clusters data that is sequentially accessed and provides two block sizes to allow fast access

Marshall K. McKusick; William N. Joy; Samuel J. Leffler; Robert S. Fabry

1984-01-01

56

Postmark: a new file system benchmark  

Microsoft Academic Search

Existing file system benchmarks are deficient in portraying performance in the ephemeral small-file regime used by Internet software, especially electronic mail, netnews, and web-based commerce. PostMark is a new benchmark to measure performance for this class of application. In this paper, PostMark test results are presented and analyzed for both UNIX and Windows NT application servers. Network Appliance Filers (file server appliances)

J. Katcher

1997-01-01

57

Prefetching in file systems for MIMD multiprocessors  

NASA Technical Reports Server (NTRS)

The question of whether prefetching blocks of the file into the block cache can effectively reduce overall execution time of a parallel computation, even under favorable assumptions, is considered. Experiments have been conducted with an interleaved file system testbed on the Butterfly Plus multiprocessor. Results of these experiments suggest that (1) the hit ratio, the accepted measure in traditional caching studies, may not be an adequate measure of performance when the workload consists of parallel computations and parallel file access patterns, (2) caching with prefetching can significantly improve the hit ratio and the average time to perform an I/O (input/output) operation, and (3) an improvement in overall execution time has been observed in most cases. In spite of these gains, prefetching sometimes results in increased execution times (a negative result, given the optimistic nature of the study). The authors explore why it is not trivial to translate savings on individual I/O requests into consistently better overall performance and identify the key problems that need to be addressed in order to improve the potential of prefetching techniques in the environment.
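
A toy Python model (not the Butterfly Plus testbed from the paper) of one-block-ahead prefetching and the hit ratio it produces on a purely sequential access pattern; the cache size and replacement policy are made up for illustration.

    def run_trace(block_ids, cache_size=8, prefetch=True):
        """Replay a trace of block accesses through a tiny LRU cache and
        return the resulting hit ratio."""
        cache, hits = [], 0
        def touch(b):
            if b in cache:
                cache.remove(b)
            elif len(cache) >= cache_size:
                cache.pop(0)                 # evict the least recently used block
            cache.append(b)
        for b in block_ids:
            if b in cache:
                hits += 1
            touch(b)
            if prefetch:
                touch(b + 1)                 # one-block-ahead prefetch
        return hits / len(block_ids)

    sequential = list(range(100))
    print("no prefetch :", run_trace(sequential, prefetch=False))   # 0.0
    print("prefetch    :", run_trace(sequential, prefetch=True))    # 0.99

As the abstract cautions, a better hit ratio on individual requests does not automatically translate into a lower overall execution time once prefetch overhead and contention are accounted for.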

Kotz, David F.; Ellis, Carla Schlatter

1990-01-01

58

Knowledge File System -- A Principled Approach to Personal Information Management  

Microsoft Academic Search

The Knowledge File System (KFS) is a smart virtual file system that sits between the operating system and the file system. Its primary functionality is to automatically organize files in a transparent and seamless manner so as to facilitate easy retrieval. Think of the KFS as a personal assistant, who can file every one of your documents into multiple appropriate

Kuiyu Chang; I. Wayan Tresna Perdana; Bramandia Ramadhana; Kailash Sethuraman; Truc Viet Le; Neha Chachra

2010-01-01

59

Pastime--A System for File Compression.  

National Technical Information Service (NTIS)

An interactive search and editing system, 3RIP, is being developed at the library of the Royal Institute of Technology in Stockholm for large files of textual and numeric data. A substantial part (on the order of 10^9 characters) of the primary file of t...

J. Hultgren R. Larsson

1975-01-01

60

Frangipani: A Scalable Distributed File System  

Microsoft Academic Search

The ideal distributed file system would provide all its users with coherent, shared access to the same set of files, yet would be arbitrarily scalable to provide more storage space and higher performance to a growing user community. It would be highly available in spite of component failures. It would require minimal human administration, and administration would not become

Chandramohan A. Thekkath; Timothy Mann; Edward K. Lee

1997-01-01

61

Deciding when to forget in the Elephant file system  

Microsoft Academic Search

Modern file systems associate the deletion of a file with the immediate release of storage, and file writes with the irrevocable change of file contents. We argue that this behavior is a relic of the past, when disk storage was a scarce resource. Today, large cheap disks make it possible for the file system to protect valuable data from accidental

Douglas S. Santry; Michael J. Feeley; Norman C. Hutchinson; Alistair C. Veitch; Ross W. Carton; Jacob Ofir

1999-01-01

62

NASA ARCH- A FILE ARCHIVAL SYSTEM FOR THE DEC VAX  

NASA Technical Reports Server (NTRS)

The function of the NASA ARCH system is to provide a permanent storage area for files that are infrequently accessed. The NASA ARCH routines were designed to provide a simple mechanism by which users can easily store and retrieve files. The user treats NASA ARCH as the interface to a black box where files are stored. There are only five NASA ARCH user commands, even though NASA ARCH employs standard VMS directives and the VAX BACKUP utility. Special care is taken to provide the security needed to insure file integrity over a period of years. The archived files may exist in any of three storage areas: a temporary buffer, the main buffer, and a magnetic tape library. When the main buffer fills up, it is transferred to permanent magnetic tape storage and deleted from disk. Files may be restored from any of the three storage areas. A single file, multiple files, or entire directories can be stored and retrieved. Archived entities hold the same name, extension, version number, and VMS file protection scheme as they had in the user's account prior to archival. NASA ARCH is capable of handling up to 7 directory levels. Wildcards are supported. User commands include TEMPCOPY, DISKCOPY, DELETE, RESTORE, and DIRECTORY. The DIRECTORY command searches a directory of savesets covering all three archival areas, listing matches according to area, date, filename, or other criteria supplied by the user. The system manager commands include 1) ARCHIVE- to transfer the main buffer to duplicate magnetic tapes, 2) REPORT- to determine when the main buffer is full enough to archive, 3) INCREMENT- to back up the partially filled main buffer, and 4) FULLBACKUP- to back up the entire main buffer. On-line help files are provided for all NASA ARCH commands. NASA ARCH is written in DEC VAX DCL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.X. This program was developed in 1985.

Scott, P. J.

1994-01-01

63

PVFS 2000: An operational parallel file system for Beowulf

NASA Technical Reports Server (NTRS)

The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. It shows the architecture of the server and client components. BMI - BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking, and provides mechanisms for optimizations including pinning user buffers. Currently TCP/IP and GM (Myrinet) modules have been implemented. Trove - Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley db for name/value pairs.

Ligon, Walt

2004-01-01

64

A file system for continuous media  

Microsoft Academic Search

The Continuous Media File System, CMFS, supports real-time storage and retrieval of continuous media data (digital audio and video) on disk. CMFS clients read or write files in “sessions,” each with a guaranteed minimum data rate. Multiple sessions, perhaps with different rates, and non-real-time access can proceed concurrently. CMFS addresses several interrelated design issues: real-time semantics for sessions, disk layout,

David P. Anderson; Yoshitomo Osawa; Ramesh Govindan

1992-01-01

65

Performance of the Galley Parallel File System  

NASA Technical Reports Server (NTRS)

As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

Nieuwejaar, Nils; Kotz, David

1996-01-01

66

A Proof-Carrying File System  

Microsoft Academic Search

We present the design and implementation of PCFS, a file system that adapts proof-carrying authorization to provide direct, rigorous, and efficient enforcement of dynamic access policies. The keystones of PCFS are a new authorization logic BL that supports policies whose consequences may change with both time and system state, and a rigorous enforcement mechanism that combines proof verification with conditional

Deepak Garg; Frank Pfenning

2010-01-01

67

OrcFS: Organized Relationships between Components of the File System for Efficient File Retrieval  

Microsoft Academic Search

The need for efficient organization of files grows with the computer storage capabilities. However, a classical hierarchical file system offers little help in this matter, excepting maybe the case of links and shortcuts. OrcFS proposes a solution to this problem. By redefining several file system concepts, it allows the user to set custom metadata, in the form of property-value pairs,

Alexandra Coldea; Adrian Colesa; Iosif Ignat

2010-01-01

68

The Design of the Expand Parallel File System  

Microsoft Academic Search

This article describes an implementation of MPI-IO using a new parallel file system, called Expand (Expandable Parallel File System), which is based on NFS servers. Expand combines multiple NFS servers to create a distributed partition where files are striped. Expand requires no changes to the NFS server and uses RPC operations to provide parallel access to the same file. Expand

Félix García Carballeira; Alejandro Calderón; Jesús Carretero; Javier Fernández; Jose M. Perez

2003-01-01

69

A Caching File System For a Programmer's Workstation  

Microsoft Academic Search

This paper describes a file system for a programmer's workstation that has access both to a local disk and to remote file servers. The file system is designed to help programmers manage their local naming environments and share consistent versions of collections of software. It names multiple versions of local and remote files in a hierarchy. Local names can

Michael D. Schroeder; David K. Gifford; Roger M. Needham

1985-01-01

70

Security Aware Partitioning for efficient file system search  

Microsoft Academic Search

Index partitioning techniques-where indexes are broken into multiple distinct sub-indexes-are a proven way to improve metadata search speeds and scalability for large file systems, permitting early triage of the file system. A partitioned metadata index can rule out irrelevant files and quickly focus on files that are more likely to match the search criteria. Also, in a large file system

Aleatha Parker-Wood; Christina Strong; Ethan L. Miller; Darrell D. E. Long

2010-01-01

71

Tiger Shark - A scalable file system for multimedia  

Microsoft Academic Search

Tiger Shark is a scalable, parallel file system designed to support interactive multimedia applications, particularly large-scale ones such as interactive television (ITV). Tiger Shark runs under the IBM AIX® operating system, on machines ranging from RS/6000™ desktop workstations to the SP2® parallel supercomputer. In addition to supporting continuous-time data, Tiger Shark provides scalability, high availability, and on-line system management,

Roger L. Haskin

1998-01-01

72

Pollution in P2P file sharing systems  

Microsoft Academic Search

One way to combat P2P file sharing of copyrighted content is to deposit into the file sharing systems large volumes of polluted files. Without taking sides in the file sharing debate, in this paper we undertake a measurement study of the nature and magnitude of pollution in the FastTrack P2P network, currently the most popular P2P file sharing system. We

Jian Liang; Rakesh Kumar; Y. Xi; Keith W. Ross

2005-01-01

73

Stochastic Petri Net Analysis of a Replicated File System  

Microsoft Academic Search

We present a stochastic Petri net model of a replicated file system in a distributed environment where replicated files reside on different hosts and a voting algorithm is used to maintain consistency. Witnesses, which simply record the status of the file but contain no data, may be used in addition to or in place of files to reduce overhead. We present a model

Joanne Bechta Dugan; Gianfranco Ciardo

1989-01-01

74

Collective operations in a file system based execution model  

DOEpatents

A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.
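
The header-control block described above might look something like the following Python sketch; the field layout, sizes, and encoding here are invented purely for illustration and are not taken from the patent.

    import struct

    # Hypothetical layout: fs name, message type, message size, destination, operation.
    HEADER = struct.Struct("<16s B I I 16s")

    def pack_header(fs_name, msg_type, msg_size, dest_rank, operation):
        return HEADER.pack(fs_name.encode().ljust(16, b"\0"), msg_type,
                           msg_size, dest_rank, operation.encode().ljust(16, b"\0"))

    def unpack_header(raw):
        fs, mtype, size, dest, op = HEADER.unpack(raw)
        return fs.rstrip(b"\0").decode(), mtype, size, dest, op.rstrip(b"\0").decode()

    # The master application would write such a block into the multi-pipe
    # synthetic file before the participating applications open it:
    hdr = pack_header("multipipe0", msg_type=1, msg_size=4096,
                      dest_rank=0, operation="broadcast")
    print(unpack_header(hdr))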

Shinde, Pravin; Van Hensbergen, Eric

2013-02-19

75

Collective operations in a file system based execution model  

DOEpatents

A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.

Shinde, Pravin; Van Hensbergen, Eric

2013-02-12

76

Towards Protecting Sensitive Files in a Compromised System  

Microsoft Academic Search

Protecting sensitive files from a compromised system helps administrators to thwart many attacks, discover intrusion trails, and quickly restore the system to a safe state. However, most existing file protection mechanisms can be turned off after an attacker manages to exploit a vulnerability to gain privileged access. In this paper we propose SVFS, a Secure Virtual File System that

Xin Zhao; Kevin Borders; Atul Prakash

2005-01-01

77

On-line consistent backup in transactional file systems  

Microsoft Academic Search

A consistent backup, preserving data integrity across files in a file system, is of utmost importance for the purpose of correctness and minimizing system downtime during the process of data recovery. With the present day demand for continuous access to data, backup has to be taken of an active file system, putting the consistency of the backup copy at risk.

Lipika Deka; Gautam Barua

2010-01-01

78

Optimizing Input\\/Output Using Adaptive File System Policies  

Microsoft Academic Search

Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and prefetching policies, while performance

Tara M. Madhyastha; Christopher L. Elford; Daniel A. Reed

1996-01-01

79

A History of the Andrew File System  

SciTech Connect

Derrick Brashear and Jeffrey Altman will present a technical history of the evolution of Andrew File System starting with the early days of the Andrew Project at Carnegie Mellon through the commercialization by Transarc Corporation and IBM and a decade of OpenAFS. The talk will be technical with a focus on the various decisions and implementation trade-offs that were made over the course of AFS versions 1 through 4, the development of the Distributed Computing Environment Distributed File System (DCE DFS), and the course of the OpenAFS development community. The speakers will also discuss the various AFS branches developed at the University of Michigan, Massachusetts Institute of Technology and Carnegie Mellon University.

None

2011-02-22

80

Diagnosing Performance Problems in Parallel File Systems  

Microsoft Academic Search

This work describes and compares two black-box approaches, using syscall statistics and OS-level performance metrics, to automatically diagnose different performance problems in parallel file systems. Both approaches rely on peer-comparison diagnosis to compare statistical attributes of relevant metrics across servers in order to indict the culprit node. An observation-based checklist is developed to identify from the metrics affected the

Michael P. Kasick

2009-01-01

81

3RIP: File Design for the Search System.  

ERIC Educational Resources Information Center

The file design of the search system part of an interactive search and editing system, 3RIP, is described. A scatter-stored and compact inverted file is used to search a primary file of up to 4 million records containing on the order of 10-E9 characters of text and numeric data. Searchable attributes are keywords, words or phrases in text, names,…

Larsson, Rolf; And Others

82

Design of the Classified File Management and Control System  

Microsoft Academic Search

The current state of classified file management and control is first analyzed in the paper. Then, to improve the security and management capability of internal network information, a classified file management and control system was designed, consisting of three modules: the management control platform, the electronic file examination and approval system, and the classified moving flash

Yanmei Lv; Jiangsheng Sun; Gefang Wang

2009-01-01

83

Embedded NAND flash file system for mobile multimedia devices  

Microsoft Academic Search

In this work, we present a novel mobile multimedia file system, MNFS, which is specifically designed for NAND flash memory. Our design specifically addresses the needs of devices such as MP3 players, personal media players (PMPs), digital camcorders, etc. The file system has three novel features important in mobile multimedia applications: (1) predictable and uniform write latency, (2) quick file

Hyojun Kim; Youjip Won; Sooyong Kang

2009-01-01

84

File Design for the Search System, 3RIP.  

National Technical Information Service (NTIS)

The file design of the search system part of an interactive search and editing system, 3RIP, is described. A scatter-stored and compact inverted file is used to search a primary file of up to 4 million records containing on the order of 10-E9 characters o...

R. Larsson

1975-01-01

85

Bridging the gap between parallel file systems and local file systems : a case study with PVFS.  

SciTech Connect

Parallel I/O plays an increasingly important role in today's data intensive computing applications. While much attention has been paid to parallel read performance, most of this work has focused on the parallel file system, middleware, or application layers, ignoring the potential for improvement through more effective use of local storage. In this paper, we present the design and implementation of segment-structured on-disk data grouping and prefetching (SOGP), a technique that leverages additional local storage to boost the local data read performance for parallel file systems, especially for those applications with partially overlapped access patterns. Parallel virtual file system (PVFS) is chosen as an example. Our experiments show that an SOGP-enhanced PVFS prototype system can outperform a traditional Linux-Ext3-based PVFS for many applications and benchmarks, in some tests by as much as 230% in terms of I/O bandwidth.

Gu, P.; Wang, J.; Ross, R.; Mathematics and Computer Science; Univ. of Central Florida

2008-09-01

86

A Context-based System for Personal File Retrieval  

Microsoft Academic Search

This paper introduces CoFS, a context-based system for personal file retrieval. Users in this system use tags to describe files or resources of special interest. A set of tags assigned to a file by a user is called a tag-based context. For each user, the files of interest are organized into the appropriate contexts. A directed acyclic graph of tags is

Hung Ba Ngo

87

Design and Implementation of a Metadata-rich File System  

SciTech Connect

Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
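To make the graph data model described above a little more concrete, the following is a small, self-contained Python sketch in which files carry user-defined attributes and typed relationships link them, and one query is answered by filtering attributes and following edges. The file names, attribute keys, relationship labels, and the query itself are invented for illustration; this is not QFS's storage format or Quasar's query syntax.

    # Minimal in-memory graph of files, attributes, and typed relationships
    # (illustrative only; not the QFS on-disk representation or Quasar syntax).
    files = {
        "run42.h5":  {"experiment": "laser", "quality": "good"},
        "run42.log": {"experiment": "laser"},
        "paper.tex": {"status": "draft"},
    }
    edges = [
        ("run42.log", "describes", "run42.h5"),
        ("paper.tex", "cites",     "run42.h5"),
    ]

    def related(target, rel):
        """Files connected to `target` by a relationship of type `rel`."""
        return [src for (src, r, dst) in edges if r == rel and dst == target]

    # Query: which files describe a 'good'-quality laser-experiment file?
    hits = [f for f, attrs in files.items()
            if attrs.get("experiment") == "laser" and attrs.get("quality") == "good"]
    for h in hits:
        print(h, "is described by", related(h, "describes"))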

Ames, S; Gokhale, M B; Maltzahn, C

2010-01-19

88

Digital Libraries: The Next Generation in File System Technology.  

ERIC Educational Resources Information Center

Examines file sharing within corporations that use wide-area, distributed file systems. Applications and user interactions strongly suggest that the addition of services typically associated with digital libraries (content-based file location, strongly typed objects, representation of complex relationships between documents, and extrinsic…

Bowman, Mic; Camargo, Bill

1998-01-01

89

Using Grid Files for a Relational Database Management System  

Microsoft Academic Search

This paper describes our experience with using Grid files as the main storage organization for a relational database management system. We primarily focus on the following two aspects: (i) strategies for implementing grid files efficiently; (ii) methods for efficiently evaluating queries posed to a database organized using grid files.

S. M. Joshi; S. Sanyal; S. Banerjee; S. Srikumar

2010-01-01

90

Active Storage Processing in a Parallel File System  

SciTech Connect

By creating a processing system within a parallel file system, one can harness unused processing capacity on servers that have very fast access to the disks they are serving. By inserting a module into the Lustre file system, the Active Storage concept is able to perform processing within the file system architecture. Results of using this technology are presented as the results of the Supercomputing StorCloud Challenge application are reviewed.

Felix, Evan J.; Fox, Kevin M.; Regimbal, Kevin M.; Nieplocha, Jarek

2006-01-01

91

Silvabase: A flexible data file management system  

NASA Technical Reports Server (NTRS)

The need for a more flexible and efficient data file management system for mission planning in the Mission Operations Laboratory (EO) at MSFC has spawned the development of Silvabase. Silvabase is a new data file structure based on a B+ tree data structure. This data organization allows for efficient forward and backward sequential reads, random searches, and appends to existing data. It also provides random insertions and deletions with reasonable efficiency, makes good use of storage space without sacrificing speed, and performs these functions on large volumes of data. Mission planners required that some data be keyed and manipulated in ways not found in a commercial product. Mission planning software is currently being converted to use Silvabase in the Spacelab and Space Station Mission Planning Systems. Silvabase runs on Digital Equipment Corporation's popular VAX/VMS computers and is written in VAX Fortran. Silvabase has unique features involving time histories and intervals such as in operations research. Because of its flexibility and unique capabilities, Silvabase could be used in almost any government or commercial application that requires efficient reads, searches, and appends in medium to large amounts of almost any kind of data.

Lambing, Steven J.; Reynolds, Sandra J.

1991-01-01

92

Optimizing Input/Output Using Adaptive File System Policies  

NASA Technical Reports Server (NTRS)

Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.

Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

1996-01-01

93

Reimplementing the Cedar File System Using Logging and Group Commit  

Microsoft Academic Search

The workstation file system for the Cedar programming environment was modified to improve its robustness and performance. Previously, the file system used hardware-provided labels on disk blocks to increase robustness against hardware and software errors. The new system does not require hardware disk labels, yet is more robust than the old system. Recovery is rapid after a crash. The performance

Robert B. Hagmann

1987-01-01

94

Application performance on the Direct Access File System  

Microsoft Academic Search

The Direct Access File System (DAFS) is a distributed file system built on top of direct-access transports (DAT). Direct-access transports are characterized by using remote direct memory access (RDMA) for data transfer and user-level networking. The motivation behind the DAT-enabled distributed file system architecture is the reduction of the CPU overhead on the I/O data path. We have created an implementation

Alexandra Fedorova; Margo I. Seltzer; Kostas Magoutis; Salimah Addetia

2004-01-01

95

File System Logging versus Clustering: A Performance Comparison  

Microsoft Academic Search

The Log-structured File System (LFS), introduced in 1991 (8), has received much attention for its potential order-of-magnitude improvement in file system performance. Early research results (9) showed that small file performance could scale with processor speed and that cleaning costs could be kept low, allowing LFS to write at an effective bandwidth of 62 to 83% of the maximum. Later

Margo I. Seltzer; Keith A. Smith; Hari Balakrishnan; Jacqueline Chang; Sara Mcmains; Venkata N. Padmanabhan

1995-01-01

96

Adaptive file allocation in distributed computer systems  

Microsoft Academic Search

An algorithm to dynamically reallocate database files in a computer network is presented. The proposed algorithm uses a best-fit approach to allocate and delete beneficial file copies. The key problem of economically estimating future access and update patterns is discussed, and an algorithm based on the Gabor-Kolmogorov learning process is presented to estimate the access and the

Amjad Mahmood; H. U. Khan; H. A. Fatmi

1994-01-01

97

Design and Implementation of a Metadata-rich File System.  

National Technical Information Service (NTIS)

Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stor...

C. Maltzahn M. B. Gokhale S. Ames

2010-01-01

98

PVFS: A Parallel File System for Linux Clusters  

Microsoft Academic Search

As Linux clusters have matured as platforms for low-cost, high-performance parallel computing, software packages to provide many key services have emerged, especially in areas such as message passing and networking. One area devoid of support, however, has been parallel file systems, which are critical for high-performance I/O on such clusters. We have developed a parallel file system

Philip H. Carns; Walter B. Ligon III; Robert B. Ross; Rajeev Thakur

2002-01-01

99

xFS: A Wide Area Mass Storage File System  

Microsoft Academic Search

The current generation of file systems are inadequate in facing the new technological challenges of wide area networks and massive storage. xFS is a prototype file system we are developing to explore the issues brought about by these technological advances. xFS adapts many of the techniques used in the field of high performance multiprocessor design. It organizes hosts into a

Randolph Y. Wang; Thomas E. Anderson

1993-01-01

100

Distributing LHC application software and conditions databases using the CernVM file system  

NASA Astrophysics Data System (ADS)

The CernVM File System (CernVM-FS) is a read-only file system designed to deliver high energy physics (HEP) experiment analysis software onto virtual machines and Grid worker nodes in a fast, scalable, and reliable way. CernVM-FS decouples the underlying operating system from the experiment-defined software stack. Files and file metadata are aggressively cached and downloaded on demand. By designing the file system specifically for the use case of HEP software repositories and experiment conditions data, several typically hard problems for (distributed) file systems can be solved in an elegant way. For the distribution of files, we use a standard HTTP transport, which allows exploitation of a variety of web caches, including commercial content delivery networks. We ensure data authenticity and integrity over possibly untrusted caches and connections while keeping all distributed data cacheable. On the small scale, we developed an experimental extension that allows multiple CernVM-FS instances in a computing cluster to discover each other and to share their file caches.
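The abstract stresses on-demand download with aggressive caching of files and metadata. Below is a minimal Python sketch of just that cache-on-miss pattern, with an in-memory dictionary standing in for the remote HTTP repository and content hashes used as lookup keys; the hashing scheme and fetch path are simplifications assumed for the example rather than CernVM-FS internals.

    import hashlib

    # Stand-in "remote repository" keyed by content hash (assumption for the
    # demo; a real deployment would fetch these objects over HTTP).
    remote = {}
    def publish(data: bytes) -> str:
        h = hashlib.sha1(data).hexdigest()
        remote[h] = data
        return h

    local_cache = {}
    def read(content_hash: str) -> bytes:
        """Return data from the local cache, downloading on demand on a miss."""
        if content_hash not in local_cache:
            local_cache[content_hash] = remote[content_hash]   # simulated HTTP GET
        return local_cache[content_hash]

    h = publish(b"experiment software payload")
    assert read(h) == read(h)          # second read is served from the cache
    print("cached objects:", len(local_cache))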

Blomer, Jakob; Aguado-Sánchez, Carlos; Buncic, Predrag; Harutyunyan, Artem

2011-12-01

101

Stochastic Petri net analysis of a replicated file system  

NASA Technical Reports Server (NTRS)

A stochastic Petri-net model of a replicated file system is presented for a distributed environment where replicated files reside on different hosts and a voting algorithm is used to maintain consistency. Witnesses, which simply record the status of the file but contain no data, can be used in addition to or in place of files to reduce overhead. A model sufficiently detailed to include file status (current or out-of-date), as well as failure and repair of hosts where copies or witnesses reside, is presented. The number of copies and witnesses is a parameter of the model. Two different majority protocols are examined, one where a majority of all copies and witnesses is necessary to form a quorum, and the other where only a majority of the copies and witnesses on operational hosts is needed. The latter, known as adaptive voting, is shown to increase file availability in most cases.

Bechta Dugan, Joanne; Ciardo, Gianfranco

1989-01-01

102

Disconnected Operation in the Coda File System  

Microsoft Academic Search

Coda is designed for an environment consisting of a collection of untrusted Unix clients and a much smaller number of trusted Unix file servers. The design is optimized for the access and sharing patterns typical of

James J. Kistler; Mahadev Satyanarayanan

1991-01-01

103

Part III: AFS - A Secure Distributed File System.  

National Technical Information Service (NTIS)

AFS is a secure distributed global file system providing location independence, scalability and transparent migration capabilities for data. AFS works across a multitude of Unix and non-Unix operating systems and is used at many large sites in production ...

A. Wachsmann

2005-01-01

104

File Manager System for Ceiling and Master Staffing.  

National Technical Information Service (NTIS)

The project aims to develop a ceiling and master staffing system for Nursing Service Department at the Veterans Administration Medical Center in Jackson, Mississippi. It uses a system called File Manager, which is a simplified approach to project manageme...

S. H. Honea

1985-01-01

105

Re-engineering the Los Alamos Common File System  

SciTech Connect

The Los Alamos National Laboratory's Common File System is being substantially upgraded so it will continue to meet the future data storage needs of the Laboratory and other sites. The nature of computing is changing with the advent of workstations, different work loads, and standard approaches to networking and operating systems. We reviewed the pros and cons of the Common File System and numerous existing and planned systems. Our conclusion was that the Common File System was a valuable resource with many strengths that should be preserved for the future. We have arrived at a set of recommendations for changes and additions that we feel will allow the Common File System to continue to meet data storage needs in both the short and long term. The recommendations for 1990--1992 support the transition to Unix on supercomputers and support high performance networking. The recommendations for 1992--1995 support the transition to a Unix type file system that provides a significant level of file location transparency. For 1995--2000 the recommendations are necessarily less precise but support the notion of standard security, transparent file systems and faster networking. 11 refs.

Christman, R.D.; Cook, D.P.; Mercier, C.W.

1990-01-01

106

17 CFR 242.608 - Filing and amendment of national market system plans.  

Code of Federal Regulations, 2010 CFR

...false Filing and amendment of national market system plans. 242.608 Section 242...Regulation Nms-Regulation of the National Market System § 242.608 Filing and amendment of national market system plans. (a) Filing of...

2009-04-01

107

17 CFR 242.608 - Filing and amendment of national market system plans.  

Code of Federal Regulations, 2010 CFR

...false Filing and amendment of national market system plans. 242.608 Section 242...Regulation Nms-Regulation of the National Market System § 242.608 Filing and amendment of national market system plans. (a) Filing of...

2010-04-01

108

High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems  

NASA Technical Reports Server (NTRS)

Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multinode cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. Mcp and msum provide significant performance improvements over standard cp and md5sum using multiple types of parallelism and other optimizations. The total speed-ups from all improvements are significant. Mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so are easily used and are available for download as open source software at http://mutil.sourceforge.net.
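Among the techniques listed above, split-file processing and hash trees are what allow an inherently serial checksum to be parallelized. The sketch below shows that general idea in Python, hashing fixed-size chunks of a file in a thread pool and then hashing the concatenated chunk digests; the chunk size and the final combine step are assumptions made for illustration and are not the exact algorithm used by msum.

    import hashlib, os, tempfile
    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 1 << 20  # 1 MiB chunks (illustrative choice)

    def chunk_digest(path, offset):
        # Hash one fixed-size chunk starting at `offset`.
        with open(path, "rb") as f:
            f.seek(offset)
            return hashlib.md5(f.read(CHUNK)).digest()

    def tree_checksum(path):
        """Hash each chunk in parallel, then hash the concatenated chunk digests."""
        size = os.path.getsize(path)
        offsets = range(0, size, CHUNK)
        with ThreadPoolExecutor() as pool:
            leaves = list(pool.map(lambda off: chunk_digest(path, off), offsets))
        return hashlib.md5(b"".join(leaves)).hexdigest()

    # Demo on a throwaway file of a few megabytes.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(5 * CHUNK + 123))
    print(tree_checksum(tmp.name))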

Kolano, Paul Z.; Ciotti, Robert B.

2012-01-01

109

Configuration Management File Manager Developed for Numerical Propulsion System Simulation  

NASA Technical Reports Server (NTRS)

One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

Follen, Gregory J.

1997-01-01

110

File  

NSDL National Science Digital Library

With a cover that looks suspiciously like one of the 20th century's most beloved photographic magazines (hint: "File" is an anagram of its moniker), File is an online photography magazine that specializes in "alternate takes, odd angles, unconventional observations". As its makers wryly note, "We leave the Kodak Moments to the family album, the glossy fashion spreads to Vogue, and the photo finishes to ESPN". While the site is relatively new, there is a good deal to browse through here, and visitors can start with a trip to the thematic galleries, and also stop by the contributors section to learn more about each individual photographer. One rather intriguing collection is called "Rustfetish" and features the work of Vince Stinson, whose artistic statement notes that "...these photos prove that by celebrating/the love of rust and all that rusts". The site also includes "Karaoke Camera", in which the editors of the FILE offer a photographic theme or trope, and visitors are encouraged to submit photographs related to that particular idea.

111

The Rio File Cache: Surviving Operating System Crashes  

Microsoft Academic Search

One of the fundamental limits to high-performance, high-reliability file systems is memory's vulnerability to system crashes. Because memory is viewed as unsafe, systems periodically write data back to disk. The extra disk traffic lowers performance, and the delay period before data is safe lowers reliability. The goal of the Rio (RAM I/O) file cache is to make ordinary main memory

Peter M. Chen; Wee Teck Ng; Subhachandra Chandra; Christopher M. Aycock; Gurushankar Rajamani; David E. Lowell

1996-01-01

112

78 FR 16365 - Foreign Trade Regulations: Mandatory Automated Export System Filing for All Shipments Requiring...  

Federal Register 2010, 2011, 2012, 2013

...Trade Regulations: Mandatory Automated Export System Filing for All Shipments Requiring Shipper's Export Declaration Information; Final Rule...Trade Regulations: Mandatory Automated Export System Filing for All Shipments...

2013-03-14

113

19 CFR 191.176 - Procedures for claims filed under 19 U.S.C. 1313(p).  

Code of Federal Regulations, 2013 CFR

...for claims filed under 19 U.S.C. 1313(p). 191.176 Section 191.176 Customs...for claims filed under 19 U.S.C. 1313(p). (a) Applicability. The general...to claims filed under 19 U.S.C. 1313(p) unless otherwise specifically...

2013-04-01

114

39 CFR 959.8 - Service of petition filed under § 959.6.  

Code of Federal Regulations, 2013 CFR

...8 Postal Service UNITED STATES POSTAL SERVICE PROCEDURES RULES OF PRACTICE IN PROCEEDINGS RELATIVE TO THE PRIVATE EXPRESS STATUTES § 959.8 Service of petition filed under § 959.6. (a) The Recorder shall cause a notice of...

2013-07-01

115

76 FR 71019 - Amendment of Inspector General's Operation and Reporting (IGOR) System Investigative Files (EPA-40)  

Federal Register 2010, 2011, 2012, 2013

...Reporting (IGOR) System Investigative Files (EPA-40) AGENCY: Environmental Protection...Reporting (IGOR) System Investigative Files (EPA-40) to the Inspector General Enterprise...able to consider your comment. Electronic files should avoid the use of special...

2011-11-16

116

PVFS : a parallel file system for linux clusters  

Microsoft Academic Search

As Linux clusters have matured as platforms for low-cost, high-performance parallel computing, software packages to provide many key services have emerged, especially in areas such as message passing and networking. One area devoid of support, however, has been parallel file systems, which are critical for high-performance I/O on such clusters. We have developed a parallel file system for Linux clusters,

Philip H. Carns; Walter B. Ligon; R. B. Ross; R. Thakur

2000-01-01

117

HighLight: a file system for tertiary storage  

Microsoft Academic Search

HighLight, a file system combining secondary disk storage and tertiary robotic storage that is being developed as part of the Sequoia 2000 Project, is described. HighLight is an extension of the 4.4BSD log-structured file system (LFS), which provides hierarchical storage management without requiring any special support from applications. The authors present HighLight's design and various policies for automatic migration of

John Kohl; Michael Stonebraker; Carl Staelin

1993-01-01

118

File Sessions: A Technique and its Application to the UNIX File System  

Microsoft Academic Search

Files are often used in stylized ways. For example, some programs append data to files, while others create new files or overwrite existing files with new data. This paper describes a new technique for analyzing dynamic file usage patterns based upon a classification of file sessions. A file session is defined to be the set of operations on a given file from

John H. Maloney; Andrew P. Black

1987-01-01

119

37 CFR 1.109 - Effective filing date of a claimed invention under the Leahy-Smith America Invents Act.  

Code of Federal Regulations, 2013 CFR

... Effective filing date of a claimed invention under the Leahy-Smith America Invents... Effective filing date of a claimed invention under the Leahy-Smith America Invents...The effective filing date for a claimed invention in a patent or application for...

2013-07-01

120

Coda: A Highly Available File System for a Distributed Workstation Environment  

Microsoft Academic Search

Coda is a file system for a large-scale distributed computing environment composed of Unix workstations. It provides resiliency to server and network failures through the use of two distinct but complementary mechanisms. One mechanism, server replication, stores copies of a file at multiple servers. The other mechanism, disconnected operation, is a mode of execution in which a caching site temporarily assumes the role of a replication site.

M. Satyanarayanan; J. j. Kistler; P. Kumar; M. e. Okasaki; E. h. Siegel; D. c. Steere

1990-01-01

121

A nine year study of file system and storage benchmarking  

Microsoft Academic Search

Benchmarking is critical when evaluating performance, but is especially difficult for file and storage systems. Complex interactions between I/O devices, caches, kernel daemons, and other OS components result in behavior that is rather difficult to analyze. Moreover, systems have different features and optimizations, so no single benchmark is always suitable. The large variety of workloads that these systems

Avishay Traeger; Erez Zadok; Nikolai Joukov; Charles P. Wright

2008-01-01

122

Scale and performance in a distributed file system  

Microsoft Academic Search

The Andrew File System is a location-transparent distributed file system that will eventually span more than 5000 workstations at Carnegie Mellon University. Large scale affects performance and complicates system operation. In this paper we present observations of a prototype implementation, motivate changes in the areas of cache validation, server process structure, name translation, and low-level storage representation, and quantitatively demonstrate

John H. Howard; Michael L. Kazar; Sherri G. Menees; David A. Nichols; Mahadev Satyanarayanan; Robert N. Sidebotham; Michael J. West

1988-01-01

123

PVFS : a parallel file system for linux clusters  

SciTech Connect

As Linux clusters have matured as platforms for low-cost, high-performance parallel computing, software packages to provide many key services have emerged, especially in areas such as message passing and net-working. One area devoid of support, however, has been parallel file systems, which are critical for high-performance I/O on such clusters. We have developed a parallel file system for Linux clusters, called the Parallel Virtual File System (PVFS). PVFS is intended both as a high-performance parallel file system that anyone can download and use and as a tool for pursuing further research in parallel I/O and parallel file systems for Linux clusters. In this paper, we describe the design and implementation of PVFS and present performance results on the Chiba City cluster at Argonne. We provide performance results for a workload of concurrent reads and writes for various numbers of compute nodes, I/O nodes, and I/O request sizes. We also present performance results for MPI-IO on PVFS, both for a concurrent read/write workload and for the BTIO benchmark. We compare the I/O performance when using a Myrinet network versus a fast-ethernet network for I/O-related communication in PVFS. We obtained read and write bandwidths as high as 700 Mbytes/sec with Myrinet and 225 Mbytes/sec with fast ethernet.

Carns, P. H.; Ligon, W. B., III; Ross, R. B.; Thakur, R.

2000-04-27

124

JavaFIRE: A Replica and File System for Grids  

NASA Astrophysics Data System (ADS)

The work is focused on the creation of a replica and file transfers system for Computational Grids inspired on the needs of the High Energy Physics (HEP). Due to the high volume of data created by the HEP experiments, an efficient file and dataset replica system may play an important role on the computing model. Data replica systems allow the creation of copies, distributed between the different storage elements on the Grid. In the HEP context, the data files are basically immutable. This eases the task of the replica system, because given sufficient local storage resources any dataset just needs to be replicated to a particular site once. Concurrent with the advent of computational Grids, another important theme in the distributed systems area that has also seen some significant interest is that of peer-to-peer networks (p2p). P2p networks are an important and evolving mechanism that eases the use of distributed computing and storage resources by end users. One common technique to achieve faster file download from possibly overloaded storage elements over congested networks is to split the files into smaller pieces. This way, each piece can be transferred from a different replica, in parallel or not, optimizing the moments when the network conditions are better suited to the transfer. The main tasks achieved by the system are: the creation of replicas, the development of a system for replicas transfer (RFT) and for replicas location (RLS) with a different architecture that the one provided by Globus and the development of a system for file transfer in pieces on computational grids with interfaces for several storage elements. The RLS uses a p2p overlay based on the Kademlia algorithm.

Petek, Marko; da Silva Gomes, Diego; Resin Geyer, Claudio Fernando; Santoro, Alberto; Gowdy, Stephen

2012-12-01

125

CA-NFS: A Congestion-Aware Network File System  

Microsoft Academic Search

We develop a holistic framework for adaptively scheduling asynchronous requests in distributed file systems. The system is holistic in that it manages all resources, including network bandwidth, server I/O, server CPU, and client and server memory utilization. It accelerates, defers, or cancels asynchronous requests in order to improve application-perceived performance directly. We employ congestion pricing via online auctions

Alexandros Batsakis; Randal C. Burns; Arkady Kanevsky; James Lentini; Thomas Talpey

2009-01-01

126

Log-Based Directory Resolution in the Coda File System  

Microsoft Academic Search

Optimistic replication is an important technique for achieving high availability in distributed file systems. A key problem in optimistic replication is using semantic knowledge of objects to resolve concurrent updates from multiple sites; otherwise, updates to an object must be treated as conflicting and merged manually by the user. Manual resolution is undesirable because it reduces the overall usability of the system.

Puneet Kumar; Mahadev Satyanarayanan

1993-01-01

127

TOXICS RELEASE INVENTORY - GEOGRAPHIC INFORMATION SYSTEM COVERAGE FILES  

EPA Science Inventory

Data extracted from the EPA Toxics Release Inventory (TRI) system for reporting year 1993 are written in Arc/INFO geographic information system (GIS) export file format (an ASCII data exchange format). The data are also summarized in tables out of the TRI public data release publ...

128

Program Description: Financial Master File Processor-SWRL Financial System.  

ERIC Educational Resources Information Center

Computer routines designed to produce various management and accounting reports required by the Southwest Regional Laboratory's (SWRL) Financial System are described. Input data requirements and output report formats are presented together with a discussion of the Financial Master File updating capabilities of the system. This document should be…

Ideda, Masumi

129

File-System Workload on a Scientific Multiprocessor  

NASA Technical Reports Server (NTRS)

Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors and their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or nonparallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.

Kotz, David; Nieuwejaar, Nils

1995-01-01

130

European Southern Observatory-MIDAS Table File System.  

National Technical Information Service (NTIS)

The new and substantially upgraded version of the Table File System in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of the data to and from th...

M. Peron P. Grosbol

1992-01-01

131

The ITC Distributed File System: Principles and Design  

Microsoft Academic Search

This paper presents the design and rationale of a distributed file system for a network of more than 5000 personal computer workstations. While scale has been the dominant design influence, careful attention has also been paid to the goals of location transparency, user mobility and compatibility with existing operating system interfaces. Security is an important design consideration, and the mechanisms

Mahadev Satyanarayanan; John H. Howard; David A. Nichols; Robert N. Sidebotham; Alfred Z. Spector; Michael J. West

1985-01-01

132

An Improved B+ Tree for Flash File Systems  

NASA Astrophysics Data System (ADS)

Nowadays mobile devices such as mobile phones, MP3 players and PDAs are becoming ever more common. Most of them use flash chips as storage. To store data efficiently on flash, it is necessary to adapt ordinary file systems because they are designed for use on hard disks. Most of the file systems use some kind of search tree to store index information, which is very important from a performance aspect. Here we improved the B+ search tree algorithm so as to make flash devices more efficient. Our implementation of this solution saves 98%-99% of the flash operations, and is now part of the Linux kernel.

Havasi, Ferenc

133

Home network file system for home network based on IEEE1394 technology  

Microsoft Academic Search

This paper proposes a network file system for home networks based on IEEE-1394 technology. The home network file system enables real-time playback and recording of audio/video files and file sharing for HAVi (home audio video interoperability) compliant consumer devices

T. Igarashi; K. Hayakawa; T. Nishimura; T. Ozawa; H. Takizuka

1999-01-01

134

Determining the Optimal File Size on Tertiary Storage Systems Based on the Distribution of Query Sizes  

Microsoft Academic Search

In tertiary storage systems, the data is stored on multiple tape volumes where each tape is further divided into files. Since in many such systems the minimum unit of data transfer is a file, it is an important problem to match file sizes with the access patterns to the data. In general, if the file size is large relative to the query size
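To illustrate the trade-off the abstract describes, the sketch below evaluates a deliberately simple cost model: a query of size Q must read ceil(Q/F) whole files of size F, paying a fixed per-file positioning overhead plus transfer time, and the expected cost is averaged over a sampled query-size distribution. The overhead, bandwidth, and distribution parameters are invented for the example and are not taken from the paper.

    import math, random

    SEEK_S = 30.0          # assumed per-file tape positioning cost (seconds)
    BW_MBS = 10.0          # assumed transfer bandwidth (MB/s)
    random.seed(0)
    queries_mb = [random.expovariate(1 / 200.0) for _ in range(10_000)]  # sample query sizes

    def expected_cost(file_mb):
        """Mean time per query when data is stored in files of `file_mb` MB."""
        total = 0.0
        for q in queries_mb:
            nfiles = math.ceil(q / file_mb)
            total += nfiles * (SEEK_S + file_mb / BW_MBS)
        return total / len(queries_mb)

    candidates = [10, 50, 100, 200, 500, 1000]
    for f in candidates:
        print(f"{f:5d} MB -> {expected_cost(f):8.1f} s/query")
    print("best of the candidates:", min(candidates, key=expected_cost), "MB")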

Luis M. Bernardo; Henrik Nordberg; Doron Rotem; Arie Shoshani

1998-01-01

135

Toward automatic context-based attribute assignment for semantic file systems  

Microsoft Academic Search

Semantic file systems enable users to search for files based on attributes rather than just pre-assigned names. This paper develops and evaluates several new approaches to automatically generating file attributes based on context, complementing existing approaches based on content analysis. Context captures broader system state that can be used to provide new attributes for files, and to propagate attributes

Craig A. N. Soules; Gregory R. Ganger

2004-01-01

136

A trace-driven analysis of the UNIX 4.2 BSD file system  

Microsoft Academic Search

We analyzed the UNIX 4.2BSD file system by recording activity in trace files and writing programs to analyze the traces. The trace analysis shows that the average file system bandwidth needed per user is low (a few hundred bytes per second). Most of the files accessed are short, are open a short time, and are accessed sequentially. Most new information

John K. Ousterhout; Hervé Da Costa; David Harrison; John A. Kunze; Michael D. Kupfer; James G. Thompson

1985-01-01

137

Cleaning capacity of hybrid instrumentation technique using reamer with alternating cutting edges system files: Histological analysis  

PubMed Central

Aim: The aim of the following study is to evaluate the cleaning capacity of a hybrid instrumentation technique using Reamer with Alternating Cutting Edges (RaCe) system files in the apical third of mesial roots of mandibular molars. Materials and Methods: Twenty teeth were selected and separated into two groups (n = 20) according to instrumentation technique as follows: BioRaCe - chemomechanical preparation with K-type files #10 and #15; and files BioRaCe BR0, BR1, BR2, BR3, and BR4; HybTec - hybrid instrumentation technique with K-type files #10 and #15 in the working length, #20 at 2 mm, #25 at 3 mm, cervical preparation with Largo burs #1 and #2; apical preparation with K-type files #15, #20, and #25 and RaCe files #25.04 and #30.04. The root canals were irrigated with 1 ml of 2.5% sodium hypochlorite at each change of instrument. The specimens were histologically processed and photographed under light optical microscope. The images were inserted onto an integration grid to count the amount of debris present in the root canal. Results: BioRaCe presented the highest percentage of debris in the apical third, however, with no statistically significant difference for HybTec (P > 0.05). Conclusions: The hybrid technique presented similar cleaning capacity as the technique recommended by the manufacturer.

Junior, Emilio Carlos Sponchiado; da Fonseca, Tiago Silva; da Frota, Matheus Franco; de Carvalho, Fredson Marcio Acris; Marques, Andre Augusto Franco; Garcia, Lucas da Fonseca Roberti

2014-01-01

138

GLIMPSE: A Tool to Search Through Entire File Systems  

Microsoft Academic Search

GLIMPSE, which stands for GLobal IMPlicit SEarch, provides indexing and query schemes for file systems. The novelty of glimpse is that it uses a very small index — in most cases 2-4% of the size of the text — and still allows very flexible full-text retrieval including Boolean queries, approximate matching (i.e., allowing misspelling), and even searching for regular

Udi Manber; Sun Wu

1994-01-01

139

A 64-bit, Shared Disk File System for Linux  

Microsoft Academic Search

In computer systems today, speed and responsiveness is often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. We have developed a Linux file

Kenneth W. Preslan; Andrew P. Barry; Jonathan E. Brassow; Grant M. Erickson; Erling Nygaard; Christopher J. Sabol; Steven R. Soltis; David C. Teigland; Matthew T. O'keefe

1999-01-01

140

The secret stridulatory file under the right tegmen in katydids (Orthoptera, Ensifera, Tettigonioidea).  

PubMed

Males of most species of crickets and katydids produce species-specific calling songs to attract conspecific females. The typical stridulatory apparatus of the Ensifera consists of a file-and-scraper system in the basal dorsal region of the forewings (tegmina): the file on the underside of the cubital vein of one tegmen is composed of a series of lamelliform teeth and is run against the sclerotized scraper at the edge of the other tegmen. The region directly distal of the cubital vein is often thin and glassy and serves to amplify and spread the sound. In stridulating crickets the tegmina are quite symmetrical with both the left and the right one containing a file, which is considered the ancestral condition (Béthoux 2012). Most of these crickets adopted a right-over-left wing overlap and use only the right file. The few extant species of the ancient group Hagloidea have bilaterally symmetrical tegmina, both with functional files, and individual males can change the overlap (Morris & Gwynne 1978). Katydids are distinguished by a left-over-right wing overlap, with a stridulatory file on the underside of the left tegmen, and a scraper on the right one, which usually is also equipped with a mirror as resonating structure. PMID:24989770

Chamorro-Rengifo, Juliana; Braun, Holger; Lopes-Andrade, Cristiano

2014-01-01

141

Accurate and Efficient Replaying of File System Traces  

Microsoft Academic Search

Replaying traces is a time-honored method for benchmarking, stress-testing, and debugging systems, and more recently for forensic analysis. One benefit to replaying traces is the reproducibility of the exact set of operations that were captured during a specific workload. Existing trace capture and replay systems operate at different levels: network packets, disk device drivers, network file systems, or

Nikolai Joukov; Timothy Wong; Erez Zadok

2005-01-01

142

Statutes of Limitations for Filing a Lawsuit under the Individuals with Disabilities Education Act.  

ERIC Educational Resources Information Center

The Individuals with Disabilities Education Act (IDEA) does not contain a statute of limitations for filing a lawsuit. Presents a state-by-state analysis of the statutes of limitations courts have borrowed from the respective states for lawsuits appealing administrative decisions and independent actions to recover attorney fees under IDEA. (55…

Osborne, Allan G., Jr.

1996-01-01

143

29 CFR 15.202 - How is a claim filed under the MPCECA?  

Code of Federal Regulations, 2013 CFR

...A claim under this subpart must be presented in writing. A sample claim, located on the Department's Office of the Solicitor...site at www.dol.gov, is provided as an example for convenience of filing. The SF-95 for FTCA claims is not an...

2013-07-01

144

A Parallel and Fault Tolerant File System Based on NFS Servers  

Microsoft Academic Search

One important piece of system software for clusters is the parallel file system. All current parallel file systems and parallel I/O libraries for clusters do not use standard servers, thus it is very difficult to use these systems in heterogeneous environments. However, why use proprietary or special-purpose servers on the server end of a parallel file system when you have

Félix García; Alejandro Calderón; Jesús Carretero; José María Pérez; Javier Fernández

2003-01-01

145

NASIS data base management system: IBM 360 TSS implementation. Volume 6: NASIS message file  

NASA Technical Reports Server (NTRS)

The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.

1973-01-01

146

NASIS data base management system - IBM 360/370 OS MVT implementation. 6: NASIS message file  

NASA Technical Reports Server (NTRS)

The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.

1973-01-01

147

Part III: AFS - A Secure Distributed File System  

SciTech Connect

AFS is a secure distributed global file system providing location independence, scalability and transparent migration capabilities for data. AFS works across a multitude of Unix and non-Unix operating systems and has been used at many large sites in production for many years. AFS still provides unique features that are not available with other distributed file systems even though AFS is almost 20 years old. This age might make it less appealing to some but with IBM making AFS available as open-source in 2000, new interest in use and development was sparked. When talking about AFS, people often mention other file systems as potential alternatives. Coda (http://www.coda.cs.cmu.edu/) with its disconnected mode will always be a research project and never have production quality. Intermezzo (http://www.inter-mezzo.org/) is now in the Linux kernel but not available for any other operating systems. NFSv4 (http://www.nfsv4.org/) which picked up many ideas from AFS and Coda is not mature enough yet to be used in serious production mode. This article presents the rich features of AFS and invites readers to play with it.

Wachsmann, A.; /SLAC

2005-06-29

148

QMDS: A File System Metadata Management Service Supporting a Graph Data Model-Based Query Language  

Microsoft Academic Search

File system metadata management has become a bottleneck for many data-intensive applications that rely on high-performance file systems. Part of the bottleneck is due to the limitations of an almost 50 year old interface standard with metadata abstractions that were designed at a time when high-end file systems managed less than 100MB. Today's high-performance file systems store 7 to

Sasha Ames; Maya B. Gokhale; Carlos Maltzahn

2011-01-01

149

QMDS: a file system metadata management service supporting a graph data model-based query language  

Microsoft Academic Search

File system metadata management has become a bottleneck for many data-intensive applications that rely on high-performance file systems. Part of the bottleneck is due to the limitations of an almost 50-year-old interface standard with metadata abstractions that were designed at a time when high-end file systems managed less than 100 MB. Today's high-performance file systems store 7–9 orders of magnitude more

Sasha Ames; Maya Gokhale; Carlos Maltzahn

2012-01-01

150

MPI-IO on a Parallel File System for Cluster of Workstations  

Microsoft Academic Search

Since the definition of MPI-IO, a standard interface for parallel I/O, some implementations have become available for clusters of workstations, but their performance is limited by the underlying file system (typically NFS). New parallel file systems are now available on clusters of workstations and provide higher performance. A first idea is to interface such a parallel file system to a portable

Hakan Taki; Gil Utard

1999-01-01

151

Novel Filing Systems Applicable to an Automated Office: A State-of-the-Art Study.  

ERIC Educational Resources Information Center

Examines novel computer filing systems which have particular application to office information storage and retrieval requirements. A variety of filing systems and their major characteristics are reviewed, ranging from network-based file servers to digital image storage and retrieval systems. Desirable characteristics of a modern electronic office…

Restorick, F. Mark

1986-01-01

152

MNFS: mobile multimedia file system for NAND flash based storage device  

Microsoft Academic Search

In this work, we present a novel mobile multimedia file system, MNFS, which is specifically designed for NAND flash memory. It is designed for mobile multimedia devices such as MP3 players, personal media players (PMPs), digital camcorders, etc. Our file system has three novel features important in mobile multimedia applications: (1) predictable and uniform write latency, (2) quick file system

Hyojun Kim; Youjip Won

2006-01-01

153

Vnodes: An Architecture for Multiple File System Types in Sun UNIX  

Microsoft Academic Search

This paper describes an architecture for accommodating multiple file system implementations within the Sun UNIX+ kernel. The file system implementations can encompass local, remote, or even non-UNIX file systems. These file systems can be "plugged" into the kernel through a well defined interface, much the same way as UNIX device drivers are currently added to the kernel.

Steve R. Kleiman

1986-01-01

154

An Implementation of a Log-Structured File System for UNIX  

Microsoft Academic Search

Research results (ROSE91) suggest that a log-structured file system (LFS) offers the potential for dramatically improved write performance, faster recovery time, and faster file creation and deletion than traditional UNIX file systems. This paper presents a redesign and implementation of the Sprite (ROSE91) log-structured file system that is more robust and integrated into the vnode interface (KLEI86). Measurements

Margo I. Seltzer; Keith Bostic; Marshall K. Mckusick; Carl Staelin

1993-01-01

155

The European Southern Observatory-MIDAS table file system  

NASA Technical Reports Server (NTRS)

The new and substantially upgraded version of the Table File System in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of the data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access of very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is to provide also an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least square methods and user-defined functions are available. Finally, statistical tests of hypothesis and multivariate methods can also operate on tables.

Peron, M.; Grosbol, P.

1992-01-01

156

Mean field study of disordered spin-1/2 antiferromagnetic systems

NASA Astrophysics Data System (ADS)

We present a mean field theory picture of a disordered spin-1/2 antiferromagnetic system as a function of the degree of disorder, in connection with insulating doped semiconductors. The system is a resonant valence bond (RVB) liquid state at zero disorder, and a possible RVB glass state when the disorder is finite but weak. For a highly disordered system, we show that the essential physics is the formation and decimation of strongly coupled bonds, and the thermodynamics shows an effective power-law singularity, in qualitative agreement with the renormalization group results of Bhatt and Lee.

Dobrosavljevic, Vladimir; Zhou, Sen; Miranda, Eduardo

2008-03-01

157

A trace-driven analysis of the UNIX 4.2 BSD file system  

Microsoft Academic Search

We analyzed the UNIX 4.2 BSD file system by recording user-level activity in trace files and writing programs to analyze the traces. The tracer did not record individual read and write operations, yet still provided tight bounds on what information was accessed and when. The trace analysis shows that the average file system bandwidth needed per user is low (a

John K. Ousterhout; Hervé Da Costa; David Harrison; John A. Kunze; Mike Kupfer; James G. Thompson

1985-01-01

158

77 FR 15026 - Privacy Act of 1974; Farm Records File (Automated) System of Records  

Federal Register 2010, 2011, 2012, 2013

...Secretary Privacy Act of 1974; Farm Records File (Automated) System of Records AGENCY...Act System of Records titled Farm Records File (Automated) USDA/FSA-2. The records...INFORMATION: FSA maintains the Farm Records File (Automated) USDA/FSA-2...

2012-03-14

159

IRM: Integrated File Replication and Consistency Maintenance in P2P Systems  

Microsoft Academic Search

In peer-to-peer file sharing systems, file replication and consistency maintenance are widely used techniques for high system performance. Despite significant interdependencies between them, these two issues are typically addressed separately. Most file replication methods rigidly specify replica nodes, leading to low replica utilization, unnecessary replicas and hence extra consistency maintenance overhead. Most consistency maintenance methods propagate update messages based on

Haiying Shen

2008-01-01

160

Streaming RAID: a disk array management system for video files  

Microsoft Academic Search

The characteristics of digital video files and traffic differ substantially from those encountered with data applications: (i) video files are much larger than data files, and (ii) video traffic is continuous in nature while data traffic is bursty, with the data rate of a video stream much higher than the mean rate of a data traffic source. Accordingly, conventional file

Fouad A. Tobagi; Joseph Pang; Randall Baird; Mark Gang

1993-01-01

161

The design and implementation of a log-structured file system  

Microsoft Academic Search

This paper presents a new technique for disk storage management called a log-structured file system. A log-structured file system writes all modifications to disk sequentially in a log-like structure, thereby speeding up both file writing and crash recovery. The log is the only structure on disk; it contains indexing information so that files can be read back from the log

Mendel Rosenblum; John K. Ousterhout

1992-01-01
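
A minimal sketch of the log-structured layout described above, assuming an in-memory log and index for brevity: every write appends to a single log, and an index from (file, block) to log offset lets reads come back from the log. Segment cleaning, which a real LFS needs, is omitted, and all names are illustrative.

    # Minimal log-structured store: all writes append to one log; an index maps
    # (file, block) to the newest log offset, so reads are served from the log.
    # Cleaning (segment garbage collection) is deliberately omitted.
    import io

    class LogFS:
        def __init__(self):
            self.log = io.BytesIO()          # stands in for the on-disk log
            self.index = {}                  # (filename, block_no) -> (offset, length)

        def write_block(self, name, block_no, data: bytes):
            offset = self.log.seek(0, io.SEEK_END)   # sequential append only
            self.log.write(data)
            self.index[(name, block_no)] = (offset, len(data))

        def read_block(self, name, block_no) -> bytes:
            offset, length = self.index[(name, block_no)]
            self.log.seek(offset)
            return self.log.read(length)

    fs = LogFS()
    fs.write_block("a.txt", 0, b"version 1")
    fs.write_block("a.txt", 0, b"version 2")     # an overwrite appends; the index moves on
    print(fs.read_block("a.txt", 0))              # -> b'version 2'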

162

An Implementation of MPI-IO on Expand: A Parallel File System Based on NFS Servers  

Microsoft Academic Search

This paper describes an implementation of MPI-IO using a new parallel file system, called Expand (Expandable Parallel File System), that is based on NFS servers. Expand combines multiple NFS servers to create a distributed partition where files are declustered. Expand requires no changes to the NFS server and uses RPC operations to provide parallel access to the same file. Expand

Alejandro Calderón; Félix García; Jesús Carretero; José María Pérez; Javier Fernández

2002-01-01
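
A rough sketch of declustering a file's blocks across several servers, in the spirit of the striping described above. The round-robin policy and the names here are assumptions for illustration, not Expand's actual layout or its RPC path.

    # Illustrative declustering of a file across several servers: block i of a
    # file is stored on server (i mod N), so clients can read stripes in parallel.
    # This is a generic striping sketch, not Expand's actual layout policy.
    SERVERS = ["nfs0", "nfs1", "nfs2"]
    BLOCK_SIZE = 4

    storage = {s: {} for s in SERVERS}            # server -> {(file, block): data}

    def write_file(name, data: bytes):
        blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
        for i, block in enumerate(blocks):
            server = SERVERS[i % len(SERVERS)]    # round-robin placement
            storage[server][(name, i)] = block
        return len(blocks)

    def read_file(name, nblocks) -> bytes:
        return b"".join(storage[SERVERS[i % len(SERVERS)]][(name, i)]
                        for i in range(nblocks))

    n = write_file("data.bin", b"abcdefghijkl")
    print(read_file("data.bin", n))               # -> b'abcdefghijkl'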

163

26 CFR 301.6033-4 - Required use of magnetic media for returns by organizations required to file returns under...  

Code of Federal Regulations, 2013 CFR

...2013-04-01 false Required use of magnetic media for returns by organizations required...301.6033-4 Required use of magnetic media for returns by organizations required...file returns under section 6033 on magnetic media. An organization required to file a...

2013-04-01

164

DEVELOPMENT OF THE NATIONAL ACID PRECIPITATION ASSESSMENT PROGRAM (NAPAP) EMISSIONS INVENTORY, 1980: THE FLEXIBLE REGIONAL EMISSIONS DATA SYSTEM (SOFTWARE, ALLOCATION FACTOR FILES, PERIPHERAL DATA FILES)  

EPA Science Inventory

The package contains documentation of the Flexible Regional Emissions Data System (FREDS) for the 1980 NAPAP Emissions Inventory, FREDS source code, allocation factor files, and peripheral data files. FREDS extracts emissions data, pertinent modeling parameters (e.g., stack heigh...

165

LiFS: An Attribute-Rich File System for Storage Class Memories  

Microsoft Academic Search

As the number and variety of files stored and accessed by a typical user has dramatically increased, existing file system structures have begun to fail as a mechanism for managing all of the information contained in those files. Many applications—email clients, multimedia management applications, and desktop search engines are examples—have been forced to develop their own richer

Sasha Ames; Nikhil Bobb; Kevin M. Greenan; Owen S. Hofmann

2006-01-01

166

Generation and use of the Goddard trajectory determination system SLP ephemeris files  

NASA Technical Reports Server (NTRS)

Information is presented to acquaint users of the Goddard Trajectory Determination System Solar/Lunar/Planetary ephemeris files with the details connected with the generation and use of these files. In particular, certain sections constitute a user's manual for the ephemeris files.

Armstrong, M. G.; Tomaszewski, I. B.

1973-01-01

167

ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog  

NASA Technical Reports Server (NTRS)

A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.

Gray, F. P., Jr. (editor)

1979-01-01

168

Implementing Journaling in a Linux Shared Disk File System  

NASA Technical Reports Server (NTRS)

In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher-performance computer system implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew; Erickson, Grant; Agarwal, Manish

2000-01-01
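
The journaling technique this entry refers to can be illustrated generically: a metadata update is first appended to a journal and only then applied in place, so that committed records can be replayed after a crash. The sketch below is a generic write-ahead log with idempotent replay, not GFS's actual on-disk journal format; all names are invented.

    # Generic write-ahead journaling of metadata updates (not the GFS format):
    # an update is first appended to a journal, then applied in place; after a
    # crash, journaled records are replayed so metadata stays consistent.
    import json

    journal = []          # stands in for the on-disk journal
    metadata = {}         # stands in for in-place metadata (e.g. an inode table)

    def apply_update(key, value, crash_before_apply=False):
        record = {"key": key, "value": value}
        journal.append(json.dumps(record))        # 1. log the intent durably
        if crash_before_apply:
            return                                # simulate a crash here
        metadata[key] = value                     # 2. apply in place

    def recover():
        # Replay every journaled record; replay is idempotent, so records that
        # were already applied before the crash are simply rewritten.
        for line in journal:
            record = json.loads(line)
            metadata[record["key"]] = record["value"]

    apply_update("inode:7.size", 4096)
    apply_update("inode:9.size", 123, crash_before_apply=True)
    recover()
    print(metadata)   # both updates survive the simulated crash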

169

26 CFR 1.6033-4 - Required use of magnetic media for returns by organizations required to file returns under...  

Code of Federal Regulations, 2011 CFR

...media under § 301.6033-4 of this chapter must be filed in accordance with Internal Revenue Service revenue procedures, publications, forms, or instructions, including those posted electronically. (See § 601.601(d)(2) of this chapter)....

2011-04-01

170

Evaluation of the efficiency of a new file removal system in comparison with two conventional systems.  

PubMed

A novel file-removal system (FRS) was designed to address weak points of conventional file-removal methods. The purpose of this study was to compare file-removal time and dentin removal rates among the FRS, the Masserann kit (Micro-Mega, Besancon, France), and an ultrasonic file-removal method. Ninety extracted mandibular incisors with separated nickel titanium files were divided into 3 groups of 30 teeth each. Groups 1, 2, and 3 had file-removal attempts made by using the Masserann kit, a CPR-7 titanium ultrasonic tip (Obtura-Spartan Corp., Fenton, MO), and the FRS, respectively. Each group had three operators removing the separated files. Pre-/postoperative digital radiographs were downloaded into image analyzing software that calculated the amount of dentin removed. The FRS needed less time and had less dentin loss than the others (p<0.05). There were statistical differences between the experienced operator and less experienced operators regarding the file-removal time and the dentin removal rates (p<0.05). PMID:17437878

Terauchi, Yoshitsugu; O'Leary, Le; Kikuchi, Izumi; Asanagi, Mami; Yoshioka, Takatomo; Kobayashi, Chihiro; Suda, Hideaki

2007-05-01

171

An in vitro CT Comparison of Gutta-Percha Removal with Two Rotary Systems and Hedstrom Files  

PubMed Central

Introduction To evaluate the efficacy of NiTi mechanical rotary instrumentation and Hedstrom file for gutta-percha/sealer removal computed tomography (CT) was utilized in vitro. Materials and Methods Thirty extracted human single rooted teeth, each with a single canal were selected. The samples were decoronated with a double faced diamond disk to have 17-mm root; teeth roots were instrumented with K-files up to master apical file #30 using step back technique. Samples were obturated using cold lateral condensation of gutta-percha and AH Plus root canal sealer. The teeth were then randomly divided into three groups of 10 specimens each. After 2 weeks 3-dimensional images of the roots were obtained by CT and the volume of root filling mass was measured. All the canals were then retreated by either the ProTaper retreatment files, Mtwo retreatment files or Hedstrom files. The canals were irrigated with 2 mL of 2.5% sodium hypochlorite irrigating solution during each change of instrument. The volume of remaining filling materials after the retreatment procedures was assessed by CT. Statistical analysis was performed with one-way ANOVA and Tukey’s post hoc test. Results Neither of studied systems completely removed the root filling material. No significant difference was observed between the rotary systems. The volume of remaining filling materials was significantly less in rotary instrumentation than hand files. There was no significant difference for debris extruded from the apical foramen between the groups. Conclusion Under the experimental conditions, Mtwo and ProTaper retreatment files left less gutta-percha and sealer than H files; however, complete removal of filling materials was not achieved by the three systems investigated.

Yadav, Pankaj; Bharath, Makonahalli Jaganath; Sahadev, Chickmagravalli Krishnegowda; Makonahalli Ramachandra, Praveen Kumar; Rao, Yogesh; Ali, Ambereen; Mohamed, Shahnawaz

2013-01-01

172

Record Storage Systems: From Paper Based Files to Electronic Image Systems.  

ERIC Educational Resources Information Center

Alternative methods of storing and handling the registrar's records are described, and their relative advantages and disadvantages are noted. The methods include paper files, micrographics (computer output microfilm and source document microfilm), and electronic image systems. (MSE)

Gregory, Bob; Lonabocker, Louise

1986-01-01

173

Very high-speed remote file system. Final report, 20 September 1991-19 September 1992

SciTech Connect

The Minnesota Supercomputer Center, Inc. (MSCI), under its Very High-Speed Remote File System (VHSRFS) contract with the Army Research Office, conducted research in: Technologies for integrating TCP/IP protocols with ATM networks; Technologies which will enable gigabit-per-second local area networks (LANs) and high-performance hosts to use gigabit-speed ATM wide-area networks; Methods by which high-performance, massively parallel processors (MPPs) can connect to high-speed wide-area networks; and Protocols and other technologies required to develop a shared, remote file system capable of transferring data to multiple remote supercomputers over gigabit wide-area networks. The research results of the VHSRFS contract were intended to provide some of the foundation for research in the MAGIC Gigabit Testbed, which has been funded by DARPA. Keywords: Gigabit networks, Broadband Integrated Services Digital Networks (B-ISDN), Asynchronous Transfer Mode (ATM).

Salo, T.J.; Cavanaugh, J.D.; Spengler, M.K.

1992-12-15

174

File servers for network-based distributed systems  

Microsoft Academic Search

A file server provides remote centralized storage of data to workstations connected to it via a communication network; it facilitates data sharing among autonomous workstations and support of inexpensive workstations that have limited or no secondary storage. Various characteristics of file servers and the corresponding implementation issues based on a survey of a number of experimental file servers are discussed

Liba Svobodova

1984-01-01

175

UFO: a personal global file system based on user-level extensions to the operating system  

Microsoft Academic Search

In this article we show how to extend a wide range of functionality of standard operating systems completely at the user level. Our approach works by intercepting selected system calls at the user level, using tracing facilities such as the /proc file system provided by many Unix operating systems. The behavior of some intercepted system calls is then modified to

Albert D. Alexandrov; Maximilian Ibel; Klaus E. Schauser; Chris J. Scheiman

1998-01-01
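
The paper intercepts real system calls through /proc tracing; as a loose, language-level analogue only, the sketch below wraps Python's open() so that paths with a hypothetical remote:// prefix are fetched into a local cache file before the ordinary open proceeds. The prefix, cache, and fetch_remote stub are all assumptions made for illustration.

    # Loose analogue of user-level interception: wrap the open() call so that
    # paths naming a remote object are fetched into a local cache file first and
    # the ordinary open() then proceeds on the cached copy. The real UFO system
    # intercepts actual system calls via /proc tracing; this only mimics the idea.
    import builtins, os, tempfile

    _real_open = builtins.open
    _cache = {}

    def fetch_remote(path: str) -> bytes:
        # Placeholder for an FTP/HTTP fetch; a real implementation would use
        # urllib.request or ftplib here.
        return b"remote contents of " + path.encode()

    def ufo_open(path, mode="r", *args, **kwargs):
        if isinstance(path, str) and path.startswith("remote://"):
            if path not in _cache:
                fd, local = tempfile.mkstemp()
                with os.fdopen(fd, "wb") as f:
                    f.write(fetch_remote(path))
                _cache[path] = local
            path = _cache[path]
        return _real_open(path, mode, *args, **kwargs)

    builtins.open = ufo_open
    print(open("remote://host/data.txt", "rb").read())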

176

Human factors challenges in creating a principal support office system—the speech filing system approach  

Microsoft Academic Search

This paper identifies the key behavioral challenges in designing a principal-support office system and our approaches to them. These challenges included designing a system which office principals would find useful and would directly use themselves. Ultimately, the system, called the Speech Filing System (SFS), became primarily a voice store and forward message system with which users compose, edit, send, and

John D. Gould; Stephen J. Boies

1983-01-01

177

43 CFR 1822.16 - Where do I file an application that involves lands under the jurisdiction of more than one BLM...  

Code of Federal Regulations, 2010 CFR

...lands under the jurisdiction of more than one BLM State Office? 1822.16 Section 1822...APPLICATION PROCEDURES Filing a Document with BLM § 1822.16 Where do I file an application...lands under the jurisdiction of more than one BLM State Office? You may file your...

2010-10-01

178

Disk File Management in a Medium-Scale Time-Sharing System.  

ERIC Educational Resources Information Center

The paper describes a compact and highly efficient disk file management system responsible for the management and allocation of space on moving-head disk drives in a medium-scale time-sharing system. The disk file management system is a major component of the Experimental Time-Sharing System (ETSS) developed at the Learning Research and Development…

Fitzhugh, Robert J.; Pethia, Richard D.

179

The Cluster File System: Integration of High Performance Communication and I/O in Clusters

Microsoft Academic Search

In this paper, we report on the experiences in designing a portable parallel file system for clusters. The file system offers applications an interface compliant with MPI-IO, the I/O interface of the MPI-2 standard. The file system implementation relies upon MPI for internal coordination and communication. This guarantees high performance and portability over a wide range of hardware

Rosario Cristaldi; Giulio Iannello; Francesco Delfino

2002-01-01

180

Derived virtual devices: a secure distributed file system mechanism  

NASA Technical Reports Server (NTRS)

This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve Input/Output efficiency by allowing user processes to perform I/O operations directly from devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide for efficient data transfer in a typical NAP environment. We show how DVDs facilitate third-party transfer, and provide the security required in a more open network environment.

VanMeter, Rodney; Hotz, Steve; Finn, Gregory

1996-01-01

181

Automatic Tag Attachment Scheme for Efficient File Search in Peer-to-Peer File Sharing Systems  

Microsoft Academic Search

In this paper, we consider the problem of automatic tag attachment to the documents distributed over a P2P network aiming at improving the efficiency of file search in such networks. The proposed scheme combines text clustering with a modified tag extraction algorithm, and is executed in a fully distributed manner. We conducted experiments to evaluate the accuracy of the proposed

Ting Ting Qin; Satoshi Fujita

2011-01-01

182

37 CFR 201.34 - Procedures for filing Correction Notices of Intent to Enforce a Copyright Restored under the...  

Code of Federal Regulations, 2013 CFR

...and Copyrights COPYRIGHT OFFICE, LIBRARY OF CONGRESS COPYRIGHT OFFICE AND PROCEDURES...History Documents (COHD) file through the Library of Congress electronic information system...terminals located in other parts of the Library of Congress through the Library of...

2013-07-01

183

Using the K-25 C TD Common File System: A guide to CFSI (CFS Interface)  

SciTech Connect

A CFS (Common File System) is a large, centralized file management and storage facility based on software developed at Los Alamos National Laboratory. This manual is a guide to use of the CFS available to users of the Cray UNICOS system at Martin Marietta Energy Systems, Inc., in Oak Ridge, Tennessee.

Not Available

1989-12-01

184

Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs  

Microsoft Academic Search

We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of

William J. Bolosky; John R. Douceur; David Ely; Marvin Theimer

2000-01-01
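
A toy sketch of the replica-placement idea described in this entry: each file is encrypted and several replicas are placed on randomly chosen client machines, so the file stays available and unreadable to the hosts. The XOR keystream below is only a stand-in for real encryption, and every name and parameter is hypothetical.

    # Sketch of the replication idea: encrypted replicas are spread over client
    # machines that are not individually trusted. The XOR keystream is a stand-in
    # for real encryption; a production system would use an authenticated cipher.
    import hashlib, random

    CLIENTS = {f"pc{i}": {} for i in range(8)}    # machine name -> stored blobs
    REPLICAS = 3

    def keystream(key: bytes, n: int) -> bytes:
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def xor(data: bytes, key: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

    def store(name: str, data: bytes, key: bytes):
        blob = xor(data, key)                     # hosts only ever see ciphertext
        hosts = random.sample(list(CLIENTS), REPLICAS)
        for h in hosts:
            CLIENTS[h][name] = blob
        return hosts

    def fetch(name: str, hosts, key: bytes) -> bytes:
        for h in hosts:                           # any surviving replica suffices
            if name in CLIENTS[h]:
                return xor(CLIENTS[h][name], key)
        raise FileNotFoundError(name)

    hosts = store("notes.txt", b"private data", b"secret-key")
    print(fetch("notes.txt", hosts, b"secret-key"))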

185

Extending the Operating System at the User Level: the Ufo Global File System  

Microsoft Academic Search

In this paper we show how to extend the functionality of standard operating systems completely at the user level. Our approach works by intercepting selected system calls at the user level, using tracing facilities such as the /proc file system provided by many Unix operating systems. The behavior of some intercepted system calls is then modified

Albert D. Alexandrov; Maximilian Ibel; Klaus E. Schauser; Chris J. Scheiman

1997-01-01

186

FTP access as a user-defined file system  

Microsoft Academic Search

Current methods of accessing services provided on the Internet require users to know several interfaces. We show that integrating these services into the file name space brings considerable benefits: users are able to use different services by a simple file access. This mechanism is general enough that all applications can use the available services. Our idea is demonstrated by

Michael K. Gschwind

1994-01-01

187

Journaling Versus Soft Updates: Asynchronous Meta-data Protection in File Systems  

Microsoft Academic Search

The UNIX Fast File System (FFS) is probably the most widely-used file system for performance comparisons. However, such comparisons frequently overlook many of the performance enhancements that have been added over the past decade. In this paper, we explore the two most commonly used approaches for improving the performance of meta-data operations and recovery: journaling and Soft Updates.

Margo I. Seltzer; Gregory R. Ganger; Marshall K. Mckusick; Keith A. Smith; Craig A. N. Soules; Christopher A. Stein

2000-01-01

188

HighLight: Using a Log-structured File System for Tertiary Storage Management  

Microsoft Academic Search

Robotic storage devices offer huge storage capacity at a low cost per byte, but with large access times. Integrating these devices into the storage hierarchy presents a challenge to file system designers. Log-structured file systems (LFSs) were developed to reduce latencies involved in accessing disk devices, but their sequential write patterns match well with tertiary storage characteristics. Unfortu-

John T. Kohl; Carl Staelin; Michael Stonebraker

1993-01-01

189

Security Considerations When Designing a Distributed File System Using Object Storage Devices  

Microsoft Academic Search

We present the design goals that led us to developing a distributed object-based secure file system, Brave. Brave uses mutually authenticated object storage devices, SCARED, to store file system data. Rather than require a new authentication infrastructure, we show how we use a simple authentication protocol that is bridged into existing security infrastructures, even if there

Benjamin C. Reed; Mark A. Smith; Dejan Diklic

2002-01-01

190

ELF: an efficient log-structured flash file system for micro sensor nodes  

Microsoft Academic Search

An efficient and reliable file storage system is important to micro sensor nodes so that data can be logged for later asynchronous delivery across a multi-hop wireless sensor network. Designing and implementing such a file system for a sensor node faces various challenges. Sensor nodes are highly resource constrained in terms of limited runtime memory, limited persistent storage, and finite

Hui Dai; Michael Neufeld; Richard Han

2004-01-01

191

76 FR 52549 - Suspension of the Duty To File Reports for Classes of Asset-Backed Securities Under Section 15(D...  

Federal Register 2010, 2011, 2012, 2013

...240 and 249 [Release No. 34-65148; File No. S7-02-11] RIN 3235-AK89 Suspension of the Duty To File Reports for Classes of Asset- Backed Securities...the automatic suspension of the duty to file under Section 15(d) of the...

2011-08-23

192

76 FR 2049 - Suspension of the Duty To File Reports for Classes of Asset-Backed Securities Under Section 15(d...  

Federal Register 2010, 2011, 2012, 2013

...240 and 249 [Release No. 34-63652; File No. S7-02-11] RIN 3235-AK89 Suspension of the Duty To File Reports for Classes of Asset- Backed Securities...the automatic suspension of the duty to file under Section 15(d) of the...

2011-01-12

193

SSS: A Personal File Storage System Considering Fairness among Users Based on Pure P2P Model  

Microsoft Academic Search

This paper proposes a personal file storage system named simple shared storage (SSS) based on the pure P2P model. The participant nodes of SSS contribute their storage space to SSS and they can store their files in SSS. SSS maintains the number of the replicas of a file within the predefined range and ensures availability of the stored files. It

Shouta Morimoto; Fumio Teraoka

2008-01-01

194

Utilizing Lustre file system with dCache for CMS analysis  

NASA Astrophysics Data System (ADS)

This paper presents storage implementations that utilize the Lustre file system for CMS analysis with direct POSIX file access while keeping dCache as the frontend for data distribution and management. We describe two implementations that integrate dCache with Lustre and how to enable user data access without going through the dCache file read protocol. Our initial CMS analysis job measurement and transfer performance results are shown and the advantages of different implementations are briefly discussed.

Wu, Y.; Kim, B.; Rodriguez, J. L.; Fu, Y.; Bourilkov, D.; Avery, P.

2010-04-01

195

Efficient structured data access in parallel file systems.  

SciTech Connect

Parallel scientific applications store and retrieve very large, structured datasets. Directly supporting these structured accesses is an important step in providing high-performance I/O solutions for these applications. High-level interfaces such as HDF5 and Parallel netCDF provide convenient APIs for accessing structured datasets, and the MPI-IO interface also supports efficient access to structured data. However, parallel file systems do not traditionally support such access. In this work we present an implementation of structured data access support in the context of the Parallel Virtual File System (PVFS). We call this support 'datatype I/O' because of its similarity to MPI datatypes. This support is built by using a reusable datatype-processing component from the MPICH2 MPI implementation. We describe how this component is leveraged to efficiently process structured data representations resulting from MPI-IO operations. We quantitatively assess the solution using three test applications. We also point to further optimizations in the processing path that could be leveraged for even more efficient operation.

Ching, A.; Choudhary, A.; Liao, W.-K.; Ross, R.; Gropp, W.; Mathematics and Computer Science; Northwestern Univ.

2003-01-01
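
To make the 'datatype I/O' idea concrete, the sketch below expands a compact, MPI_Type_vector-like description of a strided access into (offset, length) regions and coalesces adjacent ones, instead of issuing one request per small block. The function names are illustrative and are not the PVFS implementation.

    # Sketch of the "datatype I/O" idea: a structured (strided) access pattern is
    # described compactly -- count, block length, stride -- and the file system
    # side expands it into contiguous (offset, length) regions, rather than the
    # client issuing one request per small block. Names are illustrative only.
    def vector_type(count, blocklen, stride, base_offset=0, elem_size=8):
        """Yield (offset, length) pairs for an MPI_Type_vector-like pattern."""
        for i in range(count):
            yield (base_offset + i * stride * elem_size, blocklen * elem_size)

    def coalesce(regions):
        """Merge adjacent regions so the I/O system issues fewer operations."""
        merged = []
        for off, length in regions:
            if merged and merged[-1][0] + merged[-1][1] == off:
                merged[-1][1] += length
            else:
                merged.append([off, length])
        return [tuple(r) for r in merged]

    # A column of an 8x8 matrix of doubles: 8 blocks of 1 element, stride 8.
    pattern = list(vector_type(count=8, blocklen=1, stride=8))
    print(pattern)            # eight small regions, 64 bytes apart
    print(coalesce(pattern))  # nothing merges here, but a row pattern would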

196

Evaluating the Shared Root File System Approach for Diskless High-Performance Computing Systems  

SciTech Connect

Diskless high-performance computing (HPC) systems utilizing networked storage have become popular in the last several years. Removing disk drives significantly increases compute node reliability as they are known to be a major source of failures. Furthermore, networked storage solutions utilizing parallel I/O and replication are able to provide increased scalability and availability. Reducing a compute node to processor(s), memory and network interface(s) greatly reduces its physical size, which in turn allows for large-scale dense HPC solutions. However, one major obstacle is the requirement by certain operating systems (OSs), such as Linux, for a root file system. While one solution is to remove this requirement from the OS, another is to share the root file system over the networked storage. This paper evaluates three networked file system solutions, NFSv4, Lustre and PVFS2, with respect to their performance, scalability, and availability features for servicing a common root file system in a diskless HPC configuration. Our findings indicate that Lustre is a viable solution as it meets both, scaling and performance requirements. However, certain availability issues regarding single points of failure and control need to be considered.

Engelmann, Christian [ORNL; Ong, Hong Hoe [ORNL; Scott, Stephen L [ORNL

2009-01-01

197

NASA Uniform Files Index  

NASA Technical Reports Server (NTRS)

This handbook is a guide for the use of all personnel engaged in handling NASA files. It is issued in accordance with the regulations of the National Archives and Records Administration, in the Code of Federal Regulations Title 36, Part 1224, Files Management; and the Federal Information Resources Management Regulation, Subpart 201-45.108, Files Management. It is intended to provide a standardized classification and filing scheme to achieve maximum uniformity and ease in maintaining and using agency records. It is a framework for consistent organization of information in an arrangement that will be useful to current and future researchers. The NASA Uniform Files Index coding structure is composed of the subject classification table used for NASA management directives and the subject groups in the NASA scientific and technical information system. It is designed to correlate files throughout NASA and it is anticipated that it may be useful with automated filing systems. It is expected that in the conversion of current files to this arrangement it will be necessary to add tertiary subjects and make further subdivisions under the existing categories. Established primary and secondary subject categories may not be changed arbitrarily. Proposals for additional subject categories of NASA-wide applicability, and suggestions for improvement in this handbook, should be addressed to the Records Program Manager at the pertinent installation who will forward it to the NASA Records Management Office, Code NTR, for approval. This handbook is issued in loose-leaf form and will be revised by page changes.

1987-01-01

198

Migrant Student Record Transfer System (MSRTS) [machine-readable data file].  

ERIC Educational Resources Information Center

The Migrant Student Record Transfer System (MSRTS) machine-readable data file (MRDF) is a collection of education and health data on more than 750,000 migrant children in grades K-12 in the United States (except Hawaii), the District of Columbia, and the outlying territories of Puerto Rico and the Mariana and Marshall Islands. The active file

Arkansas State Dept. of Education, Little Rock. General Education Div.

199

On File and Task Placements and Dynamic Load Balancing in Distributed Systems  

Microsoft Academic Search

Two distributed system problems, the file and task placement problem and the dynamic load balancing problem, are investigated in this paper. To find the placement of files and tasks at sites with minimal total communication overhead, we propose using the Simulated Annealing approach and multiple objective functions. Experimental results show that our proposed approach exhibits superior performance with much less

Po-Jen Chuang; Chi-Wei Cheng

2002-01-01
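
A minimal simulated-annealing sketch for the placement problem described above, with an invented single-objective communication-cost model; the paper's multiple objective functions and its actual cost model are not reproduced here.

    # Minimal simulated annealing for placing files at sites to reduce a toy
    # communication cost (sum of access frequencies for remote accesses). The
    # cost model and parameters are invented for illustration, not the paper's.
    import math, random

    SITES = ["A", "B", "C"]
    FILES = ["f1", "f2", "f3"]
    # access[(file, site)] = how often the site accesses the file
    access = {("f1", "A"): 9, ("f1", "B"): 1, ("f2", "B"): 7,
              ("f2", "C"): 2, ("f3", "C"): 5, ("f3", "A"): 4}

    def cost(placement):
        # every access from a site that does not hold the file costs its frequency
        return sum(freq for (f, s), freq in access.items() if placement[f] != s)

    placement = {f: random.choice(SITES) for f in FILES}
    best, temp = dict(placement), 10.0
    while temp > 0.01:
        f = random.choice(FILES)
        candidate = dict(placement)
        candidate[f] = random.choice(SITES)
        delta = cost(candidate) - cost(placement)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            placement = candidate            # accept downhill and some uphill moves
        if cost(placement) < cost(best):
            best = dict(placement)
        temp *= 0.95                         # geometric cooling schedule

    print(best, cost(best))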

200

Coda: a highly available file system for a distributed workstation environment  

Microsoft Academic Search

A description is given of Coda, a file system for a large-scale distributed computing environment composed of Unix workstations. It provides resilience to server and network failures through the use of two distinct but complementary mechanisms. One mechanism, server replication, involves storing copies of a file at multiple servers. The other mechanism, disconnected operation, is a mode of execution in

M. Satyanarayanan

1989-01-01

201

77 FR 71591 - Pelican Gathering Systems, LLC; Notice for Temporary Waiver of Filing and Reporting Requirements  

Federal Register 2010, 2011, 2012, 2013

...Regulatory Commission [Docket No. OR13-4-000] Pelican Gathering Systems, LLC; Notice for Temporary...Reporting Requirements On October 19, 2012, Pelican Gathering Systems, LLC (Pelican) filed a Request for a Temporary Waiver of...

2012-12-03

202

Instructional Support System (ISS) Hierarchy File Editor User Manual, December 1987 (VAX Version).  

National Technical Information Service (NTIS)

The Instructional Support System Hierarchy File Editor allows system administrators to display hierarchy record information about specific courses. Each hierarchy record contains the following data: general information, group test array, group data flags,...

1987-01-01

203

26 CFR 1.6041-6 - Returns made on Forms 1096 and 1099 under section 6041; contents and time and place for filing.  

Code of Federal Regulations, 2011 CFR

...Returns made on Forms 1096 and 1099 under section 6041; contents and time and place for filing. 1.6041-6 Section 1.6041-6...Returns made on Forms 1096 and 1099 under section 6041; contents and time and place for filing. Returns made under...

2011-04-01

204

Maintaining a Distributed File System by Collection and Analysis of Metrics  

NASA Technical Reports Server (NTRS)

AFS (originally, the Andrew File System) is a widely-deployed distributed file system product used by companies, universities, and laboratories world-wide. However, it is not trivial to operate: running an AFS cell is a formidable task. It requires a team of dedicated and experienced system administrators who must manage a user base numbering in the thousands, rather than the smaller range of 10 to 500 faced by the typical system administrator.

Bromberg, Daniel

1997-01-01

205

26 CFR 54.6081-1 - Automatic extension of time for filing returns for certain excise taxes under Chapter 43.  

Code of Federal Regulations, 2013 CFR

...8928, "Return of Certain Excise Taxes Under Chapter 43 of the Internal...Time To File Certain Business Income Tax, Information, and Other Returns," or in any...of the properly estimated unpaid tax liability on or before the date...

2013-04-01

206

26 CFR 301.6033-4 - Required use of magnetic media for returns by organizations required to file returns under...  

Code of Federal Regulations, 2010 CFR

...2010-04-01 false Required use of magnetic media for returns by organizations required... § 301.6033-4 Required use of magnetic media for returns by organizations required...to file returns under section 6033 on magnetic media. An organization required...

2010-04-01

207

26 CFR 301.6033-4 - Required use of magnetic media for returns by organizations required to file returns under...  

Code of Federal Regulations, 2010 CFR

...2009-04-01 false Required use of magnetic media for returns by organizations required... § 301.6033-4 Required use of magnetic media for returns by organizations required...to file returns under section 6033 on magnetic media. An organization required...

2009-04-01

208

Nmcs Information Processing System 360 Formatted File System (NIPS 360 Ffs). User'S Manual. Volume Iii. File Maintenance (FM).  

National Technical Information Service (NTIS)

The volume defines the File Maintenance component of NIPS S/360 FFS. It describes the functioning of the component, its capabilities and limitations, expected output results, and the specifications for preparing run decks and control cards which will serve as...

J. Stallard

1970-01-01

209

75 FR 27986 - Electronic Filing System-Web (EFS-Web) Contingency Option  

Federal Register 2010, 2011, 2012, 2013

The United States Patent and Trademark Office (USPTO) is increasing the availability of its patent electronic filing system, Electronic Filing System--Web (EFS-Web), by providing a new contingency option when the primary portal to EFS-Web has an unscheduled outage. Previously, the entire EFS-Web system was not available to users during such an outage. The contingency option in EFS-Web will...

2010-05-19

210

Adaptive Message Management Using Hybrid Channel Model in Parallel File System  

Microsoft Academic Search

A parallel file system is utilized to support the heavy file request load generated by a parallel application in a cluster system. It uses traditional communication protocols like TCP/IP or UDP/IP that were designed for Wide Area Networks (WANs). For a cluster system, however, these protocols are inappropriate because of their large network overhead. To address this problem, we propose

Joon-hyung Hwangbo; Sang-ki Lee; Yoon-young Lee; Dae-wha Seo

2002-01-01

211

A resources monitoring architecture for P2P file-sharing systems  

NASA Astrophysics Data System (ADS)

Resources monitoring is an important problem in the overall efficient usage and control of P2P file-sharing systems. The resources of file-sharing systems can include all distributing servers, programs and peers. Several studies have tried to address this issue, but most of them focus on P2P traffic characterization, identification and user behavior. Based on previous work, we present a resources monitoring architecture for P2P file-sharing systems. The monitoring architecture employs a hierarchical structure and provides systematic monitoring including resources discovery, relative information extraction and analysis, trace and location. It gives a systematic framework for file-sharing resources monitoring, and a prototype system has been developed based on the framework.

Wang, Wenxian; Chen, Xingshu; Wang, Haizhou

2013-07-01

212

Toward Millions of File System IOPS on Low-Cost, Commodity Hardware  

PubMed Central

We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads.

Zheng, Da; Burns, Randal; Szalay, Alexander S.

2013-01-01
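
The set-associative page cache mentioned above can be illustrated in a few lines: a page number hashes to one small set, and lookup, LRU update and eviction touch only that set, which limits contention when many threads use the cache concurrently. The parameters and names below are illustrative, not the paper's implementation.

    # Minimal set-associative page cache: a page number hashes to one small set,
    # and lookup/eviction only touch that set, which limits lock contention when
    # many threads access the cache concurrently. Parameters are illustrative.
    from collections import OrderedDict

    NUM_SETS = 4
    WAYS = 2          # associativity: pages kept per set

    sets = [OrderedDict() for _ in range(NUM_SETS)]

    def read_page(page_no, load_from_disk):
        s = sets[page_no % NUM_SETS]          # only this set is examined
        if page_no in s:
            s.move_to_end(page_no)            # LRU update within the set
            return s[page_no]
        data = load_from_disk(page_no)        # miss: fetch and insert
        if len(s) >= WAYS:
            s.popitem(last=False)             # evict the LRU page of this set only
        s[page_no] = data
        return data

    loads = []
    fake_disk = lambda n: (loads.append(n) or f"page-{n}")
    for n in [0, 4, 0, 8, 0]:                 # pages 0, 4, 8 all map to set 0
        read_page(n, fake_disk)
    print("disk loads:", loads)               # page 4 was evicted; page 0 stayed hot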

213

NVFAT: A FAT-Compatible File System with NVRAM Write Cache for Its Metadata  

NASA Astrophysics Data System (ADS)

File systems make use of the buffer cache to enhance their performance. Traditionally, part of DRAM, which is volatile memory, is used as the buffer cache. In this paper, we consider the use of Non-Volatile RAM (NVRAM) as a write cache for metadata of the file system in embedded systems. NVRAM is a state-of-the-art memory that provides characteristics of both non-volatility and random byte addressability. By employing NVRAM as a write cache for dirty metadata, we retain the same integrity as a file system that always synchronously writes its metadata to storage, while at the same time improving file system performance to the level of a file system that always writes asynchronously. To show quantitative results, we developed an embedded board with NVRAM and modified the VFAT file system provided in Linux 2.6.11 to accommodate the NVRAM write cache. We performed a wide range of experiments on this platform for various synthetic and realistic workloads. The results show that substantial reductions in execution time are possible from an application viewpoint. Another consequence of the write cache is its benefits at the FTL layer, leading to improved wear leveling of Flash memory and increased energy savings, which are important measures in embedded systems. From the real numbers obtained through our experiments, we show that wear leveling is improved considerably, and we also quantify the improvements in terms of energy.

Doh, In Hwan; Lee, Hyo J.; Moon, Young Je; Kim, Eunsam; Choi, Jongmoo; Lee, Donghee; Noh, Sam H.
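
A schematic of the NVRAM write-cache idea described above, assuming plain dictionaries stand in for NVRAM and flash storage: dirty metadata is durable as soon as it reaches the cache and is flushed to the slow store lazily. The names below are invented for illustration and are not NVFAT's data structures.

    # Sketch of the NVRAM-as-metadata-write-cache idea: dirty metadata goes into a
    # non-volatile cache immediately (so integrity matches a synchronous file
    # system), and is flushed to the slow store lazily (so latency looks
    # asynchronous). The dictionaries simply model NVRAM and flash storage.
    nvram_cache = {}        # survives power loss in the real design
    flash_store = {}        # slow persistent storage (e.g. NAND behind an FTL)

    def update_metadata(key, value):
        nvram_cache[key] = value          # fast, yet durable, write

    def flush(limit=None):
        # background flush of dirty entries, batching writes to the slow store
        for key in list(nvram_cache)[:limit]:
            flash_store[key] = nvram_cache.pop(key)

    def read_metadata(key):
        return nvram_cache.get(key, flash_store.get(key))

    update_metadata("FAT[12]", 13)
    update_metadata("dirent:/a.txt", {"size": 512})
    print(read_metadata("FAT[12]"))   # served before any flush ever happens
    flush()
    print(flash_store)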

214

Image save and carry system-based teaching-file library  

NASA Astrophysics Data System (ADS)

Digital imaging technology has introduced some new possibilities for forming teaching files without films. The IS&C (Image Save & Carry) system, which is based on magneto-optic discs, is a good medium for this purpose because of its large capacity, prompt access time, and unified format independent of operating systems. The author has constructed a teaching file library on which users can add and edit images. CD-ROM and IS&C satisfy most of the basic criteria for a teaching file construction platform. CD-ROM is the best medium for circulating large numbers of identical copies, while IS&C is advantageous for personal addition to and editing of the library.

Morimoto, Kouji; Kimura, Michio; Fujii, Toshiyuki

1994-05-01

215

A Universal Access, Smart-Card-Based, Secure File System  

Microsoft Academic Search

The SFS provides transparent, end-to-end encryption support to users accessing files across the Internet on HTTP or FTP servers. In this paper we describe the SFS architecture, the current implementation, and future directions for our work.

James Hughes; Chris Feist; Steve Hawkinson; Jeff Perrault; Matthew O'Keefe; David Corcoran

1999-01-01

216

Using Hadoop File System and MapReduce in a small/medium Grid site  

NASA Astrophysics Data System (ADS)

Data storage and data access are key to CPU-intensive and data-intensive high performance Grid computing. Hadoop is an open-source data processing framework that includes a fault-tolerant and scalable distributed data processing model and execution environment, named MapReduce, and a distributed file system, named the Hadoop Distributed File System (HDFS). HDFS was deployed and tested within the Open Science Grid (OSG) middleware stack. Efforts have been taken to integrate HDFS with gLite middleware. We have tested the file system thoroughly in order to understand its scalability and fault tolerance while dealing with small/medium site environment constraints. To benefit fully from this file system, we made it work in conjunction with the Hadoop job scheduler to optimize the execution of local physics analysis workflows. The performance of analysis jobs using this architecture appears promising, making it worth following up in the future.

Riahi, H.; Donvito, G.; Fanò, L.; Fasi, M.; Marzulli, G.; Spiga, D.; Valentini, A.

2012-12-01
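
For readers unfamiliar with the MapReduce model mentioned above, the sketch below runs a word count through explicit map, shuffle and reduce phases in plain Python. It illustrates the programming model only; real Hadoop jobs run over HDFS blocks through the Hadoop APIs or Hadoop Streaming rather than this toy driver.

    # The MapReduce model in plain Python (word count): map emits (key, value)
    # pairs, the pairs are grouped by key, and reduce folds each group.
    from collections import defaultdict

    def map_phase(record):                       # one input record -> (word, 1) pairs
        for word in record.split():
            yield (word.lower(), 1)

    def reduce_phase(word, counts):              # fold all values for one key
        return (word, sum(counts))

    records = ["HDFS stores blocks", "MapReduce processes blocks", "blocks of data"]

    groups = defaultdict(list)                   # shuffle: group values by key
    for record in records:
        for key, value in map_phase(record):
            groups[key].append(value)

    result = dict(reduce_phase(k, v) for k, v in groups.items())
    print(result["blocks"])                      # -> 3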

217

Inverted File Organization in the Information Retrieval System Based on Thesaurus with Weights.  

ERIC Educational Resources Information Center

Examines through a series of mathematical models (theorems, descriptions, and examples), properties and operations on inverted files, which are used in an information retrieval system based on thesaurus with weighted descriptors. (CWM)

Mazur, Zygmunt

1979-01-01

218

PicFS: The Privacy-Enhancing Image-Based Collaborative File System  

Microsoft Academic Search

Cloud computing makes available a vast amount of computation and storage resources in the pay-as-you-go manner. However, the users of cloud storage have to trust the providers to ensure the data privacy and confidentiality. In this paper, we present the Privacy-enhancing Image-based Collaborative File System (PicFS), a network file system that steganographically encodes itself into images and provides anonymous uploads

Chris Sosa; Blake C. Sutton; H. Howie Huang

2010-01-01

219

Generating configuration for missing traffic detector and security measures in industrial control systems based on the system description files  

Microsoft Academic Search

Nowadays, industrial control system operators are trying to fulfill requirements from upcoming standards and regulations regarding cyber security issues. However, addressing such security requirements by implementing security measures is not a trivial task. Moreover, the creation and maintenance of the configuration for the security measures is prone to error. This research shows that it is possible to derive configuration file(s)

H. Hadeli; R. Schierholz; M. Braendle; C. Tuduce

2009-01-01

220

Performance of the engineering analysis and data system 2 common file system  

NASA Technical Reports Server (NTRS)

The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in an RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users had been migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II including both disk and tape storage. UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs which have been corrected. Other problems were associated with hardware. However, the use of NFS in UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper will first describe the EADS II with emphasis on the CFS. Then, a discussion of mass storage systems will be presented, and methods of measuring the performance of the Common File System will be outlined. Finally, areas for further study will be identified and conclusions will be drawn.

Debrunner, Linda S.

1993-01-01

221

Research of the file system of volume holographic storage based on virtual storage layer  

NASA Astrophysics Data System (ADS)

Volume holographic storage (VHS) is currently the subject of widespread interest as a fast-readout-rate, high-capacity digital data-storage technology. To exploit the characteristics of VHS, the paper presents a file system built on a virtual storage layer (VSL), which is compatible with the logical layer of currently used file systems and accommodates the requirements of VHS at the physical layer. The VSL, which is made up of the super block, the directory area, the metadata area and the dynamic file area, connects directly to the storage media on one side and achieves compatibility with the existing file system by providing operating interfaces to the logical file system above. This two-layer storage structure effectively reduces the number of disk accesses and improves the speed of file reads and writes. The allocation mode of 'hybrid of block and zone' and the allocation strategy of 'block priority' greatly improve the space utilization rate of the storage device and strengthen the storage adaptability of VHS.

Wu, Fei; Yi, Faling; Xie, Changsheng

2008-03-01

222

Multilevel Caching in Distributed File Systems — or — Your cache ain't nuthin' but trash  

Microsoft Academic Search

We are investigating the potential for a hierarchy of intermediate file servers to address scaling problems in increasingly large distributed file systems. To this end, we have run trace-driven simulations based on data from DEC-SRC and our own data collection to determine the potential of caching-only intermediate servers. The degree of sharing among clients is central to the effectiveness of

D. Muntz; P. Honeyman

1992-01-01

223

Security in the CernVM File System and the Frontier Distributed Database Caching System  

NASA Astrophysics Data System (ADS)

Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

Dykstra, D.; Blomer, J.

2014-06-01
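
A minimal sketch of the integrity checking both systems rely on: data fetched through untrusted proxies is accepted only if its secure hash matches a value obtained from a trusted catalogue. Verifying the digital signature on the catalogue that carries the hashes is omitted here, and all names are illustrative.

    # Minimal integrity check: content delivered through untrusted HTTP caches is
    # accepted only if its SHA-256 digest matches the trusted value published by
    # the repository. Signature verification of the catalogue itself is omitted.
    import hashlib

    def publish(content: bytes):
        # the repository publishes content plus a trusted digest for it
        return hashlib.sha256(content).hexdigest(), content

    def fetch_via_proxy(content: bytes, tampered=False) -> bytes:
        # an untrusted cache may corrupt or substitute data in transit
        return content + b"!" if tampered else content

    def client_read(expected_digest: str, delivered: bytes) -> bytes:
        if hashlib.sha256(delivered).hexdigest() != expected_digest:
            raise ValueError("integrity check failed; refetch from another source")
        return delivered

    digest, data = publish(b"conditions-db-snapshot")
    print(client_read(digest, fetch_via_proxy(data)))          # accepted
    try:
        client_read(digest, fetch_via_proxy(data, tampered=True))
    except ValueError as e:
        print(e)                                               # rejected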

224

TBBT: scalable and accurate trace replay for file server evaluation  

Microsoft Academic Search

This paper describes the design, implementation, and evaluation of TBBT, the first comprehensive NFS trace replay tool. Given an NFS trace, TBBT automatically detects and repairs missing operations in the trace, derives a file system image required to successfully replay the trace, ages the file system image appropriately, initializes the file server under test with that image, and finally drives

Ningning Zhu; Jiawu Chen; Tzi-Cker Chiueh; Daniel Ellard

2005-01-01

225

The global unified parallel file system (GUPFS) project: FY 2002 activities and results  

SciTech Connect

The Global Unified Parallel File System (GUPFS) project is a multiple-phase, five-year project at the National Energy Research Scientific Computing (NERSC) Center to provide a scalable, high performance, high bandwidth, shared file system for all the NERSC production computing and support systems. The primary purpose of the GUPFS project is to make it easier to conduct advanced scientific research using the NERSC systems. This is to be accomplished through the use of a shared file system providing a unified file namespace, operating on consolidated shared storage that is directly accessed by all the NERSC production computing and support systems. During its first year, FY 2002, the GUPFS project focused on identifying, testing, and evaluating existing and emerging shared/cluster file system, SAN fabric, and storage technologies; identifying NERSC user input/output (I/O) requirements, methods, and mechanisms; and developing appropriate benchmarking methodologies and benchmark codes for a parallel environment. This report presents the activities and progress of the GUPFS project during its first year, the results of the evaluations conducted, and plans for near-term and longer-term investigations.

Butler, Gregory F.; Lee, Rei Chi; Welcome, Michael L.

2003-04-07

226

29 CFR 15.203 - When should a claim under the MPCECA be filed?  

Code of Federal Regulations, 2013 CFR

...be allowed only if it is filed in writing within 2 years after accrual of the claim. For...armed conflict ends, whichever is earlier, if a claim otherwise accrues...conflict or has accrued within 2 years before war or an armed...

2013-07-01

227

Extensible File Systems (ELFS): An Object-Oriented Approach to High Performance File I/O

Microsoft Academic Search

Scientific applications often manipulate very large sets of persistent data. Over the past decade, advances in disk storage device performance have consistently been outpaced by advances in the performance of the rest of the computer system. As a result, many scientific applications have become I/O-bound, i.e. their run-times are dominated by the time spent performing I/O operations. Conse-

John F. Karpovich; Andrew S. Grimshaw; James C. French

1994-01-01

228

Organization of the Inverted Files in a Distributed Information Retrieval System Based on Thesauri.  

ERIC Educational Resources Information Center

Describes how operations on local inverted files are to be modified in order to use them in distributed information retrieval systems based on thesauri. The presented rules may be viewed as the logical approach in implementing a distributed retrieval system consisting of n local retrieval systems. (Author/MBR)

Mazur, Zygmunt

1986-01-01

229

A Fault Tolerant MPI-IO Implementation using the Expand Parallel File System  

Microsoft Academic Search

Parallelism in file systems is obtained by using several independent server nodes supporting one or more secondary storage devices. This approach increases the performance and scalability of the system, but a fault in a single node can stop the whole system. To avoid this problem, data must be stored using some kind of redundant technique, so any data stored in

Alejandro Calderón; Félix García Carballeira; Jesús Carretero; José María Pérez; Luis Miguel Sánchez

2005-01-01

230

HLLV avionics requirements study and electronic filing system database development  

NASA Technical Reports Server (NTRS)

This final report provides a summary of achievements and activities performed under Contract NAS8-39215. The contract's objective was to explore a new way of delivering, storing, accessing, and archiving study products and information and to define top level system requirements for Heavy Lift Launch Vehicle (HLLV) avionics that incorporate Vehicle Health Management (VHM). This report includes technical objectives, methods, assumptions, recommendations, sample data, and issues as specified by DPD No. 772, DR-3. The report is organized into two major subsections, one specific to each of the two tasks defined in the Statement of Work: the Index Database Task and the HLLV Avionics Requirements Task. The Index Database Task resulted in the selection and modification of a commercial database software tool to contain the data developed during the HLLV Avionics Requirements Task. All summary information is addressed within each task's section.

1994-01-01

231

A Next-Generation Parallel File System Environment for the OLCF  

SciTech Connect

When deployed in 2008/2009 the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid 2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth from over 240 GB/sec today to 1 TB/sec and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.

Dillow, David A [ORNL; Fuller, Douglas [ORNL; Gunasekaran, Raghul [ORNL; Kim, Youngjae [ORNL; Oral, H Sarp [ORNL; Reitz, Doug M [ORNL; Simmons, James A [ORNL; Wang, Feiyi [ORNL; Shipman, Galen M [ORNL; Hill, Jason J [ORNL

2012-01-01

232

LLNL's Parallel I/O Testing Tools and Techniques for ASC Parallel File Systems.  

National Technical Information Service (NTIS)

Livermore Computing is an early and aggressive adopter of parallel file systems including, for example, GPFS from IBM and Lustre for our present Linux systems. As such, we have acquired more than our share of battle scars from encountering bugs in 'bleedi...

W. E. Loewe R. M. Hedges T. T. McLarty C. J. Morrone

2004-01-01

233

Extending the POSIX I/O interface: a parallel file system perspective.  

SciTech Connect

The POSIX interface does not lend itself well to enabling good performance for high-end applications. Extensions are needed in the POSIX I/O interface so that high-concurrency HPC applications running on top of parallel file systems perform well. This paper presents the rationale, design, and evaluation of a reference implementation of a subset of the POSIX I/O interfaces on a widely used parallel file system (PVFS) on clusters. Experimental results on a set of micro-benchmarks confirm that the extensions to the POSIX interface greatly improve scalability and performance.

Vilayannur, M.; Lang, S.; Ross, R.; Klundt, R.; Ward, L.; Mathematics and Computer Science; VMWare, Inc.; SNL

2008-12-11

234

FlexVol: Flexible, Efficient File Volume Virtualization in WAFL  

Microsoft Academic Search

Virtualization is a well-known method of abstracting physical resources and of separating the manipulation and use of logical resources from their underlying implementation. We have used this technique to virtualize file volumes in the WAFL® file system, adding a level of indirection between client-visible volumes and the underlying physical storage. The resulting virtual file volumes, or

John K. Edwards; Daniel Ellard; Craig Everhart; Robert Fair; Eric Hamilton; Andy Kahn; Arkady Kanevsky; James Lentini; Ashish Prakash; Keith A. Smith; Edward R. Zayas

2008-01-01

235

A convertor and user interface to import CAD files into worldtoolkit virtual reality systems  

NASA Technical Reports Server (NTRS)

Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate the objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts the files to a maximum of 1020 objects and does not support advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C-language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C-language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

Wang, Peter Hor-Ching

1996-01-01

236

Provisioning of virtual environments for wide area desktop grids through redirect-on-write distributed file system  

Microsoft Academic Search

We describe and evaluate a thin-client solution for desktop grid computing based on virtual machine appliances whose images are fetched on demand and on a per-block basis over wide-area networks. The approach uses a distributed file system redirection mechanism which enables the use of unmodified NFS clients/servers and local buffering of file system modifications during the appliance's lifetime. The file
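A minimal sketch of a redirect-on-write block store in the spirit of the approach described above, assuming a simple in-memory model: reads fall through to an immutable base appliance image fetched on demand, while writes are buffered in a local overlay for the appliance's lifetime. Names and structure are illustrative, not the paper's implementation.

```python
# Redirect-on-write sketch: the base image is read-only and fetched on demand
# (here just a dict); writes never touch it and are kept in a local overlay.
class RedirectOnWriteStore:
    def __init__(self, base_image):
        self.base = base_image   # block number -> bytes, immutable
        self.overlay = {}        # local modifications, per appliance instance

    def read_block(self, blkno):
        if blkno in self.overlay:          # redirected: locally modified block
            return self.overlay[blkno]
        return self.base.get(blkno, b"\x00" * 4096)   # fetch-on-demand read

    def write_block(self, blkno, data):
        self.overlay[blkno] = data         # never propagated to the base image

base = {0: b"boot block", 1: b"kernel"}
store = RedirectOnWriteStore(base)
store.write_block(1, b"patched kernel")
print(store.read_block(0), store.read_block(1))
```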

Vineet Chadha; David Wolinsky; Renato J. Figueiredo

2008-01-01

237

Permit Compliance System (PCS) facility address and permit file EPA region 9 (AZ, CA, HI, NV, American Samoa, Guam) (for microcomputers). Data file  

SciTech Connect

The Permit Compliance System (PCS) is an EPA national computerized management information system that records water-discharge permit data on more than 64,000 wastewater treatment facilities nationwide. This system automates entry, updating, and retrieval of National Pollutant Discharge Elimination System (NPDES) data and tracks permit issuance, permit limits, monitoring data, and other data pertaining to facilities regulated under NPDES. There are approximately 49,000 industrial facilities and 15,000 municipal facilities regulated by NPDES. The Enforcement Action File contains information regarding actions taken in the most recent 2-year period, in response to violations of effluent parameters limits, non-receipt of Discharge Monitoring Reports (DMRs) or Compliance Schedule reports, or failure to complete Compliance Schedule milestones for all active permitted facilities. Enforcement action data include the events in violation and the dates of occurrence, the type of enforcement action(s), and the dates they were taken, and the current status of each action. This data is updated twice a year.

NONE

1996-06-01

238

The global unified parallel file system (GUPFS) project: FY 2003 activities and results  

SciTech Connect

The Global Unified Parallel File System (GUPFS) project is a multiple-phase project at the National Energy Research Scientific Computing (NERSC) Center whose goal is to provide a scalable, high-performance, high-bandwidth, shared file system for all of the NERSC production computing and support systems. The primary purpose of the GUPFS project is to make the scientific users more productive as they conduct advanced scientific research at NERSC by simplifying the scientists' data management tasks and maximizing storage and data availability. This is to be accomplished through the use of a shared file system providing a unified file namespace, operating on consolidated shared storage that is accessible by all the NERSC production computing and support systems. In order to successfully deploy a scalable high-performance shared file system with consolidated disk storage, three major emerging technologies must be brought together: (1) shared/cluster file systems software, (2) cost-effective, high-performance storage area network (SAN) fabrics, and (3) high-performance storage devices. Although they are evolving rapidly, these emerging technologies individually are not targeted towards the needs of scientific high-performance computing (HPC). The GUPFS project is in the process of assessing these emerging technologies to determine the best combination of solutions for a center-wide shared file system, to encourage the development of these technologies in directions needed for HPC, particularly at NERSC, and to then put them into service. With the development of an evaluation methodology and benchmark suites, and with the updating of the GUPFS testbed system, the project did a substantial number of investigations and evaluations during FY 2003. The investigations and evaluations involved many vendors and products. From our evaluation of these products, we have found that most vendors and many of the products are more focused on the commercial market. Most vendors lack the understanding of, or do not have the resources to pay enough attention to, the needs of high-performance computing environments such as NERSC.

Butler, Gregory F.; Baird, William P.; Lee, Rei C.; Tull, Craig E.; Welcome, Michael L.; Whitney, Cary L.

2004-04-30

239

Design and Analysis of a Mobile File Sharing System for Opportunistic Networks  

Microsoft Academic Search

Opportunistic networks are characterized by intermittent connectivity among mobile devices that occurs during their opportunistic contacts. With the increasing number of capable wireless devices and hence increasing potential formation of opportunistic networks, enabling applications over opportunistic networks has become critical. In this paper, we design and analyze a mobile file sharing system over opportunistic networks using Bluetooth technology.

Shanshan Lu; Gautam Chavan; Yanliang Liu; Yonghe Liu

2011-01-01

240

Toward Data Confidentiality via Integrating Hybrid Encryption Schemes and Hadoop Distributed File System  

Microsoft Academic Search

With the increasing popularity of cloud computing, Hadoop has become a widely used open source cloud computing framework for large scale data processing. However, few studies have been done to enhance data confidentiality of Hadoop against storage servers. In this paper, we address the data confidentiality issue by integrating hybrid encryption schemes and the Hadoop distributed file system (HDFS). We
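A minimal sketch of the hybrid pattern the abstract refers to, assuming the third-party `cryptography` Python package: each block is encrypted with a symmetric data key (AES-GCM), and that key is wrapped with the data owner's RSA public key so the storage servers never see plaintext or usable keys. This illustrates the general scheme only, not the paper's exact HDFS integration.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key pair held by the data owner; only the public key is needed to write.
owner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_block(plaintext: bytes):
    dek = AESGCM.generate_key(bit_length=256)                  # per-file data key
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, plaintext, None)   # symmetric bulk encryption
    wrapped_dek = owner_key.public_key().encrypt(dek, oaep)    # asymmetric key wrapping
    return wrapped_dek, nonce, ciphertext                      # what gets stored

def decrypt_block(wrapped_dek, nonce, ciphertext):
    dek = owner_key.decrypt(wrapped_dek, oaep)
    return AESGCM(dek).decrypt(nonce, ciphertext, None)

stored = encrypt_block(b"sensitive HDFS block contents")
print(decrypt_block(*stored))
```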

Hsiao-Ying Lin; Shiuan-Tzuo Shen; Wen-Guey Tzeng; Bao-Shuh P. Lin

2012-01-01

241

Personal Computer Program to Transfer MultiMate Files to Automated Office System Documents.  

National Technical Information Service (NTIS)

A personal computer/disk operating system program, MultiMate to DEC WPS (MMWPS), was developed to read a MultiMate DOC file, interpret it into an ASCII text stream, then type it out the serial port at the baud rate and other serial port parameters specifi...

J. N. Kern

1989-01-01

242

Functionality and Performance Evaluation of File Systems for Storage Area Networks (SAN)  

Microsoft Academic Search

The demand for consolidated, widely accessible data stores continues to escalate. With the volume of data being retained mounting as well, a variety of markets are recognizing the advantage of shared data in terms of both cost and performance. Traditionally, common access has been addressed with network-attached fileservers employing data sharing protocols such as the Network File System (NFS). A

Martha Bancroft; Nick Bear; Jim Finlayson; Robert Hill; Richard Isicoff; Hoot Thompson

2000-01-01

243

National Center for Computational Sciences, Ceph Parallel File Systems Evaluation Report.  

National Technical Information Service (NTIS)

The National Center for Computational Sciences (NCCS), in collaboration with Inktank Inc., prepared this performance and scalability study of the Ceph file system. Ceph originated from Sage Weil's PhD research at UC Santa Cruz around 2007 and it was designed to be ...

B. Caldwell B. Settlemyer D. Fuller F. Wang J. Hill J. Simmons M. Nelson S. Atchley S. Oral S. Weil

2013-01-01

244

Model Checking Cache Coherence Protocols for Distributed File Systems.  

National Technical Information Service (NTIS)

Debugging complex software systems is a major problem. Proving properties of software systems can be thought of as a debugging tool. If a system S must satisfy property P but we can prove that it does not, then S has bugs in it. On the other hand, if S is...

M. Vaziri-Farahani

1995-01-01

245

Secure capabilities for a petabyte-scale object-based distributed file system  

Microsoft Academic Search

Recently, the Network-Attached Secure Disk (NASD) model has become a more widely used technique for constructing large-scale storage systems. However, the security system proposed for NASD assumes that each client will contact the server to get a capability to access one object on a server. While this approach works well in smaller-scale systems in which each file is composed of
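A hedged sketch of the capability idea behind NASD-style object storage: the file manager mints an HMAC-protected token naming an object and the permitted rights, and the storage device verifies it locally without contacting the file manager again. Field names and token layout are illustrative, not the NASD wire format.

```python
import hmac, hashlib, time

SECRET = b"shared file-manager / storage-device key"   # assumed pre-shared secret

def mint_capability(object_id: str, rights: str, ttl: int = 300):
    """File manager side: issue a capability the client carries to the device."""
    expiry = int(time.time()) + ttl
    body = f"{object_id}|{rights}|{expiry}".encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body, tag

def verify_capability(body: bytes, tag: str, object_id: str, op: str) -> bool:
    """Storage device side: check the token without contacting the file manager."""
    if not hmac.compare_digest(tag, hmac.new(SECRET, body, hashlib.sha256).hexdigest()):
        return False                      # forged or altered capability
    obj, rights, expiry = body.decode().split("|")
    return obj == object_id and op in rights and time.time() < float(expiry)

cap = mint_capability("obj-42", rights="r")
print(verify_capability(*cap, object_id="obj-42", op="r"))   # True
print(verify_capability(*cap, object_id="obj-42", op="w"))   # False: no write right
```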

Christopher Olson; Ethan L. Miller

2005-01-01

246

Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System Using Shapefiles and DGM Files  

NASA Technical Reports Server (NTRS)

Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or Denver AWIPS Risk Reduction and Requirements Evaluation (DARE) Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU) located at Cape Canaveral Air Force Station (CCAFS), Florida. The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) at Johnson Space Center, Texas and 45th Weather Squadron (45 WS) at CCAFS to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. The presentation will list the advantages and disadvantages of both file types for creating interactive graphical overlays in future AWIPS applications. Shapefiles are a popular format used extensively in Geographical Information Systems. They are usually used in AWIPS to depict static map backgrounds. A shapefile stores the geometry and attribute information of spatial features in a dataset (ESRI 1998). Shapefiles can contain point, line, and polygon features. Each shapefile contains a main file, index file, and a dBASE table. The main file contains a record for each spatial feature, which describes the feature with a list of its vertices. The index file contains the offset of each record from the beginning of the main file. The dBASE table contains records for each attribute. Attributes are commonly used to label spatial features. Shapefiles can be viewed, but not created in AWIPS. As a result, either third-party software can be installed on an AWIPS workstation, or new software must be written to create shapefiles in the correct format.
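For readers unfamiliar with programmatic shapefile access of the kind such tools rely on, the sketch below reads the geometry and dBASE attributes of a hypothetical anvil-corridor shapefile, assuming the third-party `pyshp` package; the AMU tools themselves are not reproduced here.

```python
import shapefile   # pyshp: reads the main (.shp), index (.shx) and dBASE (.dbf) parts

# Hypothetical corridor shapefile; the name and fields are illustrative only.
reader = shapefile.Reader("anvil_corridor")
print(reader.shapeTypeName, len(reader))          # e.g. POLYGON and the record count

field_names = [f[0] for f in reader.fields[1:]]   # skip the dBASE deletion-flag field
for shape, record in zip(reader.shapes(), reader.records()):
    # Each shape carries its vertex list; the dBASE record carries attributes.
    attrs = dict(zip(field_names, record))
    print(attrs, shape.bbox)
```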

Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

2007-01-01

247

StegFS: A Steganographic File System for Linux  

Microsoft Academic Search

Cryptographic file systems provide little protection against legal or illegal instruments that force the owner of data to release decryption keys for stored data once the presence of encrypted data on an inspected computer has been established. We are interested in how cryptographic file systems can be extended to provide additional protection for such a scenario and we

Andrew D. Mcdonald; Markus G. Kuhn

1999-01-01

248

Implementing Journaling in a Linux Shared Disk File System  

Microsoft Academic Search

In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher-performance computer system implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe

Kenneth W. Preslan; Andrew P. Barry; Jonathan Brassow; Russell Cattelan; Adam Manthei; Erling Nygaard; Seth Van Oort; David Teigland; Mike Tilstra; Matthew T. O'keefe; Grant Erickson; Manish Agarwal

2000-01-01

249

Anesthetic cartridge system under evaluation.  

PubMed

The problem of glass breakage in the local anesthetic cartridge system was evaluated under laboratory conditions with a mechanical testing machine. The anticipated breakage of the glass did not occur with any frequency, as the rubber stopper produced more uniform failures of the system. The glass cartridge appeared to be quite reliable and resistant to breakage. Local anesthetics have been used for many years to provide patients temporary freedom from pain. Local anesthetic solutions are in wide use in both dentistry and medicine and are the most frequently used drugs in dentistry. Various estimates place the number of injections at approximately one half million daily or 125 million injections per year. These drugs and the armamentarium necessary to administer them have proven to be safe and reliable. Only rarely are there reports of sensitivity to the anesthetic solution or breakage of needles. Sterility of the solutions has not been a problem as they are carefully processed and evaluated at the factory. Although there are sporadic reports of loss of sterility, this has been attributed to the reuse of the anesthetic cartridges on more than one patient. Monheim states "The success of the cartridge system in dentistry has been due to the sincerity, honesty, and high standards of the manufacturers in giving the profession a near-perfect product." However, on occasion a glass cartridge will break or shatter when inserting the harpoon into the rubber stopper or even during injection. Cooley et al reported on eye injuries occurring in the dental office, one of which was due to glass from a local anesthetic cartridge that exploded and propelled particles into the patient's eye. Forrest evaluated syringes, needles, and cartridges and reported that one brand (made in Britain) fractured more often than any other, but that the fracture rate was too low to be of any consequence. It is apparent that glass cartridges will fracture or burst from time to time. This study evaluates the cartridge system with carefully controlled laboratory procedures. The cartridges were tested under various pressures and conditions in an attempt to determine the causes of failure and when such failure may be anticipated. PMID:6939350

Cooley, R L; Lubow, R M

1981-01-01

250

Anesthetic Cartridge System Under Evaluation  

PubMed Central

The problem of glass breakage in the local anesthetic cartridge system was evaluated under laboratory conditions with a mechanical testing machine. The anticipated breakage of the glass did not occur with any frequency, as the rubber stopper produced more uniform failures of the system. The glass cartridge appeared to be quite reliable and resistant to breakage. Local anesthetics have been used for many years to provide patients temporary freedom from pain. Local anesthetic solutions are in wide use in both dentistry and medicine and are the most frequently used drugs in dentistry. Various estimates place the number of injections at approximately one half million daily or 125 million injections per year. These drugs and the armamentarium necessary to administer them have proven to be safe and reliable. Only rarely are there reports of sensitivity to the anesthetic solution or breakage of needles. Sterility of the solutions has not been a problem as they are carefully processed and evaluated at the factory. Although there are sporadic reports of loss of sterility, this has been attributed to the reuse of the anesthetic cartridges on more than one patient. Monheim states “The success of the cartridge system in dentistry has been due to the sincerity, honesty, and high standards of the manufacturers in giving the profession a near-perfect product.” However, on occasion a glass cartridge will break or shatter when inserting the harpoon into the rubber stopper or even during injection. Cooley et al reported on eye injuries occurring in the dental office, one of which was due to glass from a local anesthetic cartridge that exploded and propelled particles into the patient's eye. Forrest evaluated syringes, needles, and cartridges and reported that one brand (made in Britain) fractured more often than any other, but that the fracture rate was too low to be of any consequence. It is apparent that glass cartridges will fracture or burst from time to time. This study evaluates the cartridge system with carefully controlled laboratory procedures. The cartridges were tested under various pressures and conditions in an attempt to determine the causes of failure and when such failure may be anticipated.

Cooley, Robert L.; Lubow, Richard M.

1981-01-01

251

Configuration Management File Manager Developed for Numerical Propulsion System Simulation.  

National Technical Information Service (NTIS)

One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuratio...

G. J. Follen

1997-01-01

252

Endodontic treatment of mandibular molar with root dilaceration using Reciproc single-file system  

PubMed Central

Biomechanical preparation of root canals with accentuated curvature is challenging. New rotatory systems, such as Reciproc, require a shorter period of time to prepare curved canals, and became a viable alternative for endodontic treatment of teeth with root dilaceration. Thus, this study aimed to report a clinical case of endodontic therapy of a root with accentuated dilaceration using the Reciproc single-file system. The mandibular right second molar was diagnosed with asymptomatic irreversible pulpitis. Pulp chamber access was performed, and a glide path was created with a #10 K-file (Dentsply Maillefer) and PathFile #13, #16 and #19 (Dentsply Maillefer) up to the temporary working length. The working length measured corresponded to 20 mm in the mesio-buccal and mesio-lingual canals, and 22 mm in the distal canal. The R25 file (VDW GmbH) was used in all the canals for instrumentation and final preparation, followed by filling with Reciproc gutta-percha cones (VDW GmbH) and AH Plus sealer (Dentsply Maillefer), using the thermal compaction technique. The case has been receiving follow-up for 6 months and no painful symptomatology or periapical lesions have been found. Despite the difficulties, the treatment could be performed in a shorter period of time than with conventional methods.

Meireles, Daniely Amorin; Bastos, Mariana Mena Barreto; Marques, Andre Augusto Franco; Sponchiado, Emilio Carlos

2013-01-01

253

Automated Discovery of Patient-Specific Clinician Information Needs Using Clinical Information System Log Files  

PubMed Central

Knowledge about users and their information needs can contribute to better user interface design and organization of information in clinical information systems. This can lead to quicker access to desired information, which may facilitate the decision-making process. Qualitative methods such as interviews, observations and surveys have been commonly used to gain an understanding of clinician information needs. We introduce clinical information system (CIS) log analysis as a method for identifying patient-specific information needs and CIS log mining as an automated technique for discovering such needs in CIS log files. We have applied this method to WebCIS (Web-based Clinical Information System) log files to discover patterns of usage. The results can be used to guide design and development of relevant clinical information systems. This paper discusses the motivation behind the development of this method, describes CIS log analysis and mining, presents preliminary results and summarizes how the results can be applied.

Chen, Elizabeth S.; Cimino, James J.

2003-01-01

254

Introduction to BIBELOT: a bibliographic filing and retrieval system  

SciTech Connect

The BIBELOT System of COBOL and Datatrieve programs for bibliographic storage and retrieval is described. The storage scheme is also briefly described. The use of unique citation numbers and user defined keywords is illustrated by many retrieval examples. Finally, typical questions about the use of BIBELOT are answered.

Cochran, M.I.

1984-09-01

255

GAS: Overloading a File Sharing Network as an Anonymizing System  

Microsoft Academic Search

Anonymity is considered a valuable property as far as everyday transactions on the Internet are concerned. Users care about their privacy and seek new ways to keep as much of their personal information as possible secret from third parties. Anonymizing systems exist nowadays that provide users with the technology which is able to hide their origin when

Elias Athanasopoulos; Mema Roussopoulos; Kostas G. Anagnostakis; Evangelos P. Markatos

2007-01-01

256

Performance of PFs, the Compaq Sierra Product's Parallel File System.  

National Technical Information Service (NTIS)

In FY 2000 Livermore Computing took delivery of serial number one of the Compaq Sierra high performance cluster product. The Sierra product employs a derivative of the Tru64 UNIX operating system called Tru-Cluster, which provides a cluster-wide parallel ...

A. C. Uselton

2001-01-01

257

HPC Global File System Performance Analysis Using A Scientific-Application Derived Benchmark  

SciTech Connect

With the exponential growth of high-fidelity sensor and simulated data, the scientific community is increasingly reliant on ultrascale HPC resources to handle its data analysis requirements. However, to use such extreme computing power effectively, the I/O components must be designed in a balanced fashion, as any architectural bottleneck will quickly render the platform intolerably inefficient. To understand I/O performance of data-intensive applications in realistic computational settings, we develop a lightweight, portable benchmark called MADbench2, which is derived directly from a large-scale Cosmic Microwave Background (CMB) data analysis package. Our study represents one of the most comprehensive I/O analyses of modern parallel file systems, examining a broad range of system architectures and configurations, including Lustre on the Cray XT3, XT4, and Intel Itanium2 clusters; GPFS on IBM Power5 and AMD Opteron platforms; a BlueGene/P installation using GPFS and PVFS2 file systems; and CXFS on the SGI Altix 3700. We present extensive synchronous I/O performance data comparing a number of key parameters including concurrency, POSIX- versus MPI-IO, and unique- versus shared-file accesses, using both the default environment as well as highly tuned I/O parameters. Finally, we explore the potential of asynchronous I/O and show that only two of the nine evaluated systems benefited from MPI-2's asynchronous MPI-IO. On those systems, experimental results indicate that the computational intensity required to hide I/O effectively is already close to the practical limit of BLAS3 calculations. Overall, our study quantifies vast differences in performance and functionality of parallel file systems across state-of-the-art platforms -- showing I/O rates that vary up to 75x on the examined architectures -- while providing system designers and computational scientists a lightweight tool for conducting further analysis.
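For readers who want to reproduce the shared-file access pattern the study measures, a minimal MPI-IO sketch is given below, assuming the `mpi4py` and `numpy` packages; it is illustrative only and is not MADbench2 itself.

```python
# Run with e.g.: mpiexec -n 4 python shared_write.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes one contiguous chunk of a single shared file
# (the "shared-file" access pattern); unique-file access would instead
# open one file per rank.
chunk = np.full(1 << 20, rank, dtype="i4")
fh = MPI.File.Open(comm, "shared_io.dat", MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(rank * chunk.nbytes, chunk)   # collective MPI-IO write
fh.Close()
```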

Borrill, Julian; Oliker, Leonid; Shalf, John; Shan, Hongzhang; Uselton, Andrew

2008-08-28

258

Informatics in Radiology (infoRAD) Vendor-Neutral Case Input into a Server based Digital Teaching File System 1  

Microsoft Academic Search

Although digital teaching files are important to radiology education, there are no current satisfactory solutions for export of Digital Imaging and Communications in Medicine (DICOM) images from picture archiving and communication systems (PACS) in desktop publishing format. A vendor-neutral digital teaching file, the Radiology Interesting Case Server (RadICS), offers an efficient tool for harvesting interesting cases from PACS without

Aaron W. C. Kamauu; Scott L. DuVall; Reid J. Robison; Andrew P. Liimatta; Richard H. Wiggins III; David E. Avrin

2006-01-01

259

A File Allocation Strategy for Energy-Efficient Disk Storage Systems  

SciTech Connect

Exponential data growth is a reality for most enterprise and scientific data centers. Improvements in price/performance and storage densities of disks have made it both easy and affordable to maintain most of the data in large disk storage farms. The provisioning of disk storage farms, however, is at the expense of high energy consumption due to the large number of spinning disks. The power for spinning the disks and the associated cooling costs is a significant fraction of the total power consumption of a typical data center. Given the trend of rising global fuel and energy prices and the high rate of data growth, the challenge is to implement appropriate configurations of large-scale disk storage systems that meet performance requirements for information retrieval across data centers. We present part of the solution to this challenge with an energy-efficient file allocation strategy on a large-scale disk storage system. Given performance characteristics of the disks, and a profile of the workload in terms of frequencies of file requests and their sizes, the basic idea is to allocate files to disks such that the disks can be configured into two sets of active (constantly spinning) and passive (capable of being spun up or down) disk pools. The goal is to minimize the number of active disks subject to I/O performance constraints. We present an algorithm for solving this problem with guaranteed bounds from the optimal solution. Our algorithm runs in O(n) time where n is the number of files allocated. It uses a mapping of our file allocation problem to a generalization of the bin packing problem known as 2-dimensional vector packing. Detailed simulation results are also provided.
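A hedged sketch of the flavor of allocation the abstract describes: each file carries a (bandwidth, capacity) demand vector and is placed onto disks with fixed vector budgets, so rarely accessed files can be concentrated on disks that may later be spun down. This uses a simple first-fit-decreasing heuristic for illustration and is not the paper's bounded-approximation algorithm; the per-disk budgets are assumed values.

```python
# Two-dimensional vector packing, first-fit-decreasing flavor: each file has a
# (bandwidth, capacity) demand; each disk has a (bandwidth, capacity) budget.
DISK_BW, DISK_CAP = 100.0, 1000.0     # assumed per-disk budgets (illustrative)

def allocate(files):
    """files: list of (name, bandwidth_demand, size). Returns one entry per disk."""
    disks = []                        # each entry: [remaining_bw, remaining_cap, names]
    ordered = sorted(files, key=lambda f: max(f[1] / DISK_BW, f[2] / DISK_CAP), reverse=True)
    for name, bw, size in ordered:
        for d in disks:               # first disk with room in both dimensions
            if d[0] >= bw and d[1] >= size:
                d[0] -= bw; d[1] -= size; d[2].append(name)
                break
        else:                         # no existing disk fits: open a new one
            disks.append([DISK_BW - bw, DISK_CAP - size, [name]])
    return disks

layout = allocate([("hot.dat", 60, 200), ("warm.dat", 30, 400),
                   ("cold1.dat", 1, 500), ("cold2.dat", 1, 500)])
for i, (_, _, names) in enumerate(layout):
    print(f"disk {i}: {names}")      # cold files end up co-located on one disk
```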

Otoo, Ekow J.; Rotem, Doron; Pinar, Ali; Tsao, Shi-Chiang

2008-06-27

260

Software installation and condition data distribution via CernVM File System in ATLAS  

NASA Astrophysics Data System (ADS)

The ATLAS collaboration is managing one of the largest collections of software among the High Energy Physics experiments. Traditionally, this software has been distributed via rpm or pacman packages, and has been installed in every site and user's machine, using more space than needed since the releases share common files but are installed in their own trees. As the software has grown in size and number of releases, this approach has shown its limits in terms of manageability, used disk space and performance. The adopted solution is based on the CernVM File System, a FUSE-based, HTTP-transported, read-only filesystem which guarantees file de-duplication, on-demand file transfer with caching, scalability and performance. Here we describe the ATLAS experience in setting up the CVMFS facility and putting it into production, for different types of use cases, ranging from single users' machines up to large data centers, for both software and conditions data. The performance of CernVM-FS, for both software and conditions data access, is shown and compared with that of other filesystems currently in use by the collaboration.
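The de-duplication property mentioned above comes from content addressing; the toy sketch below stores each unique file body once under its hash, so identical files shared by many releases occupy storage only once. It illustrates the idea only and is not the CernVM-FS on-disk format.

```python
import hashlib

class ContentAddressedStore:
    """Toy content-addressed store: identical file bodies across releases are
    kept exactly once, keyed by their SHA-1, in the spirit of CVMFS de-duplication."""
    def __init__(self):
        self.objects = {}                 # digest -> bytes
        self.catalog = {}                 # path -> digest

    def add(self, path, data: bytes):
        digest = hashlib.sha1(data).hexdigest()
        self.objects.setdefault(digest, data)   # stored once, however many paths reference it
        self.catalog[path] = digest

    def read(self, path):
        return self.objects[self.catalog[path]]

store = ContentAddressedStore()
store.add("release-17.2.0/lib/libCore.so", b"identical shared library bytes")
store.add("release-17.2.1/lib/libCore.so", b"identical shared library bytes")
print(len(store.catalog), "paths ->", len(store.objects), "stored object(s)")
```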

De Salvo, A.; De Silva, A.; Benjamin, D.; Blomer, J.; Buncic, P.; Harutyunyan, A.; Undrus, A.; Yao, Y.

2012-12-01

261

Layered Systems Under Shear Flow  

NASA Astrophysics Data System (ADS)

We discuss and review a generalization of the usual hydrodynamic description of smectic A liquid crystals motivated by the experimentally observed shear-induced destabilization and reorientation of smectic-A-like systems. We include both the smectic layering (via the layer displacement u and the layer normal hat{p}) and the director hat{n} of the underlying nematic order in our macroscopic hydrodynamic description and allow both directions to differ in nonequilibrium situations. In a homeotropically aligned sample the nematic director couples to an applied simple shear, whereas the smectic layering stays unchanged. This difference leads to a finite (but usually small) angle between hat{n} and hat{p}, which we find to be equivalent to an effective dilatation of the layers. This effective dilatation leads, above a certain threshold, to an undulation instability of the layers with a wave vector parallel to the vorticity direction of the shear flow. We include the couplings of the velocity field with the order parameters for orientational and positional order and show how the order parameters interact with the undulation instability. We explore the influence of the magnitude of various material parameters on the instability. Comparing our results to available experimental results and molecular dynamics simulations, we find good qualitative agreement for the first instability. In addition, we discuss pathways to higher instabilities leading to the formation of onions (multilamellar vesicles) via cylindrical structures and/or the break-up of layers via large amplitude undulations.

Svenšek, Daniel; Brand, Helmut R.

262

76 FR 46774 - Privacy Act of 1974; System of Records-Federal Student Aid Application File  

Federal Register 2010, 2011, 2012, 2013

...Records--Federal Student Aid Application File AGENCY: Federal Student Aid, Department...Federal Student Financial Aid Application File (18-11-01), 64 Federal Register...term ``Federal Student Aid Application File'' in the subject line of your...

2011-08-03

263

Using a Wide-Area File System Within the World-Wide Web  

Microsoft Academic Search

This paper proposes the use of a wide-area file system for storing and retrieving documents. We demonstrate that most of the functionality of the World-Wide Web (WWW) information service can be provided by storing documents in AFS. The approach addresses several performance problems experienced by WWW servers and clients, such as increased server and network load, network latency and inadequate security. In addition, the mechanism

Mirjana Spasojevic; Mic Bowman; Alfred Spector

1994-01-01

264

Evaluating ParFiSys: A high-performance parallel and distributed file system  

Microsoft Academic Search

We present an overview of ParFiSys, a coherent parallel file system developed at the UPM to provide I/O services to the GPMIMD machine, an MPP built within the ESPRIT project P-5404. Special emphasis is placed on the results obtained during ParFiSys evaluation. They were obtained using several I/O benchmarks (PARKBENCH, IOBENCH, etc.) and several MPP platforms (T800, T9000, etc.) to

Felix Pérez; Jesús Carretero; Francisco García; Pedro De Miguel; L. Alonso

1997-01-01

265

Log-Less Metadata Management on Metadata Server for Parallel File Systems  

PubMed Central

This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve highly available metadata service, as well as better performance in metadata processing. As the client file system backs up certain sent metadata requests in its memory, the overhead for handling these backup requests is much smaller than that incurred by the metadata server when it adopts logging or journaling to yield highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a nonoperational state.
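A minimal sketch of the mechanism as described: the client keeps its acknowledged metadata requests in memory instead of the MDS journaling them to nonvolatile storage, and a recovering MDS rebuilds its state by replaying those client-side backups. Class and method names are illustrative only.

```python
class ClientFS:
    """Client-side backup of sent metadata requests (kept in memory only)."""
    def __init__(self, mds):
        self.mds = mds
        self.backup = []                      # requests already handled by the MDS

    def send(self, request):
        self.mds.apply(request)               # MDS updates in-memory state, no journal write
        self.backup.append(request)           # client keeps the request for recovery

class MDS:
    """Metadata server that keeps state in memory and recovers from client replays."""
    def __init__(self):
        self.namespace = {}

    def apply(self, request):
        op, path = request
        if op == "create":
            self.namespace[path] = {}
        elif op == "unlink":
            self.namespace.pop(path, None)

    def recover(self, clients):
        self.namespace = {}
        for c in clients:                     # replay backed-up requests from all clients
            for request in c.backup:
                self.apply(request)

mds = MDS()
client = ClientFS(mds)
client.send(("create", "/scratch/run1"))
mds.namespace.clear()                         # simulate an MDS crash losing volatile state
mds.recover([client])
print(mds.namespace)                          # {'/scratch/run1': {}}
```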

Xiao, Guoqiang; Peng, Xiaoning

2014-01-01

266

Log-less metadata management on metadata server for parallel file systems.  

PubMed

This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve highly available metadata service, as well as better performance in metadata processing. As the client file system backs up certain sent metadata requests in its memory, the overhead for handling these backup requests is much smaller than that incurred by the metadata server when it adopts logging or journaling to yield highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a nonoperational state. PMID:24892093

Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

2014-01-01

267

Design and Implementation of a Novel P2P-Based VOD System Using Media File Segments Selecting Algorithm  

Microsoft Academic Search

From an analysis of some hard drawbacks faced by server-centric VOD systems today, a novel P2P-based VOD system using a media file segment selecting algorithm (P2P-VOD-FSS) is introduced in this paper. In this infrastructure, we emphasize how to download and consume file segments from multiple peer nodes in real time and in order, to guarantee streaming service; we first give a system model

Zhi Hui Lu; Shi Yong Zhang; Jie Wu; Wei Ming Fu; Yi Ping Zhong

2007-01-01

268

A design for a new catalog manager and associated file management for the Land Analysis System (LAS)  

NASA Technical Reports Server (NTRS)

Due to the large number of different types of files used in an image processing system, a mechanism for file management beyond the bounds of typical operating systems is necessary. The Transportable Applications Executive (TAE) Catalog Manager was written to meet this need. Land Analysis System (LAS) users at the EROS Data Center (EDC) encountered some problems in using the TAE catalog manager, including catalog corruption, networking difficulties, and lack of a reliable tape storage and retrieval capability. These problems, coupled with the complexity of the TAE catalog manager, led to the decision to design a new file management system for LAS, tailored to the needs of the EDC user community. This design effort, which addressed catalog management, label services, associated data management, and enhancements to LAS applications, is described. The new file management design will provide many benefits including improved system integration, increased flexibility, enhanced reliability, enhanced portability, improved performance, and improved maintainability.

Greenhagen, Cheryl

1986-01-01

269

Integrated Risk Information System (IRIS) (for IBM PC/AT microcomputers). Data file  

SciTech Connect

The Integrated Risk Information System (IRIS), an on-line database of chemical-specific risk information, provides information on how chemicals affect human health and is a primary source of EPA risk assessment information on chemicals of environmental concern. It is intended to serve as a guide for the hazard identification and dose-response assessment steps of EPA risk assessments. IRIS makes chemical-specific risk information readily available to those who must perform risk assessments and also increases consistency in risk management decisions. The principal section of IRIS is the chemical files. The chemical files contain: oral and inhalation reference doses for noncarcinogens; oral and inhalation carcinogen assessments; summarized Drinking Water Health Advisories; summaries of selected EPA regulations; and supplementary data (for example, acute toxicity information and physical-chemical properties). The two primary types of health assessment information in IRIS are reference doses and carcinogen assessments.

Not Available

1990-04-01

270

Integrated Risk Information System (IRIS) (for IBM PC microcomputers). Data file  

SciTech Connect

The Integrated Risk Information System (IRIS), an on-line database of chemical-specific risk information, provides information on how chemicals affect human health and is a primary source of EPA risk assessment information on chemicals of environmental concern. It is intended to serve as a guide for the hazard identification and dose-response assessment steps of EPA risk assessments. IRIS makes chemical-specific risk information readily available to those who must perform risk assessments and also increases consistency in risk management decisions. The principal section of IRIS is the chemical files. The chemical files contain: oral and inhalation reference doses for noncarcinogens; oral and inhalation carcinogen assessments; summarized Drinking Water Health Advisories; summaries of selected EPA regulations; and supplementary data (for example, acute toxicity information and physical-chemical properties). The two primary types of health assessment information in IRIS are reference doses and carcinogen assessments.

Not Available

1990-04-01

271

Computer printing and filing of microbiology reports. 1. Description of the system.  

PubMed

From March 1974 all reports from this microbiology department have been computer printed and filed. The system was designed to include every medically important microorganism and test. Technicians at the laboratory bench made their results computer-readable using Port-a-punch cards, and specimen details were recorded on paper-tape, allowing the full description of each specimen to appear on the report. A summary form of each microbiology phrase enabled copies of reports to be printed on wide paper with 12 to 18 reports per sheet; such copies, in alphabetical order for one day, and cumulatively for one week, were used by staff answering enquiries to the office. This format could also be used for printing all the reports for one patient. Retrieval of results from the files was easily performed and was useful to medical and laboratory staff and for control-of-infection purposes. The system was written in COBOL and was designed to be as cost-effective as possible without sacrificing accuracy; the cost of a report and its filing was 17-97 pence. PMID:939809

Goodwin, C S; Smith, B C

1976-06-01

272

76 FR 4001 - Foreign Trade Regulations (FTR): Mandatory Automated Export System Filing for All Shipments...  

Federal Register 2010, 2011, 2012, 2013

...filing program by changing the filing time frame from ten (10) calendar days to five...addition, the Postdeparture filing time frame is changed from ten (10) calendar days...addition, the postdeparture filing time frame has changed from ten (10) calendar...

2011-01-21

273

NASA Test File  

NASA Technical Reports Server (NTRS)

Test File is data file containing computer-aided design (CAD) data formatted according to National Bureau of Standards Initial Graphic Exchange Specification (IGES). File created for purpose of conducting NASA tests to determine to what extent dissimilar CAD systems exchange data using the IGES standard formats and IGES translators.

Gordon, S.

1986-01-01

274

Text File Comparator  

NASA Technical Reports Server (NTRS)

File Comparator program IFCOMP is text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.

Kotler, R. S.

1983-01-01

275

Cartographic Boundary Files  

NSDL National Science Digital Library

The Cartographic Boundary Files Web site from the US Census Bureau contains "generalized extracts from the Census Bureau's TIGER geographic database for use in a Geographic Information System (GIS) or similar mapping systems." The files are mainly from the 2000 census and contain such things as Congressional Districts, School Districts, Urbanized Areas, and more. The Descriptions and Metadata link gives users an idea of what is contained in each file before downloading, and the Download Boundary Files link lists each file that can then be downloaded, all available in several formats.

2001-01-01

276

Statistical Disk Cluster Classification for File Carving  

Microsoft Academic Search

File carving is the process of recovering files from a disk without the help of a file system. In forensics, it is a helpful tool in finding hidden or recently removed disk content. Known signatures in file headers and footers are especially useful in carving such files out, that is, from header until footer. However, this approach assumes that file
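A minimal header-to-footer carving sketch for one well-known signature pair (the JPEG FF D8 FF start and FF D9 end markers), which is the baseline approach the abstract contrasts with statistical cluster classification; real carvers must also handle fragmentation, false positives, and many more formats. The disk image name is hypothetical.

```python
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(image_path):
    """Scan a raw disk image and return (offset, data) for header..footer runs."""
    with open(image_path, "rb") as f:
        raw = f.read()
    carved, pos = [], 0
    while True:
        start = raw.find(JPEG_HEADER, pos)
        if start == -1:
            break
        end = raw.find(JPEG_FOOTER, start)
        if end == -1:
            break
        carved.append((start, raw[start:end + len(JPEG_FOOTER)]))
        pos = end + len(JPEG_FOOTER)
    return carved

# Usage (hypothetical image file name):
# for offset, data in carve_jpegs("disk.img"):
#     print(f"possible JPEG at offset {offset}, {len(data)} bytes")
```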

Cor J. Veenman

2007-01-01

277

Evaluation of clinical data in childhood asthma. Application of a computer file system  

SciTech Connect

A computer file system was used in our pediatric allergy clinic to assess the value of chest roentgenograms and hemoglobin determinations used in the examination of patients and to correlate exposure to pets and forced hot air with the severity of asthma. Among 889 children with asthma, 20.7% had abnormal chest roentgenographic findings, excluding hyperinflation and peribronchial thickening, and 0.7% had abnormal hemoglobin values. Environmental exposure to pets or forced hot air was not associated with increased severity of asthma, as assessed by five measures of outcome: number of medications administered, requirement for corticosteroids, frequency of clinic visits, frequency of emergency room visits, and frequency of hospitalizations.

Fife, D.; Twarog, F.J.; Geha, R.S.

1983-10-01

278

Product Pricing Behaviour Under Different Costing Systems  

Microsoft Academic Search

Product pricing under ex ante imperfect marginal cost information is examined. The major findings indicate that under conditions of increasing (decreasing) average total (variable) cost, absorption (variable) costing results in outcomes closer to economic optimum. In addition, cost accounting systems interact with price elasticity of demand, resulting in greater (lesser) decision deviations from economic optimum under absorption costing when costs

Dennis P. Tishlias; Peter Chalos

1988-01-01

279

File under Fleeting  

ERIC Educational Resources Information Center

Archives have always had an aura of neutrality and coolness that masks the heat behind the data they record: births, marriages, crimes, wars, business dealings, genocides and deaths. Long thought of as the musty haunts of scholars with a specialized interest in the demographics of Rome or 15th-century France, archives have been seen as controlled…

Torgovnick, Marianna

2008-01-01

280

NAFFS: network attached flash file system for cloud storage on portable consumer electronics  

NASA Astrophysics Data System (ADS)

Cloud storage technology has become a research hotspot in recent years, while the existing cloud storage services are mainly designed for data storage needs with a stable, high-speed Internet connection. Mobile Internet connections are often unstable and their speed is relatively low. These native features of the mobile Internet limit the use of cloud storage in portable consumer electronics. The Network Attached Flash File System (NAFFS) presents the idea of taking the portable device's built-in NAND flash memory as the front-end cache of a virtualized cloud storage device. Modern portable devices with an Internet connection have more than 1 GB of built-in NAND flash, which is quite enough for daily data storage. The data transfer rate of a NAND flash device is much higher than that of mobile Internet connections [1], and its non-volatile feature makes it very suitable as the cache device for Internet cloud storage on portable devices, which often have an unstable power supply and intermittent Internet connection. In the present work, NAFFS is evaluated with several benchmarks, and its performance is compared with traditional network-attached file systems, such as NFS. Our evaluation results indicate that NAFFS achieves an average access speed of 3.38 MB/s, which is about 3 times faster than directly accessing cloud storage over a mobile Internet connection, and offers a more stable interface than directly using a cloud storage API. An unstable Internet connection and sudden power-off conditions are tolerable, and no data in the cache will be lost in such situations.
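A minimal sketch of the caching pattern described above: reads are served from a local flash cache when possible and fetched on demand otherwise, while writes are absorbed locally and flushed to the cloud back end only when a connection is available. The class and method names are illustrative, not the NAFFS implementation.

```python
class FlashCachedCloudFile:
    """Toy block-level front end: a local NAND cache over a remote cloud store."""
    def __init__(self, cloud):
        self.cloud = cloud          # block number -> bytes (remote, slow, may be unreachable)
        self.cache = {}             # local NAND cache of clean and dirty blocks
        self.dirty = set()          # blocks written while offline or not yet flushed

    def read(self, blkno):
        if blkno not in self.cache:            # miss: fetch on demand over the network
            self.cache[blkno] = self.cloud[blkno]
        return self.cache[blkno]

    def write(self, blkno, data):
        self.cache[blkno] = data               # absorb the write locally first
        self.dirty.add(blkno)

    def flush(self):
        """Called when the mobile connection is up again."""
        for blkno in sorted(self.dirty):
            self.cloud[blkno] = self.cache[blkno]
        self.dirty.clear()

cloud = {0: b"remote block 0"}
fs = FlashCachedCloudFile(cloud)
fs.write(0, b"edited offline")                 # works with no connectivity
fs.flush()                                     # synchronize once online
print(cloud[0])
```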

Han, Lin; Huang, Hao; Xie, Changsheng

281

Formalizing structured file services for the data storage and retrieval subsystem of the data management system for Spacestation Freedom  

NASA Technical Reports Server (NTRS)

A brief example of the use of formal methods techniques in the specification of a software system is presented. The report is part of a larger effort targeted at defining a formal methods pilot project for NASA. One possible application domain that may be used to demonstrate the effective use of formal methods techniques within the NASA environment is presented. It is not intended to provide a tutorial on either formal methods techniques or the application being addressed. It should, however, provide an indication that the application being considered is suitable for a formal methods treatment by showing how such a task may be started. The particular system being addressed is the Structured File Services (SFS), which is a part of the Data Storage and Retrieval Subsystem (DSAR), which in turn is part of the Data Management System (DMS) onboard Spacestation Freedom. This is a software system that is currently under development for NASA. An informal mathematical development is presented. Section 3 contains the same development using Penelope (23), an Ada specification and verification system. The complete text of the English-version Software Requirements Specification (SRS) is reproduced in Appendix A.

Jamsek, Damir A.

1993-01-01

282

Integrated Risk Information System (IRIS) (for IBM PC/AT microcomputers). Data file  

SciTech Connect

The Integrated Risk Information System (IRIS), an on-line database of chemical-specific risk information, was made available outside EPA. IRIS provides information on how chemicals affect human health and is a primary source of EPA risk-assessment information on chemicals of environmental concern. The principal section of IRIS is the chemical files. The chemical files contain: oral and inhalation reference doses for noncarcinogens; oral and inhalation carcinogen assessments; summarized Drinking Water Health Advisories; summaries of selected EPA regulations; supplementary data (for example, acute-toxicity information and physical-chemical properties). The two primary types of health-assessment information in IRIS are reference doses and carcinogen assessments. Reference doses are estimated human chemical exposures over a lifetime which are just below the expected threshold for adverse health effects. Because exposure assessment pertains to exposure at a particular place, IRIS cannot provide situational information on exposure. IRIS, can be used with an exposure assessment to characterize the risk of chemical exposure. This risk characterization can be used to decide what must be done to protect human health. Oral reference doses (RfD) are provided for most of the chemicals in IRIS and carcinogen slope factors are provided for some. Inhalation reference doses are not yet available in IRIS. Inhalation reference doses will be added after the Agency produces a methodology for developing these RfDs. For more information on IRIS call IRIS User Support at (513) 569-7254 or FTS 684-7254.

Picardi, R.; Swartout, J.

1988-05-31

283

GENPRO: automatic generation of Prolog clause files for knowledge-based systems in the biomedical sciences.  

PubMed

With the increasing interest in using knowledge-based approaches for protein structure prediction and modelling, there is a requirement for general techniques to convert molecular biological data into structures that can be interpreted by artificial intelligence programming languages (e.g. Prolog). We describe here an interactive program that generates files in Prolog clausal form from the most commonly distributed protein structural data collections. The program is flexible and enables a variety of clause structures to be defined by the user through a general schema definition system. Our method can be extended to include other types of molecular biological database or those containing non-structural information, thus providing a uniform framework for handling the increasing volume of data available to knowledge-based systems in biomedicine. PMID:2702815
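A small illustration of the kind of conversion GENPRO performs, written here in Python rather than as the original interactive program: tabular residue records are emitted as Prolog facts under a user-defined clause schema. The record fields and predicate name are hypothetical.

```python
# Hypothetical secondary-structure records: (protein id, residue number, amino acid, state)
records = [
    ("1crn", 1, "thr", "coil"),
    ("1crn", 2, "thr", "sheet"),
    ("1crn", 3, "cys", "sheet"),
]

# User-defined schema: predicate name and the intended order of its arguments.
schema = ("residue", ["protein", "position", "amino_acid", "secondary_structure"])

def to_prolog(records, schema):
    predicate, _fields = schema
    lines = []
    for protein, pos, aa, state in records:
        # Protein ids start with a digit, so they are emitted as quoted atoms.
        lines.append(f"{predicate}('{protein}', {pos}, {aa}, {state}).")
    return "\n".join(lines)

with open("residues.pl", "w") as f:
    f.write(to_prolog(records, schema) + "\n")
# Produces clauses such as:  residue('1crn', 1, thr, coil).
```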

Saldanha, J; Eccles, J R

1989-03-01

284

Issues in Transparent File Access.  

National Technical Information Service (NTIS)

For a computer system attached to a network, the network provides connectivity to many other systems whose file systems may be very different from the local file system. However, there is no standard way for an application to 'transparently' access files ...

K. Olsen J. Barkley

1991-01-01

285

75 FR 6728 - The Merit Systems Protection Board (MSPB) is Providing Notice of the Opportunity to File Amicus...  

Federal Register 2010, 2011, 2012, 2013

...SYSTEMS PROTECTION BOARD The Merit Systems Protection Board (MSPB) is Providing Notice of the Opportunity to File Amicus Briefs in the...DC-0752-09-0033-R-1, 2009 MSPB 233. Although the Crumpler case is now settled, the legal issue raised in that matter and...

2010-02-10

286

Permanent-File-Validation Utility Computer Program  

NASA Technical Reports Server (NTRS)

Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.

Derry, Stephen D.

1988-01-01

287

75 FR 54867 - Combined Notice of Filings No. 3  

Federal Register 2010, 2011, 2012, 2013

...Applicants: Dominion Transmission, Inc. Description: Dominion Transmission, Inc. submits tariff filing per 154.203: DTI--Volume No. 1B Baseline Compliance Filing, to be effective 8/31/2010 under RP10-779. Filing Type: 580. Filed...

2010-09-09

288

37 CFR 201.34 - Procedures for filing Correction Notices of Intent to Enforce a Copyright Restored under the...  

Code of Federal Regulations, 2010 CFR

...NIEs and Registrations, PO Box 70400, Washington, DC 20024, USA. (3) A Correction NIE shall contain the following information...Records" and/or "URAA, GATT Amends U.S. law." Images of the complete Correction NIEs as filed will be stored on...

2010-07-01

289

37 CFR 201.34 - Procedures for filing Correction Notices of Intent to Enforce a Copyright Restored under the...  

Code of Federal Regulations, 2010 CFR

...NIEs and Registrations, PO Box 70400, Washington, DC 20024, USA. (3) A Correction NIE shall contain the following information...Records" and/or "URAA, GATT Amends U.S. law." Images of the complete Correction NIEs as filed will be stored on...

2009-07-01

290

26 CFR 157.6081-1 - Automatic extension of time for filing a return due under chapter 55.  

Code of Federal Regulations, 2013 CFR

...a return on Form 8876, "Excise Tax on Structured Settlement Factoring...Time to File Certain Business Income Tax, Information, and Other Returns," or in any...of the properly estimated unpaid tax liability on or before the date...

2013-04-01

291

26 CFR 156.6081-1 - Automatic extension of time for filing a return due under chapter 54.  

Code of Federal Regulations, 2013 CFR

...a return on Form 8725, "Excise Tax on Greenmail," will be allowed...Time to File Certain Business Income Tax, Information, and Other Returns," or in any...of the properly estimated unpaid tax liability on or before the date...

2013-04-01

292

26 CFR 55.6081-1 - Automatic extension of time for filing a return due under Chapter 44.  

Code of Federal Regulations, 2013 CFR

...on Form 8613, "Return of Excise Tax on Undistributed Income of Regulated...Time to File Certain Business Income Tax, Information, and Other Returns," or in any...of the properly estimated unpaid tax liability on or before the date...

2013-04-01

293

Securing the AliEn File Catalogue - Enforcing authorization with accountable file operations  

NASA Astrophysics Data System (ADS)

The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, is based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration and presents a more secure and revised design: a new mechanism, called LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved as well as an up to 50% reduction of the credential's size. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part and beyond the development of AliEn version 2.19.
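A hedged sketch of the signed status message idea: the storage element signs the file name, size, and checksum it observed, and the catalogue verifies the signature before committing the entry, so it no longer needs to trust the client. The sketch uses Ed25519 from the third-party `cryptography` package; the actual AliEn envelope format is not reproduced.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key pair held by the storage element; the catalogue knows only the public key.
se_key = Ed25519PrivateKey.generate()
se_pub = se_key.public_key()

def storage_status_message(lfn: str, data: bytes):
    """Storage element reports trusted size/checksum for a newly written file."""
    body = f"{lfn}|{len(data)}|{hashlib.md5(data).hexdigest()}".encode()
    return body, se_key.sign(body)

def catalogue_commit(body: bytes, signature: bytes) -> bool:
    """File catalogue accepts the entry only if the storage element signed it."""
    try:
        se_pub.verify(signature, body)
    except InvalidSignature:
        return False
    lfn, size, checksum = body.decode().split("|")
    print(f"commit {lfn}: size={size}, md5={checksum}")
    return True

msg, sig = storage_status_message("/alice/data/run1/file.root", b"payload bytes")
print(catalogue_commit(msg, sig))
```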

Schreiner, Steffen; Bagnasco, Stefano; Sankar Banerjee, Subho; Betev, Latchezar; Carminati, Federico; Vladimirovna Datskova, Olga; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Mendez Lorenzo, Patricia; Peters, Andreas Joachim; Saiz, Pablo; Zhu, Jianlin

2011-12-01

294

File Integrity Monitor Scheduling Based on File Security Level Classification  

Microsoft Academic Search

\\u000a Integrity of operating system components must be carefully handled in order to optimize the system security. Attackers always\\u000a attempt to alter or modify these related components to achieve their goals. System files are common targets by the attackers.\\u000a File integrity monitoring tools are widely used to detect any malicious modification to these critical files. Two methods,\\u000a off-line and on-line file

Zul Hilmi Abdullah; Nur Izura Udzir; Ramlan Mahmod; Khairulmizam Samsudin

295

75 FR 1766 - Combined Notice of Filings #1  

Federal Register 2010, 2011, 2012, 2013

...filings: Docket Numbers: EC10-32-000. Applicants: NSTAR Companies, Advanced Energy Systems, Inc., Medical Area Total Energy Plant, Inc., MATEP LLC, New MATEP, Inc. Description: Application under Section 203 of the Federal Power...

2010-01-13

296

ncBrowse: A Graphical netCDF File Browser  

NSDL National Science Digital Library

This Java application provides interactive browsing of data and metadata netCDF file formats written under a wide range of netCDF file conventions. It features flexibility in accommodating a very wide range of netCDF files, user assignment of axes, and interactive, zoomable scientific graphics displays, including a self-scaling time axis. ncBrowse can read network accessible files, and includes Distributed Ocean Data System (DODS) and OPeNDAP support. Supported for Unix, Mac and PC hardware platforms.
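ncBrowse itself is a Java GUI, but the same kind of metadata and data inspection can be scripted. A minimal sketch, assuming the third-party netCDF4 Python package and placeholder file and variable names:

```python
# Minimal sketch of inspecting a netCDF file programmatically (assumes the
# third-party netCDF4 package; "example.nc" and "temperature" are placeholders).
from netCDF4 import Dataset

with Dataset("example.nc", "r") as ds:
    print("dimensions:", {name: len(dim) for name, dim in ds.dimensions.items()})
    for name, var in ds.variables.items():
        print(name, var.dimensions, var.shape)
    if "temperature" in ds.variables:
        temps = ds.variables["temperature"][:]   # read the variable into an array
        print("mean temperature:", temps.mean())
```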

297

A master environmental control and mine system design simulator for underground coal mining - test data file. Data file  

Microsoft Academic Search

The data sets on this tape are test data for the computer programs comprising the Master Environmental Control and Mine System Design Simulator for Underground Coal Mining. The programs are on BuMines Tape 1-77 (MDS-5). The Master Design Simulator is completely documented in an 11-volume report available through NTIS (Report No. PB-255 420\\/AS). These test data correspond to example applications

Schottler

1976-01-01

298

Integrated Risk Information System (IRIS) (for IBM PC microcomputers). Data file  

SciTech Connect

The Integrated Risk Information System (IRIS), an on-line database of chemical-specific risk information, was made available outside EPA. IRIS provides information on how chemicals affect human health and is a primary source of EPA risk-assessment information on chemicals of environmental concern. It is intended to serve as a guide for the hazard identification and dose-response assessment steps of EPA risk assessments. The principal section of IRIS is the chemical files. The chemical files contain: oral and inhalation reference doses for noncarcinogens; oral and inhalation carcinogen assessments; summarized Drinking Water Health Advisories; summaries of selected EPA regulations; supplementary data (for example, acute-toxicity information and physical-chemical properties). The two primary types of health-assessment information in IRIS are reference doses and carcinogen assessments. Reference doses are estimated human chemical exposures over a lifetime which are just below the expected threshold for adverse health effects. Because exposure assessment pertains to exposure at a particular place, IRIS cannot provide situational information on exposure. IRIS can be used with an exposure assessment to characterize the risk of chemical exposure. This risk characterization can be used to decide what must be done to protect human health. Oral reference doses (RfD) are provided for most of the chemicals in IRIS and carcinogen slope factors are provided for some. Inhalation reference doses are not yet available in IRIS. Inhalation reference doses will be added after the Agency produces a methodology for developing these RfDs. For more information on IRIS call IRIS User Support at (513) 569-7254 or FTS 684-7254.

Picardi, R.; Swartout, J.

1988-01-01

299

Statistical Disk Cluster Classification for File Carving  

Microsoft Academic Search

File carving is the process of recovering files from a disk without the help of a file system. In forensics, it is a helpful tool in finding hidden or recently removed disk content. Known signatures in file headers and footers are especially useful in carving such files out, that is, from header until footer. However, this approach assumes that
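As an illustration of the header/footer approach described in the abstract, the toy sketch below scans a raw disk image for JPEG start- and end-of-image markers and writes out each span; it deliberately ignores fragmentation, which is exactly the limitation the paper addresses.

```python
# Toy header/footer carver for JPEGs; ignores fragmented files on purpose.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(image_path: str, out_prefix: str = "carved") -> int:
    with open(image_path, "rb") as f:
        data = f.read()
    count, pos = 0, 0
    while True:
        start = data.find(JPEG_HEADER, pos)
        if start == -1:
            break
        end = data.find(JPEG_FOOTER, start)
        if end == -1:
            break
        with open(f"{out_prefix}_{count}.jpg", "wb") as out:
            out.write(data[start:end + len(JPEG_FOOTER)])
        count += 1
        pos = end + len(JPEG_FOOTER)
    return count
```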

Cor J. Veenman

2007-01-01

300

XML Files  

MedlinePLUS

... If you have questions about the MedlinePlus XML files, please contact us. For additional sources of MedlinePlus ...

301

The Use of Information Files and Information Retrieval Systems Within the University Environment.  

ERIC Educational Resources Information Center

An environment is described in which interdisciplinary scholars at a university are able to utilize for various purposes machine-readable bibliographic and other descriptive text files. The information files include abstracts of social science and computer and information science journal literature, descriptions of research activities in…

Borman, Lorraine

302

Integrating a Flash file using Action Scripts in a data exchange system on the Internet  

Microsoft Academic Search

This paper proposes a modality of design for the on-line completion forms used on Web pages. The chosen example is based on the technical and educational analysis for a study application conceived in Flash programming language using the advanced level of facilities offered by ActionScript. In this approach the connection between a swf Flash file and an asp file is

R. Radescu; Adrian Dumitru

2003-01-01

303

Cooperative Caching: Using Remote Client Memory to Improve File System Performance  

Microsoft Academic Search

Emerging high-speed networks will allow machines to access remote data nearly as quickly as they can access local data. This trend motivates the use of cooperative caching: coordinating the file caches of many machines distributed on a LAN to form a more effective overall file cache. In this paper we examine four cooperative caching algorithms using a trace-driven simulation study.
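The essence of cooperative caching is a lookup order that prefers remote client memory over disk. A conceptual sketch follows; the class and function names are illustrative rather than taken from the paper.

```python
# Conceptual lookup order for cooperative caching: local memory, then a peer's
# memory over the LAN, then disk. Names are illustrative, not from the paper.
class CooperativeCache:
    def __init__(self, peers):
        self.local = {}      # block_id -> data held in this client's memory
        self.peers = peers   # other clients' caches (dict-like objects)

    def read_block(self, block_id):
        if block_id in self.local:            # 1. local hit
            return self.local[block_id]
        for peer in self.peers:               # 2. remote client memory hit
            if block_id in peer:
                data = peer[block_id]
                self.local[block_id] = data
                return data
        data = read_from_disk(block_id)       # 3. miss: fall back to the server disk
        self.local[block_id] = data
        return data

def read_from_disk(block_id):
    return b"..."                             # placeholder for the slow path
```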

Michael D. Dahlin; Randolph Y. Wang; Thomas E. Anderson; David A. Patterson

1994-01-01

304

LegionFS: A Secure and Scalable File System Supporting Cross-Domain High-Performance Applications  

Microsoft Academic Search

Realizing that current file systems can not cope with the diverse requirements of wide-area collaborations, researchers have developed data access facilities to meet their needs. Recent work has focused on comprehensive data access architectures. In order to fulfill the evolving requirements in this environment, we suggest a more fully-integrated architecture built upon the fundamental tenets of naming, security, scalability, extensibility,

Brian S. White; Michael Walker; Marty Humphrey; Andrew S. Grimshaw

2001-01-01

305

LegionFS: a secure and scalable file system supporting cross-domain high-performance applications  

Microsoft Academic Search

Realizing that current file systems can not cope with the diverse requirements of wide-area collaborations, researchers have developed data access facilities to meet their needs. Recent work has focused on comprehensive data access architectures. In order to fulfill the evolving requirements in this environment, we suggest a more fully-integrated architecture built upon the fundamental tenets of naming, security, scalability, extensibility,

Brian S. White; Michael Walker; Marty Humphrey; Andrew S. Grimshaw

2001-01-01

306

IRS Enrolled Actuaries File.  

National Technical Information Service (NTIS)

The file contains names, addresses, and IRS enrollment number for actuaries who are eligible to practice before the IRS. These actuaries are professionals who are eligible to perform actuarial services under the Employee Retirement Income Security Act of ...

1991-01-01

307

Exploiting Weak Connectivity for Mobile File Access  

Microsoft Academic Search

Weak connectivity, in the form of intermittent, low-bandwidth, or expensive networks is a fact of life in mobile computing. In this paper, we describe how the Coda File System has evolved to exploit such networks. The underlying theme of this evolution has been the systematic introduction of adaptivity to eliminate hidden assumptions about strong connectivity. Many aspects of the system,

Lily B. Mummert; Maria Ebling; Mahadev Satyanarayanan

1995-01-01

308

47 CFR 1.10006 - Is electronic filing mandatory?  

Code of Federal Regulations, 2013 CFR

...2013-10-01 2013-10-01 false Is electronic filing mandatory? 1.10006 Section 1...International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic filing is mandatory for all...

2013-10-01

309

Personal File Management for the Health Sciences.  

ERIC Educational Resources Information Center

Written as an introduction to the concepts of creating a personal or reprint file, this workbook discusses both manual and computerized systems, with emphasis on the preliminary groundwork that needs to be done before starting any filing system. A file assessment worksheet is provided; considerations in developing a personal filing system are…

Apostle, Lynne

310

Prototype Implementation of a Time Interval File Protection System in Linux.  

National Technical Information Service (NTIS)

Control of access to information based on temporal attributes has many potential applications. Examples include student user accounts set to expire upon graduation; files marked as time-sensitive so that their contents can be protected appropriately and t...

K. H. Chiang

2006-01-01

311

Analyzing Technique of Power Systems Under Deregulation  

NASA Astrophysics Data System (ADS)

Deregulation of the electric utilities has been progressing. Even under deregulation, reliability remains the most important concern for power systems. As deregulation proceeds, however, the operation and scheduling of power systems are changing, and new techniques for analyzing them are being introduced. Adequacy and security are now commonly used to evaluate the reliability of power systems. This paper presents, from the viewpoint of adequacy and security, the analysis techniques expected to be applied in the near future. First, a simulation tool to evaluate adequacy is described, with MARS and other methods mentioned as examples. Next, to evaluate security, security-constrained unit commitment (SCUC) and security-constrained optimal power flow (SCOPF) are discussed. Finally, some topics concerning ancillary services are described.

Miyauchi, Hajime; Kita, Hiroyuki; Ishigame, Atsushi

312

CT Teaching Files  

NSDL National Science Digital Library

CTisus is a project of the Advanced Medical Imaging Laboratory, and on this site they present their teaching files. The files are divided by organ or body systems (such as Stomach and Neuro), and each division contains from one to forty-two individual files. Each file contains 100 cases, which allow students to see CT scans, courtesy of Dr. Elliot K. Fishman, and diagnose the illness based on what the scan reveals. By clicking the "Diagnosis" on/off buttons, they can see the correct diagnosis. This site will be helpful for students in the fields of diagnostic radiographic imaging or radiology to have an understanding of what diseases look like in CT scans, and for teachers who instruct those students to supplement their classroom lectures and activities with these ready-to-use teaching files.

Fishman, Elliot K.

2007-03-09

313

76 FR 9780 - Notification of Deletion of System of Records; EPA Parking Control Office File (EPA-10) and EPA...  

Federal Register 2010, 2011, 2012, 2013

...of Records; EPA Parking Control Office File (EPA-10) and EPA Transit and Guaranteed Ride Home Program Files (EPA-35) AGENCY: Environmental Protection...records for EPA Parking Control Office File (EPA-10), published in the...

2011-02-22

314

20 CFR 30.10 - Are all OWCP records relating to claims filed under EEOICPA considered confidential?  

Code of Federal Regulations, 2013 CFR

20 Employees' Benefits 1 2013-04-01...30.10 Employees' Benefits OFFICE OF WORKERS...DEPARTMENT OF LABOR ENERGY EMPLOYEES OCCUPATIONAL...COMPENSATION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL...relating to claims for benefits under EEOICPA are...

2013-04-01

315

ON THE ROLE OF HELPERS IN PEER-TO-PEER FILE DOWNLOAD SYSTEMS: DESIGN, ANALYSIS AND SIMULATION  

Microsoft Academic Search

While BitTorrent has been successfully used in peer-to-peer content distribution, its performance is limited by the fact that typical internet users have much lower upload bandwidths than download bandwidths. This asymmetry in bandwidth results in the overall average download speed of a BitTorrent-like file download system being bottlenecked by the much lower upload capacity. This

Jiajun Wang; Chuohao Yeo; Vinod Prabhakaran; Kannan Ramchandran

316

OS Support for a Commodity Database on PC clusters - Distributed Devices vs. Distributed File Systems  

Microsoft Academic Search

In this paper we attempt to parallelise a commodity database for OLAP on a cluster of commodity PCs by using a distributed high-performance storage subsystem. By parallelising the underlying storage architecture we eliminate the need to make any changes to the database software. We look at two options that differ in their complexity and features: Distributed devices and

Felix Rauch; Thomas Stricker

2005-01-01

317

75 FR 29312 - Notice Regarding the Elimination of the Fee for Petitions To Make Special Filed Under the Patent...  

Federal Register 2010, 2011, 2012, 2013

...applicants must pay a petition fee under 37 CFR 1.17(h) to have an application...accompanied by a petition to make special under 37 CFR 1.102(d) along with the required petition fee set forth in 37 CFR 1.17(h). The PPH...

2010-05-25

318

Computer-based method and system for linking records in data files  

US Patent & Trademark Office Database

The present invention relates to computer-based technology for linking or matching records in data files, based on at least one identifier in common, with a threshold probability that records are linked. The method uses a Bayesian probabilistic approach to determine the likelihood that the identified records are linked.
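A generic probabilistic record-linkage calculation in the spirit of the abstract (not the patented method) combines per-identifier agreement likelihood ratios into a posterior link probability and compares it to a threshold; the identifiers and m/u values below are purely illustrative.

```python
# Generic probabilistic record-linkage sketch; the identifiers and m/u values
# (P(agree | linked) and P(agree | not linked)) are illustrative assumptions.
IDENTIFIERS = {
    "last_name":   {"m": 0.95, "u": 0.01},
    "birth_year":  {"m": 0.98, "u": 0.02},
    "postal_code": {"m": 0.90, "u": 0.05},
}

def link_probability(rec_a: dict, rec_b: dict, prior: float = 0.01) -> float:
    odds = prior / (1.0 - prior)
    for field, p in IDENTIFIERS.items():
        agree = rec_a.get(field) == rec_b.get(field)
        odds *= p["m"] / p["u"] if agree else (1 - p["m"]) / (1 - p["u"])
    return odds / (1.0 + odds)

a = {"last_name": "Smith", "birth_year": 1970, "postal_code": "20024"}
b = {"last_name": "Smith", "birth_year": 1970, "postal_code": "20024"}
print(link_probability(a, b) > 0.9)   # True: treat the pair as linked
```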

2003-12-02

319

Effectiveness of a vocabulary data file, encyclopaedia, and Internet homepages in a conversation-support system for people with moderate-to-severe aphasia  

Microsoft Academic Search

Background: In order to facilitate conversation for people with moderate-to-severe aphasia, a conversation-support system has been developed. This system consists of three electronic resources: a vocabulary data file, an encyclopaedia, and homepages on the Internet. The vocabulary data file we created contains approximately 50,000 words, mostly consisting of various proper names, which are classified into 10 categories. These words function

Kiyoshi Yasuda; Tatsuya Nemoto; Keisuke Takenaka; Mami Mitachi; Kazuhiro Kuwabara

2007-01-01

320

76 FR 48833 - Notice of Filings of Self-Certifications of Coal Capability Under the Powerplant and Industrial...  

Federal Register 2010, 2011, 2012, 2013

...DEPARTMENT OF ENERGY [Certification Notice...Self-Certifications of Coal Capability Under...powerplants submitted coal capability self-certifications...the Department of Energy (DOE) pursuant...capability to use coal or another alternate fuel as a primary energy source....

2011-08-09

321

77 FR 74473 - Notice of Filing of Self-Certification of Coal Capability Under the Powerplant and Industrial...  

Federal Register 2010, 2011, 2012, 2013

...DEPARTMENT OF ENERGY [Certification Notice...Self-Certification of Coal Capability Under...powerplant, submitted a coal capability self...the Department of Energy (DOE) pursuant...capability to use coal or another alternate fuel as a primary energy source....

2012-12-14

322

Reliability of dynamic systems under limited information.  

SciTech Connect

A method is developed for reliability analysis of dynamic systems under limited information. The available information includes one or more samples of the system output; any known information on features of the output can be used if available. The method is based on the theory of non-Gaussian translation processes and is shown to be particularly suitable for problems of practical interest. For illustration, we apply the proposed method to a series of simple example problems and compare with results given by traditional statistical estimators in order to establish the accuracy of the method. It is demonstrated that the method delivers accurate results for the case of linear and nonlinear dynamic systems, and can be applied to analyze experimental data and/or mathematical model outputs. Two complex applications of direct interest to Sandia are also considered. First, we apply the proposed method to assess design reliability of a MEMS inertial switch. Second, we consider re-entry body (RB) component vibration response during normal re-entry, where the objective is to estimate the time-dependent probability of component failure. This last application is directly relevant to re-entry random vibration analysis at Sandia, and may provide insights on test-based and/or model-based qualification of weapon components for random vibration environments.

Field, Richard V., Jr.; Grigoriu, Mircea

2006-09-01

323

Efficient algorithms for multi-file caching  

SciTech Connect

Multi-File Caching issues arise in applications where a set of jobs are processed and each job requests one or more input files. A given job can only be started if all its input files are preloaded into a disk cache. Examples of applications where Multi-File caching may be required are scientific data mining, bit-sliced indexes, and analysis of sets of vertically partitioned files. The difference between this type of caching and traditional file caching systems is that in this environment, caching and replacement decisions are made based on "combinations of files (file bundles)," rather than single files. In this work we propose new algorithms for Multi-File caching and analyze their performance. Extensive simulations are presented to establish the effectiveness of the Multi-File caching algorithm in terms of job response time and job queue length.
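The file-bundle idea can be illustrated with a small sketch in which a job is dispatched only when every file in its bundle is resident in the cache; the class and admission policy below are illustrative, not the algorithms proposed in the paper.

```python
# Illustrative bundle-aware cache: admission reasons about whole file bundles.
class BundleCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.resident = {}                      # file name -> size in bytes

    def used(self) -> int:
        return sum(self.resident.values())

    def can_run(self, job_files: dict) -> bool:
        return all(f in self.resident for f in job_files)

    def admit_bundle(self, job_files: dict) -> bool:
        missing = {f: s for f, s in job_files.items() if f not in self.resident}
        if self.used() + sum(missing.values()) > self.capacity:
            return False                        # a real policy would evict victims here
        self.resident.update(missing)
        return True

cache = BundleCache(capacity_bytes=10 * 2**30)
job = {"idx/part1.bits": 2 * 2**30, "idx/part2.bits": 3 * 2**30}
if not cache.can_run(job):
    cache.admit_bundle(job)
print(cache.can_run(job))   # True once the whole bundle is resident
```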

Otoo, Ekow J.; Rotem, Doron; Seshadri, Sridhar

2004-03-15

324

Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System (AWIPS) Using Shapefiles and DGM Files  

NASA Technical Reports Server (NTRS)

Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or DARE Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU). The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) and 45th Weather Squadron (45 WS) to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. Advantages of both file types will be listed.

Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

2007-01-01

325

Cognitive and Neuronal Systems Underlying Obesity  

PubMed Central

Since the late 1970’s obesity prevalence and per capita food intake in the USA have increased dramatically. Understanding the mechanisms underlying the hyperphagia that drives obesity requires focus on the cognitive processes and neuronal systems controlling feeding that occurs in the absence of metabolic need (i.e., "non-homeostatic” intake). Given that a portion of the increased caloric intake per capita since the late 1970’s is attributed to increased meal and snack frequency, and given the increased pervasiveness of environmental cues associated with energy dense, yet nutritionally deplete foods, there’s a need to examine the mechanisms through which food-related cues stimulate excessive energy intake. Here, learning and memory principles and their underlying neuronal substrates are discussed with regard to stimulus-driven food intake and excessive energy consumption. Particular focus is given to the hippocampus, a brain structure that utilizes interoceptive cues relevant to energy status (e.g., neurohormonal signals such as leptin) to modulate stimulus-driven food procurement and consumption. This type of hippocampal-dependent modulatory control of feeding behavior is compromised by consumption of foods common to Western diets, including saturated fats and simple carbohydrates. The development of more effective treatments for obesity will benefit from a more complete understanding of the complex interaction between dietary, environmental, cognitive, and neurophysiological mechanisms contributing to excessive food intake.

Kanoski, Scott E.

2012-01-01

326

FileSearchCube: A File Grouping Tool Combining Multiple Types of Interfile-Relationships  

Microsoft Academic Search

Files in computers are increasing in number, so we require file management tools to find target files and to classify large groups of files. Our research group has been developing a system that provides virtual directories made up of related files. Many methods to extract inter-file relationships are available, such as word frequency, access co-occurrence, and so on. In practice,

Yousuke Watanabe; Kenichi Otagiri; Haruo Yokota

2010-01-01

327

Single File Reciprocating Technique Using Conventional Nickel-Titanium Rotary Endodontic Files.  

PubMed

This study aimed to evaluate the applicability of a reciprocating movement technique with conventional nickel-titanium files for root canal preparation. Forty-four simulated canals in resin blocks were used in this study and divided into the following four groups according to the instruments used and preparation methods. Groups CP (n = 12) and CR (n = 12) were instrumented with continuous rotation using four files of ProFile and RaCe, respectively. Groups RP (n = 10) and RR (n = 10) were instrumented with a reciprocating movement using a single ProFile and RaCe file, respectively. The resin blocks were scanned before and after instrumentation, and the images were superimposed. To compare the efficiency of canal shaping, the preparation time and centering ratio were calculated. Morphologic changes of the tested files were examined by scanning electron microscopy (SEM). Data were analyzed by ANOVA and Duncan's post hoc test. Although the files used for Groups CP and CR showed no distortion under the SEM evaluation, the files used for Groups RP and RR had considerable torsional distortion. This study suggests that the reciprocating instrumentation technique using conventional nickel-titanium rotary file systems might have a comparable efficacy for root canal shaping with reduced shaping time. Although the reciprocating technique seems to be an effective alternative to the conventional rotation technique, the risk of torsional distortion and fracture should be considered before clinical application. SCANNING 9999:1-9, 2013. © 2013 Wiley Periodicals, Inc. PMID:23364950

Jin, So-Youn; Lee, Woocheol; Kang, Mo K; Hur, Bock; Kim, Hyeon-Cheol

2013-01-30

328

A low-cost digital filing system for echocardiography data with MPEG4 compression and its application to remote diagnosis.  

PubMed

The high cost of digital echocardiographs and the large size of data files hinder the adoption of remote diagnosis of digitized echocardiography data. We have developed a low-cost digital filing system for echocardiography data. In this system, data from a conventional analog echocardiograph are captured using a personal computer (PC) equipped with an analog-to-digital converter board. Motion picture data are promptly compressed using a moving pictures expert group (MPEG) 4 codec. The digitized data with preliminary reports obtained in a rural hospital are then sent to cardiologists at distant urban general hospitals via the internet. The cardiologists can evaluate the data using widely available movie-viewing software (Windows Media Player). The diagnostic accuracy of this double-check system was confirmed by comparison with ordinary super-VHS videotapes. We have demonstrated that digitization of echocardiography data from a conventional analog echocardiograph and MPEG 4 compression can be performed using an ordinary PC-based system, and that this system enables highly efficient digital storage and remote diagnosis at low cost. PMID:15562270

Umeda, Akira; Iwata, Yasushi; Okada, Yasumasa; Shimada, Megumi; Baba, Akiyasu; Minatogawa, Yasuyuki; Yamada, Takayasu; Chino, Masao; Watanabe, Takafumi; Akaishi, Makoto

2004-12-01

329

78 FR 54879 - Notice of Filing of Self-Certification of Coal Capability Under the Powerplant and Industrial...  

Federal Register 2010, 2011, 2012, 2013

...DEPARTMENT OF ENERGY [Certification Notice...Self-Certification of Coal Capability Under...powerplant, submitted a coal capability self-certification...the Department of Energy (DOE) pursuant...capability to use coal or another alternate fuel as a primary energy source....

2013-09-06

330

Anisotropy of zero-resistance states in InN films under an in-plane magnetic field  

Microsoft Academic Search

We report low temperature current-voltage measurements on n-type InN films grown by molecular beam epitaxy. The zero-resistance state with a large critical current around 1 mA has been observed at 0.3 K. Under in-plane field configuration, the zero-resistance state shows a large anisotropy in critical current for B parallel and perpendicular to applied current. The ratio of critical current between

Xiaowei He; Yanhua Dai; Ivan Knez; Rui-Rui Du; Xingqiang Wang; Bo Shen

2011-01-01

331

75 FR 39230 - Combined Notice of Filings  

Federal Register 2010, 2011, 2012, 2013

...m. Eastern Time on Monday, July 12, 2010. Docket Numbers: RP10-922-000. Applicants: Venice Gathering System, LLC. Description: Venice Gathering System, LLC submits tariff filing per 154.203: Baseline Tariff Filing to be...

2010-07-08

332

A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system  

NASA Astrophysics Data System (ADS)

The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

2014-06-01

333

Transferring Files Between the Deep Impact Spacecrafts and the Ground Data System Using the CCSDS File Delivery Protocol (CFDP): A Case Study  

NASA Technical Reports Server (NTRS)

The CCSDS File Delivery Protocol (CFDP) Standard could reshape ground support architectures by enabling applications to communicate over the space link using reliable-symmetric transport services. JPL utilized the CFDP standard to support the Deep Impact Mission. The architecture was based on layering the CFDP applications on top of the CCSDS Space Link Extension Services for data transport from the mission control centers to the ground stations. On July 4, 2005 at 1:52 A.M. EDT, the Deep Impact impactor successfully collided with comet Tempel 1. During the final 48 hours prior to impact, over 300 files were uplinked to the spacecraft, while over 6 thousand files were downlinked from the spacecraft using the CFDP. This paper uses the Deep Impact Mission as a case study in a discussion of the CFDP architecture, Deep Impact Mission requirements, and design for integrating the CFDP into the JPL deep space support services. Issues and recommendations for future missions using CFDP are also provided.

Sanders, Felicia A.; Jones, Grailing, Jr.; Levesque, Michael

2006-01-01

334

Integrated Postsecondary Education Data System (IPEDS): Fall Staff Data File 1995  

NSDL National Science Digital Library

The Fall Staff database contains detailed information on staffing for all Post-secondary institutions in the 50 states, the District of Columbia, and the outlying areas that are eligible to participate in Title IV federal financial aid programs. Information gathered from the IPEDS forms used to create this data file include: distribution of full- and part-time staff by primary occupation, sex, and race-ethnicity; full-time faculty by academic rank and tenure; full-time new hires by sex and race-ethnicity; and number of staff by employment status, primary occupation, and sex. Information on software requirements for reading the data is provided.

National Center for Education Statistics

1998-01-01

335

Registered File Support for Critical Operations Files at (Space Infrared Telescope Facility) SIRTF  

NASA Technical Reports Server (NTRS)

The SIRTF Science Center's (SSC) Science Operations System (SOS) has to contend with nearly one hundred critical operations files via comprehensive file management services. The management is accomplished via the registered file system (otherwise known as TFS) which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and meta-data in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command line client implementing this API has been developed as a client tool. This paper describes the architecture, current implementation, but more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.

Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.

2001-01-01

336

Compress Your Files  

ERIC Educational Resources Information Center

File compression enables data to be squeezed together, greatly reducing file size. Why would someone want to do this? Reducing file size enables the sending and receiving of files over the Internet more quickly, the ability to store more files on the hard drive, and the ability to pack many related files into one archive (for example, all files
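As a small illustration of packing related files into one compressed archive, the sketch below uses only the Python standard library; the file names are placeholders.

```python
# Pack several related files into one compressed archive, then expand it again.
import zipfile

files_to_pack = ["report.doc", "figure1.png", "data.csv"]   # placeholder names

with zipfile.ZipFile("project.zip", "w", compression=zipfile.ZIP_DEFLATED) as archive:
    for name in files_to_pack:
        archive.write(name)        # each file is compressed as it is added

with zipfile.ZipFile("project.zip") as archive:
    archive.extractall("unpacked/")
```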

Branzburg, Jeffrey

2005-01-01

337

You Share, I Share: Network Effects and Economic Incentives in P2P File-Sharing Systems  

Microsoft Academic Search

We study the interaction between network effects and external incentives on file sharing behavior in Peer-to-Peer (P2P) networks. Many current or envisioned P2P networks reward individuals for sharing files, via financial incentives or social recognition. Peers weigh this reward against the cost of sharing incurred when others download the shared file. As a result, if other nearby nodes share files

Mahyar Salek; Shahin Shayandeh; David Kempe

2011-01-01

338

You Share, I Share: Network Effects and Economic Incentives in P2P File-Sharing Systems  

Microsoft Academic Search

We study the interaction between network effects and external incentives on file sharing behavior in Peer-to-Peer (P2P) networks. Many current or envisioned P2P networks reward individuals for sharing files, via financial incentives or social recognition. Peers weigh this reward against the cost of sharing incurred when others download the shared file. As a result, if other nearby nodes share files

Mahyar Salek; Shahin Shayandeh; David Kempe

2010-01-01

339

29 CFR 15.303 - How does a Job Corps student file a claim for loss of or damages to personal property under the WIA?  

Code of Federal Regulations, 2013 CFR

...2013-07-01 2013-07-01 false How does a Job Corps student file a claim for loss of or damages...STATUTES Claims Arising Out of the Operation of the Job Corps § 15.303 How does a Job Corps student file a claim for loss of or...

2013-07-01

340

Bimodal Biometric Person Identification System Under Perturbations  

Microsoft Academic Search

Multibiometric person identification systems play a crucial role in environments where security must be ensured. However, building such systems must jointly encompass a good compromise between computational costs and overall performance. These systems must also be robust against inherent or potential noise on the data-acquisition machinery. In this respect, we proposed a bimodal identification system that combines two

Miguel Carrasco; Luis Pizarro; Domingo Mery

2007-01-01

341

Superfund Public Information System (SPIS), June 1998 (on CD-ROM). Data file  

SciTech Connect

The Superfund Public Information System (SPIS) on CD-ROM contains Superfund data for the United States Environmental Protection Agency. The Superfund data is a collection of four databases, CERCLIS, Archive (NFRAP), RODS, and NPL Sites. Descriptions of these databases and CD contents are listed below. The FolioViews browse and retrieval engine is used as a graphical interface to the data. Users can access simple queries and can do complex searching on key words or fields. In addition, context sensitive help, a Superfund process overview, and an integrated data dictionary are available. RODS is the Records Of Decision System. RODS is used to track site clean-ups under the Superfund program to justify the type of treatment chosen at each site. RODS contains information on technology justification, site history, community participation, enforcement activities, site characteristics, scope and role of response action, and remedy. Explanation of Significant Differences (ESDs) are also available on the CD. CERCLIS is the Comprehensive Environmental Response, Compensation, and Liability Information System. It is the official repository for all Superfund site and incident data. It contains comprehensive information on hazardous waste sites, site inspections, preliminary assessments, and remedial status. The system is sponsored by the EPA's Office of Emergency and Remedial Response, Information Management Center. Archive (NFRAP) consists of hazardous waste sites that have no further remedial action planned; only basic identifying information is provided for archive sites. The sites found in the Archive database were originally in the CERCLIS database, but were removed beginning in the fall of 1995. NPL sites (available online) are fact sheets that describe the location and history of Superfund sites. Included are descriptions of the most recent activities and past actions at the sites that have contributed to the contamination. Population estimates, land usages, and nearby resources give background on the local setting surrounding a site.

NONE

1998-06-01

342

Viewing Files  

Cancer.gov

In addition to standard HTML Web pages, our Web sites sometimes contain other file formats. You may need additional software or browser plug-ins to view some of the information available on our sites. The following lists show each format, along with links

343

RSX system development under VAX/VMS compatibility mode  

SciTech Connect

The Control System for the Proton Storage Ring now being built at Los Alamos will use a VAX-11/750 as its main control computer with several LSI-11/23 microprocessors reading and controlling the hardware. The VMS Compatibility Mode makes it possible to use the VAX as a development system for the LSI-11/23 microprocessors running the RSX-11S (stand-alone) operating system. Digital Equipment Corporation (DEC)-supplied software is used to generate the RSX-11S operating system and DECNET-11S network software. We use the VMS editors to create source files, the Macro-11 assembler and the PDP-11 Fortran-77 compiler to generate object code, and the RSX Task Builder to link the executable RSX task image. The RSX task then can be tested to some extent on the VAX before it is down-line loaded to the LSI-11/23 for further testing.

Fuka, M.A.

1983-01-01

344

76 FR 13176 - Combined Notice of Filings #1  

Federal Register 2010, 2011, 2012, 2013

...Description: California Independent System Operator Corporation submits tariff filing per 35: 2011-03-02 CAISO's Convergence Bidding Compliance Filing to be effective 2/1/2011. Filed Date: 03/02/2011 Accession Number: 20110302-5205...

2011-03-10

345

76 FR 23319 - Combined Notice of Filings #1  

Federal Register 2010, 2011, 2012, 2013

...Corporation Description: California Independent System Operator Corporation submits tariff filing per 35: 2011-04-18 CAISO's CPM Compliance Filing to be effective 4/1/2011. Filed Date: 04/18/2011 Accession Number: 20110418-5229 Comment...

2011-04-26

346

The Case for Efficient File Access Pattern Modeling  

Microsoft Academic Search

Most modern I/O systems treat each file access independently. However, events in a computer system are driven by programs. Thus, accesses to files occur in consistent patterns and are by no means independent. The result is that modern I/O systems ignore useful information. Using traces of file system activity we show that file accesses are strongly correlated
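A minimal way to exploit such correlations is a last-successor model built from a trace and used as a prefetch hint; this is a generic illustration, not the authors' exact model.

```python
# Toy last-successor model: remember which file usually follows each file in the
# trace and use that as a prefetch hint. Generic illustration only.
def build_successor_model(trace):
    """trace: ordered list of accessed file names."""
    successor = {}
    for current, nxt in zip(trace, trace[1:]):
        successor[current] = nxt          # last observed successor wins
    return successor

trace = ["app.cfg", "libplot.so", "data.dat", "app.cfg", "libplot.so", "out.log"]
model = build_successor_model(trace)
print(model["app.cfg"])   # "libplot.so" -> candidate file to prefetch
```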

Thomas M. Kroeger; Darrell D. E. Long

1999-01-01

347

12 CFR 1412.7 - Filing instructions.  

Code of Federal Regulations, 2013 CFR

...2013-01-01 2013-01-01 false Filing instructions. 1412.7 Section 1412.7 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION GOLDEN PARACHUTE AND INDEMNIFICATION PAYMENTS § 1412.7 Filing instructions....

2013-01-01

348

Common Biometric Exchange File Format (CBEFF).  

National Technical Information Service (NTIS)

The Common Biometric Exchange File Format (CBEFF) describes a set of data elements necessary to support biometric technologies in a common way. These data can be placed in a single file used to exchange biometric information between different system compo...

F. L. Podio J. S. Dunn L. Reinert C. J. Tilton L. O'Gorman M. P. Collier M. Jerde B. Wirtz

2001-01-01

349

District Reclaims Filing Cabinet Space.  

ERIC Educational Resources Information Center

The Dade County (Florida) school system saved building space and money with a records management program for student and administrative records and with a modern microfilm electronic filing system. (Author/MLF)

American School and University, 1981

1981-01-01

350

Scalable I/O Systems via Node-Local Storage: Approaching 1 TB/sec File I/O  

SciTech Connect

In the race to PetaFLOP-speed supercomputing systems, the increase in computational capability has been accompanied by corresponding increases in CPU count, total RAM, and storage capacity. However, a proportional increase in storage bandwidth has lagged behind. In order to improve system reliability and to reduce maintenance effort for modern large-scale systems, system designers have opted to remove node-local storage from the compute nodes. Today's multi-TeraFLOP supercomputers are typically attached to parallel file systems that provide only tens of GBs/s of I/O bandwidth. As a result, such machines have access to much less than 1GB/s of I/O bandwidth per TeraFLOP of compute power, which is below the generally accepted limit required for a well-balanced system. In many ways, the current I/O bottleneck limits the capabilities of modern supercomputers, specifically in terms of limiting their working sets and restricting fault tolerance techniques, which become critical on systems consisting of tens of thousands of components. This paper resolves the dilemma between high performance and high reliability by presenting an alternative system design which makes use of node-local storage to improve aggregate system I/O bandwidth. In this work, we focus on the checkpointing use-case and present an experimental evaluation of the Scalable Checkpoint/Restart (SCR) library, a new adaptive checkpointing library that uses node-local storage to significantly improve the checkpointing performance of large-scale supercomputers. Experiments show that SCR achieves unprecedented write speeds, reaching a measured 700GB/s of aggregate bandwidth on 8,752 processors and an estimated 1TB/s for a similarly structured machine of 12,500 processors. This corresponds to a speedup of over 70x compared to the bandwidth provided by the 10GB/s parallel file system the cluster uses. Further, SCR can adapt to an environment in which there is wide variation in performance or capacity among the individual node-local storage elements.
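The underlying idea, writing checkpoints to fast node-local storage and draining them to the parallel file system asynchronously, can be sketched as follows; this is not the SCR API, and the paths are assumptions.

```python
# Conceptual sketch (not the SCR API): write the checkpoint to node-local storage
# first, then copy it to the shared parallel file system in the background.
import shutil
import threading

NODE_LOCAL_DIR = "/tmp/ckpt"          # assumed fast node-local path
PARALLEL_FS_DIR = "/p/lustre/ckpt"    # assumed slower shared file system

def write_checkpoint(rank: int, payload: bytes) -> str:
    local_path = f"{NODE_LOCAL_DIR}/ckpt_rank{rank}.bin"
    with open(local_path, "wb") as f:
        f.write(payload)               # fast: bounded by the local device, not the PFS
    threading.Thread(                  # drain to the parallel file system asynchronously
        target=shutil.copy, args=(local_path, PARALLEL_FS_DIR), daemon=True
    ).start()
    return local_path
```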

Bronevetsky, G; Moody, A

2009-08-18

351

File Management In Space  

NASA Technical Reports Server (NTRS)

We propose that the user interact with the spacecraft as if the spacecraft were a file server, so that the user can select and receive data as files in standard formats (e.g., tables or images, such as jpeg) via the Internet. Internet technology will be used end-to-end from the spacecraft to authorized users, such as the flight operation team, and project scientists. The proposed solution includes a ground system and spacecraft architecture, mission operations scenarios, and an implementation roadmap showing migration from current practice to the future, where distributed users request and receive files of spacecraft data from archives or spacecraft with equal ease. This solution will provide ground support personnel and scientists easy, direct, secure access to their authorized data without cumbersome processing, and can be extended to support autonomous communications with the spacecraft.

Critchfield, Anna R.; Zepp, Robert H.

2000-01-01

352

Job Scheduling Under the Portable Batch System  

NASA Technical Reports Server (NTRS)

The typical batch queuing system schedules jobs for execution by a set of queue controls. The controls determine from which queues jobs may be selected. Within the queue, jobs are ordered first-in, first-run. This limits the set of scheduling policies available to a site. The Portable Batch System removes this limitation by providing an external scheduling module. This separate program has full knowledge of the available queued jobs, running jobs, and system resource usage. Sites are able to implement any policy expressible in one of several procedural languages. Policies may range from "best fit" to "fair share" to purely political. Scheduling decisions can be made over the full set of jobs regardless of queue or order. The scheduling policy can be changed to fit a wide variety of computing environments and scheduling goals. This is demonstrated by the use of PBS on an IBM SP-2 system at NASA Ames.

Henderson, Robert L.; Woodrow, Thomas S. (Technical Monitor)

1995-01-01

353

Supply Categories under a Functionalized Supply System.  

National Technical Information Service (NTIS)

The study presents a supply categorization derived from the item behavior patterns in the theater supply system. This categorization is compatible with the supply groupings of other elements of the Armed Forces, potential major allies, and the organizatio...

R. A. Hafner B. R. Baldwin G. P. Chin A. C. Giarratana L. S. Stoneback

1966-01-01

354

File Construction Using FAMULUS.  

ERIC Educational Resources Information Center

Describes the use of FAMULUS, a database management system, to teach library science students at Case Western Reserve University indexing, computerized file construction, and online information retrieval. The special features of FAMULUS and their use in course instruction are outlined and evaluated. A 19-item reference list is attached. (Author/JL)

Pao, Miranda Lee

1982-01-01

355

Personal File Organization  

Microsoft Academic Search

Using this system, personal papers and reprints can be classified and filed in about a minute and can be quickly retrieved knowing either the author or the general subject of interest. All papers relating to a given subject can be located easily.

Theodore B. Warner

1972-01-01

356

Usage analysis of user files in UNIX  

NASA Technical Reports Server (NTRS)

Presented is a user-oriented analysis of short term file usage in a 4.2 BSD UNIX environment. The key aspect of this analysis is a characterization of users and files, which is a departure from the traditional approach of analyzing file references. Two characterization measures are employed: accesses-per-byte (combining fraction of a file referenced and number of references) and file size. This new approach is shown to distinguish differences in files as well as users, which can be used in efficient file system design, and in creating realistic test workloads for simulations. A multi-stage gamma distribution is shown to closely model the file usage measures. Even though overall file sharing is small, some files belonging to a bulletin board system are accessed by many users, simultaneously and otherwise. Over 50% of users referenced files owned by other users, and over 80% of all files were involved in such references. Based on the differences in files and users, suggestions to improve the system performance were also made.
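One plausible reading of the accesses-per-byte measure is total bytes referenced divided by file size, i.e., how many times each byte of the file was accessed on average; the sketch below computes it from an assumed trace format.

```python
# Accesses-per-byte under one plausible reading: total bytes referenced / file size.
# The trace format (name, bytes_read) is an assumption made for illustration.
def accesses_per_byte(trace, file_sizes):
    touched = {}
    for name, nbytes in trace:
        touched[name] = touched.get(name, 0) + nbytes
    return {name: touched[name] / file_sizes[name] for name in touched}

trace = [("notes.txt", 4096), ("notes.txt", 4096), ("big.db", 1024)]
sizes = {"notes.txt": 4096, "big.db": 1 << 20}
print(accesses_per_byte(trace, sizes))   # {'notes.txt': 2.0, 'big.db': ~0.001}
```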

Devarakonda, Murthy V.; Iyer, Ravishankar K.

1987-01-01

357

Automated File Transfer and Storage Management Concepts for Space  

NASA Technical Reports Server (NTRS)

This presentation will summarize work that has been done to prototype and analyze approaches for automated file transfer and storage management for space missions. The concepts were prototyped in an environment with data files being generated at the target mission rates and stored in onboard files. The space-to-ground link was implemented using a channel simulator to introduce representative mission delays and errors. The system was operated for days with data files building up on the spacecraft and periodically being transferred to ground storage during a limited contact time. Overall performance was measured to identify limits under which the entire data volume could be transferred automatically while still fitting into the mission's limited contact time. The overall concepts, measurements, and results will be presented.

Hogie, Keith; Criscuolo, Ed; Parise, Ron

2004-01-01

358

MCard/FS: a file manager for memory cards  

Microsoft Academic Search

A general-purpose file management system designed specifically for memory cards is described. The file system provides a standardized data handling technique to support a wide variety of biomedical applications. Any type of file may be used. System overhead is minimal, and the file structure is independent of microprocessor, word width and implementation language

P. Frenger

1989-01-01

359

Directionally solidified composite systems under evaluation  

NASA Technical Reports Server (NTRS)

Various types of high temperature in-situ composites were reviewed and attempts were made to determine which ones offer the most potential for future development. The systems investigated were categorized according to the ductility of the component phases. The categories range from ductile-ductile to brittle-brittle. Examples in each category are considered with special emphasis on systems which look attractive for use in gas turbine engines. Data also touch on microstructure, mechanical properties, and process problems.

Ashbrook, R. L.

1974-01-01

360

The Soviet School System under Perestroika.  

ERIC Educational Resources Information Center

Describes changes at the three levels of the Soviet educational system (primary, basic, and secondary) brought about by Perestroika. The basic level offers a compulsory general studies program while a differentiated secondary curriculum offers more electives. Discusses the teacher's role and the establishment of public governing councils. (SLM)

Nikolaeva, Anna

1990-01-01

361

Directionally solidified composite systems under evaluation  

NASA Technical Reports Server (NTRS)

The directionally solidified eutectic in-situ composites being evaluated for use as turbine materials range from ductile-ductile systems, where both matrix and reinforcement are ductile, to brittle-brittle systems, where both phases are brittle. The alloys most likely to be used in gas turbine engines in the near term are the lamellar ductile-semi ductile alloys gamma prime-delta, Ni3Al-Ni3Nb and gamma/gamma prime-delta Ni,Cr,Cb,Al/Ni3Al-Ni3Nb and the fibrous ductile-brittle alloys M-MC CoTaC or NiTaC and M-M7C3(Co,Cr,Al)-(Cr,Co)7C3. The results of tests are given which indicate that gamma prime strengthened NiTaC alloys and a (Co,Cr,Al)7C3 have greater tensile strength than the strongest superalloys at temperatures up to about 600 C. The gamma prime-delta and gamma/gamma prime-delta alloys in the Ni,Al,Nb(Cr) systems have greater tensile strength than the superalloys at temperatures greater than 800 C. At low stresses fibrous carbide reinforced eutectic alloys have longer lives at high temperatures than the strongest superalloys. Lamellar delta, Ni3Nb reinforced eutectic alloys have longer lives at high temperatures than the strongest superalloys at all stresses. The experience currently being gained in designing with the brittle ceramics SiC and Si3N4 may eventually be applied to ceramic matrix eutectic in-situ composites. However, the refractory metal fiber reinforced brittle-ductile systems may find acceptance as turbine materials before the ceramic-ceramic brittle-brittle systems.

Ashbrook, R. L.

1974-01-01

362

Studies of Glassy Colloidal Systems Under Shear  

NASA Astrophysics Data System (ADS)

In analogy with the glass transition of polymer (and other molecular) liquids, colloidal suspensions can undergo dynamic arrest to form a glassy solid, when the system is concentrated beyond a critical volume fraction. However, in contrast to their molecular counterparts, studies of the glass transition in colloidal systems are facilitated by their natural length- and time-scales, which make it possible to directly visualize the behaviour of the individual constituent particles. Using confocal microscopy, we follow the dynamics of colloidal suspensions near the glass transition, and in particular, their reaction to an imposed deformation. We investigate the evolution from a quiescent solid to a shear melted liquid, to elucidate the nature of the structural rearrangements that govern the properties of glassy materials.

Massa, Michael; Kim, Chanjoong; Weitz, David

2008-03-01

363

File concepts for parallel I/O  

NASA Technical Reports Server (NTRS)

The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and organizations using multiple storage devices are suggested. Problem areas are also identified and discussed.
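Concurrent access by multiple processes can be sketched with each process writing its own disjoint, fixed-size region of one file via positioned writes; the record size and file name are illustrative, and the example assumes a POSIX system.

```python
# Each process writes a disjoint region of one shared file using positioned
# writes (os.pwrite), so no seek pointer is shared. POSIX-only illustration.
import os
from multiprocessing import Process

RECORD_SIZE = 4096
FILE_NAME = "parallel.dat"

def write_region(rank: int) -> None:
    fd = os.open(FILE_NAME, os.O_WRONLY)
    payload = bytes([rank % 256]) * RECORD_SIZE
    os.pwrite(fd, payload, rank * RECORD_SIZE)
    os.close(fd)

if __name__ == "__main__":
    nprocs = 4
    with open(FILE_NAME, "wb") as f:
        f.truncate(nprocs * RECORD_SIZE)          # pre-size the file
    workers = [Process(target=write_region, args=(r,)) for r in range(nprocs)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```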

Crockett, Thomas W.

1989-01-01

364

Evaluation of coal mine electrical system safety. Open file report (final) 8 Jul 74-30 May 81  

Microsoft Academic Search

This final report concludes the documentation under grant G0155003 and details research not covered under foregoing report volumes. The first chapter lists all other reports. The following chapters are divided into three major research tasks: Continuous Safety Monitoring Systems, Battery and Battery-Charging Safety, and Mine Power System Transients. The monitoring chapter discusses the prediction of power-system failures. The battery chapter

L. A. Morley; F. C. Trutt; J. A. Kohler

1981-01-01

365

78 FR 62613 - Combined Notice of Filings  

Federal Register 2010, 2011, 2012, 2013

...154.203: Compliance Filing to 153 to be effective 12/1/2012. Filed Date: 10/10/13. Accession Number: 20131010-5083. Comments Due: 5 p.m. ET 10/22/13. The filings are accessible in the Commission's eLibrary system by clicking...

2013-10-22

366

77 FR 71409 - Combined Notice of Filings  

Federal Register 2010, 2011, 2012, 2013

...necessary to become a party to the proceeding. Filings in Existing Proceedings Docket Numbers: RP12-1064-001. Applicants: Venice Gathering System, L.L.C. Description: Order Number 587-V Compliance Filing to be effective 12/1/2012. Filed...

2012-11-30

367

Inverted File Compression through Document Identifier Reassignment.  

ERIC Educational Resources Information Center

Discusses the use of inverted files in information retrieval systems and proposes a document identifier reassignment method to reduce the average gap values in an inverted file. Highlights include the d-gap technique; document similarity; heuristic algorithms; file compression; and performance evaluation from a simulation environment. (LRW)
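The d-gap technique stores differences between successive document identifiers and encodes each gap in a variable number of bytes, so smaller gaps (the goal of identifier reassignment) compress better; a short sketch:

```python
# d-gaps plus a simple variable-byte encoding: smaller gaps take fewer bytes,
# which is why reassigning document identifiers to reduce gaps helps compression.
def to_dgaps(postings):
    return [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]

def vbyte_encode(numbers):
    out = bytearray()
    for n in numbers:
        chunk = []
        while True:
            chunk.append(n & 0x7F)
            n >>= 7
            if n == 0:
                break
        chunk[0] |= 0x80                  # mark the last byte of each number
        out.extend(reversed(chunk))
    return bytes(out)

postings = [150, 300, 305, 320, 450, 460]
gaps = to_dgaps(postings)                 # [150, 150, 5, 15, 130, 10]
print(len(vbyte_encode(gaps)), "vs", len(vbyte_encode(postings)), "bytes")  # 9 vs 12
```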

Shieh, Wann-Yun; Chen, Tien-Fu; Shann, Jean Jyh-Jiun; Chung, Chung-Ping

2003-01-01

368

Two Systems of Spatial Representation Underlying Navigation  

PubMed Central

We review evidence for two distinct cognitive processes by which humans and animals represent the navigable environment. One process uses the shape of the extended 3D surface layout to specify the navigator’s position and orientation. A second process uses objects and patterns as beacons to specify the locations of significant objects. Although much of the evidence for these processes comes from neurophysiological studies of navigating animals and neuroimaging studies of human adults, behavioral studies of navigating children shed light both on the nature of these systems and on their interactions.

Lee, Sang Ah; Spelke, Elizabeth S.

2011-01-01

369

Low cost CPU I/O analyzer under system conditions  

Microsoft Academic Search

This paper describes a new methodology of component I/O testing under system conditions, enabling analog validation of modern buses without the need for full system functionality. The inexpensive implementation is appropriate for chips with IBIST-DFT functionality.

Genadly Zobin; M. Sotman; A. Kostinsky

2005-01-01

370

Machine-readable data files from the Madison Limestone and northern Great Plains regional aquifer system analysis projects, Montana, Nebraska, North Dakota, South Dakota, and Wyoming  

USGS Publications Warehouse

Lists of machine-readable data files were developed for the Madison Limestone and Northern Great Plains Regional Aquifer System Analysis (RASA) projects. They are stored on magnetic tape and available from the U.S. Geological Survey. Record format, file content, and size are given for: (1) Drill-stem-test data for Paleozoic and Mesozoic formations, (2) geologic data from the Madison Limestone project, (3) data sets used in the regional simulation model, (4) head data for the Lower and Upper Cretaceous aquifers, and (5) geologic data for Mesozoic formations of the Northern Great Plains. (USGS)

Downey, J. S.

1982-01-01

371

Incommensurability of a confined system under shear.  

PubMed

We study a chain of harmonically interacting atoms confined between two sinusoidal substrate potentials, when the top substrate is driven through an attached spring with a constant velocity. This system is characterized by three inherent length scales and closely related to physical situations with confined lubricant films. We show that, contrary to the standard Frenkel-Kontorova model, the most favorable sliding regime is achieved by choosing chain-substrate incommensurabilities belonging to the class of cubic irrational numbers (e.g., the spiral mean). At large chain stiffness, the well known golden mean incommensurability reveals a very regular time-periodic dynamics with always higher kinetic friction values with respect to the spiral mean case. PMID:16090702

Braun, O M; Vanossi, A; Tosatti, E

2005-07-01

372

Tank waste remediation system year 2000 dedicated file server project HNF-3418 project plan  

SciTech Connect

The Server Project ensures that all TWRS supporting hardware (file servers and workstations) will not cause a system failure because the BIOS or operating system cannot process Year 2000 dates.

SPENCER, S.G.

1999-04-26

373

The Ammonia-Hydrogen System under Pressure  

SciTech Connect

Binary mixtures of hydrogen and ammonia were compressed in diamond anvil cells to 15 GPa at room temperature over a range of compositions. The phase behavior was characterized using optical microscopy, Raman spectroscopy, and synchrotron X-ray diffraction. Below 1.2 GPa we observed two-phase coexistence between liquid ammonia and fluid hydrogen phases with limited solubility of hydrogen within the ammonia-rich phase. Complete immiscibility was observed subsequent to the freezing of ammonia phase III at 1.2 GPa, although hydrogen may become metastably trapped within the disordered face-centered-cubic lattice upon rapid solidification. For all compositions studied, the phase III to phase IV transition of ammonia occurred at ~3.8 GPa and hydrogen solidified at ~5.5 GPa, transition pressures equivalent to those observed for the pure components. A P-x phase diagram for the NH3-H2 system is proposed on the basis of these observations with implications for planetary ices, molecular compound formation, and possible hydrogen storage materials.

Chidester, Bethany A.; Strobel, Timothy A. (CIW)

2012-01-20

374

Document analysis of PDF files: methods, results and implications  

Microsoft Academic Search

SUMMARY: A strategy for document analysis is presented which uses Portable Document Format (PDF — the underlying file structure for Adobe Acrobat software) as its starting point. This strategy examines the appearance and geometric position of text and image blocks distributed over an entire document. A blackboard system is used to tag the blocks as a first stage in deducing

WILLIAM S. LOVEGROVE; DAVID F. BRAILSFORD

1995-01-01

375

Testing using Log File Analysis: Tools, Methods, and Issues  

Microsoft Academic Search

Large software systems often keep log files of events. Such log files can be analyzed to check whether a run of a program reveals faults in the system. We discuss how such log files can be used in software testing. We present a framework for automatically analyzing log files, and describe a language for specifying analyzer programs and an

James H. Andrews

1998-01-01

376

75 FR 40805 - Combined Notice of Filings #2  

Federal Register 2010, 2011, 2012, 2013

...Market-Based Rates Tariff Under Order No. 714 to be effective 7/1/2010. Filed Date...Market-Based Rate Tariff Under Order No. 714 to be effective 7/1/2010. Filed Date...Market-Based Rate Tariff under Order No. 714 to be effective 7/1/2010. Filed...

2010-07-14

377

Fail-over file transfer process  

NASA Technical Reports Server (NTRS)

The present invention provides a fail-over file transfer process to handle data file transfer when the transfer is unsuccessful in order to avoid unnecessary network congestion and enhance reliability in an automated data file transfer system. If a file cannot be delivered after attempting to send the file to a receiver up to a preset number of times, and the receiver has indicated the availability of other backup receiving locations, then the file delivery is automatically attempted to one of the backup receiving locations up to the preset number of times. Failure of the file transfer to one of the backup receiving locations results in a failure notification being sent to the receiver, and the receiver may retrieve the file from the location indicated in the failure notification when ready.
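
The retry-then-failover policy described above can be sketched in a few lines of Python; the send() callable, attempt count, and delay below are hypothetical placeholders rather than details of the patented process.

    import time

    def transfer_with_failover(file_path, primary, backups, send,
                               max_attempts=3, delay=5.0):
        """Try the primary receiver up to max_attempts times, then each backup
        receiving location in turn; return the location that accepted the file,
        or None so that a failure notification (telling the receiver where the
        file can be retrieved later) can be issued."""
        for location in [primary] + list(backups):
            for _ in range(max_attempts):
                if send(file_path, location):   # hypothetical transfer function
                    return location
                time.sleep(delay)               # back off before the next attempt
        return None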

Semancik, Susan K. (Inventor); Conger, Annette M. (Inventor)

2005-01-01

378

NFS File Handle Security  

Microsoft Academic Search

Each file on an NFS server is uniquely identified by a persistent file handle that is used whenever a client performs any NFS operation. NFS file handles reveal significant amounts of information about the server. If attackers can sniff the file handle, then they may be able to obtain useful information. For example, the encoding used by a file

Avishay Traeger; Abhishek Rai; Charles P. Wright; Erez Zadok

379

75 FR 38805 - Filing Via the Internet; Electronic Tariff Filings Notice of Display of Time on Commission's...  

Federal Register 2010, 2011, 2012, 2013

...Electronic Tariff Filings Notice of Display of Time on Commission's Electronic Filing System...display on its electronic filing system the time used by the Commission to mark officially the time that eFilings and eTariff submissions are...

2010-07-06

380

Experience, use, and performance measurement of the Hadoop File System in a typical nuclear physics analysis workflow  

NASA Astrophysics Data System (ADS)

The quantity of information produced in Nuclear and Particle Physics (NPP) experiments necessitates the transmission and storage of data across diverse collections of computing resources. Robust solutions such as XRootD have been used in NPP, but as the usage of cloud resources grows, the difficulties in the dynamic configuration of these systems become a concern. Hadoop File System (HDFS) exists as a possible cloud storage solution with a proven track record in dynamic environments. Though currently not extensively used in NPP, HDFS is an attractive solution offering both elastic storage and rapid deployment. We will present the performance of HDFS in both canonical I/O tests and for a typical data analysis pattern within the RHIC/STAR experimental framework. These tests explore the scaling with different levels of redundancy and numbers of clients. Additionally, the performance of FUSE and NFS interfaces to HDFS were evaluated as a way to allow existing software to function without modification. Unfortunately, the complicated data structures in NPP are non-trivial to integrate with Hadoop and so many of the benefits of the MapReduce paradigm could not be directly realized. Despite this, our results indicate that using HDFS as a distributed filesystem offers reasonable performance and scalability and that it excels in its ease of configuration and deployment in a cloud environment.

Sangaline, E.; Lauret, J.

2014-06-01

381

75 FR 5075 - New York Independent System Operator, Inc.; Notice of Filings  

Federal Register 2010, 2011, 2012, 2013

...1\\ New York Independent System Operator, Inc., 130 FERC ] 61,029 (2010). \\2\\ Foley & Lardner LLP, accession number 20100120-5120; New York ISO, accession numbers 20100120-5119; Steptoe & Johnson...

2010-02-01

382

CineFiles  

NSDL National Science Digital Library

The Pacific Film Archives at Berkeley has been collecting all types of film ephemera for decades. Over the past few years, they have worked to place this material online for the use of film historians and persons with a general interest in cinema. The CineFiles site serves as a database of reviews, press kits, festival and showcase program notes, newspaper articles and other documents from their collection. On their homepage, visitors can perform simple searches, or also perform a filmographic search to search for films by title, subject, genre, and so on. To get visitors started, they have included several sample searches that will be most illustrative. From a 1927 Variety review of Buster Keaton's masterpiece film "College" to an interview with John Cassavetes regarding his 1974 film "A Woman Under the Influence", the CineFiles collection is quite engaging and useful.

383

Implementing MPI-IO atomic mode and shared file pointers using MPI one-sided communication.  

SciTech Connect

The ROMIO implementation of the MPI-IO standard provides a portable infrastructure for use on top of a variety of underlying storage targets. These targets vary widely in their capabilities, and in some cases additional effort is needed within ROMIO to support all MPI-IO semantics. Two aspects of the interface that can be problematic to implement are MPI-IO atomic mode and the shared file pointer access routines. Atomic mode requires enforcing strict consistency semantics, and shared file pointer routines require communication and coordination in order to atomically update a shared resource. For some file systems, native locks may be used to implement these features, but not all file systems have lock support. In this work, we describe algorithms for implementing efficient mutex locks using MPI-1 and the one-sided capabilities from MPI-2. We then show how these algorithms may be used to implement both MPI-IO atomic mode and shared file pointer methods for ROMIO without requiring any features from the underlying file system. We show that these algorithms can outperform traditional file system lock approaches. Because of the portable nature of these algorithms, they are likely useful in a variety of situations where distributed locking or coordination is needed in the MPI-2 environment.
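
A minimal sketch of the shared-file-pointer idea, assuming mpi4py and NumPy are available: each process atomically claims its byte range by fetch-and-adding a shared offset held in a one-sided window on rank 0. This sketch uses the MPI-3 MPI_Fetch_and_op shortcut rather than the MPI-2 mutex algorithm the paper constructs, so it only illustrates the coordination problem being solved, not the paper's method.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Rank 0 hosts an 8-byte window holding the shared file pointer.
    win = MPI.Win.Allocate(8 if rank == 0 else 0, 8, MPI.INFO_NULL, comm)
    if rank == 0:
        np.frombuffer(win.tomemory(), dtype=np.int64)[0] = 0
    comm.Barrier()

    nbytes = 4096 * (rank + 1)                 # amount this process wants to write
    incr = np.array([nbytes], dtype=np.int64)
    offset = np.zeros(1, dtype=np.int64)

    win.Lock(0, MPI.LOCK_SHARED)               # passive-target access epoch on rank 0
    win.Fetch_and_op(incr, offset, 0, 0, MPI.SUM)  # atomically fetch old offset, add nbytes
    win.Unlock(0)

    # offset[0] is now this process's exclusive starting offset; an MPI-IO write
    # at that explicit offset would complete the shared-file-pointer operation.
    print(f"rank {rank} writes {nbytes} bytes at offset {offset[0]}")
    win.Free()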

Latham, R.; Ross, R.; Thakur, R.; Mathematics and Computer Science

2007-07-01

384

HYPO-COBOL Compiler Validation System (HCCVS) - Population File (Tape) Release 1.0.  

National Technical Information Service (NTIS)

HYPO-COBOL is a proper subset of the full American National Standard Programming Language COBOL as defined in ANSI X3.23-1974. It is oriented toward a compiling system which need not place heavy demands on its environment in terms of time and space and pr...

M. M. Cook R. J. Gorg

1976-01-01

385

Course Management Systems and Campus-Based Learning. Professional File. Number 29  

ERIC Educational Resources Information Center

Course management systems (CMSs) have become a symbol of innovation at institutions of higher education and in less than a decade they have been rapidly adopted by a large number of colleges and universities in many countries around the world (Coates, 2005; Dutton, Cheong, & Park, 2004; Malikowski, Thompson, & Theis, 2007; Wise & Quealy, 2006).…

Lopes, Valerie

2008-01-01

386

Paperless Policy: Digital Filing System Benefits to DoD Contracting Organizations.  

National Technical Information Service (NTIS)

The year 2000 was the cutoff date for the Department of Defense (DoD) to have paperless processes in place. Since then, advances in computer technology have led to such paperless contracting processes as the DoD-wide Standard Procurement System (SPS), Wid...

B. J. Sherman E. Freeman

2007-01-01

387

Mitigating the Effects of Optimistic Replication in a Distributed File System  

Microsoft Academic Search

Optimistic replication strategies can significantly increase availability of data in distributed systems. However, such strategies cannot guarantee global consistency in the presence of partitioned updates. The danger of conflicting partitioned updates, combined with the fear that the machinery needed to cope with conflicts might be excessively complex, has prevented designers from using optimistic replication in real systems. This dissertation puts these fears to rest by

Puneet Kumar

1994-01-01

388

GatorShare: a file system framework for high-throughput data management  

Microsoft Academic Search

Voluntary Computing systems or Desktop Grids (DGs) enable sharing of commodity computing resources across the globe and have gained tremendous popularity among scientific research communities. Data management is one of the major challenges of adopting the Voluntary Computing paradigm for large data-intensive applications. To date, middleware for supporting such applications either lacks an efficient cooperative data distribution scheme or cannot

Jiangyan Xu; Renato J. O. Figueiredo

2010-01-01

389

17 CFR 242.608 - Filing and amendment of national market system plans.  

...and complete version of the plan is posted on a plan Web site or on a Web site designated by plan participants within two business...effective national market system plan shall ensure that such Web site is updated to reflect amendments to such...

2014-04-01

390

75 FR 65467 - Combined Notice of Filings No. 1  

Federal Register 2010, 2011, 2012, 2013

...2010. Docket Numbers: RP11-1413-000. Applicants: Venice Gathering System, L.L.C. Description: Venice Gathering System, L.L.C. submits tariff filing per 154.203: Venice Gathering System Rate Settlement Compliance Filing...

2010-10-25

391

Deep hydrogeologic flow system underlying the Oak Ridge Reservation.  

National Technical Information Service (NTIS)

The deep hydrogeologic system underlying the Oak Ridge Reservation contains some areas contaminated with radionuclides, heavy metals, nitrates, and organic compounds. The groundwater at that depth is saline and has previously been considered stagnant. On ...

R. Nativ A. E. Hunley

1993-01-01

392

Demonstration of coal mine illumination systems. Open file report (final) October 1977-June 1980  

SciTech Connect

The purpose of this program was to demonstrate the feasibility of illuminating various types of underground coal mining machinery as required by the Federal Coal Mine Illumination Standards Part 75.1719 to 75.1719-4 Code of Federal Regulations Title 30. Nine various machines were illuminated and the illumination systems were evaluated for a 3-month period. Factors evaluated were ease of implementation, reliability, ease of maintenance, acceptance by mine workers and operations, illumination degradation, and durability.

Szpak, A.D.; Hahn, W.F.; Skinner, C.S.

1981-01-01

393

Demonstration of coal mine illumination systems. Open file report (final) October 1977-June 1980  

Microsoft Academic Search

The purpose of this program was to demonstrate the feasibility of illuminating various types of underground coal mining machinery as required by the Federal Coal Mine Illumination Standards Part 75.1719 to 75.1719-4 Code of Federal Regulations Title 30. Nine various machines were illuminated and the illumination systems were evaluated for a 3-month period. Factors evaluated were ease of implementation, reliability,

A. D. Szpak; W. F. Hahn; C. S. Skinner

1981-01-01

394

New Paradigm of Power System Planning under Competitive Environment  

NASA Astrophysics Data System (ADS)

This paper presents a new paradigm of power system planning under a competitive environment. As power systems become more competitive through liberalization, they face new aspects that the conventional bundled power company has never encountered. Conventional power system planning methods do not match the requirements of a competitive environment. In practice, power system liberalization brings about a new environment that puts emphasis on profit maximization and risk minimization. Thus, the problem formulation of power system planning should be reformulated to reflect these new aspects. As the tasks of power system planning, this paper outlines transmission network expansion planning, distribution network expansion planning, and unit commitment under a competitive environment. In addition, new tasks such as very short-term load forecasting, electricity price forecasting, and wind power forecasting are described.

Mori, Hiroyuki

395

A generic filter driver for file classification in Linux  

Microsoft Academic Search

Whenever a user has to deal with a large number of files in a system, managing and storing those files systematically is important in order to make them easier to access. This is the role of our FS Filter Driver, which helps the user classify these files, check them for intrusions, and store them in separate directories

Pravin Dilp; Ajit Ambeka; Pramila Chawan

2011-01-01

396

Continuous working level detector system. Open file report (final) Dec 80-Jun 81  

SciTech Connect

Studies show that exposure of miners in underground mines to radon daughter products causes a fivefold increase in the incidence of lung cancer. To aid the uranium mining industry in complying with the standards enforced to limit exposure to these daughter products, the Bureau of Mines and the Mining Safety and Health Administration have been conducting studies on personal dosimeters to determine the most accurate measurement of working level exposure. This report documents a contract undertaken to develop a commercially available source of a continuous working level detector developed by the Bureau of Mines. The report contains a system and circuit description of the continuous working level detector.

Strombotne, T.R.; Beggs, A.L.

1982-05-01

397

A Systems Modeling Approach for Risk Management of Command File Errors  

NASA Technical Reports Server (NTRS)

The main cause of commanding errors is often (but not always) procedural: lack of maturity in the processes, incompleteness of requirements, or lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and making hasty changes in standard procedures in response to an unexpected event. In general, it is important to look at the big picture prior to making corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.

Meshkat, Leila

2012-01-01

398

SIDS-to-ADF File Mapping Manual  

NASA Technical Reports Server (NTRS)

The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The SIDS-toADF File Mapping Manual specifies the exact manner in which, under CGNS conventions, CFD data structures (the SIDS) are to be stored in (i.e., mapped onto) the file structure provided by the database manager (ADF). The result is a conforming CGNS database. Adherence to the mapping conventions guarantees uniform meaning and location of CFD data within ADF files, and thereby allows the construction of universal software to read and write the data.

McCarthy, Douglas; Smith, Matthew; Poirier, Diane; Smith, Charles A. (Technical Monitor)

2002-01-01

399

Optimal Capacitor Allocation in Radial Distribution Systems under APDRP  

Microsoft Academic Search

Optimum location and size of capacitors for a distribution system under APDRP (Accelerated Power Development Programme) are presented. In the present study capacitor sizes are assumed as discrete known variables, which are to be placed on the buses such that they reduce the losses of the distribution system to a minimum. Genetic algorithm is used as an optimization tool, which

S. Azim; K. S. Swarup

2005-01-01

400

Report filing in histopathology.  

PubMed

An assessment of alternative methods of filing histopathology report forms in alphabetical order showed that orthodox card index filing is satisfactory up to about 100000 reports but, because of the need for long-term retrieval, when the reports filed exceed this number they should be copied on jacketed microfilm and a new card index file begun. PMID:591645

Blenkinsopp, W K

1977-11-01

401

Bit Transposed Files  

Microsoft Academic Search

Introduction and Motivation: Conventional access methods cannot be effectively used in large Scientific/Statistical Database (SSDB) applications. A file structure (called bit transposed file) is proposed which offers several attractive features that are better suited for the special characteristics that SSDBs exhibit. This file structure is an extreme version of the (attribute) transposed file. The data is stored by

Harry K. T. Wong; Hsiu-fen Liu; Frank Olken; Doron Rotem; Linda Wong

1985-01-01

402

29 CFR 1977.15 - Filing of complaint for discrimination.  

Code of Federal Regulations, 2013 CFR

...CONTINUED) DISCRIMINATION AGAINST EMPLOYEES EXERCISING RIGHTS UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 Procedures § 1977.15 Filing of complaint for discrimination. (a) Who may file. A complaint of section...

2013-07-01

403

78 FR 16848 - Combined Notice of Filings #1  

Federal Register 2010, 2011, 2012, 2013

...facility filings: Docket Numbers: QF13-325-000. Applicants: IPS Power Engineering. Description: Form 556--Notice of self-certification of qualifying cogeneration facility status of IPS Power Engineering under QF13-325. Filed Date:...

2013-03-19

404

Connections: using context to enhance file search  

Microsoft Academic Search

Connections is a file system search tool that combines traditional content-based search with context information gathered from user activity. By tracing file system calls, Connections can identify temporal relationships between files and use them to expand and reorder traditional content search results. Doing so improves both recall (reducing false-negatives) and precision (reducing false-positives). For example, Connections improves the average recall
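
A rough sketch of the kind of temporal relationship Connections exploits, under the assumption that a trace of (timestamp, path) file accesses is available; the window size and threshold here are hypothetical, and the real system's relation graph and ranking are more involved.

    from collections import defaultdict

    def co_access_counts(trace, window=30.0):
        """Count how often two files are accessed within `window` seconds of each
        other in a list of (timestamp, path) events -- a crude stand-in for the
        temporal relationships mined from file-system call traces."""
        trace = sorted(trace)
        counts = defaultdict(int)
        start = 0
        for i, (t, path) in enumerate(trace):
            while trace[start][0] < t - window:
                start += 1
            for _, other in trace[start:i]:
                if other != path:
                    counts[frozenset((path, other))] += 1
        return counts

    def expand_results(content_hits, counts, threshold=3):
        """Add files that are frequently co-accessed with the content-search hits."""
        expanded = set(content_hits)
        for pair, n in counts.items():
            if n >= threshold and pair & set(content_hits):
                expanded |= pair
        return expanded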

Craig A. N. Soules; Gregory R. Ganger

2005-01-01

405

Evaluation and Analysis of GreenHDFS: A Self-Adaptive, Energy-Conserving Variant of the Hadoop Distributed File System  

Microsoft Academic Search

We present a detailed evaluation and sensitivity analysis of an energy-conserving, highly scalable variant of the Hadoop Distributed File System (HDFS) called GreenHDFS. GreenHDFS logically divides the servers in a Hadoop cluster into Hot and Cold Zones and relies on insightful data-classification driven energy-conserving data placement to realize guaranteed, substantially long periods (several days) of idleness in a significant subset

Rini T. Kaushik; Milind A. Bhandarkar; Klara Nahrstedt

2010-01-01

406

Computer program modifications of Open-file report 82-1065; a comprehensive system for interpreting seismic-refraction and arrival-time data using interactive computer methods  

USGS Publications Warehouse

The computer programs published in Open-File Report 82-1065, A comprehensive system for interpreting seismic-refraction arrival-time data using interactive computer methods (Ackermann, Pankratz, and Dansereau, 1982), have been modified to run on a mini-computer. The new version uses approximately 1/10 of the memory of the initial version, is more efficient and gives the same results.

Ackermann, Hans D.; Pankratz, Leroy W.; Dansereau, Danny A.

1983-01-01

407

Downloading Replicated, Wide-Area Files - A Framework and Empirical Evaluation  

Microsoft Academic Search

The challenge of efficiently retrieving files that are broken into segments and replicated across the wide-area is of prime importance to wide-area, peer-to-peer, and Grid file systems. Two differing algorithms addressing this challenge have been proposed and evaluated. While both have been successful in differing performance scenarios, there has been no unifying work that can view both algorithms under

Rebecca L. Collins; James S. Plank

2004-01-01

408

75 FR 65471 - Combined Notice of Filings No. 2  

Federal Register 2010, 2011, 2012, 2013

...Eastern Time on Tuesday, September 14, 2010. Docket Numbers: RP10-922-001. Applicants: Venice Gathering System, L.L.C. Description: Venice Gathering System, L.L.C. submits tariff filing per 154.205(b): Errata Filing...

2010-10-25

409

75 FR 72820 - Combined Notice Of Filings No. 1  

Federal Register 2010, 2011, 2012, 2013

...Eastern Time on Monday, November 29, 2010. Docket Numbers: RP11-1532-000. Applicants: Venice Gathering System, L.L.C. Description: Venice Gathering System, L.L.C. submits tariff filing per 154.203: Compliance Filing and...

2010-11-26

410

Endodontic Treatment of Maxillary Premolar with Three Root Canals Using Optical Microscope and NiTi Rotatory Files System  

PubMed Central

The aim of the study was to report a clinical case of endodontic treatment of a maxillary first premolar with three root canals using an optical microscope and rotary instrumentation technique. The main complaint of the patient, a 16-year-old girl, was pain in tooth 14. After clinical and radiographic examination, irreversible pulpitis was diagnosed. An alteration in the middle third of the pulp chamber radiographically observed suggested the presence of three root canals. Pulp chamber access and initial catheterization using size number 10 K-files were performed. The optical microscope and radiographic examination were used to confirm the presence of three root canals. PathFiles #13, #16, and #19 were used to perform catheterization and ProTaper files S1 and S2 for cervical preparation. Apical preparation was performed using F1 file in the buccal canals and F2 in the palatal canal up to the working length. The root canals were filled with Endofill sealer by thermal compaction technique using McSpadden #50. The case has been receiving follow-up for 12 months and no painful symptomatology or periapical lesions have been found. The use of technological tools was able to assist the endodontic treatment of teeth with complex internal anatomy, such as three-canal premolars.

Relvas, Joao Bosco Formiga; de Carvalho, Fredsom Marcio Acris; Marques, Andre Augusto Franco; Sponchiado, Emilio Carlos; Garcia, Lucas da Fonseca Roberti

2013-01-01

411

Microbiological quality of goat's milk obtained under different production systems.  

PubMed

In order to determine the safety of milk produced by smallholder dairy goat farms, a farm-based research study was conducted on commercial dairy goat farms to compare the microbiological quality of milk produced using 3 different types of dairy goat production systems (intensive, semi-intensive and extensive). A survey of dairy goat farms in and around Pretoria carried out by means of a questionnaire revealed that most of the smallholder dairy goat farms surveyed used an extensive type of production system. The method of milking varied with the type of production system, i.e. machine milking; bucket system machine milking and hand-milking, respectively. Udder half milk samples (n=270) were analysed, of which 31.1% were infected with bacteria. The lowest intra-mammary infection was found amongst goats in the herd under the extensive system (13.3%), compared with 43.3% and 36.7% infection rates under the intensive and semi-intensive production systems, respectively. Staphylococcus intermedius (coagulase positive), Staphylococcus epidermidis and Staphylococcus simulans (both coagulase negative), were the most common cause of intramammary infection with a prevalence of 85.7% of the infected udder halves. The remaining 14.3% of the infection was due to Staphylococcus aureus. Bacteriology of bulk milk samples on the other hand, showed that raw milk obtained by the bucket system milking machine had the lowest total bacterial count (16,450 colony forming units (CFU)/ml) compared to that by pipeline milking machine (36,300 CFU/ml) or hand-milking (48,000 CFU/ml). No significant relationship was found between the somatic cell counts (SCC) and presence of bacterial infection in goat milk. In comparison with the herds under the other 2 production systems, it was shown that dairy goat farming under the extensive production system, where hand-milking was used, can be adequate for the production of safe raw goat milk. PMID:16108524

Kyozaire, J K; Veary, C M; Petzer, I M; Donkin, E F

2005-06-01

412

Band/wheel system vibration under impulsive boundary excitation  

NASA Astrophysics Data System (ADS)

Measurements of band vibration undertaken here show that passage of the butt weld connecting the ends of continuous band over wheels excites vibration in the band/wheel system. A displacement impulse occurs each time the weld initially contacts and separates from the wheels. The excitation is periodic, and it can excite instability. The vibration and stability of the coupled band/wheel system under impulsive boundary displacements are analyzed in this paper. The theoretical and experimental findings show that resonance occurs in the system when the weld passage (impulse) period is an integer multiple of any system natural period.

Wang, K. W.; Mote, C. D.

1987-06-01

413

Efficient diagnosis of multiprocessor systems under probabilistic models  

NASA Technical Reports Server (NTRS)

The problem of fault diagnosis in multiprocessor systems is considered under a probabilistic fault model. The focus is on minimizing the number of tests that must be conducted in order to correctly diagnose the state of every processor in the system with high probability. A diagnosis algorithm that can correctly diagnose the state of every processor with probability approaching one in a class of systems performing slightly greater than a linear number of tests is presented. A nearly matching lower bound on the number of tests required to achieve correct diagnosis in arbitrary systems is also proven. Lower and upper bounds on the number of tests required for regular systems are also presented. A class of regular systems which includes hypercubes is shown to be correctly diagnosable with high probability. In all cases, the number of tests required under this probabilistic model is shown to be significantly less than under a bounded-size fault set model. Because the number of tests that must be conducted is a measure of the diagnosis overhead, these results represent a dramatic improvement in the performance of system-level diagnosis techniques.

Blough, Douglas M.; Sullivan, Gregory F.; Masson, Gerald M.

1989-01-01

414

Correlation Based File Prefetching Approach for Hadoop  

Microsoft Academic Search

Hadoop Distributed File System (HDFS) has been widely adopted to support Internet applications because of its reliable, scalable and low-cost storage capability. Blue Sky, one of the most popular e-Learning resource sharing systems in China, is utilizing HDFS to store massive courseware. However, due to the inefficient access mechanism of HDFS, access latency of reading files from HDFS significantly impacts

Bo Dong; Xiao Zhong; Qinghua Zheng; Lirong Jian; Jian Liu; Jie Qiu; Ying Li

2010-01-01

415

Indexing and filing of pathological illustrations.  

PubMed Central

An inexpensive feature card retrieval system has been combined with the Systematised Nomenclature of Pathology (SNOP) to provide simple but efficient means of indexing and filing 2 in. x 2 in. transparencies within a department of pathology. Using this system 2400 transparencies and the associated index cards can be conveniently stored in one drawer of a standard filing cabinet. Images

Brown, R A; Fawkes, R S; Beck, J S

1975-01-01

416

Conversion of Files for Circulation Control.  

ERIC Educational Resources Information Center

Presents suggestions for reducing labor costs, improving accuracy, and maximizing computer use when converting circulation system bibliographic files. Planning advice is offered for system integration with regional, state, or national systems. (RAA)

Barkalow, Pat

1979-01-01

417

Wotsit's File Format Collection  

NSDL National Science Digital Library

Wotsit's File Format Collection, provided by Paul Oliver, features a very large number of file formats. These include JPEG image files, wave sound files, Rich Text files, and common database and word-processing files such as Paradox and Wordperfect. Documents collected or linked at the site are primarily either original specifications from the creator or an improved version of the original. All of the specifications are very technical and are directed towards programmers. Users can subscribe to a mailing list for notification of site updates.

418

Stability of quantized control systems under dynamic bit assignment  

Microsoft Academic Search

In recent years, there have been several papers characterizing the minimum number of quantization levels required to assure closed-loop stability. This minimum bit rate is usually achieved through time-varying quantization policies. Many networks, however, prefer a constant bit rate configuration, so it is useful to characterize the stability of quantized feedback systems under constant bit rate quantization. This note first

Qiang Ling; Michael D. Lemmon

2005-01-01

419

Cholinergic System Under Aluminium Toxicity in Rat Brain  

PubMed Central

The present investigation envisages the toxic effects of aluminium on the cholinergic system of male albino rat brain. Aluminium toxicity (LD50/24 h) evaluated as per Probit method was found to be 700 mg/kg body weight. One-fifth of lethal dose was taken as the sublethal dose. For acute dose studies, rats were given a single lethal dose of aluminium acetate orally for one day only and for chronic dose studies, the rats were administered with sublethal dose of aluminium acetate once in a day for 25 days continuously. The two constituents of the cholinergic system viz. acetylcholine and acetylcholinesterase were determined in selected regions of rat brain such as cerebral cortex, hippocampus, hypothalamus, cerebellum, and pons-medulla at selected time intervals/days under acute and chronic treatment with aluminium. The results revealed that while acetylcholinesterase activity was inhibited, acetylcholine level was elevated differentially in all the above mentioned areas of brain under aluminium toxicity, exhibiting area-specific response. All these changes in the cholinergic system were subsequently manifested in the behavior of rat exhibiting the symptoms such as adipsia, aphagia, hypokinesia, fatigue, seizures, etc. Restoration of the cholinergic system and overt behavior of rat to the near normal levels under chronic treatment indicated the onset of either detoxification mechanisms or development of tolerance to aluminium toxicity in the animal which was not probably so efficient under acute treatment.

Yellamma, K.; Saraswathamma, S.; Kumari, B. Nirmala

2010-01-01

420

11. An abandoned electrical system was found under the pressedsteel ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

11. An abandoned electrical system was found under the pressed-steel ceiling. For some undetermined reason the pattern of the ceiling panels has 'photographed' onto the cardboard substrate. Two different panel designs were utilized in a checkerboard pattern. One panel of each design remains in place. Credit GADA/MRM. - Stroud Building, 31-33 North Central Avenue, Phoenix, Maricopa County, AZ

421

Failure of thermal barrier coating systems under cyclic thermomechanical loading  

Microsoft Academic Search

The failure mechanisms of thermal barrier coating (TBC) systems applied on gas turbine blades and vanes are investigated using thermomechanical fatigue (TMF) tests and finite element (FE) modeling. TMF tests were performed at two levels of applied mechanical strain, namely five times and three times the critical in-service mechanical strain of an industrial gas turbine. TMF testing under the higher

E Tzimas; H Müllejans; S. D Peteves; J Bressers; W Stamm

2000-01-01

422

Sorted pulse data (SPD) library. Part I: A generic file format for LiDAR data from pulsed laser systems in terrestrial environments  

NASA Astrophysics Data System (ADS)

The management and spatial-temporal integration of LiDAR data from different sensors and platforms has been impeded by a lack of generic open source tools and standards. This paper presents a new generic file format description (sorted pulse data; SPD) for the storage and processing of airborne and terrestrial LiDAR data. The format is designed specifically to support both traditional discrete return and waveform data, using a pulse (rather than point) based data model. The SPD format also supports 2D spatial indexing of the pulses, where pulses can be referenced using cartesian, spherical, polar or scan geometry coordinate systems and projections. These indexes can be used to significantly speed up data processing whilst allowing the data to be appropriately projected and are particularly useful when analysing and interpreting TLS data. The format is defined within a HDF5 file, which provides a number of benefits including broad support across a wide range of platforms and architectures and support for file compression. An implementation of the format is available within the open source sorted pulse data software library (SPDLib; http://www.spdlib.org).

Bunting, Peter; Armston, John; Lucas, Richard M.; Clewley, Daniel

2013-07-01

423

Factors affecting the wear of sonic files.  

PubMed

The aim of this study was to investigate factors affecting the wear and cutting ability of sonic files. A model system was used and the following variables evaluated: file type (Heliosonic, Rispisonic or Shaper); load (25, 50 or 100 grams); and length of time in use (new, 30 or 60 seconds). A 3³ full factorial analysis with two replications into the effect of the above variables on the cutting ability of the Heliosonic, Rispisonic and Shaper files powered by the MM1500 sonic instrument was performed. A new file size 25 (Heliosonic and Shaper) or No 3 (Rispisonic) was used for each cut together with water irrigation and the substrate used was 1 mm thick sections of bovine bone. All variables had a significant effect on cutting (ANOVA, p < 0.001). However examination of the F values showed that the most significant variable was load, followed by file type, and time. The most significant interaction was between file type and load followed by time and file type. The interaction between time and load was not significant (p > 0.05). The Rispisonic file was most susceptible to wear during use especially at higher loads and the Heliosonic file cut least. It is suggested that the Shaper file is the better design of the three with respect to cutting ability and wear with use. PMID:9028184

Lumley, P J

1996-08-01

424

Electroporation visualized under a multishot pulsed laser fluorescence microscope system  

NASA Astrophysics Data System (ADS)

We describe a new fluorescence microscope system, which is the third generation of our pulsed-laser microscope systems developed for the purpose of capturing rapid cellular phenomena. Time resolution of this latest version is supported by the combination of a Q-switched Nd:YAG laser producing a burst of 4 pulses and a large format framing camera. We obtain series images at intervals on the order of 10 microseconds with exposure times of 30 ns. With this multi-shot pulsed laser fluorescence microscope system, we examined the behavior of the transmembrane potential in a sea urchin egg under an intense electric field. Irreversible process of cell electroporation was revealed in serial images taken under a single electric pulse of microsecond duration.

Itoh, Hiroyasu; Yu, Irene I. K.; Hibino, Masahiro; Hayakawa, Tsuyoshi; Kinosita, Kazuhiko, Jr.

1993-10-01

425

Neural systems underlying approach and avoidance in anxiety disorders  

PubMed Central

Approach-avoidance conflict is an important psychological concept that has been used extensively to better understand cognition and emotion. This review focuses on neural systems involved in approach, avoidance, and conflict decision making, and how these systems overlap with implicated neural substrates of anxiety disorders. In particular, the role of amygdala, insula, ventral striatal, and prefrontal regions are discussed with respect to approach and avoidance behaviors. Three specific hypotheses underlying the dysfunction in anxiety disorders are proposed, including: (i) over-representation of avoidance valuation related to limbic overactivation; (ii) under- or over-representation of approach valuation related to attenuated or exaggerated striatal activation respectively; and (iii) insufficient integration and arbitration of approach and avoidance valuations related to attenuated orbitofrontal cortex activation. These dysfunctions can be examined experimentally using versions of existing decision-making paradigms, but may also require new translational and innovative approaches to probe approach-avoidance conflict and related neural systems in anxiety disorders.

Aupperle, Robin L.; Paulus, Martin P.

2010-01-01

426

76 FR 35876 - Combined Notice of Filings #1  

Federal Register 2010, 2011, 2012, 2013

...Consolidated Edison Company of New York, Inc. submits tariff filing per 35.13(a)(1): Amendment to PASNY and EDDS for Targeted DSM Program June 2011 to be effective 6/14/2011 under ER11-3789 Filing Type: 320. Filed Date: 06/13/2011....

2011-06-20

427

ACCIDENT, ILLNESS AND INJURY AND EMPLOYMENT SELF-EXTRACTING FILES  

EPA Science Inventory

Files containing information of accidents, illnesses and injuries of miners. These "self-extracting" files are the actual information (raw data) from the accident and injury MSHA Form 7000-2 filed with MSHA by mining operators and contractors as required under the 30 CFR Part 50....

428

General Test Result Checking with Log File Analysis  

Microsoft Academic Search

We describe and apply a lightweight formal method for checking test results. The method assumes that the software under test writes a text log file; this log file is then analyzed by a program to see if it reveals failures. We suggest a state-machine-based formalism for specifying the log file analyzer programs and describe a language and implementation based on

James H. Andrews; Yingjun Zhang

2003-01-01

429

File caching in data intensive scientific applications  

SciTech Connect

We present some theoretical and experimental results of an important caching problem that arises frequently in data intensive scientific applications. In such applications, jobs need to process several files simultaneously, i.e., a job can only be serviced if all its needed files are present in the disk cache. The set of files requested by a job is called a file-bundle. This requirement introduces the need for cache replacement algorithms based on file-bundles rather than individual files. We show that traditional caching algorithms such as Least Recently Used (LRU) and GreedyDual-Size (GDS) are not optimal in this case since they are not sensitive to file-bundles and may hold non-relevant combinations of files in the cache. In this paper we propose and analyze a new cache replacement algorithm specifically adapted to deal with file-bundles. We tested the new algorithm using a disk cache simulation model under a wide range of parameters such as file request distributions, relative cache size, file size distribution, and queue size. In all these tests, the results show significant improvement over traditional caching algorithms such as GDS.
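
The sketch below illustrates the file-bundle requirement with a toy bundle-aware variant of LRU: a job is serviced only when its entire bundle can be made resident, and eviction never breaks up the bundle being serviced. This is an assumed illustration of the problem setting, not the replacement algorithm proposed in the paper.

    from collections import OrderedDict

    class BundleCache:
        """Toy disk-cache model in which admission and eviction operate on
        whole file-bundles rather than on individual files."""

        def __init__(self, capacity):
            self.capacity = capacity        # total cache size in bytes
            self.resident = OrderedDict()   # path -> size, oldest first (LRU order)

        def _used(self):
            return sum(self.resident.values())

        def request_bundle(self, bundle):
            """bundle maps path -> size; the job is serviced only if every file
            in the bundle can be made resident at once."""
            if sum(bundle.values()) > self.capacity:
                return False                # the bundle can never fit entirely
            missing = {p: s for p, s in bundle.items() if p not in self.resident}
            # Evict least-recently-used files outside the bundle until it fits.
            while self._used() + sum(missing.values()) > self.capacity:
                victim = next((p for p in self.resident if p not in bundle), None)
                if victim is None:
                    break
                del self.resident[victim]
            for p, s in missing.items():
                self.resident[p] = s
            for p in bundle:                # mark the whole bundle most recently used
                self.resident.move_to_end(p)
            return True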

Otoo, Ekow; Rotem, Doron; Romosan, Alexandru; Seshadri, Sridhar

2004-07-18

430

Text File Display Program  

NASA Technical Reports Server (NTRS)

LOOK program permits user to examine text file in pseudorandom access manner. Program provides user with way of rapidly examining contents of ASCII text file. LOOK opens text file for input only and accesses it in blockwise fashion. Handles text formatting and displays text lines on screen. User moves forward or backward in file by any number of lines or blocks. Provides ability to "scroll" text at various speeds in forward or backward directions.
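
The blockwise access strategy described above is easy to sketch; the block size, decoding choices, and file name below are assumptions for illustration, not details of the LOOK program itself.

    BLOCK_SIZE = 4096   # assumed fixed block size

    def read_block(path, block_index, block_size=BLOCK_SIZE):
        """Read one fixed-size block of an ASCII text file (input only) and
        return its content split into display lines."""
        with open(path, "rb") as f:
            f.seek(block_index * block_size)
            data = f.read(block_size)
        return data.decode("ascii", errors="replace").splitlines()

    # Moving forward or backward by whole blocks is just index arithmetic;
    # a pager keeps the current block index and redraws the screen from it.
    for line in read_block("example.txt", block_index=0):
        print(line)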

Vavrus, J. L.

1986-01-01

431

Competitive distributed file allocation  

Microsoft Academic Search

This paper deals with the file allocation problem (BFR92) concerning the dynamic optimization of communication costs to access data in a distributed environment. We develop a dynamic file re-allocation strategy that adapts online to a sequence of read and write requests whose location and relative frequencies are completely unpredictable. This is achieved by replicating the file

Baruch Awerbuch; Yair Bartal; Amos Fiat

1993-01-01

432

Competitive distributed file allocation  

Microsoft Academic Search

This paper deals with the file allocation problem [6] concerning the dynamic optimization of communication costs to access data in a distributed environment. We develop a dynamic file re-allocation strategy that adapts on-line to a sequence of read and write requests whose location and relative frequencies are completely unpredictable. This is achieved by replicating the file in response to read

Baruch Awerbuch; Yair Bartal; Amos Fiat

2003-01-01

433

The Grid File: An Adaptable, Symmetric Multikey File Structure  

Microsoft Academic Search

Traditional file structures that provide multikey access to records, for example, inverted files, are extensions of file structures originally designed for single-key access. They manifest various deficiencies in particular for multikey access to highly dynamic files. We study the dynamic aspects of file structures that treat all keys symmetrically, that is, file structures which avoid the distinction between primary and
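
To make the symmetric-multikey idea concrete, here is a toy two-key grid directory (an assumed illustration only; it omits the bucket splitting, bucket sharing, and scale refinement that make the real grid file adaptable).

    import bisect

    class ToyGridFile:
        """Two linear scales partition each key's domain; every grid cell of the
        directory points to a bucket of records, so both keys are treated
        symmetrically for exact-match lookups."""

        def __init__(self, x_scale, y_scale):
            self.x_scale = sorted(x_scale)      # split points along the first key
            self.y_scale = sorted(y_scale)      # split points along the second key
            nx, ny = len(self.x_scale) + 1, len(self.y_scale) + 1
            self.directory = [[[] for _ in range(ny)] for _ in range(nx)]

        def _cell(self, x, y):
            return (bisect.bisect_right(self.x_scale, x),
                    bisect.bisect_right(self.y_scale, y))

        def insert(self, x, y, record):
            i, j = self._cell(x, y)
            self.directory[i][j].append((x, y, record))

        def lookup(self, x, y):
            i, j = self._cell(x, y)
            return [r for rx, ry, r in self.directory[i][j] if (rx, ry) == (x, y)]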

Jurg Nievergelt; Hans Hinterberger

1984-01-01

434

76 FR 52323 - Combined Notice of Filings; Filings Instituting Proceedings  

Federal Register 2010, 2011, 2012, 2013

...submits tariff filing per 154.402: ACA Filing effective 10-1-11 to be effective...Company submits tariff filing per 154.402: ACA Filing--effective 10-1-11 to be effective...Company submits tariff filing per 154.402: ACA Filing--effective 10-1-11 to be...

2011-08-22

435

77 FR 59920 - Combined Notice of Filings #1  

Federal Register 2010, 2011, 2012, 2013

...California Independent System Operator Corporation. Description: 2012-09-21 CAISO Certificate of Concurrence Filing re Tulloch Powerhouse LGIA to be effective 3/31/2012. Filed Date: 9/21/12. Accession Number: 20120921-5133. Comments...

2012-10-01

436

New Techniques for Modeling File Data Distribution on Storage Nodes  

Microsoft Academic Search

This paper presents a new probabilistic model which describes the way data blocks belonging to a certain file are distributed along the disk in general purpose systems. The distribution type is classified depending on the kind of file system, the file size and the disk occupancy ratio. The resulting algorithm will be used to simulate the access time to the

Alberto Nuñez; Javier Fernández; José Daniel García; Laura Prada; Jesús Carretero

2008-01-01

437

File concepts for parallel I/O  

NASA Technical Reports Server (NTRS)

The subject of input/output (I/O) has often been neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, based on common data partitioning techniques. Implementation strategies for the proposed organizations are suggested, using multiple storage devices. Problem areas are also identified and discussed.
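
As a concrete example of the data-partitioning idea, the sketch below (assumed for illustration, not taken from the paper) computes the contiguous byte range a given process would read under a simple block partitioning of one file, so that several processes can access disjoint parts of the same parallel file concurrently.

    import os

    def block_partition(path, rank, nprocs):
        """Return (start, length) of the contiguous byte range that process
        `rank` of `nprocs` owns under an even block partitioning of the file."""
        size = os.path.getsize(path)
        base, extra = divmod(size, nprocs)
        start = rank * base + min(rank, extra)
        length = base + (1 if rank < extra else 0)
        return start, length

    def read_partition(path, rank, nprocs):
        """Each process seeks to its own partition and reads it independently."""
        start, length = block_partition(path, rank, nprocs)
        with open(path, "rb") as f:
            f.seek(start)
            return f.read(length)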

Crockett, Thomas W.

1989-01-01

438

Nonlinear system identification of smart structures under high impact loads  

NASA Astrophysics Data System (ADS)

The main purpose of this paper is to develop numerical models for the prediction and analysis of the highly nonlinear behavior of integrated structure control systems subjected to high impact loading. A time-delayed adaptive neuro-fuzzy inference system (TANFIS) is proposed for modeling of the complex nonlinear behavior of smart structures equipped with magnetorheological (MR) dampers under high impact forces. Experimental studies are performed to generate sets of input and output data for training and validation of the TANFIS models. The high impact load and current signals are used as the input disturbance and control signals while the displacement and acceleration responses from the structure-MR damper system are used as the output signals. The benchmark adaptive neuro-fuzzy inference system (ANFIS) is used as a baseline. Comparisons of the trained TANFIS models with experimental results demonstrate that the TANFIS modeling framework is an effective way to capture nonlinear behavior of integrated structure-MR damper systems under high impact loading. In addition, the performance of the TANFIS model is much better than that of ANFIS in both the training and the validation processes.

Sarp Arsava, Kemal; Kim, Yeesock; El-Korchi, Tahar; Park, Hyo Seon

2013-05-01

439

77 FR 60418 - Combined Notice of Filings  

Federal Register 2010, 2011, 2012, 2013

...Accession Number: 20120925-5132. Comments Due: 5 p.m. ET 10/9/12. Docket Numbers: RP12-1064-000. Applicants: Venice Gathering System, L.L.C. Description: NAESB 2.0 Compliance Filing to be effective 12/1/2012. Filed Date:...

2012-10-03

440

CHRIS/HACS Chemical Property File.  

National Technical Information Service (NTIS)

This report represents a listing of the Chemical Properties File which is an integral part of the Hazard Assessment Computer System (HACS). This file contains the physical and chemical properties of some 900 chemical substances; as many as 74 properties m...

E. Atkinson

1976-01-01

441

Open File: School Autonomy and Evaluation.  

ERIC Educational Resources Information Center

The editorial, "Some Aspects of the Educational Change Dynamic: Setting School Autonomy and Evaluation in Context" (Cecilia Braslavsky), explains the focus of this issue. This "Open File: School Autonomy and Evaluation" section contains: "Introduction to the Open File" (Norberto Bottani; Bernard Favre); "IPES The System of Indicators for Secondary…

Bottani, Norberto, Ed.; Favre, Bernard, Ed.

2001-01-01

442

77 FR 71408 - Combined Notice of Filings  

Federal Register 2010, 2011, 2012, 2013

...12/3/12. Docket Numbers: RP13-301-000. Applicants: Venice Gathering System, LLC. Description: Petition for Temporary Exemption from Certain Tariff Provisions of Venice Gathering System, L.L.C. Filed Date: 11/19/12....

2012-11-30

443

Evaluated nuclear structure data file  

NASA Astrophysics Data System (ADS)

The Evaluated Nuclear Structure Data File (ENSDF) contains the evaluated nuclear properties of all known nuclides. These properties are derived both from nuclear reaction and radioactive decay measurements. All experimental data are evaluated to create the adopted properties for each nuclide. ENSDF, together with other numeric and bibliographic files, can be accessed on-line through the INTERNET or modem. Some of the databases are also available on the World Wide Web. The structure and the scope of ENSDF are presented along with the on-line access system of the National Nuclear Data Center at Brookhaven National Laboratory.

Tuli, J. K.

444

Evaluated nuclear structure data file  

NASA Astrophysics Data System (ADS)

The Evaluated Nuclear Structure Data File (ENSDF) contains the evaluated nuclear properties of all known nuclides, as derived both from nuclear reaction and radioactive decay measurements. All experimental data are evaluated to create the adopted properties for each nuclide. ENSDF, together with other numeric and bibliographic files, can be accessed on-line through the INTERNET or modem, and some of the databases are also available on the World Wide Web. The structure and the scope of ENSDF are presented along with the on-line access system of the National Nuclear Data Center at Brookhaven National Laboratory.

Tuli, J. K.

1996-02-01

445

FITS Foreign File Encapsulation  

NASA Astrophysics Data System (ADS)

FITS FOREIGN is a new FITS extension type that has been submitted to the FITS Registry {http://fits.gsfc.nasa.gov/fits_registry.html} as a standard way to wrap an arbitrary file, allowing a file or tree of files to be wrapped up in FITS and later restored to disk. Certain of the file attribute keywords can be included in the header of any FITS file or extension to support such things as storing a directory tree containing images, tables, and other non-FITS types of files in a multi-extension FITS (MEF) file, and later restoring the whole tree to disk. The motivation for this extension was to allow an implementation that is based on the FITS multi-extension mechanism to encapsulate and pass non-FITS data.

Zárate, N.; Seaman, R.; Tody, D.

2007-10-01

446

Soil Erodibility Parameters Under Various Cropping Systems of Maize  

NASA Astrophysics Data System (ADS)

For four years, runoff and soil loss from seven cropping systems of fodder maize have been measured on experimental plots under natural and simulated rainfall. Besides runoff and soil loss, several variables have also been measured, including rainfall kinetic energy, degree of slaking, surface roughness, aggregate stability, soil moisture content, crop cover, shear strength and topsoil porosity. These variables explain a large part of the variance in measured runoff, soil loss and splash erosion under the various cropping systems. The following conclusions were drawn from the erosion measurements on the experimental plots (these conclusions apply to the spatial level at which the measurements were carried out). (1) Soil tillage after maize harvest strongly reduced surface runoff and soil loss during the winter; sowing of winter rye further reduced winter erosion, though the difference with a merely tilled soil is small. (2) During spring and the growing season, soil loss is reduced strongly if the soil surface is partly covered by plant residues; the presence of plant residue on the surface appeared to be essential in achieving erosion reduction in summer. (3) Soil loss reductions were much higher than runoff reductions; significant runoff reduction is only achieved by the straw system having flat-lying, non-fixed plant residue on the soil surface; the other systems, though effective in reducing soil loss, were not effective in reducing runoff.

van Dijk, P. M.; van der Zijp, M.; Kwaad, F. J. P. M.

1996-08-01

447

Characterizing parallel file-access patterns on a large-scale multiprocessor  

NASA Technical Reports Server (NTRS)

Rapid increases in the computational speeds of multiprocessors have not been matched by corresponding performance enhancements in the I/O subsystem. To satisfy the large and growing I/O requirements of some parallel scientific applications, we need parallel file systems that can provide high-bandwidth and high-volume data transfer between the I/O subsystem and thousands of processors. Design of such high-performance parallel file systems depends on a thorough grasp of the expected workload. So far there have been no comprehensive usage studies of multiprocessor file systems. Our CHARISMA project intends to fill this void. The first results from our study involve an iPSC/860 at NASA Ames. This paper presents results from a different platform, the CM-5 at the National Center for Supercomputing Applications. The CHARISMA studies are unique because we collect information about every individual read and write request and about the entire mix of applications running on the machines. The results of our trace analysis lead to recommendations for parallel file system design. First, the file system should support efficient concurrent access to many files and I/O requests from many jobs under varying load conditions. Second, it must efficiently manage large files kept open for long periods. Third, it should expect to see small requests, predominantly sequential access patterns, application-wide synchronous access, no concurrent file-sharing between jobs, appreciable byte and block sharing between processes within jobs, and strong interprocess locality. Finally, the trace data suggest that node-level write caches and collective I/O request interfaces may be useful in certain environments.

Purakayastha, Apratim; Ellis, Carla Schlatter; Kotz, David; Nieuwejaar, Nils; Best, Michael

1994-01-01

448

76 FR 49818 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...  

Federal Register 2010, 2011, 2012, 2013

...The NYSE Arca Electronic QCC Filing was based...implemented the NYSE Arca Electronic QCC Filing, and is...Secretary, Legal & Government Affairs, NYSE Euronext...Under the NYSE Arca Electronic QCC Filing, QCCs...

2011-08-11

449

76 FR 49812 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing and Immediate Effectiveness of...  

Federal Register 2010, 2011, 2012, 2013

...The NYSE Amex Electronic QCC Filing was based...implemented the NYSE Amex Electronic QCC Filing, and is...Secretary, Legal & Government Affairs, NYSE Euronext...Under the NYSE Amex Electronic QCC Filing, QCCs...

2011-08-11

450

11 CFR 100.19 - File, filed or filing (2 U.S.C. 434(a)).  

Code of Federal Regulations, 2011 CFR

...false File, filed or filing (2 U.S.C. 434(a)). 100.19 Section 100...GENERAL SCOPE AND DEFINITIONS (2 U.S.C. 431) General Definitions § 100.19 File, filed or filing (2 U.S.C. 434(a)). With respect to...

2014-01-01

451

Merged Federal Files [Academic Year] 1978-79 [machine-readable data file].  

ERIC Educational Resources Information Center

The Merged Federal File for 1978-79 contains school district level data from the following six source files: (1) the Census of Governments' Survey of Local Government Finances--School Systems (F-33) (with 16,343 records merged); (2) the National Center for Education Statistics Survey of School Systems (School District Universe) (with 16,743…

National Center for Education Statistics (ED), Washington, DC.

452

Discontinuous solutions to hyperbolic systems under operator splitting  

NASA Technical Reports Server (NTRS)

Two-dimensional systems of linear hyperbolic equations are studied with regard to their behavior under a solution strategy that in alternate time-steps solves exactly the component one-dimensional operators. The initial data is a step function across an oblique discontinuity. The manner in which this discontinuity breaks up under repeated applications of the split operator is analyzed, and it is shown that the split solution will fail to match the true solution in any case where the two operators do not share all their eigenvectors. The special case of the fluid flow equations is analyzed in more detail, and it is shown that arbitrary initial data gives rise to pseudo acoustic waves and a non-physical stationary wave. The implications of these findings for the design of high-resolution computing schemes are discussed.
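
Schematically, and in a generic notation of my own (the paper's notation may differ), the split strategy for a two-dimensional linear hyperbolic system can be written as

\[
u_t + A\,u_x + B\,u_y = 0, \qquad
u^{n+1} = e^{-\Delta t\, B\,\partial_y}\, e^{-\Delta t\, A\,\partial_x}\, u^{n}
\quad\text{versus}\quad
u(t+\Delta t) = e^{-\Delta t\,(A\,\partial_x + B\,\partial_y)}\, u(t).
\]

The two evolutions agree for arbitrary data only when A and B commute, that is, when they share a complete set of eigenvectors; otherwise an oblique step discontinuity breaks up into spurious intermediate states under repeated application of the split operator, which is the failure mode described above.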

Roe, P. L.

1987-01-01

453

The deep hydrogeologic flow system underlying the Oak Ridge Reservation  

SciTech Connect

The deep hydrogeologic system underlying the Oak Ridge Reservation contains some areas contaminated with radionuclides, heavy metals, nitrates, and organic compounds. The groundwater at that depth is saline and has previously been considered stagnant. On the basis of existing and newly collected data, the nature of flow of the saline groundwater and its potential discharge into shallow, freshwater systems was assessed. Data used for this purpose included (1) spatial and temporal pressures and hydraulic heads measured in the deep system, (2) hydraulic parameters of the formations in question, (3) spatial temperature variations, and (4) spatial and temporal chemical and isotopic composition of the saline groundwater. In addition, chemical analyses of brine in adjacent areas in Tennessee, Kentucky, Ohio, Pennsylvania, and West Virginia were compared with the deep water underlying the reservation to help assess the origin of the brine. Preliminary conclusions suggest that the saline water contained at depth is old but not isolated (in terms of recharge and discharge) from the overlying active and freshwater-bearing units. The confined water (along with dissolved solutes) moves along open fractures (or man-made shortcuts) at relatively high velocity into adjacent, more permeable units. Groundwater volumes involved in this flow probably are small.

Nativ, R. [Hebrew Univ., Jerusalem (IL)]; Hunley, A.E. [Oak Ridge National Lab., TN (United States)]

1993-07-01

454

The oblate spheroidal harmonics under coordinate system rotation and translation  

NASA Astrophysics Data System (ADS)

Several recent studies in geodesy and related sciences make use of oblate spheroidal harmonics. For instance, the Earth's external gravitational potential can be mathematically expanded in an oblate spheroidal harmonic series which converges outside any spheroid enclosing all the masses. In this presentation, we develop the exact relations between the solid oblate spheroidal harmonics in two coordinate systems, related to each other by an arbitrary rotation or translation. We start with the relations which exist between the spherical harmonics in the two coordinate systems. This problem has received considerable attention in the past and equivalent results have been independently derived by several investigators. Then, combining the previous results with the expressions which relate the solid spherical harmonics and the solid spheroidal harmonics, we obtain the relations under consideration. For simplicity, complex notation has been adopted throughout the work. This approach is also suitable and easy to use in the zonal harmonic expansions. The spherical harmonics under coordinate system rotation and translation are obtained as a degenerate case. The above theory can be used in any spheroidal harmonic model. Finally, some simple examples are given, in order to illuminate the mathematical derivations.
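
As a hedged illustration of the starting point mentioned above, the rotation relation for surface spherical harmonics can be written, in one common convention (the paper's conventions and normalizations may differ), as

\[
Y_{n m}(\theta', \lambda') = \sum_{k=-n}^{n} D^{n}_{m k}(\alpha, \beta, \gamma)\, Y_{n k}(\theta, \lambda),
\]

where D^n is the Wigner rotation matrix of degree n and (α, β, γ) are the Euler angles relating the two coordinate systems; combining such relations with the transformations between solid spherical and solid spheroidal harmonics leads to the spheroidal-harmonic relations discussed in the abstract.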

Panou, Georgios

2014-05-01

455

Converting 80-character ASCII IGES sequential files into more conveniently accessible direct-access files  

Microsoft Academic Search

One of the main drawbacks of the Initial Graphics Exchange Specification (IGES) is the difficulty of accessing and retrieving the required information stored in the IGES files. This is because IGES files created by CAD systems are sequential. This paper describes a module, called “readiges”, which is a general-purpose software package for re-storing IGES files in more conveniently accessible direct-access
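
A minimal sketch of the general idea (not of the "readiges" module itself) follows: build a byte-offset index over the records of a sequential file so that any record can then be fetched directly instead of by scanning from the beginning.

    # Sketch: index a sequential text file of fixed 80-character records so that
    # individual records can be read by direct access instead of a full scan.
    def build_index(path):
        offsets = []
        with open(path, "rb") as f:
            pos = f.tell()
            while f.readline():
                offsets.append(pos)
                pos = f.tell()
        return offsets

    def read_record(path, offsets, n):
        """Fetch record n (0-based) without reading the preceding records."""
        with open(path, "rb") as f:
            f.seek(offsets[n])
            return f.readline().decode("ascii").rstrip("\r\n")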

M. Kalta; B. J. Davies

1993-01-01

456

Human chorioretinal biopsy under controlled systemic hypotensive anaesthesia.  

PubMed Central

This paper describes a simplified technique for biopsy of the retina and choroid which had been used in 5 human volunteers. The biopsy was carried out in 4 immediately before enucleation of an eye for malignant melanoma and in 1 patient who was undergoing trabeculectomy for painful glaucoma associated with retinitis pigmentosa. A combination of intravenous mannitol and transient controlled systemic hypotension, induced under general anaesthesia with intravenous sodium nitroprusside, was used in 3 cases and resulted in no vitreous loss and minimal bleeding. In the 2 cases in which hypotension was not used, bleeding was a definite problem, but no vitreous loss was experienced.

Constable, L. J.; Chester, G. H.; Horne, R.; Harriott, J. F.

1980-01-01

457

Analytical theory for the description of powder systems under compression  

NASA Astrophysics Data System (ADS)

In this work, a new theoretical approach to modelling some properties of powder systems under compression is presented. This new theoretical route consists of modelling an actual powder system (with particles of unequal size and irregular form) by means of a system of deforming spheres in a simple cubic arrangement and with a certain global porosity that, in some way, makes it equivalent to the actual one. The study of the evolution of the effective contact area between particles and the effective path of the electric or thermal flow through the powder aggregate is the starting point for establishing the equivalence relationship between the actual system and the simple cubic one. In order to exemplify the utility of this new theoretical tool, two classic problems of practical interest have been studied: the electrical conduction in sintered powders and the law governing the powders’ cold die compaction. The proposed solutions to these problems, as well as the equations allowing one to obtain the equivalence relationship, are validated by experiments carried out in actual powder systems.

Montes, J. M.; Cuevas, F. G.; Cintas, J.

2010-06-01

458

18 CFR 281.211 - Filing and documentation.  

Code of Federal Regulations, 2013 CFR

...COMMISSION, DEPARTMENT OF ENERGY OTHER REGULATIONS UNDER...calculations necessary to determine alternative fuel volumes under § 281...have the ability to use an alternative fuel shall be filed under...3301-3432; Department of Energy Organization Act, 42...

2013-04-01

459

P2P application for file sharing  

Microsoft Academic Search

The novelty of the peer-to-peer (P2P) paradigm relies on two main concepts: cooperation among users and resource sharing. There are many applications based on the peer-to-peer paradigm, but the most popular one is file sharing. We can classify file sharing applications into centralized systems (having a central server) and decentralized systems. Another classification would be structured and unstructured systems,

Hala Amin; Mohamed Khaled Chahine; Gianluca Mazzini

2012-01-01

460

Units of Instruction for Vocational Office Education. Volume 1. Filing, Office Machines, and General Office Clerical Occupations. Teacher's Guide.  

ERIC Educational Resources Information Center

Nineteen units on filing, office machines, and general office clerical occupations are presented in this teacher's guide. The unit topics include indexing, alphabetizing, and filing (e.g., business names); labeling and positioning file folders and guides; establishing a correspondence filing system; utilizing charge-out and follow-up file systems;…

East Texas State Univ., Commerce. Occupational Curriculum Lab.

461

77 FR 33209 - Combined Notice of Filings #1  

Federal Register 2010, 2011, 2012, 2013

...Description: California Independent System Operator Corporation submits tariff filing per 35.13(a)(2)(iii: 2012-05-25 TPP-GIP Tariff Amendment Filing to be effective 7/25/2012. Filed Date: 5/25/12. Accession Number:...

2012-06-05

462

76 FR 5798 - Combined Notice of Filings No. 1  

Federal Register 2010, 2011, 2012, 2013

...Dominion Cove Point LNG, LP submits tariff filing per 154.204: DCP--Off-System Capacity to be effective 2/12/2011. Filed...Dominion Cove Point LNG, LP submits tariff filing per 154.204: DCP--Contract Quantities to be effective 2/14/2011....

2011-02-02

463

49 CFR 1152.12 - Filing and publication.  

Code of Federal Regulations, 2013 CFR

... 2013-10-01 false Filing and publication. 1152.12 Section 1152.12 ...System Diagram § 1152.12 Filing and publication. (a) Each carrier required to...descriptions in conformance with the filing and publication requirements of this section....

2013-10-01

464

An application of group testing to the file comparison problem  

Microsoft Academic Search

The file comparison problem involves the detection of differences between two copies of the same file located at different sites in a distributed computing system. The file is assumed to be partitioned into n pages, and a signature (checksum) is available for each page. Some ideas from nonadaptive group testing are used to obtain a solution to this problem for

T. Madej

1989-01-01

465

77 FR 71408 - Combined Notice of Filings #1  

Federal Register 2010, 2011, 2012, 2013

...filed on 10/19/12. Filed Date: 11/21/12. Accession Number: 20121121-5221. Comments Due: 5 p.m. ET 12/3/12. The filings are accessible in the Commission's eLibrary system by clicking on the links or querying the docket...

2012-11-30

466

Mathematical modeling of the behavior of geothermal systems under exploitation  

SciTech Connect

Analytical and numerical methods have been used in this investigation to model the behavior of geothermal systems under exploitation. The work is divided into three parts: (1) development of a numerical code, (2) theoretical studies of geothermal systems, and (3) field applications. A new single-phase three-dimensional simulator, capable of solving heat and mass flow problems in a saturated, heterogeneous porous or fractured medium, has been developed. The simulator uses the integrated finite difference method for formulating the governing equations and an efficient sparse solver for the solution of the linearized equations. In the theoretical studies, various reservoir engineering problems have been examined. These include (a) well-test analysis, (b) exploitation strategies, (c) injection into fractured rocks, and (d) fault-charged geothermal reservoirs.

Bodvarsson, G.S.

1982-01-01

467

Propulsion system assessment for very high UAV under ERAST  

NASA Technical Reports Server (NTRS)

A series of propulsion systems were configured to power a sensor platform to very high altitudes under the Experimental Research Advanced Sensor Technology (ERAST) program. The unmanned aircraft was required to carry a 100 kg instrument package to 90,000 ft altitude, collect samples and make scientific measurements for 4 hr, and then return to base. A performance screening evaluation of 11 propulsion systems for this high altitude mission was conducted. Engine configurations ranged from turboprop, spark ignition, two- and four-stroke diesel, rotary, and fuel cell concepts. Turbo and non-turbo-compounded, recuperated and nonrecuperated arrangements, along with regular JP and hydrogen fuels were interrogated. Each configuration was carried through a preliminary design where all turbomachinery, heat exchangers, and engine core concepts were sized and weighed for near-optimum design point performance. Mission analysis, which sized the aircraft for each of the propulsion systems investigated, was conducted. From the array of configurations investigated, the propulsion system for each of three different technology levels (i.e., state of the art, near term, and far term) that was best suited for this very high altitude mission was identified and recommended for further study.

Bettner, James L.; Blandford, Craig S.; Rezy, Bernie J.

1995-01-01

468

Medical image file formats.  

PubMed

Image file formats are often a confusing aspect for someone wishing to process medical images. This article presents a demystifying overview of the major file formats currently used in medical imaging: Analyze, Neuroimaging Informatics Technology Initiative (Nifti), Minc, and Digital Imaging and Communications in Medicine (Dicom). Concepts common to all file formats, such as pixel depth, photometric interpretation, metadata, and pixel data, are first presented. Then, the characteristics and strengths of the various formats are discussed. The review concludes with some predictive considerations about future trends in medical image file formats. PMID:24338090
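
By way of example, the concepts listed above can be inspected programmatically; the sketch below assumes the third-party pydicom and nibabel packages and uses placeholder file names.

    # Sketch: inspect pixel depth, photometric interpretation, metadata, and
    # pixel data in DICOM and NIfTI files (pydicom and nibabel assumed installed).
    import pydicom
    import nibabel as nib

    ds = pydicom.dcmread("slice.dcm")        # DICOM: metadata and pixels together
    print(ds.BitsAllocated)                  # pixel depth in bits
    print(ds.PhotometricInterpretation)      # e.g. MONOCHROME2
    pixels = ds.pixel_array                  # decoded pixel data (NumPy array)

    img = nib.load("volume.nii.gz")          # NIfTI: header plus image volume
    print(img.header["bitpix"])              # pixel depth in bits
    volume = img.get_fdata()                 # image data as a floating-point array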

Larobina, Michele; Murino, Loredana

2014-04-01

469

Speed of disentanglement in multiqubit systems under a depolarizing channel  

SciTech Connect

We investigate the speed of disentanglement in multiqubit systems under the local depolarizing channel, in which each qubit is independently coupled to the environment. We focus on the bipartition entanglement between one qubit and the remaining qubits constituting the system, which is measured by the negativity. For the two-qubit system, the speed for the pure state completely depends on its entanglement. The upper and lower bounds of the speed for arbitrary two-qubit states, and the necessary conditions for a state achieving them, are obtained. For the three-qubit system, we study the speed for pure states, whose entanglement properties can be completely described by five local-unitary-transformation invariants. An analytical expression of the relation between the speed and the invariants is derived. The speed is enhanced by the three-tangle, which is the entanglement among the three qubits, but reduced by the two-qubit correlations outside the concurrence. The decay of the negativity can be restrained by the other two negativities in a coequal sense. The imbalance between two qubits can reduce the speed of disentanglement of the remaining qubit in the system, and can even retrieve the entanglement partially. For the k-qubit systems in an arbitrary superposition of Greenberger–Horne–Zeilinger state and W state, the speed depends almost entirely on the amount of the negativity when k increases to five or six. An alternative quantitative definition for the robustness of entanglement is presented based on the speed of disentanglement, with comparison to the widely studied robustness measured by the critical amount of the noise parameter at which the entanglement vanishes. In the limit of a large number of particles, the alternative robustness of the Greenberger–Horne–Zeilinger-type states is inversely proportional to k, and that of the W states approaches 1/?(k)
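
For reference, the negativity used above is, in its standard definition (normalization conventions vary between papers),

\[
N(\rho) = \frac{\lVert \rho^{T_A} \rVert_{1} - 1}{2},
\]

that is, the absolute sum of the negative eigenvalues of the partial transpose of ρ with respect to the chosen bipartition A|B.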

Zhang, Fu-Lin, E-mail: flzhang@tju.edu.cn; Jiang, Yue; Liang, Mai-Lin, E-mail: mailinliang@yahoo.com.cn

2013-06-15

470

System for Processing Coded OFDM Under Doppler and Fading  

NASA Technical Reports Server (NTRS)

An advanced communication system has been proposed for transmitting and receiving coded digital data conveyed as a form of quadrature amplitude modulation (QAM) on orthogonal frequency-division multiplexing (OFDM) signals in the presence of such adverse propagation-channel effects as large dynamic Doppler shifts and frequency-selective multipath fading. Such adverse channel effects are typical of data communications between mobile units or between mobile and stationary units (e.g., telemetric transmissions from aircraft to ground stations). The proposed system incorporates novel signal processing techniques intended to reduce the losses associated with adverse channel effects while maintaining compatibility with the high-speed physical layer specifications defined for wireless local area networks (LANs) as the standard 802.11a of the Institute of Electrical and Electronics Engineers (IEEE 802.11a). OFDM is a multi-carrier modulation technique that is widely used for wireless transmission of data in LANs and in metropolitan area networks (MANs). OFDM has been adopted in IEEE 802.11a and some other industry standards because it affords robust performance under frequency-selective fading. However, its intrinsic frequency-diversity feature is highly sensitive to synchronization errors; this sensitivity poses a challenge to preserve coherence between the component subcarriers of an OFDM system in order to avoid intercarrier interference in the presence of large dynamic Doppler shifts as well as frequency-selective fading. As a result, heretofore, the use of OFDM has been limited primarily to applications involving small or zero Doppler shifts. The proposed system includes a digital coherent OFDM communication system that would utilize enhanced 802.11a-compatible signal-processing algorithms to overcome effects of frequency-selective fading and large dynamic Doppler shifts. The overall transceiver design would implement a two-frequency-channel architecture (see figure) that would afford frequency diversity for reducing the adverse effects of multipath fading. By using parallel concatenated convolutional codes (also known as Turbo codes) across the dual-channel and advanced OFDM signal processing within each channel, the proposed system is intended to achieve at least an order of magnitude improvement in received signal-to-noise ratio under adverse channel effects while preserving spectral efficiency.

Tsou, Haiping; Darden, Scott; Lee, Dennis; Yan, Tsun-Yee

2005-01-01

471

An inconvenient truth: file-level metadata and in-file metadata caching in the (file-agnostic) ATLAS event store  

NASA Astrophysics Data System (ADS)

In the ATLAS event store, files are sometimes 'an inconvenient truth.' From the point of view of the ATLAS distributed data management system, files are too small—datasets are the units of interest. From the point of view of the ATLAS event store architecture, files are simply a physical clustering optimization: the units of interest are event collections—sets of events that satisfy common conditions or selection predicates—and such collections may or may not have been accumulated into files that contain those events and no others. It is nonetheless important to maintain file-level metadata, and to cache metadata in event data files. When such metadata may or may not be present in files, or when values may have been updated after files are written and replicated, a clear and transparent model for metadata retrieval from the file itself or from remote databases is required. In this paper we describe how ATLAS reconciles its file and non-file paradigms, the machinery for associating metadata with files and event collections, and the infrastructure for metadata propagation from input to output for provenance record management and related purposes.

Malon, D.; van Gemmeren, P.; Hawkings, R.; Schaffer, A.

2008-07-01

472

Comparative evaluation of the sealing ability of different obturation systems used over apically separated rotary nickel-titanium files: An in vitro study  

PubMed Central

Aim: The study was designed to investigate the sealing ability of two obturation systems (cold laterally compacted gutta percha and Obtura II) over different apically separated rotary nickel-titanium files (RACE and K3 system) using the dye extraction method. Materials and Methods: Sixty-two mandibular premolars were divided into 2 groups of 30 teeth each, and 2 teeth served as negative controls. In Groups A and B, roots were prepared using the RACE and K3 systems, respectively, and were further subdivided into 4 subgroups. In subgroups A1, B1 and A2, B2 (n = 10 each), files were separated at 3 mm from the tip in the apical 3rd of the canal. In subgroups A3, B3 and A4, B4 (n = 5), instruments were not separated. Subgroups A1, A3, B1, B3 and A2, A4, B2, B4 were obturated by the lateral condensation method and the Obtura II technique, respectively. The sealing ability of the obturated specimens was tested using the dye extraction method. The values for each group were recorded and analysis of variance (ANOVA), Student “t” test (two-tailed, independent), and Levene's test were performed. Results: Group A1 showed significantly less leakage than B1. No statistically significant differences were observed between Groups A2 and B2 or between Groups A3 and B3. Group A4 showed significantly less leakage than B4. Conclusion: Groups obturated with Obtura II showed less leakage than the lateral condensation technique irrespective of the presence or absence of a fractured NiTi rotary instrument.

Hegde, Jayshree; Bashetty, Kusum; Kumar, Krishna K; Chikkamallaiah, Champa

2013-01-01

473

On-Board File Management and Its Application in Flight Operations  

NASA Technical Reports Server (NTRS)

In this paper we present the minimum functions required for an on-board file management system. We explore file manipulation processes and demonstrate how file transfer, along with the file management system, will be utilized to support flight operations and data delivery.

Kuo, N.

1998-01-01

474

78 FR 27217 - Combined Notice of Filings  

Federal Register 2010, 2011, 2012, 2013

...Numbers: RP13-845-000. Applicants: ETC Tiger Pipeline, LLC. Description: ETC Tiger 2013--System Map Filing to be effective 6...Numbers: RP13-859-000. Applicants: ETC Tiger Pipeline, LLC. Description: ETC Tiger...

2013-05-09

475

Operating Water Resources Systems Under Climate Change Scenarios  

NASA Astrophysics Data System (ADS)

Population and industrial growth have resulted in intense demands on the quantity and quality of water resources worldwide. Moreover, climate change/variability is making a growing percentage of the earth's population vulnerable to extreme weather events (drought and flood). The 1996 Saguenay flood, the 1997 Red River flood, the 1998 ice storm, and recent droughts in the prairies are a few examples of extreme weather events in Canada. Rising economic prosperity, growth in urban population, aging infrastructure, and a changing climate are increasing the vulnerability of Canadians to even more serious impacts. This growing threat can seriously undermine the social and economic viability of the country. Our ability to understand the impacts of climate change/variability on water quantity, quality, and its distribution in time and space can prepare us for sustainable management of this precious resource. The sustainability of water resources, over the medium to long term, is critically dependent on the ability to manage (plan and operate) water resource systems under a more variable and perhaps warmer future climate. Studying the impacts of climate change/variability on water resources is complex and challenging. It is further complicated by the fact that impacts vary with time and are different at different locations. This study deals with the impacts of climate change/variability on water resources in a portion of the Red River Basin in Canada, both in terms of change in quantity and spatial-temporal distribution. A System Dynamics model is developed to describe the operation of the Shellmouth Reservoir located on the Red River in Canada. Climate data from the Canadian Global Coupled Model, CGCM1, are used. The spatial system dynamics approach, based on distributed parameter control theory, is used to model the impacts of climate change/variability on water resources in time and space. A decision support system is developed to help reservoir operators and decision makers in sustainable management of water resources. The decision support system helps in analyzing the impacts of different reservoir operation scenarios, under changing climate conditions, by exploring multiple what-if scenarios. Canadian study areas and data sets are used for the research. However, the proposed approach provides a general framework that can be used in other parts of the world.

Ahmad, S.

2002-12-01

476

Phasmida Species File Online  

NSDL National Science Digital Library

The Phasmida Species File (PSF) is a taxonomic database of the world's Phasmida (stick and leaf insects, known as walking sticks and walking leaves in the U.S.). It provides useful and accessible information for professional taxonomists and systematists, such as full synonymic and taxonomic information for over 2,700 valid species and 3,900 taxonomic names (all ranks, valid and not valid), and over 11,000 citations to references. The PSF home page also lists phasmid specialists by geographic location, so users can email them with questions. What makes the PSF stand out as excellent is the substantial amount of documentation and "help" features to guide users. This makes the site easily-accessible to professionals as well as students and educators with more general interests (e.g., rearing records and photographs). If you are not sure where to start looking, or if you are interested in how the database is constructed, use the home page links listed under "Other Places to Start". For information and statistics about the current status of the database (as of October 2006), click on the "About this website and the underlying database" link on the home page.

0002-11-30

477

EPA FACILITY POINT LOCATION FILES  

EPA Science Inventory

Data includes locations of facilities from which pollutants are discharged. The epapoints.tar.gz file is a gzipped tar file of 14 Arc/Info export files and text documents. The .txt files define the attributes located in the INFO point coverage files. Projections are defined in...

478

On-line file caching  

Microsoft Academic Search

Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the
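
A minimal sketch of one credit-based ("Landlord"-style) eviction policy for this kind of problem is shown below; it illustrates the size/cost trade-off in the problem statement and is not necessarily the algorithm analyzed in the paper.

    # Sketch: cost-aware file caching.  Each cached file holds a credit; when
    # space is needed, all credits are charged proportionally to file size and
    # files whose credit reaches zero are evicted.  Assumes size <= k.
    def request(cache, credit, name, size, cost, k):
        """cache/credit map a file name to its size / remaining credit.
        Returns the retrieval cost paid for this request."""
        if name in cache:
            credit[name] = cost                  # refresh credit on a hit (one variant)
            return 0
        while sum(cache.values()) + size > k:    # make room for the new file
            delta = min(credit[f] / cache[f] for f in cache)
            for f in list(cache):
                credit[f] -= delta * cache[f]
                if credit[f] <= 1e-12:
                    del cache[f], credit[f]
        cache[name] = size
        credit[name] = cost
        return cost

    cache, credit = {}, {}
    paid = request(cache, credit, "f1", size=3, cost=5.0, k=10)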

Neal E. Young

1998-01-01

479

Performance Analysis of the Unitree Central File  

NASA Technical Reports Server (NTRS)

This report consists of two parts. The first part briefly comments on the documentation status of two major systems at NASA's Center for Computational Sciences, specifically the Cray C98 and the Convex C3830. The second part describes the work done on improving the performance of file transfers between the Unitree Mass Storage System running on the Convex file server and the users' workstations distributed over a large geographic area.

Pentakalos, Odysseas I.; Flater, David

1994-01-01

480

76 FR 41774 - Combined Notice of Filings #1  

Federal Register 2010, 2011, 2012, 2013

...submits tariff filing per 35.12: Granite MBR Petition to be effective 6/30/2011 under...Brookfield Energy Marketing LP Revised MBR Tariff to be effective 7/1/2011 under...35.37: J. Aron & Company 2nd Revised MBR to be effective 7/1/2011. Filed...

2011-07-15

481

Frank Sinatra FBI Files  

NSDL National Science Digital Library

On December 8, the FBI released its 1,275-page file on Frank Sinatra, long rumored to be involved with organized crime. Sinatra first came to the Bureau's attention during World War II, when he bowled over bobby-soxers across the nation. The real focus of FBI investigations into Sinatra, however, was his frequent association with known mobsters. The released files contain no hard evidence of criminal activity on Sinatra's part and portray him as more of a groupie than a wiseguy. At present, the FBI does not plan to place the files on its Freedom of Information Act Reading Room site (described in the June 30, 1998 Scout Report for Social Sciences), but APB Online, a site specializing in police and crime news, has scanned the entire file and placed it online. Users can view the entire document or selected highlights in .gif image format or download the report as one file or in fourteen sections in .pdf format.

482

File I/O for MPI Applications in Redundant Execution Scenarios  

SciTech Connect

As multi-petascale and exa-scale high-performance computing (HPC) systems inevitably have to deal with a number of resilience challenges, such as a significant growth in component count and smaller circuit sizes with lower circuit voltages, redundancy may offer an acceptable level of resilience that traditional fault tolerance techniques, such as checkpoint/restart, do not. Although redundancy in HPC is quite controversial due to the associated cost for redundant components, the constantly increasing number of cores per processor is tilting this cost calculation toward a system design where computation, such as for redundancy, is much cheaper and communication, needed for checkpoint/restart, is much more expensive. Recent research and development activities in redundancy for Message Passing Interface (MPI) applications focused on availability/reliability models and replication algorithms. This paper takes a first step toward solving an open research problem associated with running a parallel application redundantly, which is file I/O under redundancy. The approach intercepts file I/O calls made by a redundant application to employ coordination protocols that execute file I/O operations in a redundancy-oblivious fashion when accessing a node-local file system, or in a redundancy-aware fashion when accessing a shared networked file system. A proof-of-concept prototype is presented, and a number of coordination protocols are described and evaluated. The results show the performance impact for redundantly accessing a shared networked file system, but also demonstrate the capability to regain performance by utilizing MPI communication between replicas and parallel file I/O.
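
As a rough illustration of the coordination idea (a sketch only, using mpi4py, and not the prototype described above): replicas of the same logical rank can agree that one of them performs the actual write while the outcome is broadcast to the others.

    # Sketch: redundancy-aware write coordination with mpi4py (assumed installed).
    # Replicas of the same logical rank form a communicator; the leader replica
    # performs the file write and broadcasts the result to its peers.
    from mpi4py import MPI

    world = MPI.COMM_WORLD
    n_replicas = 2                                    # assumed redundancy degree
    n_logical = world.size // n_replicas
    logical_rank = world.rank % n_logical
    replica_id = world.rank // n_logical

    replica_comm = world.Split(color=logical_rank, key=replica_id)

    def coordinated_write(path, data):
        nbytes = None
        if replica_id == 0:                           # leader does the real I/O
            with open("%s.%d" % (path, logical_rank), "wb") as f:
                nbytes = f.write(data)
        return replica_comm.bcast(nbytes, root=0)     # replicas see one outcome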

Boehm, Swen [ORNL; Engelmann, Christian [ORNL

2012-01-01

483

76 FR 62092 - Filing Procedures  

Federal Register 2010, 2011, 2012, 2013

...Commission. ACTION: Notice of issuance of Handbook on Filing Procedures...Commission (``Commission'') is issuing a Handbook on Filing Procedures to replace its Handbook on Electronic Filing Procedures. The...

2011-10-06

484

Computational Hemodynamic Simulation of Human Circulatory System under Altered Gravity  

NASA Technical Reports Server (NTRS)

A computational hemodynamics approach is presented to simulate the blood flow through the human circulatory system under altered gravity conditions. Numerical techniques relevant to hemodynamics issues are introduced for non-Newtonian modeling of flow characteristics governed by red blood cells, distensible wall motion due to the heart pulse, and capillary bed modeling for outflow boundary conditions. Gravitational body force terms are added to the Navier-Stokes equations to study the effects of gravity on internal flows. Six types of gravity benchmark problems are presented to provide a fundamental understanding of gravitational effects on the human circulatory system. For code validation, computed results are compared with steady and unsteady experimental data for non-Newtonian flows in a carotid bifurcation model and a curved circular tube, respectively. This computational approach is then applied to the blood circulation in the human brain as a target problem. A three-dimensional, idealized Circle of Willis configuration is developed with minor arteries truncated based on anatomical data. Demonstrated are not only the mechanism of the collateral circulation but also the effects of gravity on the distensible wall motion and resultant flow patterns.
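
In one common form (the paper's exact non-Newtonian formulation may differ), the momentum equation with the gravitational body-force term reads

\[
\rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} \right)
 = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho\, \mathbf{g},
\qquad \nabla \cdot \mathbf{u} = 0,
\]

where τ is the shear-rate-dependent (non-Newtonian) viscous stress and the body force ρ g is varied to represent altered-gravity conditions.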

Kim, Chang Sung; Kiris, Cetin; Kwak, Dochan

2003-01-01

485

Logical stochastic resonance in bistable system under α-stable noise  

NASA Astrophysics Data System (ADS)

In the presence of α-stable noise, the logical stochastic resonance (LSR) phenomenon in a class of double-well nonlinear systems is investigated in this paper. The LSR effect is obtained under α-stable noise. The probability of getting correct logic outputs is used to evaluate LSR behavior. Four main results are presented. Firstly, in the optimal band of noise intensity, Gaussian white noise is considered a better choice than heavy-tailed noise for obtaining clean logic operation. But at weak noise background, the success probability of getting the right logic outputs is higher when the system is subjected to heavy-tailed noise. Secondly, it is shown that over the entire range of noise variance, the LSR induced by asymmetric noise performs better than that induced by symmetric noise. Furthermore, we find that the side to which the tail skews also affects the correct probability of LSR. Finally, the fractional Fokker-Planck equation is presented to show that when the characteristic exponent of α-stable noise is less than 1, LSR behavior will not be obtained irrespective of the settings of the other parameters.

Wang, Nan; Song, Aiguo

2014-05-01

486

Optimal Management and Design of Energy Systems under Atmospheric Uncertainty  

NASA Astrophysics Data System (ADS)

The generation and dispatch of electricity while maintaining high reliability levels are two of the most daunting engineering problems of the modern era. This was demonstrated by the Northeast blackout of August 2003, which resulted in the loss of 6.2 gigawatts that served more than 50 million people and which resulted in economic losses on the order of $10 billion. In addition, there exist strong socioeconomic pressures to improve the efficiency of the grid. The most prominent solution to this problem is a substantial increase in the use of renewable energy such as wind and solar. In turn, its uncertain availability, which is due to intrinsic weather variability, will increase the likelihood of disruptions. In this endeavor, for current and next-generation power systems, forecasting atmospheric conditions with uncertainty can and will play a central role, at both the demand and the generation ends. User demands are strongly correlated to physical conditions such as temperature, humidity, and solar radiation. The reason is that the ambient temperature and solar radiation dictate the amount of air conditioning and lighting needed in residential and commercial buildings. But these potential benefits would come at the expense of increased variability in the dynamics of both production and demand, which would become even more dependent on the weather state and its uncertainty. One of the important challenges for energy in our time is how to harness these benefits while “keeping the lights on”, that is, ensuring that the demand is satisfied at all times and that no blackout occurs while all energy sources are optimally used. If we are to meet this challenge, accounting for uncertainty in the atmospheric conditions is essential, since this will allow minimizing the effects of false positives: committing too little baseline power in anticipation of demand that is underestimated or renewable energy levels that fail to materialize. In this work we describe a framework for the optimal management and design of energy systems, such as the power grid or building systems, under atmospheric condition uncertainty. The framework is defined in terms of a mathematical paradigm called stochastic programming: minimization of the expected value of the decision-maker's objective function subject to physical and operational constraints, such as low blackout probability, that are enforced on each scenario. We report results on testing the framework on the optimal management of power grid systems under high wind penetration scenarios, a problem whose time horizon is on the order of days. We discuss the computational effort of scenario generation, which involves running WRF at the high spatio-temporal resolution dictated by the operational constraints, as well as that of solving the optimal dispatch problem. We demonstrate that accounting for uncertainty in atmospheric conditions results in blackout prevention, whereas decisions using only the mean forecast do not. We discuss issues in using the framework for planning problems, whose time horizon spans several decades, and what requirements such problems would place on climate simulation systems.
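
Schematically, the stochastic-programming paradigm described above can be written in a generic two-stage form (a textbook formulation, not necessarily the one used in this work):

\[
\min_{x \in X} \; c^{\top} x + \mathbb{E}_{\xi}\big[ Q(x,\xi) \big],
\qquad
Q(x,\xi) = \min_{y \ge 0} \left\{ q(\xi)^{\top} y \;:\; W y \ge h(\xi) - T(\xi)\, x \right\},
\]

where x denotes the here-and-now commitment decisions, ξ indexes the weather scenarios (for example, members of a WRF-driven ensemble), and the recourse problem Q enforces the physical and operational constraints, such as keeping the blackout probability low, on each scenario.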

Anitescu, M.; Constantinescu, E. M.; Zavala, V.

2010-12-01

487

Bit transposed files  

SciTech Connect

This paper first examines the reasons why sophisticated access methods are often not used in large Scientific/Statistical Database (SSDB) applications. A file structure (called the bit transposed file) is proposed that offers several attractive features better suited to the special characteristics that SSDBs exhibit. This file structure is an extreme version of the transposed file where the data is stored by vertical bitwise partitions (rather than attributewise). The bit patterns of attributes are assigned using one of several index encoding methods. Each of these encoding methods is appropriate for different query types and access requirements. The bit partitions can also be compressed using a version of the run length encoding scheme. Efficient operators on compressed bit vectors are available to form the backbone of a query language. In addition to selective power with low overhead for SSDBs, the bit transposed file is also amenable to special parallel hardware. Results from experiments with the file structure suggest that this may be a reasonable alternative file structure for large SSDBs.
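
A minimal sketch of the vertical bit-partition idea follows (illustrative only; the index encoding and run-length compression schemes of the actual design are omitted): attribute values are stored one bit position per partition, and selections are answered by bitwise operations over those partitions.

    # Sketch: bit-transposed storage of one attribute and an equality selection
    # answered purely with bitwise operations over the bit partitions.
    import numpy as np

    def transpose_bits(values, nbits):
        """Return one bit vector (bool array) per bit position of 'values'."""
        values = np.asarray(values, dtype=np.uint64)
        return [((values >> b) & 1) == 1 for b in range(nbits)]

    def select_equal(partitions, constant, nbits):
        """Row mask for 'attribute == constant', built from the bit partitions."""
        mask = np.ones_like(partitions[0], dtype=bool)
        for b in range(nbits):
            bit = (constant >> b) & 1
            mask &= partitions[b] if bit else ~partitions[b]
        return mask

    ages = [23, 35, 23, 41]
    parts = transpose_bits(ages, nbits=8)
    print(np.nonzero(select_equal(parts, 23, nbits=8))[0])   # -> rows 0 and 2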

Wong, H.K.T.; Liu, F.; Olken, F.; Rotem, D.; Wong, L.

1985-02-01

488

Selective File Dumper  

NASA Astrophysics Data System (ADS)

During a computer forensics investigation we faced the problem of how to get all the files of interest quickly. We work mainly with Open Source software products and the Linux OS, and we consider the Sleuthkit and Foremost two very useful tools, but for reaching our target they were too complicated and time consuming for us