Text and Illustration Processing System (TIPS) User’s Manual. Volume 1. Text Processing System.
1981-07-01
... the file catalog. To copy a file, begin by calling up the file. Access the Main Menu and select 2 - Edit an Existing File. After you have... III MAKING REVISIONS... Call Up an Existing File... The display above the keyboard is called a Cathode Ray Tube (CRT). It displays information as you key it in. A CURSOR is an underscore character on the screen which...
Design of a steganographic virtual operating system
NASA Astrophysics Data System (ADS)
Ashendorf, Elan; Craver, Scott
2015-03-01
A steganographic file system is a secure file system whose very existence on a disk is concealed. Customarily, these systems hide an encrypted volume within unused disk blocks, slack space, or atop conventional encrypted volumes. These file systems are far from undetectable, however: aside from their ciphertext footprint, they require a software or driver installation whose presence can attract attention and then targeted surveillance. We describe a new steganographic operating environment that requires no visible software installation, launching instead from a concealed bootstrap program that can be extracted and invoked with a chain of common Unix commands. Our system conceals its payload within innocuous files that typically contain high-entropy data, producing a footprint that is far less conspicuous than existing methods. The system uses a local web server to provide a file system, user interface and applications through a web architecture.
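A minimal illustrative sketch in Python of the general idea of concealing a payload inside an innocuous high-entropy file and recovering it by offset. It is not the authors' embedding scheme; the XOR obfuscation stands in for real encryption, and the file names are hypothetical.

```python
# Illustrative sketch only: hides a payload after a JPEG's end-of-image marker
# and recovers it by offset, echoing the "extract with common Unix commands" idea.
# Not the authors' embedding scheme; cover.jpg / stego.jpg are hypothetical names.
from pathlib import Path

EOI = b"\xff\xd9"  # JPEG end-of-image marker

def embed(cover: Path, stego: Path, payload: bytes, key: int = 0x5A) -> int:
    data = cover.read_bytes()
    offset = data.rfind(EOI) + len(EOI)            # end of the visible image
    obfuscated = bytes(b ^ key for b in payload)   # stand-in for real encryption
    stego.write_bytes(data[:offset] + obfuscated)
    return offset                                  # where the hidden payload starts

def extract(stego: Path, offset: int, key: int = 0x5A) -> bytes:
    data = stego.read_bytes()[offset:]
    return bytes(b ^ key for b in data)

if __name__ == "__main__":
    # stand-in "cover image": SOI marker, some filler bytes, EOI marker
    Path("cover.jpg").write_bytes(b"\xff\xd8" + b"\x00" * 64 + EOI)
    off = embed(Path("cover.jpg"), Path("stego.jpg"), b"#!/bin/sh\necho bootstrap\n")
    print(extract(Path("stego.jpg"), off))
    # The same bytes could be pulled out with common tools, e.g. tail -c and a filter.
```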
14 CFR 1212.501 - Record systems determined to be exempt.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Act from which exempted. (i) The Inspector General Investigations Case Files system of records is... criminal laws. (ii) To the extent that noncriminal investigative files may exist within this system of records, the Inspector General Investigations Case Files system of records is exempt from the following...
14 CFR 1212.501 - Record systems determined to be exempt.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Act from which exempted. (i) The Inspector General Investigations Case Files system of records is... extent that there may exist noncriminal investigative files within this system of records, the Inspector General Investigations Case Files system of records is exempt from the following sections of the Privacy...
14 CFR 1212.501 - Record systems determined to be exempt.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Act from which exempted. (i) The Inspector General Investigations Case Files system of records is... extent that there may exist noncriminal investigative files within this system of records, the Inspector General Investigations Case Files system of records is exempt from the following sections of the Privacy...
14 CFR § 1212.501 - Record systems determined to be exempt.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Act from which exempted. (i) The Inspector General Investigations Case Files system of records is... criminal laws. (ii) To the extent that noncriminal investigative files may exist within this system of records, the Inspector General Investigations Case Files system of records is exempt from the following...
Accessing files in an Internet: The Jade file system
NASA Technical Reports Server (NTRS)
Peterson, Larry L.; Rao, Herman C.
1991-01-01
Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
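A hedged Python sketch of the private name space idea described above (not Jade's actual code): several backing file systems can be mounted under one directory, and one logical name space can mount another. The protocol/server pairs are invented for illustration.

```python
# Minimal sketch of a per-user logical name space with multiple file systems
# mounted under one directory and namespaces mounted inside namespaces.
class LogicalNamespace:
    def __init__(self):
        self.mounts = {}                      # mount point -> list of targets

    def mount(self, point, target):
        self.mounts.setdefault(point, []).append(target)

    def resolve(self, path):
        # longest matching mount point wins
        for point in sorted(self.mounts, key=len, reverse=True):
            if path == point or path.startswith(point.rstrip("/") + "/"):
                rest = path[len(point):].lstrip("/")
                for target in self.mounts[point]:        # multiple FSs under one dir
                    if isinstance(target, LogicalNamespace):
                        hit = target.resolve("/" + rest)
                    else:                                 # (protocol, server) tuple
                        hit = (*target, rest)
                    if hit is not None:
                        return hit
        return None

home = LogicalNamespace()
home.mount("/papers", ("nfs", "fileserver.example.edu"))
home.mount("/papers", ("ftp", "archive.example.org"))    # same directory, second FS
shared = LogicalNamespace()
shared.mount("/data", ("afs", "cell.example.edu"))
home.mount("/shared", shared)                             # namespace mounts namespace
print(home.resolve("/papers/jade.ps"))
print(home.resolve("/shared/data/run1"))
```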
Accessing files in an internet - The Jade file system
NASA Technical Reports Server (NTRS)
Rao, Herman C.; Peterson, Larry L.
1993-01-01
Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file system may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
14 CFR 1212.501 - Record systems determined to be exempt.
Code of Federal Regulations, 2011 CFR
2011-01-01
... exempted. (i) The Inspector General Investigations Case Files system of records is exempt from all sections... there may exist noncriminal investigative files within this system of records, the Inspector General Investigations Case Files system of records is exempt from the following sections of the Privacy Act (5 U.S.C...
High-performance metadata indexing and search in petascale data storage systems
NASA Astrophysics Data System (ADS)
Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.
2008-07-01
Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that exploits storage system properties to provide the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
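A rough Python sketch, under assumptions, of the kind of partitioned metadata indexing Spyglass describes: one small index per directory subtree, so a query scoped to a subtree touches only its partition. It is not the Spyglass implementation.

```python
# Hedged sketch: partition file metadata by top-level subtree so scoped queries
# only consult one partition instead of a global index.
import os
from collections import defaultdict

def build_partitions(root):
    partitions = defaultdict(list)            # subtree -> [(path, size, mtime)]
    for dirpath, _dirs, files in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        subtree = rel.split(os.sep)[0] if rel != "." else "."
        for name in files:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            partitions[subtree].append((full, st.st_size, st.st_mtime))
    return partitions

def query(partitions, subtree, min_size=0):
    # search only the partition covering the requested subtree
    return [p for p, size, _ in partitions.get(subtree, []) if size >= min_size]

parts = build_partitions(".")
print(query(parts, ".", min_size=1_000_000))  # large files directly under the root
```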
LVFS: A Scalable Petabyte/Exabyte Data Storage System
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.
2013-12-01
Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS-based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing the inefficiencies of data at massive scale: the first is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset, and the second is minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system. The LVFS system replaces the NFS disk mounting approach of LAADS and utilizes the already existing highly optimized metadata database server, which is applicable to most scientific big data intensive compute systems. Thus, LVFS ties the existing storage system to the existing metadata infrastructure, which we believe leads to a scalable exabyte virtual file system. The uniqueness of the implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing the Filesystem In Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument, or product, without the need for duplication, symbolic links, or any other replication methods. We propose here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata file.
47 CFR 1.10015 - Are there exceptions for emergency filings?
Code of Federal Regulations, 2010 CFR
2010-10-01
... International Bureau Filing System § 1.10015 Are there exceptions for emergency filings? (a) Sometimes we grant... where we find that it is not feasible to secure renewal applications from existing licensees or to...
Zebra: A striped network file system
NASA Technical Reports Server (NTRS)
Hartman, John H.; Ousterhout, John K.
1992-01-01
The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity update.
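The parity scheme can be illustrated with a small Python sketch (toy fragment sizes, not Zebra's code): the parity fragment is the XOR of the data fragments, so any single lost fragment can be rebuilt from the survivors.

```python
# RAID-style parity over stripe fragments: parity = XOR of all data fragments,
# so one missing fragment is recovered by XORing the remaining fragments and parity.
def xor_fragments(fragments):
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

# a client's write stream split into fixed-size stripe fragments
fragments = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_fragments(fragments)

# server holding fragments[1] fails; rebuild its contents from the rest + parity
rebuilt = xor_fragments([fragments[0], fragments[2], parity])
assert rebuilt == fragments[1]
print(rebuilt)
```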
The Jade File System. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rao, Herman Chung-Hwa
1991-01-01
File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its design. The prototype consists of interfaces to the Unix File System, the Sun Network File System, and the File Transfer Protocol.
Distributed PACS using distributed file system with hierarchical meta data servers.
Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato
2012-01-01
In this research, we propose a new distributed PACS (Picture Archiving and Communication Systems) that can integrate the several PACSs existing in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. Because the entire file does not have to be accessed for every operation, tasks such as finding files and changing titles can be performed at high speed. At the same time, because a distributed file system is used, access to image files is also fast and highly fault tolerant. A further significant point of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of the system. To address this defect, hierarchical metadata servers are introduced; this mechanism increases both fault tolerance and the scalability of file access. To evaluate the proposed system, a prototype was implemented using Gfarm, and file search times were compared between Gfarm and NFS.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... 0607-AA50 Foreign Trade Regulations (FTR): Mandatory Automated Export System Filing for All Shipments... an existing information collection and the collection of two new data elements in the Automated... report shipments of used self-propelled vehicles and temporary exports through the AES or through...
75 FR 65312 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-22
...: Request for Reauthorization and Extension of Existing Blanket Authorization to Acquire Securities under.... Applicants: Western Electricity Coordinating Council. Description: Notice of Proposed Cancellation of Western Electricity Coordinating Council's Reliability Management System. Filed Date: 10/12/2010. Accession Number...
AVE-SESAME program for the REEDA System
NASA Technical Reports Server (NTRS)
Hickey, J. S.
1981-01-01
The REEDA system software was modified and improved to process the AVE-SESAME severe storm data. A random access file system for the AVE storm data was designed, tested, and implemented. The AVE/SESAME software was modified to incorporate the random access file input and to interface with new graphics hardware/software now available on the REEDA system. Software was developed to graphically display the AVE/SESAME data in the convention normally used by severe storm researchers. Existing software was converted to the AVE/SESAME software system and interfaced with the graphics hardware/software available on the REEDA system. Software documentation was provided for existing AVE/SESAME programs, outlining functional flow charts and interactive questions. All AVE/SESAME data sets in random access format were processed to allow the developed software to access the entire AVE/SESAME database. The existing software was modified to allow for processing of different AVE/SESAME data set types, including satellite, surface, and radar data.
The Metadata Cloud: The Last Piece of a Distributed Data System Model
NASA Astrophysics Data System (ADS)
King, T. A.; Cecconi, B.; Hughes, J. S.; Walker, R. J.; Roberts, D.; Thieman, J. R.; Joy, S. P.; Mafi, J. N.; Gangloff, M.
2012-12-01
Distributed data systems have existed ever since systems were networked together. Over the years the model for distributed data systems has evolved from basic file transfer to client-server to multi-tiered to grid and finally to cloud based systems. Initially metadata was tightly coupled to the data, either by embedding the metadata in the same file containing the data or by co-locating the metadata in commonly named files. As the sources of data have multiplied, data volumes have increased, and services have specialized to improve efficiency, a cloud system model has emerged. In a cloud system, computing and storage are provided as services, with accessibility emphasized over physical location. Computation and data clouds are common implementations. Effectively using the data and computation capabilities requires metadata. When metadata is stored separately from the data, a metadata cloud is formed. With a metadata cloud, information and knowledge about data resources can migrate efficiently from system to system, enabling services and allowing the data to remain efficiently stored until used. This is especially important with "Big Data", where movement of the data is limited by bandwidth. We examine how the metadata cloud completes a general distributed data system model, how standards play a role, and relate this to the existing types of cloud computing. We also look at the major science data systems in existence and compare each to the generalized cloud system model.
LVFS: A Big Data File Storage Bridge for the HPC Community
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.
2015-12-01
Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed for the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and nearly double the storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevent any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts the HDF data, produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite to derive CO2 surface fluxes, into GeoTIFF for visualization.
Development and evaluation of oral reporting system for PACS.
Umeda, T; Inamura, K; Inamoto, K; Ikezoe, J; Kozuka, T; Kawase, I; Fujii, Y; Karasawa, H
1994-05-01
Experimental workstations for oral reporting and synchronized image filing have been developed and evaluated by radiologists and referring physicians. The file medium is a 5.25-inch rewritable magneto-optical disk of 600-MB capacity whose file format is in accordance with the IS&C specification. The evaluation results indicate that this system is superior to other existing methods of the same kind, such as transcribing, dictating, handwriting, typewriting, and key selection. The most significant advantage of the system is that images and their interpretation are never separated. The first practical application, to the teaching file and the teaching conference, is contemplated at Osaka University Hospital. The system is fully digital in terms of images, voices, and demographic data, so that on-line transmission, off-line communication, and filing to any database can easily be realized in a PACS environment. We are also developing an integrated system in which a speech recognizer is connected to this digitized oral reporting system.
A mass spectrometry proteomics data management platform.
Sharma, Vagisha; Eng, Jimmy K; Maccoss, Michael J; Riffle, Michael
2012-09-01
Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) file formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-20
... consist of the following: (1) An existing 32-foot-high, 3500-foot-long earth filled dam; (2) a reservoir... bypass channel; (4) a 65-foot-wide, 35-foot-long intake structure with a trash rack cleaning system; (5... system at http://www.ferc.gov/docs-filing/ecomment.asp. You must include your name and contact...
A Mass Spectrometry Proteomics Data Management Platform*
Sharma, Vagisha; Eng, Jimmy K.; MacCoss, Michael J.; Riffle, Michael
2012-01-01
Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are “organically” distributed across laboratory file systems in an ad hoc manner, (3) file formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/. PMID:22611296
DMFS: A Data Migration File System for NetBSD
NASA Technical Reports Server (NTRS)
Studenmund, William
1999-01-01
I have recently developed dmfs, a Data Migration File System, for NetBSD. This file system is based on the overlay file system, which is discussed in a separate paper, and provides kernel support for the data migration system being developed by my research group here at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal meta data in a flat file, which resides on a separate file system. Our data migration system provides archiving and file migration services. System utilities scan the dmfs file system for recently modified files, and archive them to two separate tape stores. Once a file has been doubly archived, files larger than a specified size will be truncated to that size, potentially freeing up large amounts of the underlying file store. Some sites will choose to retain none of the file (deleting its contents entirely from the file system) while others may choose to retain a portion, for instance a preamble describing the remainder of the file. The dmfs layer coordinates access to the file, retaining user-perceived access and modification times, file size, and restricting access to partially migrated files to the portion actually resident. When a user process attempts to read from the non-resident portion of a file, it is blocked and the dmfs layer sends a request to a system daemon to restore the file. As more of the file becomes resident, the user process is permitted to begin accessing the now-resident portions of the file. For simplicity, our data migration system divides a file into two portions, a resident portion followed by an optional non-resident portion. Also, a file is in one of three states: fully resident, fully resident and archived, and (partially) non-resident and archived. For a file which is only partially resident, any attempt to write or truncate the file, or to read a non-resident portion, will trigger a file restoration. Truncations and writes are blocked until the file is fully restored so that a restoration which only partially succeeds does not leave the file in an indeterminate state with portions existing only on tape and other portions only in the disk file system. We chose layered file system technology as it permits us to focus on the data migration functionality, and permits end system administrators to choose the underlying file store technology. We chose the overlay layered file system instead of the null layer for two reasons: first to permit our layer to better preserve meta data integrity and second to prevent even root processes from accessing migrated files. This is achieved as the underlying file store becomes inaccessible once the dmfs layer is mounted. We are quite pleased with how the layered file system has turned out. Of the 45 vnode operations in NetBSD, 20 (forty-four percent) required no intervention by our file layer - they are passed directly to the underlying file store. Of the twenty-five we do intercept, nine (such as vop_create()) are intercepted only to ensure meta data integrity. Most of the functionality was concentrated in five operations: vop_read, vop_write, vop_getattr, vop_setattr, and vop_fcntl. The first four are the core operations for controlling access to migrated files and preserving the user experience. vop_fcntl, a call generated for a certain class of fcntl codes, provides the command channel used by privileged user programs to communicate with the dmfs layer.
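A toy Python sketch of the partially resident file behaviour described above (a user-space stand-in, not the dmfs kernel code): reads within the resident prefix are served directly, while reads or writes that touch the non-resident portion trigger a restore and block until the data is back.

```python
# Partially resident file: a resident prefix on disk plus an archived remainder.
# Access past the resident prefix forces a (blocking) restore from the archive.
class MigratedFile:
    def __init__(self, resident: bytes, archived: bytes):
        self.resident = bytearray(resident)   # prefix kept on the file store
        self.archived = archived              # remainder, held only on tape

    def _restore(self):
        print("restore requested: recalling file from tape")
        self.resident += self.archived        # whole file becomes resident again
        self.archived = b""

    def read(self, offset: int, length: int) -> bytes:
        if offset + length > len(self.resident) and self.archived:
            self._restore()                   # block until fully restored
        return bytes(self.resident[offset:offset + length])

    def write(self, offset: int, data: bytes):
        if self.archived:                     # writes always force full residency
            self._restore()
        self.resident[offset:offset + len(data)] = data

f = MigratedFile(resident=b"preamble:", archived=b" archived body on tape")
print(f.read(0, 8))      # served from the resident prefix
print(f.read(5, 20))     # crosses into the non-resident portion -> restore
```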
Solving data-at-rest for the storage and retrieval of files in ad hoc networks
NASA Astrophysics Data System (ADS)
Knobler, Ron; Scheffel, Peter; Williams, Jonathan; Gaj, Kris; Kaps, Jens-Peter
2013-05-01
Based on current trends for both military and commercial applications, the use of mobile devices (e.g. smartphones and tablets) is greatly increasing. Several military applications consist of secure peer to peer file sharing without a centralized authority. For these military applications, if one or more of these mobile devices are lost or compromised, sensitive files can be compromised by adversaries, since COTS devices and operating systems are used. Complete system files cannot be stored on a device, since after compromising a device, an adversary can attack the data at rest, and eventually obtain the original file. Also after a device is compromised, the existing peer to peer system devices must still be able to access all system files. McQ has teamed with the Cryptographic Engineering Research Group at George Mason University to develop a custom distributed file sharing system to provide a complete solution to the data at rest problem for resource constrained embedded systems and mobile devices. This innovative approach scales very well to a large number of network devices, without a single point of failure. We have implemented the approach on representative mobile devices as well as developed an extensive system simulator to benchmark expected system performance based on detailed modeling of the network/radio characteristics, CONOPS, and secure distributed file system functionality. The simulator is highly customizable for the purpose of determining expected system performance for other network topologies and CONOPS.
Utilizing HDF4 File Content Maps for the Cloud
NASA Technical Reports Server (NTRS)
Lee, Hyokyung Joe
2016-01-01
We demonstrate a prototype study showing that HDF4 file content maps can be used to organize data efficiently in a cloud object storage system to facilitate cloud computing. This approach can be extended to any binary data format and to any existing big data analytics solution powered by cloud computing, because the HDF4 file content map project started as a long-term preservation effort for NASA data that does not require HDF4 APIs to access the data.
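A hedged Python illustration of reading one dataset by offset and length from a file content map without any HDF4 library; the map values and file name below are hypothetical, and against cloud object storage the seek/read would become an HTTP range request.

```python
# A content map (dataset name -> byte offset and length, e.g. from an XML/JSON
# sidecar) lets a reader fetch just the bytes it needs, with no HDF4 APIs.
import json

content_map = json.loads("""
{
  "granule.hdf": {
    "Latitude":  {"offset": 2048,  "length": 8192},
    "Radiances": {"offset": 10240, "length": 655360}
  }
}
""")

def read_dataset(path: str, dataset: str) -> bytes:
    entry = content_map[path][dataset]
    with open(path, "rb") as f:
        f.seek(entry["offset"])               # with object storage: a ranged GET,
        return f.read(entry["length"])        # e.g. Range: bytes=offset-(offset+length-1)

# raw = read_dataset("granule.hdf", "Latitude")  # hypothetical granule file
```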
Heterogeneous distributed query processing: The DAVID system
NASA Technical Reports Server (NTRS)
Jacobs, Barry E.
1985-01-01
The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn the data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.
77 FR 28391 - Announcement of Requirements and Registration for “Ocular Imaging Challenge”
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-14
..., color, zoom, pan) Integrate with existing EHRs (e.g. ``single sign-on'') Where applicable, leverage and... existing office hardware platforms, and to integrate with existing EHR systems (e.g. ``single sign-on... on the acquisition devices in proprietary databases and file formats, and therefore have limited...
Considerations of persistence and security in CHOICES, an object-oriented operating system
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Madany, Peter W.
1990-01-01
The current design of the CHOICES persistent object implementation is summarized, and research in progress is outlined. CHOICES is implemented as an object-oriented system, and persistent objects appear to simplify and unify many functions of the system. It is demonstrated that persistent data can be accessed through an object-oriented file system model as efficiently as by an existing optimized commercial file system. The object-oriented file system can be specialized to provide an object store for persistent objects. The problems that arise in building an efficient persistent object scheme in a 32-bit virtual address space that only uses paging are described. Despite its limitations, the solution presented allows quite large numbers of objects to be active simultaneously, and permits sharing and efficient method calls.
Learning to File: Reconfiguring Information and Information Work in the Early Twentieth Century.
Robertson, Craig
2017-01-01
This article uses textbooks and advertisements to explore the formal and informal ways in which people were introduced to vertical filing in the early twentieth century. Through the privileging of "system" an ideal mode of paperwork emerged in which a clerk could "grasp" information simply by hand without having to understand or comprehend its content. A file clerk's hands and fingers became central to the representation and teaching of filing. In this way, filing offered an example of a distinctly modern form of information work. Filing textbooks sought to enhance dexterity as the rapid handling of paper came to represent information as something that existed in discrete units, in bits that could be easily extracted. Advertisements represented this mode of information work in its ideal form when they frequently erased the worker or reduced him or her to hands, as "instant" filing became "automatic" filing, with the filing cabinet presented as a machine.
75 FR 33748 - Amateur Radio Use of the Allocation at 5 MHz
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-15
... envelope power (PEP). 3. The existing amateur radio use of the 60 meter band represents a balancing of... Comment Filing System (ECFS), (2) the Federal Government's eRulemaking Portal, or (3) by filing paper... transmitter output power in modern amateur radio transceivers is 100 W PEP, and that the present 50 W PEP...
A Database Design for a Unit Status Reporting System.
1987-03-01
definitions. g. Extraction of data dictionary entries from existing programs. [Ref. 7:pp. 63-66] The third tool is used to define the logic of the...Automation of the Unit Status Reporting System is feasible, and would require: integrated files of data, some direct data extraction from those files...an extract of AR 220-1. Relevant sections of the regulation are included to provide an easy reference for the reader. The last section of the
Electronic Document Management Using Inverted Files System
NASA Astrophysics Data System (ADS)
Suhartono, Derwin; Setiawan, Erwin; Irwanto, Djon
2014-03-01
The number of documents is increasing rapidly, and they exist not only in paper form but also in electronic form. This can be seen from a data sample taken from the SpringerLink publisher in 2010, which showed an increase in the number of digital document collections from 2003 to mid-2010. How to manage these documents well therefore becomes an important need. This paper describes a method for managing documents called the inverted files system. For electronic documents, the inverted files system is applied so that documents can be searched over the Internet using a search engine. It improves both the document search mechanism and the document storage mechanism.
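A minimal Python sketch of the inverted file idea described above: each term maps to the set of documents containing it, so a query intersects posting lists instead of scanning every document.

```python
# Inverted index: term -> set of document ids; queries intersect posting lists.
from collections import defaultdict

documents = {
    1: "electronic document management with inverted files",
    2: "paper based document archive",
    3: "inverted files speed up document search",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query: str):
    terms = query.lower().split()
    hits = [index.get(t, set()) for t in terms]
    return sorted(set.intersection(*hits)) if hits else []

print(search("inverted files"))   # -> [1, 3]
print(search("document search"))  # -> [3]
```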
Interactive visualization tools for the structural biologist.
Porebski, Benjamin T; Ho, Bosco K; Buckle, Ashley M
2013-10-01
In structural biology, management of a large number of Protein Data Bank (PDB) files and raw X-ray diffraction images often presents a major organizational problem. Existing software packages that manipulate these file types were not designed for these kinds of file-management tasks. This is typically encountered when browsing through a folder of hundreds of X-ray images, with the aim of rapidly inspecting the diffraction quality of a data set. To solve this problem, a useful functionality of the Macintosh operating system (OSX) has been exploited that allows custom visualization plugins to be attached to certain file types. Software plugins have been developed for diffraction images and PDB files, which in many scenarios can save considerable time and effort. The direct visualization of diffraction images and PDB structures in the file browser can be used to identify key files of interest simply by scrolling through a list of files.
Distributed File System Utilities to Manage Large Datasets, Version 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-05-21
FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.
49 CFR Appendix to Part 10 - Exemptions
Code of Federal Regulations, 2013 CFR
2013-10-01
... Federal Records Centers located throughout the country. 2. FHWA Investigations Case File System... following systems of records that consist of (a) Information compiled for the purpose of identifying...) (Publication of existence and character of system); (e)(6) (Ensure records are accurate, relevant, timely, and...
49 CFR Appendix to Part 10 - Exemptions
Code of Federal Regulations, 2014 CFR
2014-10-01
... Federal Records Centers located throughout the country. 2. FHWA Investigations Case File System... following systems of records that consist of (a) Information compiled for the purpose of identifying...) (Publication of existence and character of system); (e)(6) (Ensure records are accurate, relevant, timely, and...
49 CFR Appendix to Part 10 - Exemptions
Code of Federal Regulations, 2012 CFR
2012-10-01
... Federal Records Centers located throughout the country. 2. FHWA Investigations Case File System... following systems of records that consist of (a) Information compiled for the purpose of identifying...) (Publication of existence and character of system); (e)(6) (Ensure records are accurate, relevant, timely, and...
The storage system of PCM based on random access file system
NASA Astrophysics Data System (ADS)
Han, Wenbing; Chen, Xiaogang; Zhou, Mi; Li, Shunfen; Li, Gezi; Song, Zhitang
2016-10-01
Emerging memory technologies such as phase change memory (PCM) offer fast, random access to persistent storage with better scalability. Establishing PCM in the storage hierarchy to narrow the performance gap is a hot topic of academic and industrial research. However, existing file systems, which access the storage medium via a slow, block-based interface, do not perform well with emerging PCM storage. In this paper, we propose a novel file system, RAFS, built on an embedded platform, to bring out the performance of PCM. We attach PCM chips to the memory bus and build RAFS on the physical address space. In the proposed file system, we simplify the traditional system architecture to eliminate block-related operations and layers. Furthermore, we adopt memory mapping and bypass the page cache to reduce copy overhead between the process address space and the storage device. XIP mechanisms are also supported in RAFS. To the best of our knowledge, we are among the first to implement a file system on real PCM chips. We have analyzed and evaluated its performance with the IOZONE benchmark tool. Our experimental results show that RAFS on PCM outperforms Ext4fs on SDRAM for small record lengths. On DRAM, RAFS is significantly faster than Ext4fs, by 18% to 250%.
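A small Python sketch of the memory-mapped, copy-avoiding access pattern RAFS relies on, using an ordinary file via mmap as a stand-in for byte-addressable PCM on the memory bus; the file name and size are hypothetical and this is not the RAFS implementation.

```python
# Map a "PCM region" into the address space and read/write it in place,
# avoiding the block layer and extra buffer copies of a conventional stack.
import mmap, os

path = "pcm.img"
with open(path, "wb") as f:
    f.truncate(1 << 20)                       # pretend this is a 1 MiB PCM region

fd = os.open(path, os.O_RDWR)
with mmap.mmap(fd, 0) as pcm:
    pcm[0:12] = b"hello, RAFS!"               # store bytes directly in the mapping
    print(bytes(pcm[0:12]))                   # read back in place, XIP-style
os.close(fd)
```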
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadayappan, Ponnuswamy
Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. We propose a new approach to the data and work distribution model provided by system software based on the unifying formalism of an abstract file system. The proposed hierarchical data model provides simple, familiar visibility and access to data structures through the file system hierarchy, while providing fault tolerance through selective redundancy. The hierarchical task model features work queues whose form and organization are represented as file system objects. Data and work are both first class entities. By exposing the relationships between data and work to the runtime system, information is available to optimize execution time and provide fault tolerance. The data distribution scheme provides replication (where desirable and possible) for fault tolerance and efficiency, and it is hierarchical to make it possible to take advantage of locality. The user, tools, and applications, including legacy applications, can interface with the data, work queues, and one another through the abstract file model. This runtime environment will provide multiple interfaces to support traditional Message Passing Interface applications, languages developed under DARPA's High Productivity Computing Systems program, as well as other, experimental programming models. We will validate our runtime system with pilot codes on existing platforms and will use simulation to validate for exascale-class platforms. In this final report, we summarize research results from the work done at the Ohio State University towards the larger goals of the project listed above.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inman, Jeffrey; Bonnie, David; Broomfield, Matthew
There is a sea (mar is Spanish for sea) of data out there that needs to be handled efficiently. Object Stores are filling the hole of managing large amounts of data efficiently. However, in many cases, and our HPC case in particular, we need a traditional file (POSIX) interface to this data as HPC I/O models have not moved to object interfaces, such as Amazon S3, CDMI, etc. Eventually Object Store providers may deliver file interfaces to their object stores, but at this point those interfaces are not ready to do the job that we need done. MarFS will glue together two existing scalable components: a file system's scalable metadata component that provides the file interface; and existing scalable object stores (from one or more providers). There will be utilities to do work that is not critical to be done in real-time so that MarFS can manage the space used by objects and allocated to individual users.
49 CFR Appendix to Part 10 - Exemptions
Code of Federal Regulations, 2010 CFR
2010-10-01
... Centers located throughout the country. 2. FHWA Investigations Case File System, maintained by the Office... following systems of records that consist of (a) Information compiled for the purpose of identifying...) (Publication of existence and character of system); (e)(6) (Ensure records are accurate, relevant, timely, and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-08
... System Section Agency Information Collection Activities: Existing collection, comments requested the Voluntary Appeal File (VAF) Brochure ACTION: 60-Day Notice of Information Collection Under Review. The... Criminal Background Check System (NICS) Section has submitted the following information collection request...
49 CFR Appendix to Part 10 - Exemptions
Code of Federal Regulations, 2011 CFR
2011-10-01
... Centers located throughout the country. 2. FHWA Investigations Case File System, maintained by the Office... following systems of records that consist of (a) Information compiled for the purpose of identifying...) (Publication of existence and character of system); (e)(6) (Ensure records are accurate, relevant, timely, and...
NASA Technical Reports Server (NTRS)
Ryan, J. W.; Ma, C.; Schupler, B. R.
1980-01-01
A data base handler which would act to tie Mark 3 system programs together is discussed. The data base handler is written in FORTRAN and is implemented on the Hewlett-Packard 21MX and the IBM 360/91. The system design objectives were to (1) provide for an easily specified method of data interchange among programs, (2) provide for a high level of data integrity, (3) accommodate changing requirements, (4) promote program accountability, (5) provide a single source of program constants, and (6) provide a central point for data archiving. The system consists of two distinct parts: a set of files existing on disk packs and tapes; and a set of utility subroutines which allow users to access the information in these files. Users never directly read or write the files and need not know the details of how the data are formatted in the files. To the users, the storage medium is format free. A user does need to know something about the sequencing of his data in the files but nothing about data in which he has no interest.
Library Circulation Systems: An Overview
ERIC Educational Resources Information Center
Surace, Cecily J.
1972-01-01
The model circulation system outlined is an on-line real time system in which the circulation file is created from the shelf list. The model extends beyond the operational limits of most existing circulation systems and can be considered a reflection of the current state of the art. (36 references) (Author/NH)
NASA Technical Reports Server (NTRS)
Soileau, Kerry M.; Baicy, John W.
2008-01-01
Rig Diagnostic Tools is a suite of applications designed to allow an operator to monitor the status and health of complex networked systems using a unique interface between Java applications and UNIX scripts. The suite consists of Java applications, C scripts, VxWorks applications, UNIX utilities, C programs, and configuration files. The UNIX scripts retrieve data from the system and write them to a certain set of files. The Java side monitors these files and presents the data in user-friendly formats for operators to use in making troubleshooting decisions. This design allows for rapid prototyping and expansion of higher-level displays without affecting the basic data-gathering applications. The suite is designed to be extensible, with the ability to add new system components in building block fashion without affecting existing system applications. This allows for monitoring of complex systems for which unplanned shutdown time comes at a prohibitive cost.
5 CFR 293.509 - Use of existing Employee Medical Folders upon transfer or reemployment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Use of existing Employee Medical Folders upon transfer or reemployment. 293.509 Section 293.509 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.509 Use of...
5 CFR 293.509 - Use of existing Employee Medical Folders upon transfer or reemployment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Use of existing Employee Medical Folders upon transfer or reemployment. 293.509 Section 293.509 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.509 Use of...
5 CFR 293.509 - Use of existing Employee Medical Folders upon transfer or reemployment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Use of existing Employee Medical Folders upon transfer or reemployment. 293.509 Section 293.509 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.509 Use of...
5 CFR 293.509 - Use of existing Employee Medical Folders upon transfer or reemployment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Use of existing Employee Medical Folders upon transfer or reemployment. 293.509 Section 293.509 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.509 Use of...
5 CFR 293.509 - Use of existing Employee Medical Folders upon transfer or reemployment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Use of existing Employee Medical Folders upon transfer or reemployment. 293.509 Section 293.509 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.509 Use of...
NASA Astrophysics Data System (ADS)
Camilo, Ana E. F.; Grégio, André; Santos, Rafael D. C.
2016-05-01
Malware detection may be accomplished through the analysis of their infection behavior. To do so, dynamic analysis systems run malware samples and extract their operating system activities and network traffic. This traffic may represent malware accessing external systems, either to steal sensitive data from victims or to fetch other malicious artifacts (configuration files, additional modules, commands). In this work, we propose the use of visualization as a tool to identify compromised systems based on correlating malware communications in the form of graphs and finding isomorphisms between them. We produced graphs from over 6 thousand distinct network traffic files captured during malware execution and analyzed the existing relationships among malware samples and IP addresses.
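A hedged Python sketch of the graph-matching idea: build one graph per capture (sample to contacted IPs) and test whether two communication structures are isomorphic. networkx is an assumed dependency and the traffic tuples are invented.

```python
# Build a directed graph per malware capture and compare their shapes:
# two captures with different IPs but the same communication structure match.
import networkx as nx

def traffic_graph(edges):
    g = nx.DiGraph()
    g.add_edges_from(edges)
    return g

capture_a = traffic_graph([("sample_a", "10.0.0.1"), ("sample_a", "10.0.0.2"),
                           ("10.0.0.1", "10.0.0.3")])
capture_b = traffic_graph([("sample_b", "192.0.2.7"), ("sample_b", "192.0.2.9"),
                           ("192.0.2.7", "192.0.2.15")])

# same shape of communication even though the IP addresses differ
print(nx.is_isomorphic(capture_a, capture_b))   # True
```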
Nemesis I: Parallel Enhancements to ExodusII
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennigan, Gary L.; John, Matthew S.; Shadid, John N.
2006-03-28
NEMESIS I is an enhancement to the EXODUS II finite element database model used to store and retrieve data for unstructured parallel finite element analyses. NEMESIS I adds data structures which facilitate the partitioning of a scalar (standard serial) EXODUS II file onto parallel disk systems found on many parallel computers. Since the NEMESIS I application programming interface (API) can be used to append information to an existing EXODUS II file, existing software that reads EXODUS II files can be used on files which contain NEMESIS I information. The NEMESIS I information is written and read via C or C++ callable functions which comprise the NEMESIS I API.
Dynamic Non-Hierarchical File Systems for Exascale Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Darrell E.; Miller, Ethan L
This constitutes the final report for “Dynamic Non-Hierarchical File Systems for Exascale Storage”. The ultimate goal of this project was to improve data management in scientific computing and high-end computing (HEC) applications, and to achieve this goal we proposed: to develop the first, HEC-targeted, file system featuring rich metadata and provenance collection, extreme scalability, and future storage hardware integration as core design goals, and to evaluate and develop a flexible non-hierarchical file system interface suitable for providing more powerful and intuitive data management interfaces to HEC and scientific computing users. Data management is swiftly becoming a serious problem in the scientific community – while copious amounts of data are good for obtaining results, finding the right data is often daunting and sometimes impossible. Scientists participating in a Department of Energy workshop noted that most of their time was spent “...finding, processing, organizing, and moving data and it’s going to get much worse”. Scientists should not be forced to become data mining experts in order to retrieve the data they want, nor should they be expected to remember the naming convention they used several years ago for a set of experiments they now wish to revisit. Ideally, locating the data you need would be as easy as browsing the web. Unfortunately, existing data management approaches are usually based on hierarchical naming, a 40 year-old technology designed to manage thousands of files, not exabytes of data. Today’s systems do not take advantage of the rich array of metadata that current high-end computing (HEC) file systems can gather, including content-based metadata and provenance information. As a result, current metadata search approaches are typically ad hoc and often work by providing a parallel management system to the “main” file system, as is done in Linux (the locate utility), personal computers, and enterprise search appliances. These search applications are often optimized for a single file system, making it difficult to move files and their metadata between file systems. Users have tried to solve this problem in several ways, including the use of separate databases to index file properties, the encoding of file properties into file names, and separately gathering and managing provenance data, but none of these approaches has worked well, either due to limited usefulness or scalability, or both. Our research addressed several key issues: High-performance, real-time metadata harvesting: extracting important attributes from files dynamically and immediately updating indexes used to improve search; Transparent, automatic, and secure provenance capture: recording the data inputs and processing steps used in the production of each file in the system; Scalable indexing: indexes that are optimized for integration with the file system; Dynamic file system structure: our approach provides dynamic directories similar to those in semantic file systems, but these are the native organization rather than a feature grafted onto a conventional system. In addition to these goals, our research effort will include evaluating the impact of new storage technologies on the file system design and performance. In particular, the indexing and metadata harvesting functions can potentially benefit from the performance improvements promised by new storage class memories.
Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Felix; Quach, Tu-Thach; Wheeler, Jason
File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.
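A minimal sketch of the pipeline described here, using scikit-learn with synthetic stand-in fragments: the n-gram size, dictionary size, and the "mean absolute code" pooling are illustrative assumptions and not the authors' exact configuration.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

RNG = np.random.default_rng(0)

def windows(frag, n=8):
    """All length-n byte windows of a fragment as a (num_windows, n) float array."""
    b = np.frombuffer(frag, dtype=np.uint8).astype(float) / 255.0
    return np.stack([b[i:i + n] for i in range(len(b) - n + 1)])

# Synthetic stand-in data: "text-like" vs "random-looking" 512-byte fragments.
text_like = [bytes(RNG.integers(32, 127, 512, dtype=np.uint8)) for _ in range(40)]
rand_like = [bytes(RNG.integers(0, 256, 512, dtype=np.uint8)) for _ in range(40)]
frags = text_like + rand_like
labels = np.array([0] * 40 + [1] * 40)

# Unsupervised step: learn a sparse dictionary over all n-gram windows.
all_windows = np.vstack([windows(f) for f in frags])
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, batch_size=64, random_state=0)
dico.fit(all_windows)

# Feature vector per fragment: mean absolute sparse code over its windows,
# a rough proxy for "dictionary n-gram frequencies".
features = np.stack([np.abs(dico.transform(windows(f))).mean(axis=0) for f in frags])

clf = LinearSVC().fit(features, labels)
print("training accuracy:", clf.score(features, labels))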
Cross-Matching of Very Large Catalogs
NASA Astrophysics Data System (ADS)
Martynov, M. V.; Bodryagin, D. V.
Modern astronomical catalogs and sky surveys, which contain billions of objects, belong to the "big data" class. Existing services have limited functionality and do not include all required and available catalogs. The software package ACrId (Astronomical Cross Identification) for cross-matching large astronomical catalogs, which uses the HEALPix sphere-pixelation algorithm, the ReiserFS file system, and JSON-type text files for storage, has been developed at the Research Institution "Mykolaiv Astronomical Observatory".
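The pixel-bucketing idea behind HEALPix cross-matching can be sketched with the healpy package. The NSIDE value, match radius, and toy catalogs below are illustrative assumptions; ACrId's actual storage layout and matching logic are not reproduced here.

import numpy as np
import healpy as hp
from collections import defaultdict

NSIDE = 4096                 # illustrative resolution (~0.86 arcmin pixels)
MATCH_RADIUS_ARCSEC = 2.0

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation on the sphere (haversine), inputs in degrees."""
    r1, d1, r2, d2 = map(np.radians, (ra1, dec1, ra2, dec2))
    a = np.sin((d2 - d1) / 2) ** 2 + np.cos(d1) * np.cos(d2) * np.sin((r2 - r1) / 2) ** 2
    return np.degrees(2 * np.arcsin(np.sqrt(a))) * 3600.0

def crossmatch(cat_a, cat_b):
    """Match cat_b sources to cat_a by bucketing cat_a into HEALPix pixels."""
    buckets = defaultdict(list)
    for i, (ra, dec) in enumerate(cat_a):
        buckets[hp.ang2pix(NSIDE, ra, dec, lonlat=True)].append(i)
    matches = []
    for j, (ra, dec) in enumerate(cat_b):
        pix = hp.ang2pix(NSIDE, ra, dec, lonlat=True)
        candidates = [pix] + list(hp.get_all_neighbours(NSIDE, pix))
        for p in candidates:
            for i in buckets.get(p, []):
                if ang_sep_arcsec(cat_a[i][0], cat_a[i][1], ra, dec) <= MATCH_RADIUS_ARCSEC:
                    matches.append((i, j))
    return matches

cat_a = [(10.684, 41.269), (83.822, -5.391)]
cat_b = [(10.6845, 41.2695), (150.0, 2.2)]
print(crossmatch(cat_a, cat_b))

Checking the pixel plus its eight neighbours keeps the comparison count per source roughly constant regardless of catalog size, which is what makes the approach viable for billions of objects.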
Grande, Nicola Maria; Ahmed, Hany Mohamed Aly; Cohen, Stephen; Bukiet, Frédéric; Plotino, Gianluca
2015-11-01
During the evolution of mechanical instrumentation in endodontics, an important role has been played by reciprocating stainless steel files using horizontal rotational, vertical translational, or combined movements. These kinds of systems are still in use, mainly as an accessory to help in the first phases of treatment. The literature concerning these systems has been analyzed using selected criteria. The latest evolution of the horizontal rotational reciprocating movement led to the development of a different kind of movement in which the angles are asymmetrical and that appears to be ideal in conjunction with modern nickel-titanium (NiTi) files with a greater taper. Initially, this movement was limited to particular handpieces available on the market that were used with existing NiTi files to complete root canal instrumentation. Later on, specific files and proprietary motors were introduced to the market. The differences between the reciprocating motions used for NiTi and stainless steel files are described and critically analyzed. A classification of the different mechanical reciprocating motions is presented, enabling an easier understanding of these systems and anticipated future developments. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-02
... Files System AGENCY: Office of the Chief Information Officer, HUD. ACTION: Notification of a New Privacy..., 2012. Jerry E. Williams, Chief Information Officer. HUD/PD&R.01 SYSTEM NAME: Veterans Homelessness..., assistance, or inquiry about the existence of records, contact Harold Williams, Acting Chief Privacy Officer...
Generalized Data Management Systems--Some Perspectives.
ERIC Educational Resources Information Center
Minker, Jack
A Generalized Data Management System (GDMS) is a software environment provided as a tool for analysts, administrators, and programmers who are responsible for the maintenance, query, and analysis of a data base; it permits the manipulation of newly defined files and data with existing programs and systems. Because the GDMS technology is believed…
Company's Data Security - Case Study
NASA Astrophysics Data System (ADS)
Stera, Piotr
This paper describes the computer network and data security problems of an existing company. Two main issues are pointed out: data loss protection and uncontrolled data copying. A security system was designed and implemented, consisting of many dedicated programs. The system protects against data loss and detects unauthorized file copying from the company's server by a dishonest employee.
FEQinput—An editor for the full equations (FEQ) hydraulic modeling system
Ancalle, David S.; Ancalle, Pablo J.; Domanski, Marian M.
2017-10-30
Introduction: The Full Equations Model (FEQ) is a computer program that solves the full, dynamic equations of motion for one-dimensional unsteady hydraulic flow in open channels and through control structures. As a result, hydrologists have used FEQ to design and operate flood-control structures, delineate inundation maps, and analyze peak-flow impacts. To aid in fighting floods, hydrologists are using the software to develop a system that uses flood-plain models to simulate real-time streamflow. Input files for FEQ are text files containing a large number of parameters, data, and instructions written in a format exclusive to FEQ. Although documentation exists that can aid in the creation and editing of these input files, new users face a steep learning curve in understanding the specific format and language of the files. FEQinput provides a set of tools to help a new user overcome the steep learning curve associated with creating and modifying input files for the FEQ hydraulic model and the related utility tool, Full Equations Utilities (FEQUTL).
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-10
... expense of existing investor protection mechanisms, particularly without any impact analysis, because such... volatility, and to obstruct the development of ``subsystems within the national market system,'' objectives... development of subsystems within the national market system.'' See id. at 7 (emphasis added). Nevertheless...
Lipid-converter, a framework for lipid manipulations in molecular dynamics simulations
Larsson, Per; Kasson, Peter M.
2014-01-01
Construction of lipid membrane and membrane protein systems for molecular dynamics simulations can be a challenging process. In addition, there are few available tools to extend existing studies by repeating simulations using other force fields and lipid compositions. To facilitate this, we introduce lipidconverter, a modular Python framework for exchanging force fields and lipid composition in coordinate files obtained from simulations. Force fields and lipids are specified by simple text files, making it easy to introduce support for additional force fields and lipids. The converter produces simulation input files that can be used for structural relaxation of the new membranes. PMID:25081234
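A sketch of the kind of text-file-driven atom renaming such a converter performs when moving a membrane between force fields. The mapping-file layout, lipid name, and atom names below are hypothetical and are not lipidconverter's actual formats.

import io

# Hypothetical mapping file: "lipid source_atom target_atom" per line.
MAPPING_TEXT = """\
POPC  C1   C11
POPC  C2   C12
POPC  P8   P
"""

def load_mapping(fh):
    """Parse a simple whitespace-separated mapping file into a dict."""
    mapping = {}
    for line in fh:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        lipid, src, dst = line.split()
        mapping[(lipid, src)] = dst
    return mapping

def convert(atoms, mapping):
    """Rename atoms according to the mapping; unmapped atoms pass through unchanged."""
    return [(res, mapping.get((res, name), name)) for res, name in atoms]

atoms = [("POPC", "C1"), ("POPC", "P8"), ("POPC", "C39")]
print(convert(atoms, load_mapping(io.StringIO(MAPPING_TEXT))))

Keeping the mapping in plain text, as the abstract describes, is what makes it easy to add support for additional force fields and lipids without touching the converter code.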
NASA Technical Reports Server (NTRS)
1987-01-01
This handbook is a guide for the use of all personnel engaged in handling NASA files. It is issued in accordance with the regulations of the National Archives and Records Administration, in the Code of Federal Regulations Title 36, Part 1224, Files Management; and the Federal Information Resources Management Regulation, Subpart 201-45.108, Files Management. It is intended to provide a standardized classification and filing scheme to achieve maximum uniformity and ease in maintaining and using agency records. It is a framework for consistent organization of information in an arrangement that will be useful to current and future researchers. The NASA Uniform Files Index coding structure is composed of the subject classification table used for NASA management directives and the subject groups in the NASA scientific and technical information system. It is designed to correlate files throughout NASA, and it is anticipated that it may be useful with automated filing systems. It is expected that in the conversion of current files to this arrangement it will be necessary to add tertiary subjects and make further subdivisions under the existing categories. Established primary and secondary subject categories may not be changed arbitrarily. Proposals for additional subject categories of NASA-wide applicability, and suggestions for improvement in this handbook, should be addressed to the Records Program Manager at the pertinent installation, who will forward them to the NASA Records Management Office, Code NTR, for approval. This handbook is issued in loose-leaf form and will be revised by page changes.
76 FR 56787 - Privacy Act of 1974; as Amended; Notice To Amend an Existing System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-14
... Bureau of Indian Affairs (BIA) Privacy Act system of records, ``Indian Social Services Case Files--Interior, BIA-8'' to change the name of the system to the ``Financial Assistance and Social Services--Case... to provide services to individual Indians who apply for and receive social services and direct...
The Use Of Videography For Three-Dimensional Motion Analysis
NASA Astrophysics Data System (ADS)
Hawkins, D. A.; Hawthorne, D. L.; DeLozier, G. S.; Campbell, K. R.; Grabiner, M. D.
1988-02-01
Special video path editing capabilities, with custom hardware and software, have been developed for use in conjunction with existing video acquisition hardware and firmware. This system has simplified the task of quantifying the kinematics of human movement. A set of retro-reflective markers is secured to a subject performing a given task (e.g., walking, throwing, swinging a golf club). Multiple cameras, a video processor, and a computer workstation collect video data while the task is performed. Software has been developed to edit video files, create centroid data, and identify marker paths. Multi-camera path files are combined to form a 3D path file using the direct linear transformation (DLT) method of cinematography. A separate program converts the 3D path file into kinematic data by creating a set of local coordinate axes and performing a series of coordinate transformations from one local system to the next. The kinematic data are then displayed for review and/or comparison.
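The DLT step that merges per-camera 2D marker paths into 3D can be sketched with the standard 11-parameter formulation; the toy camera coefficients below are made up for illustration and this is not the paper's software.

import numpy as np

def reconstruct_point(dlt_params, image_points):
    """Least-squares 3D reconstruction from >= 2 cameras using 11-parameter DLT.

    dlt_params:   list of length-11 arrays (L1..L11), one per camera
    image_points: list of (u, v) marker centroids, one per camera
    """
    rows, rhs = [], []
    for L, (u, v) in zip(dlt_params, image_points):
        # u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1), rearranged to be linear in (X, Y, Z)
        rows.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        rhs.append(u - L[3])
        rows.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        rhs.append(v - L[7])
    xyz, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return xyz

cam1 = np.array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], float)  # toy affine camera: u = X, v = Y
cam2 = np.array([1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], float)  # toy affine camera: u = X, v = Z
print(reconstruct_point([cam1, cam2], [(2.0, 3.0), (2.0, 4.0)]))  # recovers ~[2. 3. 4.]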
NASA ARCH- A FILE ARCHIVAL SYSTEM FOR THE DEC VAX
NASA Technical Reports Server (NTRS)
Scott, P. J.
1994-01-01
The function of the NASA ARCH system is to provide a permanent storage area for files that are infrequently accessed. The NASA ARCH routines were designed to provide a simple mechanism by which users can easily store and retrieve files. The user treats NASA ARCH as the interface to a black box where files are stored. There are only five NASA ARCH user commands, even though NASA ARCH employs standard VMS directives and the VAX BACKUP utility. Special care is taken to provide the security needed to ensure file integrity over a period of years. The archived files may exist in any of three storage areas: a temporary buffer, the main buffer, and a magnetic tape library. When the main buffer fills up, it is transferred to permanent magnetic tape storage and deleted from disk. Files may be restored from any of the three storage areas. A single file, multiple files, or entire directories can be stored and retrieved. Archived entities retain the same name, extension, version number, and VMS file protection scheme as they had in the user's account prior to archival. NASA ARCH is capable of handling up to 7 directory levels. Wildcards are supported. User commands include TEMPCOPY, DISKCOPY, DELETE, RESTORE, and DIRECTORY. The DIRECTORY command searches a directory of savesets covering all three archival areas, listing matches according to area, date, filename, or other criteria supplied by the user. The system manager commands include 1) ARCHIVE, to transfer the main buffer to duplicate magnetic tapes; 2) REPORT, to determine when the main buffer is full enough to archive; 3) INCREMENT, to back up the partially filled main buffer; and 4) FULLBACKUP, to back up the entire main buffer. On-line help files are provided for all NASA ARCH commands. NASA ARCH is written in DEC VAX DCL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.X. This program was developed in 1985.
CrossTalk. The Journal of Defense Software Engineering. Volume 16, Number 11, November 2003
2003-11-01
memory area, and stack pointer. These systems are classified as preemptive or nonpreemptive depending on whether they can preempt an existing task or not...of charge. The Software Technology Support Center was established at Ogden Air Logistics Center (AFMC) by Headquarters U.S. Air Force to help Air...device. A script file could be a list of commands for a command interpreter such as a batch file [15]. A communications port consists of a queue to hold
Razick, Sabry; Močnik, Rok; Thomas, Laurent F.; Ryeng, Einar; Drabløs, Finn; Sætrom, Pål
2014-01-01
Systematic data management and controlled data sharing aim at increasing reproducibility, reducing redundancy in work, and providing a way to efficiently locate complementing or contradicting information. One method of achieving this is collecting data in a central repository or in a location that is part of a federated system and providing interfaces to the data. However, certain data, such as data from biobanks or clinical studies, may, for legal and privacy reasons, often not be stored in public repositories. Instead, we describe a metadata cataloguing system and a software suite for reporting the presence of data from the life sciences domain. The system stores three types of metadata: file information, file provenance and data lineage, and content descriptions. Our software suite includes both graphical and command line interfaces that allow users to report and tag files with these different metadata types. Importantly, the files remain in their original locations with their existing access-control mechanisms in place, while our system provides descriptions of their contents and relationships. Our system and software suite thereby provide a common framework for cataloguing and sharing both public and private data. Database URL: http://bigr.medisin.ntnu.no/data/eGenVar/ PMID:24682735
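A minimal sketch of the "report files in place" idea: file information and a lineage pointer are recorded centrally while the data itself never moves. The SQLite table, field names, and helper functions are illustrative assumptions, not eGenVar's actual schema or interfaces.

import hashlib
import os
import sqlite3

def file_record(path):
    """Collect basic file information without moving the file itself."""
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return (os.path.abspath(path), os.path.getsize(path), digest)

def register(db, path, description, derived_from=None):
    """Report a file to the catalogue: location, size, checksum, free-text description, lineage."""
    db.execute(
        "INSERT INTO files(path, size, sha256, description, derived_from) VALUES (?,?,?,?,?)",
        file_record(path) + (description, derived_from),
    )
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE files(
    path TEXT PRIMARY KEY, size INTEGER, sha256 TEXT,
    description TEXT, derived_from TEXT)""")

# Example: register this script itself; real use would register data files and their provenance.
register(db, __file__, "analysis script")
print(db.execute("SELECT path, size FROM files").fetchall())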
Solar heating and domestic hot water system installed at North Dallas High School
NASA Technical Reports Server (NTRS)
1980-01-01
The solar energy system located at the North Dallas High School, Dallas, Texas, is discussed. The system is designed as a retrofit in a three-story, concrete-frame high school building with a basement. Extracts from the site files, specification references for solar modifications to the existing building heating and domestic hot water systems, drawings, and installation, operation and maintenance instructions are included.
BOREAS HYD-6 Moss/Humus Moisture Data
NASA Technical Reports Server (NTRS)
Peck, Eugene L.; Hall, Forrest G. (Editor); Knapp, David E. (Editor); Carroll, Thomas; Smith, David E. (Technical Monitor)
2000-01-01
The Boreal Ecosystem-Atmosphere Study (BOREAS) Hydrology (HYD)-6 team collected several data sets related to the moisture content of soil and overlying humus layers. This data set contains water content measurements of the moss/humus layer, where it existed. These data were collected along various flight lines in the Southern Study Area (SSA) and Northern Study Area (NSA) during 1994. The data are available in tabular ASCII files. The HYD-06 moss/humus moisture data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).
BOREAS HYD-9 Belfort Rain Gauge Data
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Kouwen, Nick; Soulis, Ric; Jenkinson, Wayne; Graham, Allyson; Knapp, David E. (Editor); Smith, David E. (Technical Monitor)
2000-01-01
The Boreal Ecosystem-Atmosphere Study (BOREAS) Hydrology (HYD)-6 team collected several data sets related to the moisture content of soil and overlying humus layers. This data set contains water content measurements of the moss/humus layer, where it existed. These data were collected along various flight lines in the Southern Study Area (SSA) and Northern Study Area (NSA) during 1994. The data are available in tabular ASCII files. The HYD-9 Belfort rain gauge data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).
Logani, Ajay; Shah, Naseem
2008-01-01
To comparatively evaluate the amount of apically extruded debris when ProTaper hand, ProTaper rotary and ProFile systems were used for the instrumentation of root canals. Thirty minimally curved, mature, human mandibular premolars with single canals were randomly divided into three groups of ten teeth each. Each group was instrumented using one of the three instrumentation systems: ProTaper hand, ProTaper rotary and ProFile. Five milliliters of sterile water were used as an irrigant. Debris extruded was collected in preweighed polyethylene vials and the extruded irrigant was evaporated. The weight of the dry extruded debris was established by comparing the pre- and postinstrumentation weight of the polyethylene vials for each group. The Kruskal-Wallis nonparametric test and Mann-Whitney U test were applied to determine if significant differences existed among the groups (P < 0.05). All instruments tested produced a measurable amount of debris. No statistically significant difference was observed between the ProTaper hand and ProFile systems (P > 0.05). Although the ProTaper rotary extruded a relatively higher amount of debris, no statistically significant difference was observed between this type and the ProTaper hand instruments (P > 0.05). The ProTaper rotary extruded a significantly greater amount of debris compared to the ProFile system (P < 0.05). Within the limitations of this study, it can be concluded that all instruments tested produced apical extrusion of debris. The ProTaper rotary extruded a significantly higher amount of debris than the ProFile.
NASA Astrophysics Data System (ADS)
Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.
2015-12-01
A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.
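The shared-cache behaviour described here (one site-wide copy served to all worker nodes, fetched from an upstream replica only on a miss) can be sketched in a few lines. The cache path, upstream URL, and function below are hypothetical; the real CVMFS client handles this internally and nothing here is CVMFS code.

import os
import shutil
import urllib.request

SHARED_CACHE = "/shared/alien-cache"      # hypothetical path on a site-wide data server
UPSTREAM = "http://stratum1.example.org"  # hypothetical replica (Stratum 1) server

def fetch(object_hash, dest):
    """Serve a content-addressed object from the shared cache, pulling it upstream only once."""
    cached = os.path.join(SHARED_CACHE, object_hash[:2], object_hash)
    if os.path.exists(cached):
        shutil.copyfile(cached, dest)      # hit: another worker already fetched it
        return "cache hit"
    os.makedirs(os.path.dirname(cached), exist_ok=True)
    urllib.request.urlretrieve(f"{UPSTREAM}/data/{object_hash}", cached)
    shutil.copyfile(cached, dest)          # miss: fetched once, now shared by every node at the site
    return "cache miss"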
Evaluation of a data dictionary system. [information dissemination and computer systems programs
NASA Technical Reports Server (NTRS)
Driggers, W. G.
1975-01-01
The usefulness of a data dictionary/directory system for achieving optimum benefits from existing and planned investments in computer data files in the Data Systems Development Branch and the Institutional Data Systems Division was investigated. Potential applications of the data catalogue system are discussed, along with an evaluation of the system. Other topics discussed include data description, data structure, programming aids, programming languages, program networks, and test data.
Enhancement/upgrade of Engine Structures Technology Best Estimator (EST/BEST) Software System
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2003-01-01
This report describes the work performed during the contract period and the capabilities included in the EST/BEST software system. The developed EST/BEST software system includes the integrated NESSUS, IPACS, COBSTRAN, and ALCCA computer codes required to perform the engine cycle mission and component structural analysis. Also, the interactive input generator for the NESSUS, IPACS, and COBSTRAN computer codes has been developed and integrated with the EST/BEST software system. The input generator allows the user to create input from scratch as well as edit existing input files interactively. Since it has been integrated with the EST/BEST software system, it enables the user to modify EST/BEST-generated files and perform the analysis to evaluate the benefits. Appendix A gives details of how to use the newly added features in the EST/BEST software system.
An integrated software system for geometric correction of LANDSAT MSS imagery
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Esilva, A. J. F. M.; Camara-Neto, G.; Serra, P. R. M.; Desousa, R. C. M.; Mitsuo, Fernando Augusta, II
1984-01-01
A system for geometrically correcting LANDSAT MSS imagery includes all phases of processing, from receiving a raw computer compatible tape (CCT) to the generation of a corrected CCT (or UTM mosaic). The system comprises modules for: (1) control of the processing flow; (2) calculation of satellite ephemeris and attitude parameters; (3) generation of uncorrected files from raw CCT data; (4) creation, management, and maintenance of a ground control point library; (5) determination of the image correction equations, using attitude and ephemeris parameters and existing ground control points; (6) generation of corrected LANDSAT files, using the equations determined beforehand; (7) union of LANDSAT scenes to produce a UTM mosaic; and (8) generation of the output tape, in super-structure format.
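Step (5), fitting correction equations from ground control points, can be illustrated with a least-squares affine fit. The GCP coordinates below are made up, and a real system combines this with ephemeris and attitude models rather than relying on GCPs alone.

import numpy as np

# Hypothetical ground control points: (line, sample) in the raw image -> (easting, northing) in UTM.
gcps_image = np.array([[100, 200], [150, 900], [800, 250], [900, 950]], float)
gcps_map   = np.array([[500100, 4200050], [500520, 4199990], [500130, 4199300], [500560, 4199250]], float)

# Fit an affine transform  map = [line, sample, 1] @ coeffs  by least squares.
design = np.hstack([gcps_image, np.ones((len(gcps_image), 1))])
coeffs, *_ = np.linalg.lstsq(design, gcps_map, rcond=None)

def image_to_map(line, sample):
    """Apply the fitted correction equations to one pixel location."""
    return np.array([line, sample, 1.0]) @ coeffs

print(image_to_map(500, 500))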
Mobile Care (Moca) for Remote Diagnosis and Screening
Celi, Leo Anthony; Sarmenta, Luis; Rotberg, Jhonathan; Marcelo, Alvin; Clifford, Gari
2010-01-01
Moca is a cell phone-facilitated clinical information system to improve diagnostic, screening and therapeutic capabilities in remote resource-poor settings. The software allows transmission of any medical file, whether a photo, x-ray, audio or video file, through a cell phone to (1) a central server for archiving and incorporation into an electronic medical record (to facilitate longitudinal care, quality control, and data mining), and (2) a remote specialist for real-time decision support (to leverage expertise). The open source software is designed as an end-to-end clinical information system that seamlessly connects health care workers to medical professionals. It is integrated with OpenMRS, an existing open source medical records system commonly used in developing countries. PMID:21822397
DOCU-TEXT: A tool before the data dictionary
NASA Technical Reports Server (NTRS)
Carter, B.
1983-01-01
DOCU-TEXT, a proprietary software package that aids in the production of documentation for a data processing organization and that can be installed and operated only on IBM computers, is discussed. In organizing information that ultimately will reside in a data dictionary, DOCU-TEXT proved to be a useful documentation tool for extracting information from existing production jobs, procedure libraries, system catalogs, control data sets, and related files. DOCU-TEXT reads these files to derive data that is useful at the system level. The output of DOCU-TEXT is a series of user-selectable reports. These reports can reflect the interactions within a single job stream, a complete system, or all the systems in an installation. Any single report, or group of reports, can be generated in an independent documentation pass.
NASA Technical Reports Server (NTRS)
Hinton, David A.
2001-01-01
A ground-based system has been developed to demonstrate the feasibility of automating the process of collecting relevant weather data, predicting wake vortex behavior from a data base of aircraft, prescribing safe wake vortex spacing criteria, estimating system benefit, and comparing predicted and observed wake vortex behavior. This report describes many of the system algorithms, features, limitations, and lessons learned, as well as suggested system improvements. The system has demonstrated concept feasibility and the potential for airport benefit. Significant opportunities exist however for improved system robustness and optimization. A condensed version of the development lab book is provided along with samples of key input and output file types. This report is intended to document the technical development process and system architecture, and to augment archived internal documents that provide detailed descriptions of software and file formats.
NASA Langley Research Center's distributed mass storage system
NASA Technical Reports Server (NTRS)
Pao, Juliet Z.; Humes, D. Creig
1993-01-01
There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existing mass storage system to the DMSS.
47 CFR 25.118 - Modifications not requiring prior authorization.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., notification required. Authorized earth station operators may make the following modifications to their... electronically through the International Bureau Filing System (IBFS) in accordance with the applicable provisions... electrically identical to the existing equipment, an authorized earth station licensee may add, change or...
47 CFR 25.118 - Modifications not requiring prior authorization.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., notification required. Authorized earth station operators may make the following modifications to their... electronically through the International Bureau Filing System (IBFS) in accordance with the applicable provisions... electrically identical to the existing equipment, an authorized earth station licensee may add, change or...
47 CFR 25.118 - Modifications not requiring prior authorization.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., notification required. Authorized earth station operators may make the following modifications to their... electronically through the International Bureau Filing System (IBFS) in accordance with the applicable provisions... electrically identical to the existing equipment, an authorized earth station licensee may add, change or...
47 CFR 25.118 - Modifications not requiring prior authorization.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., notification required. Authorized earth station operators may make the following modifications to their... electronically through the International Bureau Filing System (IBFS) in accordance with the applicable provisions... electrically identical to the existing equipment, an authorized earth station licensee may add, change or...
47 CFR 25.118 - Modifications not requiring prior authorization.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., notification required. Authorized earth station operators may make the following modifications to their... electronically through the International Bureau Filing System (IBFS) in accordance with the applicable provisions... electrically identical to the existing equipment, an authorized earth station licensee may add, change or...
Development and Integration of WWW-Based Services in an Existing University Environment.
ERIC Educational Resources Information Center
Garofalakis, John; Kappos, Panagiotis; Tsakalidis, Athanasios; Tsaknakis, John; Tzimas, Giannis; Vassiliadis, Vassilios
This paper describes the experience and the problems solved in the process of developing and integrating advanced World Wide Web-based services into the University of Patras (Greece) system. In addition to basic network services (e.g., e-mail, file transfer protocol), the final system will integrate the following set of advanced services: a…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-15
... to use their existing quotation systems to enter quotes for complex order strategies rather than... posted on the complex order book are not firm, nor included in the national market system. The Exchange... Complex Orders July 11, 2011. Pursuant to Section 19(b)(1) of the Securities Exchange Act of 1934 (the...
Will Courts Shape Value-Added Methods for Teacher Evaluation? ACT Working Paper Series. WP-2014-2
ERIC Educational Resources Information Center
Croft, Michelle; Buddin, Richard
2014-01-01
As more states begin to adopt teacher evaluation systems based on value-added measures, legal challenges have been filed both seeking to limit the use of value-added measures ("Cook v. Stewart") and others seeking to require more robust evaluation systems ("Vergara v. California"). This study reviews existing teacher evaluation…
New directions in the CernVM file system
NASA Astrophysics Data System (ADS)
Blomer, Jakob; Buncic, Predrag; Ganis, Gerardo; Hardi, Nikola; Meusel, Rene; Popescu, Radu
2017-10-01
The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into two new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute the container image contents. Compared to native container image distribution (e.g. through the “Docker registry”), CernVM-FS massively reduces the network traffic for image distribution. This has been shown, for instance, by a prototype integration of CernVM-FS into Mesos developed by Mesosphere, Inc. We present a path for a smooth integration of CernVM-FS and Docker. Secondly, CernVM-FS recently raised new interest as an option for the distribution of experiment conditions data. Here, the focus is on improved versioning capabilities of CernVM-FS that allow the conditions data of a run period to be linked to the state of a CernVM-FS repository. Lastly, CernVM-FS has been extended to provide a name space for physics data for the LIGO and CMS collaborations. Searching through a data namespace is often done by a central, experiment-specific database service. A name space on CernVM-FS can particularly benefit from an existing, scalable infrastructure and from the POSIX file system interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Christopher J; Ahrens, James P; Wang, Jun
2010-10-15
Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network are often a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide the necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing using VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation, which showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.
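The locality-aware scheduling idea (assign reader processes to nodes that already hold the requested blocks) can be sketched as a greedy assignment. The block-location map and names below are made up; this is not VisIO's API or its actual algorithm.

from collections import defaultdict

# Hypothetical block -> replica-host map, as a distributed file system might report it.
block_locations = {
    "part-000": ["node1", "node3"],
    "part-001": ["node2", "node3"],
    "part-002": ["node1", "node2"],
}

def schedule(blocks, locations):
    """Greedy locality-aware assignment: prefer the replica host with the lightest load."""
    load = defaultdict(int)
    assignment = {}
    for block in blocks:
        hosts = locations.get(block, [])
        chosen = min(hosts, key=lambda h: load[h]) if hosts else "any-node"
        assignment[block] = chosen
        load[chosen] += 1
    return assignment

print(schedule(list(block_locations), block_locations))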
STREAM Table Program: User's manual and program document
NASA Technical Reports Server (NTRS)
Hiles, K. H.
1981-01-01
This program was designed to be an editor for the Lewis Chemical Equilibrium program input files and is used for storage, manipulation and retrieval of the large amount of data required. The files are based on the facility name, case number, and table number. The data is easily recalled by supplying the sheet number to be displayed. The retrieval basis is a sheet defined to be all of the individual flow streams which comprise a given portion of a coal gasification system. A sheet may cover more than one page of output tables. The program allows for the insertion of a new table, revision of existing tables, deletion of existing tables, or the printing of selected tables. No calculations are performed. Only pointers are used to keep track of the data.
Robbins, Lisa L.; Hansen, Mark; Raabe, Ellen; Knorr, Paul O.; Browne, Joseph
2007-01-01
The Florida shelf represents a finite source of economic resources, including commercial and recreational fisheries, tourism, recreation, sand and gravel resources, phosphate, and freshwater reserves. Yet the basic information needed to locate resources, or to interpret and utilize existing data, comes from many sources, dates, and formats. A multi-agency effort is underway to coordinate and prioritize the compilation of suitable datasets for an integrated information system of Florida’s coastal and ocean resources. This report and the associated data files represent part of the effort to make data accessible and useable with computer-mapping systems, web-based technologies, and user-friendly visualization tools. Among the datasets compiled and developed are seafloor imagery, marine sediment data, and existing bathymetric data. A U.S. Geological Survey-sponsored workshop in January 2007 resulted in the establishment of mapping priorities for the state. Bathymetry was identified as a common priority among agencies and researchers. State-of-the-art computer-mapping techniques and data-processing tools were used to develop shelf-wide raster and vector data layers. Florida Shelf Habitat (FLaSH) Mapping Project (http://coastal.er.usgs.gov/flash) endeavors to locate available data, identify data gaps, synthesize existing information, and expand our understanding of geologic processes in our dynamic coastal and marine systems.
Analysis of the access patterns at GSFC distributed active archive center
NASA Technical Reports Server (NTRS)
Johnson, Theodore; Bedet, Jean-Jacques
1996-01-01
The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been operational for more than two years. Its mission is to support existing and pre-Earth Observing System (EOS) Earth science datasets, facilitate scientific research, and test Earth Observing System Data and Information System (EOSDIS) concepts. Over 550,000 files and documents have been archived, and more than six terabytes have been distributed to the scientific community. Information about user requests and file access patterns, and their impact on system loading, is needed to optimize current operations and to plan for future archives. To facilitate the management of daily activities, the GSFC DAAC has developed a data base system to track correspondence, requests, ingestion and distribution. In addition, several log files which record transactions on Unitree are maintained and periodically examined. This study identifies some of the user request and file access patterns at the GSFC DAAC during 1995. The analysis is limited to the subset of orders for which the data files are under the control of the Hierarchical Storage Management (HSM) system Unitree. The results show that most of the data volume ordered was for two data products. The volume was also mostly made up of level 3 and 4 data, and most of the volume was distributed on 8 mm and 4 mm tapes. In addition, most of the volume ordered was for deliveries in North America, although there was significant worldwide use. There was a wide range of request sizes in terms of volume and number of files ordered. On average, 78.6 files were ordered per request. Using the data managed by Unitree, several caching algorithms have been evaluated for both hit rate and the overhead ('cost') associated with the movement of data from near-line devices to disks. The algorithm called LRU/2-bin was found to be the best for this workload, but the STbin algorithm also worked well.
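Evaluating a caching policy against a request trace, as done here, amounts to replaying the trace and counting hits. A minimal sketch with plain LRU is shown below; the LRU/2-bin and STbin policies from the study are not reproduced, and the trace is made up.

from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    """Replay a file-request trace against an LRU cache holding `capacity` files."""
    cache, hits = OrderedDict(), 0
    for name in trace:
        if name in cache:
            hits += 1
            cache.move_to_end(name)          # refresh recency on a hit
        else:
            cache[name] = True
            if len(cache) > capacity:
                cache.popitem(last=False)    # evict the least recently used file
    return hits / len(trace)

trace = ["a", "b", "a", "c", "a", "d", "b", "a", "e", "a"]
print(lru_hit_rate(trace, capacity=3))

A fuller evaluation would also weight each miss by the cost of staging the file from near-line storage, which is what the study's 'cost' metric captures.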
ECFS: A decentralized, distributed and fault-tolerant FUSE filesystem for the LHCb online farm
NASA Astrophysics Data System (ADS)
Rybczynski, Tomasz; Bonaccorsi, Enrico; Neufeld, Niko
2014-06-01
The LHCb experiment records millions of proton collisions every second, but only a fraction of them are useful for LHCb physics. In order to filter out the "bad events", a large farm of x86 servers (~2000 nodes) has been put in place. These servers boot from and run from NFS; however, they use their local disk to temporarily store data which cannot be processed in real time ("data-deferring"). These events are subsequently processed when there are no live data coming in. The effective CPU power is thus greatly increased. This gain in CPU power depends critically on the availability of the local disks. For cost and power reasons, mirroring (RAID-1) is not used, leading to considerable operational headaches with failing disks, disk errors, and server failures induced by faulty disks. To mitigate these problems and increase the reliability of the LHCb farm, while at the same time keeping cost and power consumption low, an extensive study of existing highly available and distributed file systems has been done. While many distributed file systems provide reliability by "file replication", none of the evaluated ones supports erasure algorithms. A decentralised, distributed and fault-tolerant "write once read many" file system has been designed and implemented as a proof of concept, with fault tolerance without expensive (in terms of disk space) file replication techniques, and a unique namespace, as its main goals. This paper describes the design and the implementation of the Erasure Codes File System (ECFS) and presents the specialised FUSE interface for Linux. Depending on the encoding algorithm, ECFS will use a certain number of target directories as a backend to store the segments that compose the encoded data. When target directories are mounted via nfs/autofs, ECFS will act as a file system over network/block-level RAID over multiple servers.
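The space advantage of erasure coding over replication can be illustrated with the simplest case, a single XOR parity segment over k data segments (overhead 1/k instead of a full extra copy). ECFS supports configurable erasure algorithms; the sketch below is only this toy scheme, not ECFS code.

def encode(data: bytes, k: int):
    """Split data into k equal segments plus one XOR parity segment (zero-padded)."""
    seg_len = -(-len(data) // k)                       # ceiling division
    padded = data.ljust(k * seg_len, b"\0")
    segments = [bytearray(padded[i * seg_len:(i + 1) * seg_len]) for i in range(k)]
    parity = bytearray(seg_len)
    for seg in segments:
        for i, b in enumerate(seg):
            parity[i] ^= b
    return segments + [parity]

def recover(segments, missing_index):
    """Rebuild one lost segment by XORing all the surviving ones."""
    seg_len = len(next(s for i, s in enumerate(segments) if i != missing_index))
    rebuilt = bytearray(seg_len)
    for i, seg in enumerate(segments):
        if i == missing_index:
            continue
        for j, b in enumerate(seg):
            rebuilt[j] ^= b
    return rebuilt

pieces = encode(b"write once read many", k=3)
pieces[1] = None                        # simulate one failed target directory
print(bytes(recover(pieces, 1)))        # the lost middle segment is reconstructed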
TFaNS Tone Fan Noise Design/Prediction System. Volume 2; User's Manual; 1.4
NASA Technical Reports Server (NTRS)
Topol, David A.; Eversman, Walter
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides information on code input and file structure essential for potential users of TFaNS. This report is divided into three volumes: Volume 1, System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume 2, User's Manual, TFaNS Vers. 1.4; Volume 3, Evaluation of System Codes.
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1992-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LAN's) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Services (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.
Solar heating and hot water system installed at Arlington Raquetball Club, Arlington, Virginia
NASA Technical Reports Server (NTRS)
1981-01-01
A solar space and water heating system is described. The solar energy system consists of 2,520 sq. ft. of flat plate solar collectors and a 4,000 gallon solar storage tank. The transfer medium in the forced closed loop is a nontoxic antifreeze solution (50 percent water, 50 percent propylene glycol). The service hot water system consists of a preheat coil (60 ft. of 1 1/4 in copper tubing) located in the upper third of the solar storage tank and a recirculation loop between the preheat coil and the existing electric water heaters. The space heating system consists of two separate water to air heat exchangers located in the ducts of the existing space heating/cooling systems. The heating water is supplied from the solar storage tank. Extracts from site files, specification references for solar modifications to existing building heating and hot water systems, and installation, operation and maintenance instructions are included.
Music information retrieval in compressed audio files: a survey
NASA Astrophysics Data System (ADS)
Zampoglou, Markos; Malamos, Athanasios G.
2014-07-01
In this paper, we present an organized survey of the existing literature on music information retrieval systems in which descriptor features are extracted directly from the compressed audio files, without prior decompression to pulse-code modulation format. Avoiding the decompression step and utilizing the readily available compressed-domain information can significantly lighten the computational cost of a music information retrieval system, allowing application to large-scale music databases. We identify a number of systems relying on compressed-domain information and form a systematic classification of the features they extract, the retrieval tasks they tackle, and the degree to which they achieve an actual increase in overall speed, as well as any resulting loss in accuracy. Finally, we discuss recent developments in the field and the potential research directions they open toward ultra-fast, scalable systems.
Teaching Information Retrieval: Lessons from Cornell.
ERIC Educational Resources Information Center
Stewart, Linda Guyotte; Markiewicz, James
1986-01-01
This article describes two separate workshops offered to college faculty during the spring 1984 semester: one in online bibliographic searching, one in using computers to manage personal files of bibliographic references (examination of existing systems, types of software available). A telephone survey evaluation conducted 3 months after sessions…
EMERALD: A Flexible Framework for Managing Seismic Data
NASA Astrophysics Data System (ADS)
West, J. D.; Fouch, M. J.; Arrowsmith, R.
2010-12-01
The seismological community is challenged by the vast quantity of new broadband seismic data provided by large-scale seismic arrays such as EarthScope’s USArray. While this bonanza of new data enables transformative scientific studies of the Earth’s interior, it also illuminates limitations in the methods used to prepare and preprocess those data. At a recent seismic data processing focus group workshop, many participants expressed the need for better systems to minimize the time and tedium spent on data preparation in order to increase the efficiency of scientific research. Another challenge related to data from all large-scale transportable seismic experiments is that there currently exists no system for discovering and tracking changes in station metadata. This critical information, such as station location, sensor orientation, instrument response, and clock timing data, may change over the life of an experiment and/or be subject to post-experiment correction. Yet nearly all researchers utilize metadata acquired with the downloaded data, even though subsequent metadata updates might alter or invalidate results produced with older metadata. A third long-standing issue for the seismic community is the lack of easily exchangeable seismic processing codes. This problem stems directly from the storage of seismic data as individual time series files, and the history of each researcher developing his or her preferred data file naming convention and directory organization. Because most processing codes rely on the underlying data organization structure, such codes are not easily exchanged between investigators. To address these issues, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The goal of the EMERALD project is to provide seismic researchers with a unified, user-friendly, extensible system for managing seismic event data, thereby increasing the efficiency of scientific enquiry. EMERALD stores seismic data and metadata in a state-of-the-art open source relational database (PostgreSQL), and can, on a timed basis or on demand, download the most recent metadata, compare it with previously acquired values, and alert the user to changes. The backend relational database is capable of easily storing and managing many millions of records. The extensible, plug-in architecture of the EMERALD system allows any researcher to contribute new visualization and processing methods written in any of 12 programming languages, and a central Internet-enabled repository for such methods provides users with the opportunity to download, use, and modify new processing methods on demand. EMERALD includes data acquisition tools allowing direct importation of seismic data, and also imports data from a number of existing seismic file formats. Pre-processed clean sets of data can be exported as standard sac files with user-defined file naming and directory organization, for use with existing processing codes. The EMERALD system incorporates existing acquisition and processing tools, including SOD, TauP, GMT, and FISSURES/DHI, making much of the functionality of those tools available in a unified system with a user-friendly web browser interface. EMERALD is now in beta test. See emerald.asu.edu or contact john.d.west@asu.edu for more details.
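The metadata-change check described here (compare freshly downloaded station metadata with previously stored values and alert the user) reduces to a field-by-field diff. The station identifiers, field names, and values below are illustrative, and this is an in-memory sketch rather than EMERALD's PostgreSQL-backed implementation.

stored = {
    ("TA", "109C"): {"latitude": 32.8889, "longitude": -117.1051, "azimuth": 0.0},
    ("TA", "121A"): {"latitude": 34.2987, "longitude": -116.3565, "azimuth": 0.0},
}

fresh = {
    ("TA", "109C"): {"latitude": 32.8889, "longitude": -117.1051, "azimuth": 3.0},  # corrected orientation
    ("TA", "121A"): {"latitude": 34.2987, "longitude": -116.3565, "azimuth": 0.0},
}

def metadata_changes(old, new):
    """List every (station, field, old_value, new_value) difference so the user can be alerted."""
    changes = []
    for station, fields in new.items():
        for field, value in fields.items():
            previous = old.get(station, {}).get(field)
            if previous != value:
                changes.append((station, field, previous, value))
    return changes

for change in metadata_changes(stored, fresh):
    print("metadata changed:", change)

Flagging such changes matters because results computed with stale sensor orientations or locations may need to be recomputed, which is exactly the scenario the abstract warns about.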
Non-volatile main memory management methods based on a file system.
Oikawa, Shuichi
2014-01-01
There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because of its data persistency without a power supply, while it can be used as main memory because of its high performance, which matches that of DRAM. A number of studies have investigated its use for main memory and storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration allows NV memory to be utilized simultaneously as both main memory and storage. The presented methods use a file system as the basis for NV memory management. We implemented the proposed methods in the Linux kernel and performed an evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects caused by the longer access latency of NV memory through cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.
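The general idea of a file-system object backing byte-addressable "main memory" can be illustrated from user space with a file-backed mmap. The file name and size are arbitrary, an ordinary disk file stands in for NV memory, and this does not model the paper's Linux-kernel allocator or real NV hardware.

import mmap
import os

PATH = "nvram.img"        # an ordinary file standing in for a persistent memory region
SIZE = 4096

# Create (or reuse) the backing file, then map it so it can be addressed by byte offset.
if not os.path.exists(PATH):
    with open(PATH, "wb") as fh:
        fh.write(b"\0" * SIZE)

with open(PATH, "r+b") as fh:
    mem = mmap.mmap(fh.fileno(), SIZE)
    mem[0:5] = b"hello"   # byte-addressable store
    mem.flush()           # persist the update to the backing object
    print(bytes(mem[0:5]))
    mem.close()

The interesting kernel-level question raised by the paper is which file-system data structures make such allocations fast, since layouts tuned for hard disks are not automatically a good fit for byte-addressable memory.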
78 FR 5174 - Combined Notice of Filings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-24
... Existing Proceedings Docket Numbers: RP12-1067-002. Applicants: Leaf River Energy Center LLC. Description: Leaf River Energy Center LLC--Revised Compliance Filing to be effective 12/1/2012. Filed Date: 1/11/13...
Code of Federal Regulations, 2011 CFR
2011-01-01
... individual at the individual's request of the existence of records in an investigative file pertaining to...: Notifying an individual at the individual's request of the existence of records in an investigative file...) Alien Visits and Participation (DOE-52). (B) Clearance Board Cases (DOE-46). (C) Security Correspondence...
Code of Federal Regulations, 2013 CFR
2013-01-01
... individual at the individual's request of the existence of records in an investigative file pertaining to...: Notifying an individual at the individual's request of the existence of records in an investigative file...) Alien Visits and Participation (DOE-52). (B) Clearance Board Cases (DOE-46). (C) Security Correspondence...
Code of Federal Regulations, 2014 CFR
2014-01-01
... individual at the individual's request of the existence of records in an investigative file pertaining to...: Notifying an individual at the individual's request of the existence of records in an investigative file...) Alien Visits and Participation (DOE-52). (B) Clearance Board Cases (DOE-46). (C) Security Correspondence...
Code of Federal Regulations, 2012 CFR
2012-01-01
... individual at the individual's request of the existence of records in an investigative file pertaining to...: Notifying an individual at the individual's request of the existence of records in an investigative file...) Alien Visits and Participation (DOE-52). (B) Clearance Board Cases (DOE-46). (C) Security Correspondence...
78 FR 11701 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
... agencies will also have to provide training to staff members using the Electronic Form 19b-4 Filing System... will spend approximately 20 hours training all staff members who will use EFFS to submit Security-Based... training new compliance staff members and updating the training of existing compliance staff members to use...
77 FR 23474 - Combined Notice of Filings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-19
...: Young Gas Storage Company, Ltd. Description: EBB Notice Categories to be effective 5/15/2012. Filed Date... intervention is necessary to become a party to the proceeding. Filings in Existing Proceedings Docket Numbers... requirements, interventions, protests, and service can be found at: http://www.ferc.gov/docs-filing/efiling...
78 FR 13050 - Combined Notice of Filings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-26
... be considered, but intervention is necessary to become a party to the proceeding. Filings in Existing Proceedings Docket Numbers: RP13-106-002. Applicants: Young Gas Storage Company, Ltd. Description: Young NAESB..., protests, and service can be found at: http://www.ferc.gov/docs-filing/efiling/filing-req.pdf . For other...
A Lightweight, High-performance I/O Management Package for Data-intensive Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jun Wang
2007-07-17
File storage systems are playing an increasingly important role in high-performance computing as the performance gap between CPU and disk increases. Developing an entire system from scratch could take a long time, so solutions will have to be built as extensions to existing systems. If new portable, customized software components are plugged into these systems, better sustained I/O performance and higher scalability will be achieved, and the development cycle of next-generation parallel file systems will be shortened. The overall objective of this ECPI development plan is to develop a lightweight, customized, high-performance I/O management package named LightI/O to extend and leverage current parallel file systems used by DOE. During this period, we developed a novel component of LightI/O, prototyped it in PVFS2, and evaluated the resulting extended PVFS2 system on data-intensive applications. The preliminary results indicate the extended PVFS2 delivers better performance and reliability to users. A strong collaborative effort between the PI at the University of Nebraska-Lincoln and the DOE collaborators, Drs. Rob Ross and Rajeev Thakur at Argonne National Laboratory, who lead the PVFS2 group, makes the project more promising.
78 FR 6772 - Failure To File Gain Recognition Agreements and Other Required Filings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-31
... regulations that would amend the existing rules governing the consequences to U.S. persons for failing to file... current law, if a U.S. transferor fails to timely file an initial GRA, or fails to comply in any material... fails to timely file an annual certification), the U.S. transferor is subject to full gain recognition...
A Customizable Importer for the Clinical Data Warehouses PaDaWaN and I2B2.
Fette, Georg; Kaspar, Mathias; Dietrich, Georg; Ertl, Maximilian; Krebs, Jonathan; Stoerk, Stefan; Puppe, Frank
2017-01-01
In recent years, clinical data warehouses (CDW) storing routine patient data have become more and more popular to support scientific work in the medical domain. Although CDW systems provide interfaces to import new data, these interfaces have to be used by processing tools that are often not included in the systems themselves. In order to establish an extraction-transformation-load (ETL) workflow, already existing components have to be taken or new components have to be developed to perform the load part of the ETL. We present a customizable importer for the two CDW systems PaDaWaN and I2B2, which is able to import the most common import formats (plain text, CSV and XML files). In order to be run, the importer only needs a configuration file with the user credentials for the target CDW and a list of XML import configuration files, which determine how already exported data is indented to be imported. The importer is provided as a Java program, which has no further software requirements.
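The load step of such an ETL workflow can be sketched for the CSV case. The mapping dictionary below is a hypothetical stand-in for the XML import configuration files mentioned above; the real importer targets PaDaWaN and I2B2 and is configured in XML, so this is only an illustration of the idea.

```python
# Hedged sketch of the "load" step for a CSV source. The mapping dict is a
# hypothetical stand-in for the XML import configuration; the CDW-specific
# insert is supplied by the caller.
import csv

def load_csv(csv_path, mapping, insert_fact):
    """Read a CSV export and hand one fact per row/column to the target CDW.

    mapping: {csv_column_name: cdw_concept_code}
    insert_fact: callable (patient_id, concept_code, value) provided by the
                 CDW-specific backend.
    """
    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            patient_id = row["patient_id"]            # assumed key column
            for column, concept in mapping.items():
                value = row.get(column, "").strip()
                if value:
                    insert_fact(patient_id, concept, value)

# Example with a dummy backend that just prints each fact:
# load_csv("labs.csv", {"creatinine": "LOINC:2160-0"},
#          lambda pid, code, val: print(pid, code, val))
```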
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-22
... project would be a closed-loop pumped storage system, with an initial fill from the existing Otter Creek...: Federal Power Act 16 U.S.C. 791(a)-825(r). h. Applicant Contact: Parker Knoll Hydro, LLC., 975 South State... system; (12) approximately 1 mile of 345-kV transmission line; and (13) appurtenant facilities. The...
Cost Considerations in Cloud Computing
2014-01-01
investments. 2. Database Options The potential promise that "big data" analytics holds for many enterprise mission areas makes relevant the question of the...development of a range of new distributed file systems and databases that have better scalability properties than traditional SQL databases. Hadoop ... data. Many systems exist that extend or supplement Hadoop, such as Apache Accumulo, which provides a highly granular mechanism for managing security
Geographic information system (GIS) representation of coal-bearing areas in India and Bangladesh
Trippi, Michael H.; Tewalt, Susan J.
2011-01-01
Geographic information system (GIS) information may facilitate energy studies, which in turn provide input for energy policy decisions. Prior to this study, no GIS file representing the occurrence of coal-bearing units in India or Bangladesh was known to exist. This Open-File Report contains downloadable shapefiles representing the coalfields of India and Bangladesh and a limited number of chemical and petrographic analyses of India and Bangladesh coal samples. Also included are maps of India and Bangladesh showing the locations of the coalfields and coal samples in the shapefiles, figures summarizing the stratigraphic units in the coalfields of India and Bangladesh, and a brief report summarizing the stratigraphy and geographic locations of coal-bearing deposits in India and Bangladesh.
Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.
2016-12-01
Long running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructure, resulting in more complex tool development to encompass all possible storage architectures used for the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) are an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures seamless integration of all the different storage architectures, ranging from standard block-based POSIX-compliant storage disks to object-based architectures such as the S3-compliant HGST Active Archive System and Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application responsible for organizing data into a tree using metadata, determining where data is stored, and providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.
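The loose coupling of namespace, placement, and retrieval can be illustrated with a conceptual sketch. This is not LVFS source; the two backends and the placement policy below are invented stand-ins showing how a POSIX directory and an object store could sit behind one uniform interface.

```python
# Conceptual sketch (not LVFS code) of keeping namespace, placement, and
# retrieval loosely coupled so that POSIX disks and object stores can sit
# behind one uniform interface.
import os

class PosixBackend:
    def __init__(self, root):
        self.root = root
    def put(self, key, data):
        path = os.path.join(self.root, key)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)
    def get(self, key):
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()

class ObjectBackend:
    """Stand-in for an S3/Kinetic-style object store."""
    def __init__(self):
        self.objects = {}
    def put(self, key, data):
        self.objects[key] = bytes(data)
    def get(self, key):
        return self.objects[key]

class VirtualFS:
    """Namespace component: maps logical paths to the backend holding them."""
    def __init__(self, placement):
        self.placement, self.index = placement, {}
    def write(self, path, data):
        backend = self.placement(path, data)   # placement policy is pluggable
        backend.put(path, data)
        self.index[path] = backend             # the namespace remembers location
    def read(self, path):
        return self.index[path].get(path)

# Placement policy: small files to the object store, large files to POSIX disk.
posix, objects = PosixBackend("/tmp/lvfs_demo"), ObjectBackend()
vfs = VirtualFS(lambda path, data: objects if len(data) < 1 << 20 else posix)
vfs.write("granules/a.hdf", b"small test payload")
assert vfs.read("granules/a.hdf") == b"small test payload"
```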
18 CFR 5.5 - Notification of intent.
Code of Federal Regulations, 2010 CFR
2010-04-01
...'s intention to file an application for an original license, or, in the case of an existing licensee..., DEPARTMENT OF ENERGY REGULATIONS UNDER THE FEDERAL POWER ACT INTEGRATED LICENSE APPLICATION PROCESS § 5.5... that it intends to file an application for an original, new, or subsequent license, or for an existing...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-21
... filed an immediately effective proposal regarding Remote Specialists (the ``Remote Specialist filing'') that expanded the Remote Specialist concept.\\3\\ By the Remote Specialist filing, the Exchange enhanced the existing Remote Specialist \\4\\ model so that all eligible ROTs \\5\\ on the Exchange could function...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walkup, Elizabeth
This software is an analyzer for automated sandbox analysis of malware on the OS X operating system. It runs inside an OS X virtual machine to collect data about what happens when a given file is opened or run. As of August 2014, there was no sandbox software for Mac OS X malware, as it requires different methods from those used on the Windows OS (which most sandboxes are written for). This software adds OS X analysis capabilities to an existing open-source sandbox, Cuckoo Sandbox (http://cuckoosandbox.org/), which previously only worked for Windows. The analyzer itself can take many different types of files as input: the traditional Mach-O and FAT executables, .app files, zip files, Python scripts, Java archives, and web pages, as well as PDFs and other documents. While the file is running, the analyzer also simulates rudimentary human interaction with clicks and mouse movements in order to bypass the tests some malware use to see if they are being analyzed. The analyzer outputs several different kinds of data: function call traces, network captures, screenshots, and all created and modified files. This work also includes a static analysis Cuckoo module for Mach-O binary files. It extracts file structures, code library imports and exports, and signatures. This data can be used along with the analyzer results to create signatures for malware.
Integrated visualization of remote sensing data using Google Earth
NASA Astrophysics Data System (ADS)
Castella, M.; Rigo, T.; Argemi, O.; Bech, J.; Pineda, N.; Vilaclara, E.
2009-09-01
The need for advanced visualization tools for meteorological data has led in recent years to the development of sophisticated software packages, either by observing-system manufacturers or by third-party solution providers. For example, manufacturers of remote sensing systems such as weather radars or lightning detection systems include zoom, product selection, and archive access capabilities, as well as quantitative tools for data analysis, as standard features which are highly appreciated in weather surveillance or post-event case study analysis. However, the fact that each manufacturer has its own visualization system and data formats hampers the usability and integration of different data sources. In this context, Google Earth (GE) offers the possibility of combining several types of graphical information in a single visualization system which can be easily accessed by users. The Meteorological Service of Catalonia (SMC) has been evaluating the use of GE as a visualization platform for surveillance tasks during adverse weather events. First experiences concern the real-time integration of remote sensing data: radar, lightning, and satellite. The tool shows the animation of the combined products over the last hour, giving a good picture of the meteorological situation. One of the main advantages of this product is that it is easy to install on many computers and does not have high computational requirements. In addition, GE provides information about the areas most affected by heavy rain or other weather phenomena. Conversely, the main disadvantage is that the product offers only qualitative information, and quantitative data are only available through the graphical display (i.e., through color scales not associated with physical values that users can access easily). The procedure developed to run in real time is divided into three parts. First, a crontab file launches different applications depending on the data type (satellite, radar, or lightning) to be treated. For each type of data, the launch interval is different, ranging from 5 (satellite and lightning) to 6 minutes (radar). The second part is the use of IDL and ENVI programs, which search each archive file for the latest images within one hour. In the case of lightning data, the files are generated by the procedure, while for the others the procedure searches for existing imagery. Finally, the procedure generates the KML files and metadata information required by GE and sends them to the internal server. At the same time, on the local computer where GE is running, KML files reference the server copies and update the displayed information. Another application that has been evaluated is the analysis of past events. In this sense, further work is devoted to developing access procedures to archived data via cgi scripts in order to retrieve and convert the information into a format suitable for GE. The presentation includes examples of the evaluation of the use of GE and a brief comparison with other existing visualization systems available within the SMC.
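The KML-generation step can be sketched as follows. The overlay name, image URL, bounding box, and refresh interval below are placeholders rather than SMC values; the sketch only shows the kind of GroundOverlay document that GE can periodically refresh from a server.

```python
# Hedged sketch of the KML-generation step: produce a GroundOverlay that GE
# refreshes periodically. URL, bounding box, and interval are placeholders.
def radar_overlay_kml(image_url, north, south, east, west, refresh_s=300):
    """Return a KML GroundOverlay document as a string."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>Radar composite (last hour)</name>
    <Icon>
      <href>{image_url}</href>
      <refreshMode>onInterval</refreshMode>
      <refreshInterval>{refresh_s}</refreshInterval>
    </Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>"""

with open("radar.kml", "w", encoding="utf-8") as out:
    out.write(radar_overlay_kml("http://server.example/radar_latest.png",
                                43.0, 40.0, 3.5, 0.0))
```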
FLASH Interface; a GUI for managing runtime parameters in FLASH simulations
NASA Astrophysics Data System (ADS)
Walker, Christopher; Tzeferacos, Petros; Weide, Klaus; Lamb, Donald; Flocke, Norbert; Feister, Scott
2017-10-01
We present FLASH Interface, a novel graphical user interface (GUI) for managing runtime parameters in simulations performed with the FLASH code. FLASH Interface supports full text search of available parameters; provides descriptions of each parameter's role and function; allows for the filtering of parameters based on categories; performs input validation; and maintains all comments and non-parameter information already present in existing parameter files. The GUI can be used to edit existing parameter files or generate new ones. FLASH Interface is open source and was implemented with the Electron framework, making it available on Mac OSX, Windows, and Linux operating systems. The new interface lowers the entry barrier for new FLASH users and provides an easy-to-use tool for experienced FLASH simulators. U.S. Department of Energy (DOE), NNSA ASC/Alliances Center for Astrophysical Thermonuclear Flashes, U.S. DOE NNSA ASC through the Argonne Institute for Computing in Science, U.S. National Science Foundation.
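The core file-handling task such a tool performs can be sketched briefly. The sketch below assumes a simple "name = value" parameter file with '#' comments, which approximates FLASH's runtime parameter files; it is an illustration, not FLASH Interface source, and preserves comments and unrecognized lines as described above.

```python
# Minimal sketch of reading/updating a "name = value" runtime-parameter file
# while preserving comments and unknown lines. Assumes '#'-style comments;
# not the FLASH Interface source.
def update_parameter_file(in_path, out_path, new_values):
    """Rewrite in_path with values from new_values, leaving everything else intact."""
    lines_out = []
    with open(in_path) as f:
        for line in f:
            stripped = line.split("#", 1)[0].strip()
            if "=" in stripped:
                name = stripped.split("=", 1)[0].strip()
                if name in new_values:
                    line = f"{name} = {new_values[name]}\n"
            lines_out.append(line)          # comments and other lines pass through
    with open(out_path, "w") as f:
        f.writelines(lines_out)

# e.g. update_parameter_file("flash.par", "flash_new.par", {"cfl": 0.4})
```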
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-20
.... Critical infrastructure means existing and proposed systems and assets, whether physical or virtual, the....ferc.gov/help/submission-guide.asp . To file the document electronically, access the Commission's Web... using the ``eLibrary'' link. For user assistance, contact [email protected] or toll-free at...
78 FR 24443 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-25
... agencies will also have to provide training to staff members using the Electronic Form 19b-4 Filing System... will spend approximately 20 hours training all staff members who will use EFFS to submit Security-Based... training new compliance staff members and updating the training of existing compliance staff members to use...
76 FR 32997 - Privacy Act of 1974: Update Existing System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-07
... with E.O. 13490, section 4(e), January 21, 2009. These records include the ethics pledges and all... media, and other general personnel records files, is the official repository of the records, reports of... organizations, including news media, which grant or publicize employee recognition. i. To consider employees for...
75 FR 65034 - Petition for Modification of Existing Mandatory Safety Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-21
... application, processing, and disposition of petitions for modification. This notice is a summary of a petition for modification filed by the party listed below to modify the application of an existing mandatory... operator or representative of miners to file a petition to modify the application of any mandatory safety...
2005-05-04
should be filed or issue a memorandum clarifying the existing guidance and revise the DCAA Management Information System (DMIS) to allow defective...APO Response. The DCAA comments were not responsive. In the past, we have found inaccuracies in the DCAA management information system . Neither...Audit Agency Management Information System to only allow defective pricing audit assignments to be closed by issuing an audit report or canceling the
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-05
...-deep, 3-mile-long canal carrying flows diverted from Cottonwood Creek by an existing diversion... on the Commission's Web site ( http://www.ferc.gov/docs-filing/ferconline.asp ) under the ``eFiling...Library'' link of Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp . Enter the docket...
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: The codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. Cup3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report provides technical background for TFaNS including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.
SEGY to ASCII: Conversion and Plotting Program
Goldman, Mark R.
1999-01-01
This report documents a computer program to convert standard 4-byte, IBM floating point SEGY files to ASCII xyz format. The program then optionally plots the seismic data using the GMT plotting package. The material for this publication is contained in a standard tar file (of99-126.tar) that is uncompressed and 726 K in size. It can be downloaded to any Unix machine. Move the tar file to the directory you wish to use it in, then type 'tar xvf of99-126.tar'. The archive files (and diskette) contain a NOTE file, a README file, a version-history file, source code, a makefile for easy compilation, and an ASCII version of the documentation. The archive files (and diskette) also contain example test files, including a typical SEGY file along with the resulting ASCII xyz and postscript files. Compiling the source code into an executable requires a C++ compiler. The program has been successfully compiled using Gnu's g++ version 2.8.1, and use of other compilers may require modifications to the existing source code. The g++ compiler is a free, high-quality C++ compiler and may be downloaded from the ftp site: ftp://ftp.gnu.org/gnu . Plotting the seismic data requires the GMT plotting package, which may be downloaded from the web site: http://www.soest.hawaii.edu/gmt/
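The numeric core of such a conversion is decoding the big-endian 4-byte IBM (hexadecimal base-16) floating point samples into standard floats. The sketch below shows only that step, under the assumption of big-endian input as in the SEGY standard; trace and header parsing of a real SEGY file is omitted.

```python
# Sketch of the core numeric step in a SEGY-to-ASCII conversion: decoding a
# big-endian 4-byte IBM single-precision float.
import struct

def ibm32_to_float(raw4):
    """Convert 4 bytes of IBM single-precision data to a Python float."""
    (word,) = struct.unpack(">I", raw4)
    sign = -1.0 if word >> 31 else 1.0
    exponent = (word >> 24) & 0x7F          # base-16 exponent, bias 64
    fraction = (word & 0x00FFFFFF) / float(1 << 24)
    return sign * fraction * 16.0 ** (exponent - 64)

# Check against a known value: 0xC2760000 encodes -118.0
assert abs(ibm32_to_float(bytes([0xC2, 0x76, 0x00, 0x00])) - (-118.0)) < 1e-9
```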
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan
MapReduce is increasingly becoming a popular framework, and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data-intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
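The setting MARIANE targets can be illustrated with a conceptual word-count sketch: mappers write partitioned intermediate files into a directory that every node can see (an NFS/GPFS-style shared file system), and reducers read them back. This is not MARIANE source; the directory name and partitioning scheme are invented for illustration.

```python
# Conceptual word-count sketch of MapReduce over a shared file system:
# mappers write partitioned intermediate files to a shared directory,
# reducers read them. Not MARIANE code.
import collections, glob, os

SHARED = "shared_tmp"          # stands in for an NFS/GPFS-visible directory
N_REDUCERS = 2

def run_map(task_id, text):
    parts = collections.defaultdict(collections.Counter)
    for word in text.split():
        parts[hash(word) % N_REDUCERS][word] += 1        # partition by key
    for r, counts in parts.items():
        with open(os.path.join(SHARED, f"map{task_id}_r{r}.txt"), "w") as f:
            for word, n in counts.items():
                f.write(f"{word}\t{n}\n")

def run_reduce(r):
    totals = collections.Counter()
    for path in glob.glob(os.path.join(SHARED, f"map*_r{r}.txt")):
        with open(path) as f:
            for line in f:
                word, n = line.rsplit("\t", 1)
                totals[word] += int(n)
    return totals

os.makedirs(SHARED, exist_ok=True)
run_map(0, "to be or not to be")
run_map(1, "be fast be simple")
print(run_reduce(0) + run_reduce(1))
```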
Code of Federal Regulations, 2012 CFR
2012-04-01
... the Federal Power Act, the Director of the Office of Energy Projects will provide the existing... Office of Energy Projects. (c) Any application for surrender must be filed according to the approved... project for which no timely application is filed following a notice of intent not to file. 16.26 Section...
Code of Federal Regulations, 2011 CFR
2011-04-01
... the Federal Power Act, the Director of the Office of Energy Projects will provide the existing... Office of Energy Projects. (c) Any application for surrender must be filed according to the approved... project for which no timely application is filed following a notice of intent not to file. 16.26 Section...
Code of Federal Regulations, 2013 CFR
2013-04-01
... the Federal Power Act, the Director of the Office of Energy Projects will provide the existing... Office of Energy Projects. (c) Any application for surrender must be filed according to the approved... project for which no timely application is filed following a notice of intent not to file. 16.26 Section...
Code of Federal Regulations, 2014 CFR
2014-04-01
... the Federal Power Act, the Director of the Office of Energy Projects will provide the existing... Office of Energy Projects. (c) Any application for surrender must be filed according to the approved... project for which no timely application is filed following a notice of intent not to file. 16.26 Section...
Code of Federal Regulations, 2010 CFR
2010-04-01
... the Federal Power Act, the Director of the Office of Energy Projects will provide the existing... Office of Energy Projects. (c) Any application for surrender must be filed according to the approved... project for which no timely application is filed following a notice of intent not to file. 16.26 Section...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-04
... capacity of 450 kilowatts; (4) an existing 10- foot-wide, 8-foot-deep intake canal; (5) new trash racks... Commission's Web site under the ``eFiling'' link. If unable to be filed electronically, documents may be... information on how to submit these types of filings please go to the Commission's Web site located at http...
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1991-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File Server (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.
Please Move Inactive Files Off the /projects File System
High-Performance Computing | NREL
2018-01-11
The /projects file system is a shared resource. This year this has created a space crunch: the file system is now about 90% full and we need your help
Logic Design of a Shared Disk System in a Multi-Micro Computer Environment.
1983-06-01
overall system, is given. An exhaustive description of each device can be found in the cited references. A. INTEL 8086 The INTEL 8086 is a high...either could be accomplished, it was necessary to understand both the existing system architecture and software. The last chapter addressed that...to be adapted: the loader program and the boot ROM program. The loader program is a simplified version of CP/M-86 and contains only enough file
Silvabase: A flexible data file management system
NASA Technical Reports Server (NTRS)
Lambing, Steven J.; Reynolds, Sandra J.
1991-01-01
The need for a more flexible and efficient data file management system for mission planning in the Mission Operations Laboratory (EO) at MSFC has spawned the development of Silvabase. Silvabase is a new data file structure based on a B+ tree data structure. This data organization allows for efficient forward and backward sequential reads, random searches, and appends to existing data. It also provides random insertions and deletions with reasonable efficiency, makes good use of storage space without sacrificing speed, and performs these functions on large volumes of data. Mission planners required that some data be keyed and manipulated in ways not found in a commercial product. Mission planning software is currently being converted to use Silvabase in the Spacelab and Space Station Mission Planning Systems. Silvabase runs on Digital Equipment Corporation's popular VAX/VMS computers and is written in VAX Fortran. Silvabase has unique features involving time histories and intervals such as in operations research. Because of its flexibility and unique capabilities, Silvabase could be used in almost any government or commercial application that requires efficient reads, searches, and appends in medium to large amounts of almost any kind of data.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-27
... Storage Water Supply, LLC; Notice of Preliminary Permit Application Accepted for Filing and Soliciting...-acre reservoir; (4) a turnout to supply project effluent water to an existing irrigation system; (5) a...,000 megawatt-hours. Applicant Contact: Bart M. O'Keeffe, West Maui Pumped Storage Water Supply, LLC, P...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-30
... and Fairmont) and two existing, 36- and 54- inch diameter steel pipelines; (2) three proposed powerhouses, each to contain a 0.37-megawatt-(MW) turbine-generating unit, with a total capacity of 1.11 MW...-in point of a local power company grid system. The project would produce an estimated average annual...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-01
... time before the opening of trading in the underlying security when the Hybrid System will accept orders... related language in CBOE Rule 4.18 because the concept of leasing memberships no longer exists after the... Holder''). The Exchange is proposing to delete references to the concept of registering a membership for...
77 FR 15026 - Privacy Act of 1974; Farm Records File (Automated) System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-14
... Mining Project, all program data collected and handled by either RMA or FSA will be treated with the full... data warehouse and data mining operation. RMA will use the information to search or ``mine'' existing... fraud, waste, and abuse. The data mining operation is authorized by the Agricultural Risk Protection Act...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-06
...-long underground penstock that would collect water from the Scooteney wasteway; (2) a powerhouse containing one turbine/generator unit with a capacity of 1,110 kilowatts; (3) a 4.2- mile-long, 115-kilovolt.... The wasteway functions as a diversion of surplus water from the irrigation system to the existing...
Solar heating system installed at Troy, Ohio
NASA Technical Reports Server (NTRS)
1980-01-01
The completed system was composed of three basic subsystems: the collector system, consisting of 3,264 square feet of Owens Illinois evacuated glass tube collectors; the storage system, which included a 5,000 gallon insulated steel tank; and the distribution and control system, which included piping, pumping, and heat transfer components as well as the solenoid-activated valves and control logic for the efficient and safe operation of the entire system. This solar heating system was installed in an existing facility and was, therefore, a retrofit system. Extracts from the site files, specifications, drawings, and installation, operation, and maintenance instructions are included.
NASA Technical Reports Server (NTRS)
Tinetti, Ana F.; Maglieri, Domenic J.; Driver, Cornelius; Bobbitt, Percy J.
2011-01-01
A detailed geometric description, in wave drag format, has been developed for the Convair B-58 and North American XB-70-1 delta wing airplanes. These descriptions have been placed on electronic files, the contents of which are described in this paper. They are intended for use in wave drag and sonic boom calculations. Included in the electronic files and in the present paper are photographs and 3-view drawings of the two airplanes, tabulated geometric descriptions of each vehicle and its components, and comparisons of the electronic file outputs with existing data. The comparisons include a pictorial of the two airplanes based on the present geometric descriptions, and cross-sectional area distributions for both the normal Mach cuts and oblique Mach cuts above and below the vehicles. Good correlation exists between the area distributions generated in the late 1950s and 1960s and the present files. The availability of these electronic files facilitates further validation of sonic boom prediction codes through the use of two existing data bases on these airplanes, which were acquired in the 1960s and have not been fully exploited.
Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E.; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders
2018-01-01
Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from interdisciplinary collaboration. With the publication of Exdir, we invite the scientific community to join the development to create an open specification that will serve as many needs as possible and as a foundation for open access to and exchange of data. PMID:29706879
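The layout Exdir describes can be sketched with plain directories, a YAML metadata file, and a NumPy binary dataset. The sketch below requires NumPy and PyYAML and is not the exdir reference library's API; the file and attribute names are invented for illustration.

```python
# Minimal sketch of an Exdir-style layout: directories for the hierarchy,
# human-readable YAML for metadata, and NumPy binary files for datasets.
# Requires numpy and PyYAML; not the exdir reference library.
import os
import numpy as np
import yaml

def create_dataset(root, group, name, array, attributes):
    group_dir = os.path.join(root, group)
    os.makedirs(group_dir, exist_ok=True)
    np.save(os.path.join(group_dir, name + ".npy"), array)      # binary dataset
    with open(os.path.join(group_dir, "attributes.yaml"), "w") as f:
        yaml.safe_dump(attributes, f)                            # readable metadata

create_dataset("session1.exdir", "lfp", "channel_0",
               np.zeros(1000), {"sampling_rate_hz": 30000, "unit": "uV"})
# Resulting tree:
# session1.exdir/
#   lfp/
#     attributes.yaml
#     channel_0.npy
```

Because the hierarchy is ordinary directories and the metadata is plain text, the resulting tree is easy to inspect, diff, and place under version control, which is the motivation described above.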
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-30
... wide by 50 feet long by 30 feet deep; (3) the existing 50-foot-long by 20-foot-wide by 30-foot- deep... Commission's Web site ( http://www.ferc.gov/docs-filing/ferconline.asp ) under the ``eFiling'' link. For a... 20426. For more information on how to submit these types of filings please go to the Commission's Web...
Distributed Virtual System (DIVIRS) Project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1994-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, Clifford B.
1995-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
Distributed Virtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
TagDigger: user-friendly extraction of read counts from GBS and RAD-seq data.
Clark, Lindsay V; Sacks, Erik J
2016-01-01
In genotyping-by-sequencing (GBS) and restriction site-associated DNA sequencing (RAD-seq), read depth is important for assessing the quality of genotype calls and estimating allele dosage in polyploids. However, existing pipelines for GBS and RAD-seq do not provide read counts in formats that are both accurate and easy to access. Additionally, although existing pipelines allow previously-mined SNPs to be genotyped on new samples, they do not allow the user to manually specify a subset of loci to examine. Pipelines that do not use a reference genome assign arbitrary names to SNPs, making meta-analysis across projects difficult. We created the software TagDigger, which includes three programs for analyzing GBS and RAD-seq data. The first script, tagdigger_interactive.py, rapidly extracts read counts and genotypes from FASTQ files using user-supplied sets of barcodes and tags. Input and output is in CSV format so that it can be opened by spreadsheet software. Tag sequences can also be imported from the Stacks, TASSEL-GBSv2, TASSEL-UNEAK, or pyRAD pipelines, and a separate file can be imported listing the names of markers to retain. A second script, tag_manager.py, consolidates marker names and sequences across multiple projects. A third script, barcode_splitter.py, assists with preparing FASTQ data for deposit in a public archive by splitting FASTQ files by barcode and generating MD5 checksums for the resulting files. TagDigger is open-source and freely available software written in Python 3. It uses a scalable, rapid search algorithm that can process over 100 million FASTQ reads per hour. TagDigger will run on a laptop with any operating system, does not consume hard drive space with intermediate files, and does not require programming skill to use.
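The barcode-splitting step described for barcode_splitter.py can be sketched briefly. The sketch below assumes 4-line FASTQ records with inline barcodes at the start of each read; it is an illustration of the task, not TagDigger source.

```python
# Hedged sketch of splitting a FASTQ file by inline barcode (trimming the
# barcode) and emitting an MD5 checksum per output file. Not TagDigger code.
import hashlib

def split_fastq_by_barcode(fastq_path, barcodes):
    outputs = {bc: open(f"{bc}.fastq", "w") for bc in barcodes}
    with open(fastq_path) as f:
        while True:
            record = [f.readline() for _ in range(4)]   # one 4-line FASTQ record
            if not record[0]:
                break
            seq = record[1]
            for bc in barcodes:
                if seq.startswith(bc):                   # inline barcode match
                    record[1] = seq[len(bc):]            # trim barcode from read
                    record[3] = record[3][len(bc):]      # trim matching qualities
                    outputs[bc].writelines(record)
                    break
    for bc, handle in outputs.items():
        handle.close()
        with open(f"{bc}.fastq", "rb") as done:
            digest = hashlib.md5(done.read()).hexdigest()
        print(f"{digest}  {bc}.fastq")                   # md5sum-style line

# split_fastq_by_barcode("lane1.fastq", ["ACGT", "TGCA"])
```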
78 FR 49501 - Combined Notice of Filings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... Numbers: RP13-1177-000. Applicants: Garden Banks Gas Pipeline, LLC. Description: Compliance with New ACA...: Central Kentucky Transmission Company. Description: ACA 2013 to be effective 10/1/2013. Filed Date: 8/1/13... intervention is necessary to become a party to the proceeding. Filings in Existing Proceedings Docket Numbers...
A program to generate a Fortran interface for a C++ library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Lee
Shroud is a utility to create a Fortran and C interface for a C++ library. An existing C++ library API is described in an input file. Shroud reads the file and creates source files which can be compiled to provide a Fortran API for the library.
75 FR 17707 - Arlington Storage Company, LLC; Notice of Filing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-07
... Storage Company, LLC; Notice of Filing March 30, 2010. Take notice that on March 24, 2010, Arlington Storage Company, LLC (ASC), Two Brush Creek Boulevard, Kansas City, Missouri 64112, filed an application... existing underground natural gas storage facility located in Schuyler County, New York known as the Seneca...
78 FR 77448 - City of Riverside, California; Notice of Filing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-23
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. NJ14-2-000] City of Riverside, California; Notice of Filing Take notice that on December 11, 2013, City of Riverside, California submitted its tariff filing per 35.28(e): 2014 Transmission Revenue Balancing Account Adjustment/Existing...
Web Extensible Display Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slominski, Ryan; Larrieu, Theodore L.
Jefferson Lab's Web Extensible Display Manager (WEDM) allows staff to access EDM control system screens from a web browser in remote offices and from mobile devices. Native browser technologies are leveraged to avoid installing and managing software on remote clients such as browser plugins, tunnel applications, or an EDM environment. Since standard network ports are used, firewall exceptions are minimized. To avoid security concerns from remote users modifying a control system, WEDM exposes read-only access, and basic web authentication can be used to further restrict access. Updates of monitored EPICS channels are delivered via a Web Socket using a web gateway. The software translates EDM description files (denoted with the edl suffix) to HTML with Scalable Vector Graphics (SVG), following EDM's edl file vector drawing rules to create faithful screen renderings. The WEDM server parses edl files and creates the HTML equivalent in real time, allowing existing screens to work without modification. Alternatively, the familiar drag-and-drop EDM screen creation tool can be used to create optimized screens sized specifically for smart phones and then rendered by WEDM.
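The screen-translation idea can be illustrated in a highly simplified form. The input dictionary below is a hypothetical stand-in for a single widget; real edl files have a much richer grammar that the WEDM server parses, so this only shows the general shape of turning a widget description into SVG.

```python
# Highly simplified sketch of translating a widget description into SVG.
# The input dict is a hypothetical stand-in, not the real edl grammar.
def rectangle_to_svg(widget):
    return (f'<rect x="{widget["x"]}" y="{widget["y"]}" '
            f'width="{widget["w"]}" height="{widget["h"]}" '
            f'fill="{widget.get("fill", "none")}" '
            f'stroke="{widget.get("line_color", "black")}"/>')

def screen_to_svg(widgets, width, height):
    body = "\n  ".join(rectangle_to_svg(w) for w in widgets)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">\n  {body}\n</svg>')

print(screen_to_svg([{"x": 10, "y": 10, "w": 80, "h": 40, "fill": "grey"}],
                    200, 100))
```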
DOT National Transportation Integrated Search
2001-02-01
The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...
Long-Term file activity patterns in a UNIX workstation environment
NASA Technical Reports Server (NTRS)
Gibson, Timothy J.; Miller, Ethan L.
1998-01-01
As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.
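A user-space collector of the kind described can be sketched as a snapshot-and-diff tool: record per-file metadata on each run and compare successive snapshots to count creations, deletions, and modifications. This is an illustration of the approach, not the study's actual statistics package.

```python
# Sketch of a user-space collector: snapshot per-file metadata with
# os.walk/stat, then diff successive snapshots. Not the study's package.
import os

def snapshot(root):
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                      # file vanished between walk and stat
            state[path] = (st.st_mtime, st.st_size)
    return state

def diff(old, new):
    created = [p for p in new if p not in old]
    deleted = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    return created, deleted, modified

# before = snapshot("/home/user")
# ... later ...
# print(diff(before, snapshot("/home/user")))
```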
Data collection and preparation of authoritative reviews on space food and nutrition research
NASA Technical Reports Server (NTRS)
1972-01-01
The collection and classification of information for a manually operated information retrieval system on the subject of space food and nutrition research are described. The system as it currently exists is designed for retrieval of documents, either in hard copy or on microfiche, from the technical files of the MSC Food and Nutrition Section by accession number, author, and/or subject. The system could readily be extended to include retrieval by affiliation, report and contract number, and sponsoring agency should the need arise. It can also be easily converted to computerized retrieval. At present the information retrieval system contains nearly 3000 documents which consist of technical papers, contractors' reports, and reprints obtained from the food and nutrition files at MSC, Technical Library, the library at the Texas Medical Center in Houston, the BMI Technical Libraries, Dr. E. B. Truitt at MBI, and the OSU Medical Libraries. Additional work was done to compile 18 selected bibliographies on subjects of immediate interest on the MSC Food and Nutrition Section.
Implementation of a Campuswide Distributed Mass Storage Service: the Dream Versus Reality
NASA Technical Reports Server (NTRS)
Prahst, Stephen; Armstead, Betty Jo
1996-01-01
In 1990, a technical team at NASA Lewis Research Center, Cleveland, Ohio, began defining a Mass Storage Service to provide long-term archival storage, short-term storage for very large files, distributed Network File System access, and backup services for critical data that resides on workstations and personal computers. Because of software availability and budgets, the total service was phased in over several years. During the process of building the service from the commercial technologies available, our Mass Storage Team refined the original vision and learned from the problems and mistakes that occurred. We also enhanced some technologies to better meet the needs of users and system administrators. This report describes our team's journey from dream to reality, outlines some of the problem areas that still exist, and suggests some solutions.
Wang, Yanchao; Sunderraman, Rajshekhar
2006-01-01
In this paper, we propose two architectures for curating PDB data to improve its quality. The first, the PDB Data Curation System, is developed by adding two parts, a Checking Filter and a Curation Engine, between the User Interface and the Database. This architecture supports basic PDB data curation. The other, the PDB Data Curation System with XCML, is designed for further curation and adds four more parts, PDB-XML, PDB, OODB, and Protin-OODB, to the previous one. This architecture uses the XCML language to automatically check PDB data for errors, making the data more consistent and accurate. These two tools can be used for cleaning existing PDB files and creating new PDB files. We also show how constraints and assertions can be added with XCML to obtain better data. In addition, we discuss data provenance, which may affect data accuracy and consistency.
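The kind of check a Checking Filter might apply can be sketched against the fixed-column PDB format. The sketch below validates only a couple of fields of ATOM/HETATM records and is purely illustrative; it is not the curation engine described in the paper, and the occupancy range check is an example rule, not a PDB requirement.

```python
# Illustrative sketch of a simple check over ATOM/HETATM records in a PDB
# file (fixed-column format). Not the system described in the paper.
def check_atom_records(pdb_path):
    problems = []
    with open(pdb_path) as f:
        for lineno, line in enumerate(f, 1):
            if not line.startswith(("ATOM", "HETATM")):
                continue
            try:
                x = float(line[30:38])            # coordinate columns
                y = float(line[38:46])
                z = float(line[46:54])
                occupancy = float(line[54:60])
            except ValueError:
                problems.append((lineno, "unparseable coordinate/occupancy field"))
                continue
            if not (0.0 <= occupancy <= 1.0):     # example sanity rule
                problems.append((lineno, f"occupancy {occupancy} outside [0, 1]"))
    return problems

# for lineno, msg in check_atom_records("1abc.pdb"):
#     print(lineno, msg)
```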
BOREAS AFM-5 Level-1 Upper Air Network Data
NASA Technical Reports Server (NTRS)
Barr, Alan; Hrynkiw, Charmaine; Newcomer, Jeffrey A. (Editor); Hall, Forrest G. (Editor); Smith, David E. (Technical Monitor)
2000-01-01
The Boreal Ecosystem-Atmosphere Study (BOREAS) Airborne Fluxes and Meteorology (AFM)-5 team collected and processed data from the numerous radiosonde flights during the project. The goals of the AFM-05 team were to provide large-scale definition of the atmosphere by supplementing the existing Atmospheric Environment Service (AES) aerological network, both temporally and spatially. This data set includes basic upper-air parameters collected from the network of upper-air stations during the 1993, 1994, and 1996 field campaigns over the entire study region. The data are contained in tabular ASCII files. The level-1 upper-air network data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files also are available on a CD-ROM (see document number 20010000884).
Code of Federal Regulations, 2010 CFR
2010-01-01
... identity when filing documents and serving participants electronically through the E-Filing system, and... transmitted electronically from the E-Filing system to the submitter confirming receipt of electronic filing... presentation of the docket and a link to its files. E-Filing System means an electronic system that receives...
NASA CDDIS: Next Generation System
NASA Astrophysics Data System (ADS)
Michael, B. P.; Noll, C. E.; Woo, J. Y.; Limbacher, R. I.
2017-12-01
The Crustal Dynamics Data Information System (CDDIS) supports data archiving and distribution activities for the space geodesy and geodynamics community. The main objectives of the system are to make space geodesy and geodynamics related data and derived products available in a central archive, to maintain information about the archival of these data, to disseminate these data and information in a timely manner to a global scientific research community, and to provide user based tools for the exploration and use of the archive. As the techniques and data volume have increased, the CDDIS has evolved to offer a broad range of data ingest services, from data upload, quality control, documentation, metadata extraction, and ancillary information. As a major step taken to improve services, the CDDIS has transitioned to a new hardware system and implemented incremental upgrades to a new software system to meet these goals while increasing automation. This new system increases the ability of the CDDIS to consistently track errors and issues associated with data and derived product files uploaded to the system and to perform post-ingest checks on all files received for the archive. In addition, software to process new data sets and changes to existing data sets have been implemented to handle new formats and any issues identified during the ingest process. In this poster, we will discuss the CDDIS archive in general as well as review and contrast the system structures and quality control measures employed before and after the system upgrade. We will also present information about new data sets and changes to existing data and derived products archived at the CDDIS.
Yang, Guo-Liang; Lim, C C Tchoyoson
2006-08-01
Radiology education is heavily dependent on visual images, and case-based teaching files comprising medical images can be an important tool for teaching diagnostic radiology. Currently, hardcopy film is being rapidly replaced by digital radiological images in teaching hospitals, and an electronic teaching file (ETF) library would be desirable. Furthermore, a repository of ETFs deployed on the World Wide Web has the potential for e-learning applications to benefit a larger community of learners. In this paper, we describe a Singapore National Medical Image Resource Centre (SN.MIRC) that can serve as a World Wide Web resource for teaching diagnostic radiology. On SN.MIRC, ETFs can be created using a variety of mechanisms including file upload and online form-filling, and users can search for cases using the Medical Image Resource Center (MIRC) query schema developed by the Radiological Society of North America (RSNA). The system can be improved with future enhancements, including multimedia interactive teaching files and distance learning for continuing professional development. However, significant challenges exist when exploring the potential of using the World Wide Web for radiology education.
DOT National Transportation Integrated Search
2001-02-01
The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...
78 FR 61995 - Combined Notice of Filings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-10
.... Comments Due: 5 p.m. ET 10/9/13. Docket Numbers: RP13-1357-000. Applicants: Young Gas Storage Company, Ltd. Description: Annual Operational Purchases and Sales Report of Young Gas Storage Company, Ltd.. Filed Date: 9... necessary to become a party to the proceeding. Filings in Existing Proceedings Docket Numbers: PR13-62-001...
77 FR 46561 - Amendments to Adjudicatory Process Rules and Related Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-03
... eight late-filed factors, especially not for late-filed hearing requests or intervention petitions. The... current three Sec. 2.309(f)(2) factors. As the NRC explained in the proposed rule, whether filings after... the existence of good cause, not the other factors. The commenter has not supported its assertion that...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-07
... prior registration, using the eComment system at http://www.ferc.gov/docs-filing/ecomment.asp . You must... square feet) with 20 tie cleats placed for a total of 10 boat slips. The application also requests... addition, the application includes an existing dock with 10 boat slips and 20 tie cleats (2802 square feet...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-12
... Reservoir--City of New York Aqueduct, lower part of the Catskill Water Distribution System, which is owned... turbine generator units each with a rated capacity of 1,000 kilowatts installed in the existing bays... intent to cease the delivery of water through a portion of the Catskill Aqueduct required by the NYPA's...
DMFS: A Data Migration File System for NetBSD
NASA Technical Reports Server (NTRS)
Studenmund, William
2000-01-01
I have recently developed DMFS, a Data Migration File System, for NetBSD. This file system provides kernel support for the data migration system being developed by my research group at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. This paper will first describe our data migration system to provide a context for DMFS, then it will describe DMFS. It also will describe the changes to NetBSD needed to make DMFS work. Then it will give an overview of the file archival and restoration procedures, and describe how some typical user actions are modified by DMFS. Lastly, the paper will present simple performance measurements which indicate that there is little performance loss due to the use of the DMFS layer.
Adopting Internet Standards for Orbital Use
NASA Technical Reports Server (NTRS)
Wood, Lloyd; Ivancic, William; da Silva Curiel, Alex; Jackson, Chris; Stewart, Dave; Shell, Dave; Hodgson, Dave
2005-01-01
After a year of testing and demonstrating a Cisco mobile access router intended for terrestrial use onboard the low-Earth-orbiting UK-DMC satellite as part of a larger merged ground/space IP-based internetwork, we reflect on and discuss the benefits and drawbacks of integration and standards reuse for small satellite missions. Benefits include ease of operation and the ability to leverage existing systems and infrastructure designed for general use, as well as reuse of existing, known, and well-understood security and operational models. Drawbacks include cases where integration work was needed to bridge the gaps in assumptions between different systems, and where performance considerations outweighed the benefits of reuse of pre-existing file transfer protocols. We find similarities with the terrestrial IP networks whose technologies we have adopted and also some significant differences in operational models and assumptions that must be considered.
NASA Technical Reports Server (NTRS)
Duggan, Brian
2012-01-01
Downloading and organizing large amounts of files is challenging, and often done using ad hoc methods. This software is capable of downloading and organizing files as an OpenSearch client. It can subscribe to RSS (Really Simple Syndication) feeds and Atom feeds containing arbitrary metadata, and maintains a local content addressable data store. It uses existing standards for obtaining the files, and uses efficient techniques for storing the files. Novel features include symbolic links to maintain a sane directory structure, checksums for validating file integrity during transfer and storage, and flexible use of server-provided metadata.
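The checksum and symbolic-link features described above can be illustrated with a small sketch: content is stored under its hash so duplicates collapse, and a human-readable symlink preserves a sane directory structure. Paths and function names are assumptions, not the tool's actual interface.

```python
# Illustrative sketch of a content-addressable store with friendly symlinks;
# directory layout and function names are assumptions, not the tool's design.
import hashlib, os, shutil

def store(path, root="datastore", names="by-name"):
    """Copy a downloaded file into a checksum-addressed store and symlink it."""
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    os.makedirs(root, exist_ok=True)
    os.makedirs(names, exist_ok=True)
    target = os.path.join(root, digest)
    if not os.path.exists(target):          # identical content is stored only once
        shutil.copy2(path, target)
    link = os.path.join(names, os.path.basename(path))
    if not os.path.lexists(link):           # keep a human-readable directory view
        os.symlink(os.path.abspath(target), link)
    return digest
```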
NASA Technical Reports Server (NTRS)
Sherman, Mark; Kodis, John; Bedet, Jean-Jacques; Wacker, Chris; Woytek, Joanne; Lynnes, Chris
1996-01-01
The Goddard Space Flight Center (GSFC) version 0 Distributed Active Archive Center (DAAC) has been developed to support existing and pre-Earth Observing System (EOS) Earth science datasets, facilitate scientific research, and test EOS data and information system (EOSDIS) concepts. To ensure that no data is ever lost, each product received at the GSFC DAAC is archived on two different media, VHS and digital linear tape (DLT). The first copy is made on VHS tape and is under the control of UniTree. The second and third copies are made to DLT and VHS media under a custom-built software package named 'Archer'. While Archer provides only a subset of the functions available with commercial software like UniTree, it supports migration between near-line and off-line media and offers much greater performance and flexibility to satisfy the specific needs of a data center. Archer is specifically designed to maximize total system throughput, rather than focusing on the turn-around time for individual files. The commercial off-the-shelf (COTS) hierarchical storage management (HSM) products evaluated were mainly concerned with transparent, interactive file access for the end user, rather than a batch-oriented, optimizable (based on known data file characteristics) data archive and retrieval system. This is critical to the distribution requirements of the GSFC DAAC, where orders for 5000 or more files at a time are received. Archer has the ability to queue many thousands of file requests and to sort these requests into internal processing schedules that optimize overall throughput. Specifically, mount and dismount cycles, tape load and unload cycles, and tape motion are minimized. This feature did not seem to be available in many COTS packages. Archer also uses a generic tar tape format that allows tapes to be read by many different systems, rather than the proprietary format found in most COTS packages. This paper discusses some of the specific requirements at the GSFC DAAC, the motivations for implementing the Archer system, and presents a discussion of the Archer design that resulted.
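The request-scheduling idea, queueing thousands of file requests and sorting them to minimize mounts and tape motion, can be sketched as follows. Field names and the mapping of files to tapes are illustrative assumptions, not Archer's internal format.

```python
# Hedged sketch of batch request scheduling in the spirit described above:
# group queued file requests by tape, then order them by position on the tape,
# so each tape is mounted once and read in a single pass.
from collections import defaultdict

def schedule(requests):
    """requests: iterable of dicts with 'tape', 'offset', and 'file' keys."""
    by_tape = defaultdict(list)
    for r in requests:
        by_tape[r["tape"]].append(r)
    plan = []
    for tape in sorted(by_tape):                          # one mount per tape
        plan.extend(sorted(by_tape[tape], key=lambda r: r["offset"]))
    return plan

orders = [{"tape": "T2", "offset": 90, "file": "b"},
          {"tape": "T1", "offset": 10, "file": "a"},
          {"tape": "T2", "offset": 5,  "file": "c"}]
print([r["file"] for r in schedule(orders)])  # ['a', 'c', 'b']
```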
Metadata and Service at the GFZ ISDC Portal
NASA Astrophysics Data System (ADS)
Ritschel, B.
2008-05-01
The online service portal of the GFZ Potsdam Information System and Data Center (ISDC) is an access point for all manner of geoscientific geodata, its corresponding metadata, scientific documentation, and software tools. At present, almost 2000 national and international users and user groups have the opportunity to request Earth science data from a portfolio of 275 different product types and more than 20 million individual data files with a total volume of approximately 12 TByte. The majority of the data and information the portal currently offers to the public are global geomonitoring products such as satellite orbit and Earth gravity field data, as well as geomagnetic and atmospheric data. These products describing Earth's changing system are provided via state-of-the-art retrieval techniques. The data product catalog system behind these techniques is based on the extensive usage of standardized metadata, which describe the different geoscientific product types and data products in a uniform way. Whereas all ISDC product types are specified by NASA's Directory Interchange Format (DIF) Version 9.0 parent XML DIF metadata files, the individual data files are described by extended DIF metadata documents. Depending on when the scientific project began, some data files are described by extended DIF Version 6 metadata documents and others by child XML DIF metadata documents. Both the product-type-dependent parent DIF metadata documents and the data-file-dependent child DIF metadata documents are derived from a base-DIF.xsd XML schema file. The ISDC metadata philosophy defines a geoscientific product as a package consisting of one (or occasionally more than one) data file plus one extended DIF metadata file. Because NASA's DIF metadata standard was developed to describe data collections only, the extension of the DIF standard consists of new, specific attributes that are necessary for the explicit identification of single data files and for the set-up of a comprehensive Earth science data catalog. The large ISDC data catalog is realized by product-type-dependent tables filled with data-file-related metadata, which have relations to corresponding metadata tables. The parent DIF XML metadata documents describing the product types are stored and managed in Oracle's XML storage structures. In order to improve the interoperability of the ISDC service portal, the existing proprietary catalog system will be extended by an ISO 19115 based web catalog service. In addition, a semantic network of ISDC-related metadata resources is being developed, linking standardized and non-standardized metadata documents and literature as well as Web 2.0 user-generated information derived from tagging activities and social navigation data.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the pre-filing review of any pipeline or other natural gas facilities, including facilities not... from the subject LNG terminal facilities to the existing natural gas pipeline infrastructure. (b) Other... and review process for LNG terminal facilities and other natural gas facilities prior to filing of...
31 CFR 501.705 - Service and filing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... house or usual place of abode with a person at least 18 years of age then residing therein; or with any.... Papers filed in connection with any proceeding shall: (i) Be on one grade of unglazed white paper... after reasonable inquiry, the filing is well grounded in fact and is warranted by existing law or a good...
31 CFR 501.705 - Service and filing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... house or usual place of abode with a person at least 18 years of age then residing therein; or with any.... Papers filed in connection with any proceeding shall: (i) Be on one grade of unglazed white paper... after reasonable inquiry, the filing is well grounded in fact and is warranted by existing law or a good...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-05
... Hydropower, LLC; Notice of Application Accepted for Filing and Soliciting Motions To Intervene and Protests... No.: P-12783-003. c. Date filed: July 22, 2009. d. Applicant: Inglis Hydropower, LLC. e. Name of Project: Inglis Hydropower Project. f. Location: The proposed project would be located at the existing...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-20
... SECURITIES AND EXCHANGE COMMISSION [Release No. 68443; File No. SR-DTC-2012-09] Self-Regulatory Organizations; The Depository Trust Company; Notice of Filing and Immediate Effectiveness of Proposed Rule Change To Make Ministerial Changes to the Existing Reorganization Service Guide December 14, 2012. Pursuant to Section 19(b)(1) of the Securitie...
Design and Development of a Prototype Organizational Effectiveness Information System
1984-11-01
information from a large number of people. The existing survey support process for the GOQ is not satisfactory. Most OESOs elect not to use it, because...reporting process uses screen queries and menus to simplify data entry, it is estimated that only 4-6 hours of data entry time would be required for ...description for the file named EVEDIR. The Resource System allows users of the Event Directory to select from the following processing options. o Add a new
Nuclear propulsion technology development - A joint NASA/Department of Energy project
NASA Technical Reports Server (NTRS)
Clark, John S.
1992-01-01
NASA-Lewis has undertaken the conceptual development of spacecraft nuclear propulsion systems with DOE support, in order to establish the bases for Space Exploration Initiative lunar and Mars missions. This conceptual evolution project encompasses nuclear thermal propulsion (NTP) and nuclear electric propulsion (NEP) systems. A technology base exists for NTP in the NERVA program files; more fundamental development efforts are entailed in the case of NEP, but this option is noted to offer greater advantages in the long term.
Dwyer, John L.; Schmidt, Gail L.; Qu, J.J.; Gao, W.; Kafatos, M.; Murphy, R.E.; Salomonson, V.V.
2006-01-01
The MODIS Reprojection Tool (MRT) is designed to help individuals work with MODIS Level-2G, Level-3, and Level-4 land data products. These products are referenced to a global tiling scheme in which each tile is approximately 10° latitude by 10° longitude and non-overlapping (Fig. 9.1). If desired, the user may reproject only selected portions of the product (spatial or parameter subsetting). The software may also be used to convert MODIS products to file formats (generic binary and GeoTIFF) that are more readily compatible with existing software packages. The MODIS land products distributed by the Land Processes Distributed Active Archive Center (LP DAAC) are in the Hierarchical Data Format - Earth Observing System (HDF-EOS), developed by the National Center for Supercomputing Applications at the University of Illinois at Urbana Champaign for the NASA EOS Program. Each HDF-EOS file is comprised of one or more science data sets (SDSs) corresponding to geophysical or biophysical parameters. Metadata are embedded in the HDF file as well as contained in a .met file that is associated with each HDF-EOS file. The MRT supports 8-bit, 16-bit, and 32-bit integer data (both signed and unsigned), as well as 32-bit float data. The data type of the output is the same as the data type of each corresponding input SDS.
Riis, Viivi; Jaglal, Susan; Boschen, Kathryn; Walker, Jan; Verrier, Molly
2011-01-01
Rehabilitation costs for spinal-cord injury (SCI) are increasingly borne by Canada's private health system. Because of poor outcomes, payers are questioning the value of their expenditures, but there is a paucity of data informing analysis of rehabilitation costs and outcomes. This study evaluated the feasibility of using administrative claim file review to extract rehabilitation payment data and functional status for a sample of persons with work-related SCI. Researchers reviewed 28 administrative e-claim files for persons who sustained a work-related SCI between 1996 and 2000. Payment data were extracted for physical therapy (PT), occupational therapy (OT), and psychology services. Functional Independence Measure (FIM) scores were targeted as a surrogate measure for functional outcome. Feasibility was tested using an existing approach for evaluating health services data. The process of administrative e-claim file review was not practical for extraction of the targeted data. While administrative claim files contain some rehabilitation payment and outcome data, in their present form the data are not suitable to inform rehabilitation services research. A new strategy to standardize collection, recording, and sharing of data in the rehabilitation industry should be explored as a means of promoting best practices.
Small file aggregation in a parallel computing system
Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang
2014-09-02
Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
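A minimal sketch of the aggregation technique the abstract describes, assuming a simple on-disk layout and a JSON index (neither is specified by the source): files are concatenated into one aggregated file, and per-file offsets and lengths are recorded so individual files can be unpacked later.

```python
# Sketch only: pack many small files into one aggregated file and record each
# file's offset and length so it can be unpacked later. The file layout, index
# format, and names are illustrative assumptions.
import json, os

def aggregate(paths, out="aggregated.bin", index="aggregated.json"):
    meta, offset = {}, 0
    with open(out, "wb") as agg:
        for p in paths:
            data = open(p, "rb").read()
            agg.write(data)
            meta[os.path.basename(p)] = {"offset": offset, "length": len(data)}
            offset += len(data)
    json.dump(meta, open(index, "w"))

def unpack(name, src="aggregated.bin", index="aggregated.json"):
    entry = json.load(open(index))[name]
    with open(src, "rb") as agg:
        agg.seek(entry["offset"])
        return agg.read(entry["length"])
```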
75 FR 63869 - Notice: Existing Collection; Comment Requested
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-18
... that all filers of Forms 10-K and 20-F would file an auditor's attestation report. The filers that were... filing the auditor attestation report, including the burden attributed to the related disclosure in the...
ERIC Educational Resources Information Center
Holmer, Freeman
The Oregon Board of Higher Education approved a revision of its existing budgeting procedure, the result of nearly two years' work. The effort was undertaken because of deeply held concern about both the adequacy of the resources provided and the equity of the distribution of the available funds to the several institutions. It was determined that…
Collective operations in a file system based execution model
Shinde, Pravin; Van Hensbergen, Eric
2013-02-12
A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.
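A rough sketch of the header-control-block idea only, written against an ordinary file rather than a synthetic file system: the master writes a header describing the collective operation, and participants append their messages. All names and the on-disk format are assumptions for illustration.

```python
# Illustration of the header-then-messages pattern; not the patented
# MULTI-PIPE synthetic file system, which performs the operation in a
# file system module rather than in a plain file.
import json

def write_header(path, operation, message_size, destination=None):
    header = {"operation": operation,          # e.g. "broadcast" or "reduce"
              "message_size": message_size,
              "destination": destination}
    with open(path, "w") as f:
        f.write(json.dumps(header) + "\n")

def append_message(path, rank, payload):
    with open(path, "a") as f:
        f.write(json.dumps({"rank": rank, "payload": payload}) + "\n")
```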
Collective operations in a file system based execution model
Shinde, Pravin; Van Hensbergen, Eric
2013-02-19
A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.
Design and Implementation of a Metadata-rich File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ames, S; Gokhale, M B; Maltzahn, C
2010-01-19
Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
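A toy sketch (not the QFS implementation) of the graph data model the abstract describes: files carry user-defined attributes, and relationships between files are first-class edges that can be queried.

```python
# Files with user-defined attributes, plus first-class relationship edges.
# Names, attribute keys, and edge types are illustrative assumptions.
files = {
    "run1.out": {"experiment": "A", "step": 1},
    "run2.out": {"experiment": "A", "step": 2},
    "notes.txt": {"experiment": "A"},
}
relationships = [("run1.out", "derived_from", "notes.txt"),
                 ("run2.out", "derived_from", "notes.txt")]

def related(target, kind):
    """Return files linked to `target` by an edge of type `kind`."""
    return [src for src, k, dst in relationships if k == kind and dst == target]

print(related("notes.txt", "derived_from"))  # ['run1.out', 'run2.out']
```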
Fast probabilistic file fingerprinting for big data
2013-01-01
Background Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. Results We present an efficient method for calculating file uniqueness for large scientific data files, that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Conclusions Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff. PMID:23445565
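The sampling idea behind probabilistic fingerprinting can be sketched as follows: hash a fixed number of randomly chosen blocks rather than the whole file, so the cost is flat in file size. This is only an illustration of the idea, not the pfff tool's actual algorithm or output format.

```python
# Hedged sketch: fingerprint a file by hashing a fixed number of randomly
# sampled blocks. A fixed seed makes the sampled offsets reproducible, so
# identical files yield identical fingerprints.
import hashlib, os, random

def sampled_fingerprint(path, samples=64, block=1024, seed=0):
    size = os.path.getsize(path)
    rng = random.Random(seed)                 # fixed seed: same offsets each run
    h = hashlib.sha256(str(size).encode())    # mix the file size into the digest
    with open(path, "rb") as f:
        for _ in range(samples):
            f.seek(rng.randrange(size - block) if size > block else 0)
            h.update(f.read(block))
    return h.hexdigest()
```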
Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron
2015-10-20
Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFANS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFANS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report evaluates TFANS against full-scale and ADP 22" rig data using the semi-empirical wake modelling in the system. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFANS Version 1.4; Volume III: Evaluation of System Codes.
Reciproc versus Twisted file for root canal filling removal: assessment of apically extruded debris.
Altunbas, Demet; Kutuk, Betul; Toyoglu, Mustafa; Kutlu, Gizem; Kustarci, Alper; Er, Kursat
2016-01-01
The aim of this study was to evaluate the amount of apically extruded debris during endodontic retreatment with different file systems. Sixty extracted human mandibular premolar teeth were used in this study. Root canals of the teeth were instrumented and filled before being randomly assigned to three groups. Gutta-percha was removed using the Reciproc system, the Twisted File (TF) system, and Hedström files (H-file). Apically extruded debris was collected and dried in pre-weighed Eppendorf tubes. The amount of extruded debris was assessed with an electronic balance. Data were statistically analyzed using one-way ANOVA, Kruskal-Wallis, and Mann-Whitney U tests. The Reciproc and TF systems extruded significantly less debris than the H-file (p<0.05). However, no significant difference was found between the Reciproc and TF systems. All tested file systems caused apical extrusion of debris. Both the rotary file (TF) and the reciprocating single-file (Reciproc) systems were associated with less apical extrusion compared with the H-file.
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-04-19
Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
25 CFR 15.203 - What information must Tribes provide BIA to complete the probate file?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false What information must Tribes provide BIA to complete the... the Probate File § 15.203 What information must Tribes provide BIA to complete the probate file... pending probate matter, and a copy of Tribal probate orders where they exist. [76 FR 7505, Feb. 10, 2011] ...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-13
... Harvey Gap 400 Project would be located on the existing Grass Valley Canal irrigation pipeline in... will be located. g. Filed Pursuant to: Federal Power Act 16 U.S.C. 791a-825r. h. Applicant Contacts... address in item h above. n. Development Application--Any qualified applicant desiring to file a competing...
Toward information management in corporations (2)
NASA Astrophysics Data System (ADS)
Shibata, Mitsuru
If the construction of in-house information management systems in an advanced information society is to be positioned alongside society-wide information management, its groundwork begins with reviewing current paper filing systems. Since the problems inherent in in-house information management systems that use OA equipment also inhere in paper filing systems, the first step toward full-scale in-house information management should be to identify and solve the fundamental problems in current filing systems. This paper describes an analysis of fundamental problems in filing systems, the creation of new types of offices, an analysis of improvement needs in filing systems, and some points to consider in improving filing systems.
An object-oriented class library for medical software development.
O'Kane, K C; McColligan, E E
1996-12-01
The objective of this research is the development of a Medical Object Library (MOL) consisting of reusable, inheritable, portable, extendable C++ classes that facilitate rapid development of medical software at reduced cost and increased functionality. The result of this research is a library of class objects that range in function from string and hierarchical file handling entities to high level, procedural agents that perform increasingly complex, integrated tasks. A system built upon these classes is compatible with any other system similarly constructed with respect to data definitions, semantics, data organization and storage. As new objects are built, they can be added to the class library for subsequent use. The MOL is a toolkit of software objects intended to support a common file access methodology, a unified medical record structure, consistent message processing, standard graphical display facilities and uniform data collection procedures. This work emphasizes the relationship that potentially exists between the structure of a hierarchical medical record and procedural language components by means of a hierarchical class library and tree structured file access facility. In doing so, it attempts to establish interest in and demonstrate the practicality of the hierarchical medical record model in the modern context of object oriented programming.
Activate/Inhibit KGCS Gateway via Master Console EIC Pad-B Display
NASA Technical Reports Server (NTRS)
Ferreira, Pedro Henrique
2014-01-01
My internship consisted of two major projects for the Launch Control System. The purpose of the first project was to implement Application Control Language (ACL) to Activate Data Acquisition (ADA) and Inhibit Data Acquisition (IDA) on the Kennedy Ground Control Sub-Systems (KGCS) Gateway, to update the existing Pad-B End Item Control (EIC) Display to program the ADA and IDA buttons with the new ACL, and to test and release the ACL Display. The second project consisted of unit testing all of the Application Services Framework (ASF) by March 21st. The XmlFileReader was unit tested and reached 100% coverage. The XmlFileReader class is used to grab information from XML files and use it to initialize elements in the other framework components by using the Xerces C++ XML Parser, which is open-source commercial off-the-shelf software. The ScriptThread was also tested. ScriptThread manages the creation and activation of script threads. A large amount of time was spent initializing the environment, learning how to set up unit tests, and getting familiar with the specific segments of the project that were assigned to us.
Recent Updates to the CFD General Notation System (CGNS)
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Wedan, Bruce; Hauser, Thomas; Poinot, Marc
2012-01-01
The CFD General Notation System (CGNS) - a general, portable, and extensible standard for the storage and retrieval of computational fluid dynamics (CFD) analysis data has been in existence for more than a decade (Version 1.0 was released in May 1998). Both structured and unstructured CFD data are covered by the standard, and CGNS can be easily extended to cover any sort of data imaginable, while retaining backward compatibility with existing CGNS data files and software. Although originally designed for CFD, it is readily extendable to any field of computational analysis. In early 2011, CGNS Version 3.1 was released, which added significant capabilities. This paper describes these recent enhancements and highlights the continued usefulness of the CGNS methodology.
PCACE- PERSONAL COMPUTER AIDED CABLING ENGINEERING
NASA Technical Reports Server (NTRS)
Billitti, J. W.
1994-01-01
A computerized interactive harness engineering program has been developed to provide an inexpensive, interactive system which is designed for learning and using an engineering approach to interconnection systems. PCACE is basically a database system that stores information as files of individual connectors and handles wiring information in circuit groups stored as records. This directly emulates the typical manual engineering methods of data handling, thus making the user interface to the program very natural. Data files can be created, viewed, manipulated, or printed in real time. The printed output is in a form ready for use by fabrication and engineering personnel. PCACE also contains a wide variety of error-checking routines including connector contact checks during hardcopy generation. The user may edit existing harness data files or create new files. In creating a new file, the user is given the opportunity to insert all the connector and harness boiler plate data which would be part of a normal connector wiring diagram. This data includes the following: 1) connector reference designator, 2) connector part number, 3) backshell part number, 4) cable reference designator, 5) cable part number, 6) drawing revision, 7) relevant notes, 8) standard wire gauge, and 9) maximum circuit count. Any item except the maximum circuit count may be left blank, and any item may be changed at a later time. Once a file is created and organized, the user is directed to the main menu and has access to the file boiler plate, the circuit wiring records, and the wiring records index list. The organization of a file is such that record zero contains the connector/cable boiler plate, and all other records contain circuit wiring data. Each wiring record will handle a circuit with as many as nine wires in the interface. The record stores the circuit name and wire count and the following data for each wire: 1) wire identifier, 2) contact, 3) splice, 4) wire gauge if different from standard, 5) wire/group type, 6) wire destination, and 7) note number. The PCACE record structure allows for a wide variety of wiring forms using splices and shields, yet retains sufficient structure to maintain ease of use. PCACE is written in TURBO Pascal 3.0 and has been implemented on IBM PC, XT, and AT systems under DOS 3.1 with a memory of 512K of 8 bit bytes, two floppy disk drives, an RGB monitor, and a printer with ASCII control characters. PCACE was originally developed in 1983, and the IBM version was released in 1986.
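The record structure described above implies two record shapes, sketched below with assumed field names: record zero holds the connector/cable boiler plate, and every other record holds one circuit with up to nine wires. The original program is TURBO Pascal; this is only a present-day illustration of the data layout.

```python
# Illustrative sketch (assumed field names) of the PCACE record shapes.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BoilerPlate:                 # record zero
    connector_designator: str
    connector_part_number: str = ""
    backshell_part_number: str = ""
    cable_designator: str = ""
    cable_part_number: str = ""
    drawing_revision: str = ""
    notes: str = ""
    standard_wire_gauge: str = ""
    max_circuit_count: int = 0

@dataclass
class Wire:
    identifier: str
    contact: str = ""
    splice: str = ""
    gauge: Optional[str] = None    # only if different from the standard gauge
    wire_type: str = ""
    destination: str = ""
    note_number: Optional[int] = None

@dataclass
class CircuitRecord:               # records 1..N
    circuit_name: str
    wires: List[Wire] = field(default_factory=list)   # at most nine wires
```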
NASA Astrophysics Data System (ADS)
Sangaline, E.; Lauret, J.
2014-06-01
The quantity of information produced in Nuclear and Particle Physics (NPP) experiments necessitates the transmission and storage of data across diverse collections of computing resources. Robust solutions such as XRootD have been used in NPP, but as the usage of cloud resources grows, the difficulties in the dynamic configuration of these systems become a concern. Hadoop File System (HDFS) exists as a possible cloud storage solution with a proven track record in dynamic environments. Though currently not extensively used in NPP, HDFS is an attractive solution offering both elastic storage and rapid deployment. We will present the performance of HDFS in both canonical I/O tests and for a typical data analysis pattern within the RHIC/STAR experimental framework. These tests explore the scaling with different levels of redundancy and numbers of clients. Additionally, the performance of FUSE and NFS interfaces to HDFS were evaluated as a way to allow existing software to function without modification. Unfortunately, the complicated data structures in NPP are non-trivial to integrate with Hadoop and so many of the benefits of the MapReduce paradigm could not be directly realized. Despite this, our results indicate that using HDFS as a distributed filesystem offers reasonable performance and scalability and that it excels in its ease of configuration and deployment in a cloud environment.
Management of scientific information with Google Drive.
Kubaszewski, Łukasz; Kaczmarczyk, Jacek; Nowakowski, Andrzej
2013-09-20
The amount and diversity of scientific publications requires a modern management system. By "management" we mean the process of gathering interesting information for the purpose of reading and archiving it for quick access in future clinical practice and research activity. In the past, such a system required the physical existence of a library, either institutional or private. Nowadays, in an era dominated by electronic information, it is natural to migrate entire systems to a digital form. In the following paper we describe the structure and functions of an individual electronic library system (IELiS) for the management of scientific publications based on the Google Drive service. Architecture of the system: the system consists of a central element and peripheral devices. The central element of the system is the virtual Google Drive provided by Google Inc. The physical elements of the system include a tablet with the Android operating system and a personal computer, both with internet access. Required software includes a program to view and edit files in PDF format for mobile devices and another to synchronize the files. Functioning of the system: the first step in creating the system is the collection of scientific papers in PDF format and their analysis. This step is performed most frequently on a tablet. At this stage, after being read, the papers are cataloged in a system of folders and subfolders, according to individual demands. During this stage, but not exclusively, the PDF files are annotated by the reader. This allows the user to quickly track down interesting information in the review or research process. Modification of the document title is performed at this stage as well. The second element of the system is the creation of a mirror database in the Google Drive virtual memory. Modified and cataloged papers are synchronized with Google Drive. At this stage, a fully functional electronic library of scientific information becomes available online. The third element of the system is a periodic two-way synchronization of data between Google Drive and the tablet, as occasional modification of the files with annotation or recataloging may be performed at both locations. The system architecture is designed to gather, catalog, and analyze scientific publications. All steps are electronic, eliminating paper forms. Indexed files are available for re-reading and modification. The system allows for fast full-text search with additional features that make research easier. Team collaboration is also possible, with full control of user privileges. Particularly important is the safety of the collected data. In our opinion, the system exceeds many commercially available applications in terms of functionality and versatility.
American Telephone and Telegraph System V/MLS Release 1.1.2 Running on Unix System V Release 3.1.1
1989-10-18
what is specified in the /mls/passwd file. For a complete description of how this works, see page 62...from the publicly readable files /etc/passwd and /etc/group, to the protected files /mls/passwd and /mls/group. These protected files are ASCII...files which are referred to as "shadow files". .../mls/passwd contains the
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-12
... Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application Document, and Approving Use of the Traditional Licensing Process a. Type of Filing: Notice of Intent to File License...: November 11, 2012. d. Submitted by: Aquenergy Systems, Inc., a fully owned subsidiary of Enel Green Power...
Storing files in a parallel computing system based on user-specified parser function
Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron
2014-10-21
Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
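The parser mechanism can be sketched as follows: the application supplies a parser that decides whether each file is stored and what metadata accompanies it. Function names, the placement policy, and the notion of a storage node here are assumptions, not the patented implementation.

```python
# Hedged sketch of parser-driven storage: the parser enforces semantic
# requirements and extracts metadata that travels with each stored file.
import os

def store_with_parser(paths, parser, nodes):
    """Store only files the parser accepts, tagging each with parsed metadata."""
    stored = []
    for i, path in enumerate(paths):
        accepted, metadata = parser(path)
        if not accepted:                      # parser rejects files that fail its rules
            continue
        node = nodes[i % len(nodes)]          # trivial placement policy for the sketch
        stored.append({"path": path, "node": node, "metadata": metadata})
    return stored

# Example parser: keep only non-empty files, record their size as metadata.
def size_parser(path):
    size = os.path.getsize(path)
    return size > 0, {"bytes": size}
```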
Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron
2015-02-03
Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.
Registered File Support for Critical Operations Files at (Space Infrared Telescope Facility) SIRTF
NASA Technical Reports Server (NTRS)
Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.
2001-01-01
The SIRTF Science Center's (SSC) Science Operations System (SOS) has to manage nearly one hundred critical operations files, which it does through comprehensive file management services. The management is accomplished via the registered file system (otherwise known as TFS), which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and metadata in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command-line client implementing this API has been developed as a client tool. This paper describes the architecture and current implementation, but more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.
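The registration idea, recording every file transaction together with its metadata so the repository can be audited and queried, might look roughly like the following sketch. The schema and function names are hypothetical; this is not the SSC's API.

```python
# Minimal sketch of transaction registration, assuming a hypothetical schema.
import sqlite3, hashlib, os, time

db = sqlite3.connect("registry.db")
db.execute("""CREATE TABLE IF NOT EXISTS transactions
              (filename TEXT, sha256 TEXT, size INTEGER,
               action TEXT, timestamp REAL)""")

def register(path, action):
    """Record one file transaction (e.g. 'ingest' or 'retrieve') with metadata."""
    data = open(path, "rb").read()
    db.execute("INSERT INTO transactions VALUES (?,?,?,?,?)",
               (os.path.basename(path), hashlib.sha256(data).hexdigest(),
                len(data), action, time.time()))
    db.commit()
```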
Storing files in a parallel computing system using list-based index to identify replica files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faibish, Sorin; Bent, John M.; Tzelnic, Percy
Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.
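A small sketch of the list-based index: one entry per file holds a list of storage locations (the file and its replicas) plus a checksum used to validate whichever copy is read. All names are illustrative assumptions.

```python
# Sketch of a list-based replica index with checksum validation.
import hashlib

def make_index(file_id, locations, data):
    """Build an index entry listing all copies and the expected checksum."""
    return {file_id: {"locations": list(locations),
                      "sha256": hashlib.sha256(data).hexdigest()}}

def read_validated(entry, read_fn):
    """Try each listed location until one copy matches the stored checksum."""
    for loc in entry["locations"]:
        data = read_fn(loc)
        if hashlib.sha256(data).hexdigest() == entry["sha256"]:
            return data
    raise IOError("no valid replica found")
```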
Determining the Completeness of the Nimbus Meteorological Data Archive
NASA Technical Reports Server (NTRS)
Johnson, James; Moses, John; Kempler, Steven; Zamkoff, Emily; Al-Jazrawi, Atheer; Gerasimov, Irina; Trivedi, Bhagirath
2011-01-01
NASA launched the Nimbus series of meteorological satellites in the 1960s and 70s. These satellites carried instruments for making observations of the Earth in the visible, infrared, ultraviolet, and microwave wavelengths. The original data archive consisted of a combination of digital data written to 7-track computer tapes and on various film media. Many of these data sets are now being migrated from the old media to the GES DISC modern online archive. The process involves recovering the digital data files from tape as well as scanning images of the data from film strips. Some of the challenges of archiving the Nimbus data include the lack of any metadata from these old data sets. Metadata standards and self-describing data files did not exist at that time, and files were written on now obsolete hardware systems and outdated file formats. This requires creating metadata by reading the contents of the old data files. Some digital data files were corrupted over time, or were possibly improperly copied at the time of creation. Thus there are data gaps in the collections. The film strips were stored in boxes and are now being scanned as JPEG-2000 images. The only information describing these images is what was written on them when they were originally created, and sometimes this information is incomplete or missing. We have the ability to cross-reference the scanned images against the digital data files to determine which of these best represents the data set from the various missions, or to see how complete the data sets are. In this presentation we compared data files and scanned images from the Nimbus-2 High-Resolution Infrared Radiometer (HRIR) for September 1966 to determine whether the data and images are properly archived with correct metadata.
Bringing Control System User Interfaces to the Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xihui; Kasemir, Kay
With the evolution of web based technologies, especially HTML5 [1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first one is the WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses generic JavaScript and a generic communication mechanism between the web browser and web server. It is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA) [4]. It is a protocol that provides efficient control system data communication using WebSocket [5], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. WebPDA is control system independent, potentially supporting any type of control system.
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-06-30
Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
Storing files in a parallel computing system based on user or application specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.
2016-03-29
Techniques are provided for storing files in a parallel computing system based on a user-specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.
New Web Server - the Java Version of Tempest - Produced
NASA Technical Reports Server (NTRS)
York, David W.; Ponyik, Joseph G.
2000-01-01
A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.
The computerized OMAHA system in Microsoft Office Excel.
Lai, Xiaobin; Wong, Frances K Y; Zhang, Peiqiang; Leung, Carenx W Y; Lee, Lai H; Wong, Jessica S Y; Lo, Yim F; Ching, Shirley S Y
2014-01-01
The OMAHA System was adopted as the documentation system in an interventional study. To systematically record client care and facilitate data analysis, two Office Excel files were developed. The first Excel file (File A) was designed to record problems, care procedures, and outcomes for individual clients according to the OMAHA System. It was used by the intervention nurses in the study. The second Excel file (File B) is a summary of all clients, extracted automatically from File A. Data in File B can be analyzed directly in Excel or imported into PASW for further analysis. Both files have four parts, recording basic information and the three parts of the OMAHA System. The computerized OMAHA System simplified the documentation procedure and facilitated the management and analysis of data.
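The analysis step, pulling the aggregated summary (File B) into an analysis environment, might look like the following sketch. The file name and column names are assumptions; the study's actual workbook layout is not reproduced here.

```python
# Hedged sketch: load the hypothetical File B summary workbook and tabulate
# problem frequencies. Requires pandas (and openpyxl for .xlsx files).
import pandas as pd

summary = pd.read_excel("file_b_summary.xlsx")         # assumed file name
problem_counts = summary.groupby("problem").size()     # assumed column name
print(problem_counts.sort_values(ascending=False).head())
```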
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... with an existing 56-kilovolt transmission line, maintained by Southern California Edison, which runs... intent, and competing applications may be filed electronically via the Internet. See 18 CFR 385.2001(a)(1...
Permanent-File-Validation Utility Computer Program
NASA Technical Reports Server (NTRS)
Derry, Stephen D.
1988-01-01
Errors in files are detected and corrected during operation. The Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with a mechanism to verify the integrity of the permanent file base. It locates and identifies permanent file errors in the Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors are written to a listing file and to the system and job day files. The program operates by reading system tables, catalog tracks, permit sectors, and disk linkage bytes to validate expected and actual file linkages. It has been used extensively to identify and locate errors in permanent files and to enable online correction, reducing computer-system downtime.
76 FR 66695 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-27
.... DWHS P04 System name: Reduction-In-Force Case Files (February 11, 2011, 76 FR 7825). Changes....'' * * * * * DWHS P04 System name: Reduction-In-Force Case Files. System location: Human Resources Directorate... system: Storage: Paper file folders. Retrievability: Filed alphabetically by last name. Safeguards...
Insider Threat Indicator Ontology
2016-05-25
sometimes a subtle and debatable offense. The activities of employees or other insiders, such as reading the newspaper, playing games, or chatting in the...intent. We again relied on existing schema for the human domain [48] and also consulted theories of human intent [49]. We also drew inspiration for our...medical history None ModificationAction DigitalAction To change a file or system None MoneyAsset FinancialAsset An officially issued legal tender
Evaluation of the Military Criminal Investigative Organizations Sexual Assault Investigations
2013-07-09
...direct the collection of clothing articles that a victim or suspect might have placed on themselves shortly after the assault, if different from the...NCIS policy. Once the field office confirmed the existence of the digitized files in the NCIS case management system, they destroyed the local copies
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-03
... book were $10.00 x $10.05, and a market participant entered a Post-Only Order to buy at $10.05, the.... Thus, if a sell order is on the book at $10 and a Post-Only Order to buy at $10.01 is entered, the... order's current behavior when crossing an existing order on the System. PHLX also notes that NASDAQ has...
Personal File Management for the Health Sciences.
ERIC Educational Resources Information Center
Apostle, Lynne
Written as an introduction to the concepts of creating a personal or reprint file, this workbook discusses both manual and computerized systems, with emphasis on the preliminary groundwork that needs to be done before starting any filing system. A file assessment worksheet is provided; considerations in developing a personal filing system are…
47 CFR 1.10008 - What are IBFS file numbers?
Code of Federal Regulations, 2013 CFR
2013-10-01
... Random Selection International Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign...) For a description of file number information, see The International Bureau Filing System File Number... 47 Telecommunication 1 2013-10-01 2013-10-01 false What are IBFS file numbers? 1.10008 Section 1...
47 CFR 1.10008 - What are IBFS file numbers?
Code of Federal Regulations, 2010 CFR
2010-10-01
... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2010-10-01 2010-10-01 false What are IBFS file numbers? 1.10008 Section 1...
47 CFR 1.10008 - What are IBFS file numbers?
Code of Federal Regulations, 2012 CFR
2012-10-01
... Random Selection International Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign...) For a description of file number information, see The International Bureau Filing System File Number... 47 Telecommunication 1 2012-10-01 2012-10-01 false What are IBFS file numbers? 1.10008 Section 1...
47 CFR 1.10008 - What are IBFS file numbers?
Code of Federal Regulations, 2011 CFR
2011-10-01
... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2011-10-01 2011-10-01 false What are IBFS file numbers? 1.10008 Section 1...
47 CFR 1.10008 - What are IBFS file numbers?
Code of Federal Regulations, 2014 CFR
2014-10-01
... Random Selection International Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign...) For a description of file number information, see The International Bureau Filing System File Number... 47 Telecommunication 1 2014-10-01 2014-10-01 false What are IBFS file numbers? 1.10008 Section 1...
NASA Technical Reports Server (NTRS)
Fanselow, J. L.; Vavrus, J. L.
1984-01-01
ARCH, file archival system for DEC VAX, provides for easy offline storage and retrieval of arbitrary files on DEC VAX system. System designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of same programs and associated files.
Automatic image database generation from CAD for 3D object recognition
NASA Astrophysics Data System (ADS)
Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.
1993-06-01
The development and evaluation of Multiple-View 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of providing physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file sharing system (NFS), the images can directly be stored on a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, advantages and limitations of using CAD data, and conclusions from using such a scheme are also presented.
User interface to administrative DRMS within a distributed environment
NASA Technical Reports Server (NTRS)
Martin, L. D.; Kirk, R. D.
1983-01-01
The implementation of a data base management system (DBMS) into a communications office to control and report on communication leased service contracts is discussed. The system user executes online programs to update five files residing on a UNIVAC 1100/82, through the forms mode features of the Tektronix 4025 terminal and IMSAI 8080 microcomputer. The user can call the appropriate form up on the Tektronix 4025 screen and enter new data, update existing data, or discontinue service. Selective online printing of 40 reports is accomplished by the system user to satisfy management, budget, and bill payment reporting requirements.
A standard format and a graphical user interface for spin system specification.
Biternas, A G; Charnock, G T P; Kuprov, Ilya
2014-03-01
We introduce a simple and general XML format for spin system description that is the result of extensive consultations within the Magnetic Resonance community and unifies under one roof all major existing spin interaction specification conventions. The format is human-readable, easy to edit and easy to parse using standard XML libraries. We also describe a graphical user interface that was designed to facilitate construction and visualization of complicated spin systems. The interface is capable of generating input files for several popular spin dynamics simulation packages. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
A Geometry Based Infra-Structure for Computational Analysis and Design
NASA Technical Reports Server (NTRS)
Haimes, Robert
1998-01-01
The computational steps traditionally taken for most engineering analysis suites (computational fluid dynamics (CFD), structural analysis, heat transfer, etc.) are: (1) Surface Generation -- usually by employing a Computer Assisted Design (CAD) system; (2) Grid Generation -- preparing the volume for the simulation; (3) Flow Solver -- producing the results at the specified operational point; (4) Post-processing Visualization -- interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. These vendors couple directly to a number of CAD systems and are executed from within the CAD Graphical User Interface (GUI). It should be noted that the structural analysis problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could go to Initial Graphics Exchange Specification (IGES) or Standard Exchange Program (STEP) files. The outputs from Grid Generators and Solvers do not really have standards, though there are a couple of file formats that can be used for a subset of the gridding (i.e. PLOT3D data formats). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Specifically, the problems with this procedure are: (1) File based -- Information flows from one step to the next via data files with formats specified for that procedure. File standards, when they exist, are wholly inadequate. For example, geometry from CAD systems (transmitted via IGES files) is defined as disjoint surfaces and curves (as well as masses of other information of no interest to the Grid Generator). This is particularly onerous for modern CAD systems based on solid modeling. The part was a proper solid, and in the translation to IGES it has lost this important characteristic. STEP is another standard for CAD data that exists and supports the concept of a solid. The problem with STEP is that a solid modeling geometry kernel is required to query and manipulate the data within this type of file. (2) 'Good' Geometry -- A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. Adroit multi-block methods are not far behind. This means that a million-node steady-state solution can be computed on the order of hours (using current high-performance computers) starting from this 'good' geometry. Unfortunately, the geometry usually transmitted from the CAD system is not 'good' in the grid generator sense. The grid generator needs smooth closed solid geometry. It can take a week (or more) of interaction with the CAD output (sometimes by hand) before the process can begin. (3) One-way Communication -- All information travels on from one phase to the next. This makes procedures like node adaptation difficult when attempting to add or move nodes that sit on bounding surfaces (when the actual surface data has been lost after the grid generation phase).
Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, become prohibitive. There is also no way to easily deal with this system in a modular manner. One can only replace the grid generator, for example, if the software reads and writes the same files. Instead of the serial approach to analysis as described above, CAPRI takes a geometry-centric approach. This makes the actual geometry (not a discretized version) accessible to all phases of the analysis. The connection to the geometry is made through an Application Programming Interface (API) and NOT a file system. This API isolates the top-level applications (grid generators, solvers and visualization components) from the geometry engine. It also allows the replacement of one geometry kernel with another without affecting these top-level applications. For example, if UniGraphics is used as the CAD package, then Parasolid (UG's own geometry engine) can be used for all geometric queries so that no solid geometry information is lost in a translation. This is much better than STEP because, when the data is queried, the same software is executed as used in the CAD system. Therefore, one analyzes the exact part that is in the CAD system. CAPRI uses the same idea as the commercial structural analysis codes but does not specify control. Software components of the CAD system are used, but the analysis suite, not the CAD operator, specifies the control of the software session. This also means that licensing issues may be minimized and individuals need not know how to operate a CAD system in order to run the suite.
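The contrast between the file-based pipeline and the geometry-centric API can be sketched as follows; the class and method names are hypothetical illustrations of the idea, not the actual CAPRI or Parasolid bindings.

    # Conceptual sketch of a geometry-centric analysis pipeline. The names are
    # hypothetical, not the real CAPRI/Parasolid API: the point is that grid
    # generation queries the live geometry kernel instead of a translated file.
    class GeometryKernel:
        """Hypothetical wrapper around the CAD system's own modelling kernel."""
        def load_part(self, part_name):
            ...                                # open the native CAD part; no IGES/STEP translation
        def evaluate_surface(self, face_id, u, v):
            return (0.0, 0.0, 0.0)             # placeholder for the exact point on the face at (u, v)
        def inside(self, point):
            return True                        # placeholder solid-membership test for node adaptation

    def generate_surface_grid(kernel, part_name, n_faces=4):
        kernel.load_part(part_name)
        # Boundary nodes come from exact geometry queries, so later adaptation can
        # always move nodes back onto the true surface.
        return [kernel.evaluate_surface(face, u, v)
                for face in range(n_faces)
                for u in (0.0, 0.5, 1.0)
                for v in (0.0, 0.5, 1.0)]

    nodes = generate_surface_grid(GeometryKernel(), "wing_section")
    print(len(nodes), "surface nodes")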
75 FR 65467 - Combined Notice of Filings No. 1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-25
...: Venice Gathering System, L.L.C. Description: Venice Gathering System, L.L.C. submits tariff filing per 154.203: Venice Gathering System Rate Settlement Compliance Filing to be effective 11/1/2010. Filed...
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
48 CFR 304.803-70 - Contract/order file organization and use of checklists.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...
48 CFR 304.803-70 - Contract/order file organization and use of checklists.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...
48 CFR 304.803-70 - Contract/order file organization and use of checklists.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...
48 CFR 304.803-70 - Contract/order file organization and use of checklists.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...
48 CFR 304.803-70 - Contract/order file organization and use of checklists.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...
Design and Optimization of a Dual-HPGe Gamma Spectrometer and Its Cosmic Veto System
NASA Astrophysics Data System (ADS)
Zhang, Weihua; Ro, Hyunje; Liu, Chuanlei; Hoffman, Ian; Ungar, Kurt
2017-03-01
In this paper, a dual high purity germanium (HPGe) gamma spectrometer detection system with an increased solid angle was developed. The detection system consists of a pair of Broad Energy Germanium (BE-5030p) detectors and an XIA LLC digital gamma finder/Pixie-4 data-acquisition system. A data file processor containing five modules was developed; it parses Pixie-4 list-mode data output files and classifies detections into anticoincident/coincident events and their specific coincidence types (double/triple/quadruple) for further analysis. A novel cosmic veto system was installed in the detection system. It was designed to be easy to install around an existing system while still providing sufficient cosmic veto shielding comparable to other designs. This paper describes the coverage and efficiency of this cosmic veto and the data processing system. It has been demonstrated that the cosmic veto system can provide a mean background reduction of 66.1%, which results in a mean MDA improvement of 58.3%. The counting time to meet the required MDA for a specific radionuclide can be reduced by a factor of 2-3 compared to that of a conventional HPGe system. This paper also provides an initial overview of coincidence timing distributions between an incoming event from a cosmic veto plate and an HPGe detector.
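A much-simplified sketch of the event classification step, grouping time-ordered detections into anticoincident and double/triple/quadruple coincidences, is shown below; the field names and coincidence window are assumptions, since the real Pixie-4 list-mode format is binary and considerably more involved.

    # Sketch of coincidence classification by multiplicity; window value assumed.
    COINC_WINDOW_NS = 500

    def classify(events):
        """events: list of (timestamp_ns, channel); returns list of (kind, group)."""
        events = sorted(events)
        groups, current = [], [events[0]]
        for ev in events[1:]:
            if ev[0] - current[-1][0] <= COINC_WINDOW_NS:
                current.append(ev)               # within the window: same coincidence group
            else:
                groups.append(current)
                current = [ev]
        groups.append(current)

        names = {1: "anticoincident", 2: "double", 3: "triple", 4: "quadruple"}
        return [(names.get(len(g), "higher-order"), g) for g in groups]

    hits = [(0, "HPGe1"), (120, "HPGe2"), (5000, "veto"), (5100, "HPGe1"), (9000, "HPGe2")]
    for kind, group in classify(hits):
        print(kind, group)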
Shivanand, Sunita; Patil, Chetan R; Thangala, Venugopal; Kumar, Pabbati Ravi; Sachdeva, Jyoti; Krishna, Akash
2013-05-01
To evaluate and compare the efficacy and cleaning ability of a hand system and two rotary systems in root canal retreatment. Sixty extracted premolars were retreated with the following systems: Group 1-ProTaper Universal retreatment files, Group 2-ProFile system, Group 3-H-file. Specimens were split longitudinally and the amount of remaining gutta-percha on the canal walls was assessed using direct visual scoring with the aid of a stereomicroscope. Results were statistically analyzed using the ANOVA test. Completely clean root canal walls were not achieved with any of the techniques investigated. However, all three systems proved to be effective for gutta-percha removal. A significant difference was found between the ProTaper Universal retreatment file and H-file, and also between ProFile and H-file. Under the conditions of the present study, ProTaper Universal retreatment files left significantly less gutta-percha and sealer than ProFile and H-file. Rotary systems in combination with gutta-percha solvents can perform superiorly as compared to the time-tested traditional hand instrumentation in root canal retreatment.
FORTRAN Programs for Aerodynamic Analyses on the Microvax/2000 CAD CAE Workstation
1988-09-01
file exists, you must compile the program by typing, FOR DUBLET [Return] The next step is to link the program by entering, LINK DUBLET [Return] The...files DUBLET.EXE and DUBLET.OBJ will now exist and you will be able to run the program. Running the Program To run the program, type DUBLET [Return...by entering 0.1 [Return] Now enter the number of intervals you desire the doublet distribution to have by entering 10 [Return] The screen should now
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-19
...The Media Bureau grants a Counterproposal filed by Grenax Broadcasting II, LLC, for a new FM allotment on Channel 246C2 at Munds Park, Arizona, over a conflicting Petition for Rule Making and hybrid application filed by Univision Radio License Corporation for an increase in existing service by Station KHOV-FM, Wickenburg, Arizona. The Bureau also dismisses a Petition for Rule Making filed by Rocket Radio, Inc. for a new allotment at Williams, Arizona, because no continuing expression of interest was filed.
NASA Technical Reports Server (NTRS)
Ferrara, Jeffrey; Calk, William; Atwell, William; Tsui, Tina
2013-01-01
MPISS is an automatic file transfer system that implements a combination of standard and mission-unique transfer protocols required by the Global Precipitation Measurement Mission (GPM) Precipitation Processing System (PPS) to control the flow of data between the MOC and the PPS. The primary features of MPISS are file transfers (both with and without PPS specific protocols), logging of file transfer and system events to local files and a standard messaging bus, short term storage of data files to facilitate retransmissions, and generation of file transfer accounting reports. The system includes a graphical user interface (GUI) to control the system, allow manual operations, and to display events in real time. The PPS specific protocols are an enhanced version of those that were developed for the Tropical Rainfall Measuring Mission (TRMM). All file transfers between the MOC and the PPS use the SSH File Transfer Protocol (SFTP). For reports and data files generated within the MOC, no additional protocols are used when transferring files to the PPS. For observatory data files, an additional handshaking protocol of data notices and data receipts is used. MPISS generates and sends to the PPS data notices containing data start and stop times along with a checksum for the file for each observatory data file transmitted. MPISS retrieves the PPS generated data receipts that indicate the success or failure of the PPS to ingest the data file and/or notice. MPISS retransmits the appropriate files as indicated in the receipt when required. MPISS also automatically retrieves files from the PPS. The unique feature of this software is the use of both standard and PPS specific protocols in parallel. The advantage of this capability is that it supports users that require the PPS protocol as well as those that do not require it. The system is highly configurable to accommodate the needs of future users.
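The data-notice side of the handshaking protocol can be sketched as follows; the notice layout, field names, and checksum algorithm are assumptions for illustration, not the actual GPM/PPS notice format.

    # Hedged sketch: build a data notice (start/stop times plus file checksum)
    # for an observatory file. Field names and checksum choice are assumptions.
    import hashlib
    import json
    from pathlib import Path

    def make_data_notice(path, start_utc, stop_utc):
        digest = hashlib.md5(Path(path).read_bytes()).hexdigest()
        return json.dumps({
            "file": Path(path).name,
            "data_start": start_utc,
            "data_stop": stop_utc,
            "checksum": digest,
        })

    Path("observatory_001.dat").write_bytes(b"example payload")     # stand-in data file
    notice = make_data_notice("observatory_001.dat", "2013-01-01T00:00:00Z", "2013-01-01T01:00:00Z")
    print(notice)
    # In MPISS the notice would travel to the PPS over SFTP alongside the data file;
    # a data receipt read back later drives any needed retransmission.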
Topçuoğlu, Hüseyin Sinan; Düzgün, Salih; Akpek, Firdevs; Topçuoğlu, Gamze
2016-11-01
This study evaluated the effect of creating a glide path and apical preparation size on the incidence of apical cracks during canal preparation in mandibular molar teeth with curved canals. One hundred and forty extracted teeth were used. The teeth were randomly assigned to a control group or one of six experimental groups (n = 20 per group) for canal preparation. No preparation was performed on teeth in the control group. In three of the six experimental groups, a glide path was not created; a glide path was created on the curved mesial canals of all teeth in the remaining three experimental groups. All teeth in the experimental groups were then instrumented with the following systems: Reciproc, WaveOne (WO), and ProTaper Next (PTN). Digital images of the apical root surfaces of these teeth were recorded before preparation, after instrumentation with size 25 files, and after instrumentation with size 40 files. The images were then inspected for the presence of any new apical cracks and propagation. There was no significant difference between the experimental groups during canal preparation using size 25 files (p > 0.05). Reciproc and WO caused more new apical cracks than did PTN during canal preparation using size 40 files (p < 0.05). However, canal preparation using size 40 files did not cause propagation of existing cracks (p > 0.05). Performing a glide path prior to canal preparation did not change the incidence of apical cracks during preparation. Additionally, increasing apical preparation size may increase the incidence of apical cracks during canal preparation. SCANNING 38:585-590, 2016. © 2016 Wiley Periodicals, Inc.
15. Photocopy of engineering drawing F790 in files of Utilities ...
15. Photocopy of engineering drawing F-790, in the Utilities Engineering files in Cleveland, of the Allis-Chalmers steam engine. This side elevation of the engine in the Division Avenue plant is the last such drawing remaining in existence. The engine was dismantled. Date of drawing is 1914. - Division Avenue Pumping Station & Filtration Plant, West 45th Street and Division Avenue, Cleveland, Cuyahoga County, OH
Discovery in a World of Mashups
NASA Astrophysics Data System (ADS)
King, T. A.; Ritschel, B.; Hourcle, J. A.; Moon, I. S.
2014-12-01
When the first digital information was stored electronically, discovery of what existed was through file names and the organization of the file system. With the advent of networks, digital information was shared on a wider scale, but discovery remained based on file and folder names. With a growing number of information sources, name-based discovery quickly became ineffective. The keyword-based search engine was one of the first types of mashup in the world of Web 1.0. Embedded links connected one document to another with prescribed relationships between files, and the world of Web 2.0 was formed. Search engines like Google used the links to improve search results, and a worldwide mashup was formed. While a vast improvement, the need for semantic (meaning-rich) discovery was clear, especially for the discovery of scientific data. In response, every science discipline defined schemas to describe their type of data. Some core schemas were shared, but most schemas are custom-tailored even though they share many common concepts. As with the networking of information sources, science increasingly relies on data from multiple disciplines. So there is a need to bring together multiple sources of semantically rich information. We explore how harvesting, conceptual mapping, facet-based search engines, search term promotion, and style sheets can be combined to create the next generation of mashups in the emerging world of Web 3.0. We use NASA's Planetary Data System and NASA's Heliophysics Data Environment to illustrate how to create a multi-discipline mashup.
An On-Line Nutrition Information System for the Clinical Dietitian
Petot, Grace J.; Houser, Harold B.; Uhrich, Roberta V.
1980-01-01
A university based computerized nutrient data base has been integrated into an on-line nutrition information system in a large acute care hospital. Key elements described in the design and installation of the system are the addition of hospital menu items to the existing nutrient data base, the creation of a unique recipe file in the computer, production of a customized menu/nutrient handbook, preparation of forms and establishment of output formats. Standardization of nutrient calculations in the clinical and food production areas, variety and purposes of various format options, the advantages of timesharing and plans for expansion of the system are discussed.
47 CFR 1.10006 - Is electronic filing mandatory?
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Is electronic filing mandatory? 1.10006 Section... International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic filing is mandatory for... System (IBFS) form is available. Applications for which an electronic form is not available must be filed...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... turbine where the pressure would be reduced. The applicant proposes to interconnect with an existing 56... applications may be filed electronically via the Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions...
Checkpoint-Restart in User Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
CRUISE implements a user-space file system that stores data in main memory and transparently spills over to other storage, like local flash memory or the parallel file system, as needed. CRUISE also exposes file contents for remote direct memory access, allowing external tools to copy files to the parallel file system in the background with reduced CPU interruption.
An Ephemeral Burst-Buffer File System for Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Teng; Moody, Adam; Yu, Weikuan
BurstFS is a distributed file system for node-local burst buffers on high performance computing systems. BurstFS presents a shared file system space across the burst buffers so that applications that use shared files can access the highly-scalable burst buffers without changing their applications.
5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...
5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...
5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...
5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...
5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...
Bürklein, S; Benten, S; Schäfer, E
2014-05-01
To assess in a laboratory setting the amount of apically extruded debris associated with different single-file nickel-titanium instrumentation systems compared to one multiple-file rotary system. Eighty human mandibular central incisors were randomly assigned to four groups (n = 20 teeth per group). The root canals were instrumented according to the manufacturers' instructions using the reciprocating single-file system Reciproc, the single-file rotary systems F360 and OneShape, and the multiple-file rotary Mtwo instruments. The apically extruded debris was collected and dried in pre-weighed glass vials. The amount of debris was assessed with a microbalance and statistically analysed using ANOVA and the post hoc Student-Newman-Keuls test. The time required to prepare the canals with the different instruments was also recorded. Reciproc produced significantly more debris compared to all other systems (P < 0.05). No significant difference was noted between the two single-file rotary systems and the multiple-file rotary system (P > 0.05). Instrumentation with the three single-file systems was significantly faster than with Mtwo (P < 0.05). Under the conditions of this study, all systems caused apical debris extrusion. Rotary instrumentation was associated with less debris extrusion compared to reciprocal instrumentation. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Usage analysis of user files in UNIX
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.; Iyer, Ravishankar K.
1987-01-01
Presented is a user-oriented analysis of short term file usage in a 4.2 BSD UNIX environment. The key aspect of this analysis is a characterization of users and files, which is a departure from the traditional approach of analyzing file references. Two characterization measures are employed: accesses-per-byte (combining fraction of a file referenced and number of references) and file size. This new approach is shown to distinguish differences in files as well as users, which can be used in efficient file system design, and in creating realistic test workloads for simulations. A multi-stage gamma distribution is shown to closely model the file usage measures. Even though overall file sharing is small, some files belonging to a bulletin board system are accessed by many users, simultaneously and otherwise. Over 50% of users referenced files owned by other users, and over 80% of all files were involved in such references. Based on the differences in files and users, suggestions to improve the system performance were also made.
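A toy example of the two characterization measures and a gamma fit follows; the per-file numbers are invented, the definition of accesses-per-byte is simplified to references divided by file size, and only a single-stage gamma is fitted, whereas the paper uses a multi-stage model.

    # Sketch of the usage measures; sample numbers are made up for illustration.
    import numpy as np
    from scipy import stats

    # columns: bytes_referenced, reference_count, file_size
    records = np.array([
        [4096,   3,   8192],
        [1024,  10,   1024],
        [65536,  2, 131072],
        [512,    1,   4096],
    ], dtype=float)

    accesses_per_byte = records[:, 1] / records[:, 2]   # simplified: references per byte of file size
    file_sizes = records[:, 2]

    # Single-stage gamma fit as a rough approximation of the paper's model.
    shape, loc, scale = stats.gamma.fit(accesses_per_byte, floc=0)
    print(accesses_per_byte, shape, scale)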
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.
Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha
2017-04-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault-tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network-attached storage.
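The row-key idea, encoding the imaging hierarchy so that related rows sort (and therefore collocate) together, can be sketched as below; the field order, separators, and padding widths are assumptions, not the paper's exact key design.

    # Sketch of a hierarchical row key: project, subject, session, scan, slice.
    def make_row_key(project, subject, session, scan, slice_idx):
        # Fixed-width, zero-padded fields keep lexicographic order identical to
        # the hierarchical order, so slices of one scan stay adjacent in key space.
        return f"{project}:{subject:08d}:{session:04d}:{scan:04d}:{slice_idx:05d}"

    keys = [make_row_key("studyA", 12, 1, 2, i) for i in range(3)]
    print(sorted(keys))   # hierarchically related rows sort together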
SIPSMetGen: It's Not Just For Aircraft Data and ECS Anymore.
NASA Astrophysics Data System (ADS)
Schwab, M.
2015-12-01
The SIPSMetGen utility, developed for the NASA EOSDIS project under the EED contract, simplified the creation of file level metadata for the ECS System. The utility has been enhanced for ease of use, efficiency, speed and increased flexibility. The SIPSMetGen utility was originally created as a means of generating file level spatial metadata for Operation IceBridge. The first version created only ODL metadata, specific for ingest into ECS. The core strength of the utility was, and continues to be, its ability to take complex shapes and patterns of data collection point clouds from aircraft flights and simplify them to a relatively simple concave hull geo-polygon. It has been found to be a useful and easy to use tool for creating file level metadata for many other missions, both aircraft and satellite. While the original version was useful, it had its limitations. In 2014 Raytheon was tasked to make enhancements to SIPSMetGen; this resulted in a new version of SIPSMetGen that can create ISO-compliant XML metadata, provides an optimized and streamlined algorithm for creating the spatial metadata, runs more quickly with more consistent results, and can be configured to run multi-threaded on systems with multiple processors. The utility comes with a Java-based graphical user interface to aid in configuration and running of the utility. The enhanced SIPSMetGen allows more diverse data sets to be archived with file level metadata. The advantage of archiving data with file level metadata is that it makes it easier for data users and scientists to find relevant data. File level metadata unlocks the power of existing archives and metadata repositories such as ECS and CMR and search and discovery utilities like Reverb and Earth Data Search. Missions now using SIPSMetGen include: Aquarius, Measures, ARISE, and Nimbus.
48 CFR 204.802 - Contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...
48 CFR 204.802 - Contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...
48 CFR 204.802 - Contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...
48 CFR 204.802 - Contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...
48 CFR 204.802 - Contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official contract...
Twin-tailed fail-over for fileservers maintaining full performance in the presence of a failure
Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.
2008-02-12
A method for maintaining full performance of a file system in the presence of a failure is provided. The file system having N storage devices, where N is an integer greater than zero and N primary file servers where each file server is operatively connected to a corresponding storage device for accessing files therein. The file system further having a secondary file server operatively connected to at least one of the N storage devices. The method including: switching the connection of one of the N storage devices to the secondary file server upon a failure of one of the N primary file servers; and switching the connections of one or more of the remaining storage devices to a primary file server other than the failed file server as necessary so as to prevent a loss in performance and to provide each storage device with an operating file server.
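The first switching step of the method, handing the failed server's device to the secondary server, can be illustrated with a small sketch; this is an illustration of the idea only, not the patented method, and it omits the subsequent reshuffling of the remaining devices.

    # Illustration only: when a primary file server fails, its twin-tailed
    # storage device is switched to the secondary file server.
    def reassign(devices_to_servers, failed_server, secondary="secondary"):
        new_map = dict(devices_to_servers)
        for device, server in devices_to_servers.items():
            if server == failed_server:
                new_map[device] = secondary
        return new_map

    mapping = {"disk0": "fs0", "disk1": "fs1", "disk2": "fs2"}
    print(reassign(mapping, "fs1"))     # disk1 is now served by the secondary file server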
Comparing image search behaviour in the ARRS GoldMiner search engine and a clinical PACS/RIS.
De-Arteaga, Maria; Eggel, Ivan; Do, Bao; Rubin, Daniel; Kahn, Charles E; Müller, Henning
2015-08-01
Information search has changed the way we manage knowledge and the ubiquity of information access has made search a frequent activity, whether via Internet search engines or increasingly via mobile devices. Medical information search is in this respect no different and much research has been devoted to analyzing the way in which physicians aim to access information. Medical image search is a much smaller domain but has gained much attention as it has different characteristics than search for text documents. While web search log files have been analysed many times to better understand user behaviour, the log files of hospital internal systems for search in a PACS/RIS (Picture Archival and Communication System, Radiology Information System) have rarely been analysed. Such a comparison between a hospital PACS/RIS search and a web system for searching images of the biomedical literature is the goal of this paper. Objectives are to identify similarities and differences in search behaviour of the two systems, which could then be used to optimize existing systems and build new search engines. Log files of the ARRS GoldMiner medical image search engine (freely accessible on the Internet) containing 222,005 queries, and log files of Stanford's internal PACS/RIS search called radTF containing 18,068 queries were analysed. Each query was preprocessed and all query terms were mapped to the RadLex (Radiology Lexicon) terminology, a comprehensive lexicon of radiology terms created and maintained by the Radiological Society of North America, so the semantic content in the queries and the links between terms could be analysed, and synonyms for the same concept could be detected. RadLex was mainly created for the use in radiology reports, to aid structured reporting and the preparation of educational material (Lanlotz, 2006) [1]. In standard medical vocabularies such as MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) specific terms of radiology are often underrepresented, therefore RadLex was considered to be the best option for this task. The results show a surprising similarity between the usage behaviour in the two systems, but several subtle differences can also be noted. The average number of terms per query is 2.21 for GoldMiner and 2.07 for radTF, the used axes of RadLex (anatomy, pathology, findings, …) have almost the same distribution with clinical findings being the most frequent and the anatomical entity the second; also, combinations of RadLex axes are extremely similar between the two systems. Differences include a longer length of the sessions in radTF than in GoldMiner (3.4 and 1.9 queries per session on average). Several frequent search terms overlap but some strong differences exist in the details. In radTF the term "normal" is frequent, whereas in GoldMiner it is not. This makes intuitive sense, as in the literature normal cases are rarely described whereas in clinical work the comparison with normal cases is often a first step. The general similarity in many points is likely due to the fact that users of the two systems are influenced by their daily behaviour in using standard web search engines and follow this behaviour in their professional search. This means that many results and insights gained from standard web search can likely be transferred to more specialized search systems. Still, specialized log files can be used to find out more on reformulations and detailed strategies of users to find the right content. Copyright © 2015 Elsevier Inc. 
All rights reserved.
RAMA: A file system for massively parallel computers
NASA Technical Reports Server (NTRS)
Miller, Ethan L.; Katz, Randy H.
1993-01-01
This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.
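One common way a parallel file system can avoid inter-node synchronization is deterministic block placement by hashing, so that any node can compute where a block lives without consulting a central table; the sketch below illustrates that general technique, not necessarily RAMA's exact placement function.

    # Sketch of hash-based block placement across nodes with local disks.
    import hashlib

    NUM_NODES = 16                                   # processors, each with a few local disks

    def block_location(file_id, block_number):
        key = f"{file_id}:{block_number}".encode()
        return int(hashlib.sha1(key).hexdigest(), 16) % NUM_NODES

    print([block_location("datafile-42", b) for b in range(8)])   # blocks spread across nodes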
10 CFR 110.89 - Filing and service.
Code of Federal Regulations, 2010 CFR
2010-01-01
...: Rulemakings and Adjudications Staff or via the E-Filing system, following the procedure set forth in 10 CFR 2.302. Filing by mail is complete upon deposit in the mail. Filing via the E-Filing system is completed... residence with some occupant of suitable age and discretion; (2) Following the requirements for E-Filing in...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-22
... protests: June 14, 2013. All documents may be filed electronically via the Internet. See, 18 CFR 385.2001(a... existing boat dock into a floating pavilion; (4) install a boat ramp; (5) construct a parking area and...
75 FR 65942 - Credit Reforms in Organized Wholesale Electric Markets
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
...Pursuant to section 206 of the Federal Power Act, the Federal Energy Regulatory Commission amends its regulations to improve the management of risk and the subsequent use of credit in the organized wholesale electric markets. Each Regional Transmission Organization (RTO) and Independent System Operator (ISO) will be required to submit a compliance filing including tariff revisions to comply with the amended regulations or to demonstrate that its existing tariff already satisfies the regulations.
Using Context to Assist in Personal File Retrieval
2006-08-25
of this work, filled in many of the gaps in my knowledge, and helped steer me toward solutions. Anind Dey was also invaluable in helping me design...like a personal assistant. Unfortunately, we are far from this ideal today. In fact, information management is one of the largest problems in...world wide web The world wide web is, perhaps, the largest distributed naming system in existence. To help manage this namespace, the web combines a
Interdisciplinary Research Scenario Testing of EOSDIS
NASA Technical Reports Server (NTRS)
Emmitt, G. D.
1999-01-01
During the reporting period, the Principal Investigator (PI) has continued to serve on numerous review panels, task forces and committees with the goal of providing input and guidance for the Earth Observing System Data and Information System (EOSDIS) program at NASA Headquarters and NASA GSFC. In addition, the PI has worked together with personnel at the University of Virginia and the subcontractor (Simpson Weather Associates (SWA)) to continue to evaluate the latest releases of various versions of the user interfaces to the EOSDIS. Finally, as part of the subcontract, SWA has created an on-line Hierarchical Data Format (HDF) tutorial for non-HDF experts, particularly those that will be using EOSDIS and future EOS data products. A summary of these three activities is provided. The topics include: 1) Participation on EOSDIS Panels and Committees; 2) Evaluation and Tire Kicking of EOSDIS User Interfaces; and 3) An On-line HDF Tutorial. The report also includes attachments A, B, and C. Attachment A: Report From the May 1999 Science Data Panel. The topics include: 1) Summary of Data Panel Meeting; and 2) Panel's Comments/Recommendations. Attachment B: Survey Requesting Integrated Design Systems (IDS) Teams Input on the Descoping and Rescoping of the EOSDIS; and Attachment C: An HDF Tutorial for Beginners: EOSDIS Users and Small Data Providers (HTML Version). The topics include: 1) Tutorial Overview; 2) An Introduction to HDF; 3) The HDF Library: Software and Hardware; 4) Methods of Working with HDF Files; 5) Scientific Data API; 6) Attributes and Metadata; 7) Writing an SDS to an HDF File; 8) Obtaining Information on Existing HDF Files; 9) Reading a Scientific Data Set from an HDF File; 10) Example Programs; 11) Browsing and Visualizing HDF Data; and 12) Laboratory (Question and Answer).
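For readers following the tutorial's "Writing an SDS to an HDF File" and "Reading a Scientific Data Set" sections, a modern analogue using HDF5 via h5py is sketched below; the original tutorial targets the older HDF SDS interface, and the dataset name and attribute here are invented.

    # HDF5 analogue of writing and reading back a scientific data set.
    import numpy as np
    import h5py

    data = np.arange(12, dtype=np.float32).reshape(3, 4)

    with h5py.File("example.h5", "w") as f:
        dset = f.create_dataset("brightness_temperature", data=data)
        dset.attrs["units"] = "K"                    # attribute/metadata, cf. the tutorial's section 6

    with h5py.File("example.h5", "r") as f:          # reading it back, cf. the tutorial's section 9
        print(f["brightness_temperature"][()], dict(f["brightness_temperature"].attrs))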
75 FR 27986 - Electronic Filing System-Web (EFS-Web) Contingency Option
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-19
...] Electronic Filing System--Web (EFS-Web) Contingency Option AGENCY: United States Patent and Trademark Office... availability of its patent electronic filing system, Electronic Filing System--Web (EFS-Web) by providing a new contingency option when the primary portal to EFS-Web has an unscheduled outage. Previously, the entire EFS...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Regional Office Files (NLRB-25), Regional Advice and Injunction Litigation System (RAILS) and Associated Headquarters Files (NLRB-28), and Appeals Case Tracking System (ACTS) and Associated Headquarters Files (NLRB... Judicial Case Management Systems-Pending Case List (JCMS-PCL) and Associated Headquarters Files (NLRB-21...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Regional Office Files (NLRB-25), Regional Advice and Injunction Litigation System (RAILS) and Associated Headquarters Files (NLRB-28), and Appeals Case Tracking System (ACTS) and Associated Headquarters Files (NLRB... Judicial Case Management Systems-Pending Case List (JCMS-PCL) and Associated Headquarters Files (NLRB-21...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Regional Office Files (NLRB-25), Regional Advice and Injunction Litigation System (RAILS) and Associated Headquarters Files (NLRB-28), and Appeals Case Tracking System (ACTS) and Associated Headquarters Files (NLRB... Judicial Case Management Systems-Pending Case List (JCMS-PCL) and Associated Headquarters Files (NLRB-21...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Regional Office Files (NLRB-25), Regional Advice and Injunction Litigation System (RAILS) and Associated Headquarters Files (NLRB-28), and Appeals Case Tracking System (ACTS) and Associated Headquarters Files (NLRB... Judicial Case Management Systems-Pending Case List (JCMS-PCL) and Associated Headquarters Files (NLRB-21...
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high-capacity data manipulation required by the most complex real time models.
The Galley Parallel File System
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David
1996-01-01
As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.
Deceit: A flexible distributed file system
NASA Technical Reports Server (NTRS)
Siegel, Alex; Birman, Kenneth; Marzullo, Keith
1989-01-01
Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.
48 CFR 204.805 - Disposal of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...
48 CFR 204.804 - Closeout of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files. (1...
48 CFR 204.804 - Closeout of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files...
48 CFR 204.805 - Disposal of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...
48 CFR 204.805 - Disposal of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...
48 CFR 204.805 - Disposal of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...
48 CFR 204.804 - Closeout of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files...
48 CFR 204.804 - Closeout of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files. (1...
48 CFR 204.805 - Disposal of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Disposal of contract files. 204.805 Section 204.805 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.805 Disposal of contract files. (1...
48 CFR 204.804 - Closeout of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Closeout of contract files. 204.804 Section 204.804 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.804 Closeout of contract files. (1...
An XML-based Generic Tool for Information Retrieval in Solar Databases
NASA Astrophysics Data System (ADS)
Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain
This paper presents the current architecture of the `Solar Web Project' now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
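A minimal sketch of reading such an external XML description of an archive access system is shown below; the element and attribute names are invented, since the actual Solar Web configuration schema is not given in the abstract.

    # Parse a hypothetical archive-description file with the standard library.
    import xml.etree.ElementTree as ET

    config = """
    <archive name="medoc-eit" type="sql">
      <endpoint>jdbc:postgresql://medoc/solar</endpoint>
      <field name="date_obs" searchable="true"/>
      <field name="wavelength" searchable="true"/>
    </archive>
    """

    root = ET.fromstring(config)
    searchable = [f.get("name") for f in root.findall("field") if f.get("searchable") == "true"]
    print(root.get("name"), root.findtext("endpoint"), searchable)
    # A generic server could use such descriptions to route one user query to
    # several heterogeneous archives without hard-coding any database schema.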
12 CFR 349.4 - Filing procedures.
Code of Federal Regulations, 2013 CFR
2013-01-01
... FOREIGN EXCHANGE TRANSACTIONS § 349.4 Filing procedures. (a) General. Before commencing a retail forex... institution's proposed retail forex business and the manner in which it will be conducted; (ii) The amount of the institution's existing or proposed direct or indirect investment in the retail forex business as...
12 CFR 349.4 - Filing procedures.
Code of Federal Regulations, 2012 CFR
2012-01-01
... FOREIGN EXCHANGE TRANSACTIONS § 349.4 Filing procedures. (a) General. Before commencing a retail forex... institution's proposed retail forex business and the manner in which it will be conducted; (ii) The amount of the institution's existing or proposed direct or indirect investment in the retail forex business as...
12 CFR 349.4 - Filing procedures.
Code of Federal Regulations, 2014 CFR
2014-01-01
... FOREIGN EXCHANGE TRANSACTIONS § 349.4 Filing procedures. (a) General. Before commencing a retail forex... institution's proposed retail forex business and the manner in which it will be conducted; (ii) The amount of the institution's existing or proposed direct or indirect investment in the retail forex business as...
75 FR 39230 - Combined Notice of Filings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-08
.... Take notice that the Commission has received the following Natural Gas Pipeline Rate and Refund Report..., July 12, 2010. Docket Numbers: RP10-908-000. Applicants: Natural Gas Pipeline Company of America. Description: Natural Gas Pipeline Company of America LLC submits an existing negotiated rate agreement. Filed...
Forney, William; Raumann, Christian G.; Minor, T.B.; Smith, J. LaRue; Vogel, John; Vitales, Robert
2002-01-01
As part of the requirements for the Geographic Research and Applications Prospectus grants, this Open-File Report is the second of two that resulted from the first year of the project. The first Open-File Report (OFR 01-418) introduced the project, reviewed the existing body of literature, and outlined the research approach. This document will present an update of the research approach and offer some preliminary results from multiple efforts, specifically, the production of historical digital orthophoto quadrangles, the development of the land use/land cover (LULC) classification system, the development of a temporal transportation layer, the classification of anthropogenic cover types from the IKONOS imagery, a preliminary evaluation of landscape ecology metrics (quantification of spatial and temporal patterns of ecosystem structure and function with appropriate indices) and their utility in comparing two LULC systems, and a new initiative in community-based science and facilitation.
A sophisticated cad tool for the creation of complex models for electromagnetic interaction analysis
NASA Astrophysics Data System (ADS)
Dion, Marc; Kashyap, Satish; Louie, Aloisius
1991-06-01
This report describes the essential features of the MS-DOS version of DIDEC-DREO, an interactive program for creating wire grid, surface patch, and cell models of complex structures for electromagnetic interaction analysis. It uses the device-independent graphics library DIGRAF and the graphics kernel system HALO, and can be executed on systems with various graphics devices. Complicated structures can be created by direct alphanumeric keyboard entry, digitization of blueprints, conversion from existing geometric structure files, and merging of simple geometric shapes. A completed DIDEC geometric file may then be converted to the format required for input to a variety of time domain and frequency domain electromagnetic interaction codes. This report gives a detailed description of the program DIDEC-DREO, its installation, and its theoretical background. Each available interactive command is described. The associated program HEDRON which generates simple geometric shapes, and other programs that extract the current amplitude data from electromagnetic interaction code outputs, are also discussed.
pcircle - A Suite of Scalable Parallel File System Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG, FEIYI
2015-10-01
Most file system software is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of the ubiquitous MPI in cluster computing environments and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.
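The sketch below illustrates the general idea of MPI-parallel file checksumming; it uses a static round-robin partition rather than pcircle's work-stealing scheduler, assumes the mpi4py package is available, and the file list is hypothetical.

    # Simplified MPI-parallel checksumming sketch (not pcircle itself).
    import hashlib
    from mpi4py import MPI

    def checksum(path, chunk=1 << 20):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return path, h.hexdigest()

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Hypothetical file list; a real tool would obtain it from a parallel tree walk.
    all_files = ["data/file%03d.bin" % i for i in range(100)]
    my_files = all_files[rank::size]          # round-robin partition across ranks

    local = [checksum(p) for p in my_files]
    gathered = comm.gather(local, root=0)     # collect per-rank results on rank 0
    if rank == 0:
        for part in gathered:
            for path, digest in part:
                print(digest, path)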
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
... existing brick and masonry powerhouse with two new turbine generating units with a total installed capacity... existing 650-foot-long, 50-foot-wide intake canal; (6) an existing brick and masonry powerhouse with two...
Optimizing Input/Output Using Adaptive File System Policies
NASA Technical Reports Server (NTRS)
Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.
1996-01-01
Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
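A toy illustration of the classification idea, not the paper's actual framework: label a short trace of file offsets as sequential, strided, or random, then pick a caching and prefetching policy accordingly.

    # Illustrative access-pattern classifier and policy table (names invented).
    def classify(offsets):
        if len(offsets) < 3:
            return "unknown"
        gaps = [b - a for a, b in zip(offsets, offsets[1:])]
        if all(g == gaps[0] for g in gaps):
            # constant small gap looks sequential; constant large gap looks strided
            return "sequential" if 0 < gaps[0] <= 4096 else "strided"
        return "random"

    POLICY = {
        "sequential": {"prefetch": "read-ahead", "cache": "LRU"},
        "strided":    {"prefetch": "stride-directed", "cache": "LRU"},
        "random":     {"prefetch": "none", "cache": "LFU"},
        "unknown":    {"prefetch": "none", "cache": "LRU"},
    }

    trace = [0, 4096, 8192, 12288, 16384]   # hypothetical access trace
    pattern = classify(trace)
    print(pattern, POLICY[pattern])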
Apically extruded dentin debris by reciprocating single-file and multi-file rotary system.
De-Deus, Gustavo; Neves, Aline; Silva, Emmanuel João; Mendonça, Thais Accorsi; Lourenço, Caroline; Calixto, Camila; Lima, Edson Jorge Moreira
2015-03-01
This study aims to evaluate the apical extrusion of debris by two reciprocating single-file systems: WaveOne and Reciproc. A conventional multi-file rotary system was used as a reference for comparison. The hypotheses tested were that (i) the reciprocating single-file systems extrude more debris than a conventional multi-file rotary system and (ii) the reciprocating single-file systems extrude similar amounts of dentin debris. After applying strict selection criteria, 80 mesial roots of lower molars were included in the present study. The use of four different instrumentation techniques resulted in four groups (n = 20): G1 (hand-file technique), G2 (ProTaper), G3 (WaveOne), and G4 (Reciproc). The apparatus used to collect apically extruded debris was a typical double-chamber collector. Statistical analysis was performed for multiple comparisons. No significant difference was found in the amount of debris extruded between the two reciprocating systems. In contrast, the conventional multi-file rotary system group extruded significantly more debris than both reciprocating groups. The hand instrumentation group extruded significantly more debris than all other groups. The present results are favorable for both reciprocating single-file systems, inasmuch as they showed improved control of apically extruded debris. Apical extrusion of debris has been studied extensively because of its clinical relevance, particularly since it may cause flare-ups originating from the introduction of bacteria, pulpal tissue, and irrigating solutions into the periapical tissues.
DOT National Transportation Integrated Search
2001-02-01
The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...
On-Board File Management and Its Application in Flight Operations
NASA Technical Reports Server (NTRS)
Kuo, N.
1998-01-01
In this paper, we present the minimum functions required for an on-board file management system, explore file manipulation processes, and demonstrate how file transfer, together with the file management system, can be used to support flight operations and data delivery.
47 CFR 1.10006 - Is electronic filing mandatory?
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 1 2014-10-01 2014-10-01 false Is electronic filing mandatory? 1.10006 Section... Random Selection International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic... International Bureau Filing System (IBFS) form is available. Applications for which an electronic form is not...
47 CFR 1.10006 - Is electronic filing mandatory?
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Is electronic filing mandatory? 1.10006 Section... Random Selection International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic... International Bureau Filing System (IBFS) form is available. Applications for which an electronic form is not...
47 CFR 1.10006 - Is electronic filing mandatory?
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 1 2013-10-01 2013-10-01 false Is electronic filing mandatory? 1.10006 Section... Random Selection International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic... International Bureau Filing System (IBFS) form is available. Applications for which an electronic form is not...
10 CFR 2.302 - Filing of documents.
Code of Federal Regulations, 2010 CFR
2010-01-01
... this part shall be electronically transmitted through the E-Filing system, unless the Commission or... all methods of filing have been completed. (e) For filings by electronic transmission, the filer must... digital ID certificates, the NRC permits participants in the proceeding to access the E-Filing system to...
48 CFR 1404.805 - Disposal of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...
48 CFR 1404.802 - Contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...
48 CFR 1404.802 - Contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...
48 CFR 1404.802 - Contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...
48 CFR 1404.805 - Disposal of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...
48 CFR 1404.805 - Disposal of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...
48 CFR 1404.805 - Disposal of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...
48 CFR 1404.802 - Contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...
48 CFR 1404.805 - Disposal of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall be...
48 CFR 1404.802 - Contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files shall...
The Galley Parallel File System
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David
1996-01-01
Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.
An expert system shell for inferring vegetation characteristics: Prototype help system (Task 1)
NASA Technical Reports Server (NTRS)
1993-01-01
The NASA Vegetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. A prototype of the VEG subgoal HELP.SYSTEM has been completed and the Help System has been added to the VEG system. It is loaded when the user first clicks on the HELP.SYSTEM option in the Tool Box Menu. The Help System provides the user with a tool for obtaining needed information. It also provides interactive tools the scientist may use to develop new help messages and to modify existing help messages that are attached to VEG screens. The system automatically manages the system and file operations needed to preserve new or modified help messages. The Help System was tested both as a help-system development tool and as a help-system user tool.
ERIC Educational Resources Information Center
East Texas State Univ., Commerce. Occupational Curriculum Lab.
Nineteen units on filing, office machines, and general office clerical occupations are presented in this teacher's guide. The unit topics include indexing, alphabetizing, and filing (e.g., business names); labeling and positioning file folders and guides; establishing a correspondence filing system; utilizing charge-out and follow-up file systems;…
12 CFR 747.7 - Good faith certification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Good faith certification. 747.7 Section 747.7... of Practice and Procedure § 747.7 Good faith certification. (a) General requirement. Every filing or... good faith argument for the extension, modification, or reversal of existing law; and the filing or...
12 CFR 747.7 - Good faith certification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Good faith certification. 747.7 Section 747.7... of Practice and Procedure § 747.7 Good faith certification. (a) General requirement. Every filing or... good faith argument for the extension, modification, or reversal of existing law; and the filing or...
12 CFR 747.7 - Good faith certification.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Good faith certification. 747.7 Section 747.7... of Practice and Procedure § 747.7 Good faith certification. (a) General requirement. Every filing or... good faith argument for the extension, modification, or reversal of existing law; and the filing or...
12 CFR 747.7 - Good faith certification.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Good faith certification. 747.7 Section 747.7... of Practice and Procedure § 747.7 Good faith certification. (a) General requirement. Every filing or... good faith argument for the extension, modification, or reversal of existing law; and the filing or...
12 CFR 747.7 - Good faith certification.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Good faith certification. 747.7 Section 747.7... of Practice and Procedure § 747.7 Good faith certification. (a) General requirement. Every filing or... good faith argument for the extension, modification, or reversal of existing law; and the filing or...
75 FR 22540 - Review of Arbitration Awards; Miscellaneous and General Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-29
... the Authority's existing practice for calculating the date for filing timely exceptions, so that the... general rules regarding calculating filing periods; and Sec. 2429.22 to specify that the rules set forth..., investment, productivity, innovation, or on the ability of United States-based companies to compete with...
NASIS data base management system - IBM 360/370 OS MVT implementation. 6: NASIS message file
NASA Technical Reports Server (NTRS)
1973-01-01
The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.
NASIS data base management system: IBM 360 TSS implementation. Volume 6: NASIS message file
NASA Technical Reports Server (NTRS)
1973-01-01
The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.
Operating a terrestrial Internet router onboard and alongside a small satellite
NASA Astrophysics Data System (ADS)
Wood, L.; da Silva Curiel, A.; Ivancic, W.; Hodgson, D.; Shell, D.; Jackson, C.; Stewart, D.
2006-07-01
After twenty months of flying, testing and demonstrating a Cisco mobile access router, originally designed for terrestrial use, onboard the low-Earth-orbiting UK-DMC satellite as part of a larger merged ground/space IP-based internetwork, we use our experience to examine the benefits and drawbacks of integration and standards reuse for small satellite missions. Benefits include ease of operation and the ability to leverage existing systems and infrastructure designed for general use with a large set of latent capabilities to draw on when needed, as well as the familiarity that comes from reuse of existing, known, and well-understood security and operational models. Drawbacks include cases where integration work was needed to bridge the gaps in assumptions between different systems, and where performance considerations outweighed the benefits of reuse of pre-existing file transfer protocols. We find similarities with the terrestrial IP networks whose technologies have been taken to small satellites—and also some significant differences between the two in operational models and assumptions that must be borne in mind.
A Filing System for Medical Literature
Cumming, Millie
1988-01-01
The author reviews the types of systems available for personal literature files and makes specific recommendations for filing systems for family physicians. A personal filing system can be an integral part of family practice, and need not require time out of proportion to the worth of the system. Because it is a personal system, different types will suit different users; some systems, however, are more reliable than others for use in family practice. (Can Fam Physician 1988; 34:425-433.) PMID:21253062
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Mahmood, S.; Ngwira, C.; Balch, C.; Lordan, R.; Fugate, D.; Jacobs, W.; Honkonen, I.
2015-01-01
A NASA Goddard Space Flight Center Heliophysics Science Division-led team that includes NOAA Space Weather Prediction Center, the Catholic University of America, Electric Power Research Institute (EPRI), and Electric Research and Management, Inc., recently partnered with the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) to better understand the impact of Geomagnetically Induced Currents (GIC) on the electric power industry. This effort builds on a previous NASA-sponsored Applied Sciences Program for predicting GIC, known as Solar Shield. The focus of the new DHS S&T funded effort is to revise and extend the existing Solar Shield system to enhance its forecasting capability and provide tailored, timely, actionable information for electric utility decision makers. To enhance the forecasting capabilities of the new Solar Shield, a key undertaking is to extend the prediction system coverage across the Contiguous United States (CONUS), as the previous version was only applicable to high latitudes. The team also leverages the latest enhancements in space weather modeling capacity residing at the Community Coordinated Modeling Center to increase the Technological Readiness Level, or Applications Readiness Level, of the system (http://www.nasa.gov/sites/default/files/files/ExpandedARLDefinitions4813.pdf).
An overview of the National Space Science data Center Standard Information Retrieval System (SIRS)
NASA Technical Reports Server (NTRS)
Shapiro, A.; Blecher, S.; Verson, E. E.; King, M. L. (Editor)
1974-01-01
A general overview is given of the National Space Science Data Center (NSSDC) Standard Information Retrieval System. A description is given, in general terms, of the information system that contains the data files and of the software system that processes and manipulates the files maintained at the Data Center. Emphasis is placed on providing users with an overview of the capabilities and uses of the NSSDC Standard Information Retrieval System (SIRS). Examples given are taken from the files at the Data Center. Detailed information about NSSDC data files is documented in a set of File Users Guides, with one user's guide prepared for each file processed by SIRS. Detailed information about SIRS is presented in the SIRS Users Guide.
29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...
29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...
29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...
29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...
29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4), (d)(1...
75 FR 61741 - Combined Notice of Filings No. 1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-06
... Refund Report filings: Docket Numbers: RP10-1305-000. Applicants: Venice Gathering System, L.L.C. Description: Venice Gathering System, L.L.C. submits tariff filing per 154.203: NAESB 1.9 Compliance Filing to...
Giuliani, Valentina; Cocchetti, Roberto; Pagavino, Gabriella
2008-11-01
The aim of this study was to evaluate the efficacy of the ProTaper Universal System rotary retreatment system, of ProFile 0.06, and of hand instruments (K-file) in the removal of root filling materials. Forty-two extracted single-rooted anterior teeth were selected. The root canals were enlarged with nickel-titanium (NiTi) rotary files, filled with gutta-percha and sealer, and randomly divided into 3 experimental groups. The filling materials were removed with solvent in conjunction with one of the following devices and techniques: the ProTaper Universal System for retreatment, ProFile 0.06, and hand instruments (K-file). The roots were longitudinally sectioned, and the root surface was photographed. The images were captured in JPEG format; the areas of the remaining filling materials and the time required for removing the gutta-percha and sealer were analyzed by using the nonparametric one-way Kruskal-Wallis test and the Tukey-Kramer test, respectively. The group that showed the best results for removing filling materials was the ProTaper Universal System for retreatment files, whereas the ProFile rotary instruments yielded better root canal cleanliness than the hand instruments, even though there was no statistically significant difference. The ProTaper Universal System for retreatment and the ProFile rotary instruments worked significantly faster than the K-file. The ProTaper Universal System for retreatment files left cleaner root canal walls than the K-file hand instruments and the ProFile rotary instruments, although none of the devices used guaranteed complete removal of the filling materials. The rotary NiTi systems proved to be faster than hand instruments in removing root filling materials.
Improving File System Performance by Striping
NASA Technical Reports Server (NTRS)
Lam, Terance L.; Kutler, Paul (Technical Monitor)
1998-01-01
This document discusses the performance and advantages of striped file systems on the SGI AD workstations. Performance of several striped file system configurations are compared and guidelines for optimal striping are recommended.
The Iranian National Geodata Revision Strategy and Realization Based on Geodatabase
NASA Astrophysics Data System (ADS)
Haeri, M.; Fasihi, A.; Ayazi, S. M.
2012-07-01
In recent years, the use of spatial databases for storing and managing spatial data has become a hot topic in the field of GIS. Accordingly, the National Cartographic Center of Iran (NCC) periodically produces spatial data that is usually held in databases. One of NCC's major projects was the design of the National Topographic Database (NTDB). NCC decided to create the National Topographic Database of the entire country based on 1:25,000 coverage maps. The NTDB standard was published in 1994 and its database was created at the same time. In NTDB, geometric data was stored in MicroStation design format (DGN), in which each feature has a link to its attribute data (stored in a Microsoft Access file). NTDB files were also produced in a sheet-wise mode and stored in a file-based style. Besides map compilation, revision of existing maps has already begun. The key problems for NCC are the revision strategy, the file-based storage style of NTDB, and operator challenges (NCC operators mostly prefer to edit and revise geometric data in CAD environments). A GeoDatabase solution for national geodata, based on NTDB map files and operators' revision preferences, is introduced and released herein. The proposed solution extends the traditional methods to provide a seamless spatial database that can be revised in CAD and GIS environments simultaneously. The proposed system is a common data framework for creating a central repository for spatial data storage and management.
Adding Data Management Services to Parallel File Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Scott
2015-03-04
The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file-based ecosystem; (3) common optimizations, e.g., indexing and caching, are readily supported across several file formats, avoiding effort duplication; and (4) performance improves significantly, as data processing is integrated more tightly with data storage. Our key contributions are: SciHadoop, which explores changes to MapReduce assumptions by taking advantage of the semantics of structured data while preserving MapReduce's failure and resource management; DataMods, which extends common abstractions of parallel file systems so they become programmable, such that they can be extended to natively support a variety of data models and can be hooked into emerging distributed runtimes such as Stanford's Legion; and Miso, which combines Hadoop and relational data warehousing to minimize time to insight, taking into account the overhead of ingesting data into the data warehouse.
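To give a flavor of the declarative-view idea, the following toy Python sketch evaluates a query directly over a file's native byte stream; the record format and field meanings are invented for illustration and are not part of Damasc.

    # Toy "query over a file as a view" sketch: no conversion or copy of the file.
    import struct

    RECORD = struct.Struct("<qdd")   # hypothetical record: (timestamp, temperature, pressure)

    def scan(path):
        with open(path, "rb") as f:
            while chunk := f.read(RECORD.size):
                if len(chunk) == RECORD.size:
                    yield RECORD.unpack(chunk)

    def view(path, predicate):
        """Lazily expose the subset of records matching predicate,
        leaving the underlying byte-stream file untouched."""
        return (rec for rec in scan(path) if predicate(rec))

    # Usage (hypothetical file name):
    # hot = list(view("sim_output.bin", lambda r: r[1] > 300.0))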
Representation of thermal infrared imaging data in the DICOM using XML configuration files.
Ruminski, Jacek
2007-01-01
The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported; however, there is no dedicated solution for thermal infrared imaging in medicine. In this article we propose new ideas and improvements to the final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented, and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works fast and requires a minimal number of user interactions. The XML configuration file makes it possible to compose a set of attributes for any source file format of a thermal imaging camera.
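A hedged sketch of the conversion idea follows: an XML mapping file tells a converter which thermal-camera header fields become which DICOM attributes. It assumes the pydicom package is available; the XML schema and the header dictionary are invented for illustration and are not the paper's actual configuration format.

    # Illustrative XML-driven mapping of thermal header fields to DICOM attributes.
    import xml.etree.ElementTree as ET
    from pydicom.dataset import Dataset

    MAPPING_XML = """
    <mapping modality="TG">
      <attr dicom="PatientName" source="patient"/>
      <attr dicom="StudyDate"   source="acq_date"/>
      <attr dicom="Rows"        source="height"/>
      <attr dicom="Columns"     source="width"/>
    </mapping>
    """

    thermal_header = {"patient": "Doe^John", "acq_date": "20070115",
                      "height": 240, "width": 320}   # hypothetical camera header

    def convert(header, mapping_xml):
        root = ET.fromstring(mapping_xml)
        ds = Dataset()
        ds.Modality = root.get("modality")            # "TG" = thermography
        for attr in root.iter("attr"):
            setattr(ds, attr.get("dicom"), header[attr.get("source")])
        return ds   # writing a full DICOM Part 10 file would also require file meta info

    print(convert(thermal_header, MAPPING_XML))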
Flexibility and Performance of Parallel File Systems
NASA Technical Reports Server (NTRS)
Kotz, David; Nieuwejaar, Nils
1996-01-01
As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions make applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (API's). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.
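The strided interfaces mentioned above can be sketched, in simplified form, as a small library routine layered over an ordinary byte-stream file; the function name and signature below are illustrative only, not the systems' actual APIs.

    # Illustrative strided read: `count` records of `record_size` bytes,
    # starting at `offset`, with `stride` bytes between record starts.
    def strided_read(f, offset, record_size, stride, count):
        out = []
        for i in range(count):
            f.seek(offset + i * stride)
            out.append(f.read(record_size))
        return b"".join(out)

    # Usage (hypothetical file): read one column of a row-major matrix of doubles.
    # with open("matrix.bin", "rb") as f:
    #     column = strided_read(f, offset=0, record_size=8, stride=8 * 1024, count=1024)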
Musical examination to bridge audio data and sheet music
NASA Astrophysics Data System (ADS)
Pan, Xunyu; Cross, Timothy J.; Xiao, Liangliang; Hei, Xiali
2015-03-01
The digitalization of audio is commonly implemented for the purpose of convenient storage and transmission of music and songs in today's digital age. Analyzing digital audio for an insightful look at a specific musical characteristic, however, can be quite challenging for various types of applications. Many existing musical analysis techniques can examine a particular piece of audio data. For example, the frequency of digital sound can be easily read and identified at a specific section in an audio file. Based on this information, we could determine the musical note being played at that instant, but what if you want to see a list of all the notes played in a song? While most existing methods help to provide information about a single piece of the audio data at a time, few of them can analyze the available audio file on a larger scale. The research conducted in this work considers how to further utilize the examination of audio data by storing more information from the original audio file. In practice, we develop a novel musical analysis system Musicians Aid to process musical representation and examination of audio data. Musicians Aid solves the previous problem by storing and analyzing the audio information as it reads it rather than tossing it aside. The system can provide professional musicians with an insightful look at the music they created and advance their understanding of their work. Amateur musicians could also benefit from using it solely for the purpose of obtaining feedback about a song they were attempting to play. By comparing our system's interpretation of traditional sheet music with their own playing, a musician could ensure what they played was correct. More specifically, the system could show them exactly where they went wrong and how to adjust their mistakes. In addition, the application could be extended over the Internet to allow users to play music with one another and then review the audio data they produced. This would be particularly useful for teaching music lessons on the web. The developed system is evaluated with songs played with guitar, keyboard, violin, and other popular musical instruments (primarily electronic or stringed instruments). The Musicians Aid system is successful at both representing and analyzing audio data and it is also powerful in assisting individuals interested in learning and understanding music.
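As a minimal sketch of the note-identification step described above (and not the Musicians Aid implementation), one can find the dominant frequency of a short audio frame with an FFT and map it to the nearest equal-tempered note; a real system would also need onset detection, polyphony handling, and note durations.

    # FFT-based dominant-note detection for a single mono audio frame.
    import numpy as np

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def dominant_note(frame, sample_rate):
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
        f0 = freqs[np.argmax(spectrum[1:]) + 1]            # skip the DC bin
        midi = int(round(69 + 12 * np.log2(f0 / 440.0)))   # nearest MIDI note number
        return NOTE_NAMES[midi % 12] + str(midi // 12 - 1), f0

    sr = 44100
    t = np.arange(sr // 10) / sr
    print(dominant_note(np.sin(2 * np.pi * 440.0 * t), sr))   # expected ('A4', ~440 Hz)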
The application of remote sensing techniques to inter and intra urban analysis
NASA Technical Reports Server (NTRS)
Horton, F. E.
1972-01-01
This is an effort to assess the applicability of air and spaceborne photography toward providing data inputs to urban and regional planning, management, and research. Through evaluation of remote sensing inputs to urban change detection systems, analyzing an effort to replicate an existing urban land use data file using remotely sensed data, estimating population and dwelling units from imagery, and by identifying and evaluating a system of urban places utilizing space photography, it was determined that remote sensing can provide data concerning land use, changes in commercial structure, data for transportation planning, housing quality, residential dynamics, and population density.
Reliable file sharing in distributed operating system using web RTC
NASA Astrophysics Data System (ADS)
Dukiya, Rajesh
2017-12-01
Since the evolution of the distributed operating system, the distributed file system has become an important part of the operating system. P2P is a reliable approach to file sharing in a distributed operating system. Introduced in 1999, it later became a topic of high research interest. A peer-to-peer network is a type of network in which peers share the network workload and other load-related tasks. A P2P network can be a temporary connection, in which a group of computers are connected, for example through a USB (Universal Serial Bus) port, to transfer data or enable disk sharing, i.e. file sharing. Currently, P2P requires a special network designed in a P2P way. Nowadays, browsers have a large influence on our lives. In this project we study the file-sharing mechanism of distributed operating systems in web browsers, where we try to find performance bottlenecks; our research aims to improve the performance and scalability of file sharing in distributed file systems. Additionally, we discuss the scope of Web Torrent file sharing and free-riding in peer-to-peer networks.
NAFFS: network attached flash file system for cloud storage on portable consumer electronics
NASA Astrophysics Data System (ADS)
Han, Lin; Huang, Hao; Xie, Changsheng
Cloud storage technology has become a research hotspot in recent years, but existing cloud storage services are mainly designed for data storage needs with stable, high-speed Internet connections. Mobile Internet connections are often unstable and their speed is relatively low. These native features of the mobile Internet limit the use of cloud storage in portable consumer electronics. The Network Attached Flash File System (NAFFS) presents the idea of using the portable device's built-in NAND flash memory as the front-end cache of a virtualized cloud storage device. Modern portable devices with Internet connections have more than 1 GB of built-in NAND flash, which is quite enough for daily data storage. The data transfer rate of a NAND flash device is much higher than that of mobile Internet connections[1], and its non-volatile nature makes it very suitable as the cache device for Internet cloud storage on portable devices, which often have an unstable power supply and intermittent Internet connection. In the present work, NAFFS is evaluated with several benchmarks, and its performance is compared with traditional network attached file systems, such as NFS. Our evaluation results indicate that NAFFS achieves an average access speed of 3.38 MB/s, which is about 3 times faster than directly accessing cloud storage over a mobile Internet connection, and offers a more stable interface than directly using a cloud storage API. Unstable Internet connections and sudden power-off conditions are tolerated, and no cached data is lost in such situations.
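The caching idea can be sketched conceptually as follows; this is not the NAFFS implementation, and the class and method names are invented. Reads are served from a local flash directory when possible; writes land locally first and are queued for upload when the connection allows.

    # Conceptual write-back flash cache in front of a cloud store.
    import os, queue

    class FlashCachedCloud:
        def __init__(self, cache_dir, cloud):       # `cloud` is any object with get/put
            self.cache_dir, self.cloud = cache_dir, cloud
            self.upload_queue = queue.Queue()
            os.makedirs(cache_dir, exist_ok=True)

        def _local(self, name):
            return os.path.join(self.cache_dir, name)

        def read(self, name):
            path = self._local(name)
            if not os.path.exists(path):             # cache miss: fetch from the cloud
                with open(path, "wb") as f:
                    f.write(self.cloud.get(name))
            with open(path, "rb") as f:
                return f.read()

        def write(self, name, data):
            with open(self._local(name), "wb") as f: # persist to flash immediately
                f.write(data)
            self.upload_queue.put(name)              # defer upload to a later sync pass

        def sync(self):
            while not self.upload_queue.empty():
                name = self.upload_queue.get()
                with open(self._local(name), "rb") as f:
                    self.cloud.put(name, f.read())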
Do, Bao H; Wu, Andrew; Biswal, Sandip; Kamaya, Aya; Rubin, Daniel L
2010-11-01
Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex(®)-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. ©RSNA, 2010
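As a hedged illustration of one NLP step mentioned above, de-identification, the following sketch replaces a few identifier patterns in report text; RADTF's actual pipeline is more sophisticated, and these patterns are examples only.

    # Minimal regex-based de-identification of radiology report text.
    import re

    PHI_PATTERNS = [
        (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),     # dates
        (re.compile(r"\bMRN[:\s]*\d+\b", re.I), "[MRN]"),           # record numbers
        (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[PHYSICIAN]"),     # physician names
    ]

    def deidentify(report_text):
        for pattern, token in PHI_PATTERNS:
            report_text = pattern.sub(token, report_text)
        return report_text

    sample = "Study of 03/14/2009, MRN: 123456, referred by Dr. Smith. Osteoid osteoma noted."
    print(deidentify(sample))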
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beauchamp, R.O. Jr.
A preliminary examination of chemical-substructure analysis (CSA) demonstrates the effective use of the Chemical Abstracts compound connectivity file in conjunction with the bibliographic file for relating chemical structures to biological activity. The importance of considering the role of metabolic intermediates under a variety of conditions is illustrated, suggesting structures that should be examined that may exhibit potential activity. This CSA technique, which utilizes existing large files accessible with online personal computers, is recommended for use as another tool in examining chemicals in drugs. 2 refs., 4 figs.
Continuation of research into language concepts for the mission support environment: Source code
NASA Technical Reports Server (NTRS)
Barton, Timothy J.; Ratner, Jeremiah M.
1991-01-01
Research into language concepts for the Mission Control Center is presented, together with the associated source code. The file contains the routines that allow source code files to be created and compiled. The build process assumes that all elements and the COMP exist in the current directory, and it places as much code generation as possible on the preprocessor. A summary is given of the source files as used and/or manipulated by the build routine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orrell, S.; Ralstin, S.
1992-04-01
Many computer security plans specify that only a small percentage of the data processed will be classified. Thus, the bulk of the data on secure systems must be unclassified. Secure limited access sites operating approved classified computing systems sometimes also have a system ostensibly containing only unclassified files but operating within the secure environment. That system could be networked or otherwise connected to a classified system(s) in order that both be able to use common resources for file storage or computing power. Such a system must operate under the same rules as the secure classified systems. It is in the nature of unclassified files that they either came from, or will eventually migrate to, a non-secure system. Today, unclassified files are exported from systems within the secure environment typically by loading transport media and carrying them to an open system. Import of unclassified files is handled similarly. This media transport process, sometimes referred to as sneaker net, often is manually logged and controlled only by administrative procedures. A comprehensive system for secure bi-directional transfer of unclassified files between secure and open environments has yet to be developed. Any such secure file transport system should be required to meet several stringent criteria. It is the purpose of this document to begin a definition of these criteria.
29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.
Code of Federal Regulations, 2013 CFR
2013-07-01
...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2013-07-01 2013-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...
29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.
Code of Federal Regulations, 2011 CFR
2011-07-01
...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2011-07-01 2011-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-22
... System of Records; EPA Parking Control Office File (EPA-10) and EPA Transit and Guaranteed Ride Home Program Files (EPA-35) AGENCY: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: The Environmental Protection Agency (EPA) is deleting the systems of records for EPA Parking Control Office File...
29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.
Code of Federal Regulations, 2012 CFR
2012-07-01
...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2012-07-01 2012-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...
29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.
Code of Federal Regulations, 2014 CFR
2014-07-01
...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2014-07-01 2014-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...
29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.
Code of Federal Regulations, 2010 CFR
2010-07-01
...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2010-07-01 2010-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued) EQUAL...
77 FR 14507 - Privacy Act of 1974 System of Records Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-12
... Complaint and Reasonable Accommodation Files, from its inventory of record systems because the relevant... Employment Discrimination Complaint and Reasonable Accommodation Files. The Commission is retiring the system... government-wide system of records notice entitled OPM/GOV-10, Employee Medical File System Records (71 FR...
48 CFR 4.804 - Closeout of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Closeout of contract files. 4.804 Section 4.804 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Government Contract Files 4.804 Closeout of contract files. ...
48 CFR 1404.804 - Closeout of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Closeout of contract files. 1404.804 Section 1404.804 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.804 Closeout of contract files. ...
48 CFR 904.804 - Closeout of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Closeout of contract files. 904.804 Section 904.804 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS Government Contract Files 904.804 Closeout of contract files. ...
48 CFR 904.804 - Closeout of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Closeout of contract files. 904.804 Section 904.804 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS Government Contract Files 904.804 Closeout of contract files. ...
48 CFR 904.804 - Closeout of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Closeout of contract files. 904.804 Section 904.804 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS Government Contract Files 904.804 Closeout of contract files. ...
48 CFR 1504.804 - Closeout of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Closeout of contract files. 1504.804 Section 1504.804 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL ADMINISTRATIVE MATTERS Contract Files 1504.804 Closeout of contract files. ...
48 CFR 1304.804 - Closeout of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Closeout of contract files. 1304.804 Section 1304.804 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE GENERAL ADMINISTRATIVE MATTERS Government Contract Files 1304.804 Closeout of contract files. ...
48 CFR 1504.804 - Closeout of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Closeout of contract files. 1504.804 Section 1504.804 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL ADMINISTRATIVE MATTERS Contract Files 1504.804 Closeout of contract files. ...
48 CFR 4.804 - Closeout of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Closeout of contract files. 4.804 Section 4.804 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Government Contract Files 4.804 Closeout of contract files. ...
48 CFR 4.804 - Closeout of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Closeout of contract files. 4.804 Section 4.804 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Government Contract Files 4.804 Closeout of contract files. ...
48 CFR 4.804 - Closeout of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Closeout of contract files. 4.804 Section 4.804 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Government Contract Files 4.804 Closeout of contract files. ...
48 CFR 1304.804 - Closeout of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Closeout of contract files. 1304.804 Section 1304.804 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE GENERAL ADMINISTRATIVE MATTERS Government Contract Files 1304.804 Closeout of contract files. ...
48 CFR 1404.804 - Closeout of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Closeout of contract files. 1404.804 Section 1404.804 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.804 Closeout of contract files. ...
48 CFR 1304.804 - Closeout of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Closeout of contract files. 1304.804 Section 1304.804 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE GENERAL ADMINISTRATIVE MATTERS Government Contract Files 1304.804 Closeout of contract files. ...
48 CFR 1404.804 - Closeout of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Closeout of contract files. 1404.804 Section 1404.804 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.804 Closeout of contract files. ...
48 CFR 1504.804 - Closeout of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Closeout of contract files. 1504.804 Section 1504.804 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL ADMINISTRATIVE MATTERS Contract Files 1504.804 Closeout of contract files. ...
48 CFR 1304.804 - Closeout of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Closeout of contract files. 1304.804 Section 1304.804 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE GENERAL ADMINISTRATIVE MATTERS Government Contract Files 1304.804 Closeout of contract files. ...
48 CFR 1504.804 - Closeout of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Closeout of contract files. 1504.804 Section 1504.804 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL ADMINISTRATIVE MATTERS Contract Files 1504.804 Closeout of contract files. ...
48 CFR 4.804 - Closeout of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Closeout of contract files. 4.804 Section 4.804 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Government Contract Files 4.804 Closeout of contract files. ...
48 CFR 1504.804 - Closeout of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Closeout of contract files. 1504.804 Section 1504.804 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL ADMINISTRATIVE MATTERS Contract Files 1504.804 Closeout of contract files. ...
48 CFR 1304.804 - Closeout of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Closeout of contract files. 1304.804 Section 1304.804 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE GENERAL ADMINISTRATIVE MATTERS Government Contract Files 1304.804 Closeout of contract files. ...
48 CFR 904.804 - Closeout of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Closeout of contract files. 904.804 Section 904.804 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS Government Contract Files 904.804 Closeout of contract files. ...
48 CFR 1404.804 - Closeout of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Closeout of contract files. 1404.804 Section 1404.804 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.804 Closeout of contract files. ...
48 CFR 1404.804 - Closeout of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Closeout of contract files. 1404.804 Section 1404.804 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.804 Closeout of contract files. ...
48 CFR 904.804 - Closeout of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Closeout of contract files. 904.804 Section 904.804 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS Government Contract Files 904.804 Closeout of contract files. ...
File concepts for parallel I/O
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1989-01-01
The subject of input/output (I/O) has often been neglected in the design of parallel computer systems, although for many problems I/O rates will limit the attainable speedup. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes and exploit parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.
Construction of the radiation oncology teaching files system for charged particle radiotherapy.
Masami, Mukai; Yutaka, Ando; Yasuo, Okuda; Naoto, Takahashi; Yoshihisa, Yoda; Hiroshi, Tsuji; Tadashi, Kamada
2013-01-01
Our hospital has provided charged particle therapy since 1996. New institutions for charged particle therapy are being planned around the world. Our hospital accepts many visitors from these newly planned medical institutions and has many opportunities to provide them with training. Based on our experience, we have developed a radiation oncology teaching file system for charged particle therapy. We adopted Microsoft PowerPoint as the basic framework of our teaching file system. By using the export function of the viewer, any physician can create teaching files easily and effectively. Our teaching file system currently has 33 cases covering clinical and physics content. We expect that the teaching file system will substantially improve the safety and accuracy of charged particle therapy.
Characterizing parallel file-access patterns on a large-scale multiprocessor
NASA Technical Reports Server (NTRS)
Purakayastha, A.; Ellis, Carla; Kotz, David; Nieuwejaar, Nils; Best, Michael L.
1995-01-01
High-performance parallel file systems are needed to satisfy tremendous I/O requirements of parallel scientific applications. The design of such high-performance parallel file systems depends on a comprehensive understanding of the expected workload, but so far there have been very few usage studies of multiprocessor file systems. This paper is part of the CHARISMA project, which intends to fill this void by measuring real file-system workloads on various production parallel machines. In particular, we present results from the CM-5 at the National Center for Supercomputing Applications. Our results are unique because we collect information about nearly every individual I/O request from the mix of jobs running on the machine. Analysis of the traces leads to various recommendations for parallel file-system design.
12 CFR 390.36 - Good faith certification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Good faith certification. 390.36 Section 390.36... Proceedings § 390.36 Good faith certification. (a) General requirement. Every filing or submission of record... filing or submission of record is well-grounded in fact and is warranted by existing law or a good faith...
12 CFR 390.36 - Good faith certification.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Good faith certification. 390.36 Section 390.36... Proceedings § 390.36 Good faith certification. (a) General requirement. Every filing or submission of record... filing or submission of record is well-grounded in fact and is warranted by existing law or a good faith...
12 CFR 390.36 - Good faith certification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Good faith certification. 390.36 Section 390.36... Proceedings § 390.36 Good faith certification. (a) General requirement. Every filing or submission of record... filing or submission of record is well-grounded in fact and is warranted by existing law or a good faith...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-24
.... Name of Project: Rollins Transmission Line Project. f. Location: The Rollins Transmission Line Project...: Mary Greene, (202) 502-8865 or [email protected] . j. Deadline for filing motions to intervene and..., three-phase, 60-kilovolt (kV), wood-pole transmission line extending from the existing Rollins...
76 FR 17991 - Proposed Collection; Comment Request for Regulation Project
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-31
... consideration. ADDRESSES: Direct all written comments to Yvette B. Lawrence, Internal Revenue Service, room 6129.... Yvette B. Lawrence, IRS Reports Clearance Officer. [FR Doc. 2011-7524 Filed 3-30-11; 8:45 am] BILLING... comments concerning an existing final regulation, T.D. 8706, Electronic Filing of Form W-4 (Sec. 31.3402(f...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-16
... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-70366; File No. SR-OCC-2013-805] Self... Existing Interpretation and Policy To Give OCC Discretion Not To Grant a Particular Clearing Member Margin... Payment, Clearing, and Settlement Supervision Act of 2010 (``Clearing Supervision Act'') \\1\\ and Rule 19b...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-27
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No.: 12478-003] Gibson Dam... Commission and is available for public inspection. a. Type of Application: Major Project--Existing Dam. b. Project No.: P-12478-003. c. Date filed: August 28, 2009. d. Applicant: Gibson Dam Hydroelectric Company...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-29
... financial incentives, the Exchange is rewarding aggressive liquidity providers in the market. The Exchange... intends to file a proposal to adopt the financial incentives for the Competitive Liquidity Provider... seeking to provide incentives for quoting and to add competition to the existing group of liquidity...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
... Organization's Statement of the Terms of Substance of the Proposed Rule Change The Exchange proposes to renew... Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and Immediate Effectiveness of a Proposed Rule Change To Renew an Existing Pilot Program for an Additional Fourteen Months February 14, 2013...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-05
... feasibility of the George W. Andrews Hydroelectric Project located at the existing George W. Andrews Lock and.... Andrews Lock and Dam Hydroelectric Project by Brookfield Power (Project No. 13077-000, filed on November... have an average annual generation of 89 gigawatt-hours. The proposed George W. Andrews Hydroelectric...
Galileo SSI/Ida Radiometrically Calibrated Images V1.0
NASA Astrophysics Data System (ADS)
Domingue, D. L.
2016-05-01
This data set includes Galileo Orbiter SSI radiometrically calibrated images of the asteroid 243 Ida, created using ISIS software and assuming nadir pointing. This is an original delivery of radiometrically calibrated files, not an update to existing files. All images archived include the asteroid within the image frame. Calibration was performed in 2013-2014.
16. Photocopy of drawing # F1103 in files of Utilities ...
16. Photocopy of drawing # F-1103 in files of Utilities Engineering Department in Cleveland showing water flow diagram in the Division Avenue Plant. Drawing dated March 11, 1921. Flow is still in existence. - Division Avenue Pumping Station & Filtration Plant, West 45th Street and Division Avenue, Cleveland, Cuyahoga County, OH
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-09-04
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-06-01
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
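As a rough illustration of the document structure described above, the following Python sketch uses only the standard library to assemble a minimal SED-ML-like XML file. The element names (listOfModels, listOfSimulations, listOfTasks), the namespace URI, and the attribute set are assumptions for illustration; the normative schema is defined by the SED-ML Level 1 Version 2 specification.

```python
# Minimal sketch of building a SED-ML-like document with the Python standard
# library. Element names and the namespace URI are assumptions for illustration;
# consult the SED-ML Level 1 Version 2 specification for the normative schema.
import xml.etree.ElementTree as ET

SEDML_NS = "http://sed-ml.org/sed-ml/level1/version2"  # assumed namespace URI

def build_minimal_sedml(model_source: str, sim_id: str, model_id: str) -> bytes:
    root = ET.Element("sedML", {"xmlns": SEDML_NS, "level": "1", "version": "2"})

    models = ET.SubElement(root, "listOfModels")
    ET.SubElement(models, "model", {"id": model_id,
                                    "language": "urn:sedml:language:sbml",
                                    "source": model_source})

    sims = ET.SubElement(root, "listOfSimulations")
    ET.SubElement(sims, "uniformTimeCourse", {"id": sim_id,
                                              "initialTime": "0",
                                              "outputStartTime": "0",
                                              "outputEndTime": "100",
                                              "numberOfPoints": "1000"})

    tasks = ET.SubElement(root, "listOfTasks")
    ET.SubElement(tasks, "task", {"id": "task1",
                                  "modelReference": model_id,
                                  "simulationReference": sim_id})

    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    print(build_minimal_sedml("model.xml", "sim1", "model1").decode())
```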
Reddy, Krishna R; Xie, Tao; Dastgheibi, Sara
2014-01-01
In recent years, several best management practices have been developed for the removal of different types of pollutants from stormwater runoff, leading to more effective stormwater management. Filter materials that remove a wide range of contaminants have great potential for extensive use in filtration systems. In this study, four filter materials (calcite, zeolite, sand, and iron filings) were investigated for their adsorption capacity and removal efficiency for nutrients and heavy metals, both when these contaminants exist individually and when they co-exist. Laboratory batch experiments were conducted separately under individual and mixed contaminant conditions at different initial concentrations. Adsorption capacities varied under the individual and mixed contaminant conditions due to different removal mechanisms. Most filter materials showed lower removal efficiency under mixed contaminant conditions. In general, iron filings were found to be effective in removing nutrients and heavy metals simultaneously and to the greatest extent. Freundlich and Langmuir isotherms were used to model the batch adsorption results, and the former fitted the experimental results better. Overall, the results indicate that the filter materials used in this study have the potential to be effective media for the treatment of nutrients and heavy metals commonly found in urban stormwater runoff.
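For reference, the two isotherm models named above take their standard forms, where q_e is the amount adsorbed at equilibrium, C_e is the equilibrium solution concentration, and K_F, n, K_L, and q_max are fitted parameters:

```latex
% Freundlich isotherm (empirical power-law fit)
q_e = K_F \, C_e^{1/n}

% Langmuir isotherm (monolayer adsorption with maximum capacity q_{max})
q_e = \frac{q_{max} \, K_L \, C_e}{1 + K_L \, C_e}
```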
NASA Technical Reports Server (NTRS)
Berg, R. F.; Holcomb, J. E.; Kelroy, E. A.; Levine, D. A.; Mee, C., III
1970-01-01
A generalized information storage and retrieval system capable of generating and maintaining a file, gathering statistics, sorting output, and generating final reports is reviewed. The file generation and file maintenance programs written for the system are general-purpose routines.
Parallel file system with metadata distributed across partitioned key-value store c
Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron
2017-09-19
Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
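The following Python sketch illustrates the general idea of partitioning per-sub-file metadata across compute nodes by hashing keys, in the spirit of the MDHIM-style partitioned key-value store described above. The class and field names are invented for illustration and do not reflect the actual PLFS or MDHIM APIs.

```python
# Illustrative sketch: hash-partitioned metadata store for sub-file records.
# Names (MetadataPartition, put/get, record fields) are hypothetical and do not
# correspond to the real PLFS or MDHIM interfaces.
from dataclasses import dataclass
import hashlib

@dataclass
class SubfileRecord:
    logical_offset: int   # offset within the single shared logical file
    length: int           # number of bytes in this sub-file extent
    subfile_path: str     # where the extent physically lives
    physical_offset: int  # offset within that sub-file

class MetadataPartition:
    """One partition of the metadata key-value store, owned by one compute node."""
    def __init__(self):
        self.store = {}

    def put(self, key: str, record: SubfileRecord):
        self.store[key] = record

    def get(self, key: str):
        return self.store.get(key)

def owner_partition(key: str, num_partitions: int) -> int:
    """Map a metadata key to a partition with a stable hash."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Example: three compute nodes each own one metadata partition.
partitions = [MetadataPartition() for _ in range(3)]
rec = SubfileRecord(logical_offset=4096, length=1024,
                    subfile_path="subfile.7", physical_offset=0)
key = "sharedfile.dat:4096"
partitions[owner_partition(key, len(partitions))].put(key, rec)
print(partitions[owner_partition(key, len(partitions))].get(key))
```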
Final Report for File System Support for Burst Buffers on HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, W.; Mohror, K.
Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype, and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV: a Key-Value Store for Metadata Management of Distributed Burst Buffers, a user-level file system with multiple backends, and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-28
... existing paragraph (b)(4) of the Rule, entitled ``Numerical Guidelines Applicable to Volatile Market Opens... existing paragraph (b)(2), which provides flexibility to FINRA to use different Numerical Guidelines or... of paragraph (b)(4) (``Numerical Guidelines Applicable to Volatile Market Opens'') of the existing...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, G
2014-06-01
Purpose: In order to receive DICOM files from a treatment planning system and automatically generate a patient isocenter positioning parameter file for a CT laser system, this paper presents a method for communicating with the treatment planning system and calculating the isocenter parameters for each radiation field. Methods: The coordinate transformation and laser positioning file formats were analyzed, and the isocenter parameters were calculated from the DICOM CT data and the DICOM RTPLAN file. In-house software, DicomGenie, was developed on the object-oriented Qt platform with the DCMTK SDK (a DICOM toolkit from OFFIS, Germany). DicomGenie was tested for accuracy using a Philips CT simulation planning system (Tumor LOC, Philips) and an A2J CT positioning laser system (Thorigny Sur Marne, France). Results: DicomGenie successfully established DICOM communication with the treatment planning system; DICOM files were received by DicomGenie and patient laser isocenter information was generated accurately. The patient laser parameter data files can be used directly by the CT laser system. Conclusion: The in-house software DicomGenie received and extracted DICOM data; isocenter laser positioning data files created by DicomGenie can be used with the A2J laser positioning system.
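A minimal sketch of the kind of extraction described above, assuming the pydicom package and a conventionally structured RTPLAN file; the output layout is invented here as a stand-in, since the actual laser-system file format is not given in the abstract.

```python
# Sketch: read an RTPLAN DICOM file and write a simple text file of per-beam
# isocenter coordinates. Assumes pydicom is installed; the output layout is a
# hypothetical stand-in for the actual laser-system format.
import pydicom

def export_isocenters(rtplan_path: str, out_path: str) -> None:
    plan = pydicom.dcmread(rtplan_path)
    lines = []
    for beam in plan.BeamSequence:
        # The isocenter is carried on the first control point of each beam,
        # in the DICOM patient coordinate system (millimeters).
        iso = beam.ControlPointSequence[0].IsocenterPosition
        lines.append(f"{beam.BeamNumber},{iso[0]:.2f},{iso[1]:.2f},{iso[2]:.2f}")
    with open(out_path, "w") as f:
        f.write("beam,iso_x_mm,iso_y_mm,iso_z_mm\n")
        f.write("\n".join(lines) + "\n")

# export_isocenters("RTPLAN.dcm", "isocenters.csv")
```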
Modeling Tools for Propulsion Analysis and Computational Fluid Dynamics on the Internet
NASA Technical Reports Server (NTRS)
Muss, J. A.; Johnson, C. W.; Gotchy, M. B.
2000-01-01
The existing RocketWeb(TradeMark) Internet Analysis System (http://www.johnsonrockets.com/rocketweb) provides an integrated set of advanced analysis tools that can be securely accessed over the Internet. Since these tools consist of both batch and interactive analysis codes, the system includes convenient methods for creating input files and evaluating the resulting data. The RocketWeb(TradeMark) system also contains many features that permit data sharing which, when further developed, will facilitate real-time, geographically diverse, collaborative engineering within a designated work group. Adding work group management functionality while simultaneously extending and integrating the system's set of design and analysis tools will create a system providing rigorous, controlled design development, reducing design cycle time and cost.
Globus Identity, Access, and Data Management: Platform Services for Collaborative Science
NASA Astrophysics Data System (ADS)
Ananthakrishnan, R.; Foster, I.; Wagner, R.
2016-12-01
Globus is software-as-a-service for research data management, developed at, and operated by, the University of Chicago. Globus, accessible at www.globus.org, provides high speed, secure file transfer; file sharing directly from existing storage systems; and data publication to institutional repositories. 40,000 registered users have used Globus to transfer tens of billions of files totaling hundreds of petabytes between more than 10,000 storage systems within campuses and national laboratories in the US and internationally. Web, command line, and REST interfaces support both interactive use and integration into applications and infrastructures. An important component of the Globus system is its foundational identity and access management (IAM) platform service, Globus Auth. Both Globus research data management and other applications use Globus Auth for brokering authentication and authorization interactions between end-users, identity providers, resource servers (services), and a range of clients, including web, mobile, and desktop applications, and other services. Compliant with important standards such as OAuth, OpenID, and SAML, Globus Auth provides mechanisms required for an extensible, integrated ecosystem of services and clients for the research and education community. It underpins projects such as the US National Science Foundation's XSEDE system, NCAR's Research Data Archive, and the DOE Systems Biology Knowledge Base. Current work is extending Globus services to be compliant with FEDRAMP standards for security assessment, authorization, and monitoring for cloud services. We will present Globus IAM solutions and give examples of Globus use in various projects for federated access to resources. We will also describe how Globus Auth and Globus research data management capabilities enable rapid development and low-cost operations of secure data sharing platforms that leverage Globus services and integrate them with local policy and security.
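As a rough illustration of the OAuth 2.0 brokering role described above, the sketch below performs a client-credentials token request with the requests library. The token endpoint URL and scope string are placeholders assumed for illustration; the actual values should be taken from the Globus Auth documentation rather than from this example.

```python
# Sketch of an OAuth 2.0 client-credentials grant, the kind of flow a service
# might use against an authorization server such as Globus Auth. The endpoint
# and scope below are placeholders, not verified values.
import requests

TOKEN_URL = "https://auth.example.org/v2/oauth2/token"  # placeholder endpoint
SCOPE = "urn:globus:auth:scope:transfer.api.globus.org:all"  # assumed scope string

def get_access_token(client_id: str, client_secret: str) -> str:
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": SCOPE},
        auth=(client_id, client_secret),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```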
Zinge, Priyanka Ramdas; Patil, Jayaprakash
2017-01-01
The aim of this study is to evaluate and compare the effect of the OneShape and Neolix rotary single-file systems and the WaveOne and Reciproc reciprocating single-file systems on pericervical dentin (PCD) using cone-beam computed tomography (CBCT). A total of 40 freshly extracted mandibular premolars were collected and divided into two groups, namely, Group A - Rotary: A1 - Neolix and A2 - OneShape, and Group B - Reciprocating: B1 - WaveOne and B2 - Reciproc. Preoperative scans of each were taken, followed by conventional access cavity preparation and working length determination with a size 10 K-file. Instrumentation of the canal was done according to the respective file system, and postinstrumentation CBCT scans of the teeth were obtained. Slices 90 μm thick were obtained 4 mm apical and coronal to the cementoenamel junction. The PCD thickness was calculated as the shortest distance from the canal outline to the closest adjacent root surface, measured on four surfaces (facial, lingual, mesial, and distal) for all groups in the two scans. No significant difference was found between the rotary and reciprocating single-file systems in their effect on PCD, but in Group B2 there was the most significant loss of tooth structure on the mesial, lingual, and distal surfaces (P < 0.05). The Reciproc single-file system removes more PCD than the other experimental groups, whereas the Neolix single-file system had the least effect on PCD.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-21
...), Rockville, Maryland 20852 and is accessible from the NRC's Agencywide Documents Access and Management System... receipt of the document. The E-Filing system also distributes an e-mail notice that provides access to the... intervene is filed so that they can obtain access to the document via the E-Filing system. A person filing...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-21
... NRC's Agencywide Documents Access and Management System (ADAMS) Public Electronic Reading Room on the... receipt of the document. The E-Filing system also distributes an e-mail notice that provides access to the... intervene is filed so that they can obtain access to the document via the E-Filing system. A person filing...
ERIC Educational Resources Information Center
Prineau, J. P.
The data system and its branches, computerized in 1970, provide information from the following: student records file, accountancy file, an experimental-stage personnel file, and a planning-stage facilities file. The files not only cope with the university's daily management duties but also supply the French Ministry with statistics. Two types of…
Generation of animation sequences of three dimensional models
NASA Technical Reports Server (NTRS)
Poi, Sharon (Inventor); Bell, Brad N. (Inventor)
1990-01-01
The invention is directed toward a method and apparatus for generating an animated sequence through the movement of three-dimensional graphical models. A plurality of pre-defined graphical models are stored and manipulated in response to interactive commands or by means of a pre-defined command file. The models may be combined as part of a hierarchical structure to represent physical systems without need to create a separate model which represents the combined system. System motion is simulated through the introduction of translation, rotation and scaling parameters upon a model within the system. The motion is then transmitted down through the system hierarchy of models in accordance with hierarchical definitions and joint movement limitations. The present invention also calls for a method of editing hierarchical structure in response to interactive commands or a command file such that a model may be included, deleted, copied or moved within multiple system model hierarchies. The present invention also calls for the definition of multiple viewpoints or cameras which may exist as part of a system hierarchy or as an independent camera. The simulated movement of the models and systems is graphically displayed on a monitor and a frame is recorded by means of a video controller. Multiple movement and hierarchy manipulations are then recorded as a sequence of frames which may be played back as an animation sequence on a video cassette recorder.
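The passage above describes propagating translation, rotation, and scaling down a hierarchy of models; the NumPy sketch below shows one conventional way to do this with 4x4 homogeneous transforms. All class and function names are invented for illustration and are not taken from the patented system.

```python
# Sketch: propagate a parent transform down a hierarchy of models using 4x4
# homogeneous matrices. Structure and names are illustrative only.
import numpy as np

class Model:
    def __init__(self, name, local_transform=None, children=None):
        self.name = name
        self.local = np.eye(4) if local_transform is None else local_transform
        self.children = children or []

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[0, 0], m[0, 1], m[1, 0], m[1, 1] = c, -s, s, c
    return m

def world_transforms(model, parent=np.eye(4), out=None):
    """Accumulate parent @ local down the hierarchy, as the text describes."""
    out = {} if out is None else out
    world = parent @ model.local
    out[model.name] = world
    for child in model.children:
        world_transforms(child, world, out)
    return out

# A small hierarchy: moving the robot moves the arm and hand with it.
arm = Model("arm", rotation_z(np.pi / 4), [Model("hand", translation(0, 2, 0))])
robot = Model("robot", translation(5, 0, 0), [arm])
for name, mat in world_transforms(robot).items():
    print(name, mat[:3, 3])  # world-space position of each model's origin
```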
Onshore industrial wind turbine locations for the United States
Diffendorfer, Jay E.; Compton, Roger; Kramer, Louisa; Ancona, Zach; Norton, Donna
2017-01-01
This dataset provides industrial-scale onshore wind turbine locations in the United States, corresponding facility information, and turbine technical specifications. The database has wind turbine records that have been collected, digitized, locationally verified, and internally quality controlled. Turbines from the Federal Aviation Administration Digital Obstacles File, through product release date July 22, 2013, were used as the primary source of turbine data points. The dataset was subsequently revised and reposted as described in the revision histories for the report. Verification of the turbine positions was done by visual interpretation using high-resolution aerial imagery in Environmental Systems Research Institute (Esri) ArcGIS Desktop. Turbines without Federal Aviation Administration Obstacles Repository System numbers were visually identified and point locations were added to the collection. We estimated a locational error of plus or minus 10 meters for turbine locations. Wind farm facility names were identified from publicly available facility datasets. Facility names were then used in a Web search of additional industry publications and press releases to attribute additional turbine information (such as manufacturer, model, and technical specifications of wind turbines). Wind farm facility location data from various wind and energy industry sources were used to search for and digitize turbines not in existing databases. Technical specifications for turbines were assigned based on the wind turbine make and model as described in literature, specifications listed in the Federal Aviation Administration Digital Obstacles File, and information on the turbine manufacturer’s Web site. Some facility and turbine information on make and model did not exist or was difficult to obtain. Thus, uncertainty may exist for certain turbine specifications. That uncertainty was rated and a confidence was recorded for both location and attribution data quality.
76 FR 11465 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... separate systems of records: ``FHFA-OIG Audit Files Database,'' ``FHFA-OIG Investigative & Evaluative Files Database,'' ``FHFA-OIG Investigative & Evaluative MIS Database,'' and ``FHFA-OIG Hotline Database.'' These... Audit Files Database. FHFA-OIG-2: FHFA-OIG Investigative & Evaluative Files Database. FHFA-OIG-3: FHFA...
Generation and use of the Goddard trajectory determination system SLP ephemeris files
NASA Technical Reports Server (NTRS)
Armstrong, M. G.; Tomaszewski, I. B.
1973-01-01
Information is presented to acquaint users of the Goddard Trajectory Determination System Solar/Lunar/Planetary ephemeris files with the details connected with the generation and use of these files. In particular, certain sections constitute a user's manual for the ephemeris files.
DOT National Transportation Integrated Search
2008-01-01
Comparing the 1988-2007 files with files from years prior to 1988 is not recommended. The : principal attributes of the NASS CDS 1988-2007 files include: focusing on crashes involving : automobiles and automobile derivatives, light trucks and vans wi...
Digital Libraries: The Next Generation in File System Technology.
ERIC Educational Resources Information Center
Bowman, Mic; Camargo, Bill
1998-01-01
Examines file sharing within corporations that use wide-area, distributed file systems. Applications and user interactions strongly suggest that the addition of services typically associated with digital libraries (content-based file location, strongly typed objects, representation of complex relationships between documents, and extrinsic…
Digital data for the geology of the Southern Brooks Range, Alaska
Till, Alison B.; Dumoulin, Julie A.; Harris, Anita G.; Moore, Thomas E.; Bleick, Heather A.; Siwiec, Benjamin; Labay, Keith A.; Wilson, Frederic H.; Shew, Nora B.
2008-01-01
The growth in the use of Geographic Information Systems (GIS) has highlighted the need for digital geologic maps that have been attributed with information about age and lithology. Such maps can be conveniently used to generate derivative maps for manifold special purposes such as mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. This report is part of a series of integrated geologic map databases that cover the entire United States. Three national-scale geologic maps that portray most or all of the United States already exist; for the conterminous U.S., King and Beikman (1974a,b) compiled a map at a scale of 1:2,500,000, Beikman (1980) compiled a map for Alaska at 1:2,500,000 scale, and for the entire U.S., Reed and others (2005a,b) compiled a map at a scale of 1:5,000,000. A digital version of the King and Beikman map was published by Schruben and others (1994). Reed and Bush (2004) produced a digital version of the Reed and others (2005a) map for the conterminous U.S. The present series of maps is intended to provide the next step in increased detail. State geologic maps that range in scale from 1:100,000 to 1:1,000,000 are available for most of the country, and digital versions of these state maps are the basis of this product. The digital geologic maps presented here are in a standardized format as ARC/INFO export files and as ArcView shape files. The files named __geol contain geologic polygons and line (contact) attributes; files named __fold contain fold axes; files named __lin contain lineaments; and files named __dike contain dikes as lines. Data tables that relate the map units to detailed lithologic and age information accompany these GIS files. The map is delivered as a set of 1:250,000-scale quadrangle files. To the best of our ability, these quadrangle files are edge-matched with respect to geology. When the maps are merged, the combined attribute tables can be used directly with the merged maps to make derivative maps.
Wilson, Frederic H.; Hults, Chad P.; Labay, Keith A.; Shew, Nora B.
2007-01-01
The growth in the use of Geographic Information Systems (GIS) has highlighted the need for digital geologic maps that have been attributed with information about age and lithology. Such maps can be conveniently used to generate derivative maps for manifold special purposes such as mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. This report is part of a series of integrated geologic map databases that cover the entire United States. Three national-scale geologic maps that portray most or all of the United States already exist; for the conterminous U.S., King and Beikman (1974a,b) compiled a map at a scale of 1:2,500,000, Beikman (1980) compiled a map for Alaska at 1:2,500,000 scale, and for the entire U.S., Reed and others (2005a,b) compiled a map at a scale of 1:5,000,000. A digital version of the King and Beikman map was published by Schruben and others (1994). Reed and Bush (2004) produced a digital version of the Reed and others (2005a) map for the conterminous U.S. The present series of maps is intended to provide the next step in increased detail. State geologic maps that range in scale from 1:100,000 to 1:1,000,000 are available for most of the country, and digital versions of these state maps are the basis of this product. The digital geologic maps presented here are in a standardized format as ARC/INFO export files and as ArcView shape files. The files named __geol contain geologic polygons and line (contact) attributes; files named __fold contain fold axes; files named __lin contain lineaments; and files named __dike contain dikes as lines. Data tables that relate the map units to detailed lithologic and age information accompany these GIS files. The map is delivered as a set of 1:250,000-scale quadrangle files. To the best of our ability, these quadrangle files are edge-matched with respect to geology. When the maps are merged, the combined attribute tables can be used directly with the merged maps to make derivative maps.
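Where the quadrangle files are merged for derivative mapping, a geopandas-based workflow like the sketch below is one option. The file names, directory layout, and column names (UNIT_LABEL, LITHOLOGY) are placeholders, and the join key between map polygons and the lithology/age tables is an assumption rather than the published schema.

```python
# Sketch: merge edge-matched quadrangle shape files and join a unit attribute
# table. Requires geopandas and pandas; file and column names are placeholders.
import glob
import pandas as pd
import geopandas as gpd

# Read every *_geol quadrangle shape file and stack them into one layer.
quads = [gpd.read_file(path) for path in sorted(glob.glob("quads/*_geol.shp"))]
geology = gpd.GeoDataFrame(pd.concat(quads, ignore_index=True))

# Join the lithology/age lookup table on the map-unit label (assumed column name).
units = pd.read_csv("unit_attributes.csv")
geology = geology.merge(units, on="UNIT_LABEL", how="left")

# Example derivative map: all polygons of a given lithology class.
carbonates = geology[geology["LITHOLOGY"] == "carbonate"]
carbonates.to_file("derivative_carbonates.shp")
```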
NASA Astrophysics Data System (ADS)
Prasad, U.; Rahabi, A.
2001-05-01
The following utilities, developed for dumping HDF-EOS format data, are of special use for Earth science data from NASA's Earth Observing System (EOS). This poster demonstrates their use and application. The first four tools take HDF-EOS data files as input. HDF-EOS Metadata Dumper - metadmp: The metadata dumper extracts metadata from EOS data granules. It operates by simply copying blocks of metadata from the file to the standard output; it does not process the metadata in any way. Since all metadata in EOS granules is encoded in the Object Description Language (ODL), the output of metadmp will be in the form of complete ODL statements. EOS data granules may contain up to three different sets of metadata (Core, Archive, and Structural Metadata). HDF-EOS Contents Dumper - heosls: The heosls dumper displays the contents of HDF-EOS files. This utility provides detailed information on the POINT, SWATH, and GRID data sets in the files; for example, it will list the geolocation fields, data fields, and objects. HDF-EOS ASCII Dumper - asciidmp: The ASCII dump utility extracts fields from EOS data granules into plain ASCII text. The output from asciidmp should be easily human readable; with minor editing, asciidmp's output can be made ingestible by any application with ASCII import capabilities. HDF-EOS Binary Dumper - bindmp: The binary dumper utility dumps HDF-EOS objects in binary format. This is useful for feeding its output into an existing program that does not understand HDF, for example custom software and COTS products. HDF-EOS User Friendly Metadata - UFM: The UFM utility is useful for viewing ECS metadata. UFM takes an EOSDIS ODL metadata file and produces an HTML report of the metadata for display using a web browser. HDF-EOS METCHECK - METCHECK: METCHECK can be invoked from either a Unix or DOS environment with a set of command-line options that direct the tool's inputs and output. METCHECK validates the inventory metadata in a (.met) file using the descriptor file (.desc) as the reference. The tool takes a (.desc) file and a (.met) ODL file as inputs and generates a simple output file containing the results of the checking process.
A new approach to the film library: time-unit filing.
Palmucci, J A
2000-01-01
The installation of a new radiology information system (RIS) at Children's Hospital Medical Center of Akron in Akron, Ohio, took the radiology department into a new world of technology, but raised issues we never anticipated. The major problem the new RIS forced the department to overcome was how to eliminate the film file's reliance on a proprietary radiology numbering system. Previously, the department had used its own numbering system--a proprietary x-ray number--to file film jackets and had used the hospital-issued medical record number to access patient and payer information from the hospital information system. It became clear that we should use a single number--the medical record number--to access all data, but we wondered how that would affect our film file room. An RIS consultant suggested that we consider filing films by last date of service, a system called "time-unit filing." Time-unit filing means keeping the most recent two weeks' worth of films in the main file room. They are organized by gender in blue or pink jackets and marked alphabetically by the patient's last name in a way that makes mis-files easy to see. If a patient's film jacket is activated again, it is refiled in the current two-week time unit. Inactive jackets remain in their two-week time unit indefinitely. Time-unit filing has had many benefits for the radiology department at Children's Hospital Medical Center of Akron: fewer mis-files, less time needed for filing and searching, and successful implementation of the new RIS.
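A small sketch of the time-unit bookkeeping described above: computing which two-week unit a date of service falls into. The epoch date and the label format are arbitrary illustrative choices, not the hospital's actual scheme.

```python
# Sketch: assign a date of service to a two-week filing unit. The epoch and the
# label format are arbitrary illustrative choices, not the hospital's actual scheme.
from datetime import date, timedelta

EPOCH = date(2000, 1, 3)  # an arbitrary Monday used as the start of unit 0

def time_unit(date_of_service: date) -> str:
    unit_index = (date_of_service - EPOCH).days // 14
    start = EPOCH + timedelta(days=14 * unit_index)
    end = start + timedelta(days=13)
    return f"{start.isoformat()}_to_{end.isoformat()}"

# A reactivated jacket is simply refiled under the current two-week unit.
print(time_unit(date.today()))
```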
78 FR 64488 - Combined Notice of Filings #2
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-29
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 2 Take notice that the Commission received the following electric rate filings: Docket Numbers: ER13-1999-001. Applicants: Midcontinent Independent System Operator, Inc. Description: Midcontinent Independent System Operator, Inc. submits tariff filing per 35: 10...
48 CFR 3004.804-5 - Procedures for closing out contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Procedures for closing out contract files. 3004.804-5 Section 3004.804-5 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... Contract Files 3004.804-5 Procedures for closing out contract files. ...
48 CFR 904.803 - Contents of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Contents of contract files. 904.803 Section 904.803 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS Government Contract Files 904.803 Contents of contract files. (a) (29) The record copy of...
48 CFR 3004.804-5 - Procedures for closing out contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Procedures for closing out contract files. 3004.804-5 Section 3004.804-5 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... Contract Files 3004.804-5 Procedures for closing out contract files. ...
48 CFR 4.803 - Contents of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Contents of contract files. 4.803 Section 4.803 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Government Contract Files 4.803 Contents of contract files. The following are examples of...
48 CFR 3004.804-5 - Procedures for closing out contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Procedures for closing out contract files. 3004.804-5 Section 3004.804-5 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... Contract Files 3004.804-5 Procedures for closing out contract files. ...
48 CFR 4.803 - Contents of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Contents of contract files. 4.803 Section 4.803 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Government Contract Files 4.803 Contents of contract files. The following are examples of...
48 CFR 904.803 - Contents of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Contents of contract files. 904.803 Section 904.803 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS Government Contract Files 904.803 Contents of contract files. (a) (29) The record copy of...
48 CFR 3004.804-5 - Procedures for closing out contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Procedures for closing out contract files. 3004.804-5 Section 3004.804-5 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... Contract Files 3004.804-5 Procedures for closing out contract files. ...
48 CFR 3004.804-5 - Procedures for closing out contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Procedures for closing out contract files. 3004.804-5 Section 3004.804-5 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND... Contract Files 3004.804-5 Procedures for closing out contract files. ...
48 CFR 904.803 - Contents of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Contents of contract files. 904.803 Section 904.803 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS Government Contract Files 904.803 Contents of contract files. (a) (29) The record copy of...
Geographic Information Systems and Web Page Development
NASA Technical Reports Server (NTRS)
Reynolds, Justin
2004-01-01
The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems (GIS). The value of GIS was explained in an article dating back to 1980, entitled "Need for a Multipurpose Cadastre," which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, a Geographic Information System is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is essentially database management software applied to the management of spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. At the outset, I was given goals and expectations from my branch and from my mentor with regard to the further implementation of GIS. Those goals are as follows: (1) Continue the development of GIS for the underground structures. (2) Extract and export annotated data from AutoCAD drawing files and construct a database (to serve as a prototype for future work). (3) Examine existing underground record drawings to determine existing and non-existing underground tanks. Once this data was collected and analyzed, I set out on the task of creating a user-friendly database that could be accessed by all members of the branch. It was important that the database be built using programs that most employees already possess, ruling out most AutoCAD-based viewers. Therefore, I set out to create an Access database that translated onto the web using Internet Explorer as the foundation. After some programming, it was possible to view AutoCAD files and other GIS-related applications in Internet Explorer, while providing the user with a variety of editing commands and setting options. I was also given the task of launching a divisional website using Macromedia Flash and other web-development programs.
Tuning HDF5 subfiling performance on parallel file systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey
Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach, which instigates lock contention problems on parallel file systems, and having one file per process, which results in a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune the parallel I/O performance of this feature with the parallel file systems of the Cray XC40 system at NERSC (Cori), which include burst buffer storage and Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a performance advantage of 1.2X to 6X with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of using the subfiling feature.
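The subfiling idea can be illustrated with a simple one-file-per-group write pattern. The sketch below uses h5py and mpi4py to map groups of MPI ranks to separate files; it is not the actual HDF5 subfiling virtual file driver, whose API is not shown in the abstract, and it requires an MPI-enabled (parallel) build of h5py.

```python
# Sketch: a crude stand-in for subfiling, mapping groups of MPI ranks to separate
# HDF5 files. Requires mpi4py and a parallel build of h5py; this is not the real
# HDF5 subfiling VFD.
import numpy as np
import h5py
from mpi4py import MPI

RANKS_PER_SUBFILE = 4  # tuning knob, analogous to choosing the number of subfiles

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
subfile_id = rank // RANKS_PER_SUBFILE

# Ranks sharing a subfile get their own communicator and a collective file handle.
subcomm = comm.Split(color=subfile_id, key=rank)
data = np.full(1024, rank, dtype="f8")

with h5py.File(f"output.subfile.{subfile_id}.h5", "w",
               driver="mpio", comm=subcomm) as f:
    dset = f.create_dataset("x", (subcomm.Get_size(), data.size), dtype="f8")
    dset[subcomm.Get_rank(), :] = data  # each rank writes its own row
```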
77 FR 43592 - System Energy Resources, Inc.; Notice of Filing
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-25
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. EL12-52-001] System Energy Resources, Inc.; Notice of Filing Take notice that on July 18, 2012, System Energy Resources, Inc. (System Energy Resources), submitted a supplement to its petition filed on March 28, 2012 (March 28 petition...
Ontology for Vector Surveillance and Management
LOZANO-FUENTES, SAUL; BANDYOPADHYAY, ARITRA; COWELL, LINDSAY G.; GOLDFAIN, ALBERT; EISEN, LARS
2013-01-01
Ontologies, which are made up by standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an “umbrella” for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a “term tree” to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regards to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage, through the has_vector relation, of arthropod species to the pathogenic microorganisms for which they serve as biological vectors. We also recognized and addressed a potential roadblock for use of the VSMO by the vector-borne disease community: the difficulty in extracting information from OBO-Edit ontology files (*.obo files) and exporting the information to other file formats. A novel ontology explorer tool was developed to facilitate extraction and export of information from the VSMO *.obo file into lists of terms and their associated unique IDs in *.txt or *.csv file formats. These lists can then be imported into a database or data management system for use as select lists with predefined terms. This is an important step to ensure that the knowledge contained in our ontology can be put into practical use. PMID:23427646
Ontology for vector surveillance and management.
Lozano-Fuentes, Saul; Bandyopadhyay, Aritra; Cowell, Lindsay G; Goldfain, Albert; Eisen, Lars
2013-01-01
Ontologies, which are made up by standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an "umbrella" for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a "term tree" to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regards to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage, through the has vector relation, of arthropod species to the pathogenic microorganisms for which they serve as biological vectors. We also recognized and addressed a potential roadblock for use of the VSMO by the vector-borne disease community: the difficulty in extracting information from OBO-Edit ontology files (*.obo files) and exporting the information to other file formats. A novel ontology explorer tool was developed to facilitate extraction and export of information from the VSMO*.obo file into lists of terms and their associated unique IDs in *.txt or *.csv file formats. These lists can then be imported into a database or data management system for use as select lists with predefined terms. This is an important step to ensure that the knowledge contained in our ontology can be put into practical use.
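The export step described above (pulling term IDs and names out of an *.obo file into a flat list) can be approximated with a few lines of standard-library Python. The stanza and tag names ([Term], id:, name:) follow the OBO flat-file convention; the function names and file names are illustrative, not those of the actual ontology explorer tool.

```python
# Sketch: extract (id, name) pairs from the [Term] stanzas of an OBO flat file
# and write them to CSV, roughly the export step the ontology explorer performs.
import csv

def obo_terms(obo_path: str):
    term_id, term_name, in_term = None, None, False
    with open(obo_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line == "[Term]":
                in_term, term_id, term_name = True, None, None
            elif line.startswith("[") and line.endswith("]"):
                in_term = False  # some other stanza type, e.g. [Typedef]
            elif in_term and line.startswith("id:"):
                term_id = line[3:].strip()
            elif in_term and line.startswith("name:"):
                term_name = line[5:].strip()
            if in_term and term_id and term_name:
                yield term_id, term_name
                in_term = False  # emit each term once

def export_terms(obo_path: str, csv_path: str) -> None:
    with open(csv_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["id", "name"])
        writer.writerows(obo_terms(obo_path))

# export_terms("vsmo.obo", "vsmo_terms.csv")
```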
P2P watch: personal health information detection in peer-to-peer file-sharing networks.
Sokolova, Marina; El Emam, Khaled; Arbuckle, Luk; Neri, Emilio; Rose, Sean; Jonker, Elizabeth
2012-07-09
Users of peer-to-peer (P2P) file-sharing networks risk the inadvertent disclosure of personal health information (PHI). In addition to potentially causing harm to the affected individuals, this can heighten the risk of data breaches for health information custodians. Automated PHI detection tools that crawl the P2P networks can identify PHI and alert custodians. While there has been previous work on the detection of personal information in electronic health records, there has been a dearth of research on the automated detection of PHI in heterogeneous user files. Our objective was to build a system that accurately detects PHI in files sent through P2P file-sharing networks. The system, which we call P2P Watch, uses a pipeline of text processing techniques to automatically detect PHI in files exchanged through P2P networks. P2P Watch processes unstructured texts regardless of the file format, document type, and content. We developed P2P Watch to extract and analyze PHI in text files exchanged on P2P networks. We labeled texts as PHI if they contained identifiable information about a person (eg, name and date of birth) and specifics of the person's health (eg, diagnosis, prescriptions, and medical procedures). We evaluated the system's performance through its efficiency and effectiveness on 3924 files gathered from three P2P networks. P2P Watch successfully processed 3924 P2P files of unknown content. A manual examination of 1578 randomly selected files marked by the system as non-PHI confirmed that these files indeed did not contain PHI, making the false-negative detection rate equal to zero. Of 57 files marked by the system as PHI, all contained both personally identifiable information and health information: 11 files were PHI disclosures, and 46 files contained organizational materials such as unfilled insurance forms, job applications by medical professionals, and essays. PHI can be successfully detected in free-form textual files exchanged through P2P networks. Once the files with PHI are detected, affected individuals or data custodians can be alerted to take remedial action.
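A highly simplified sketch of the kind of two-signal rule described above (personally identifiable information plus health-related content in the same file). The regular expressions and keyword list are toy placeholders, far simpler than the text-processing pipeline used by the real system.

```python
# Toy sketch of flagging files that contain both an identifier-like pattern and a
# health-related keyword. The patterns below are illustrative placeholders only.
import re
from pathlib import Path

DOB_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")        # e.g. 03/14/1975
NAME_PATTERN = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")      # crude Firstname Lastname
HEALTH_TERMS = {"diagnosis", "prescription", "medication", "surgery", "diabetes"}

def looks_like_phi(text: str) -> bool:
    has_identifier = bool(DOB_PATTERN.search(text)) and bool(NAME_PATTERN.search(text))
    has_health_info = any(term in text.lower() for term in HEALTH_TERMS)
    return has_identifier and has_health_info

def scan_directory(root: str):
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        if looks_like_phi(text):
            yield path

# for hit in scan_directory("downloads"):
#     print("possible PHI:", hit)
```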
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-30
... Mills, Inc.; Notice of Application Accepted for Filing, Soliciting Motions To Intervene and Protests... Mills, Inc. e. Name of Project: Monadnock Hydroelectric Project. f. Location: The existing project is..., Monadnock Paper Mills, Inc.; Antrim Road, P.O. Box 339, Bennington, NH 03442; (603) 588-3311 or [email protected
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-17
... conditions, and prescriptions. k. Deadline for filing responsive documents: Due to the small size of the... proposed 80-foot-long, 16-inch-diameter intake pipe; (3) a proposed 18- foot by 18-foot powerhouse..., 50-foot discharge pipe, connecting to existing 42-inch diameter and 10- inch diameter pipes conveying...
26 CFR 301.6323(c)-3 - Protection for obligatory disbursement agreements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... valid with respect to a security interest which: (1) Comes into existence after the tax lien filing, (2...(h)-1 for definitions of the terms “security interest” and “tax lien filing.” For purposes of this... business, to make disbursements. An agreement is treated as an obligatory disbursement agreement only with...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-09
... market conditions. The text of the proposed rule change is attached as Exhibit 5.\\3\\ \\3\\ The Commission... event of unusual market conditions. This is a competitive filing that is based on two recently approved... expiration if unusual market conditions exist. The exchanges amended their rules to permit the opening of...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-07
... operator subject to the provisions of this part shall maintain a file of these measurements, and retain the file for at least five years following the date of such measurements, maintain reports and records...: There is a decrease in the number of affected facilities due to a more accurate accounting of existing...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-22
... feasibility of the proposed Coralville Dam Hydroelectric Project No. 14388, to be located at the existing Coralville Dam on the Iowa River, near Iowa City in Johnson County, Iowa. The Coralville Dam is owned by the... Competing Applications; Coralville Energy, LLC On April 18, 2012, the Coralville Energy, LLC filed an...
Can ASCII data files be standardized for Earth Science?
NASA Astrophysics Data System (ADS)
Evans, K. D.; Chen, G.; Wilson, A.; Law, E.; Olding, S. W.; Krotkov, N. A.; Conover, H.
2015-12-01
NASA's Earth Science Data Systems Working Groups (ESDSWG) were created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems based on user experiences. Each group works independently, focusing on a unique topic. Participants in ESDSWG groups come from a variety of NASA-funded science and technology projects, such as MEaSUREs, as well as NASA information technology experts, affiliated contractors, staff, and other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long-term data products. Each year, the ESDSWG has a face-to-face meeting to discuss recommendations and future efforts. Last year's (2014) ASCII for Science Data Working Group (ASCII WG) completed its goals and made recommendations on a minimum set of information needed to make ASCII files at least human-readable and usable for the foreseeable future. The 2014 ASCII WG created a table of ASCII files and their components as a means of understanding what kinds of ASCII formats exist and what components they have in common. Using this table and adding information from other ASCII file formats, we will discuss the advantages and disadvantages of a standardized format. For instance, space geodesy scientists have been using the same RINEX/SINEX ASCII formats for decades. Astronomers mostly archive their data in the FITS format. Yet Earth scientists seem to have a slew of ASCII formats, such as ICARTT, netCDF (an ASCII dump), and the IceBridge ASCII format. The 2015 Working Group is focusing on promoting the extensibility and machine readability of ASCII data. Questions have been posed, including: Can we have a standardized ASCII file format? Can it be machine-readable and simultaneously human-readable? We will present a summary of currently used ASCII formats in terms of their advantages and shortcomings, as well as potential improvements.
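To make the "minimum set of information" idea concrete, the sketch below shows one possible self-describing ASCII layout: metadata in leading comment lines naming variables, units, and a fill value, followed by comma-separated data. The conventions here (the "#" marker and the key names) are hypothetical and are not the ESDSWG recommendation itself.

```python
# Sketch of a hypothetical self-describing ASCII layout and a small reader for it.
import io

SAMPLE = """\
# variables: time_utc, temperature, pressure
# units: hours, kelvin, hPa
# fill_value: -9999
0.0, 288.1, 1013.2
1.0, -9999, 1012.8
"""

def read_self_describing_ascii(text):
    meta, rows = {}, []
    for line in io.StringIO(text):
        line = line.strip()
        if line.startswith("#"):                      # metadata line
            key, _, value = line[1:].partition(":")
            meta[key.strip()] = [v.strip() for v in value.split(",")]
        elif line:                                    # data line
            rows.append([float(v) for v in line.split(",")])
    return meta, rows

meta, rows = read_self_describing_ascii(SAMPLE)
# meta["variables"] -> ['time_utc', 'temperature', 'pressure']
# rows[1][1] == -9999.0, which meta["fill_value"] identifies as missing data
```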
78 FR 38710 - Combined Notice of Filings #2
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-27
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 2 Take notice that the Commission received the following electric rate filings: Docket Numbers: ER12-360-003. Applicants: New York Independent System Operator, Inc. Description: New York Independent System Operator, Inc. submits NYISO compliance filing in...
78 FR 67137 - Combined Notice of Filings #2
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-08
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 2 Take notice that the Commission received the following electric rate filings: Docket Numbers: ER14-263-000. Applicants: Midcontinent Independent System Operator, Inc. Description: Midcontinent Independent System Operator, Inc. submits tariff filing per 35.13(a...
48 CFR 2904.800-70 - Contents of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Contents of contract files. 2904.800-70 Section 2904.800-70 Federal Acquisition Regulations System DEPARTMENT OF LABOR GENERAL ADMINISTRATIVE MATTERS Government Contract Files 2904.800-70 Contents of contract files. (a) The reports listed...
48 CFR 2904.800-70 - Contents of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Contents of contract files. 2904.800-70 Section 2904.800-70 Federal Acquisition Regulations System DEPARTMENT OF LABOR GENERAL ADMINISTRATIVE MATTERS Government Contract Files 2904.800-70 Contents of contract files. (a) The reports listed...
48 CFR 2904.800-70 - Contents of contract files.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Contents of contract files. 2904.800-70 Section 2904.800-70 Federal Acquisition Regulations System DEPARTMENT OF LABOR GENERAL ADMINISTRATIVE MATTERS Government Contract Files 2904.800-70 Contents of contract files. (a) The reports listed...
47 CFR 76.1716 - Subscriber records and public inspection file.
Code of Federal Regulations, 2014 CFR
2014-10-01
... § 76.1716 Subscriber records and public inspection file. The operator of a cable television system shall make the system, its public inspection file, and its records of subscribers available for... 47 Telecommunication 4 2014-10-01 2014-10-01 false Subscriber records and public inspection file...
47 CFR 76.1716 - Subscriber records and public inspection file.
Code of Federal Regulations, 2013 CFR
2013-10-01
... § 76.1716 Subscriber records and public inspection file. The operator of a cable television system shall make the system, its public inspection file, and its records of subscribers available for... 47 Telecommunication 4 2013-10-01 2013-10-01 false Subscriber records and public inspection file...
48 CFR 2904.800-70 - Contents of contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Contents of contract files. 2904.800-70 Section 2904.800-70 Federal Acquisition Regulations System DEPARTMENT OF LABOR GENERAL ADMINISTRATIVE MATTERS Government Contract Files 2904.800-70 Contents of contract files. (a) The reports listed...
48 CFR 2904.800-70 - Contents of contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Contents of contract files. 2904.800-70 Section 2904.800-70 Federal Acquisition Regulations System DEPARTMENT OF LABOR GENERAL ADMINISTRATIVE MATTERS Government Contract Files 2904.800-70 Contents of contract files. (a) The reports listed...
Mynodbcsv: Lightweight Zero-Config Database Solution for Handling Very Large CSV Files
Adaszewski, Stanisław
2014-01-01
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, the format is often the first obstacle: the lack of standardized ways to explore different data layouts requires solving the problem from scratch each time. The ability to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible when the horizontal dimension reaches thousands of columns. Most databases are optimized for handling large numbers of rows rather than columns; performance for datasets with atypical layouts is therefore often unacceptable. Other challenges include schema creation, updates, and repeated data imports. To address these problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by a "no copy" approach (data stay mostly in the CSV files); "zero configuration" (no need to specify a database schema); a small footprint written in C++ with boost [1], SQLite [2], and Qt [3] that requires no installation; query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from CSV files, which together ensure efficient plan execution; effortless support for millions of columns; per-value typing, which makes mixed text/number data easy to use; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware along with educational videos on its website [4]. It needs no prerequisites to run, as all libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results. PMID:25068261
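The end-user experience described above (plain SQL over CSV data, with indices created only for the columns actually queried) can be sketched in a few lines. Unlike Mynodbcsv's "no copy" design, this illustration does copy rows into an in-memory SQLite table; it only shows the interaction pattern. File and column names are hypothetical.

```python
# Minimal illustration of querying CSV data through SQL (not the Mynodbcsv engine).
import csv
import sqlite3

def csv_to_sqlite(csv_path, table="data"):
    conn = sqlite3.connect(":memory:")
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        cols = ", ".join(f'"{c}"' for c in header)          # untyped columns, like per-value typing
        placeholders = ", ".join("?" for _ in header)
        conn.execute(f'CREATE TABLE {table} ({cols})')
        conn.executemany(f'INSERT INTO {table} VALUES ({placeholders})', reader)
    return conn

# conn = csv_to_sqlite("measurements.csv")
# conn.execute('CREATE INDEX idx_sample ON data("sample_id")')   # index created on demand
# rows = conn.execute('SELECT * FROM data WHERE "sample_id" = ?', ("S042",)).fetchall()
```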
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service
Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha
2017-01-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveals that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage. PMID:28884169
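The row-key idea can be illustrated independently of HBase itself. The abstract does not spell out the exact key layout, so the sketch below simply shows the general technique: a composite, zero-padded key whose lexicographic (byte) order follows the project/subject/session/scan hierarchy, so related rows sort together and tend to stay collocated. Field widths and the separator are illustrative, not the paper's scheme.

```python
# Sketch of a hierarchical composite row key for a wide-column store such as HBase.
def make_row_key(project_id, subject_id, session_id, scan_id):
    # Zero-padded, fixed-width fields keep byte order aligned with the hierarchy.
    return "{:06d}|{:06d}|{:04d}|{:04d}".format(
        project_id, subject_id, session_id, scan_id
    ).encode("ascii")

# All scans of subject 17 in project 3 share the prefix b"000003|000017|",
# so a prefix scan retrieves an entire subject without touching unrelated regions.
key = make_row_key(project_id=3, subject_id=17, session_id=2, scan_id=41)
# key == b"000003|000017|0002|0041"
```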
Using Cloud-based Storage Technologies for Earth Science Data
NASA Astrophysics Data System (ADS)
Michaelis, A.; Readey, J.; Votava, P.
2016-12-01
Cloud-based infrastructure may offer several key benefits, including scalability, built-in redundancy, and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided through all the leading public (Amazon Web Services, Microsoft Azure, Google Cloud, etc.) and private (OpenStack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend Earth science data. The system described is not only cost-effective but also shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API-compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
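As a minimal sketch of serving array data from object storage rather than a file system, the example below fetches an HDF5 object from S3 with boto3 and opens it from an in-memory buffer with h5py (which accepts file-like objects in recent versions). The bucket, key, and dataset names are hypothetical, and this is not the HDF5/NetCDF4-compatible client library the abstract describes.

```python
# Plain boto3 + h5py read from object storage (illustrative, not the paper's library).
import io

import boto3
import h5py  # h5py >= 2.9 can open Python file-like objects

def read_dataset_from_s3(bucket, key, dataset):
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    with h5py.File(io.BytesIO(body), "r") as f:
        return f[dataset][...]          # load the whole dataset into memory

# values = read_dataset_from_s3("my-earth-science-bucket", "granules/t2m_2016.h5", "/t2m")
```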
28 CFR 16.91 - Exemption of Criminal Division Systems-limited access, as indicated.
Code of Federal Regulations, 2013 CFR
2013-07-01
...: (1) Central Criminal Division, Index File and Associated Records System of Records (JUSTICE/CRM-001... Security File System of Records(JUSTICE/CRM-002). These exemptions apply to the extent that information in.... The records in these systems contain the names of the subjects of the files in question and the system...
28 CFR 16.91 - Exemption of Criminal Division Systems-limited access, as indicated.
Code of Federal Regulations, 2011 CFR
2011-07-01
...: (1) Central Criminal Division, Index File and Associated Records System of Records (JUSTICE/CRM-001... Security File System of Records(JUSTICE/CRM-002). These exemptions apply to the extent that information in.... The records in these systems contain the names of the subjects of the files in question and the system...
28 CFR 16.91 - Exemption of Criminal Division Systems-limited access, as indicated.
Code of Federal Regulations, 2012 CFR
2012-07-01
...: (1) Central Criminal Division, Index File and Associated Records System of Records (JUSTICE/CRM-001... Security File System of Records(JUSTICE/CRM-002). These exemptions apply to the extent that information in.... The records in these systems contain the names of the subjects of the files in question and the system...
28 CFR 16.91 - Exemption of Criminal Division Systems-limited access, as indicated.
Code of Federal Regulations, 2014 CFR
2014-07-01
...: (1) Central Criminal Division, Index File and Associated Records System of Records (JUSTICE/CRM-001... Security File System of Records(JUSTICE/CRM-002). These exemptions apply to the extent that information in.... The records in these systems contain the names of the subjects of the files in question and the system...
Electronic hand-drafting and picture management system.
Yang, Tsung-Han; Ku, Cheng-Yuan; Yen, David C; Hsieh, Wen-Huai
2012-08-01
The Department of Health of the Executive Yuan in Taiwan (R.O.C.) is implementing a five-stage project entitled Electronic Medical Record (EMR), which converts all health records from written to electronic form. Traditionally, physicians record patients' symptoms, related examinations, and suggested treatments on paper medical records. Currently, when implementing the EMR, all text files and image files in the Hospital Information System (HIS) and Picture Archiving and Communication Systems (PACS) are kept separate. The current medical system environment cannot combine text files, hand-drafted files, and photographs in the same system, making it difficult to support physicians in recording medical data. Furthermore, in surgical and other related departments, physicians need immediate access to medical records in order to understand the details of a patient's condition. To address these problems, the Department of Health has implemented an EMR project with the primary goal of building an electronic hand-drafting and picture management system (HDP system) that medical personnel can use to record medical information conveniently. This system can simultaneously edit text files, hand-drafted files, and image files and then integrate these data into Portable Document Format (PDF) files. In addition, the output is designed to fit a variety of formats in order to meet various laws and regulations. By combining the HDP system with HIS and PACS, its applicability can be extended to various scenarios, assisting the medical industry in moving into the final phase of the EMR.
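The integration-into-PDF step can be illustrated with a small composition example. This is not the HDP system itself; it is a hedged sketch that places typed notes and an image (such as a scanned hand drawing) on one PDF page with the ReportLab library. File names and layout coordinates are hypothetical.

```python
# Minimal sketch of merging typed notes and a hand-drafted image into one PDF page.
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

def compose_record_pdf(pdf_path, note_lines, sketch_image_path):
    width, height = A4
    c = canvas.Canvas(pdf_path, pagesize=A4)
    y = height - 50
    for line in note_lines:                      # typed clinical notes
        c.drawString(50, y, line)
        y -= 14
    # place the hand-drafted sketch below the notes
    c.drawImage(sketch_image_path, 50, y - 310, width=300, height=300)
    c.showPage()
    c.save()

# compose_record_pdf("record_0001.pdf",
#                    ["Patient reports knee pain.", "Plan: X-ray, NSAIDs."],
#                    "knee_sketch.png")
```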
The distributed production system of the SuperB project: description and results
NASA Astrophysics Data System (ADS)
Brown, D.; Corvo, M.; Di Simone, A.; Fella, A.; Luppi, E.; Paoloni, E.; Stroili, R.; Tomassetti, L.
2011-12-01
The SuperB experiment needs large samples of Monte Carlo simulated events in order to finalize the detector design and to estimate the data analysis performance. The requirements are beyond the capabilities of a single computing farm, so a distributed production model capable of exploiting the existing HEP worldwide distributed computing infrastructure is needed. In this paper we describe the set of tools that have been developed to manage the production of the required simulated events. The production of events follows three main phases: distribution of input data files to the remote site Storage Elements (SE); job submission, via the SuperB GANGA interface, to all available remote sites; and transfer of output files to the CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling, and bookkeeping. A replication mechanism allows storing the job output on the local site SE. Results from the 2010 official productions are reported.
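The consistency-checking and bookkeeping step mentioned above can be pictured with a toy example. This is not the SuperB production tooling; it is a hedged sketch that checksums each produced output file and records it in a JSON bookkeeping file, so a later transfer to the central repository can be verified against the record. The output file pattern and paths are illustrative.

```python
# Toy sketch of output bookkeeping: checksum produced files and record them.
import hashlib
import json
from pathlib import Path

def record_outputs(output_dir, bookkeeping_path):
    entries = []
    for path in sorted(Path(output_dir).glob("*.root")):   # file pattern is illustrative
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        entries.append({"file": path.name, "size": path.stat().st_size, "md5": digest})
    Path(bookkeeping_path).write_text(json.dumps(entries, indent=2))
    return entries

# record_outputs("job_1234/output", "job_1234/bookkeeping.json")
```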
NASA Technical Reports Server (NTRS)
Banks, David C.
1994-01-01
This talk features two simple and useful tools for digital image processing in the UNIX environment: xv and pbmplus. The xv image viewer, which runs under the X Window System, reads images in a number of different file formats and writes them out in different formats. The view area supports a pop-up control panel. The 'algorithms' menu lets you blur an image. The xv control panel also activates the color editor, which displays the image's color map (if one exists). The xv image viewer is available through the internet. The pbmplus package is a set of tools designed to perform image processing from within a UNIX shell. The acronym 'pbm' stands for portable bitmap. Like xv, the pbmplus tools can convert images from and to many different file formats. The source code and manual pages for pbmplus are also available through the internet. This software is in the public domain.
Development and application of CATIA-GDML geometry builder
NASA Astrophysics Data System (ADS)
Belogurov, S.; Berchun, Yu; Chernogorov, A.; Malzacher, P.; Ovcharenko, E.; Schetinin, V.
2014-06-01
Due to the conceptual difference between geometry descriptions in Computer-Aided Design (CAD) systems and particle transport Monte Carlo (MC) codes, direct conversion of detector geometry in either direction is not feasible. The paper presents an update on the functionality and application practice of the CATIA-GDML geometry builder, first introduced at CHEP2010. This set of CATIAv5 tools has been developed for building an MC-optimized GEANT4/ROOT-compatible geometry based on an existing CAD model. The model can be exported via the Geometry Description Markup Language (GDML). The builder also allows the import and visualization of GEANT4/ROOT geometries in CATIA. The structure of a GDML file, including replicated volumes, volume assemblies, and variables, is mapped into a part specification tree. A dedicated file template, a wide range of primitives, tools for measurement and implicit calculation of parameters, different types of multiple volume instantiation, mirroring, positioning, and quality checks have been implemented. Several use cases are discussed.
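The GDML structure that gets mapped into the part specification tree is plain XML, so it can be inspected with standard tools. The sketch below, which is not part of the CATIA-GDML builder, lists the logical volumes of a GDML file together with their solid and material references, assuming the usual un-namespaced GDML layout (a <structure> element containing <volume> elements with solidref, materialref, and physvol children). The file name is hypothetical.

```python
# Small sketch listing logical volumes in a GDML file (illustrative only).
import xml.etree.ElementTree as ET

def list_volumes(gdml_path):
    root = ET.parse(gdml_path).getroot()
    volumes = []
    for vol in root.findall("./structure/volume"):
        solid = vol.find("solidref")
        material = vol.find("materialref")
        volumes.append({
            "name": vol.get("name"),
            "solid": solid.get("ref") if solid is not None else None,
            "material": material.get("ref") if material is not None else None,
            "daughters": len(vol.findall("physvol")),
        })
    return volumes

# for v in list_volumes("detector.gdml"):
#     print(v["name"], v["solid"], v["material"], v["daughters"])
```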
Medical image informatics infrastructure design and applications.
Huang, H K; Wong, S T; Pietka, E
1997-01-01
Picture archiving and communication systems (PACS) are an integration of multimodality images and health information systems designed to improve the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous repository of images and related data files. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including the PACS database), image processing, data/knowledge base management, visualization, graphical user interface, communication networking, and application-oriented software. This paper describes these components and their logical connections, and illustrates some applications based on the concept of the MIII.
48 CFR 4.802 - Contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Contract files. 4.802 Section 4.802 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE... locator system should be established to ensure the ability to locate promptly any contract files. (e...
48 CFR 4.802 - Contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Contract files. 4.802 Section 4.802 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE... locator system should be established to ensure the ability to locate promptly any contract files. (e...
48 CFR 4.802 - Contract files.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Contract files. 4.802 Section 4.802 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE... locator system should be established to ensure the ability to locate promptly any contract files. (e...
48 CFR 4.802 - Contract files.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Contract files. 4.802 Section 4.802 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL ADMINISTRATIVE... locator system should be established to ensure the ability to locate promptly any contract files. (e...
Özkocak, I; Taşkan, M M; Göktürk, H; Aytac, F; Karaarslan, E Şirin
2015-01-01
The aim of this study is to evaluate increases in temperature on the external root surface during endodontic treatment with different rotary systems. Fifty human mandibular incisors with a single root canal were selected. All root canals were instrumented using a size 20 Hedstrom file, and the canals were irrigated with 5% sodium hypochlorite solution. The samples were randomly divided into the following three groups of 15 teeth: Group 1: the OneShape endodontic file no. 25; Group 2: the Reciproc endodontic file no. 25; Group 3: the WaveOne endodontic file no. 25. During preparation, the temperature changes were measured in the middle third of the roots using a noncontact infrared thermometer. The temperature data were transferred from the thermometer to the computer and were observed graphically. Statistical analysis was performed using the Kruskal-Wallis analysis of variance at a significance level of 0.05. The increases in temperature caused by the OneShape file system were lower than those of the other files (P < 0.05). The WaveOne file showed the highest temperature increases. However, there were no significant differences between the Reciproc and WaveOne files. The single-file rotary systems used in this study may be recommended for clinical use.
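For readers unfamiliar with the statistical test named above, the sketch below shows how a Kruskal-Wallis comparison of three instrument groups at a 0.05 significance level can be run with SciPy. The temperature-rise values are made up for illustration and are not the study's data.

```python
# Illustration of a Kruskal-Wallis comparison of three groups (made-up values).
from scipy import stats

oneshape = [2.1, 2.4, 1.9, 2.2, 2.0]   # temperature rise in degrees Celsius (fabricated)
reciproc = [3.0, 3.4, 2.8, 3.1, 3.3]
waveone  = [3.2, 3.6, 3.1, 3.5, 3.4]

statistic, p_value = stats.kruskal(oneshape, reciproc, waveone)
if p_value < 0.05:
    print(f"Groups differ (H = {statistic:.2f}, p = {p_value:.4f})")
else:
    print(f"No significant difference (p = {p_value:.4f})")
```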
Lessons Learned in Deploying the World's Largest Scale Lustre File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Wang, Feiyi
2010-01-01
The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed that of our previously deployed systems by a factor of 6x (240 GB/sec) and 17x (10 petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, alongside our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-02
... proceeding were required to file system security statements under the Commission's rules. (Security systems..., including broadband Internet access and interconnected VoIP providers, must file updates to their systems... Commission's rules, the information in the CALEA security system filings and petitions will not be made...
Accessing and distributing EMBL data using CORBA (common object request broker architecture)
Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip
2000-01-01
Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by PersistenceTM, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259
Evaluation of a new filing system's ability to maintain canal morphology.
Thompson, Matthew; Sidow, Stephanie J; Lindsey, Kimberly; Chuang, Augustine; McPherson, James C
2014-06-01
The manufacturer of the Hyflex CM endodontic files claims the files remain centered within the canal, and if unwound during treatment, they will regain their original shape after sterilization. The purpose of this study was to evaluate and compare the canal centering ability of the Hyflex CM and the ProFile ISO filing systems after repeated uses in simulated canals, followed by autoclaving. Sixty acrylic blocks with a canal curvature of 45° were stained with methylene blue, photographed, and divided into 2 groups, H (Hyflex CM) and P (ProFile ISO). The groups were further subdivided into 3 subgroups: H1, H2, H3; P1, P2, P3 (n = 10). Groups H1 and P1 were instrumented to 40 (.04) with the respective file system. Used files were autoclaved for 26 minutes at 126°C. After sterilization, the files were used to instrument groups H2 and P2. The same sterilization and instrumentation procedure was repeated for groups H3 and P3. Post-instrumentation digital images were taken and superimposed over the pre-instrumentation images. Changes in the location of the center of the canal at predetermined reference points were recorded and compared within subgroups and between filing systems. Statistical differences in intergroup and intragroup transportation measures were analyzed by using the Kruskal-Wallis analysis of variance of ranks with the Bonferroni post hoc test. There was a difference between Hyflex CM and ProFile ISO groups, although it was not statistically significant. Intragroup differences for both Hyflex CM and ProFile ISO groups were not significant (P < .05). The Hyflex CM and ProFile ISO files equally maintained the original canal's morphology after 2 sterilization cycles. Published by Elsevier Inc.
Planetary image conversion task
NASA Technical Reports Server (NTRS)
Martin, M. D.; Stanley, C. L.; Laughlin, G.
1985-01-01
The Planetary Image Conversion Task group processed 12,500 magnetic tapes containing raw imaging data from JPL planetary missions and produced an image data base in consistent format on 1200 fully packed 6250-bpi tapes. The output tapes will remain at JPL. A copy of the entire tape set was delivered to the US Geological Survey, Flagstaff, Ariz. A secondary task converted computer data logs, which had been stored in project-specific MARK IV File Management System data types and structures, to flat-file, text format that is processable on any modern computer system. The conversion processing took place at JPL's Image Processing Laboratory on an IBM 370-158 with existing software modified slightly to meet the needs of the conversion task. More than 99% of the original digital image data was successfully recovered by the conversion task. However, processing data tapes recorded before 1975 was destructive. This discovery is of critical importance to facilities responsible for maintaining digital archives, since normal periodic random sampling techniques would be unlikely to detect this phenomenon, and entire data sets could be wiped out in the act of generating seemingly positive sampling results. Recommended follow-on activities are also included.
Code of Federal Regulations, 2013 CFR
2013-10-01
... exempt from the access provisions of subsection (d). (c) Personnel Background Investigation File System (DHS/TSA 004). The Personnel Background Investigation File System (PBIFS) (DHS/TSA 004) enables TSA to... Traveler Operations Files (DHS/TSA 015). The purpose of this system is to pre-screen and positively...
Code of Federal Regulations, 2014 CFR
2014-10-01
... exempt from the access provisions of subsection (d). (c) Personnel Background Investigation File System (DHS/TSA 004). The Personnel Background Investigation File System (PBIFS) (DHS/TSA 004) enables TSA to... Traveler Operations Files (DHS/TSA 015). The purpose of this system is to pre-screen and positively...
Code of Federal Regulations, 2011 CFR
2011-10-01
... exempt from the access provisions of subsection (d). (c) Personnel Background Investigation File System (DHS/TSA 004). The Personnel Background Investigation File System (PBIFS) (DHS/TSA 004) enables TSA to... Traveler Operations Files (DHS/TSA 015). The purpose of this system is to pre-screen and positively...
Code of Federal Regulations, 2012 CFR
2012-10-01
... exempt from the access provisions of subsection (d). (c) Personnel Background Investigation File System (DHS/TSA 004). The Personnel Background Investigation File System (PBIFS) (DHS/TSA 004) enables TSA to... Traveler Operations Files (DHS/TSA 015). The purpose of this system is to pre-screen and positively...
48 CFR 904.803 - Contents of contract files.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Contents of contract files. 904.803 Section 904.803 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL ADMINISTRATIVE MATTERS Government Contract Files 904.803 Contents of contract files. (a) (29) The record copy of the Individual Acquisition Action Report...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-22
... Immigration Services, Immigration and Customs Enforcement, Customs and Border Protection--001 Alien File... Alien File, Index, and National File Tracking System of Records'' from certain provisions of the Privacy... administrative enforcement requirements. The system of records is the DHS/USCIS-ICE-CBP-001 Alien File, Index...