Bent, John M.; Faibish, Sorin; Grider, Gary
2016-04-19
Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
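The conversion step described above can be pictured with a small sketch. The following Python fragment is illustrative only, not the patented PLFS middleware: it uploads each per-process checkpoint file as one object to an S3-compatible store via boto3; the bucket name, key scheme, and directory path are assumptions.

```python
# Minimal sketch (not the patented PLFS implementation): take per-process
# checkpoint files and upload each as an object to an S3-compatible store.
import os
import boto3

def checkpoint_files_to_objects(checkpoint_dir, bucket, prefix="checkpoints"):
    """Convert a directory of checkpoint files into cloud objects."""
    s3 = boto3.client("s3")  # assumes credentials are configured in the environment
    for name in sorted(os.listdir(checkpoint_dir)):
        path = os.path.join(checkpoint_dir, name)
        if not os.path.isfile(path):
            continue
        key = f"{prefix}/{name}"  # one object per checkpoint file (assumed key scheme)
        with open(path, "rb") as f:
            s3.put_object(Bucket=bucket, Key=key, Body=f.read())

# Example (illustrative paths):
# checkpoint_files_to_objects("/mnt/burst_buffer/ckpt.0001", "hpc-archive")
```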
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-06-30
Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
Small file aggregation in a parallel computing system
Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang
2014-09-02
Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
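The offset/length metadata described here is simple to illustrate. Below is a minimal Python sketch, not the patented implementation: it packs files into one aggregated file, records each file's offset and length in a JSON side file (an assumed metadata format), and unpacks a file by name.

```python
# Minimal sketch of the aggregation idea: pack many small files into one
# aggregated file and record each file's offset and length for later unpacking.
import json
import os

def aggregate(file_paths, aggregated_path, metadata_path):
    metadata = {}
    offset = 0
    with open(aggregated_path, "wb") as out:
        for path in file_paths:
            with open(path, "rb") as f:
                data = f.read()
            out.write(data)
            metadata[os.path.basename(path)] = {"offset": offset, "length": len(data)}
            offset += len(data)
    with open(metadata_path, "w") as m:
        json.dump(metadata, m)

def unpack(aggregated_path, metadata_path, name):
    # Use the recorded offset and length to pull one file back out.
    with open(metadata_path) as m:
        entry = json.load(m)[name]
    with open(aggregated_path, "rb") as agg:
        agg.seek(entry["offset"])
        return agg.read(entry["length"])
```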
Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron
2015-10-20
Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
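As a rough illustration of what "different resolutions" could mean in practice, the following Python sketch (an assumption, not the patented method) derives replicas by reducing the bits per value and by keeping a sub-set of data elements from a NumPy array.

```python
# Minimal sketch of one possible reading of "different resolutions": store the
# full-resolution data plus replicas with fewer bits per value or fewer elements.
import numpy as np

def make_replicas(data: np.ndarray):
    return {
        "full": data,                                # original resolution
        "half_precision": data.astype(np.float16),   # fewer bits per value
        "every_4th": data[::4],                      # a sub-set of data elements
    }

# Example (illustrative path):
# samples = np.random.rand(1_000_000)
# for name, rep in make_replicas(samples).items():
#     np.save(f"/scratch/replica_{name}.npy", rep)
```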
Storing files in a parallel computing system based on user-specified parser function
Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron
2014-10-21
Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
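A user-supplied parser of the kind described can be sketched as a callable that decides whether a file satisfies a semantic requirement and returns any extracted metadata. The Python fragment below is illustrative only; the parser signature and the metadata side-file format are assumptions.

```python
# Minimal sketch: the application supplies a parser callable; files that fail
# the parser's semantic check are skipped, and extracted metadata is stored
# alongside accepted files.
import json
import os
import shutil

def store_with_parser(file_paths, storage_dir, parser):
    os.makedirs(storage_dir, exist_ok=True)
    for path in file_paths:
        keep, metadata = parser(path)   # parser returns (bool, dict)
        if not keep:
            continue
        dest = os.path.join(storage_dir, os.path.basename(path))
        shutil.copy(path, dest)
        with open(dest + ".meta.json", "w") as m:
            json.dump(metadata, m)

# Example parser: keep only non-empty files and record their size.
def example_parser(path):
    size = os.path.getsize(path)
    return size > 0, {"size_bytes": size}
```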
Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron
2015-02-03
Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.
Permanent-File-Validation Utility Computer Program
NASA Technical Reports Server (NTRS)
Derry, Stephen D.
1988-01-01
Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.
Parallel file system with metadata distributed across partitioned key-value store
Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron
2017-09-19
Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
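The partitioned metadata store can be pictured as a key-value map split across nodes by key range. The Python sketch below is an illustration of that idea, not MDHIM or PLFS code; the 1 MiB range-partition policy and the single-process dictionaries stand in for per-node stores that would communicate over MPI.

```python
# Minimal sketch of partitioning write metadata (logical offset -> physical
# location) across partitions by key range.
class PartitionedMetadataStore:
    def __init__(self, num_partitions):
        self.partitions = [dict() for _ in range(num_partitions)]

    def _partition_for(self, logical_offset, range_size=1 << 20):
        # Range-partition by 1 MiB of logical file offset (assumed policy).
        return (logical_offset // range_size) % len(self.partitions)

    def put(self, logical_offset, length, physical_location):
        p = self._partition_for(logical_offset)
        self.partitions[p][logical_offset] = (length, physical_location)

    def get(self, logical_offset):
        p = self._partition_for(logical_offset)
        return self.partitions[p].get(logical_offset)

# Example (illustrative locations):
# store = PartitionedMetadataStore(num_partitions=4)
# store.put(0, 65536, "oss1:/plfs/host0/data.0")
# store.put(2 << 20, 65536, "oss2:/plfs/host1/data.3")
```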
Storing files in a parallel computing system using list-based index to identify replica files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faibish, Sorin; Bent, John M.; Tzelnic, Percy
Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.
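A list-based index of this sort can be sketched as a list of storage locations plus a checksum used to validate whichever copy is read. The Python fragment below is illustrative, not the patented structure; SHA-256 is an assumed checksum choice.

```python
# Minimal sketch: an index entry holds a list of locations (primary plus
# replicas) and a checksum used to validate the copy that is read.
import hashlib

def checksum(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def build_index(primary_path, replica_paths):
    return {"locations": [primary_path] + list(replica_paths),
            "sha256": checksum(primary_path)}

def read_validated(index):
    # Try each location in the list; return the first copy whose checksum matches.
    for path in index["locations"]:
        if checksum(path) == index["sha256"]:
            with open(path, "rb") as f:
                return f.read()
    raise IOError("no valid copy found")
```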
Storing files in a parallel computing system based on user or application specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.
2016-03-29
Techniques are provided for storing files in a parallel computing system based on a user-specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.
Data Storage and Transfer | High-Performance Computing | NREL
WinSCP for Windows File Transfers: use to transfer files from a local computer to a remote computer. Robinhood for File Management: use this tool to manage your data files on Peregrine.
RAMA: A file system for massively parallel computers
NASA Technical Reports Server (NTRS)
Miller, Ethan L.; Katz, Randy H.
1993-01-01
This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.
Please Move Inactive Files Off the /projects File System | High-Performance Computing | NREL
January 11, 2018
The /projects file system is a shared resource. This year this has created a space crunch - the file system is now about 90% full and we need your help
Cooperative storage of shared files in a parallel computing system with dynamic block size
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-11-10
Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
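The quoted block-size rule (total data divided by the number of parallel processes) is easy to make concrete. The sketch below is illustrative only; it computes each rank's block offset and length, leaving the actual data exchange and write to an MPI implementation.

```python
# Minimal sketch of the dynamically determined block size described above.
def plan_blocks(total_bytes, num_processes):
    # Block size = total amount of data divided by the number of parallel processes.
    block_size = total_bytes // num_processes
    remainder = total_bytes % num_processes
    plan = []
    for rank in range(num_processes):
        # The last rank absorbs any remainder so the whole object is covered.
        length = block_size + (remainder if rank == num_processes - 1 else 0)
        plan.append({"rank": rank, "offset": rank * block_size, "length": length})
    return plan

# Example: 10 GiB written by 64 processes -> 64 contiguous blocks of 160 MiB.
# print(plan_blocks(10 * 2**30, 64)[:2])
```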
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.
1979-07-01
User input data requirements are presented for certain special processors in a nuclear reactor computation system. These processors generally read data in formatted form and generate binary interface data files. Some data processing is done to convert from the user oriented form to the interface file forms. The VENTURE diffusion theory neutronics code and other computation modules in this system use the interface data files which are generated.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-02
...: Paper records are stored in file folders, binders, computer files (eLaw) and computer disks. Electronic records, including computer files, are stored on the Commission's network and other electronic media as... physical security measures. Technical security measures within CFTC include restrictions on computer access...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orrell, S.; Ralstin, S.
1992-04-01
Many computer security plans specify that only a small percentage of the data processed will be classified. Thus, the bulk of the data on secure systems must be unclassified. Secure limited access sites operating approved classified computing systems sometimes also have a system ostensibly containing only unclassified files but operating within the secure environment. That system could be networked or otherwise connected to a classified system(s) in order that both be able to use common resources for file storage or computing power. Such a system must operate under the same rules as the secure classified systems. It is in the nature of unclassified files that they either came from, or will eventually migrate to, a non-secure system. Today, unclassified files are exported from systems within the secure environment typically by loading transport media and carrying them to an open system. Import of unclassified files is handled similarly. This media transport process, sometimes referred to as sneaker net, often is manually logged and controlled only by administrative procedures. A comprehensive system for secure bi-directional transfer of unclassified files between secure and open environments has yet to be developed. Any such secure file transport system should be required to meet several stringent criteria. It is the purpose of this document to begin a definition of these criteria.
An information retrieval system for research file data
Joan E. Lengel; John W. Koning
1978-01-01
Research file data have been successfully retrieved at the Forest Products Laboratory through a high-speed cross-referencing system involving the computer program FAMULUS as modified by the Madison Academic Computing Center at the University of Wisconsin. The method of data input, transfer to computer storage, system utilization, and effectiveness are discussed....
An Ephemeral Burst-Buffer File System for Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Teng; Moody, Adam; Yu, Weikuan
BurstFS is a distributed file system for node-local burst buffers on high performance computing systems. BurstFS presents a shared file system space across the burst buffers so that applications that use shared files can access the highly-scalable burst buffers without changing their applications.
GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System
James Menart
2013-06-07
This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.
Dynamic Non-Hierarchical File Systems for Exascale Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Darrell E.; Miller, Ethan L
This constitutes the final report for “Dynamic Non-Hierarchical File Systems for Exascale Storage”. The ultimate goal of this project was to improve data management in scientific computing and high-end computing (HEC) applications, and to achieve this goal we proposed: to develop the first, HEC-targeted, file system featuring rich metadata and provenance collection, extreme scalability, and future storage hardware integration as core design goals, and to evaluate and develop a flexible non-hierarchical file system interface suitable for providing more powerful and intuitive data management interfaces to HEC and scientific computing users. Data management is swiftly becoming a serious problem in the scientific community – while copious amounts of data are good for obtaining results, finding the right data is often daunting and sometimes impossible. Scientists participating in a Department of Energy workshop noted that most of their time was spent “...finding, processing, organizing, and moving data and it’s going to get much worse”. Scientists should not be forced to become data mining experts in order to retrieve the data they want, nor should they be expected to remember the naming convention they used several years ago for a set of experiments they now wish to revisit. Ideally, locating the data you need would be as easy as browsing the web. Unfortunately, existing data management approaches are usually based on hierarchical naming, a 40 year-old technology designed to manage thousands of files, not exabytes of data. Today’s systems do not take advantage of the rich array of metadata that current high-end computing (HEC) file systems can gather, including content-based metadata and provenance information. As a result, current metadata search approaches are typically ad hoc and often work by providing a parallel management system to the “main” file system, as is done in Linux (the locate utility), personal computers, and enterprise search appliances. These search applications are often optimized for a single file system, making it difficult to move files and their metadata between file systems. Users have tried to solve this problem in several ways, including the use of separate databases to index file properties, the encoding of file properties into file names, and separately gathering and managing provenance data, but none of these approaches has worked well, either due to limited usefulness or scalability, or both. Our research addressed several key issues: High-performance, real-time metadata harvesting: extracting important attributes from files dynamically and immediately updating indexes used to improve search; Transparent, automatic, and secure provenance capture: recording the data inputs and processing steps used in the production of each file in the system; Scalable indexing: indexes that are optimized for integration with the file system; Dynamic file system structure: our approach provides dynamic directories similar to those in semantic file systems, but these are the native organization rather than a feature grafted onto a conventional system. In addition to these goals, our research effort will include evaluating the impact of new storage technologies on the file system design and performance. In particular, the indexing and metadata harvesting functions can potentially benefit from the performance improvements promised by new storage class memories.
Adding Data Management Services to Parallel File Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Scott
2015-03-04
The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file-based ecosystem; (3) common optimizations, e.g., indexing and caching, are readily supported across several file formats, avoiding effort duplication; and (4) performance improves significantly, as data processing is integrated more tightly with data storage. Our key contributions are: SciHadoop, which explores changes to MapReduce's assumptions by taking advantage of the semantics of structured data while preserving MapReduce’s failure and resource management; DataMods, which extends common abstractions of parallel file systems so they become programmable, such that they can be extended to natively support a variety of data models and can be hooked into emerging distributed runtimes such as Stanford’s Legion; and Miso, which combines Hadoop and relational data warehousing to minimize time to insight, taking into account the overhead of ingesting data into the data warehouse.
A secure file manager for UNIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVries, R.G.
1990-12-31
The development of a secure file management system for a UNIX-based computer facility with supercomputers and workstations is described. Specifically, UNIX in its usual form does not address: (1) Operation which would satisfy rigorous security requirements. (2) Online space management in an environment where total data demands would be many times the actual online capacity. (3) Making the file management system part of a computer network in which users of any computer in the local network could retrieve data generated on any other computer in the network. The characteristics of UNIX can be exploited to develop a portable, secure file manager which would operate on computer systems ranging from workstations to supercomputers. Implementation considerations making unusual use of UNIX features, rather than requiring extensive internal system changes, are described, and implementation using the Cray Research Inc. UNICOS operating system is outlined.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-12
...., desktop, laptop, handheld or other computer types) containing protected personal identifiers or PHI is... as the National Indian Women's Resource Center, to conduct analytical and evaluation studies. 8... SYSTEM: STORAGE: File folders, ledgers, card files, microfiche, microfilm, computer tapes, disk packs...
Ackermann, Hans D.; Pankratz, Leroy W.; Dansereau, Danny A.
1983-01-01
The computer programs published in Open-File Report 82-1065, A comprehensive system for interpreting seismic-refraction arrival-time data using interactive computer methods (Ackermann, Pankratz, and Dansereau, 1982), have been modified to run on a mini-computer. The new version uses approximately 1/10 of the memory of the initial version, is more efficient and gives the same results.
A Patient Record-Filing System for Family Practice
Levitt, Cheryl
1988-01-01
The efficient storage and easy retrieval of quality records are a central concern of good family practice. Many physicians starting out in practice have difficulty choosing a practical and lasting system for storing their records. Some who have established practices are installing computers in their offices and finding that their filing systems are worn, outdated, and incompatible with computerized systems. This article describes a new filing system installed simultaneously with a new computer system in a family-practice teaching centre. The approach adopted solved all identifiable problems and is applicable in family practices of all sizes.
Lessons Learned in Deploying the World's Largest Scale Lustre File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Wang, Feiyi
2010-01-01
The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed those of our previously deployed systems by factors of 6x (240 GB/sec) and 17x (10 petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, alongside our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.
Sharing digital micrographs and other data files between computers.
Entwistle, A
2004-01-01
It ought to be easy to exchange digital micrographs and other computer data files with a colleague, even one on another continent. In practice, this often is not the case. The advantages and disadvantages of various methods that are available for exchanging data files between computers are discussed. When possible, data should be transferred through computer networking. When data are to be exchanged locally between computers with similar operating systems, the use of a local area network is recommended. For computers in commercial or academic environments that have dissimilar operating systems or are more widely spaced, the use of FTP is recommended. Failing this, posting the data on a website and transferring by hypertext transfer protocol is suggested. If peer-to-peer exchange between computers in domestic environments is needed, the use of Messenger services such as Microsoft Messenger or Yahoo Messenger is the method of choice. When it is not possible to transfer the data files over the internet, single-use writable CD ROMs are the best media for transferring data. If for some reason this is not possible, DVD-R/RW, DVD+R/RW, 100 MB ZIP disks and USB flash media are potentially useful media for exchanging data files.
NASA Technical Reports Server (NTRS)
Bingle, Bradford D.; Shea, Anne L.; Hofler, Alicia S.
1993-01-01
Transferable Output ASCII Data (TOAD) computer program (LAR-13755) implements format designed to facilitate transfer of data across communication networks and dissimilar host computer systems. Any data file conforming to TOAD format standard called TOAD file. TOAD Editor is interactive software tool for manipulating contents of TOAD files. Commonly used to extract filtered subsets of data for visualization of results of computation. Also offers such user-oriented features as on-line help, clear English error messages, startup file, macroinstructions defined by user, command history, user variables, UNDO features, and full complement of mathematical, statistical, and conversion functions. Companion program, TOAD Gateway (LAR-14484), converts data files from variety of other file formats to that of TOAD. TOAD Editor written in FORTRAN 77.
ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog
NASA Technical Reports Server (NTRS)
Gray, F. P., Jr. (Editor)
1979-01-01
A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.
Long wavelength propagation capacity, version 1.1 (computer diskette)
NASA Astrophysics Data System (ADS)
1994-05-01
File Characteristics: software and data file. (72 files); ASCII character set. Physical Description: 2 computer diskettes; 3 1/2 in.; high density; 1.44 MB. System Requirements: PC compatible; Digital Equipment Corp. VMS; PKZIP (included on diskette). This report describes a revision of the Naval Command, Control and Ocean Surveillance Center RDT&E Division's Long Wavelength Propagation Capability (LWPC). The first version of this capability was a collection of separate FORTRAN programs linked together in operation by a command procedure written in an operating system unique to the Digital Equipment Corporation (Ferguson & Snyder, 1989a, b). A FORTRAN computer program named Long Wavelength Propagation Model (LWPM) was developed to replace the VMS control system (Ferguson & Snyder, 1990; Ferguson, 1990). This was designated version 1 (LWPC-1). This program implemented all the features of the original VMS plus a number of auxiliary programs that provided summaries of the files and graphical displays of the output files. This report describes a revision of the LWPC, designated version 1.1 (LWPC-1.1)
Code of Federal Regulations, 2014 CFR
2014-01-01
... Census Bureau's Foreign Trade Division Computer Security Officer and refrain from using AESDirect until... Bureau's Foreign Trade Division Computer Security Officer that the company's computer systems accessing... threat to national security interests such that its participation in postdeparture filing should be...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Census Bureau's Foreign Trade Division Computer Security Officer and refrain from using AESDirect until... Bureau's Foreign Trade Division Computer Security Officer that the company's computer systems accessing... threat to national security interests such that its participation in postdeparture filing should be...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Census Bureau's Foreign Trade Division Computer Security Officer and refrain from using AESDirect until... Bureau's Foreign Trade Division Computer Security Officer that the company's computer systems accessing... threat to national security interests such that its participation in postdeparture filing should be...
Natural Resource Information System. Volume 2: System operating procedures and instructions
NASA Technical Reports Server (NTRS)
1972-01-01
A total computer software system description is provided for the prototype Natural Resource Information System designed to store, process, and display data of maximum usefulness to land management decision making. Program modules are described, as are the computer file design, file updating methods, digitizing process, and paper tape conversion to magnetic tape. Operating instructions for the system, data output, printed output, and graphic output are also discussed.
File-System Workload on a Scientific Multiprocessor
NASA Technical Reports Server (NTRS)
Kotz, David; Nieuwejaar, Nils
1995-01-01
Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors and their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or nonparallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.
78 FR 63196 - Privacy Act System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-23
... Technology Center (ITC) staff and contractors, who maintain the FCC's computer network. Other FCC employees... and Offices (B/ Os); 2. Electronic data, records, and files that are stored in the FCC's computer.... Access to the FACA electronic records, files, and data, which are housed in the FCC's computer network...
pcircle - A Suite of Scalable Parallel File System Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
WANG, FEIYI
2015-10-01
Most file system software is written for conventional local file systems; such tools are serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of ubiquitous MPI in a cluster computing environment and a "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.
Dynamic file-access characteristics of a production parallel scientific workload
NASA Technical Reports Server (NTRS)
Kotz, David; Nieuwejaar, Nils
1994-01-01
Multiprocessors have permitted astounding increases in computational performance, but many cannot meet the intense I/O requirements of some scientific applications. An important component of any solution to this I/O bottleneck is a parallel file system that can provide high-bandwidth access to tremendous amounts of data in parallel to hundreds or thousands of processors. Most successful systems are based on a solid understanding of the expected workload, but thus far there have been no comprehensive workload characterizations of multiprocessor file systems. This paper presents the results of a three week tracing study in which all file-related activity on a massively parallel computer was recorded. Our instrumentation differs from previous efforts in that it collects information about every I/O request and about the mix of jobs running in a production environment. We also present the results of a trace-driven caching simulation and recommendations for designers of multiprocessor file systems.
Exploring the use of I/O nodes for computation in a MIMD multiprocessor
NASA Technical Reports Server (NTRS)
Kotz, David; Cai, Ting
1995-01-01
As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between 'compute' and 'I/O' nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O. Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.
The Spider Center Wide File System; From Concept to Reality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shipman, Galen M; Dillow, David A; Oral, H Sarp
2009-01-01
The Leadership Computing Facility (LCF) at Oak Ridge National Laboratory (ORNL) has a diverse portfolio of computational resources ranging from a petascale XT4/XT5 simulation system (Jaguar) to numerous other systems supporting development, visualization, and data analytics. In order to support the vastly different I/O needs of these systems, Spider, a Lustre-based center-wide file system, was designed and deployed to provide over 240 GB/s of aggregate throughput with over 10 petabytes of formatted capacity. A multi-stage InfiniBand network, dubbed the Scalable I/O Network (SION), with over 889 GB/s of bisectional bandwidth was deployed as part of Spider to provide connectivity to our simulation, development, visualization, and other platforms. To our knowledge, while writing this paper, Spider is the largest and fastest POSIX-compliant parallel file system in production. This paper will detail the overall architecture of the Spider system, challenges in deploying and initial testing of a file system of this scale, and novel solutions to these challenges, which offer key insights into file system design in the future.
Zinge, Priyanka Ramdas; Patil, Jayaprakash
2017-01-01
The aim of this study was to evaluate and compare the effect of the OneShape and Neolix rotary single-file systems and the WaveOne and Reciproc reciprocating single-file systems on pericervical dentin (PCD) using cone-beam computed tomography (CBCT). A total of 40 freshly extracted mandibular premolars were collected and divided into two groups, namely, Group A - Rotary: A1 - Neolix and A2 - OneShape, and Group B - Reciprocating: B1 - WaveOne and B2 - Reciproc. Preoperative scans of each were taken, followed by conventional access cavity preparation and working length determination with a 10 K-file. Instrumentation of the canal was done according to the respective file system, and postinstrumentation CBCT scans of the teeth were obtained. 90 μm thick slices were obtained 4 mm apical and coronal to the cementoenamel junction. The PCD thickness was calculated as the shortest distance from the canal outline to the closest adjacent root surface, which was measured on four surfaces, i.e., facial, lingual, mesial, and distal, for all the groups in the two obtained scans. There was no significant difference found between rotary single-file systems and reciprocating single-file systems in their effect on PCD, but in Group B2, there was the most significant loss of tooth structure in the mesial, lingual, and distal surfaces (P < 0.05). The Reciproc single-file system removes more PCD as compared to the other experimental groups, whereas the Neolix single-file system had the least effect on PCD.
ERIC Educational Resources Information Center
Glantz, Richard S.
Until recently, the emphasis in information storage and retrieval systems has been towards batch-processing of large files. In contrast, SHOEBOX is designed for the unformatted, personal file collection of the computer-naive individual. Operating through display terminals in a time-sharing, interactive environment on the IBM 360, the user can…
Information retrieval and display system
NASA Technical Reports Server (NTRS)
Groover, J. L.; King, W. L.
1977-01-01
Versatile command-driven data management system offers users, through simplified command language, a means of storing and searching data files, sorting data files into specified orders, performing simple or complex computations, effecting file updates, and printing or displaying output data. Commands are simple to use and flexible enough to meet most data management requirements.
ERIC Educational Resources Information Center
Ranade, Sanjay; Schraeder, Jeff
1991-01-01
Presents an overview of the mass storage market and discusses mass storage systems as part of computer networks. Systems for personal computers, workstations, minicomputers, and mainframe computers are described; file servers are explained; system integration issues are raised; and future possibilities are suggested. (LRW)
File concepts for parallel I/O
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1989-01-01
The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.
Parallel checksumming of data chunks of a shared data object using a log-structured file system
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-09-06
Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
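The write-time checksum flow can be sketched in a few lines. The Python fragment below is an illustration, not the patented PLFS client: it stores an MD5 digest alongside each chunk (the side-file layout is an assumption) and verifies the digest when the chunk is read back.

```python
# Minimal sketch of the client-side idea: compute a checksum for each data
# chunk when it is written and verify it when the chunk is read.
import hashlib
import json

def write_chunk(data_path, meta_path, chunk: bytes):
    digest = hashlib.md5(chunk).hexdigest()      # checksum computed by the client
    with open(data_path, "wb") as f:
        f.write(chunk)
    with open(meta_path, "w") as m:
        json.dump({"md5": digest, "length": len(chunk)}, m)

def read_chunk(data_path, meta_path) -> bytes:
    with open(meta_path) as m:
        meta = json.load(m)
    with open(data_path, "rb") as f:
        chunk = f.read()
    if hashlib.md5(chunk).hexdigest() != meta["md5"]:
        raise IOError("checksum mismatch: data chunk failed integrity check")
    return chunk
```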
Feasibility of Executing MIMS on Interdata 80.
CDC 6500 computers; CDC 6600 computers; MIMS (Medical Information Management System); file structures; computer storage management.
The report examines the feasibility of implementing a large information management system on mini-computers. The Medical Information Management System and the Interdata 80 mini-computer were selected as being representative systems. The FORTRAN programs currently being used in MIMS
1989-10-01
REVIEW MENU PROGRAM(S): CHAPS. PURPOSE AND OVERVIEW: The Do Review menu allows the user to select which missions to perform detailed analysis on and... input files must be resident on the computer you are running SUPR on. Any interface or file transfer programs must be successfully executed prior to... THE COMPUTER PROGRAM WAS DEVELOPED BY SYSTEMS CONTROL TECHNOLOGY FOR THE DEPUTY CHIEF OF STAFF/OPERATIONS, HQ USAFE. THE USE OF THE COMPUTER PROGRAM IS
The Scalable Checkpoint/Restart Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.
The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
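The node-local caching idea behind SCR can be sketched independently of its C API. The Python fragment below is a conceptual illustration only, not the SCR interface; the RAM-disk and parallel file system paths are assumptions.

```python
# Minimal sketch of node-local checkpoint caching: each process writes its
# checkpoint to node-local storage first; a separate, less frequent flush step
# copies a checkpoint to the shared parallel file system.
import os
import shutil

LOCAL_CACHE = "/dev/shm/ckpt"         # assumed node-local RAM-disk path
PARALLEL_FS = "/lustre/project/ckpt"  # assumed parallel file system path

def write_local_checkpoint(rank, step, payload: bytes):
    step_dir = os.path.join(LOCAL_CACHE, f"step_{step:06d}")
    os.makedirs(step_dir, exist_ok=True)
    with open(os.path.join(step_dir, f"rank_{rank}.ckpt"), "wb") as f:
        f.write(payload)
    return step_dir

def flush_to_parallel_fs(step_dir):
    # Done only occasionally, since the shared file system is the slow path.
    dest = os.path.join(PARALLEL_FS, os.path.basename(step_dir))
    shutil.copytree(step_dir, dest, dirs_exist_ok=True)
```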
Zebra: A striped network file system
NASA Technical Reports Server (NTRS)
Hartman, John H.; Ousterhout, John K.
1992-01-01
The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity update.
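Zebra's per-client striping with parity can be illustrated with a short sketch. The Python fragment below is not the Zebra implementation: it cuts a client's write log into stripe fragments and computes an XOR parity fragment that could rebuild any single lost fragment.

```python
# Minimal sketch of client-log striping with XOR parity.
def make_stripe(log_bytes: bytes, num_data_servers: int, fragment_size: int):
    # Pad the log so it divides evenly into fragments.
    padded = log_bytes.ljust(num_data_servers * fragment_size, b"\0")
    fragments = [padded[i * fragment_size:(i + 1) * fragment_size]
                 for i in range(num_data_servers)]
    parity = bytearray(fragment_size)
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    # If one data fragment is lost, XOR of the parity and the surviving
    # fragments reconstructs it.
    return fragments, bytes(parity)

# Example (illustrative sizes):
# fragments, parity = make_stripe(client_log, num_data_servers=4, fragment_size=512 * 1024)
```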
Gergi, Richard; Rjeily, Joe Abou; Sader, Joseph; Naaman, Alfred
2010-05-01
The purpose of this study was to compare canal transportation and centering ability of 2 rotary nickel-titanium (NiTi) systems (Twisted Files [TF] and Pathfile-ProTaper [PP]) with conventional stainless steel K-files. Ninety root canals with severe curvature and short radius were selected. Canals were divided randomly into 3 groups of 30 each. After preparation with TF, PP, and stainless steel files, the amount of transportation that occurred was assessed by using computed tomography. Three sections from apical, mid-root, and coronal levels of the canal were recorded. Amount of transportation and centering ability were assessed. The 3 groups were statistically compared with analysis of variance and Tukey honestly significant difference test. Less transportation and better centering ability occurred with TF rotary instruments (P < .0001). K-files showed the highest transportation followed by PP system. PP system showed significant transportation when compared with TF (P < .0001). The TF system was found to be the best for all variables measured in this study. Copyright (c) 2010 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Remote file inquiry (RFI) system
NASA Technical Reports Server (NTRS)
1975-01-01
System interrogates and maintains user-definable data files from remote terminals, using English-like, free-form query language easily learned by persons not proficient in computer programming. System operates in asynchronous mode, allowing any number of inquiries within limitation of available core to be active concurrently.
Sending Foreign Language Word Processor Files over Networks.
ERIC Educational Resources Information Center
Feustle, Joseph A., Jr.
1992-01-01
Advantages of using online systems are described, along with specific techniques for successfully transmitting computer text files. Topics covered include Microsoft's Rich Text File, WordPerfect encoding, text compression, and especially encoding and decoding with UNIX programs. (LB)
Centralized Accounting and Electronic Filing Provides Efficient Receivables Collection.
ERIC Educational Resources Information Center
School Business Affairs, 1983
1983-01-01
An electronic filing system makes financial control manageable at Bowling Green State University, Ohio. The system enables quick access to computer-stored consolidated account data and microfilm images of charges, statements, and other billing documents. (MLF)
BLISS: a computer program for the protection of blood donors. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catsimpoolas, N.; Cooke, C.; Valeri, C.R.
1982-06-28
A BASIC program has been developed for the Hewlett-Packard Model 9845 desk-top computer which allows the creation of blood donor files for subsequent retrieval, update, and correction. A similar modified version was developed for the HP 9835 Model. This software system has been called BLISS, which stands for Blood Information and Security System. In addition to its function as a file management system, BLISS provides warnings before a donation is performed to protect the donor from excessive exposure to radioactivity and DMSO levels, from too frequent donations of blood, and from adverse reactions. The program can also be used to select donors who have participated in specific studies and to list the experimental details which have been stored in the file. The BLISS system has been actively utilized at the Naval Blood Research Laboratory in Boston and contains the files of over 750 donors.
User's manual for SEDCALC, a computer program for computation of suspended-sediment discharge
Koltun, G.F.; Gray, John R.; McElhone, T.J.
1994-01-01
Sediment-Record Calculations (SEDCALC), a menu-driven set of interactive computer programs, was developed to facilitate computation of suspended-sediment records. The programs comprising SEDCALC were developed independently in several District offices of the U.S. Geological Survey (USGS) to minimize the intensive labor associated with various aspects of sediment-record computations. SEDCALC operates on suspended-sediment-concentration data stored in American Standard Code for Information Interchange (ASCII) files in a predefined card-image format. Program options within SEDCALC can be used to assist in creating and editing the card-image files, as well as to reformat card-image files to and from formats used by the USGS Water-Quality System. SEDCALC provides options for creating card-image files containing time series of equal-interval suspended-sediment concentrations from 1. digitized suspended-sediment-concentration traces, 2. linear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals, and 3. nonlinear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals. Suspended-sediment discharge can be computed from the streamflow and suspended-sediment-concentration data or by application of transport relations derived by regressing log-transformed instantaneous streamflows on log-transformed instantaneous suspended-sediment concentrations or discharges. The computed suspended-sediment discharge data are stored in card-image files that can be either directly imported to the USGS Automated Data Processing System or used to generate plots by means of other SEDCALC options.
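Two of the computations described, log-transformed interpolation between concentration samples and suspended-sediment discharge from streamflow and concentration, can be sketched briefly. The Python fragment below is illustrative, not SEDCALC itself; the 0.0027 factor is the conventional coefficient for converting ft3/s times mg/L to tons/day and should be treated here as an assumption.

```python
# Minimal sketch of log-linear interpolation of concentration and of
# suspended-sediment discharge computation.
import math

def interpolate_concentration_log(t, t0, c0, t1, c1):
    """Linear interpolation between log-transformed concentrations c0 and c1."""
    frac = (t - t0) / (t1 - t0)
    return math.exp(math.log(c0) + frac * (math.log(c1) - math.log(c0)))

def suspended_sediment_discharge(streamflow_cfs, concentration_mg_l):
    """Suspended-sediment discharge in tons per day (assumed 0.0027 conversion factor)."""
    return 0.0027 * streamflow_cfs * concentration_mg_l

# Example: concentration halfway (in time) between samples of 40 and 160 mg/L
# c = interpolate_concentration_log(t=0.5, t0=0.0, c0=40.0, t1=1.0, c1=160.0)  # 80 mg/L
# qs = suspended_sediment_discharge(1200.0, c)
```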
Utilizing HDF4 File Content Maps for the Cloud
NASA Technical Reports Server (NTRS)
Lee, Hyokyung Joe
2016-01-01
We demonstrate in a prototype study that an HDF4 file content map can be used to organize data efficiently in a cloud object storage system and so facilitate cloud computing. This approach can be extended to any binary data format and to any existing big data analytics solution powered by cloud computing, because the HDF4 file content map project started as a long-term preservation effort for NASA data and does not require HDF4 APIs to access the data.
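A file content map can be pictured as a table from dataset name to byte offset and length, which lets a client fetch just one dataset with an HTTP range request against the object store. The Python sketch below is illustrative; the map entry, bucket, and key are assumptions, not the project's actual map format.

```python
# Minimal sketch: read one dataset's bytes directly from an object store using
# a content map, without any HDF4 library.
import boto3

# A content map records where each named dataset lives inside the HDF4 file
# (hypothetical entry for illustration).
CONTENT_MAP = {
    "Temperature": {"offset": 4096, "length": 1_048_576},
}

def read_dataset(bucket, key, dataset_name):
    entry = CONTENT_MAP[dataset_name]
    start = entry["offset"]
    end = start + entry["length"] - 1
    s3 = boto3.client("s3")
    # HTTP range request: fetch only the bytes belonging to the dataset.
    resp = s3.get_object(Bucket=bucket, Key=key, Range=f"bytes={start}-{end}")
    return resp["Body"].read()
```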
Network survivability performance (computer diskette)
NASA Astrophysics Data System (ADS)
1993-11-01
File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks including services. It responds to the need for a common understanding of, and assessment techniques for network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to user expectations for network survivability.
Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Dixit, Kratika; Naik, Saraswathi V
2016-01-01
Primary root canals are considered to be most challenging due to their complex anatomy. WaveOne and OneShape are single-file systems with reciprocating and rotary motion, respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of WaveOne and OneShape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. This is an experimental, in vitro study comparing the two groups. A total of 24 extracted human primary teeth with minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. A significant difference was found in instrumentation time and canal transportation measures between the two groups. WaveOne showed less canal transportation as compared with OneShape, and the mean instrumentation time of WaveOne was significantly less than that of OneShape. The reciprocating single-file system was found to be faster with far fewer procedural errors and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49.
1995 Joseph E. Whitley, MD, Award. A World Wide Web gateway to the radiologic learning file.
Channin, D S
1995-12-01
Computer networks in general, and the Internet specifically, are changing the way information is manipulated in the world at large and in radiology. The goal of this project was to develop a computer system in which images from the Radiologic Learning File, available previously only via a single-user laser disc, are made available over a generic, high-availability computer network to many potential users simultaneously. Using a networked workstation in our laboratory and freely available distributed hypertext software, we established a World Wide Web (WWW) information server for radiology. Images from the Radiologic Learning File are requested through the WWW client software, digitized from a single laser disc containing the entire teaching file and then transmitted over the network to the client. The text accompanying each image is incorporated into the transmitted document. The Radiologic Learning File is now on-line, and requests to view the cases result in the delivery of the text and images. Image digitization via a frame grabber takes 1/30th of a second. Conversion of the image to a standard computer graphic format takes 45-60 sec. Text and image transmission speed on a local area network varies between 200 and 400 kilobytes (KB) per second depending on the network load. We have made images from a laser disc of the Radiologic Learning File available through an Internet-based hypertext server. The images previously available through a single-user system located in a remote section of our department are now ubiquitously available throughout our department via the department's computer network. We have thus converted a single-user, limited functionality system into a multiuser, widely available resource.
Hwang, Young-Hye; Bae, Kwang-Shik; Baek, Seung-Ho; Kum, Kee-Yeon; Lee, WooCheol; Shon, Won-Jun; Chang, Seok Woo
2014-08-01
This study used micro-computed tomographic imaging to compare the shaping ability of Mtwo (VDW, Munich, Germany), a conventional nickel-titanium file system, and Reciproc (VDW), a reciprocating file system morphologically similar to Mtwo. Root canal shaping was performed on the mesiobuccal and distobuccal canals of extracted maxillary molars. In the RR group (n = 15), Reciproc was used in a reciprocating motion (150° counterclockwise/30° clockwise, 300 rpm); in the MR group, Mtwo was used in a reciprocating motion (150° clockwise/30° counterclockwise, 300 rpm); and in the MC group, Mtwo was used in a continuous rotating motion (300 rpm). Micro-computed tomographic images taken before and after canal shaping were used to analyze canal volume change and the degree of transportation at the cervical, middle, and apical levels. The time required for canal shaping was recorded. Afterward, each file was analyzed using scanning electron microscopy. No statistically significant differences were found among the 3 groups in the time for canal shaping or canal volume change (P > .05). Transportation values of the RR and MR groups were not significantly different at any level. However, the transportation value of the MC group was significantly higher than both the RR and MR groups at the cervical and apical levels (P < .05). In the scanning electron microscopic analysis, file deformation was observed for 1 file in group RR (1/15), 3 files in group MR (3/15), and 5 files in group MC (5/15). In terms of shaping ability, Mtwo used in a reciprocating motion was not significantly different from the Reciproc system. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Final Report for File System Support for Burst Buffers on HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, W.; Mohror, K.
Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype, and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV: a Key-Value Store for Metadata Management of Distributed Burst Buffers, a user-level file system with multiple backends, and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.
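A minimal, in-memory sketch of a key-value interface for burst-buffer file metadata is shown below, in the spirit of (but not implementing) the MetaKV design named above; the key layout and record fields are assumptions made for illustration.

```python
# Sketch of a key-value metadata store mapping a file path to the extents
# it occupies on burst-buffer nodes. Field names are illustrative assumptions.
class MetadataKV:
    def __init__(self):
        self._store = {}

    def put(self, path, node, offset, length):
        """Record that a byte range of 'path' resides on a given burst-buffer node."""
        self._store.setdefault(path, []).append(
            {"node": node, "offset": offset, "length": length})

    def locate(self, path, offset):
        """Return the extent record covering 'offset', or None if unknown."""
        for extent in self._store.get(path, []):
            if extent["offset"] <= offset < extent["offset"] + extent["length"]:
                return extent
        return None

kv = MetadataKV()
kv.put("/ckpt/step100.dat", node="bb-node-3", offset=0, length=1 << 20)
print(kv.locate("/ckpt/step100.dat", 4096))
```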
File System Virtual Appliances: Portable File System Implementations
2009-05-01
The 1985 Army Experience Survey. Data Sourcebook and User’s Manual
1986-01-01
The survey data files produced for the 1985 AES are available in Operating System (OS) format as well as in Statistical Analysis System (SAS) format; the SAS version of the survey data files was produced using the Statistical Analysis System. The OS data file was designed to make the survey data accessible on any IBM-compatible computer system.
23 CFR 1327.5 - Conditions for becoming a participating State.
Code of Federal Regulations, 2013 CFR
2013-04-01
... check, NHTSA will search its computer file and mail the results (i.e., notification of no record found... official of a participating State shall implement the necessary computer system and procedures to respond...) Provide a Driver History Record from its file to the State of Inquiry upon receipt of a request for this...
23 CFR 1327.5 - Conditions for becoming a participating State.
Code of Federal Regulations, 2014 CFR
2014-04-01
... check, NHTSA will search its computer file and mail the results (i.e., notification of no record found... official of a participating State shall implement the necessary computer system and procedures to respond...) Provide a Driver History Record from its file to the State of Inquiry upon receipt of a request for this...
NASA work unit system file maintenance manual
NASA Technical Reports Server (NTRS)
1972-01-01
The NASA Work Unit System is a management information system for research tasks (i.e., work units) performed under NASA grants and contracts. It supplies profiles on research efforts and statistics on fund distribution. The file maintenance operator can add, delete and change records at a remote terminal or can submit punched cards to the computer room for batch update. The system is designed for file maintenance by a person with little or no knowledge of data processing techniques.
Transparency in Distributed File Systems
1989-01-01
The report addresses the areas of naming, replication, consistency control, file and directory placement, and file and directory migration in a way that provides full network transparency.
Computer printing and filing of microbiology reports. 1. Description of the system.
Goodwin, C S; Smith, B C
1976-01-01
From March 1974 all reports from this microbiology department have been computer printed and filed. The system was designed to include every medically important microorganism and test. Technicians at the laboratory bench made their results computer-readable using Port-a-punch cards, and specimen details were recorded on paper-tape, allowing the full description of each specimen to appear on the report. A summary form of each microbiology phrase enabled copies of reports to be printed on wide paper with 12 to 18 reports per sheet; such copies, in alphabetical order for one day, and cumulatively for one week, were used by staff answering enquiries to the office. This format could also be used for printing all the reports for one patient. Retrieval of results from the files was easily performed and was useful to medical and laboratory staff and for control-of-infection purposes. The system was written in COBOL and was designed to be as cost-effective as possible without sacrificing accuracy; the cost of a report and its filing was 17-97 pence. PMID:939809
Data Processing Aspects of MEDLARS
Austin, Charles J.
1964-01-01
The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287
DATA PROCESSING ASPECTS OF MEDLARS.
AUSTIN, C J
1964-01-01
The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files.
Program Description: EDIT Program and Vendor Master Update, SWRL Financial System.
ERIC Educational Resources Information Center
Ikeda, Masumi
Computer routines to edit input data for the Southwest Regional Laboratory's (SWRL) Financial System are described. The program is responsible for validating input records, generating records for further system processing, and updating the Vendor Master File--a file containing the information necessary to support the accounts payable and…
Program Description: Financial Master File Processor-SWRL Financial System.
ERIC Educational Resources Information Center
Ikeda, Masumi
Computer routines designed to produce various management and accounting reports required by the Southwest Regional Laboratory's (SWRL) Financial System are described. Input data requirements and output report formats are presented together with a discussion of the Financial Master File updating capabilities of the system. This document should be…
2013-06-01
Fragmentary abstract excerpts: ...for crop irrigation; the disruptions also idled key industries, led to billions of dollars of lost productivity, and stressed the entire Western...; ...modify the super-system, and to resume the super-system run; an important step in the software development life cycle is to capture...; the system detects the .gms file and associated files in the remote directory that is allocated to the user, and if all of the files are present, the files are...
Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Naik, Saraswathi V
2016-01-01
ABSTRACT Background: Primary root canals are considered to be most challenging due to their complex anatomy. "Wave one" and "one shape" are single-file systems with reciprocating and rotary motion respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of wave one and one shape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. Study design: This is an experimental, in vitro study comparing the two groups. Materials and methods: A total of 24 extracted human primary teeth with a minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. Results: A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave one showed less canal transportation as compared with one shape, and the mean instrumentation time of wave one was significantly less than that of one shape. Conclusion: The reciprocating single-file system was found to be faster, with far fewer procedural errors, and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49. PMID:27274155
Virtual file system on NoSQL for processing high volumes of HL7 messages.
Kimura, Eizen; Ishihara, Ken
2015-01-01
The Standardized Structured Medical Information Exchange (SS-MIX) is intended to be the standard repository for HL7 messages; it depends on a local file system, and its scalability is therefore limited. We implemented a virtual file system using NoSQL to incorporate modern computing technology into SS-MIX and to allow the system to integrate local patient IDs from different healthcare systems into a universal system. We discuss its implementation using the database MongoDB and describe its performance in a case study.
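A minimal sketch of the underlying idea follows: HL7 messages that SS-MIX would place in a path-like directory tree are instead stored as MongoDB documents keyed by that path. The database, collection, and field names are assumptions for illustration, not the authors' schema.

```python
from pymongo import MongoClient

# Store HL7 messages as documents keyed by a virtual, path-like key.
client = MongoClient("mongodb://localhost:27017")
messages = client["ssmix"]["messages"]
messages.create_index("path", unique=True)

def store_message(patient_id, date, msg_type, hl7_text):
    path = f"/{patient_id}/{date}/{msg_type}"
    messages.replace_one(
        {"path": path},
        {"path": path, "patient_id": patient_id,
         "date": date, "type": msg_type, "body": hl7_text},
        upsert=True)

def load_messages(patient_id):
    return list(messages.find({"patient_id": patient_id}))

store_message("000123", "20150101", "ADT-00", "MSH|^~\\&|...")
print(len(load_messages("000123")))
```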
Stavileci, Miranda; Hoxha, Veton; Görduysus, Ömer; Tatar, Ilkan; Laperre, Kjell; Hostens, Jeroen; Küçükkaya, Selen; Muhaxheri, Edmond
2015-01-01
Background Complete mechanical preparation of the root canal system is rarely achieved. Therefore, the purpose of this study was to evaluate and compare the root canal shaping efficacy of ProTaper rotary files and standard stainless steel K-files using micro-computed tomography. Material/Methods Sixty extracted upper second premolars were selected and divided into 2 groups of 30 teeth each. Before preparation, all samples were scanned by micro-computed tomography. Thirty teeth were prepared with the ProTaper system and the other 30 with stainless steel files. After preparation, the untouched surface and root canal straightening were evaluated with micro-computed tomography. The percentage of untouched root canal surface was calculated in the coronal, middle, and apical parts of the canal. We also calculated straightening of the canal after root canal preparation. Results from the 2 groups were statistically compared using the Minitab statistical package. Results ProTaper rotary files left less untouched root canal surface compared with manual preparation in coronal, middle, and apical sector (p<0.001). Similarly, there was a statistically significant difference in root canal straightening after preparation between the techniques (p<0.001). Conclusions Neither manual nor rotary techniques completely prepared the root canal, and both techniques caused slight straightening of the root canal. PMID:26092929
Stavileci, Miranda; Hoxha, Veton; Görduysus, Ömer; Tatar, Ilkan; Laperre, Kjell; Hostens, Jeroen; Küçükkaya, Selen; Muhaxheri, Edmond
2015-06-20
Complete mechanical preparation of the root canal system is rarely achieved. Therefore, the purpose of this study was to evaluate and compare the root canal shaping efficacy of ProTaper rotary files and standard stainless steel K-files using micro-computed tomography. Sixty extracted upper second premolars were selected and divided into 2 groups of 30 teeth each. Before preparation, all samples were scanned by micro-computed tomography. Thirty teeth were prepared with the ProTaper system and the other 30 with stainless steel files. After preparation, the untouched surface and root canal straightening were evaluated with micro-computed tomography. The percentage of untouched root canal surface was calculated in the coronal, middle, and apical parts of the canal. We also calculated straightening of the canal after root canal preparation. Results from the 2 groups were statistically compared using the Minitab statistical package. ProTaper rotary files left less untouched root canal surface compared with manual preparation in coronal, middle, and apical sector (p<0.001). Similarly, there was a statistically significant difference in root canal straightening after preparation between the techniques (p<0.001). Neither manual nor rotary techniques completely prepared the root canal, and both techniques caused slight straightening of the root canal.
The Metadata Cloud: The Last Piece of a Distributed Data System Model
NASA Astrophysics Data System (ADS)
King, T. A.; Cecconi, B.; Hughes, J. S.; Walker, R. J.; Roberts, D.; Thieman, J. R.; Joy, S. P.; Mafi, J. N.; Gangloff, M.
2012-12-01
Distributed data systems have existed ever since systems were networked together. Over the years the model for distributed data systems has evolved from basic file transfer to client-server to multi-tiered to grid and finally to cloud-based systems. Initially metadata was tightly coupled to the data, either by embedding the metadata in the same file containing the data or by co-locating the metadata in commonly named files. As the sources of data have multiplied, data volumes have increased, and services have specialized to improve efficiency, a cloud system model has emerged. In a cloud system, computing and storage are provided as services, with accessibility emphasized over physical location. Computation and data clouds are common implementations. Effectively using the data and computation capabilities requires metadata. When metadata is stored separately from the data, a metadata cloud is formed. With a metadata cloud, information and knowledge about data resources can migrate efficiently from system to system, enabling services and allowing the data to remain efficiently stored until used. This is especially important with "Big Data" where movement of the data is limited by bandwidth. We examine how the metadata cloud completes a general distributed data system model, how standards play a role, and relate this to the existing types of cloud computing. We also look at the major science data systems in existence and compare each to the generalized cloud system model.
Common Sense Conversion: Can You Read Me?
ERIC Educational Resources Information Center
Crawford, Walt
1988-01-01
Discusses basic approaches and available software for five categories of file conversion: (1) converting between different computers with different operating systems; (2) converting between different computers with the same operating systems; (3) converting between different applications on the same computer; (4) converting between different…
Goodwin, C S
1976-01-01
A manual system of microbiology reporting with a National Cash Register (NCR) form with printed names of bacteria and antibiotics required less time to compose reports than a previous manual system that involved rubber stamps and handwriting on plain report sheets. The NCR report cost 10-28 pence and, compared with a computer system, it had the advantages of simplicity and familiarity, and reports were not delayed by machine breakdown, operator error, or data being incorrectly submitted. A computer reporting system for microbiology resulted in more accurate reports costing 17-97 pence each, faster and more accurate filing and recall of reports, and a greater range of analyses of reports that was valued particularly by the control-of-infection staff. Composition of computer-readable reports by technicians on Port-a-punch cards took longer than composing NCR reports. Enquiries for past results were more quickly answered from computer printouts of reports and a day book in alphabetical order. PMID:939810
A convertor and user interface to import CAD files into worldtoolkit virtual reality systems
NASA Technical Reports Server (NTRS)
Wang, Peter Hor-Ching
1996-01-01
Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered a three-dimensional, computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and does not support advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.
Performance Analysis of the Unitree Central File
NASA Technical Reports Server (NTRS)
Pentakalos, Odysseas I.; Flater, David
1994-01-01
This report consists of two parts. The first part briefly comments on the documentation status of two major systems at NASA's Center for Computational Sciences, specifically the Cray C98 and the Convex C3830. The second part describes the work done on improving the performance of file transfers between the Unitree Mass Storage System running on the Convex file server and the users' workstations distributed over a large geographic area.
Recent evolution of the offline computing model of the NOvA experiment
Habig, Alec; Norman, A.; Group, Craig
2015-12-23
The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. In addition, the current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.
Recent Evolution of the Offline Computing Model of the NOvA Experiment
NASA Astrophysics Data System (ADS)
Habig, Alec; Norman, A.
2015-12-01
The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. The current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.
Strategies for Sharing Seismic Data Among Multiple Computer Platforms
NASA Astrophysics Data System (ADS)
Baker, L. M.; Fletcher, J. B.
2001-12-01
Seismic waveform data is readily available from a variety of sources, but it often comes in a distinct, instrument-specific data format. For example, data may be from portable seismographs, such as those made by Refraction Technology or Kinemetrics, from permanent seismograph arrays, such as the USGS Parkfield Dense Array, from public data centers, such as the IRIS Data Center, or from personal communication with other researchers through e-mail or ftp. A computer must be selected to import the data - usually whichever is the most suitable for reading the originating format. However, the computer best suited for a specific analysis may not be the same. When copies of the data are then made for analysis, a proliferation of copies of the same data results, in possibly incompatible, computer-specific formats. In addition, if an error is detected and corrected in one copy, or some other change is made, all the other copies must be updated to preserve their validity. Keeping track of what data is available, where it is located, and which copy is authoritative requires an effort that is easy to neglect. We solve this problem by importing waveform data to a shared network file server that is accessible to all our computers on our campus LAN. We use a Network Appliance file server running Sun's Network File System (NFS) software. Using an NFS client software package on each analysis computer, waveform data can then be read by our MatLab or Fortran applications without first copying the data. Since there is a single copy of the waveform data in a single location, the NFS file system hierarchy provides an implicit complete waveform data catalog and the single copy is inherently authoritative. Another part of our solution is to convert the original data into a blocked-binary format (known historically as USGS DR100 or VFBB format) that is interpreted by MatLab or Fortran library routines available on each computer so that the idiosyncrasies of each machine are not visible to the user. Commercial software packages, such as MatLab, also have the ability to share data in their own formats across multiple computer platforms. Our Fortran applications can create plot files in Adobe PostScript, Illustrator, and Portable Document Format (PDF) formats. Vendor support for reading these files is readily available on multiple computer platforms. We will illustrate by example our strategies for sharing seismic data among our multiple computer platforms, and we will discuss our positive and negative experiences. We will include our solutions for handling the different byte ordering, floating-point formats, and text file "end-of-line" conventions on the various computer platforms we use (6 different operating systems on 5 processor architectures).
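One of the portability issues mentioned above, byte ordering, can be handled by making the byte order explicit in the reader rather than assuming the host's. The sketch below reads a binary waveform record whose endianness is detected from a magic number; the header layout (4-byte magic, 32-bit sample count, 32-bit IEEE floats) is an assumption for illustration, not the actual DR100/VFBB layout.

```python
import struct

MAGIC = 0x44523130  # arbitrary marker assumed to be written by the producer

def read_samples(path):
    """Read samples from a binary record whose byte order may differ from the host's."""
    with open(path, "rb") as f:
        head = f.read(4)
        # Decide endianness by checking which interpretation matches the magic number.
        order = ">" if struct.unpack(">I", head)[0] == MAGIC else "<"
        (count,) = struct.unpack(order + "I", f.read(4))
        return struct.unpack(order + str(count) + "f", f.read(4 * count))
```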
Computer Page: Computer Studies for All--A Wider Approach
ERIC Educational Resources Information Center
Edens, A. J.
1975-01-01
An approach to teaching children aged 12 through 14 a substantial course about computers is described. Topics covered include simple algorithms, information and communication, man-machine communication, the concept of a system, the definition of a system, and the use of files. (SD)
Forensic Analysis of Compromised Computers
NASA Technical Reports Server (NTRS)
Wolfe, Thomas
2004-01-01
Directory Tree Analysis File Generator is a Practical Extraction and Reporting Language (PERL) script that simplifies and automates the collection of information for forensic analysis of compromised computer systems. During such an analysis, it is sometimes necessary to collect and analyze information about files on a specific directory tree. Directory Tree Analysis File Generator collects information of this type (except information about directories) and writes it to a text file. In particular, the script asks the user for the root of the directory tree to be processed, the name of the output file, and the number of subtree levels to process. The script then processes the directory tree and puts out the aforementioned text file. The format of the text file is designed to enable the submission of the file as input to a spreadsheet program, wherein the forensic analysis is performed. The analysis usually consists of sorting files and examination of such characteristics of files as ownership, time of creation, and time of most recent access, all of which characteristics are among the data included in the text file.
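The original tool is a PERL script; the sketch below is an illustrative Python analogue of the collection step it describes, walking a directory tree to a requested depth and writing one tab-delimited row per file (owner, size, creation and access times) for import into a spreadsheet. The field choices and output format are assumptions, not the script's actual output.

```python
import csv, os, pwd, sys, time

def dump_tree(root, out_path, max_depth):
    """Walk 'root' to 'max_depth' levels and write one row per file (Unix-only: uses pwd)."""
    root = os.path.abspath(root)
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out, delimiter="\t")
        writer.writerow(["path", "owner", "size", "created", "last_access"])
        for dirpath, dirnames, filenames in os.walk(root):
            depth = dirpath[len(root):].count(os.sep)
            if depth >= max_depth:
                dirnames[:] = []          # do not descend further
            for name in filenames:
                full = os.path.join(dirpath, name)
                st = os.lstat(full)
                writer.writerow([full,
                                 pwd.getpwuid(st.st_uid).pw_name,
                                 st.st_size,
                                 time.ctime(st.st_ctime),
                                 time.ctime(st.st_atime)])

if __name__ == "__main__":
    dump_tree(sys.argv[1], sys.argv[2], int(sys.argv[3]))
```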
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruwart, T M; Eldel, A
2000-01-01
The primary objectives of this project were to evaluate the performance of the SGI CXFS File System in a Storage Area Network (SAN) and compare/contrast it to the performance of a locally attached XFS file system on the same computer and storage subsystems. The University of Minnesota participants were asked to verify that the performance of the SAN/CXFS configuration did not fall below 85% of the performance of the XFS local configuration. There were two basic hardware test configurations constructed from the following equipment: two Onyx 2 computer systems, each with two Qlogic-based Fibre Channel/XIO Host Bus Adapters (HBAs); one 8-Port Brocade Silkworm 2400 Fibre Channel Switch; and four Ciprico RF7000 RAID Disk Arrays populated with Seagate Barracuda 50GB disk drives. The Operating System on each of the ONYX 2 computer systems was IRIX 6.5.6. The first hardware configuration consisted of directly connecting the Ciprico arrays to the Qlogic controllers without the Brocade switch. The purpose for this configuration was to establish baseline performance data on the Qlogic controllers/Ciprico raw disk subsystem. This baseline performance data would then be used to demonstrate any performance differences arising from the addition of the Brocade Fibre Channel Switch. Furthermore, the performance of the Qlogic controllers could be compared to that of the older, Adaptec-based XIO dual-channel Fibre Channel adapters previously used on these systems. It should be noted that only raw device tests were performed on this configuration. No file system testing was performed on this configuration. The second hardware configuration introduced the Brocade Fibre Channel Switch. Two FC ports from each of the ONYX2 computer systems were attached to four ports of the switch and the four Ciprico arrays were attached to the remaining four. Raw disk subsystem tests were performed on the SAN configuration in order to demonstrate the performance differences between the direct-connect and the switched configurations. After this testing was completed, the Ciprico arrays were formatted with an XFS file system and performance numbers were gathered to establish a File System Performance Baseline. Finally, the disks were formatted with CXFS and further tests were run to demonstrate the performance of the CXFS file system. A summary of the results of these tests is given.
Software Implements a Space-Mission File-Transfer Protocol
NASA Technical Reports Server (NTRS)
Rundstrom, Kathleen; Ho, Son Q.; Levesque, Michael; Sanders, Felicia; Burleigh, Scott; Veregge, John
2004-01-01
CFDP is a computer program that implements the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol, which is an international standard for automatic, reliable transfers of files of data between locations on Earth and in outer space. CFDP administers concurrent file transfers in both directions, delivery of data out of transmission order, reliable and unreliable transmission modes, and automatic retransmission of lost or corrupted data by use of one or more of several lost-segment-detection modes. The program also implements several data-integrity measures, including file checksums and optional cyclic redundancy checks for each protocol data unit. The metadata accompanying each file can include messages to users' application programs and commands for operating on remote file systems.
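As a simplified sketch of the kind of file checksum the protocol computes, the code below sums the file data as 4-octet words modulo 2**32; the treatment of a trailing partial word follows one reading of the CCSDS specification and should be checked against the standard before reuse. The file name is hypothetical.

```python
def cfdp_style_checksum(data: bytes) -> int:
    """Modular 32-bit sum of the file data taken in 4-octet words (simplified)."""
    total = 0
    for i in range(0, len(data), 4):
        word = data[i:i + 4]
        word += b"\x00" * (4 - len(word))   # zero-pad a trailing partial word
        total = (total + int.from_bytes(word, "big")) & 0xFFFFFFFF
    return total

with open("telemetry.dat", "rb") as f:       # hypothetical input file
    print(hex(cfdp_style_checksum(f.read())))
```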
Configuration Management File Manager Developed for Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Follen, Gregory J.
1997-01-01
One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.
10 CFR 13.27 - Computation of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
...; and (2) By 11:59 p.m. Eastern Time for a document served by the E-Filing system. [72 FR 49153, Aug. 28... the calculation of additional days when a participant is not entitled to receive an entire filing... same filing and service method, the number of days for service will be determined by the presiding...
10 CFR 2.306 - Computation of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
...:59 p.m. Eastern Time for a document served by the E-Filing system. [72 FR 49151, Aug. 28, 2007] ... the calculation of additional days when a participant is not entitled to receive an entire filing... filing and service method, the number of days for service will be determined by the presiding officer...
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, Earl A.; Lipshutz, Robert J.; Morris, Macdonald S.; Winkler, James L.
1997-01-01
An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks.
Hydrologic data-verification management program plan
Alexander, C.W.
1982-01-01
Data verification refers to the performance of quality control on hydrologic data that have been retrieved from the field and are being prepared for dissemination to water-data users. Water-data users now have access to computerized data files containing unpublished, unverified hydrologic data. Therefore, it is necessary to develop techniques and systems whereby the computer can perform some data-verification functions before the data are stored in user-accessible files. Computerized data-verification routines can be developed for this purpose. A single, unified concept describing a master data-verification program that uses multiple special-purpose subroutines, and a screen file containing verification criteria, can probably be adapted to any type and size of computer-processing system. Some traditional manual-verification procedures can be adapted for computerized verification, but new procedures can also be developed that would take advantage of the powerful statistical tools and data-handling procedures available to the computer. Prototype data-verification systems should be developed for all three data-processing environments as soon as possible. The WATSTORE system probably affords the greatest opportunity for long-range research and testing of new verification subroutines. (USGS)
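A minimal sketch of the "screen file" idea follows: per-parameter limits are read from a criteria file and applied to incoming records before they reach user-accessible storage. The file format, parameter names, and example values are illustrative assumptions, not the WATSTORE design.

```python
import csv

def load_criteria(path):
    """Read a screen file of per-parameter limits: columns parameter,min,max."""
    with open(path, newline="") as f:
        return {row["parameter"]: (float(row["min"]), float(row["max"]))
                for row in csv.DictReader(f)}

def verify(record, criteria):
    """Return a list of verification failures for one observation record."""
    failures = []
    for parameter, value in record.items():
        if parameter in criteria:
            lo, hi = criteria[parameter]
            if not (lo <= float(value) <= hi):
                failures.append(f"{parameter}={value} outside [{lo}, {hi}]")
    return failures

criteria = load_criteria("screen_file.csv")   # hypothetical criteria file
print(verify({"gage_height": "3.2", "discharge": "15400"}, criteria))
```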
Students "Hacking" School Computer Systems
ERIC Educational Resources Information Center
Stover, Del
2005-01-01
This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…
Community Information Centers and the Computer.
ERIC Educational Resources Information Center
Carroll, John M.; Tague, Jean M.
Two computer data bases have been developed by the Computer Science Department at the University of Western Ontario for "Information London," the local community information center. One system, called LONDON, permits Boolean searches of a file of 5,000 records describing human service agencies in the London area. The second system,…
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1992-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LANs) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inner-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LANs. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.
Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S
2018-06-01
Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
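A minimal PySpark sketch of the divide-and-conquer idea described above follows: VCF records from many files are parsed, keyed by genomic location, and globally sorted so the merged output is ordered by (chromosome, position). The paths, contig ordering, and output form are illustrative assumptions, not the paper's optimized schemas for Hadoop, HBase, or Spark.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("vcf-sorted-merge").getOrCreate()
sc = spark.sparkContext

# Assumed contig ordering used as the primary sort key.
CHROM_ORDER = {**{f"chr{i}": i for i in range(1, 23)}, "chrX": 23, "chrY": 24}

def parse(line):
    fields = line.split("\t")
    return (CHROM_ORDER.get(fields[0], 99), int(fields[1])), line

merged = (sc.textFile("hdfs:///cohort/vcfs/*.vcf")       # all input VCF files
            .filter(lambda l: l and not l.startswith("#"))
            .map(parse)
            .sortByKey(numPartitions=64)                  # global sort by location
            .values())

merged.saveAsTextFile("hdfs:///cohort/merged_vcf_body")
spark.stop()
```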
Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng
2018-01-01
Abstract Background Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)–based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems. PMID:29762754
Optimizing the Use of Storage Systems Provided by Cloud Computing Environments
NASA Astrophysics Data System (ADS)
Gallagher, J. H.; Potter, N.; Byrne, D. A.; Ogata, J.; Relph, J.
2013-12-01
Cloud computing systems present a set of features that include familiar computing resources (albeit augmented to support dynamic scaling of processing power) bundled with a mix of conventional and unconventional storage systems. The Linux base on which many cloud environments (e.g., Amazon) are built makes it tempting to assume that any Unix software will run efficiently in this environment without change. OPeNDAP and NODC collaborated on a short project to explore how the S3 and Glacier storage systems provided by the Amazon Cloud Computing infrastructure could be used with a data server developed primarily to access data stored in a traditional Unix file system. Our work used the Amazon cloud system, but we strove for designs that could be adapted easily to other systems like OpenStack. Lastly, we evaluated different architectures from a computer security perspective. We found that there are considerable issues associated with treating S3 as if it is a traditional file system, even though doing so is conceptually simple. These issues include performance penalties: using a software tool that emulates a traditional file system to store data in S3 performs poorly when compared to storing data directly in S3. We also found there are important benefits beyond performance to ensuring that data written to S3 can be directly accessed without relying on a specific software tool. To provide a hierarchical organization to the data stored in S3, we wrote 'catalog' files, using XML. These catalog files map discrete files to S3 access keys. Like a traditional file system's directories, the catalogs can also contain references to other catalogs, providing a simple but effective hierarchy overlaid on top of S3's flat storage space. An added benefit to these catalogs is that they can be viewed in a web browser; our storage scheme provides both efficient access for the data server and access via a web browser. We also looked at the Glacier storage system and found that the system's response characteristics are very different from those of a traditional file system or database; it behaves like a near-line storage system. To be used by a traditional data server, the underlying access protocol must support asynchronous accesses. This is because the Glacier system takes a minimum of four hours to deliver any data object, so systems built with the expectation of instant access (i.e., most web systems) must be fundamentally changed to use Glacier. Part of a related project has been to develop an asynchronous access mode for OPeNDAP, and we have developed a design using that new addition to the DAP protocol with Glacier as a near-line mass store. In summary, we found that both S3 and Glacier require special treatment to be effectively used by a data server. It is important to add (new) interfaces to data servers that enable them to use these storage devices through their native interfaces. We also found that our designs could easily map to a cloud environment based on OpenStack. Lastly, we noted that while these designs invited more liberal use of remote references for data objects, that can expose software to new security risks.
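A minimal sketch of the catalog idea described above follows: an XML document maps logical file names to S3 keys (and to child catalogs, giving a hierarchy over S3's flat key space), and an object is then fetched directly through the native S3 API. The element and attribute names, bucket, and keys are assumptions, not the project's actual catalog schema.

```python
import boto3
import xml.etree.ElementTree as ET

# Illustrative catalog: entries map logical names to S3 keys; catalogRef
# elements point at child catalogs, forming a browsable hierarchy.
catalog_xml = """
<catalog name="granules">
  <file name="sst_2013_07.nc" key="data/sst/2013/07.nc"/>
  <catalogRef name="2012" href="catalogs/2012.xml"/>
</catalog>
"""

def resolve_key(xml_text, logical_name):
    root = ET.fromstring(xml_text)
    for entry in root.findall("file"):
        if entry.get("name") == logical_name:
            return entry.get("key")
    return None

s3 = boto3.client("s3")
key = resolve_key(catalog_xml, "sst_2013_07.nc")
if key:
    body = s3.get_object(Bucket="example-archive-bucket", Key=key)["Body"].read()
    print(len(body), "bytes retrieved directly from S3")
```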
Automation of the CFD Process on Distributed Computing Systems
NASA Technical Reports Server (NTRS)
Tejnil, Ed; Gee, Ken; Rizk, Yehia M.
2000-01-01
A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
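The original scripts were written in UNIX shell and Perl; the sketch below is an illustrative Python analogue of the simple first-in-first-out queue described above for hosts without queueing software, launching jobs one at a time in arrival order and capturing each job's output for later post-processing. The command strings and log names are assumptions.

```python
import shlex
import subprocess
from collections import deque

class SimpleFifoQueue:
    """Run submitted commands one at a time, in the order they were submitted."""

    def __init__(self):
        self._jobs = deque()

    def submit(self, command, log_path):
        self._jobs.append((command, log_path))

    def run_all(self):
        while self._jobs:
            command, log_path = self._jobs.popleft()
            with open(log_path, "w") as log:
                subprocess.run(shlex.split(command), stdout=log,
                               stderr=subprocess.STDOUT, check=False)

queue = SimpleFifoQueue()
queue.submit("./flow_solver --input case_mach0.80.in", "case_mach0.80.log")
queue.submit("./flow_solver --input case_mach0.85.in", "case_mach0.85.log")
queue.run_all()
```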
Doing Your Science While You're in Orbit
NASA Astrophysics Data System (ADS)
Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.
2010-11-01
Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user-configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. The secure Orbiter SOA authentication and authorization are achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) data repository NeXus file field/class name full-text search capabilities within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group-defined and shared metadata for data repository files, and (e) user, group, repository, and Web 2.0 based global positioning, with additional service capabilities, are currently available. The SNS-based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best-practice implementations is presented.
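A minimal sketch of the kind of role-based access control check the VFS services perform before returning repository files is shown below; the roles, permissions, users, and actions are assumptions for illustration, not the Orbiter schema.

```python
# Role-based access control: users map to roles, roles map to allowed actions.
ROLE_PERMISSIONS = {
    "instrument_scientist": {"read", "annotate"},
    "external_collaborator": {"read"},
}
USER_ROLES = {"jdoe": {"external_collaborator"}}

def authorized(user, action):
    """Return True if any of the user's roles grants the requested action."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(authorized("jdoe", "read"))      # True
print(authorized("jdoe", "annotate"))  # False
```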
Reliable file sharing in distributed operating system using web RTC
NASA Astrophysics Data System (ADS)
Dukiya, Rajesh
2017-12-01
Since the evolution of distributed operating systems, the distributed file system has become an important part of the operating system. P2P is a reliable way of sharing files in a distributed operating system; introduced in 1999, it later became a topic of high research interest. A peer-to-peer network is a type of network in which peers share the network workload and other related tasks. A P2P network can also be a temporary connection, for example a group of computers connected by USB (Universal Serial Bus) ports to transfer files or enable disk sharing. Currently, P2P requires a special network designed in a P2P fashion. Web browsers now have a large influence on everyday computing. In this project we study file-sharing mechanisms for distributed operating systems in web browsers, where we try to find performance bottlenecks; our research aims to improve the performance and scalability of file sharing in distributed file systems. Additionally, we discuss the scope of WebTorrent file sharing and free-riding in peer-to-peer networks.
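A minimal sketch of the chunking and reassembly a browser-based peer-to-peer transfer performs over a data channel (such as a WebRTC DataChannel) follows; the chunk size and manifest format are illustrative assumptions, and the transport itself is omitted.

```python
import hashlib

CHUNK_SIZE = 16 * 1024   # stay under typical data-channel message limits

def make_chunks(path):
    """Split a file into indexed chunks and build a manifest with a whole-file digest."""
    digest = hashlib.sha256()
    chunks = []
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(CHUNK_SIZE)
            if not block:
                break
            digest.update(block)
            chunks.append((index, block))
            index += 1
    manifest = {"count": len(chunks), "sha256": digest.hexdigest()}
    return manifest, chunks

def reassemble(manifest, received, out_path):
    """Write chunks back in order on the receiving peer and verify the digest."""
    digest = hashlib.sha256()
    with open(out_path, "wb") as out:
        for index in range(manifest["count"]):
            block = received[index]
            digest.update(block)
            out.write(block)
    return digest.hexdigest() == manifest["sha256"]
```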
MIRADS-2 Implementation Manual
NASA Technical Reports Server (NTRS)
1975-01-01
The Marshall Information Retrieval and Display System (MIRADS) which is a data base management system designed to provide the user with a set of generalized file capabilities is presented. The system provides a wide variety of ways to process the contents of the data base and includes capabilities to search, sort, compute, update, and display the data. The process of creating, defining, and loading a data base is generally called the loading process. The steps in the loading process which includes (1) structuring, (2) creating, (3) defining, (4) and implementing the data base for use by MIRADS are defined. The execution of several computer programs is required to successfully complete all steps of the loading process. This library must be established as a cataloged mass storage file as the first step in MIRADS implementation. The procedure for establishing the MIRADS Library is given. The system is currently operational for the UNIVAC 1108 computer system utilizing the Executive Operating System. All procedures relate to the use of MIRADS on the U-1108 computer.
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1991-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LANs) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LANs. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File Server (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.
Public census data on CD-ROM at Lawrence Berkeley Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, D.W.
The Comprehensive Epidemiologic Data Resource (CEDR) and Populations at Risk to Environmental Pollution (PAREP) projects, of the Information and Computing Sciences Division (ICSD) at Lawrence Berkeley Laboratory (LBL), are using public socio-economic and geographic data files which are available to CEDR and PAREP collaborators via LBL's computing network. At this time 70 CD-ROM diskettes (approximately 36 gigabytes) are on line via the Unix file server cedrcd.lbl.gov. Most of the files are from the US Bureau of the Census, and most pertain to the 1990 Census of Population and Housing. All the CD-ROM diskettes contain documentation in the form of ASCII text files. Printed documentation for most files is available for inspection at University of California Data and Technical Assistance (UC DATA), or the UC Documents Library. Many of the CD-ROM diskettes distributed by the Census Bureau contain software for PC compatible computers, for easily accessing the data. Shared access to the data is maintained through a collaboration among the CEDR and PAREP projects at LBL, and UC DATA, and the UC Documents Library. Via the Sun Network File System (NFS), these data can be exported to Internet computers for direct access by the user's application program(s).
A Next-Generation Parallel File System Environment for the OLCF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul
2012-01-01
When deployed in 2008/2009, the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid-2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth from over 240 GB/sec today to 1 TB/sec and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.
LVFS: A Scalable Petabye/Exabyte Data Storage System
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.
2013-12-01
Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS-based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. The first is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset; the second is minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system. The LVFS system replaces the NFS disk mounting approach of LAADS and utilizes the already existing, highly optimized metadata database server, which is applicable to most scientific big data intensive compute systems. Thus, LVFS ties the existing storage system to the existing metadata infrastructure system, which we believe leads to a scalable exabyte virtual file system. The uniqueness of the implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing Filesystem In Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument, or product without the need for duplication, symbolic links, or any other replication methods. We propose here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata file.
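To make the directory-free lookup idea above concrete, the following Python sketch resolves a "virtual" path by querying a metadata database rather than walking a directory tree. It is only an illustration of the general approach: the SQLite table, its columns, and the path layout are invented here and are not the actual LVFS or MODAPS metadata schema.

    import sqlite3

    # Hypothetical metadata catalog: one row per granule, no directory tree.
    # The column names and path layout are assumptions for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE granules (instrument TEXT, product TEXT, date TEXT,"
                 " granule TEXT, physical_path TEXT)")
    conn.execute("INSERT INTO granules VALUES (?, ?, ?, ?, ?)",
                 ("MODIS", "MOD021KM", "2013-06-01", "MOD021KM.A2013152.0000.hdf",
                  "/archive/disk042/0007/MOD021KM.A2013152.0000.hdf"))

    def resolve(virtual_path):
        """Map /<instrument>/<product>/<date>/<granule> to its physical location."""
        _, instrument, product, date, granule = virtual_path.split("/")
        row = conn.execute(
            "SELECT physical_path FROM granules WHERE instrument=? AND product=?"
            " AND date=? AND granule=?", (instrument, product, date, granule)).fetchone()
        return row[0] if row else None

    print(resolve("/MODIS/MOD021KM/2013-06-01/MOD021KM.A2013152.0000.hdf"))

Because each "directory view" is just a query, the same granule could be exposed by date, region, or product without symbolic links or duplicated copies, which is the flexibility the abstract attributes to LVFS.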
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, Earl A.; Morris, MacDonald S.; Winkler, James L.
1999-01-05
An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system (100) is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files (104) to design and/or generate lithographic masks (110).
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, Earl A.; Morris, MacDonald S.; Winkler, James L.
1996-01-01
An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system (100) is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files (104) to design and/or generate lithographic masks (110).
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, E.A.; Morris, M.S.; Winkler, J.L.
1999-01-05
An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, E.A.; Lipshutz, R.J.; Morris, M.S.; Winkler, J.L.
1997-01-14
An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.
Computer-aided engineering system for design of sequence arrays and lithographic masks
Hubbell, E.A.; Morris, M.S.; Winkler, J.L.
1996-11-05
An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.
BIBLIO: A Reprint File Management Algorithm
ERIC Educational Resources Information Center
Zelnio, Robert N.; And Others
1977-01-01
The development of a simple computer algorithm designed for use by the individual educator or researcher in maintaining and searching reprint files is reported. Called BIBLIO, the system is inexpensive and easy to operate and maintain without sacrificing flexibility and utility. (LBH)
NASA Technical Reports Server (NTRS)
Lee, C. H.
1978-01-01
The CELFE computer program and user's manual, together with the execution of the CELFE/NASTRAN system, are described. The execution procedure and the transfer of data between the CELFE and NASTRAN programs are controlled through the use of DATA files in the Univac 1100 system. Five data files are used to control the runstream and data transfer, and three files are used to hold the programs. These files are contained on a single tape. Changes in NASTRAN routines required by the present analysis are also discussed in this report. All the program listings, except the last two files (where the absolute and relocatable elements are stored), are included in the appendixes.
Efficient proof of ownership for cloud storage systems
NASA Astrophysics Data System (ADS)
Zhong, Weiwei; Liu, Zhusong
2017-08-01
Cloud storage systems use deduplication technology to save disk space and bandwidth, but the use of this technology has given rise to targeted security attacks: an attacker can deceive the server into granting ownership of a file by obtaining only the hash value of the original file. To solve this security problem and to address the different security requirements of files in the cloud storage system, an efficient and information-theoretically secure proof-of-ownership scheme that supports file rating is proposed. File rating is implemented with the K-means algorithm, and a random-seed technique and a pre-calculation method are used to achieve a safe and efficient proof-of-ownership scheme. Finally, the scheme is information-theoretically secure and achieves better performance in the most sensitive areas of client-side I/O and computation.
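To make the attack and a simple countermeasure concrete, the sketch below contrasts a hash-only check (which an attacker holding just the digest can pass) with a block-level challenge-response in which the server asks for hashes of randomly chosen file blocks. This is a generic illustration, not the rated, information-theoretically secure scheme of the paper; the block size and number of challenges are arbitrary choices.

    import hashlib
    import os
    import random

    BLOCK = 4096  # illustrative block size

    def file_digest(path):
        """Weak check: whole-file hash. Anyone who learns this digest can 'prove' ownership."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(BLOCK), b""):
                h.update(chunk)
        return h.hexdigest()

    def challenge(path, num_blocks=8, seed=None):
        """Server side: pick random block indices the client must answer for."""
        size = os.path.getsize(path)
        total = max(1, (size + BLOCK - 1) // BLOCK)
        rng = random.Random(seed)
        return rng.sample(range(total), min(num_blocks, total))

    def respond(path, indices):
        """Client side: hash the requested blocks; this requires the actual file content."""
        answers = []
        with open(path, "rb") as f:
            for i in indices:
                f.seek(i * BLOCK)
                answers.append(hashlib.sha256(f.read(BLOCK)).hexdigest())
        return answers

    if __name__ == "__main__":
        with open("demo.bin", "wb") as f:
            f.write(os.urandom(64 * 1024))
        idx = challenge("demo.bin", seed=42)
        assert respond("demo.bin", idx) == respond("demo.bin", idx)
        print("ownership check passed for", len(idx), "random blocks")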
Parallel compression of data chunks of a shared data object using a log-structured file system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-10-25
Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be de-compressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
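A minimal sketch of the write path described above: each client compresses its own chunk, and the storage side appends the compressed bytes as a log entry, recording logical offset, log offset, and lengths so the chunk can be located and decompressed on read. This illustrates the general log-structured idea only, not the PLFS/burst-buffer implementation; the index layout is an assumption.

    import zlib

    log = bytearray()   # append-only data log standing in for the shared object
    index = []          # (logical_offset, logical_len, log_offset, stored_len)

    def write_chunk(logical_offset, data):
        """Client side: compress the chunk; storage side: append it to the log."""
        compressed = zlib.compress(data, 6)
        index.append((logical_offset, len(data), len(log), len(compressed)))
        log.extend(compressed)

    def read_chunk(logical_offset):
        """Locate the chunk in the index and decompress it on read."""
        for lo, llen, off, slen in index:
            if lo == logical_offset:
                return zlib.decompress(bytes(log[off:off + slen]))
        raise KeyError(logical_offset)

    # Two "processes" writing disjoint chunks of the same shared object.
    write_chunk(0, b"A" * 8192)
    write_chunk(8192, b"B" * 8192)
    assert read_chunk(8192) == b"B" * 8192
    print("stored", len(log), "bytes for", sum(i[1] for i in index), "logical bytes")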
A Computer System for a Union Catalog: Theme and Variations *
Felter, Jacqueline W.; Tjoeng, Djoeng S.
1965-01-01
This article describes a computer system for the generation and maintenance of a union catalog of periodicals and for printouts of both the entire file and selected portions. Although the system was designed to meet the specifications of the Union Catalog of Medical Periodicals of New York, its use is not limited. Only the basic file maintenance program is indispensable; the subsidiary programs may be used as needed. The scope and content of the catalog are determined by the input. The preparation of the input is described in detail, with comment on the keypunching of library records. Applications to other kinds of catalogs are suggested. PMID:14271111
Installation and management of the SPS and LEP control system computers
NASA Astrophysics Data System (ADS)
Bland, Alastair
1994-12-01
Control of the CERN SPS and LEP accelerators and service equipment on the two CERN main sites is performed via workstations, file servers, Process Control Assemblies (PCAs) and Device Stub Controllers (DSCs). This paper describes the methods and tools that have been developed to manage the file servers, PCAs and DSCs since the LEP startup in 1989. There are five operational DECstation 5000s used as file servers and boot servers for the PCAs and DSCs. The PCAs consist of 90 SCO Xenix 386 PCs, 40 LynxOS 486 PCs and more than 40 older NORD 100s. The DSCs consist of 90 OS-968030 VME crates and 10 LynxOS 68030 VME crates. In addition there are over 100 development systems. The controls group is responsible for installing the computers, starting all the user processes and ensuring that the computers and the processes run correctly. The operators in the SPS/LEP control room and the Services control room have a Motif-based X window program which gives them, in real time, the state of all the computers and allows them to solve problems or reboot them.
Performance regression manager for large scale systems
Faraj, Daniel A.
2017-10-17
System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
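The comparison step described above can be sketched as follows: two output files in an assumed "metric = value" format are parsed, and each metric is compared against a tolerance. The file format, tolerance, and report layout are illustrative assumptions, not the patented implementation.

    def load_metrics(path):
        """Parse an output file with one 'metric = value' pair per line (assumed format)."""
        metrics = {}
        with open(path) as f:
            for line in f:
                if "=" in line:
                    name, value = line.split("=", 1)
                    metrics[name.strip()] = float(value)
        return metrics

    def compare(baseline_path, current_path, tolerance=0.05):
        """Report metrics whose relative change exceeds the tolerance."""
        base = load_metrics(baseline_path)
        cur = load_metrics(current_path)
        report = {}
        for name in base.keys() & cur.keys():
            change = (cur[name] - base[name]) / base[name] if base[name] else 0.0
            report[name] = ("REGRESSION" if abs(change) > tolerance else "ok", change)
        return report

    # Example use: compare("run1.out", "run2.out") -> {"bandwidth_MBps": ("ok", 0.01), ...}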
Briel, L.I.
1993-01-01
A computer program was written to produce 6 different types of water-quality diagrams--Piper, Stiff, pie, X-Y, boxplot, and Piper 3-D--from the same file of input data. The Piper 3-D diagram is a new method that projects values from the surface of a Piper plot into a triangular prism to show how variations in chemical composition can be related to variations in other water-quality variables. This program is an analytical tool to aid in the interpretation of data. This program is interactive, and the user can select from a menu the type of diagram to be produced and a large number of individual features. Alternatively, these choices can be specified in the data file, which provides a batch mode for running the program. The program does not display water-quality diagrams directly; plots are written to a file. Four different plot-file formats are available: device-independent metafiles, Adobe PostScript graphics files, and two Hewlett-Packard graphics language formats (7475 and 7586). An ASCII data-table file is also produced to document the computed values. This program is written in Fortran '77 and uses graphics subroutines from either the PRIOR AGTK or the DISSPLA graphics library. The program has been implemented on Prime series 50 and Data General Aviion computers within the USGS; portability to other computing systems depends on the availability of the graphics library.
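For readers unfamiliar with how a sample is placed on a Piper plot, the sketch below converts major-ion concentrations from mg/L to milliequivalents per litre and computes the cation and anion percentages that fix the sample's position in the two trilinear fields. The equivalent weights are standard textbook values; this is a generic hydrochemistry calculation and not code from the USGS program.

    # Approximate equivalent weights (grams per equivalent) for the major ions.
    EQ_WEIGHT = {"Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10,
                 "Cl": 35.45, "SO4": 48.03, "HCO3": 61.02}

    def piper_percentages(mg_per_l):
        """Return (%Ca, %Mg, %Na+K) and (%HCO3, %SO4, %Cl) in milliequivalent units."""
        meq = {ion: mg_per_l.get(ion, 0.0) / EQ_WEIGHT[ion] for ion in EQ_WEIGHT}
        cations = meq["Ca"] + meq["Mg"] + meq["Na"] + meq["K"]
        anions = meq["HCO3"] + meq["SO4"] + meq["Cl"]
        cat_pct = tuple(100 * x / cations for x in
                        (meq["Ca"], meq["Mg"], meq["Na"] + meq["K"]))
        an_pct = tuple(100 * x / anions for x in
                       (meq["HCO3"], meq["SO4"], meq["Cl"]))
        return cat_pct, an_pct

    # Invented sample concentrations in mg/L, used only to show the call.
    sample = {"Ca": 60, "Mg": 20, "Na": 30, "K": 3, "HCO3": 180, "SO4": 50, "Cl": 25}
    print(piper_percentages(sample))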
Tools for Administration of a UNIX-Based Network
NASA Technical Reports Server (NTRS)
LeClaire, Stephen; Farrar, Edward
2004-01-01
Several computer programs have been developed to enable efficient administration of a large, heterogeneous, UNIX-based computing and communication network that includes a variety of computers connected to a variety of subnetworks. One program provides secure software tools for administrators to create, modify, lock, and delete accounts of specific users. This program also provides tools for users to change their UNIX passwords and log-in shells. These tools check for errors. Another program comprises a client and a server component that, together, provide a secure mechanism to create, modify, and query quota levels on a network file system (NFS) mounted by use of the VERITAS File System software. The client software resides on an internal secure computer with a secure Web interface; one can gain access to the client software from any authorized computer capable of running web-browser software. The server software resides on a UNIX computer configured with the VERITAS software system. Directories where VERITAS quotas are applied are NFS-mounted. Another program is a Web-based, client/server Internet Protocol (IP) address tool that facilitates maintenance lookup of information about IP addresses for a network of computers.
ERIC Educational Resources Information Center
Sullivan, Todd
Using an IBM System/360 Model 50 computer, the New York Statewide Film Library Network schedules film use, reports on materials handling and statistics, and provides for interlibrary loan of films. Communications between the film libraries and the computer are maintained by Teletype model 33 ASR Teletypewriter terminals operating on TWX…
Decryption-decompression of AES protected ZIP files on GPUs
NASA Astrophysics Data System (ADS)
Duong, Tan Nhat; Pham, Phong Hong; Nguyen, Duc Huu; Nguyen, Thuy Thanh; Le, Hung Duc
2011-10-01
AES is a strong encryption system, so decryption-decompression of AES-encrypted ZIP files requires very large computing power and techniques for reducing the password space. This makes implementations of such techniques on common computing systems impractical. In [1], we reduced the original, very large password search space to a much smaller one that is guaranteed to contain the correct password. Based on the reduced set of passwords, in this paper we parallelize decryption, decompression and plain-text recognition for encrypted ZIP files by using CUDA computing technology on NVIDIA GeForce GTX295 graphics cards to find the correct password. The experimental results show that the speed of decrypting, decompressing, recognizing plain text and finding the original password increases by a factor of about 45 to 180 (depending on the number of GPUs) compared to sequential execution on an Intel Core 2 Quad Q8400 2.66 GHz. These results demonstrate the potential applicability of GPUs in this field of cryptanalysis.
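Independently of the GPU kernels, the parallel search itself can be sketched as partitioning a reduced password set across worker processes, each of which attempts to open the archive. The archive name is hypothetical, and the helper below relies on the standard-library zipfile module, which only handles legacy ZipCrypto decryption; a genuinely AES-encrypted ZIP would need a third-party library, and the paper's GPU implementation is not reproduced here.

    import multiprocessing as mp
    import zipfile

    ARCHIVE = "protected.zip"  # hypothetical input archive

    def try_password(password):
        """Return the password if it opens the archive, else None.
        Note: stdlib zipfile decrypts only legacy ZipCrypto, not AES entries."""
        try:
            with zipfile.ZipFile(ARCHIVE) as zf:
                first = zf.namelist()[0]
                zf.read(first, pwd=password.encode())
            return password
        except (RuntimeError, NotImplementedError, OSError, zipfile.BadZipFile):
            return None

    def search(candidates, workers=4):
        """Fan candidate passwords out across processes; stop at the first hit."""
        with mp.Pool(workers) as pool:
            for result in pool.imap_unordered(try_password, candidates, chunksize=64):
                if result is not None:
                    pool.terminate()
                    return result
        return None

    if __name__ == "__main__":
        reduced_set = ("pass%04d" % i for i in range(10000))  # stands in for the reduced space
        print(search(reduced_set))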
Satellite freeze forecast system. System configuration definition manual
NASA Technical Reports Server (NTRS)
Martsolf, J. D. (Principal Investigator)
1983-01-01
Equipment listings, interconnection information, and a basic overview of the hardware interaction of the Ruskin HP-1000 computer system are given. A block diagram is included of the SFFS system at the National Weather Service Office in Ruskin, Florida. The generation answer file used to create the RTE-IVB operating system currently resident in the Ruskin HP-1000 computer system is also described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadayappan, Ponnuswamy
Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. We propose a new approach to the data and work distribution model provided by system software based on the unifying formalism of an abstract file system. The proposed hierarchical data model provides simple, familiar visibility and access to data structures through the file system hierarchy, while providing fault tolerance through selective redundancy. The hierarchical task model features work queues whose form and organization are represented as file system objects. Data and work are both first class entities. By exposing the relationships between data and work to the runtime system, information is available to optimize execution time and provide fault tolerance. The data distribution scheme provides replication (where desirable and possible) for fault tolerance and efficiency, and it is hierarchical to make it possible to take advantage of locality. The user, tools, and applications, including legacy applications, can interface with the data, work queues, and one another through the abstract file model. This runtime environment will provide multiple interfaces to support traditional Message Passing Interface applications, languages developed under DARPA's High Productivity Computing Systems program, as well as other, experimental programming models. We will validate our runtime system with pilot codes on existing platforms and will use simulation to validate for exascale-class platforms. In this final report, we summarize research results from the work done at the Ohio State University towards the larger goals of the project listed above.
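The idea of representing work queues as file system objects can be illustrated with a small sketch: each task is a file in a "pending" directory, and a worker claims a task with an atomic rename so that exactly one worker processes it. This is only an analogy to the abstract file-system model proposed above; the directory names and task format are invented.

    import os

    QUEUE = "queue/pending"    # tasks waiting to run (invented layout)
    CLAIMED = "queue/claimed"  # tasks a worker has taken

    def submit(task_id, payload):
        """Create one file per task; the file body is the task description."""
        os.makedirs(QUEUE, exist_ok=True)
        with open(os.path.join(QUEUE, task_id), "w") as f:
            f.write(payload)

    def claim():
        """Atomically move one pending task into the claimed directory.
        os.rename is atomic on POSIX file systems, so two workers cannot both win."""
        os.makedirs(CLAIMED, exist_ok=True)
        for name in sorted(os.listdir(QUEUE)):
            src = os.path.join(QUEUE, name)
            dst = os.path.join(CLAIMED, name)
            try:
                os.rename(src, dst)
                return dst
            except OSError:
                continue  # another worker claimed it first
        return None

    submit("task-0001", "simulate block 17")
    print(claim())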
Computation of Flow Through Water-Control Structures Using Program DAMFLO.2
Sanders, Curtis L.; Feaster, Toby D.
2004-01-01
As part of its mission to collect, analyze, and store streamflow data, the U.S. Geological Survey computes flow through several dam structures throughout the country. Flows are computed using hydraulic equations that describe flow through sluice and Tainter gates, crest gates, lock gates, spillways, locks, pumps, and siphons, which are calibrated using flow measurements. The program DAMFLO.2 was written to compute, tabulate, and plot flow through dam structures using data that describe the physical properties of dams and various hydraulic parameters and ratings that use time-varying data, such as lake elevations or gate openings. The program uses electronic computer files of time-varying data, such as lake elevation or gate openings, retrieved from the U.S. Geological Survey Automated Data Processing System. Computed time-varying flow data from DAMFLO.2 are output in flat files, which can be entered into the Automated Data Processing System database. All computations are made in units of feet and seconds. DAMFLO.2 uses the procedures and language developed by the SAS Institute Inc.
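As an example of the kind of hydraulic relation such a program calibrates, the sketch below evaluates the standard orifice-type equation for free flow under a sluice gate, Q = Cd * b * w * sqrt(2 * g * h). The discharge coefficient and dimensions are placeholders; DAMFLO.2's own gate equations and calibration procedure are documented in the report and are not reproduced here.

    import math

    G = 32.174  # gravitational acceleration, ft/s^2 (the program works in feet and seconds)

    def sluice_gate_flow(head_ft, gate_opening_ft, gate_width_ft, cd=0.6):
        """Free-flow discharge through a sluice gate, in cubic feet per second.
        cd is a calibration coefficient; 0.6 is only a typical textbook value."""
        area = gate_opening_ft * gate_width_ft
        return cd * area * math.sqrt(2.0 * G * head_ft)

    # Example: 10 ft head, 2 ft gate opening, 30 ft wide gate.
    print(round(sluice_gate_flow(10.0, 2.0, 30.0), 1), "ft^3/s")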
Shaping Ability of Single-file Systems with Different Movements: A Micro-computed Tomographic Study.
Santa-Rosa, Joedy; de Sousa-Neto, Manoel Damião; Versiani, Marco Aurelio; Nevares, Giselle; Xavier, Felipe; Romeiro, Kaline; Cassimiro, Marcely; Leoni, Graziela Bianchi; de Menezes, Rebeca Ferraz; Albuquerque, Diana
2016-01-01
This study aimed to perform a rigorous sample standardization and also evaluate the preparation of mesiobuccal (MB) root canals of maxillary molars with severe curvatures using two single-file engine-driven systems (WaveOne with reciprocating motion and OneShape with rotary movement), using micro-computed tomography (micro-CT). Ten MB roots with single canals were included, uniformly distributed into two groups (n=5). The samples were prepared with WaveOne or OneShape files. The shaping ability and amount of canal transportation were assessed by a comparison of the pre- and post-instrumentation micro-CT scans. The Kolmogorov-Smirnov and t-tests were used for statistical analysis. The level of significance was set at 0.05. Instrumentation of canals increased their surface area and volume. Canal transportation occurred in the coronal, middle and apical thirds, and no statistical difference was observed between the two systems (P>0.05). In the apical third, significant differences were found between groups in canal roundness (at the 3 mm level) and perimeter (at the 3 and 4 mm levels) (P<0.05). The WaveOne and OneShape single-file systems were able to shape curved root canals, producing minor changes in the canal curvature.
ERIC Educational Resources Information Center
Jul, Erik
1992-01-01
Describes the use of file transfer protocol (FTP) on the INTERNET computer network and considers its use as an electronic publishing system. The differing electronic formats of text files are discussed; the preparation and access of documents are described; and problems are addressed, including a lack of consistency. (LRW)
In-House Automation of a Small Library Using a Mainframe Computer.
ERIC Educational Resources Information Center
Waranius, Frances B.; Tellier, Stephen H.
1986-01-01
An automated library routine management system was developed in-house to create a system unique to the Library and Information Center, Lunar and Planetary Institute, Houston, Texas. A modular approach was used to allow continuity in operations and services as the system was implemented. Acronyms and computer accounts and file names are appended.…
Distributed Computing for the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Chudoba, J.
2015-12-01
The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system also has the possibility of using available resources in clouds. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and the new production system and report the experience of migrating to the new system.
Plouff, Donald
1998-01-01
Computer programs were written in the Fortran language to process and display gravity data with locations expressed in geographic coordinates. The programs and associated processes have been tested for gravity data in an area of about 125,000 square kilometers in northwest Nevada, southeast Oregon, and northeast California. This report discusses the geographic aspects of data processing. Utilization of the programs begins with application of a template (printed in PostScript format) to transfer locations obtained with Global Positioning Systems to and from field maps and includes a 5-digit geographic-based map naming convention for field maps. Computer programs, with source codes that can be copied, are used to display data values (printed in PostScript format) and data coverage, insert data into files, extract data from files, shift locations, test for redundancy, and organize data by map quadrangles. It is suggested that 30-meter Digital Elevation Models needed for gravity terrain corrections and other applications should be accessed in a file search by using the USGS 7.5-minute map name as a file name, for example, file '40117_B8.DEM' contains elevation data for the map with a southeast corner at lat 40° 07' 30" N. and lon 117° 52' 30" W.
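The 5-digit geographic naming convention can be illustrated with a short sketch that builds a quadrangle file name from the latitude and longitude of a 7.5-minute map's southeast corner: the degrees of latitude and longitude supply the digits, a letter indexes the 7.5-minute row north of the degree line, and a digit indexes the column west of it. This reconstruction is inferred from the single example given in the report ('40117_B8.DEM' for 40° 07' 30" N, 117° 52' 30" W) and may differ in detail from the actual convention.

    def quad_name(se_lat_deg, se_lat_min, se_lon_deg, se_lon_min):
        """Name of the 7.5-minute quadrangle whose southeast corner is given.
        Inferred convention: lat degrees + lon degrees + '_' + row letter + column digit."""
        row = int(round(se_lat_min / 7.5))        # 7.5-minute rows north of the degree line
        col = int(round(se_lon_min / 7.5)) + 1    # 7.5-minute columns west of the degree line
        letter = "ABCDEFGH"[row]
        return "%02d%03d_%s%d.DEM" % (se_lat_deg, se_lon_deg, letter, col)

    # Reproduces the example from the report: 40 deg 07'30" N, 117 deg 52'30" W -> 40117_B8.DEM
    print(quad_name(40, 7.5, 117, 52.5))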
Integrated geometry and grid generation system for complex configurations
NASA Technical Reports Server (NTRS)
Akdag, Vedat; Wulf, Armin
1992-01-01
A grid generation system was developed that enables grid generation for complex configurations. The system called ICEM/CFD is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, interfacing easily with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block to block interface requirement, and generating grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements which is required in an environment where CFD is integrated into a product design cycle.
Characterizing output bottlenecks in a supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Bing; Chase, Jeffrey; Dillow, David A
2012-01-01
Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
An introduction to the Marshall information retrieval and display system
NASA Technical Reports Server (NTRS)
1974-01-01
An on-line terminal oriented data storage and retrieval system is presented which allows a user to extract and process information from stored data bases. The use of on-line terminals for extracting and displaying data from the data bases provides a fast and responsive method for obtaining needed information. The system consists of general purpose computer programs that provide the overall capabilities of the total system. The system can process any number of data files via a Dictionary (one for each file) which describes the data format to the system. New files may be added to the system at any time, and reprogramming is not required. Illustrations of the system are shown, and sample inquiries and responses are given.
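The dictionary-per-file idea, which lets new files be added without reprogramming, can be illustrated by a small sketch: a dictionary describes the fixed-width layout of a data file, and a generic routine uses it to slice each record into named fields. The field names and layout here are invented for illustration; MIRADS's actual dictionary format is described in its manuals.

    # Hypothetical dictionary: field name, starting column (0-based), width.
    PERSONNEL_DICT = [("name", 0, 20), ("center", 20, 10), ("badge", 30, 6)]

    def read_records(lines, dictionary):
        """Slice each fixed-width record into named fields according to the dictionary."""
        for line in lines:
            yield {name: line[start:start + width].strip()
                   for name, start, width in dictionary}

    # Two made-up fixed-width records built to match the dictionary above.
    data = ["SMITH, JANE".ljust(20) + "MSFC".ljust(10) + "004217",
            "DOE, JOHN".ljust(20) + "KSC".ljust(10) + "009981"]
    for record in read_records(data, PERSONNEL_DICT):
        print(record)

Swapping in a different dictionary lets the same routine read a differently formatted file, which is the sense in which new files can be added to such a system without reprogramming.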
Özkocak, I; Taşkan, M M; Gökt Rk, H; Aytac, F; Karaarslan, E Şirin
2015-01-01
The aim of this study is to evaluate increases in temperature on the external root surface during endodontic treatment with different rotary systems. Fifty human mandibular incisors with a single root canal were selected. All root canals were instrumented using a size 20 Hedstrom file, and the canals were irrigated with 5% sodium hypochlorite solution. The samples were randomly divided into the following three groups of 15 teeth: Group 1: The OneShape Endodontic File no.: 25; Group 2: The Reciproc Endodontic File no.: 25; Group 3: The WaveOne Endodontic File no.: 25. During the preparation, the temperature changes were measured in the middle third of the roots using a noncontact infrared thermometer. The temperature data were transferred from the thermometer to the computer and were observed graphically. Statistical analysis was performed using the Kruskal-Wallis analysis of variance at a significance level of 0.05. The increases in temperature caused by the OneShape file system were lower than those of the other files (P < 0.05). The WaveOne file showed the highest temperature increases. However, there were no significant differences between the Reciproc and WaveOne files. The single file rotary systems used in this study may be recommended for clinical use.
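The statistical step mentioned above (a Kruskal-Wallis test across the three instrument groups at the 0.05 level) looks like the following in practice. The temperature values are made up solely to show the call; they are not data from the study.

    from scipy import stats

    # Invented temperature rises (degrees C) for three instrument groups.
    oneshape = [2.1, 2.4, 1.9, 2.2, 2.0]
    reciproc = [3.0, 3.4, 2.8, 3.1, 3.3]
    waveone = [3.2, 3.6, 3.0, 3.5, 3.4]

    h_stat, p_value = stats.kruskal(oneshape, reciproc, waveone)
    print("H = %.2f, p = %.4f -> %s at the 0.05 level"
          % (h_stat, p_value, "significant" if p_value < 0.05 else "not significant"))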
Issues in ATM Support of High-Performance, Geographically Distributed Computing
NASA Technical Reports Server (NTRS)
Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G
1995-01-01
This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.
Computerisation of diabetic clinic records.
Watkins, G B; Sutcliffe, T; Pyke, D A; Watkins, P J
1980-01-01
A simple system for putting diabetic records on a computer file is achieved by using stationery that combines the usual handwritten records (not computerised) with the minimum of essential data suitable for punching on to computer tape. The record may be brought up to date at a selected time interval. This simple, cheap system has been in use in a busy clinic for six years. The information on about 8000 diabetics now held in the computer file is used chiefly to help research by creating registers of patients with specified characteristics, such as treatment, heredity, complications, and pregnancy. A complete up-to-date index of the entire clinic population is always available, and routine clinic statistics are returned every six months. PMID:7437814
CAPRI: Using a Geometric Foundation for Computational Analysis and Design
NASA Technical Reports Server (NTRS)
Haimes, Robert
2002-01-01
CAPRI (Computational Analysis Programming Interface) is a software development tool intended to make computerized design, simulation and analysis faster and more efficient. The computational steps traditionally taken for most engineering analysis (Computational Fluid Dynamics (CFD), structural analysis, etc.) are: Surface Generation, usually by employing a Computer Aided Design (CAD) system; Grid Generation, preparing the volume for the simulation; Flow Solver, producing the results at the specified operational point; Post-processing Visualization, interactively attempting to understand the results. It should be noted that the structures problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could go to IGES files. The outputs from Grid Generators and Solvers do not really have standards, though there are a couple of file formats that can be used for a subset of the gridding data (i.e., PLOT3D data formats and the upcoming CGNS). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Instead of the serial approach to analysis, CAPRI takes a geometry centric approach. CAPRI is a software building tool-kit that refers to two ideas: (1) A simplified, object-oriented, hierarchical view of a solid part integrating both geometry and topology definitions, and (2) programming access to this part or assembly and any attached data. The connection to the geometry is made through an Application Programming Interface (API) and not a file system.
NASA-IGES Translator and Viewer
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Logan, Michael A.
1995-01-01
NASA-IGES Translator (NIGEStranslator) is a batch program that translates a general IGES (Initial Graphics Exchange Specification) file to a NASA-IGES-Nurbs-Only (NINO) file. IGES is the most popular geometry exchange standard among Computer Aided Geometric Design (CAD) systems. NINO format is a subset of IGES, implementing the simple and yet the most popular NURBS (Non-Uniform Rational B-Splines) representation. NIGEStranslator converts a complex IGES file to the simpler NINO file to simplify the tasks of CFD grid generation for models in CAD format. The NASA-IGES Viewer (NIGESview) is an Open-Inventor-based, highly interactive viewer/editor for NINO files. Geometry in the IGES files can be viewed, copied, transformed, deleted, and inquired. Users can use NIGEStranslator to translate IGES files from CAD systems to NINO files. The geometry then can be examined with NIGESview. Extraneous geometries can be interactively removed, and the cleaned model can be written to an IGES file, ready to be used in grid generation.
Drill user's manual. [drilling machine automation
NASA Technical Reports Server (NTRS)
Pitts, E. A.
1976-01-01
Instructions are given for using the DRILL computer program, which converts data contained in an Interactive Computer Graphics System (IGDS) design file into a paper tape for driving a numerically controlled drilling machine.
Performance regression manager for large scale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
76 FR 22682 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-22
...: Maintained in file folders and computer storage media. Retrievability: Retrieved by name and/or Social... folders and computer storage media.'' * * * * * System Manager(s) and address: Delete entry and replace... provide their full name, Social Security Number (SSN), any details which may assist in locating records...
Tuning HDF5 subfiling performance on parallel file systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey
Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach that instigates the lock contention problems on parallel file systems and having one file per process, which results in generating a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature with parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a 1.2X to 6X performance advantage with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations with using the subfiling feature.
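For context, the "single shared file" baseline that subfiling is measured against is the standard collective-write pattern shown below, using h5py's MPI-IO driver: each rank writes its own slice of one dataset in one file. This requires an MPI-enabled build of h5py; the subfiling configuration itself is version-dependent and is not shown here.

    # Run with, e.g.: mpiexec -n 4 python shared_file_write.py
    from mpi4py import MPI
    import h5py
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, nprocs = comm.Get_rank(), comm.Get_size()
    n_per_rank = 1024

    # One shared HDF5 file; every rank writes its own contiguous slice.
    with h5py.File("shared.h5", "w", driver="mpio", comm=comm) as f:
        dset = f.create_dataset("data", (nprocs * n_per_rank,), dtype="f8")
        start = rank * n_per_rank
        dset[start:start + n_per_rank] = np.full(n_per_rank, rank, dtype="f8")

    if rank == 0:
        print("wrote one shared file from", nprocs, "ranks")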
Computer Storage and Retrieval of Position - Dependent Data.
1982-06-01
This thesis covers the design of a new digital database system to replace the merged (observation and geographic location) record, one file per cruise... ("The Digital Data Library System: Library Storage and Retrieval of Digital Geophysical Data" by Robert C. Groan) provided a relatively simple... dependent, 'geophysical' data. The system is operational on a Digital Equipment Corporation VAX-11/780 computer. Values of measured and computed
Access to Inter-Organization Computer Networks.
1985-08-01
management of computing and information systems, system management. When two...necessary control mechanisms. Message-based gateways that support non-real-time invocation of services (e.g., file and print servers, financial ...operations (C.2.3), electronic mail (H.4.3), public policy issues (K.4.1), organizational impacts (K.4.3), management of computing and information systems (K.6...
Construction of image database for newspaper articles using CTS
NASA Astrophysics Data System (ADS)
Kamio, Tatsuo
Nihon Keizai Shimbun, Inc. developed a system for building an image database of articles automatically by use of CTS (Computer Typesetting System). In addition to the articles and headlines entered into CTS, it reproduces the images of elements such as photographs and graphs for each article, in accordance with their position information on the page. In effect, the computer itself clips the articles out of the newspaper. The image database is accumulated in magnetic and optical files and is output to users' facsimile machines. With the diffusion of CTS, the number of newspaper companies beginning to build article databases has increased rapidly; this system is the first attempt to build such a database automatically. This paper describes the CTS device that supports this system and gives an outline of the system.
NASA Technical Reports Server (NTRS)
Snyder, W. V.; Hanson, R. J.
1986-01-01
Text Exchange System (TES) exchanges and maintains organized textual information including source code, documentation, data, and listings. System consists of two computer programs and definition of format for information storage. Comprehensive program used to create, read, and maintain TES files. TES developed to meet three goals: First, easy and efficient exchange of programs and other textual data between similar and dissimilar computer systems via magnetic tape. Second, provide transportable management system for textual information. Third, provide common user interface, over wide variety of computing systems, for all activities associated with text exchange.
Performance of the engineering analysis and data system 2 common file system
NASA Technical Reports Server (NTRS)
Debrunner, Linda S.
1993-01-01
The Engineering Analysis and Data System (EADS) was used from April 1986 to July 1993 to support large scale scientific and engineering computation (e.g. computational fluid dynamics) at Marshall Space Flight Center. The need for an updated system resulted in a RFP in June 1991, after which a contract was awarded to Cray Grumman. EADS II was installed in February 1993, and by July 1993 most users were migrated. EADS II is a network of heterogeneous computer systems supporting scientific and engineering applications. The Common File System (CFS) is a key component of this system. The CFS provides a seamless, integrated environment to the users of EADS II including both disk and tape storage. UniTree software is used to implement this hierarchical storage management system. The performance of the CFS suffered during the early months of the production system. Several of the performance problems were traced to software bugs which have been corrected. Other problems were associated with hardware. However, the use of NFS in UniTree UCFM software limits the performance of the system. The performance issues related to the CFS have led to a need to develop a greater understanding of the CFS organization. This paper will first describe the EADS II with emphasis on the CFS. Then, a discussion of mass storage systems will be presented, and methods of measuring the performance of the Common File System will be outlined. Finally, areas for further study will be identified and conclusions will be drawn.
14 CFR 302.507 - Computing time for filing complaints.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing time for filing complaints. 302... Proceedings With Respect to Rates, Fares and Charges for Foreign Air Transportation § 302.507 Computing time for filing complaints. In computing the time for filing formal complaints pursuant to § 302.506, with...
Agarwal, Rolly S; Agarwal, Jatin; Jain, Pradeep; Chandra, Anil
2015-05-01
The ability of an endodontic instrument to remain centered in the root canal system is one of the most important characteristics influencing the clinical performance of a particular file system. Thus, it is important to assess the canal centering ability of newly introduced single file systems before they can be considered a viable replacement for full-sequence rotary file systems. The aim of the study was to compare the canal transportation, centering ability, and time taken for preparation of curved root canals after instrumentation with single file systems One Shape and Wave One, using cone-beam computed tomography (CBCT). Sixty mesiobuccal canals of mandibular molars with an angle of curvature ranging from 20° to 35° were divided into three groups of 20 samples each: ProTaper PT (group I), the full-sequence rotary control group; OneShape OS (group II), single file with continuous rotation; and WaveOne WO (group III), single file with reciprocal motion. Pre-instrumentation and post-instrumentation three-dimensional CBCT images were obtained from root cross-sections at 3 mm, 6 mm, and 9 mm from the apex. Scanned images were then assessed to determine canal transportation and centering ability. The data collected were evaluated using one-way analysis of variance (ANOVA) with Tukey's honestly significant difference test. It was observed that there were no differences in the magnitude of transportation between the rotary instruments (p > 0.05) at both 3 mm and 6 mm from the apex. At 9 mm from the apex, Group I PT showed significantly higher mean canal transportation and lower centering ability (0.19±0.08 and 0.39±0.16), as compared to Group II OS (0.12±0.07 and 0.54±0.24) and Group III WO (0.13±0.06 and 0.55±0.18), while the differences between OS and WO were not statistically significant. It was concluded that there was only a minor difference between the tested groups. Single file systems demonstrated average canal transportation and centering ability comparable to the full-sequence ProTaper system in curved root canals.
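The transportation and centering values reported in studies like this one are commonly computed with the formulae of Gambill et al. from wall thicknesses measured on matched pre- and post-instrumentation cross-sections: transportation = (a1 - a2) - (b1 - b2), and the centering ratio is the smaller of the two thickness changes divided by the larger. The sketch below applies those formulae to made-up measurements; it is not the authors' measurement software.

    def transportation(a1, a2, b1, b2):
        """Gambill-style transportation: (a1 - a2) - (b1 - b2).
        a/b are wall thicknesses on opposite sides; 1 = before, 2 = after preparation.
        A result of 0 means no transportation; the sign gives the direction."""
        return (a1 - a2) - (b1 - b2)

    def centering_ratio(a1, a2, b1, b2):
        """Ratio of the smaller wall-thickness change to the larger (1.0 = perfectly centered)."""
        da, db = a1 - a2, b1 - b2
        if max(abs(da), abs(db)) == 0:
            return 1.0
        return min(abs(da), abs(db)) / max(abs(da), abs(db))

    # Made-up measurements (mm) at one cross-section, 3 mm from the apex.
    print(transportation(1.10, 0.95, 1.05, 0.98))               # 0.08 mm toward side "a"
    print(round(centering_ratio(1.10, 0.95, 1.05, 0.98), 2))    # 0.47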
ERIC Educational Resources Information Center
Roberson, E. Wayne; Glowinski, Debra J.
The Computer Assisted Diagnostic Prescriptive Program (CADPP) is a customized databased curriculum management system which permits the user to load the following into a filing/retrieval software system: (1) learning characteristics of individual students (e.g., age, instructional level, learning modality); (2) skill-oriented characteristics of…
76 FR 43278 - Privacy Act; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-20
... computer (PC). The Security Management Officer's office remains locked when not in use. RETENTION AND... records to include names, addresses, social security numbers, service computation dates, leave usage data... that resides on a desktop computer. RETRIEVABILITY: Records maintained in file folders are indexed and...
NASA Astrophysics Data System (ADS)
Poat, M. D.; Lauret, J.; Betts, W.
2015-12-01
The STAR online computing infrastructure has become an intensive dynamic system used for first-hand data collection and analysis, resulting in a dense collection of data output. As we have transitioned to our current state, inefficient, limited storage systems have become an impediment to fast feedback to online shift crews. A centrally accessible, scalable, and redundant distributed storage system had become a necessity in this environment. OpenStack Swift Object Storage and Ceph Object Storage are two eye-opening technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests, emulating real-world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX compliant file system mounted similarly to an NFS share, was of particular interest as it aligned with our requirements and was retained as our solution. I/O performance tests were run against the Ceph POSIX file system and have presented surprising results indicating true potential for fast I/O and reliability. The STAR online compute farm has historically been used for job submission and first-hand data analysis. Reusing the online compute farm to maintain a storage cluster alongside job submission will be an efficient use of the current infrastructure.
ERIC Educational Resources Information Center
Buckland, Lawrence F.; Weaver, Vance
Reported are the findings of the Uspekhi experiment in creating a labeled machine file, as well as sample products of this system - an article from a scientific journal and an index page. Production cost tables are presented for the machine file, primary journals, and journal indexes. Comparisons were made between the 1965 predicted costs and the…
Peru, M; Peru, C; Mannocci, F; Sherriff, M; Buchanan, L S; Pitt Ford, T R
2006-02-01
The aim of this study was to evaluate root canals instrumented by dental students using the modified double-flared technique, nickel-titanium (NiTi) rotary System GT files and NiTi rotary ProTaper files by micro-computed tomography (MCT). A total of 36 root canals from 18 mesial roots of mandibular molar teeth were prepared; 12 canals were prepared with the modified double-flared technique, using K-flexofiles and Gates-Glidden burs; 12 canals were prepared using System GT and 12 using ProTaper rotary files. Each root was scanned using MCT preoperatively and postoperatively. At the coronal and mid-root sections, System GT and ProTaper files produced significantly less enlarged canal cross-sectional area, volume and perimeter than the modified double-flared technique (P < 0.05). In the mid-root sections there was significantly less thinning of the root structure towards the furcation with System GT and ProTaper (P < 0.05). The rotary techniques were both three times faster than the modified double-flared technique (P < 0.05). Qualitative evaluation of the preparations showed that both ProTaper and System GT were able to prepare root canals with little or no procedural error compared with the modified double-flared technique. Under the conditions of this study, inexperienced dental students were able to prepare curved root canals with rotary files with greater preservation of tooth structure, low risk of procedural errors and much quicker than with hand instruments.
36 CFR 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2014 CFR
2014-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
36 CFR 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2011 CFR
2011-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
36 CFR 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2012 CFR
2012-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
36 CFR 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2010 CFR
2010-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
DOEDEF Software System, Version 2. 2: Operational instructions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meirans, L.
The DOEDEF (Department of Energy Data Exchange Format) Software System is a collection of software routines written to facilitate the manipulation of IGES (Initial Graphics Exchange Specification) data. Typically, the IGES data has been produced by the IGES processors for a Computer-Aided Design (CAD) system, and the data manipulations are user-defined "flavoring" operations. The DOEDEF Software System is used in conjunction with the RIM (Relational Information Management) DBMS from Boeing Computer Services (Version 7, UD18 or higher). The three major pieces of the software system are: Parser, reads an ASCII IGES file and converts it to the RIM database equivalent; Kernel, provides the user with IGES-oriented interface routines to the database; and Filewriter, writes the RIM database to an IGES file.
ERIC Educational Resources Information Center
Russell, Thyra K.
Morris Library at Southern Illinois University computerized its technical processes using the Library Computer System (LCS), which was implemented in the library to streamline order processing by: (1) providing up-to-date online files to track in-process items; (2) encouraging quick, efficient accessing of information; (3) reducing manual files;…
5 CFR 2429.21 - Computation of time for filing papers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Computation of time for filing papers... REQUIREMENTS General Requirements § 2429.21 Computation of time for filing papers. (a) In computing any period... § 2429.23(a) of this part, when this subchapter requires the filing of any paper with the Authority, the...
Proposed Computer System for Library Catalog Maintenance. Part II: System Design.
ERIC Educational Resources Information Center
Stein (Theodore) Co., New York, NY.
The logic of the system presented in this report is divided into six parts for computer processing and manipulation. They are: (1) processing of Library of Congress copy, (2) editing of input into standard format, (3) processing of information into and out from the authority files, (4) creation of the catalog records, (5) production of the…
Tambe, Varsha Harshal; Nagmode, Pradnya Sunil; Abraham, Sathish; Patait, Mahendra; Lahoti, Pratik Vinod; Jaju, Neha
2014-01-01
Aim: The aim of the present study was to compare the canal transportation and centering ability of Rotary ProTaper, One Shape and Wave One systems using cone beam computed tomography (CBCT) in curved root canals, to find a better instrumentation technique for maintaining root canal geometry. Materials and Methods: A total of 30 freshly extracted premolars having curved root canals with at least 10 degrees of curvature were divided into three groups of 10 teeth each. All teeth were scanned by CBCT to determine the root canal shape before instrumentation. In Group 1, the canals were prepared with Rotary ProTaper files, in Group 2 the canals were prepared with One Shape files and in Group 3 canals were prepared with Wave One files. After preparation, a post-instrumentation scan was performed. Pre-instrumentation and post-instrumentation images were obtained at three levels (3 mm apical, 3 mm coronal and 8 mm above the apical foramen) and compared using CBCT software. Amount of transportation and centering ability were assessed. The three groups were statistically compared with analysis of variance and Tukey's honestly significant difference test. Results: All instruments maintained the original canal curvature, with significant differences between the different files. Data suggested that Wave One files presented the best outcomes for both the variables evaluated. Wave One files caused less transportation and remained better centered in the canal than One Shape and Rotary ProTaper files. Conclusion: The canal preparation with Wave One files showed less transportation and better centering ability than One Shape and ProTaper. PMID:25506145
Computers as an Instrument for Data Analysis. Technical Report No. 11.
ERIC Educational Resources Information Center
Muller, Mervin E.
A review of statistical data analysis involving computers as a multi-dimensional problem provides the perspective for consideration of the use of computers in statistical analysis and the problems associated with large data files. An overall description of STATJOB, a particular system for doing statistical data analysis on a digital computer,…
Engineering description of the ascent/descent bet product
NASA Technical Reports Server (NTRS)
Seacord, A. W., II
1986-01-01
The Ascent/Descent output product is produced in the OPIP routine from three files which constitute its input. One of these, OPIP.IN, contains mission-specific parameters. Meteorological data, such as atmospheric wind velocities, temperatures, and density, are obtained from the second file, the Corrected Meteorological Data File (METDATA). The third file is the TRJATTDATA file which contains the time-tagged state vectors that combine trajectory information from the Best Estimate of Trajectory (BET) filter, LBRET5, and Best Estimate of Attitude (BEA) derived from IMU telemetry. Each term in the two output data files (BETDATA and the Navigation Block, or NAVBLK) is defined. The description of the BETDATA file includes an outline of the algorithm used to calculate each term. To facilitate describing the algorithms, a nomenclature is defined. The description of the nomenclature includes a definition of the coordinate systems used. The NAVBLK file contains navigation input parameters. Each term in NAVBLK is defined and its source is listed. The production of NAVBLK requires only two computational algorithms. These two algorithms, which compute the terms DELTA and RSUBO, are described. Finally, the distribution of data in the NAVBLK records is listed.
Combining high performance simulation, data acquisition, and graphics display computers
NASA Technical Reports Server (NTRS)
Hickman, Robert J.
1989-01-01
Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems, non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high speed simulation computing and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high performance FORTRAN program processor to support the complex, by generating numerous large files from programs coded in FORTRAN that are required for the real time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 also is performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.
WinSCP for Windows File Transfers | High-Performance Computing | NREL
WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.
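For users who want to script the same kind of secure transfer rather than use the WinSCP GUI, the following sketch shows a programmatic SFTP upload using the third-party paramiko library; the library choice, host name, user name, and paths are assumptions for illustration and are not part of the NREL page.

    # Minimal programmatic analogue of an SFTP upload (assumes paramiko,
    # "pip install paramiko", and key-based authentication already set up).
    import paramiko

    def upload(host, user, local_path, remote_path):
        client = paramiko.SSHClient()
        client.load_system_host_keys()         # trust keys already in known_hosts
        client.connect(host, username=user)    # key-based auth assumed
        try:
            sftp = client.open_sftp()
            sftp.put(local_path, remote_path)  # encrypted transfer over SSH
            sftp.close()
        finally:
            client.close()

    # upload("hpc.example.gov", "jdoe", "run01.tar", "/scratch/jdoe/run01.tar")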
Implementing Journaling in a Linux Shared Disk File System
NASA Technical Reports Server (NTRS)
Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew;
2000-01-01
In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.
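To make the journaling idea concrete, here is a minimal write-ahead-logging sketch; it is not the GFS implementation, and the journal and data file names are hypothetical. The ordering rule is the essential part: an update is forced to the journal before it is applied in place, so an interrupted update can be replayed after a crash.

    import json, os

    JOURNAL = "fs.journal"       # hypothetical journal file
    DATA = "fs.data.json"        # hypothetical metadata store

    def _load():
        return json.load(open(DATA)) if os.path.exists(DATA) else {}

    def _store(state):
        with open(DATA, "w") as d:
            json.dump(state, d)
            d.flush()
            os.fsync(d.fileno())

    def commit(update):
        # 1. Log the intent and force it to stable storage first.
        with open(JOURNAL, "a") as j:
            j.write(json.dumps(update) + "\n")
            j.flush()
            os.fsync(j.fileno())
        # 2. Apply the update in place; a crash here can be replayed.
        state = _load()
        state[update["key"]] = update["value"]
        _store(state)

    def recover():
        # Replay every journaled update; entries are idempotent, so
        # re-applying already-applied ones is harmless.
        if os.path.exists(JOURNAL):
            state = _load()
            for line in open(JOURNAL):
                entry = json.loads(line)
                state[entry["key"]] = entry["value"]
            _store(state)

    # commit({"key": "inode-42", "value": {"size": 4096}})

Real journaled file systems batch updates into transactions and periodically checkpoint and truncate the journal, but the log-first, apply-second ordering is the same.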
The version control service for the ATLAS data acquisition configuration files
NASA Astrophysics Data System (ADS)
Soloviev, Igor
2012-12-01
The ATLAS experiment at the LHC in Geneva uses a complex and highly distributed Trigger and Data Acquisition system, involving a very large number of computing nodes and custom modules. The configuration of the system is specified by schema and data in more than 1000 XML files, with various experts responsible for updating the files associated with their components. Maintaining an error-free and consistent set of XML files proved a major challenge. Therefore a special service was implemented to validate any modifications, to check the authorization of anyone trying to modify a file, to record who had made changes, plus when and why, and to provide tools to compare different versions of files and to go back to earlier versions if required. This paper provides details of the implementation and exploitation experience, which may be of interest for other applications using many human-readable files maintained by different people, where consistency of the files and traceability of modifications are key requirements.
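The checks such a service performs can be illustrated with a small sketch (not the ATLAS implementation): validate that a modified XML file is well formed, verify the author against an authorization table, and append who, when, and why to a change log. The authorization table, file name, and log path below are hypothetical.

    import datetime
    import xml.etree.ElementTree as ET

    AUTHORIZED = {"daq-config.xml": {"alice", "bob"}}   # hypothetical table
    CHANGELOG = "config_changes.log"

    def submit_change(path, author, reason):
        if author not in AUTHORIZED.get(path, set()):
            raise PermissionError(f"{author} may not modify {path}")
        ET.parse(path)                   # raises ParseError if not well formed
        stamp = datetime.datetime.utcnow().isoformat()
        with open(CHANGELOG, "a") as log:
            log.write(f"{stamp}\t{path}\t{author}\t{reason}\n")

    # submit_change("daq-config.xml", "alice", "increase buffer size")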
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2002-01-01
Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD vendor neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD vendor neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.
75 FR 32915 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-10
... used to authenticate authorized desktop and laptop computer users. Computer servers are scanned monthly... data is also used for management and statistical reports and studies. Routine uses of records... duties. The computer files are password protected with access restricted to authorized users. Records are...
OVERSMART Reporting Tool for Flow Computations Over Large Grid Systems
NASA Technical Reports Server (NTRS)
Kao, David L.; Chan, William M.
2012-01-01
Structured grid solvers such as NASA's OVERFLOW compressible Navier-Stokes flow solver can generate large data files that contain convergence histories for flow equation residuals, turbulence model equation residuals, component forces and moments, and component relative motion dynamics variables. Most of today's large-scale problems can extend to hundreds of grids, and over 100 million grid points. However, due to the lack of efficient tools, only a small fraction of information contained in these files is analyzed. OVERSMART (OVERFLOW Solution Monitoring And Reporting Tool) provides a comprehensive report of solution convergence of flow computations over large, complex grid systems. It produces a one-page executive summary of the behavior of flow equation residuals, turbulence model equation residuals, and component forces and moments. Under the automatic option, a matrix of commonly viewed plots such as residual histograms, composite residuals, sub-iteration bar graphs, and component forces and moments is automatically generated. Specific plots required by the user can also be prescribed via a command file or a graphical user interface. Output is directed to the user's computer screen and/or to an html file for archival purposes. The current implementation has been targeted for the OVERFLOW flow solver, which is used to obtain a flow solution on structured overset grids. The OVERSMART framework allows easy extension to other flow solvers.
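As a rough illustration of this kind of convergence summary, the sketch below reads a plain-text residual history with "grid iteration residual" columns and reports, per grid, how far the residual has dropped; the input layout and file name are assumptions, not the OVERFLOW or OVERSMART formats.

    import math
    from collections import defaultdict

    def summarize(history_path):
        hist = defaultdict(list)
        for line in open(history_path):
            grid, it, res = line.split()       # "grid iteration residual"
            hist[grid].append(float(res))
        for grid, residuals in sorted(hist.items()):
            drop = math.log10(residuals[0] / residuals[-1])
            print(f"grid {grid}: {len(residuals)} iterations, "
                  f"residual down {drop:.1f} orders of magnitude")

    # summarize("resid.dat")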
36 CFR § 1236.2 - What definitions apply to this part?
Code of Federal Regulations, 2013 CFR
2013-07-01
... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...
Enhancement/upgrade of Engine Structures Technology Best Estimator (EST/BEST) Software System
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2003-01-01
This report describes the work performed during the contract period and the capabilities included in the EST/BEST software system. The developed EST/BEST software system includes the integrated NESSUS, IPACS, COBSTRAN, and ALCCA computer codes required to perform the engine cycle mission and component structural analysis. Also, the interactive input generator for NESSUS, IPACS, and COBSTRAN computer codes have been developed and integrated with the EST/BEST software system. The input generator allows the user to create input from scratch as well as edit existing input files interactively. Since it has been integrated with the EST/BEST software system, it enables the user to modify EST/BEST generated files and perform the analysis to evaluate the benefits. Appendix A gives details of how to use the newly added features in the EST/BEST software system.
Definition and maintenance of a telemetry database dictionary
NASA Technical Reports Server (NTRS)
Knopf, William P. (Inventor)
2007-01-01
A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established and the CSV files are ported to the computer system that hosts the telemetry dictionary database. This is followed by a remote initiation of a database loading program. Upon completion of loading a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
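The workbook-to-CSV step described above might look roughly like the sketch below, which uses the third-party openpyxl library (an assumption; the record does not say how the conversion is implemented) to write each worksheet of a workbook out as one CSV file; file names are hypothetical.

    import csv
    import openpyxl

    def workbook_to_csv(xlsx_path, out_dir="."):
        wb = openpyxl.load_workbook(xlsx_path, read_only=True, data_only=True)
        for sheet in wb.worksheets:
            out_path = f"{out_dir}/{sheet.title}.csv"
            with open(out_path, "w", newline="") as fh:
                writer = csv.writer(fh)
                for row in sheet.iter_rows(values_only=True):
                    writer.writerow(row)     # one worksheet row per CSV row
        wb.close()

    # workbook_to_csv("telemetry_defs.xlsx", out_dir="csv_ready")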
1985-09-01
Fragment of a COBOL program listing covering task-menu display and ANSWER-FILE-KEY / TASK-FILE-REC-NUM handling.
1985-02-01
Figure A-1 shows a typical control-card sequence: the run is initiated via the LINK1 statement, in which the second term is the input data file; the permanent file name KMDM is shown in conjunction with a local file.
NASA Technical Reports Server (NTRS)
Morin, Bruce L.
2010-01-01
Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the second volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by step-by-step instructions for installing and running BFaNS. It concludes with technical documentation of the BFaNS computer program.
NASA Technical Reports Server (NTRS)
Morin, Bruce L.
2010-01-01
Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the first volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by step-by-step instructions for installing and running Setup_BFaNS. It concludes with technical documentation of the Setup_BFaNS computer program.
Securing the AliEn File Catalogue - Enforcing authorization with accountable file operations
NASA Astrophysics Data System (ADS)
Schreiner, Steffen; Bagnasco, Stefano; Sankar Banerjee, Subho; Betev, Latchezar; Carminati, Federico; Vladimirovna Datskova, Olga; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Mendez Lorenzo, Patricia; Peters, Andreas Joachim; Saiz, Pablo; Zhu, Jianlin
2011-12-01
The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, is based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration and presents a more secure and revised design: a new mechanism, called LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved as well as an up to 50% reduction of the credential's size. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part of, and beyond, the development of AliEn version 2.19.
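The signed-status-message idea can be sketched in a few lines: the storage system reports a file's size and checksum, authenticated with an HMAC over a secret it shares with the catalogue, so the catalogue does not have to trust the client that brokered the transfer. This is an illustrative simplification, not the AliEn protocol; the key, field names, and checksum choice are assumptions.

    import hashlib, hmac, json

    SHARED_KEY = b"storage-element-secret"     # placeholder shared secret

    def make_status(lfn, size, md5sum):
        # Built by the storage system after a write completes.
        body = json.dumps({"lfn": lfn, "size": size, "md5": md5sum},
                          sort_keys=True)
        sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
        return {"body": body, "sig": sig}

    def verify_status(msg):
        # Checked by the catalogue before the entry is accepted.
        expected = hmac.new(SHARED_KEY, msg["body"].encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, msg["sig"])

    # msg = make_status("/alice/data/run1.root", 1048576, "d41d8cd98f00b204e9800998ecf8427e")
    # assert verify_status(msg)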
Converting laserdisc video to digital video: a demonstration project using brain animations.
Jao, C S; Hier, D B; Brint, S U
1995-01-01
Interactive laserdiscs are of limited value in large group learning situations due to the expense of establishing multiple workstations. The authors implemented an alternative to laserdisc video by using indexed digital video combined with an expert system. High-quality video was captured from a laserdisc player and combined with waveform audio into an audio-video-interleave (AVI) file format in the Microsoft Video-for-Windows environment (Microsoft Corp., Seattle, WA). With the use of an expert system, a knowledge-based computer program provided random access to these indexed AVI files. The program can be played on any multimedia computer without the need for laserdiscs. This system offers a high level of interactive video without the overhead and cost of a laserdisc player.
47 CFR 76.1700 - Records to be maintained by cable system operators.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false Records to be maintained by cable system operators. 76.1700 Section 76.1700 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... or part of the public inspection file may be maintained in a computer database, as long as a computer...
47 CFR 76.1700 - Records to be maintained by cable system operators.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Records to be maintained by cable system operators. 76.1700 Section 76.1700 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... or part of the public inspection file may be maintained in a computer database, as long as a computer...
A users' guide to the trace contaminant control simulation computer program
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various trace contaminant control technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. The results obtained from the program can be useful in assessing different technology combinations, system sizing, system location with respect to other life support systems, and the overall life cycle economics of a trace contaminant control system. The user's manual is extracted in its entirety from NASA TM-108409 to provide a stand-alone reference for using any version of the program. The first publication of the manual as part of TM-108409 also included a detailed listing of version 8.0 of the program. As changes to the code were necessary, it became apparent that the user's manual should be separate from the computer code documentation and be general enough to provide guidance in using any version of the program. Provided in the guide are tips for input file preparation, general program execution, and output file manipulation. Information concerning source code listings of the latest version of the computer program may be obtained by contacting the author.
XpressWare Installation User guide
NASA Astrophysics Data System (ADS)
Duffey, K. P.
XpressWare is a set of X terminal software, released by Tektronix Inc, that accommodates the X Window system on a range of host computers. The software comprises boot files (the X server image), configuration files, fonts, and font tools to support the X terminal. The files can be installed on one host or distributed across multiple hosts. The purpose of this guide is to present the system or network administrator with a step-by-step account of how to install XpressWare, and how subsequently to configure the X terminals appropriately for the environment in which they operate.
Heterogeneous distributed query processing: The DAVID system
NASA Technical Reports Server (NTRS)
Jacobs, Barry E.
1985-01-01
The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn the data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.
An Analysis for Capital Expenditure Decisions at a Naval Regional Medical Center.
1981-12-01
Service ranking vs. Equipment Review Committee ranking: 1. Portable defibrillator and cardioscope / Computed tomographic scanner; 2. ECG cart / Automated blood cell counter; 3. Gas system sterilizer / Gas system sterilizer; 4. Automated blood cell counter / Portable defibrillator and cardioscope; 5. Computed tomographic scanner / ECG cart. ... (dictating and automated typing) systems. e. Filing equipment. f. Automatic data processing equipment including data communications equipment. g
ERIC Educational Resources Information Center
Freeman, Robert R.
A set of twenty five questions was processed against a computer-stored file of 9159 document references in the field of ferrous metallurgy, representing the 1965 coverage of the Iron and Steel Institute (London) information service. A basis for evaluation of system performance characteristics and analysis of system failures was provided by using…
Evaluation of a data dictionary system. [information dissemination and computer systems programs
NASA Technical Reports Server (NTRS)
Driggers, W. G.
1975-01-01
The usefulness of a data dictionary/directory system for achieving optimum benefits from existing and planned investments in computer data files in the Data Systems Development Branch and the Institutional Data Systems Division was investigated. Potential applications of the data catalogue system are discussed along with an evaluation of the system. Other topics discussed include data description, data structure, programming aids, programming languages, program networks, and test data.
77 FR 12528 - Amendments to Commission's Rules of Practice and Procedure-Subparts E and L
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-01
..., he advocates the use of a cloud computing system in which documents can be filed giving multiple... concerns. Barillo believes that cloud computing would streamline efficiency and reduce staff labor in...
The Area Resource File: ARF. A Manpower Planning and Research Tool.
ERIC Educational Resources Information Center
Applied Management Sciences, Inc., Silver Spring, MD.
This publication describes the Area Resource File (ARF), a computer-based, county-specific health information system with broad analytical capabilities which utilizes manpower and manpower-related data that are available on a compatible basis for all counties in the United States, and which was developed to summarize statistics from many disparate…
Announcements | High-Performance Computing | NREL
Recent announcements: Data Transfer Queue (March 08, 2018), a new "data" queue; Purge-Alert (January 11, 2018), a new notification method; Please Move Inactive Files Off the /projects File System (January 11, 2018).
Prefetching in file systems for MIMD multiprocessors
NASA Technical Reports Server (NTRS)
Kotz, David F.; Ellis, Carla Schlatter
1990-01-01
The question of whether prefetching blocks of a file into the block cache can effectively reduce overall execution time of a parallel computation, even under favorable assumptions, is considered. Experiments have been conducted with an interleaved file system testbed on the Butterfly Plus multiprocessor. Results of these experiments suggest that (1) the hit ratio, the accepted measure in traditional caching studies, may not be an adequate measure of performance when the workload consists of parallel computations and parallel file access patterns, (2) caching with prefetching can significantly improve the hit ratio and the average time to perform an I/O (input/output) operation, and (3) an improvement in overall execution time has been observed in most cases. In spite of these gains, prefetching sometimes results in increased execution times (a negative result, given the optimistic nature of the study). The authors explore why it is not trivial to translate savings on individual I/O requests into consistently better overall performance and identify the key problems that need to be addressed in order to improve the potential of prefetching techniques in the environment.
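The hit-ratio idea can be made concrete with a toy block cache that prefetches one block ahead on every access; this is an illustrative sketch, not the testbed used in the study, and the capacity and crude FIFO eviction are arbitrary assumptions.

    class PrefetchingCache:
        def __init__(self, fetch, capacity=64):
            self.fetch = fetch           # function: block number -> block data
            self.capacity = capacity
            self.cache = {}              # block number -> data
            self.hits = self.requests = 0

        def _insert(self, blk):
            if blk not in self.cache:
                if len(self.cache) >= self.capacity:
                    self.cache.pop(next(iter(self.cache)))  # crude FIFO eviction
                self.cache[blk] = self.fetch(blk)

        def read(self, blk):
            self.requests += 1
            if blk in self.cache:
                self.hits += 1
            else:
                self._insert(blk)
            data = self.cache[blk]
            self._insert(blk + 1)        # prefetch the next block speculatively
            return data

        def hit_ratio(self):
            return self.hits / self.requests if self.requests else 0.0

    # cache = PrefetchingCache(fetch=lambda b: f"block-{b}")
    # for b in range(100): cache.read(b)   # sequential access pattern
    # print(cache.hit_ratio())             # 0.99 once prefetching kicks in

Under a purely sequential access pattern the hit ratio approaches 1, yet, as the study observes, a high hit ratio alone does not guarantee lower overall execution time.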
Software Supports Distributed Operations via the Internet
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Backers, Paul; Steinke, Robert
2003-01-01
Multi-mission Encrypted Communication System (MECS) is a computer program that enables authorized, geographically dispersed users to gain secure access to a common set of data files via the Internet. MECS is compatible with legacy application programs and a variety of operating systems. The MECS architecture is centered around maintaining consistent replicas of data files cached on remote computers. MECS monitors these files and, whenever one is changed, the changed file is committed to a master database as soon as network connectivity makes it possible to do so. MECS provides subscriptions for remote users to automatically receive new data as they are generated. Remote users can be producers as well as consumers of data. Whereas a prior program that provides some of the same services treats disconnection of a user from the network of users as an error from which recovery must be effected, MECS treats disconnection as a nominal state of the network: This leads to a different design that is more efficient for serving many users, each of whom typically connects and disconnects frequently and wants only a small fraction of the data at any given time.
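The disconnection-tolerant commit model described above can be sketched as a small queue of locally changed files that is drained to the master store whenever the network happens to be available; this is an illustration only, and the is_connected and push callbacks, the checksum choice, and the data structures are assumptions rather than MECS internals.

    import hashlib, os
    from collections import deque

    pending = deque()      # paths with local changes awaiting commit
    known = {}             # path -> checksum of the last committed version

    def checksum(path):
        return hashlib.md5(open(path, "rb").read()).hexdigest()

    def note_change(path):
        """Call when a cached file may have changed locally."""
        if os.path.exists(path) and known.get(path) != checksum(path):
            pending.append(path)

    def sync(is_connected, push):
        """Drain the queue while the network is up; keep the rest for later."""
        while pending and is_connected():
            path = pending.popleft()
            push(path)                   # upload to the master database
            known[path] = checksum(path)

    # sync(is_connected=lambda: True, push=lambda p: print("committed", p))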
PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC
Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.
1997-01-01
PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and will run on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.
Swanson, James R.
1977-01-01
GEOTHERM is a computerized geothermal resources file developed by the U.S. Geological Survey. The file contains data on geothermal fields, wells, and chemical analyses from the United States and international sources. The General Information Processing System (GIPSY) in the IBM 370/155 computer is used to store and retrieve data. The GIPSY retrieval program contains simple commands which can be used to search the file, select a narrowly defined subset, sort the records, and output the data in a variety of forms. Eight commands are listed and explained so that the GEOTHERM file can be accessed directly by geologists. No programming experience is necessary to retrieve data from the file.
Kaya, E; Elbay, M; Yiğit, D
2017-06-01
The Self-Adjusting File (SAF) system has been recommended for use in permanent teeth since it offers more conservative and effective root-canal preparation when compared to traditional rotary systems. However, no study had evaluated the usage of SAF in primary teeth. The aim of this study was to evaluate and compare the use of SAF, K file (manual instrumentation) and Profile (traditional rotary instrumentation) systems for primary-tooth root-canal preparation in terms of instrumentation time and amounts of dentin removed using micro-computed tomography (μCT) technology. Study Design: The study was conducted with 60 human primary mandibular second molar teeth divided into 3 groups according to instrumentation technique: Group I: SAF (n=20); Group II: K file (n=20); Group III: Profile (n=20). Teeth were embedded in acrylic blocks and scanned with a μCT scanner prior to instrumentation. All distal root canals were prepared up to size 30 for K file, .04/30 for Profile and 2 mm thickness, size 25 for SAF; instrumentation time was recorded for each tooth, and a second μCT scan was performed after instrumentation was complete. Amounts of dentin removed were measured using the three-dimensional images by calculating the difference in root-canal volume before and after preparation. Data was statistically analysed using the Kolmogorov-Smirnov and Kruskal-Wallis tests. Manual instrumentation (K file) resulted in significantly more dentin removal when compared to rotary instrumentation (Profile and SAF), while the SAF system generated significantly less dentin removal than both manual instrumentation (K file) and traditional rotary instrumentation (Profile) (p<.05). Instrumentation time was significantly greater with manual instrumentation when compared to rotary instrumentation (p<.05), whereas instrumentation time did not differ significantly between the Profile and SAF systems. Within the experimental conditions of the present study, the SAF seems to be a useful system for root-canal instrumentation in primary molars because it removed less dentin than the other systems, which is especially important for the relatively thin-walled canals of primary teeth, and because it involves less clinical time, which is particularly important in the treatment of paediatric patients.
LTCP 2D Graphical User Interface. Application Description and User's Guide
NASA Technical Reports Server (NTRS)
Ball, Robert; Navaz, Homayun K.
1996-01-01
A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) 2 dimensional computational fluid dynamic code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.
Architecture and method for a burst buffer using flash technology
Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung
2016-03-15
A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
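The strided shared-file write pattern mentioned above can be illustrated with a small single-machine sketch in which "process" i places its k-th fixed-size record at offset (i + n_procs*k) * record_size; in the patented system the writes would come from MPI processes through the burst-buffer file system interface, so the plain seek/write calls, file name, and record size here are stand-in assumptions.

    RECORD = 4096                              # bytes per checkpoint record

    def write_strided(path, rank, n_procs, records):
        with open(path, "r+b") as f:
            for k, payload in enumerate(records):
                assert len(payload) == RECORD
                f.seek((rank + n_procs * k) * RECORD)   # strided offset
                f.write(payload)

    # Setup and usage (3 "processes", 2 records each):
    # open("ckpt.shared", "wb").truncate(3 * 2 * RECORD)
    # for rank in range(3):
    #     write_strided("ckpt.shared", rank, 3,
    #                   [bytes([rank]) * RECORD, bytes([rank + 10]) * RECORD])

The migration step then reads the shared file from front to back and streams it to magnetic disk, which is what makes the disk-side writes sequential.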
LHCb experience with LFC replication
NASA Astrophysics Data System (ADS)
Bonifazi, F.; Carbone, A.; Perez, E. D.; D'Apice, A.; dell'Agnello, L.; Duellmann, D.; Girone, M.; Re, G. L.; Martelli, B.; Peco, G.; Ricci, P. P.; Sapunenko, V.; Vagnoni, V.; Vitlacil, D.
2008-07-01
Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.
Computer Software Management and Information Center
NASA Technical Reports Server (NTRS)
1983-01-01
Computer programs for passive anti-roll tank, earth resources laboratory applications, the NIMBUS-7 coastal zone color scanner derived products, transportable applications executive, plastic and failure analysis of composites, velocity gradient method for calculating velocities in an axisymmetric annular duct, an integrated procurement management system, data I/O PRON for the Motorola exorcisor, aerodynamic shock-layer shape, kinematic modeling, hardware library for a graphics computer, and a file archival system are documented.
MarFS-Requirements-Design-Configuration-Admin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kettering, Brett Michael; Grider, Gary Alan
This document will be organized into sections that are defined by the requirements for a file system that presents a near-POSIX (Portable Operating System Interface) interface to the user, but whose data is stored in whatever form is most efficient for the type of data being stored. After defining the requirement the design for meeting the requirement will be explained. Finally there will be sections on configuring and administering this file system. More and more, data dominates the computing world. There is a “sea” of data out there in many different formats that needs to be managed and used. “Mar” means “sea” in Spanish. Thus, this product is dubbed MarFS, a file system for a sea of data.
Computer assisted audit techniques for UNIX (UNIX-CAATS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polk, W.T.
1991-12-31
Federal and DOE regulations impose specific requirements for internal controls of computer systems. These controls include adequate separation of duties and sufficient controls for access of system and data. The DOE Inspector General's Office has the responsibility to examine internal controls, as well as efficient use of computer system resources. As a result, DOE supported NIST development of computer assisted audit techniques to examine BSD UNIX computers (UNIX-CAATS). These systems were selected due to the increasing number of UNIX workstations in use within DOE. This paper describes the design and development of these techniques, as well as the results of testing at NIST and the first audit at a DOE site. UNIX-CAATS consists of tools which examine security of passwords, file systems, and network access. In addition, a tool was developed to examine efficiency of disk utilization. Test results at NIST indicated inadequate password management, as well as weak network resource controls. File system security was considered adequate. Audit results at a DOE site indicated weak password management and inefficient disk utilization. During the audit, we also found improvements to UNIX-CAATS were needed when applied to large systems. NIST plans to enhance the techniques developed for DOE/IG in future work. This future work would leverage currently available tools, along with needed enhancements. These enhancements would enable DOE/IG to audit large systems, such as supercomputers.
28 CFR 32.2 - Computation of time; filing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Computation of time; filing. 32.2 Section 32.2 Judicial Administration DEPARTMENT OF JUSTICE PUBLIC SAFETY OFFICERS' DEATH, DISABILITY, AND EDUCATIONAL ASSISTANCE BENEFIT CLAIMS General Provisions § 32.2 Computation of time; filing. (a) In computing...
28 CFR 32.2 - Computation of time; filing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Computation of time; filing. 32.2 Section 32.2 Judicial Administration DEPARTMENT OF JUSTICE PUBLIC SAFETY OFFICERS' DEATH, DISABILITY, AND EDUCATIONAL ASSISTANCE BENEFIT CLAIMS General Provisions § 32.2 Computation of time; filing. (a) In computing...
A History of the Andrew File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brashear, Derrick
2011-02-22
Derrick Brashear and Jeffrey Altman will present a technical history of the evolution of Andrew File System starting with the early days of the Andrew Project at Carnegie Mellon through the commercialization by Transarc Corporation and IBM and a decade of OpenAFS. The talk will be technical with a focus on the various decisions and implementation trade-offs that were made over the course of AFS versions 1 through 4, the development of the Distributed Computing Environment Distributed File System (DCE DFS), and the course of the OpenAFS development community. The speakers will also discuss the various AFS branches developed at the University of Michigan, Massachusetts Institute of Technology and Carnegie Mellon University.
File-access characteristics of parallel scientific workloads
NASA Technical Reports Server (NTRS)
Nieuwejaar, Nils; Kotz, David; Purakayastha, Apratim; Best, Michael; Ellis, Carla Schlatter
1995-01-01
Phenomenal improvements in the computational performance of multiprocessors have not been matched by comparable gains in I/O system performance. This imbalance has resulted in I/O becoming a significant bottleneck for many scientific applications. One key to overcoming this bottleneck is improving the performance of parallel file systems. The design of a high-performance parallel file system requires a comprehensive understanding of the expected workload. Unfortunately, until recently, no general workload studies of parallel file systems have been conducted. The goal of the CHARISMA project was to remedy this problem by characterizing the behavior of several production workloads, on different machines, at the level of individual reads and writes. The first set of results from the CHARISMA project describe the workloads observed on an Intel iPSC/860 and a Thinking Machines CM-5. This paper is intended to compare and contrast these two workloads for an understanding of their essential similarities and differences, isolating common trends and platform-dependent variances. Using this comparison, we are able to gain more insight into the general principles that should guide parallel file-system design.
NASA Technical Reports Server (NTRS)
Grantham, C.
1979-01-01
The Interactive Software Invocation (ISIS), an interactive data management system, was developed to act as a buffer between the user and host computer system. The user is provided by ISIS with a powerful system for developing software or systems in the interactive environment. The user is protected from the idiosyncrasies of the host computer system by providing such a complete range of capabilities that the user should have no need for direct access to the host computer. These capabilities are divided into four areas: desk top calculator, data editor, file manager, and tool invoker.
A Database of Computer Attacks for the Evaluation of Intrusion Detection Systems
1999-06-01
administrator whenever a system binary file (such as the ps, login , or ls program) is modified. Normal users have no legitimate reason to alter these files...development of EMERALD [46], which combines statistical anomaly detection from NIDES with signature verification. Specification-based intrusion detection...the creation of a single host that can act as many hosts. Daemons that provide network services—including telnetd, ftpd, and login — display banners
Software Aids In Graphical Depiction Of Flow Data
NASA Technical Reports Server (NTRS)
Stegeman, J. D.
1995-01-01
Interactive Data Display System (IDDS) computer program is a graphical-display program designed to assist in visualization of three-dimensional flow in turbomachinery. Grid and simulation data files in PLOT3D format are required as input. It can unwrap the volumetric data cone associated with a centrifugal compressor and display the results in easy-to-understand two- or three-dimensional plots. IDDS provides the majority of visualization and analysis capability for the Integrated Computational Fluid Dynamics and Experiment (ICE) system. IDDS can be invoked from any subsystem, or used as a stand-alone package of display software. It generates contour, vector, shaded, x-y, and carpet plots. Written in C language. Input file format used by IDDS is that of PLOT3D (COSMIC item ARC-12782).
Agarwal, Jatin; Jain, Pradeep; Chandra, Anil
2015-01-01
Background: The ability of an endodontic instrument to remain centered in the root canal system is one of the most important characteristics influencing the clinical performance of a particular file system. Thus, it is important to assess the canal centering ability of newly introduced single file systems before they can be considered a viable replacement of full-sequence rotary file systems. Aim: The aim of the study was to compare the canal transportation, centering ability, and time taken for preparation of curved root canals after instrumentation with single file systems One Shape and Wave One, using cone-beam computed tomography (CBCT). Materials and Methods: Sixty mesiobuccal canals of mandibular molars with an angle of curvature ranging from 20° to 35° were divided into three groups of 20 samples each: ProTaper PT (group I), the full-sequence rotary control group; OneShape OS (group II), a single file in continuous rotation; and WaveOne WO (group III), a single file in reciprocating motion. Pre-instrumentation and post-instrumentation three-dimensional CBCT images were obtained from root cross-sections at 3 mm, 6 mm and 9 mm from the apex. Scanned images were then assessed to determine canal transportation and centering ability. The data collected were evaluated using one-way analysis of variance (ANOVA) with Tukey's honestly significant difference test. Results: It was observed that there were no differences in the magnitude of transportation between the rotary instruments (p >0.05) at both 3 mm and 6 mm from the apex. At 9 mm from the apex, Group I PT showed significantly higher mean canal transportation and lower centering ability (0.19±0.08 and 0.39±0.16) compared to Group II OS (0.12±0.07 and 0.54±0.24) and Group III WO (0.13±0.06 and 0.55±0.18), while the differences between OS and WO were not statistically significant. Conclusion: It was concluded that there was only a minor difference between the tested groups. Single file systems demonstrated average canal transportation and centering ability comparable to the full-sequence ProTaper system in curved root canals. PMID:26155551
22 CFR 1429.21 - Computation of time for filing papers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Computation of time for filing papers. 1429.21... MISCELLANEOUS AND GENERAL REQUIREMENTS General Requirements § 1429.21 Computation of time for filing papers. In... subchapter requires the filing of any paper, such document must be received by the Board or the officer or...
dos-Santos, M; Fujino, A
2012-01-01
Radiology teaching usually employs a systematic and comprehensive set of medical images and related information. Databases with representative radiological images and documents are highly desirable and widely used in Radiology teaching programs. Currently, computer-based teaching file systems are widely used in Medicine and Radiology teaching as an educational resource. This work addresses a user-centered radiology electronic teaching file system built as an instance of a MIRC-compliant medical image database. As in a digital library, the clinical cases can be accessed using a web browser. The system has offered great opportunities for some Radiology residents to interact with experts. This has been done by applying user-centered techniques and creating usage context-based tools in order to make available an interactive system.
Automatic computer subprogram selection from application program libraries
NASA Technical Reports Server (NTRS)
Drozdowski, J. M.
1972-01-01
The program ALTLIB (ALTernate LIBrary), which allows a user access to an alternate subprogram library with minimum effort, is discussed. The ALTLIB program selects subprograms from an alternate library file and merges them with the user's program load file. Only subprograms that are called for (directly or indirectly) by the user's programs and that are available on the alternate library file will be selected. ALTLIB eliminates the need for elaborate control-card manipulations to add subprograms from a subprogram file. ALTLIB returns to the user his binary file and the selected subprograms in correct order for a call to the loader. The user supplies the alternate library file. Subprogram requests which are not satisfied from the alternate library file will be satisfied at load time from the system library.
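The selection rule ALTLIB applies (pull in every alternate-library subprogram that is referenced directly or indirectly, and only those) is a transitive-closure computation, sketched below; the call table and routine names are hypothetical.

    def select_subprograms(user_refs, library_calls):
        """library_calls maps each library routine to the routines it calls;
        references not found in the library are left for the system library."""
        needed, frontier = set(), list(user_refs)
        while frontier:
            name = frontier.pop()
            if name in library_calls and name not in needed:
                needed.add(name)
                frontier.extend(library_calls[name])   # indirect references
        return needed

    # Example: the user calls SOLVE; SOLVE calls FACTOR, which calls PIVOT.
    # select_subprograms({"SOLVE"},
    #                    {"SOLVE": ["FACTOR"], "FACTOR": ["PIVOT"],
    #                     "PIVOT": [], "UNUSED": []})
    # -> {"SOLVE", "FACTOR", "PIVOT"}   (UNUSED is not selected)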
Purchasing a Computer System for the Small Construction Company,
1983-06-08
October 1982. IBM - International Business Machines Corporation, "Small Systems Solutions: An Introduction to Business Computing," Pamphlet SC21-5205-0, Atlanta, Georgia, January 1979. International Business Machines Corporation, "IBM System/34 Introduction," Pamphlet GC21-5153-5, File No. S34-00, Atlanta, Georgia, January 1979. International Business Machines
Shick, G L; Hoover, L W; Moore, A N
1979-04-01
A data base was developed for a computer-assisted personnel data system for a university hospital department of dietetics which would store data on employees' employment, personnel information, attendance records, and termination. Development of the data base required designing computer programs and files, coding directions and forms for card input, and forms and procedures for on-line transmission. A program was written to compute accrued vacation, sick leave, and holiday time, and to generate historical records.
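The accrual computation mentioned above might look roughly like the following sketch; the record does not give the actual accrual rules, so the monthly rates and field names are purely illustrative assumptions.

    def accrued_hours(months_worked, hours_taken,
                      vacation_rate=8.0, sick_rate=4.0):
        """Return remaining vacation and sick-leave balances in hours."""
        vacation = months_worked * vacation_rate - hours_taken.get("vacation", 0)
        sick = months_worked * sick_rate - hours_taken.get("sick", 0)
        return {"vacation": vacation, "sick": sick}

    # accrued_hours(14, {"vacation": 40, "sick": 8})
    # -> {"vacation": 72.0, "sick": 48.0}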
ERIC Educational Resources Information Center
Scholtz, R. G.; And Others
This final report of a feasibility study describes the research performed in assessing the requirements for a chemical signature file and search scheme for organic compound identification and information retrieval. The research performed to determined feasibility of identifying an unknown compound involved screening the compound against a file of…
NASA Technical Reports Server (NTRS)
Easley, Wesley C.
1991-01-01
Experiment-critical use of RS-232 data buses in the Transport Systems Research Vehicle (TSRV) operated by the Advanced Transport Operating Systems Program Office at the NASA Langley Research Center has recently increased. Each application utilizes a number of nonidentical computer and peripheral configurations and requires task-specific software development. To aid these development tasks, an IBM PC-based RS-232 bus monitoring system was produced. It can simultaneously monitor two communication ports of a PC or clone, including the nonstandard bus expansion of the TSRV Grid laptop computers. Display occurs in a separate window for each port's input, with binary display being selectable. A number of other features, including binary log files, screen capture to files, and a full range of communication parameters, are provided.
In Internet-Based Visualization System Study about Breakthrough Applet Security Restrictions
NASA Astrophysics Data System (ADS)
Chen, Jie; Huang, Yan
In implementing an Internet-based visualization system for protein molecules, the system needs to let users observe molecular structures stored on the local computer; that is, clients can generate three-dimensional graphics from a PDB file on the client machine. This requires the Applet to access local files, which raises the question of Applet security restrictions. This paper presents two approaches: 1. Use the signature, key management and Policy Editor tools provided by the JDK to digitally sign and authenticate the Java Applet, bypassing certain security restrictions in the browser. 2. Use a Servlet agent to access the data indirectly, bypassing the restriction that the traditional Java Virtual Machine sandbox model places on Applet capabilities. Both approaches can overcome the Applet's security restrictions, but each has its own strengths.
Data management system for USGS/USEPA urban hydrology studies program
Doyle, W.H.; Lorens, J.A.
1982-01-01
A data management system was developed to store, update, and retrieve data collected in urban stormwater studies jointly conducted by the U.S. Geological Survey and U.S. Environmental Protection Agency in 11 cities in the United States. The data management system is used to retrieve and combine data from USGS data files for use in rainfall, runoff, and water-quality models and for data computations such as storm loads. The system is based on the data management aspect of the Statistical Analysis System (SAS) and was used to create all the data files in the data base. SAS is used for storage and retrieval of basin physiography, land-use, and environmental practices inventory data. Also, storm-event water-quality characteristics are stored in the data base. The advantages of using SAS to create and manage a data base are many: it is simple, easy to use, contains a comprehensive statistical package, and can be used to modify files very easily. Data base system development has progressed rapidly during the last two decades, and the data management system concepts used in this study reflect the advancement made in computer technology during this era. Urban stormwater data is, however, just one application for which the system can be used. (USGS)
Akbulut, Makbule Bilge; Akman, Melek; Terlemez, Arslan; Magat, Guldane; Sener, Sevgi; Shetty, Heeresh
2016-01-01
The aim of this study was to evaluate the efficacy of Twisted File (TF) Adaptive, Reciproc, and ProTaper Universal Retreatment (UR) System instruments for removing root-canal-filling. Sixty single-rooted teeth were decoronated, instrumented, and obturated. Preoperative CBCT scans were taken and the teeth were retreated with TF Adaptive, Reciproc, ProTaper UR, or hand files (n=15). The teeth were then rescanned, and the percentage volume of the residual root-canal-filling material was established. The total time for retreatment was recorded, and the data were statistically analyzed. The statistical ranking of the residual filling material volume was as follows: hand file=TF Adaptive>ProTaper UR=Reciproc. The ProTaper UR and Reciproc systems required shorter periods of time for retreatment. Root canal filling was more efficiently removed by using Reciproc and ProTaper UR instruments than TF Adaptive instruments and hand files. The TF Adaptive system was advantageous over hand files with regard to operating time.
29 CFR 4000.28 - What if I send a computer disk?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 9 2014-07-01 2014-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...
29 CFR 4000.28 - What if I send a computer disk?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 9 2013-07-01 2013-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...
29 CFR 4000.28 - What if I send a computer disk?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 9 2012-07-01 2012-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...
29 CFR 4000.28 - What if I send a computer disk?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...
29 CFR 4000.28 - What if I send a computer disk?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 9 2011-07-01 2011-07-01 false What if I send a computer disk? 4000.28 Section 4000.28... I send a computer disk? (a) In general. We determine your filing or issuance date for a computer... paragraph (b) of this section. (1) Filings. For computer-disk filings, we may treat your submission as...
Using Python on the Peregrine System | High-Performance Computing | NREL
Python was not designed for use in a shared computing environment. For example, an environment.yml file can be created on the developer's laptop and used on the Peregrine system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Eric Y.; Flory, Adam E.; Lamarche, Brian L.
2014-06-01
The Juvenile Salmon Acoustic Telemetry System (JSATS) Detector is a software and hardware system that captures JSATS Acoustic Micro Transmitter (AMT) signals. The system uses hydrophones to capture acoustic signals in the water. This analog signal is then amplified and processed by the Analog to Digital Converter (ADC) and Digital Signal Processor (DSP) board in the computer. This board digitizes and processes the acoustic signal to determine if a possible JSATS tag is present. With this detection, the data are saved to the computer for further analysis. This document details the features and functionality of the JSATS Detector software. It covers how to install, set up, and run the detector software, and also describes the raw binary waveform file format and the CSV files containing RMS values.
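As a rough illustration of the kind of post-processing described (per-block RMS values written to CSV), the sketch below assumes a simple 16-bit raw sample format and an arbitrary block size, which may differ from the actual JSATS file layout.

```python
# Illustrative sketch only: computes per-block RMS values from a raw waveform
# file and writes them to CSV. The sample format and block size are assumptions.
import csv
import numpy as np

def waveform_rms(raw_path, csv_path, dtype=np.int16, block=1024):
    samples = np.fromfile(raw_path, dtype=dtype).astype(np.float64)
    usable = len(samples) - len(samples) % block
    blocks = samples[:usable].reshape(-1, block)
    rms = np.sqrt((blocks ** 2).mean(axis=1))   # root-mean-square per block
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["block_index", "rms"])
        writer.writerows(enumerate(rms))
```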
Digital Stratigraphy: Contextual Analysis of File System Traces in Forensic Science.
Casey, Eoghan
2017-12-28
This work introduces novel methods for conducting forensic analysis of file allocation traces, collectively called digital stratigraphy. These in-depth forensic analysis methods can provide insight into the origin, composition, distribution, and time frame of strata within storage media. Using case examples and empirical studies, this paper illuminates the successes, challenges, and limitations of digital stratigraphy. This study also shows how understanding file allocation methods can provide insight into concealment activities and how real-world computer usage can complicate digital stratigraphy. Furthermore, this work explains how forensic analysts have misinterpreted traces of normal file system behavior as indications of concealment activities. This work raises awareness of the value of taking the overall context into account when analyzing file system traces. This work calls for further research in this area and for forensic tools to provide necessary information for such contextual analysis, such as highlighting mass deletion, mass copying, and potential backdating. © 2017 American Academy of Forensic Sciences.
Zhao, Dan; Shen, Ya; Peng, Bin; Haapasalo, Markus
2013-03-01
The aim of this study was to describe the canal shaping properties of Hyflex CM, Twisted Files (TF), and K3 rotary nickel-titanium files by using micro-computed tomography in maxillary first molars. A total of 36 mesiobuccal root canals of maxillary first molars were prepared with the Hyflex CM, TF, or K3 system. Micro-computed tomography was used to scan the specimens before and after instrumentation. The volume of untreated canal, volume of dentin removed after preparation, amount of uninstrumented area, and the transportation for the coronal, middle, and apical thirds of canals were measured. Instrumentation of canals increased their volume and surface area. The TF group showed the greatest amount of volumetric dentin removal (P < .05), whereas no significant difference was found between the Hyflex CM and K3 groups. There were no significant differences among instrument types concerning uninstrumented area. The TF system produced significantly less transportation than the K3 system in the apical third of canals. No significant difference was found between TF and Hyflex CM instruments relating to apical transportation. In vitro, Hyflex and TF instruments shaped curved root canals in maxillary first molars without significant shaping errors. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Kernel and System Procedures in Flex.
1983-08-01
System procedures on which the operating system for the Flex computer is based are described. These are the low-level procedures used to implement the compilers, file store, command interpreters, etc., on Flex. They run in privileged mode and form the interface between the user and a particular operating system written on top of the Kernel.
Analyzing Sliding Stability of Structures Using the Modified Computer Program GWALL. Revision,
1983-11-01
Analyzing sliding stability of structures using the modified computer program GWALL (U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS). ... GWALL and/or the graphics software package, Graphics Compatibility System (GCS). Input Features: GWALL is very easy to use because it allows the ... Prepared Data File: Time-sharing computer systems do not always respond quickly to the user's commands, especially when there are many users.
Ferrigno, C.F.
1986-01-01
Machine-readable files developed for the High Plains Regional Aquifer-System Analysis project are stored on two magnetic tapes available from the U.S. Geological Survey. The first tape contains computer programs that were used to prepare, store, retrieve, organize, and preserve the areal interpretive data collected by the project staff. The second tape contains 134 data files that can be divided into five general classes: (1) aquifer geometry data, (2) aquifer and water characteristics, (3) water levels, (4) climatological data, and (5) land use and water use data. (Author's abstract)
Oak Ridge Institutional Cluster Autotune Test Drive Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jibonananda, Sanyal; New, Joshua Ryan
2014-02-01
The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for the ORNL staff to run computation-heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program, and the overall software environment was evaluated against anticipated challenges experienced with resources such as the shared-memory Nautilus system (JICS) and Titan (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional, desktop-focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable to run ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system, which the literature indicates is an efficient file system.
Tsukamoto, Takafumi; Yasunaga, Takuo
2014-11-01
Eos (Extensible object-oriented system) is a powerful application suite for image processing of electron micrographs. Eos ordinarily offers only character user interfaces (CUI) under operating systems such as OS X or Linux, which is not user-friendly; users must be expert at image processing of electron micrographs and must also have some knowledge of computer science. However, not everyone who needs Eos is an expert with a CUI, so we extended Eos into an OS-independent web system with graphical user interfaces (GUI) by integrating it with a web browser. The advantage of using a web browser is not only that it gives Eos a GUI, but also that it allows Eos to work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the web browser. Eos has more than 400 commands related to image processing for electron microscopy, and the usage of each command differs from the others. Since the beginning of development, Eos has managed its user interface through an interface definition file, "OptionControlFile", written in CSV (Comma-Separated Value) format; each command has an "OptionControlFile" that records the information needed to generate its interface and describe its usage. The developed GUI system, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also reads the "OptionControlFile" and produces a web user interface automatically, because this mechanism is mature and convenient. The basic client-side actions have been implemented and support auto-generation of web forms with functions for execution, image preview, and file upload to a web server. The system can therefore execute Eos commands with the options unique to each command and carry out image analysis. Problems remain concerning the image file format used for visualization and the workspace for analysis: the file format information is needed to check whether input/output files are correct, and a common workspace for analysis must be provided because the client is physically separated from the server. We solved the file format problem by extending the rules of the Eos OptionControlFile. To solve the workspace problem, we developed two types of system. The first uses only the local environment: the user runs a web server provided by Eos, accesses a web client through a browser, and manipulates local files through the GUI in the browser. The second employs PIONE (Process-rule for Input/Output Negotiation Environment), our platform for heterogeneous distributed environments. Users can place their resources, such as microscope images and text files, into the server-side environment supported by PIONE, and experts can write PIONE rule definitions that describe an image-processing workflow. PIONE then runs each image-processing step on a suitable computer, following the defined rules. PIONE supports interactive manipulation, so a user can try a command with various setting values; here we contribute the auto-generation of a GUI for a PIONE workflow. As an advanced function, we have also developed a module that logs user actions. The logs include information such as the setting values used in image processing and the sequence of commands executed. Used effectively, these logs offer many advantages.
For example, when an expert discovers some know-how about image processing, other users can share the logs containing that know-how, and by analyzing the logs we may obtain recommended image-analysis workflows. To implement a social platform of image processing for electron microscopists, we have also developed the necessary system infrastructure. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
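The idea of generating a user interface automatically from a per-command option definition can be illustrated with a toy example. The CSV column layout below (name, type, default, description) is invented for the example and is not the real OptionControlFile format.

```python
# Toy illustration of driving form generation from a CSV option definition,
# in the spirit of Eos's "OptionControlFile". Columns are hypothetical.
import csv
import io

SAMPLE = """\
-i,filename,none,input image file
-o,filename,none,output image file
-sigma,float,1.5,smoothing width
"""

def csv_to_html_form(text):
    rows = csv.reader(io.StringIO(text))
    fields = []
    for name, ftype, default, desc in rows:
        input_type = "number" if ftype == "float" else "text"
        fields.append(
            f'<label>{desc} ({name}) '
            f'<input type="{input_type}" name="{name}" value="{default}"></label>'
        )
    return "<form>\n" + "\n".join(fields) + "\n<button>Run</button>\n</form>"

print(csv_to_html_form(SAMPLE))
```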
NASA Astrophysics Data System (ADS)
Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.
2006-12-01
In the Solar-Terrestrial Physics (STP) field, the amount of satellite observation data increases every year. Three problems must be solved to achieve large-scale statistical analyses of such data. (i) More CPU power and larger memory and disk capacity are required; personal computers are not powerful enough to analyze this amount of data, while supercomputers provide high-performance CPUs and rich memory but are usually separated from the Internet or connected only for programming or data file transfer. (ii) Most observation data files are managed at data sites distributed over the Internet, so users must know where the files are located. (iii) Because no common data format is available in the STP field, users must write a reading program for each data set themselves. To overcome problems (i) and (ii), we constructed a parallel and distributed data-analysis environment based on Gfarm, the reference implementation of the Grid Datafarm architecture. Gfarm shares computational resources and performs parallel distributed processing. In addition, it provides the Gfarm file system, which acts as a virtual directory tree spanning the nodes. The Gfarm environment is composed of three parts: a metadata server that manages information about the distributed files, file-system nodes that provide computational resources, and a client that submits jobs to the metadata server and manages data-processing schedules. In the present study, both data files and data processes were parallelized on a Gfarm with 6 file-system nodes (each node: Pentium V 1 GHz CPU, 256 MB memory, and a 40 GB disk). To evaluate the performance of the present Gfarm system, we scanned a large number of data files, each about 300 MB in size, using three processing methods: sequential processing on one node, sequential processing by each node, and parallel processing by each node. Comparing the number of files with the elapsed time, parallel and distributed processing reduced the elapsed time to one fifth of that for sequential processing. On the other hand, sequential processing was faster in another experiment whose files were smaller than 100 KB; in that case the elapsed time to scan one file was within one second, which implies that disk swapping took place during parallel processing by each node. We note that operation became unstable when the number of files exceeded 1000. To overcome problem (iii), we developed an original data class. This class supports reading data files in various formats by converting them into a common internal format: it defines a schema for every type of data and encapsulates the structure of the data files. In addition, because the class provides a time re-sampling function, users can easily convert multiple data arrays with different time resolutions into arrays with the same time resolution. Using Gfarm, we thus achieved a high-performance environment for large-scale statistical data analyses. It should be noted that the present method is effective only when individual data files are sufficiently large. At present, we are rebuilding the Gfarm environment with 8 nodes (each node: Athlon 64 X2 dual-core 2 GHz CPU, 2 GB memory, and 1.2 TB disk using RAID 0), and our original class is to be implemented on the new Gfarm environment.
In the present talk, we show the latest results obtained by applying the present system to analyses of a huge number of satellite observation data files.
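The time re-sampling function of such a data class can be illustrated with a minimal sketch, assuming simple linear interpolation onto a common time grid; the original class and its API are not reproduced here.

```python
# Minimal sketch of the "common time resolution" idea: two series sampled at
# different rates are re-sampled onto one grid. numpy.interp is used purely
# for illustration.
import numpy as np

def resample(t, values, t_common):
    """Linearly interpolate a time series onto a common time grid."""
    return np.interp(t_common, t, values)

# Hypothetical example: 1 s and 4 s resolution data merged onto a 2 s grid.
t_fast, v_fast = np.arange(0, 60, 1.0), np.random.rand(60)
t_slow, v_slow = np.arange(0, 60, 4.0), np.random.rand(15)
t_common = np.arange(0, 60, 2.0)
merged = np.column_stack([resample(t_fast, v_fast, t_common),
                          resample(t_slow, v_slow, t_common)])
```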
Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.
2012-01-01
Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs: however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the parameter estimation software PEST; the discussion presented in this report focuses on the use of the PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.
Dhingra, Annil; Ruhal, Nidhi; Miglani, Anjali
2015-04-01
Successful endodontic therapy depends on many factors, and one of the most important steps in any root canal treatment is root canal preparation. In addition, respecting the original shape of the canal is of equal importance; otherwise, canal aberrations such as transportation will be created. The purpose of this study was to compare and evaluate the reciprocating WaveOne and Reciproc and the rotary OneShape single-file instrumentation systems with respect to cervical dentin thickness, cross-sectional area, and canal transportation in first mandibular molars using cone beam computed tomography. Sixty mandibular first molars extracted for periodontal reasons were collected from the Department of Oral and Maxillofacial. Teeth were prepared using one rotary and two reciprocating single-file systems and were divided into 3 groups of 20 teeth each. Pre-instrumentation and post-instrumentation scans were taken and evaluated for three parameters: canal transportation, cervical dentinal thickness, and cross-sectional area. Results were analysed statistically using ANOVA and post-hoc Tukey analysis. The change in cross-sectional area after filing showed a significant difference at 0 mm, 1 mm, 2 mm and 7 mm (p<0.001, p=0.006, 0.004 and <0.001, respectively). There was a significant difference between WaveOne and OneShape, and between OneShape and Reciproc, at 0 mm, 1 mm, 2 mm and 7 mm (p-values for WaveOne vs. OneShape <0.001, 0.022, 0.011 and <0.001, respectively, and for OneShape vs. Reciproc <0.001, 0.011, 0.008 and <0.001). On assessing the transportation of the three file systems over a distance of 7 mm (starting from 0 mm, with evaluation at 1 mm, 2 mm, 3 mm, 5 mm and 7 mm), the results showed a significant difference among the file systems at various lengths (p=0.014, 0.046, 0.004, 0.028, 0.005 and 0.029, respectively). The mean value of cervical dentin removal was greatest at all levels for OneShape and lowest for WaveOne, showing the better quality of WaveOne and Reciproc over the OneShape file system. A significant difference was found at 9 mm, 11 mm and 12 mm between all three file systems (p<0.001, <0.001, <0.001). It was concluded that reciprocating motion is better than rotary motion for all three parameters: canal transportation, cross-sectional area, and cervical dentinal thickness.
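The statistical workflow named above (one-way ANOVA followed by post-hoc Tukey comparisons) can be sketched on synthetic placeholder data, assuming scipy and statsmodels are available; the numbers below are not the study's measurements.

```python
# Sketch of ANOVA + post-hoc Tukey on synthetic (not real) transportation data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
waveone = rng.normal(0.10, 0.03, 20)   # synthetic values, one group per system
reciproc = rng.normal(0.12, 0.03, 20)
oneshape = rng.normal(0.18, 0.03, 20)

f_stat, p_value = f_oneway(waveone, reciproc, oneshape)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

values = np.concatenate([waveone, reciproc, oneshape])
groups = ["WaveOne"] * 20 + ["Reciproc"] * 20 + ["OneShape"] * 20
print(pairwise_tukeyhsd(values, groups))   # pairwise group comparisons
```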
Escott, Edward J; Rubinstein, David
2004-01-01
It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
Ground Software Maintenance Facility (GSMF) system manual
NASA Technical Reports Server (NTRS)
Derrig, D.; Griffith, G.
1986-01-01
The Ground Software Maintenance Facility (GSMF) is designed to support development and maintenance of Spacelab ground support software. The GSMF consists of a Perkin Elmer 3250 (Host computer) and a MITRA 125s (ATE computer), with appropriate interface devices and software to simulate the Electrical Ground Support Equipment (EGSE). This document is presented in three sections: (1) GSMF Overview; (2) Software Structure; and (3) Fault Isolation Capability. The overview contains information on hardware and software organization along with their corresponding block diagrams. The Software Structure section describes the modes of software structure including source files, link information, and database files. The Fault Isolation section describes the capabilities of the Ground Computer Interface Device, Perkin Elmer host, and MITRA ATE.
Terascale Cluster for Advanced Turbulent Combustion Simulations
2008-07-25
We have given the name CATS (for Combustion And Turbulence Simulator) to the terascale system that was obtained through this grant. CATS ... InfiniBand interconnect. CATS includes an interactive login node and a file server, each holding in excess of 1 terabyte of file storage. The 35 active compute nodes of CATS enable us to run up to 140-core parallel MPI batch jobs; one node is reserved to run the scheduler. CATS is operated and ...
IBM NJE protocol emulator for VAX/VMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.
1981-01-01
Communications software has been written at Argonne National Laboratory to enable a VAX/VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE is actually a collection of programs that support job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any node in the network for printing, punching, or job submission, as well as to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously to allow users to perform other work while files are awaiting transmission. No changes are required to the IBM software.
A Lightweight, High-performance I/O Management Package for Data-intensive Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jun Wang
2007-07-17
File storage systems are playing an increasingly important role in high-performance computing as the performance gap between CPU and disk increases. It could take a long time to develop an entire system from scratch, so solutions will have to be built as extensions to existing systems. If new portable, customized software components are plugged into these systems, better sustained high I/O performance and higher scalability will be achieved, and the development cycle of the next generation of parallel file systems will be shortened. The overall research objective of this ECPI development plan is to develop a lightweight, customized, high-performance I/O management package named LightI/O to extend and leverage current parallel file systems used by DOE. During this period, we developed a novel component of LightI/O, prototyped it in PVFS2, and evaluated the resulting prototype, an extended PVFS2 system, on data-intensive applications. The preliminary results indicate that the extended PVFS2 delivers better performance and reliability to users. A strong collaborative effort between the PI at the University of Nebraska Lincoln and the DOE collaborators, Drs. Rob Ross and Rajeev Thakur at Argonne National Laboratory, who lead the PVFS2 group, makes the project more promising.
XML Flight/Ground Data Dictionary Management
NASA Technical Reports Server (NTRS)
Wright, Jesse; Wiklow, Colette
2007-01-01
A computer program generates Extensible Markup Language (XML) files that effect coupling between the command- and telemetry-handling software running aboard a spacecraft and the corresponding software running in ground support systems. The XML files are produced by use of information from the flight software and from flight-system engineering. The XML files are converted to legacy ground-system data formats for command and telemetry, transformed into Web-based and printed documentation, and used in developing new ground-system data-handling software. Previously, the information about telemetry and command was scattered in various paper documents that were not synchronized. The process of searching and reading the documents was time-consuming and introduced errors. In contrast, the XML files contain all of the information in one place. XML structures can evolve in such a manner as to enable the addition, to the XML files, of the metadata necessary to track the changes and the associated documentation. The use of this software has reduced the extent of manual operations in developing a ground data system, thereby saving considerable time and removing errors that previously arose in the translation and transcription of software information from the flight to the ground system.
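A minimal sketch of generating such an XML dictionary with a standard library is shown below; the element and attribute names are invented for illustration and are not the project's actual schema.

```python
# Hedged sketch: write a small telemetry dictionary as XML with ElementTree.
# Element and attribute names are hypothetical.
import xml.etree.ElementTree as ET

channels = [
    {"name": "BATT_V", "type": "float", "units": "V"},
    {"name": "MODE",   "type": "uint8", "units": ""},
]

root = ET.Element("telemetry_dictionary", version="1.0")
for ch in channels:
    ET.SubElement(root, "channel", name=ch["name"],
                  data_type=ch["type"], units=ch["units"])

ET.ElementTree(root).write("telemetry.xml",
                           encoding="utf-8", xml_declaration=True)
```

Because the same file can be transformed (for example with XSLT or further scripting) into legacy ground formats and human-readable documentation, a single generated source avoids the scattered, unsynchronized paper documents described above.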
7 CFR 272.14 - Deceased matching system.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Master File, obtained through the State Verification and Exchange System (SVES) and enter into a computer... comparison of matched data at the time of application and no less frequently than once a year. (2) The...
7 CFR 272.14 - Deceased matching system.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Master File, obtained through the State Verification and Exchange System (SVES) and enter into a computer... comparison of matched data at the time of application and no less frequently than once a year. (2) The...
Eastern Gas Shales Project (EGSP) data files: a final report. Open-file report 81-598
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyman, T.S.
1981-01-01
The United States Geological Survey and Petroleum Information Corporation (PI) of Denver have created two large computerized files of data for the Eastern Gas Shales Project (EGSP) as part of a large responsibility to the Department of Energy (DOE), Morgantown Energy Technology Center (METC), in Morgantown, West Virginia. Computer-compatible well, outcrop, and sample data from EGSP contractors are being stored on digital tape and delivered to METC for subsequent data-base management. This report has been written to: (1) discuss data-file background and development, (2) address specific problems and solutions for future project use, and (3) present a general summary of well- and sample-data file content by state, county, well, contractor, and subject coverage. When looking at the EGSP data-gathering task in retrospect, modifications to project management would have made the data-gathering process more successful. Many problems resulted from having contractors perform their own data encoding. Some EGSP contractors had little knowledge of computer- and data-encoding techniques, and they often delegated encoding responsibilities to subordinates who were not properly informed about procedures. The overall lack of uniformity in analytical procedures and methods resulted in an apparent over-abundance of card classes. Nearly 40% of the available card classes were never used, and about 30% of those used contain fewer than 100 data records. The most serious problem encountered during data-file development has been the long delay in arranging for an efficient retrieval and mapping system. Sample- and well-data file management are now coordinated through METC, and Petroleum Information Corporation maintains an effective in-house data management system for data retrieval and analysis. The present system would have been very useful to retrieve data for contractor needs two years earlier, even though the files were incomplete.
NASA Astrophysics Data System (ADS)
Flomenbom, Ophir; Castañeda-Priego, Ramón; Peeters, François
2014-11-01
In this document, we present the Special Issue's projects; these include reviews and articles about mathematical solutions and formulations of single-file dynamics (SFD), as well as its computational modeling, experimental evidence, and value in explaining real-life occurrences. In particular, we introduce projects focusing on electron dynamics on liquid helium in channels with changing width, on the zig-zag configuration in files with longitudinal movement, on expanding files, on both heterogeneous and slow files, on files with external forces, and on the importance of the interaction potential shape on the particle dynamics along the file. Applications of SFD are of intrinsic value in life sciences, biophysics, physics, and materials science, since they can explain a large diversity of many-body systems, e.g., biological channels, biological motors, membranes, crowding, electron motion in proteins, etc. These systems are explained in all the projects that participate in this topical issue. This Special Issue can therefore intrigue, inspire, and scientifically advance young people, as well as scientists already active in this field.
PROMIS (Procurement Management Information System)
NASA Technical Reports Server (NTRS)
1987-01-01
The PROcurement Management Information System (PROMIS) provides both detailed and summary level information on all procurement actions performed within NASA's procurement offices at Marshall Space Flight Center (MSFC). It provides not only on-line access, but also schedules procurement actions, monitors their progress, and updates Forecast Award Dates. Except for a few computational routines coded in FORTRAN, the majority of the system is coded in a high level language called NATURAL. A relational Data Base Management System called ADABAS is utilized. Certain fields, called descriptors, are set up on each file to allow the selection of records based on a specified value or range of values. The use of like descriptors on different files serves as the link between the files, thus producing a relational data base. Twenty related files are currently being maintained on PROMIS.
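The "like descriptors" linkage can be illustrated with a toy join on a shared descriptor field; the field names below are hypothetical, and ADABAS/NATURAL are not emulated.

```python
# Toy illustration: two "files" share a descriptor (a procurement number) and
# records are joined on it, producing a relational view. Field names invented.
actions = [
    {"proc_no": "P-001", "description": "Valve assembly", "office": "MSFC"},
    {"proc_no": "P-002", "description": "Test fixture",   "office": "MSFC"},
]
milestones = [
    {"proc_no": "P-001", "forecast_award": "1987-09-01"},
    {"proc_no": "P-002", "forecast_award": "1987-11-15"},
]

by_proc = {m["proc_no"]: m for m in milestones}   # index the second file
for a in actions:                                  # join on the shared descriptor
    linked = by_proc.get(a["proc_no"], {})
    print(a["proc_no"], a["description"], linked.get("forecast_award", "TBD"))
```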
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
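A minimal sketch of the kind of range check a QA code might perform is shown below, run on made-up records; the field names, limits, and file format are assumptions, not the HMS data-base layout.

```python
# Illustrative QA range check: flag values outside assumed physical limits
# and write them to a report file for later correction.
import csv

LIMITS = {"temp_c": (-40.0, 50.0), "wind_mps": (0.0, 60.0)}  # assumed limits

def qa_check(in_path, report_path):
    with open(in_path, newline="") as fin, open(report_path, "w") as fout:
        for row in csv.DictReader(fin):
            for field, (lo, hi) in LIMITS.items():
                value = float(row[field])
                if not lo <= value <= hi:
                    fout.write(f"{row['timestamp']}: {field}={value} "
                               f"outside [{lo}, {hi}]\n")
```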
TMS communications software. Volume 1: Computer interfaces
NASA Technical Reports Server (NTRS)
Brown, J. S.; Lenker, M. D.
1979-01-01
A prototype bus communications system, which is being used to support the Trend Monitoring System (TMS) as well as for evaluation of the bus concept is considered. Hardware and software interfaces to the MODCOMP and NOVA minicomputers are included. The system software required to drive the interfaces in each TMS computer is described. Documentation of other software for bus statistics monitoring and for transferring files across the bus is also included.
Validation of CFD/Heat Transfer Software for Turbine Blade Analysis
NASA Technical Reports Server (NTRS)
Kiefer, Walter D.
2004-01-01
I am an intern in the Turbine Branch of the Turbomachinery and Propulsion Systems Division. The division is primarily concerned with experimental and computational methods of calculating heat transfer effects of turbine blades during operation in jet engines and land-based power systems. These include modeling flow in internal cooling passages and film cooling, as well as calculating heat flux and peak temperatures to ensure safe and efficient operation. The branch is research-oriented, emphasizing the development of tools that may be used by gas turbine designers in industry. The branch has been developing a computational fluid dynamics (CFD) and heat transfer code called GlennHT to achieve the computational end of this analysis. The code was originally written in FORTRAN 77 and run on Silicon Graphics machines. However, the code has been rewritten and compiled in FORTRAN 90 to take advantage of more modern computer memory systems. In addition, the branch has made a switch in system architectures from SGIs to Linux PCs. The newly modified code therefore needs to be tested and validated. This is the primary goal of my internship. To validate the GlennHT code, it must be run using benchmark fluid mechanics and heat transfer test cases, for which there are either analytical solutions or widely accepted experimental data. From the solutions generated by the code, comparisons can be made to the correct solutions to establish the accuracy of the code. To design and create these test cases, there are many steps and programs that must be used. Before a test case can be run, pre-processing steps must be accomplished. These include generating a grid to describe the geometry, using a software package called GridPro. Also, various files required by the GlennHT code must be created, including a boundary condition file, a file for multi-processor computing, and a file to describe problem and algorithm parameters. A good deal of this internship will be to become familiar with these programs and the structure of the GlennHT code. Additional information is included in the original extended abstract.
75 FR 30411 - Privacy Act of 1974; Report of a Modified or Altered System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
... Privacy Act of 1974; the Federal Information Security Management Act of 2002; the Computer Fraud and Abuse... Security Management Act of 2002; the Computer Fraud and Abuse Act of 1986; the Health Insurance Portability... systems and data files necessary for compliance with Title XI, Part C of the Social Security Act because...
Peregrine System Configuration | High-Performance Computing | NREL
Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; ... directories are mounted on all nodes, along with a file system dedicated to shared projects. ... processors with 64 GB of memory. All nodes are connected to the high-speed InfiniBand network ...
Geiger, Linda H.
1983-01-01
The report is an update of U.S. Geological Survey Open-File Report 77-703, which described a retrieval program for administrative index of active data-collection sites in Florida. Extensive changes to the Findex system have been made since 1977 , making the previous report obsolete. A description of the data base and computer programs that are available in the Findex system are documented in this report. This system serves a vital need in the administration of the many and diverse water-data collection activities. District offices with extensive data-collection activities will benefit from the documentation of the system. Largely descriptive, the report tells how a file of computer card images has been established which contains entries for all sites in Florida at which there is currently a water-data collection activity. Entries include information such as identification number, station name, location, type of site, county, frequency of data collection, funding, and other pertinent details. The computer program FINDEX selectively retrieves entries and lists them in a format suitable for publication. The index is updated routinely. (USGS)
LUMIS: Land Use Management and Information Systems; coordinate oriented program documentation
NASA Technical Reports Server (NTRS)
1976-01-01
An integrated geographic information system to assist program managers and planning groups in metropolitan regions is presented. The series of computer software programs and procedures involved in data base construction uses the census DIME file and point-in-polygon architectures. The system is described in two parts: (1) instructions to operators with regard to digitizing and editing procedures, and (2) application of data base construction algorithms to achieve map registration, assure the topological integrity of polygon files, and tabulate land use acreages within administrative districts.
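The point-in-polygon operation at the heart of such tabulations can be illustrated with the standard ray-casting test; this is a generic sketch, not the LUMIS implementation.

```python
# Standard ray-casting point-in-polygon test: count how many polygon edges a
# horizontal ray from the point crosses; an odd count means the point is inside.
def point_in_polygon(x, y, polygon):
    """polygon: list of (x, y) vertices in order; returns True if inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# e.g. deciding which administrative district a parcel centroid falls in
district = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(3, 4, district))   # True
```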
Serefoglu, Burcu; Piskin, Beyser
2017-09-26
The aim of this investigation was to compare the cleaning and shaping efficiency of the Self-Adjusting File and ProTaper, and to assess the correlation between root canal curvature and working time in mandibular molars using micro-computed tomography. Twenty extracted mandibular molars were instrumented with ProTaper and Self-Adjusting File, and the total working time was measured in mesial canals. The changes in canal volume, surface area and structure model index, transportation, uninstrumented area, and the correlation between working time and curvature were analyzed. Although no statistically significant difference was observed between the two systems in distal canals (p>0.05), a significantly higher amount of removed dentin volume and a lower uninstrumented area were provided by ProTaper in mesial canals (p<0.0001). A correlation between working time and canal curvature was also observed in mesial canals for both groups (SAF r² = 0.792, p<0.0004; PTU r² = 0.9098, p<0.0001).
Characterizing parallel file-access patterns on a large-scale multiprocessor
NASA Technical Reports Server (NTRS)
Purakayastha, Apratim; Ellis, Carla Schlatter; Kotz, David; Nieuwejaar, Nils; Best, Michael
1994-01-01
Rapid increases in the computational speeds of multiprocessors have not been matched by corresponding performance enhancements in the I/O subsystem. To satisfy the large and growing I/O requirements of some parallel scientific applications, we need parallel file systems that can provide high-bandwidth and high-volume data transfer between the I/O subsystem and thousands of processors. Design of such high-performance parallel file systems depends on a thorough grasp of the expected workload. So far there have been no comprehensive usage studies of multiprocessor file systems. Our CHARISMA project intends to fill this void. The first results from our study involve an iPSC/860 at NASA Ames. This paper presents results from a different platform, the CM-5 at the National Center for Supercomputing Applications. The CHARISMA studies are unique because we collect information about every individual read and write request and about the entire mix of applications running on the machines. The results of our trace analysis lead to recommendations for parallel file system design. First, the file system should support efficient concurrent access to many files and I/O requests from many jobs under varying load conditions. Second, it must efficiently manage large files kept open for long periods. Third, it should expect to see small requests, predominantly sequential access patterns, application-wide synchronous access, no concurrent file-sharing between jobs, appreciable byte and block sharing between processes within jobs, and strong interprocess locality. Finally, the trace data suggest that node-level write caches and collective I/O request interfaces may be useful in certain environments.
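One simple piece of such trace analysis, classifying a per-file request stream as sequential or not, can be sketched as follows; the trace format is hypothetical.

```python
# Sketch: fraction of request-to-request transitions that are strictly
# sequential, for a per-file stream of (offset, size) requests.
def fraction_sequential(requests):
    """requests: list of (offset, size) tuples in arrival order."""
    if len(requests) < 2:
        return 1.0
    sequential = 0
    expected = requests[0][0] + requests[0][1]
    for offset, size in requests[1:]:
        if offset == expected:        # next request starts where the last ended
            sequential += 1
        expected = offset + size
    return sequential / (len(requests) - 1)

trace = [(0, 4096), (4096, 4096), (8192, 4096), (65536, 4096)]
print(fraction_sequential(trace))     # 0.666... (2 of 3 transitions sequential)
```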
Li, Mei-Lin; Liao, Wei-Li; Cai, Hua-Xiong
2018-01-01
The aim of the present study was to evaluate the length of dentinal microcracks observed prior to and following root canal preparation with different single-file nickel-titanium (Ni-Ti) systems using micro-computed tomography (micro-CT) analysis. A total of 80 mesial roots of mandibular first molars presenting with type II Vertucci canal configurations were scanned at an isotropic resolution of 7.4 µm. The samples were randomly assigned into four groups (n=20 per group) according to the system used for root canal preparation, including the WaveOne (WO), OneShape (OS), Reciproc (RE) and control groups. A second micro-CT scan was conducted after the root canals were prepared with size 25 instruments. Pre- and postoperative cross-section images of the roots (n=237,760) were then screened to identify the lengths of the microcracks. The results indicated that the microcrack lengths were notably increased following root canal preparation (P<0.05). The alterations in microcrack length in the OS group were more significant compared with those in the WO, RE and control groups (P<0.05). In conclusion, the formation and development of dentinal microcracks may be associated with the movement caused by preparation rather than the taper of the files. Among the single-file Ni-Ti systems, WO and RE were not observed to cause notable microcracks, while the OS system resulted in evident microcracks.
Program Helps Generate And Manage Graphics
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames saved as text files, in readable and disclosed format, easily retrieved and manipulated by user programs for wide range of real-time visual information applications. LCFM implemented in frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or control, diagrams of circuits or systems brought to "life" by use of designated video colors and intensities to symbolize status of hardware components (via real-time feedback from sensors). Status of systems can be displayed. Written in C++ using Borland C++ 2.0 compiler for IBM PC-series computers and compatible computers running MS-DOS.
Experimental Analysis of File Transfer Rates over Wide-Area Dedicated Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata
2016-12-01
File transfers over dedicated connections, supported by large parallel file systems, have become increasingly important in high-performance computing and big data workflows. It remains a challenge to achieve peak rates for such transfers due to the complexities of file I/O, host, and network transport subsystems, and equally importantly, their interactions. We present extensive measurements of disk-to-disk file transfers using Lustre and XFS file systems mounted on multi-core servers over a suite of 10 Gbps emulated connections with 0-366 ms round trip times. Our results indicate that large buffer sizes and many parallel flows do not always guarantee high transfer rates. Furthermore, large variations in the measured rates necessitate repeated measurements to ensure confidence in inferences based on them. We propose a new method to efficiently identify the optimal joint file I/O and network transport parameters using a small number of measurements. We show that for XFS and Lustre with direct I/O, this method identifies configurations achieving 97% of the peak transfer rate while probing only 12% of the parameter space.
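The general idea of probing only a small fraction of the parameter space can be sketched abstractly as a coarse-to-fine search over (buffer size, number of flows). This is an illustration of the concept under that assumption, not the paper's actual method; the measurement itself is left as a caller-supplied function.

```python
# Coarse-to-fine search over a 2-D parameter grid, probing far fewer points
# than the full grid. measure((buffer_size, n_flows)) -> observed rate.
import itertools

def coarse_to_fine(measure, buffers, flows, keep=2):
    # Phase 1: probe a coarse sub-grid (every other value of each list).
    coarse = list(itertools.product(buffers[::2], flows[::2]))
    best_coarse = sorted(coarse, key=measure, reverse=True)[:keep]
    # Phase 2: re-measure a small neighbourhood around the best coarse points.
    candidates = set()
    for b, f in best_coarse:
        bi, fi = buffers.index(b), flows.index(f)
        for i in (bi - 1, bi, bi + 1):
            for j in (fi - 1, fi, fi + 1):
                if 0 <= i < len(buffers) and 0 <= j < len(flows):
                    candidates.add((buffers[i], flows[j]))
    return max(candidates, key=measure)
```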
Managing Data From Signal-Propagation Experiments
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1989-01-01
Computer programs generate characteristic plots from amplitudes and phases. Software system enables minicomputer to process data on amplitudes and phases of signals received during experiments in ground-mobile/satellite radio propagation. Takes advantage of file-handling capabilities of UNIX operating system and C programming language. Interacts with user, under whose guidance programs in FORTRAN language generate plots of spectra or other curves of types commonly used to characterize signals. FORTRAN programs used to process file-handling outputs into any of several useful forms.
Improving Reliability in a Stochastic Communication Network
1990-12-01
and GINO. In addition, the following computers were used: a Sun 386i workstation, a Digital Equipment Corporation (DEC) 11/785 miniframe, and a DEC ... operating system. The DEC 11/785 miniframe used in the experiment was running Unix Version 4.3 (Berkeley System Domain). Maxflo was run on the DEC 11/785 ... the file was still called Modifyl.for). 4. The Maxflo program was started on the DEC 11/785 miniframe. 5. At this time the Convert.max file, created ...
GEO2D - Two-Dimensional Computer Model of a Ground Source Heat Pump System
James Menart
2013-06-07
This record contains a zipped archive of the files required to run GEO2D, a computer code for simulating ground source heat pump (GSHP) systems in two dimensions. GEO2D performs a detailed finite-difference simulation of the heat transfer occurring within the working fluid, the tube wall, the grout, and the ground. Both horizontal and vertical wells can be simulated with this program, although it should be noted that the vertical well is modeled as a single tube. The program also models the heat pump in conjunction with the heat transfer occurring; GEO2D simulates the heat pump and ground loop as a system. GEO2D produces many results as a function of time and position, such as heat transfer rates, temperatures, and heat pump performance. In addition, it performs an economic comparison between the simulated geothermal system and a comparable air heat pump system, or a comparable gas, oil, or propane heating system with a vapor-compression air conditioner. The version of GEO2D in the attached file has been coupled to the DOE heating- and cooling-load software ENERGYPLUS, a great convenience for the user because heating and cooling loads are an input to GEO2D. GEO2D is a user-friendly program that uses a graphical user interface for inputs and outputs; these make entering data simple and produce many plotted results that are easy to understand. Running GEO2D requires access to MATLAB. If MATLAB is not available on your computer, you can download the 64-bit version of MCRInstaller.exe from the MATLAB website or from this geothermal depository; this free download will enable you to run GEO2D.
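The core of such a simulation is an explicit finite-difference update of the temperature field. The following minimal 2-D sketch uses arbitrary grid, property, and boundary choices and is not the GEO2D code.

```python
# Minimal 2-D explicit finite-difference heat-conduction step (interior nodes).
import numpy as np

def step(T, alpha, dx, dt):
    """Advance the temperature field T by one explicit time step."""
    Tn = T.copy()
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + alpha * dt / dx**2 * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1]
    )
    return Tn

T = np.full((50, 50), 12.0)   # undisturbed ground at 12 degC (illustrative)
T[25, 25] = 4.0               # a cold borehole node (illustrative)
alpha, dx = 1.0e-6, 0.1       # soil thermal diffusivity (m^2/s), grid spacing (m)
dt = 0.2 * dx**2 / alpha      # time step chosen inside the explicit stability limit
for _ in range(100):
    T = step(T, alpha, dx, dt)
```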
ERIC Educational Resources Information Center
Marcum, Deanna; Boss, Richard
1983-01-01
Relates office automation to its application in libraries, discussing computer software packages for microcomputers performing tasks involved in word processing, accounting, statistical analysis, electronic filing cabinets, and electronic mail systems. (EJS)
Measurements of file transfer rates over dedicated long-haul connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Settlemyer, Bradley W; Imam, Neena
2016-01-01
Wide-area file transfers are an integral part of several High-Performance Computing (HPC) scenarios. Dedicated network connections with high capacity, low loss rate and low competing traffic, are increasingly being provisioned over current HPC infrastructures to support such transfers. To gain insights into these file transfers, we collected transfer rate measurements for Lustre and xfs file systems between dedicated multi-core servers over emulated 10 Gbps connections with round trip times (rtt) in the 0-366 ms range. Memory transfer throughput over these connections is measured using iperf, and file IO throughput on host systems is measured using xddprof. We consider two file system configurations: Lustre over IB network and xfs over SSD connected to PCI bus. Files are transferred using xdd across these connections, and the transfer rates are measured, which indicate the need to jointly optimize the connection and host file IO parameters to achieve peak transfer rates. In particular, these measurements indicate that (i) peak file transfer rate is lower than peak connection and host IO throughput, in some cases by as much as 50% or lower, (ii) xdd request sizes that achieve peak throughput for host file IO do not necessarily lead to peak file transfer rates, and (iii) parallelism in host IO and TCP transport does not always improve the file transfer rates.
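A simple host-side throughput probe in the same spirit (though far cruder than xdd or iperf) can be written as follows; the paths and request sizes are placeholders.

```python
# Copy a file with a given request size and report MB/s, to compare how the
# host IO rate varies with request size. Paths below are placeholders.
import os
import time

def copy_rate(src, dst, request_size=8 * 1024 * 1024):
    start = time.monotonic()
    total = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(request_size)
            if not chunk:
                break
            fout.write(chunk)
            total += len(chunk)
        fout.flush()
        os.fsync(fout.fileno())       # include the time to reach the device
    return total / (time.monotonic() - start) / 1e6  # MB/s

# e.g. compare request sizes, as the measurements above do for xdd:
# for size in (64 * 1024, 1024 * 1024, 8 * 1024 * 1024):
#     print(size, copy_rate("/lustre/src.dat", "/xfs/dst.dat", size))
```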
Automated Instructional Management Systems (AIMS) Version III, Users Manual.
ERIC Educational Resources Information Center
New York Inst. of Tech., Old Westbury.
This document sets forth the procedures necessary to utilize and understand the operating characteristics of the Automated Instructional Management System - Version III, a computer-based system for management of educational processes. Directions for initialization, including internal and user files; system and operational input requirements;…
A brain-computer interface controlled mail client.
Yu, Tianyou; Li, Yuanqing; Long, Jinyi; Wang, Cong
2013-01-01
In this paper, we propose a brain-computer interface (BCI)-based mail client. This system is controlled by hybrid features extracted from scalp-recorded electroencephalography (EEG). We emulate the computer mouse using the motor imagery-based mu rhythm and the P300 potential. Furthermore, an adaptive P300 speller is included to provide a text input function. With this BCI mail client, users can receive, read, and write mails, as well as attach files when writing mail. The system has been tested on 3 subjects. Experimental results show that mail communication with this system is feasible.
A Geometry Based Infra-Structure for Computational Analysis and Design
NASA Technical Reports Server (NTRS)
Haimes, Robert
1998-01-01
The computational steps traditionally taken for most engineering analysis suites (computational fluid dynamics (CFD), structural analysis, heat transfer, etc.) are: (1) Surface Generation -- usually by employing a Computer Assisted Design (CAD) system; (2) Grid Generation -- preparing the volume for the simulation; (3) Flow Solver -- producing the results at the specified operational point; (4) Post-processing Visualization -- interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. These vendors couple directly to a number of CAD systems and are executed from within the CAD Graphical User Interface (GUI). It should be noted that the structural analysis problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could go to Initial Graphics Exchange Specification (IGES) or Standard Exchange Program (STEP) files. The output from Grid Generators and Solvers does not really have standards, though there are a couple of file formats that can be used for a subset of the gridding (i.e. PLOT3D data formats). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Specifically, the problems with this procedure are: (1) File based -- Information flows from one step to the next via data files with formats specified for that procedure. File standards, when they exist, are wholly inadequate. For example, geometry from CAD systems (transmitted via IGES files) is defined as disjoint surfaces and curves (as well as masses of other information of no interest for the Grid Generator). This is particularly onerous for modern CAD systems based on solid modeling. The part was a proper solid and in the translation to IGES has lost this important characteristic. STEP is another standard for CAD data that exists and supports the concept of a solid. The problem with STEP is that a solid modeling geometry kernel is required to query and manipulate the data within this type of file. (2) 'Good' Geometry -- A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. Adroit multi-block methods are not far behind. This means that a million node steady-state solution can be computed on the order of hours (using current high performance computers) starting from this 'good' geometry. Unfortunately, the geometry usually transmitted from the CAD system is not 'good' in the grid generator sense. The grid generator needs smooth closed solid geometry. It can take a week (or more) of interaction with the CAD output (sometimes by hand) before the process can begin. (3) One-way Communication -- All information travels on from one phase to the next. This makes procedures like node adaptation difficult when attempting to add or move nodes that sit on bounding surfaces (when the actual surface data has been lost after the grid generation phase).
Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, become prohibitive. There is also no way to easily deal with this system in a modular manner. One can only replace the grid generator, for example, if the software reads and writes the same files. Instead of the serial approach to analysis described above, CAPRI takes a geometry-centric approach. This makes the actual geometry (not a discretized version) accessible to all phases of the analysis. The connection to the geometry is made through an Application Programming Interface (API) and NOT a file system. This API isolates the top-level applications (grid generators, solvers and visualization components) from the geometry engine. It also allows the replacement of one geometry kernel with another without affecting these top-level applications. For example, if UniGraphics is used as the CAD package, then Parasolid (UG's own geometry engine) can be used for all geometric queries, so that no solid geometry information is lost in translation. This is much better than STEP because when the data is queried, the same software is executed as is used in the CAD system. Therefore, one analyzes the exact part that is in the CAD system. CAPRI uses the same idea as the commercial structural analysis codes but does not specify control. Software components of the CAD system are used, but the analysis suite, not the CAD operator, controls the software session. This also means that license issues may be minimized and individuals need not know how to operate a CAD system in order to run the suite.
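As a rough illustration of the geometry-centric idea, the sketch below shows a grid generator querying a solid through a programmatic interface rather than re-reading a translated file. The class and function names are hypothetical stand-ins, not the actual CAPRI API.

# Hypothetical sketch (names are illustrative, not the real CAPRI bindings):
# the grid generator asks the geometry kernel questions via an API instead of
# parsing an IGES/STEP file that may have lost the solid representation.

class GeometryKernel:
    """Stand-in for a CAD geometry engine exposed through an API."""

    def __init__(self, part_name):
        self.part_name = part_name  # the exact part held in the CAD system

    def bounding_faces(self):
        # A real kernel would return the topological faces of the solid.
        return ["face-1", "face-2"]

    def point_on_face(self, face, u, v):
        # Evaluate the true surface at parametric (u, v); placeholder here.
        return (u, v, 0.0)

def generate_surface_grid(kernel, n_u=3, n_v=3):
    """Build surface grid points by querying the geometry engine directly."""
    grid = []
    for face in kernel.bounding_faces():
        for i in range(n_u):
            for j in range(n_v):
                grid.append(kernel.point_on_face(face, i / (n_u - 1), j / (n_v - 1)))
    return grid

if __name__ == "__main__":
    kernel = GeometryKernel("wing-solid")
    print(len(generate_surface_grid(kernel)), "surface points generated via the API")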
System Administrator for LCS Development Sets
NASA Technical Reports Server (NTRS)
Garcia, Aaron
2013-01-01
The Spaceport Command and Control System Project is creating a Checkout and Control System that will eventually launch the next generation of vehicles from Kennedy Space Center. KSC has a large set of Development and Operational equipment already deployed in several facilities, including the Launch Control Center, which requires support. The position of System Administrator will complete tasks across multiple platforms (Linux/Windows), many of them virtual. The Hardware Branch of the Control and Data Systems Division at the Kennedy Space Center uses system administrators for a variety of tasks. The position of system administrator comes with many responsibilities: maintaining computer systems, repairing or setting up hardware, installing software, creating backups, and recovering drive images are a sample of the jobs one must complete. Other duties may include working with clients in person or over the phone and resolving their computer system needs. Training is a major part of learning how an organization functions and operates, and NASA is no exception. Training on how to better protect the NASA computer infrastructure will be a topic to learn, followed by NASA work policies. Attending meetings and discussing progress will be expected. A system administrator will have an account with root access. Root access gives a user full access to a computer system and/or network. System admins can remove critical system files and recover files using a tape backup. Problem solving will be an important skill to develop in order to complete the many tasks.
Broadband Fan Noise Prediction System for Turbofan Engines. Volume 3; Validation and Test Cases
NASA Technical Reports Server (NTRS)
Morin, Bruce L.
2010-01-01
Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the third volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User s Manual and Developer s Guide; Volume 2: BFaNS User s Manual and Developer s Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by validation studies that were done on three fan rigs. It concludes with recommended improvements and additional studies for BFaNS.
Security on Cloud Revocation Authority using Identity Based Encryption
NASA Astrophysics Data System (ADS)
Rajaprabha, M. N.
2017-11-01
In the era of cloud computing, most people save their documents, files and other data in cloud spaces, so security over the cloud is important because all of this confidential material resides there. To overcome public key infrastructure (PKI) issues, some revocable Identity Based Encryption (IBE) techniques have been introduced that eliminate the demand for PKI. The technique previously introduced, the key-update cloud service provider, has two problems: its computation and communication costs are high, and it does not scale well. To overcome these problems we propose a system in which a Cloud Revocation Authority (CRA) provides the security and holds only the secret key for each user. The secret key is sent using Advanced Encryption Standard (AES) security: the key is encrypted and sent to the CRA, which authenticates the person who wants to share data or files or to communicate. Only through that key can another user access the file; if a user applies an invalid key to a particular file, the information about that user and the file is sent to the administrator, who has the right to blacklist that person from using the system services.
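The abstract says only that the per-user secret key is protected with AES before being handed to the CRA. The fragment below is a minimal sketch of that wrapping step using the third-party cryptography package (Fernet, an AES-based scheme); the function names and transport are assumptions for illustration, not the paper's protocol.

# Minimal sketch: protect a user's IBE secret key with AES-based symmetric
# encryption before handing it to the Cloud Revocation Authority (CRA).
# Requires the third-party "cryptography" package; names are illustrative.
from cryptography.fernet import Fernet

def wrap_secret_key(user_secret_key: bytes):
    """Encrypt the user's secret key; return (wrapping_key, ciphertext)."""
    wrapping_key = Fernet.generate_key()        # AES-128-CBC + HMAC under the hood
    ciphertext = Fernet(wrapping_key).encrypt(user_secret_key)
    return wrapping_key, ciphertext

def unwrap_secret_key(wrapping_key: bytes, ciphertext: bytes) -> bytes:
    """What the CRA (or the authorized user) would do to recover the key."""
    return Fernet(wrapping_key).decrypt(ciphertext)

if __name__ == "__main__":
    k, ct = wrap_secret_key(b"example-ibe-secret-key")
    assert unwrap_secret_key(k, ct) == b"example-ibe-secret-key"
    print("secret key wrapped and recovered")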
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, Charlie E M
2010-01-01
Xgrid, with a capital X, is the name for Apple's grid computing system. With a lower case x, xgrid is the name of the command line utility that clients can use, among other ways, to submit jobs to a controller. An Xgrid divides into three logical components: Agent, Controller and Client. Client computers submit jobs (a set of tasks) they want run to a Controller computer. The Controller queues the Client jobs and distributes tasks to Agent computers. Agent computers run the tasks and report their output and status back to the controller where it is stored until deleted by the Client. The Clients can asynchronously query the controller about the status of a job and the results. Any OSX computer can be any of these. A single mac can be more than one: it's possible to be Agent, Controller and Client at the same time. There is one Controller per Grid. Clients can submit jobs to Controllers of different grids. Agents can work for more than one grid. Xgrid's setup has a pleasantly small palette of choices. The first two decisions to make are the kind of authentication & authorization to use and whether a shared file system is needed. A shared file system that all the agents can access can be very beneficial for many computing problems, but it is not appropriate for every network.
PCACE-Personal-Computer-Aided Cabling Engineering
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1987-01-01
PCACE computer program developed to provide inexpensive, interactive system for learning and using engineering approach to interconnection systems. Basically database system that stores information as files of individual connectors and handles wiring information in circuit groups stored as records. Directly emulates typical manual engineering methods of handling data, thus making interface between user and program very natural. Apple version written in P-Code Pascal and IBM PC version of PCACE written in TURBO Pascal 3.0
Lossef, S V; Schwartz, L H
1990-09-01
A computerized reference system for radiology journal articles was developed by using an IBM-compatible personal computer with a hand-held optical scanner and optical character recognition software. This allows direct entry of scanned text from printed material into word processing or data-base files. Additionally, line diagrams and photographs of radiographs can be incorporated into these files. A text search and retrieval software program enables rapid searching for keywords in scanned documents. The hand scanner and software programs are commercially available, relatively inexpensive, and easily used. This permits construction of a personalized radiology literature file of readily accessible text and images requiring minimal typing or keystroke entry.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-08
... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Miracor Diagnostics, Inc., Monaco Finance, Inc., MPEL Holdings Corp. (f/k/a Computer Transceiver Systems, Inc.), MR3 Systems, Inc., Mutual Risk Management, Ltd.; Order of Suspension of Trading June 4, 2010. It appears to the Securities and Exchange Commission that there is a lack of current and...
Are Computer Science Students Ready for the Real World.
ERIC Educational Resources Information Center
Elliot, Noreen
The typical undergraduate program in computer science includes an introduction to hardware and operating systems, file processing and database organization, data communication and networking, and programming. However, many graduates may lack the ability to integrate the concepts "learned" into a skill set and pattern of approaching problems that…
BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.
1981-06-01
This report is a condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given.
Availability of software services for a hospital information system.
Sakamoto, N
1998-03-01
Hospital information systems (HISs) are becoming more important and covering more parts of daily hospital operations as order-entry systems become popular and electronic charts are introduced. Thus, HISs today need to be able to provide necessary services for hospital operations 24 h a day, 365 days a year. The provision of services discussed here does not simply mean the availability of computers, in which all that matters is that the computer is functioning. It means the provision of necessary information for hospital operations by the computer software, and we will call it the availability of software services. HISs these days are mostly client-server systems. To increase availability of software services in these systems, it is not enough to just use system structures that are highly reliable in existing host-centred systems. The four main components which support availability of software services are network systems, client computers, server computers, and application software. In this paper, we suggest how to structure these four components to provide the minimum requested software services even if a part of the system stops functioning. The network system should be double-protected in strata using Asynchronous Transfer Mode (ATM) as its base network. Client computers should be fat clients with as much application logic as possible, and reference information which does not require frequent updates (master files, for example) should be replicated in clients. It would be best if all server computers could be double-protected. However, if that is physically impossible, one database file should be made accessible by several server computers. Still, at least the basic patients' information and the latest clinical records should be double-protected physically. Application software should be tested carefully before introduction. Different versions of the application software should always be kept and managed in case the new version has problems. If a hospital information system is designed and developed with these points in mind, its availability of software services should increase greatly.
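As a small illustration of the fat-client recommendation above, the sketch below reads a replicated master file locally and falls back to a server copy only when the local replica is missing, so lookups keep working during a server outage. The file path, format and fetch function are hypothetical, not taken from the paper.

# Illustrative sketch (not from the paper): a fat client consults its local
# replica of a rarely updated master file and only contacts a server if the
# replica is unavailable.
import json
import os

LOCAL_REPLICA = "master_drugs.json"   # hypothetical replicated master file

def fetch_from_server():
    # Placeholder for a network call to one of several redundant servers.
    return {"0001": "aspirin", "0002": "amoxicillin"}

def load_master_file():
    if os.path.exists(LOCAL_REPLICA):
        with open(LOCAL_REPLICA) as f:
            return json.load(f)
    data = fetch_from_server()
    with open(LOCAL_REPLICA, "w") as f:   # refresh the local replica
        json.dump(data, f)
    return data

if __name__ == "__main__":
    master = load_master_file()
    print("drug 0001 ->", master["0001"])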
ArrayBridge: Interweaving declarative array processing with high-performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros
Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.
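ArrayBridge itself is built into SciDB, but the kind of imperative, file-centric HDF5 access it interoperates with can be sketched with the h5py library. The file and dataset names below are invented for illustration.

# Sketch of imperative HDF5 array access of the sort ArrayBridge bridges to
# declarative SciDB queries. Uses the h5py and numpy libraries; file and
# dataset names are made up.
import h5py
import numpy as np

# Write a small 2-D array to an HDF5 file, as an HPC kernel might.
with h5py.File("simulation.h5", "w") as f:
    f.create_dataset("temperature", data=np.random.rand(100, 100))

# Read a sub-array back without loading the whole dataset into memory.
with h5py.File("simulation.h5", "r") as f:
    block = f["temperature"][10:20, 10:20]   # lazy, sliced read
    print("mean of block:", block.mean())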
A system for the input and storage of data in the Besm-6 digital computer
NASA Technical Reports Server (NTRS)
Schmidt, K.; Blenke, L.
1975-01-01
Computer programs used for the decoding and storage of large volumes of data on the BESM-6 computer are described. The following factors are discussed: the programming control language allows the programs to be run as part of a modular programming system used in data processing; data control is executed in a hierarchically built file on magnetic tape with sequential index storage; and the programs are not dependent on the structure of the data.
Recognition of Computer Viruses by Detecting Their Gene of Self Replication
2006-03-01
… Detection Approach … 1.4.1 The syntactic analysis … Therefore a group of instructions acting together in the right order has to be identified for the gene of self-replication to be obvious in a … its first system call NtCreateFile, while the outputs of NtWriteFile become its output arguments. These four blocks form the final structure - the Gene.
11 CFR 104.18 - Electronic filing of reports (2 U.S.C. 432(d) and 434(a)(11)).
Code of Federal Regulations, 2012 CFR
2012-01-01
... specifications and can be read by the Commission's computer system. Each report submitted in an electronic format... 11 Federal Elections 1 2012-01-01 2012-01-01 false Electronic filing of reports (2 U.S.C. 432(d) and 434(a)(11)). 104.18 Section 104.18 Federal Elections FEDERAL ELECTION COMMISSION GENERAL REPORTS...
11 CFR 104.18 - Electronic filing of reports (2 U.S.C. 432(d) and 434(a)(11)).
Code of Federal Regulations, 2011 CFR
2011-01-01
... specifications and can be read by the Commission's computer system. Each report submitted in an electronic format... 11 Federal Elections 1 2011-01-01 2011-01-01 false Electronic filing of reports (2 U.S.C. 432(d) and 434(a)(11)). 104.18 Section 104.18 Federal Elections FEDERAL ELECTION COMMISSION GENERAL REPORTS...
11 CFR 104.18 - Electronic filing of reports (2 U.S.C. 432(d) and 434(a)(11)).
Code of Federal Regulations, 2013 CFR
2013-01-01
... specifications and can be read by the Commission's computer system. Each report submitted in an electronic format... 11 Federal Elections 1 2013-01-01 2012-01-01 true Electronic filing of reports (2 U.S.C. 432(d) and 434(a)(11)). 104.18 Section 104.18 Federal Elections FEDERAL ELECTION COMMISSION GENERAL REPORTS...
11 CFR 104.18 - Electronic filing of reports (2 U.S.C. 432(d) and 434(a)(11)).
Code of Federal Regulations, 2014 CFR
2014-01-01
... specifications and can be read by the Commission's computer system. Each report submitted in an electronic format... 11 Federal Elections 1 2014-01-01 2014-01-01 false Electronic filing of reports (2 U.S.C. 432(d) and 434(a)(11)). 104.18 Section 104.18 Federal Elections FEDERAL ELECTION COMMISSION GENERAL REPORTS...
SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions
Poeter, Eileen P.; Hill, Mary C.
2008-01-01
This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space or tab delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently perform numerical calculations.
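The four numbered functions above amount to a join-and-fill procedure. The sketch below restates that logic in Python under assumed file layouts and an assumed missing-value marker; SIM_ADJUST itself is Fortran90 and its actual formats may differ.

# Illustrative re-statement of the SIM_ADJUST idea (not the USGS code):
# fill in simulated values that a process model omitted or set to a default,
# using user-supplied alternative names or constants. Formats are assumed.
DEFAULT_FLAG = -999.0   # assumed marker for a missing/default simulated value

def read_expected(path):
    """Each line: observation name followed by zero or more alternatives."""
    expected = {}
    with open(path) as f:
        for line in f:
            name, *alternatives = line.split()
            expected[name] = alternatives
    return expected

def read_simulated(path):
    """Each line: simulated value and observation name (space/tab delimited)."""
    simulated = {}
    with open(path) as f:
        for line in f:
            value, name = line.split()
            simulated[name] = float(value)
    return simulated

def adjust(expected, simulated):
    """Fill omitted or default values from the user-supplied alternatives."""
    adjusted = {}
    for name, alternatives in expected.items():
        value = simulated.get(name, DEFAULT_FLAG)
        for alt in alternatives:
            if value != DEFAULT_FLAG:
                break
            if alt in simulated:            # alternative simulated value
                value = simulated[alt]
            else:
                try:                        # constant alternative
                    value = float(alt)
                except ValueError:
                    pass
        adjusted[name] = value
    return adjusted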
Workflow Management for Complex HEP Analyses
NASA Astrophysics Data System (ADS)
Erdmann, M.; Fischer, R.; Rieger, M.; von Cube, R. F.
2017-10-01
We present the novel Analysis Workflow Management (AWM) that provides users with the tools and competences of professional large scale workflow systems, e.g. Apache’s Airavata[1]. The approach presents a paradigm shift from executing parts of the analysis to defining the analysis. Within AWM an analysis consists of steps. For example, a step may define that a certain executable is run for multiple files of an input data collection. Each call to the executable for one of those input files can be submitted to the desired run location, which could be the local computer or a remote batch system. An integrated software manager enables automated user installation of dependencies in the working directory at the run location. Each execution of a step item creates one report for bookkeeping purposes containing error codes and output data or file references. Required files, e.g. created by previous steps, are retrieved automatically. Since data storage and run locations are exchangeable from the steps' perspective, computing resources can be used opportunistically. A visualization of the workflow as a graph of the steps in the web browser provides a high-level view of the analysis. The workflow system is developed and tested alongside a ttbb cross-section measurement where, for instance, the event selection is represented by one step and a Bayesian statistical inference is performed by another. The clear interface and dependencies between steps enable a make-like execution of the whole analysis.
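A toy illustration of the make-like, dependency-driven execution described above is sketched below (not the AWM code itself): steps run in topological order, and a step whose bookkeeping report already exists is skipped. Step names, commands and the report format are invented.

# Toy sketch of make-like workflow execution: each step declares dependencies,
# steps run in topological order, and a completed step leaves a "report" so a
# re-run skips it. Names are illustrative, not AWM's API.
import os
from graphlib import TopologicalSorter   # Python 3.9+

steps = {
    "selection": {"deps": [], "run": lambda: print("running event selection")},
    "inference": {"deps": ["selection"], "run": lambda: print("running Bayesian inference")},
}

def report_path(name):
    return f"{name}.report"

def execute(steps):
    order = TopologicalSorter({name: s["deps"] for name, s in steps.items()}).static_order()
    for name in order:
        if os.path.exists(report_path(name)):
            print(f"{name}: report found, skipping")
            continue
        steps[name]["run"]()
        with open(report_path(name), "w") as f:
            f.write("exit_code=0\n")       # minimal bookkeeping record

if __name__ == "__main__":
    execute(steps)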
Toward a standard reference database for computer-aided mammography
NASA Astrophysics Data System (ADS)
Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.
2008-03-01
Because of the lack of mammography databases with a large amount of codified images and identified characteristics like pathology, type of breast tissue, and abnormality, there is a problem for the development of robust systems for computer-aided diagnosis. Integrated to the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: The Mammographic Image Analysis Society Digital Mammogram Database (MIAS), The Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL), and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data system (BI-RADS). The import was done automatically using scripts for image download, file format conversion, file name, web page and information file browsing. Disregarding the resolution, this resulted in a total of 10,509 reference images, and 6,767 images are associated with an IRMA contour information feature file. In accordance to the respective license agreements, the database will be made freely available for research purposes, and may be used for image based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).
NASA Astrophysics Data System (ADS)
Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.
2012-04-01
The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on these systems, and developed as part of the Ultra-High Resolution Climate Modeling Project, allows users of OLCF resources to efficiently share simulated data, often multi-terabyte in volume, as well as the results from the modeling experiments and various synthesized products derived from these simulations. The final objective in the exercise is to ensure that the simulation results and the enhanced understanding will serve the needs of a diverse group of stakeholders across the world, including our research partners in U.S. Department of Energy laboratories & universities, domain scientists, students (K-12 as well as higher education), resource managers, decision makers, and the general public.
SNS programming environment user's guide
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.
1992-01-01
The computing environment is briefly described for the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex of NASA Langley. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software is described that is common to all of these computers, including: the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described is file management, validation, SNS configuration, documentation, and customer services.
Oak Ridge Leadership Computing Facility Position Paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oral, H Sarp; Hill, Jason J; Thach, Kevin G
This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in architecting and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally as these systems are architected, deployed, and expanded over time reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.
NEMAR plotting computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN coded computer program which generates CalComp plots of trajectory parameters is examined. The trajectory parameters are calculated and placed on a data file by the Near Earth Mission Analysis Routine computer program. The plot program accesses the data file and generates the plots as defined by inputs to the plot program. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included. Although this plot program utilizes a random access data file, a data file of the same type and formatted in 102 numbers per record could be generated by any computer program and used by this plot program.
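The abstract notes that any program can feed the plotter as long as its data file is organized as 102 numbers per record. The sketch below writes and reads such fixed-length records with numpy; the single-precision format and file name are assumptions, since the original record layout is not specified here.

# Sketch of a fixed-record data file of the kind described above: every record
# holds exactly 102 numbers. float32 precision and the file name are assumed.
import numpy as np

RECORD_LEN = 102

def write_records(path, records):
    arr = np.asarray(records, dtype=np.float32)
    assert arr.shape[1] == RECORD_LEN, "each record must hold 102 numbers"
    arr.tofile(path)

def read_record(path, index):
    """Random access: read only record number `index` (0-based)."""
    with open(path, "rb") as f:
        f.seek(index * RECORD_LEN * 4)          # 4 bytes per float32
        return np.fromfile(f, dtype=np.float32, count=RECORD_LEN)

if __name__ == "__main__":
    write_records("trajectory.dat", np.zeros((10, RECORD_LEN)))
    print(read_record("trajectory.dat", 3).shape)   # (102,)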
The basic health care system for the lunar base crew
NASA Astrophysics Data System (ADS)
Terai, Minoru; Nitta, Keiji
A plan of the health care system for the crew on the lunar base is described in this study. The health care system consists of two subsystems. The first is the daily health care system. The system contains health care menus, similar to those on Earth, and some biochemical and ordinary medical examinations. The second system is a periodic medical inspection for the crew's bones and the determination of natural radioisotopes in the body. These care systems are automatically treated with the examination and data filing. Usually these examinations are carried out without the presence of a medical doctor. Examinations and files of the whole results are controlled by a computer. The daily results of examinations are compared with data in the file. If any abnormal values are found in the results, an appropriate message is sent advising whether he must receive an in-depth examination by a medical doctor, or be reexamined by the same submenu. The automatic health care system also records transactions with the life support monitoring system.
High performance network and channel-based storage
NASA Technical Reports Server (NTRS)
Katz, Randy H.
1991-01-01
In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.
Bayram, H Melike; Bayram, Emre; Ocak, Mert; Uygun, Ahmet Demirhan; Celik, Hakan Hamdi
2017-07-01
The aim of the present study was to evaluate the frequency of dentinal microcracks observed after root canal preparation with ProTaper Universal (PTU; Dentsply Tulsa Dental Specialties, Tulsa, OK), ProTaper Gold (PTG; Dentsply Tulsa Dental Specialties), Self-Adjusting File (SAF; ReDent Nova, Ra'anana, Israel), and XP-endo Shaper (XP; FKG Dentaire, La Chaux-de-Fonds, Switzerland) instruments using micro-computed tomographic (CT) analysis. Forty extracted human mandibular premolars having single-canal and straight root were randomly assigned to 4 experimental groups (n = 10) according to the different nickel-titanium systems used for root canal preparation: PTU, PTG, SAF, and XP. In the SAF and XP groups, the canals were first prepared with a K-file until #25 at the working length, and then the SAF or XP files were used. The specimens were scanned using high-resolution micro-computed tomographic imaging before and after root canal preparation. Afterward, preoperative and postoperative cross-sectional images of the teeth were screened to identify the presence of dentinal defects. For each group, the number of microcracks was determined as a percentage rate. The McNemar test was used to determine significant differences before and after instrumentation. The level of significance was set at P ≤ .05. The PTU system significantly increased the percentage rate of microcracks compared with preoperative specimens (P < .05). No new dentinal microcracks were observed in the PTG, SAF, or XP groups. Root canal preparations with the PTG, SAF, and XP systems did not induce the formation of new dentinal microcracks on straight root canals of mandibular premolars. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
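The pre- versus post-instrumentation comparison above relies on a McNemar test for paired binary outcomes. A minimal sketch of such a test with the statsmodels package follows; the 2x2 counts are invented for illustration and are not the study's data.

# Minimal sketch of the paired McNemar test used to compare microcrack
# presence before and after instrumentation. Counts below are invented;
# requires the statsmodels package.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: crack absent/present before; columns: crack absent/present after.
table = [[30, 6],    # 6 specimens developed a new crack
         [0, 4]]     # 4 specimens already had a crack that persisted

result = mcnemar(table, exact=True)   # exact binomial test, suitable for small counts
print("McNemar p-value:", result.pvalue)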
On-Line Systems: Promise and Pitfalls
ERIC Educational Resources Information Center
Cuadra, Carlos A.
1971-01-01
The virtues of interactive systems are speed, intimacy, and - if time-sharing is involved - economy. The major problems are the cost of the large computers and files necessary for bibliographic data, the still-high cost of communications, and the generally poor design of the user-system interfaces. (Author)
Spacecraft crew procedures from paper to computers
NASA Technical Reports Server (NTRS)
Oneal, Michael; Manahan, Meera
1993-01-01
Large volumes of paper are launched with each Space Shuttle Mission that contain step-by-step instructions for various activities that are to be performed by the crew during the mission. These instructions include normal operational procedures and malfunction or contingency procedures and are collectively known as the Flight Data File (FDF). An example of nominal procedures would be those used in the deployment of a satellite from the Space Shuttle; a malfunction procedure would describe actions to be taken if a specific problem developed during the deployment. A new FDF and associated system is being created for Space Station Freedom. The system will be called the Space Station Flight Data File (SFDF). NASA has determined that the SFDF will be computer-based rather than paper-based. Various aspects of the SFDF are discussed.
Dhingra, Annil; Miglani, Anjali
2015-01-01
Background: Successful endodontic therapy depends on many factors; one of the most important steps in any root canal treatment is root canal preparation. In addition, respecting the original shape of the canal is of equal importance; otherwise, canal aberrations such as transportation will be created. Aim: The purpose of this study was to compare and evaluate the reciprocating WaveOne and Reciproc and the rotary OneShape single-file instrumentation systems with respect to cervical dentin thickness, cross-sectional area and canal transportation in first mandibular molars using cone-beam computed tomography. Materials and Methods: Sixty mandibular first molars extracted for periodontal reasons were collected from the Department of Oral and Maxillofacial. Teeth were prepared using one rotary and two reciprocating single-file systems and were divided into 3 groups of 20 teeth each. Pre-instrumentation and post-instrumentation scans were done and evaluated for three parameters: canal transportation, cervical dentinal thickness and cross-sectional area. Results were analysed statistically using ANOVA and post-hoc Tukey analysis. Results: The change in cross-sectional area after filing showed a significant difference at 0 mm, 1 mm, 2 mm and 7 mm (p<0.001, p=0.006, 0.004 and <0.001, respectively). There was a significant difference between WaveOne and OneShape, and between OneShape and Reciproc, at 0 mm, 1 mm, 2 mm and 7 mm (p-values for WaveOne and OneShape <0.001, 0.022, 0.011 and <0.001, respectively; for OneShape and Reciproc <0.001, p=0.011, p=0.008 and <0.001). On assessing the transportation of the three file systems over a distance of 7 mm (starting from 0 mm and then evaluating at 1 mm, 2 mm, 3 mm, 5 mm and 7 mm), the results showed a significant difference among the file systems at various lengths (p=0.014, 0.046, 0.004, 0.028, 0.005 and 0.029, respectively). The mean value of cervical dentinal removal was maximum at all levels for OneShape and minimum for WaveOne, showing the better quality of WaveOne and Reciproc over the OneShape file system. A significant difference was found at 9 mm, 11 mm and 12 mm between all three file systems (p<0.001, <0.001, <0.001). Conclusion: It was concluded that reciprocating motion is better than rotary motion for all three parameters: canal transportation, cross-sectional area and cervical dentinal thickness. PMID:26023639
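The statistical workflow above (one-way ANOVA followed by post-hoc Tukey comparisons across the three file systems) can be sketched as follows. The measurements are invented placeholders; the real analysis used the authors' own data.

# Sketch of the ANOVA + post-hoc Tukey workflow for comparing canal
# transportation across three file systems. Data values are invented;
# requires scipy and statsmodels.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

waveone = np.array([0.08, 0.10, 0.07, 0.09, 0.11])
reciproc = np.array([0.09, 0.12, 0.10, 0.08, 0.10])
oneshape = np.array([0.15, 0.18, 0.14, 0.17, 0.16])

print("one-way ANOVA:", f_oneway(waveone, reciproc, oneshape))

values = np.concatenate([waveone, reciproc, oneshape])
groups = ["WaveOne"] * 5 + ["Reciproc"] * 5 + ["OneShape"] * 5
print(pairwise_tukeyhsd(values, groups))   # pairwise group comparisons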
Computers in medicine. A simple system for references and reprints.
Henderson, A S; Bosly-Craft, R
1983-01-01
Because a personal index system for published reports is so useful in clinical research, it is well worthwhile setting this up on cards. If the collection is likely to exceed about 200 entries, there are now distinct advantages in also having it on a computer file. Using the simple system described here, one can then carry out very rapid searches within one's collection and can prepare error-free reference lists for publications. PMID:6416452
Optimizing Targeting of Intrusion Detection Systems in Social Networks
NASA Astrophysics Data System (ADS)
Puzis, Rami; Tubi, Meytal; Elovici, Yuval
Internet users communicate with each other in various ways: by Emails, instant messaging, social networking, accessing Web sites, etc. In the course of communicating, users may unintentionally copy files contaminated with computer viruses and worms [1, 2] to their computers and spread them to other users [3]. (Hereafter we will use the term "threats", rather than computer viruses and computer worms). The Internet is the chief source of these threats [4].
NASA Astrophysics Data System (ADS)
McFall, Steve
1994-03-01
With the increase in business automation and the widespread availability and low cost of computer systems, law enforcement agencies have seen a corresponding increase in criminal acts involving computers. The examination of computer evidence is a new field of forensic science with numerous opportunities for research and development. Research is needed to develop new software utilities to examine computer storage media, expert systems capable of finding criminal activity in large amounts of data, and to find methods of recovering data from chemically and physically damaged computer storage media. In addition, defeating encryption and password protection of computer files is also a topic requiring more research and development.
Collectively loading programs in a multiple program multiple data environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.
Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, nodes which are to load the program. The nodes determine, using collective operations, a total number of programs to load and a number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader which accesses a file system to load the program associated with a class route and broadcasts the program via the class route to other nodes which require the program.
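A rough analogue of the load-leader idea can be expressed with mpi4py rather than the class-route machinery described above: the communicator is split by program, rank 0 of each sub-communicator reads the program image from the file system, and it broadcasts the bytes to the other nodes in its group. The even/odd program assignment and the file names are assumptions.

# Rough analogue (using mpi4py, not the actual class-route mechanism):
# split ranks by the program they must load, let one rank per group read
# the file system, and broadcast the bytes within the group.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Assumed assignment: even ranks load program A, odd ranks program B.
program = "prog_a.bin" if rank % 2 == 0 else "prog_b.bin"
group = comm.Split(color=rank % 2, key=rank)

if group.Get_rank() == 0:                 # the "load leader" for this program
    with open(program, "rb") as f:
        image = f.read()
else:
    image = None

image = group.bcast(image, root=0)        # everyone in the group gets the bytes
print(f"rank {rank} received {len(image)} bytes of {program}")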
cadcVOFS: A FUSE Based File System Layer for VOSpace
NASA Astrophysics Data System (ADS)
Kavelaars, J.; Dowler, P.; Jenkins, D.; Hill, N.; Damian, A.
2012-09-01
The CADC is now making extensive use of the VOSpace protocol for user managed storage. The VOSpace standard allows a diverse set of rich data services to be delivered to users via a simple protocol. We have recently developed the cadcVOFS, a FUSE based file-system layer for VOSpace. cadcVOFS provides a filesystem layer on-top of VOSpace so that standard Unix tools (such as ‘find’, ‘emacs’, ‘awk’ etc) can be used directly on the data objects stored in VOSpace. Once mounted the VOSpace appears as a network storage volume inside the operating system. Within the CADC Cloud Computing project (CANFAR) we have used VOSpace as the method for retrieving and storing processing inputs and products. The abstraction of storage is an important component of Cloud Computing and the high use level of our VOSpace service reflects this.
Development of an ADP Training Program to Serve the EPA Data Processing Community.
1976-07-29
… divide, compute, perform and alter statements; data representation and conversion; table processing; and indexed sequential and random access file … processing. The course workshop will include the testing of coded exercises and problems on a computer system. CLASS SIZE: Individualized. METHODS/CONDUCT: … familiarization with computer concepts will be helpful. OBJECTIVES OF CURRICULUM: After completing this course, the student should have developed a working …
Measuring a year of child pornography trafficking by U.S. computers on a peer-to-peer network.
Wolak, Janis; Liberatore, Marc; Levine, Brian Neil
2014-02-01
We used data gathered via investigative "RoundUp" software to measure a year of online child pornography (CP) trafficking activity by U.S. computers on the Gnutella peer-to-peer network. The data include millions of observations of Internet Protocol addresses sharing known CP files, identified as such in previous law enforcement investigations. We found that 244,920 U.S. computers shared 120,418 unique known CP files on Gnutella during the study year. More than 80% of these computers shared fewer than 10 such files during the study year or shared files for fewer than 10 days. However, less than 1% of computers (n=915) made high annual contributions to the number of known CP files available on the network (100 or more files). If law enforcement arrested the operators of these high-contribution computers and took their files offline, the number of distinct known CP files available in the P2P network could be reduced by as much as 30%. Our findings indicate widespread low level CP trafficking by U.S. computers in one peer-to-peer network, while a small percentage of computers made high contributions to the problem. However, our measures were not comprehensive and should be considered lower bounds estimates. Nonetheless, our findings show that data can be systematically gathered and analyzed to develop an empirical grasp of the scope and characteristics of CP trafficking on peer-to-peer networks. Such measurements can be used to combat the problem. Further, investigative software tools can be used strategically to help law enforcement prioritize investigations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data
NASA Technical Reports Server (NTRS)
Schairer, Edward T.
2001-01-01
'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
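Radiometric PSP data are commonly reduced with a Stern-Volmer-type relation, I_ref/I = A + B (p/p_ref); an in situ calibration fits A and B against pressure-tap readings. The sketch below is a generic illustration of that fit, not Legato's implementation, and the tap values are invented.

# Generic sketch of in situ PSP calibration with a Stern-Volmer-type relation
#   I_ref / I = A + B * (p / p_ref)
# A and B are fit by least squares against pressure-tap data. Values invented.
import numpy as np

p_ref = 101325.0                          # reference pressure, Pa
p_taps = np.array([80000.0, 95000.0, 101325.0, 110000.0])
intensity_ratio = np.array([0.85, 0.96, 1.00, 1.06])   # I_ref / I at the taps

# Least-squares fit of A and B.
A_mat = np.vstack([np.ones_like(p_taps), p_taps / p_ref]).T
(A, B), *_ = np.linalg.lstsq(A_mat, intensity_ratio, rcond=None)

def pressure_from_ratio(ratio):
    """Invert the calibration to map an image intensity ratio to pressure."""
    return p_ref * (ratio - A) / B

print("A =", round(A, 3), "B =", round(B, 3))
print("p at ratio 1.02:", round(pressure_from_ratio(1.02), 1), "Pa")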
Caruso, Ronald D
2004-01-01
Proper configuration of software security settings and proper file management are necessary and important elements of safe computer use. Unfortunately, the configuration of software security options is often not user friendly. Safe file management requires the use of several utilities, most of which are already installed on the computer or available as freeware. Among these file operations are setting passwords, defragmentation, deletion, wiping, removal of personal information, and encryption. For example, Digital Imaging and Communications in Medicine medical images need to be anonymized, or "scrubbed," to remove patient identifying information in the header section prior to their use in a public educational or research environment. The choices made with respect to computer security may affect the convenience of the computing process. Ultimately, the degree of inconvenience accepted will depend on the sensitivity of the files and communications to be protected and the tolerance of the user. Copyright RSNA, 2004
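For the DICOM "scrubbing" step mentioned above, a minimal sketch using the pydicom library is shown below. The list of tags cleared is illustrative only and far from a complete de-identification profile.

# Minimal sketch of removing patient-identifying header fields from a DICOM
# file with pydicom before using the image for teaching or research.
import pydicom

def scrub(in_path, out_path):
    ds = pydicom.dcmread(in_path)
    for tag in ("PatientName", "PatientID", "PatientBirthDate"):
        if tag in ds:
            setattr(ds, tag, "")          # blank out identifying fields
    ds.remove_private_tags()              # drop vendor-specific private tags
    ds.save_as(out_path)

if __name__ == "__main__":
    scrub("case001.dcm", "case001_anon.dcm")   # hypothetical file names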
MarFS, a Near-POSIX Interface to Cloud Objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inman, Jeffrey Thornton; Vining, William Flynn; Ransom, Garrett Wilson
The engineering forces driving development of “cloud” storage have produced resilient, cost-effective storage systems that can scale to 100s of petabytes, with good parallel access and bandwidth. These features would make a good match for the vast storage needs of High-Performance Computing datacenters, but cloud storage gains some of its capability from its use of HTTP-style Representational State Transfer (REST) semantics, whereas most large datacenters have legacy applications that rely on POSIX file-system semantics. MarFS is an open-source project at Los Alamos National Laboratory that allows us to present cloud-style object-storage as a scalable near-POSIX file system. We have also developed a new storage architecture to improve bandwidth and scalability beyond what’s available in commodity object stores, while retaining their resilience and economy. Additionally, we present a scheme for scaling the POSIX interface to allow billions of files in a single directory and trillions of files in total.
A Database Training Module for Nassau Community College Staff and Faculty.
ERIC Educational Resources Information Center
Goodman, Harriett Ziskin
A training module developed following the Instructional System Design model was implemented at Nassau Community College (NCC) to teach its administration, faculty, and staff members computer skills that would enable them to use the available computer equipment more efficiently. Using this module, each trainee designed a file to be used for the…
A Low Cost Microcomputer Laboratory for Investigating Computer Architecture.
ERIC Educational Resources Information Center
Mitchell, Eugene E., Ed.
1980-01-01
Described is a microcomputer laboratory at the United States Military Academy at West Point, New York, which provides easy access to non-volatile memory and a single input/output file system for 16 microcomputer laboratory positions. A microcomputer network that has a centralized data base is implemented using the concepts of computer network…
Katzman, G L
2001-03-01
The goal of the project was to create a method by which an in-house digital teaching file could be constructed that was simple, inexpensive, independent of hypertext markup language (HTML) restrictions, and appears identical on multiple platforms. To accomplish this, Microsoft PowerPoint and Adobe Acrobat were used in succession to assemble digital teaching files in the Acrobat portable document file format. They were then verified to appear identically on computers running Windows, Macintosh Operating Systems (OS), and the Silicon Graphics Unix-based OS as either a free-standing file using Acrobat Reader software or from within a browser window using the Acrobat browser plug-in. This latter display method yields a file viewed through a browser window, yet remains independent of underlying HTML restrictions, which may confer an advantage over simple HTML teaching file construction. Thus, a hybrid of HTML-distributed Adobe Acrobat generated WWW documents may be a viable alternative for digital teaching file construction and distribution.
NASA Technical Reports Server (NTRS)
Reardon, John E.; Violett, Duane L., Jr.
1991-01-01
The AFAS Database System was developed to provide the basic structure of a comprehensive database system for the Marshall Space Flight Center (MSFC) Structures and Dynamics Laboratory Aerophysics Division. The system is intended to handle all of the Aerophysics Division Test Facilities as well as data from other sources. The system was written for the DEC VAX family of computers in FORTRAN-77 and utilizes the VMS indexed file system and screen management routines. Various aspects of the system are covered, including a description of the user interface, lists of all code structure elements, descriptions of the file structures, a description of the security system operation, a detailed description of the data retrieval tasks, a description of the session log, and a description of the archival system.
ERIC Educational Resources Information Center
Cunningham, Jay L.; And Others
This report presents the results of the initial phase of the File Organization Project, a study which focuses upon the on-line maintenance and search of the library's catalog holdings record. The focus of the project is to develop a facility for research and experimentation with the many issues of on-line file organizations and search. The first…
Klett, T.R.; Le, P.A.
2006-01-01
This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD-ROM. The data may be imported into computers and software without the reader having to transcribe it from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).
The 28-entity IGES test file results using ComputerVision CADDS 4X
NASA Technical Reports Server (NTRS)
Kuan, Anchyi; Shah, Saurin; Smith, Kevin
1987-01-01
The investigation was based on the following steps: (1) Read the 28 Entity IGES (Initial Graphics Exchange Specification) Test File into the CAD data base with the IGES post-processor; (2) Make the modifications to the displayed geometries, which should produce the normalized front view and the drawing entity defined display; (3) Produce the drawing entity defined display of the file as it appears in the CAD system after modification to the geometry; (4) Translate the file back to IGES format using IGES pre-processor; (5) Read the IGES file produced by the pre-processor back into the CAD data base; (6) Produce another drawing entity defined display of the CAD display; and (7) Compare the plots resulting from steps 3 and 6 - they should be identical to each other.
Computer Security Products Technology Overview
1988-10-01
… 3. DATABASE MANAGEMENT SYSTEMS … Definition … this paper addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, guards and gateways, and … provide a portion of that protection, for example, a password scheme, a file protection mechanism, a secure database management system, or even a …
An Intelligent Tutor for Intrusion Detection on Computer Systems.
ERIC Educational Resources Information Center
Rowe, Neil C.; Schiavo, Sandra
1998-01-01
Describes an intelligent tutor incorporating a program using artificial-intelligence planning methods to generate realistic audit files reporting actions of simulated users and intruders of a UNIX system, and a program simulating the system afterwards that asks students to inspect the audit and fix problems. Experiments show that students using…
75 FR 26847 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
..., encrypted VHA servers, personal computers, laptops, or media. All e-mail transmissions of such files use... requested in connection with appeals, special studies of the civil service and other merit systems, reviews... connection with appeals, special studies of the civil service and other merit systems, reviews of rules and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This manual is a guide to using the file protection mechanisms available on the Martin Marietta Energy Systems, Inc. Scientific and Technical Computing (STC) System VAXes. User identification codes (UICs) and general identifiers are discussed as a basis for understanding UIC-based and access control list (ACL) protection. 5 figs.
LUMIS: A Land Use Management Information System for urban planning
NASA Technical Reports Server (NTRS)
Paul, C. K.
1975-01-01
The Land Use Management Information System (LUMIS) consists of a methodology of compiling land use maps by means of air photo interpretation techniques, digitizing these and other maps into machine-readable form, and numerically overlaying these various maps in two computer software routines to provide land use and natural resource data files referenced to the individual census block. The two computer routines are the Polygon Intersection Overlay System (PIOS) and an interactive graphics APL program. A block referenced file of land use, natural resources, geology, elevation, slope, and fault-line items has been created and supplied to the Los Angeles Department of City Planning for the City's portion of the Santa Monica Mountains. In addition, the interactive system contains one hundred and seventy-three socio-economic data items created by merging the Third Count U.S. Census Bureau tapes and the Los Angeles County Secured Assessor File. This data can be graphically displayed for each and every block, block group, or tract for six test tracts in Woodland Hills, California. Other benefits of LUMIS are the knowledge of air photo availability, flight pattern coverage and frequencies, and private photogrammetry companies flying Southern California, as well as a formal Delphi study of relevant land use informational needs in the Santa Monicas.
Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud
NASA Astrophysics Data System (ADS)
Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.
2014-12-01
The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and even run models to better understand the impacts of a rapidly changing climate for areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data intensive, science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.
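The data-proximal MapReduce processing mentioned above can be illustrated, in much simplified form, by a Hadoop-Streaming-style mapper and reducer written in Python. The record format (a year followed by a temperature anomaly) is an invented example, not an ABoVE data product.

# Simplified Hadoop-Streaming-style mapper/reducer pair, illustrating the
# data-proximal MapReduce processing mentioned above. Input records of the
# form "<year> <anomaly>" are an invented example format.
import sys
from collections import defaultdict

def mapper(lines):
    for line in lines:
        year, anomaly = line.split()
        yield year, float(anomaly)

def reducer(pairs):
    totals, counts = defaultdict(float), defaultdict(int)
    for year, anomaly in pairs:
        totals[year] += anomaly
        counts[year] += 1
    return {year: totals[year] / counts[year] for year in totals}

if __name__ == "__main__":
    # Locally this runs in one process; under Hadoop Streaming the mapper and
    # reducer would run as separate stages next to the data on HDFS.
    print(reducer(mapper(sys.stdin)))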
ERIC Educational Resources Information Center
Lee, Young-Jin
2015-01-01
This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem-solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor were analyzed to build a logistic regression model that predicted success and failure of students'…
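The abstract does not give the model details; a minimal sketch, assuming hypothetical log-derived features such as hint counts and time on task (stand-ins, not the study's actual predictors), of fitting such a logistic regression with scikit-learn could look like this:

```python
# Minimal sketch: predicting problem-solving success from tutor log features.
# Feature names (hints_requested, time_on_task, prior_correct) are hypothetical
# stand-ins; the Andes study's actual predictors are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
# Synthetic example data in place of real Andes log records.
X = np.column_stack([
    rng.poisson(2, n),          # hints_requested
    rng.exponential(120, n),    # time_on_task (seconds)
    rng.uniform(0, 1, n),       # prior_correct fraction
])
y = (X[:, 2] + rng.normal(0, 0.2, n) > 0.5).astype(int)  # solved / not solved

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```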
Free Oscilloscope Web App Using a Computer Mic, Built-In Sound Library, or Your Own Files
ERIC Educational Resources Information Center
Ball, Edward; Ruiz, Frances; Ruiz, Michael J.
2017-01-01
We have developed an online oscilloscope program which allows users to see waveforms by utilizing their computer microphones, selecting from our library of over 30 audio files, and opening any *.mp3 or *.wav file on their computers. The oscilloscope displays real-time signals against time. The oscilloscope has been calibrated so one can make…
Accounting utility for determining individual usage of production level software systems
NASA Technical Reports Server (NTRS)
Garber, S. C.
1984-01-01
An accounting package was developed which determines the computer resources utilized by a user during the execution of a particular program and updates a file containing accumulated resource totals. The accounting package is divided into two separate programs. The first program determines the total amount of computer resources utilized by a user during the execution of a particular program. The second program uses these totals to update a file containing accumulated totals of computer resources utilized by a user for a particular program. This package is useful to those persons who have several other users continually accessing and running programs from their accounts. The package provides the ability to determine which users are accessing and running specified programs along with their total level of usage.
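The abstract describes two steps: measuring per-run resource usage and folding it into a file of accumulated totals. A minimal sketch of the second step, assuming a hypothetical JSON totals file keyed by user and program (the original package's file format is not stated), could be:

```python
# Minimal sketch of the accumulation step: merge one run's resource usage
# into a file of per-user, per-program totals. The file layout is hypothetical.
import json
from pathlib import Path

TOTALS_FILE = Path("usage_totals.json")  # hypothetical accumulated-totals file

def record_run(user: str, program: str, cpu_seconds: float, io_ops: int) -> None:
    totals = json.loads(TOTALS_FILE.read_text()) if TOTALS_FILE.exists() else {}
    key = f"{user}:{program}"
    entry = totals.setdefault(key, {"runs": 0, "cpu_seconds": 0.0, "io_ops": 0})
    entry["runs"] += 1
    entry["cpu_seconds"] += cpu_seconds
    entry["io_ops"] += io_ops
    TOTALS_FILE.write_text(json.dumps(totals, indent=2))

record_run("jsmith", "orbit_sim", cpu_seconds=42.5, io_ops=1300)
```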
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Anderson, Thomas E.; Ousterhout, John K.; Patterson, David A.
1991-01-01
Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
Karavitis, G.A.
1984-01-01
The SIMSYS2D two-dimensional water-quality simulation system is a large-scale digital modeling software system used to simulate flow and transport of solutes in freshwater and estuarine environments. Due to the size, processing requirements, and complexity of the system, there is a need to easily move the system and its associated files between computer sites when required. A series of job control language (JCL) procedures was written to allow transferability between IBM and IBM-compatible computers. (USGS)
Nemesis I: Parallel Enhancements to ExodusII
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennigan, Gary L.; John, Matthew S.; Shadid, John N.
2006-03-28
NEMESIS I is an enhancement to the EXODUS II finite element database model used to store and retrieve data for unstructured parallel finite element analyses. NEMESIS I adds data structures which facilitate the partitioning of a scalar (standard serial) EXODUS II file onto parallel disk systems found on many parallel computers. Since the NEMESIS I application programming interface (API) can be used to append information to an existing EXODUS II file, existing software that reads EXODUS II files can be used on files which contain NEMESIS I information. The NEMESIS I information is written and read via C or C++ callable functions which comprise the NEMESIS I API.
Outline of Toshiba Business Information Center
NASA Astrophysics Data System (ADS)
Nagata, Yoshihiro
Toshiba Business Information Center gathers and stores inhouse and external business information used in common within the Toshiba Corp., and provides companywide circulation, reference and other services. The Center established centralized information management system by employing decentralized computers, electronic file apparatus (30cm laser disc) and other office automation equipments. Online retrieval through LAN is available to search the stored documents and increasing copying requests are processed by electronic file. This paper describes the purpose of establishment of the Center, the facilities, management scheme, systematization of the files and the present situation and plan of each information service.
Life and dynamic capacity modeling for aircraft transmissions
NASA Technical Reports Server (NTRS)
Savage, Michael
1991-01-01
A computer program to simulate the dynamic capacity and life of parallel shaft aircraft transmissions is presented. Five basic configurations can be analyzed: single mesh, compound, parallel, reverted, and single plane reductions. In execution, the program prompts the user for the data file prefix name, takes input from an ASCII file, and writes its output to a second ASCII file with the same prefix name. The input data file includes the transmission configuration, the input shaft torque and speed, and descriptions of the transmission geometry and the component gears and bearings. The program output file describes the transmission, its components, their capabilities, locations, and loads. It also lists the dynamic capability, ninety percent reliability, and mean life of each component and the transmission as a system. Here, the program, its input and output files, and the theory behind the operation of the program are described.
Distributing an executable job load file to compute nodes in a parallel computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gooding, Thomas M.
Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
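The patent abstract lays the algorithm out step by step; a minimal sketch of the participation check and class-route construction over a hypothetical tree of compute nodes (not the actual parallel-computer implementation) might be:

```python
# Minimal sketch: each node reports to its parent if it, or any descendant,
# participates in the job; the union of reporting nodes forms the class route
# along which the executable load file is broadcast. Tree layout is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    rank: int
    participating: bool
    children: list = field(default_factory=list)

def build_class_route(node: Node, route: set) -> bool:
    """Return True if this node or any descendant participates in the job."""
    in_subtree = node.participating
    for child in node.children:
        in_subtree |= build_class_route(child, route)
    if in_subtree:
        route.add(node.rank)   # node joins the route and notifies its parent
    return in_subtree

def broadcast_load_file(root: Node, payload: bytes) -> dict:
    route: set = set()
    build_class_route(root, route)
    # Deliver only along the class route (here: just record delivery sizes).
    return {rank: len(payload) for rank in sorted(route)}

tree = Node(0, False, [Node(1, True), Node(2, False, [Node(3, True)])])
print(broadcast_load_file(tree, b"executable-image"))
```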
Expanding Software Productivity and Power while Reducing Costs.
ERIC Educational Resources Information Center
Winer, Ellen N.
1988-01-01
Microcomputer efficiency and software economy can be achieved through file transfer and data sharing. Costs can be reduced by purchasing computer systems that allow for expansion and portability of data. (MLF)
1984-12-01
The report's front matter lists appendix sections on 3Com Corporation Ethernet controller support, host systems support, personal computers support, VAX EtherSeries software, and Network Research Corporation file transfer and virtual terminal services. ...Control office is planning to acquire a Digital Equipment Corporation VAX 11/780 mainframe computer with the Unix Berkeley 4.2BSD operating system. They
Structural/aerodynamic Blade Analyzer (SAB) User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Morel, M. R.
1994-01-01
The structural/aerodynamic blade (SAB) analyzer provides an automated tool for the static-deflection analysis of turbomachinery blades with aerodynamic and rotational loads. A structural code calculates a deflected blade shape using aerodynamic loads input. An aerodynamic solver computes aerodynamic loads using deflected blade shape input. The two programs are iterated automatically until deflections converge. Currently, SAB version 1.0 is interfaced with MSC/NASTRAN to perform the structural analysis and PROP3D to perform the aerodynamic analysis. This document serves as a guide for the operation of the SAB system with specific emphasis on its use at NASA Lewis Research Center (LeRC). This guide consists of six chapters: an introduction which gives a summary of SAB; SAB's methodology, component files, links, and interfaces; input/output file structure; setup and execution of the SAB files on the Cray computers; hints and tips to advise the user; and an example problem demonstrating the SAB process. In addition, four appendices are presented to define the different computer programs used within the SAB analyzer and describe the required input decks.
A performance analysis of advanced I/O architectures for PC-based network file servers
NASA Astrophysics Data System (ADS)
Huynh, K. D.; Khoshgoftaar, T. M.
1994-12-01
In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we would like to discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of same type, same data capacity, and same cost) operating independently, not in parallel as in a disk array.
Portable Map-Reduce Utility for MIT SuperCloud Environment
2015-09-17
Cited references include Reuther, A. Rosa, C. Yee, "Driving Big Data With Big Compute," IEEE HPEC, Sep 10-12, 2012, Waltham, MA, and the Apache Hadoop 1.2.1 HDFS documentation. ...big data architecture, which is designed to address these challenges, is made of the computing resources, scheduler, central storage file system, databases, analytics software and web interfaces [1]. These components are common to many big data and supercomputing systems. The platform is
Free oscilloscope web app using a computer mic, built-in sound library, or your own files
NASA Astrophysics Data System (ADS)
Ball, Edward; Ruiz, Frances; Ruiz, Michael J.
2017-07-01
We have developed an online oscilloscope program which allows users to see waveforms by utilizing their computer microphones, selecting from our library of over 30 audio files, and opening any *.mp3 or *.wav file on their computers. The oscilloscope displays real-time signals against time. The oscilloscope has been calibrated so one can make accurate frequency measurements of periodic waves to within 1%. The web app is ideal for computer projection in class.
Chen, Xiao-bo; Chen, Chen; Liang, Yu-hong
2016-02-18
To evaluate the efficacy and safety of two types of rotary nickel-titanium systems (Twisted File and ProTaper Universal) for root canal preparation based on micro-computed tomography (micro-CT). Twenty extracted molars (including 62 canals) were divided into two experimental groups and were instrumented using the Twisted File rotary nickel-titanium system (TF) and the ProTaper Universal rotary nickel-titanium system (PU), respectively, to #25/0.08 following the recommended protocols. Time for root canal instrumentation (the accumulated time for every single file) was recorded. The 0-3 mm root surface from the apex was observed under an optical stereomicroscope at 25× magnification, and the presence of crack lines was noted. The root canals were scanned with micro-CT before and after root canal preparation. Three-dimensional images of the canals were reconstructed, calculated, and evaluated, and the amount of canal central transportation in the two groups was calculated and compared. A shorter preparation time [(0.53 ± 0.14) min] was observed in the TF group, while the preparation time of the PU group was (2.06 ± 0.39) min (P<0.05). At mid-root level, shaping with TF resulted in less canal center transportation than with PU [(0.070 ± 0.056) mm vs. (0.097 ± 0.084) mm, P<0.05]. No instrument separation was observed in either group. Cracks were not found in either group, whether on micro-CT images or under an optical stereomicroscope at 25× magnification. Compared with ProTaper Universal, Twisted File took less time in root canal preparation, exhibited better shaping ability, and produced less canal transportation.
An analysis of the low-earth-orbit communications environment
NASA Astrophysics Data System (ADS)
Diersing, Robert Joseph
Advances in microprocessor technology and availability of launch opportunities have caused interest in low-earth-orbit satellite based communications systems to increase dramatically during the past several years. In this research the capabilities of two low-cost, store-and-forward LEO communications satellites operating in the public domain are examined--PACSAT-1 (operated by the Radio Amateur Satellite Corporation) and UoSAT-3 (operated by the University of Surrey, England, Electrical Engineering Department). The file broadcasting and file transfer facilities are examined in detail and a simulation model of the downlink traffic pattern is developed. The simulator will aid the assessment of changes in design and implementation for other systems. The development of the downlink traffic simulator is based on three major parts. First, is a characterization of the low-earth-orbit operating environment along with preliminary measurements of the PACSAT-1 and UoSAT-3 systems including: satellite visibility constraints on communications, monitoring equipment configuration, link margin computations, determination of block and bit error rates, and establishing typical data capture rates for ground stations using computer-pointed directional antennas and fixed omni-directional antennas. Second, arrival rates for successful and unsuccessful file server connections are established along with transaction service times. Downlink traffic has been further characterized by measuring: frame and byte counts for all data-link layer traffic; 30-second interval average response time for all traffic and for file server traffic only; file server response time on a per-connection basis; and retry rates for information and supervisory frames. Finally, the model is verified by comparison with measurements of actual traffic not previously used in the model building process. The simulator is then used to predict operation of the PACSAT-1 satellite with modifications to the original design.
Using Maple to Implement eLearning Integrated with Computer Aided Assessment
ERIC Educational Resources Information Center
Blyth, Bill; Labovic, Aleksandra
2009-01-01
Advanced mathematics courses have been developed and refined by the first author, using an action research methodology, for more than a decade. These courses use the computer algebra system (CAS) Maple in an "immersion mode" where all presentations and student work are done using Maple. Assignments and examinations are Maple files downloaded from…
Migration of the digital interactive breast-imaging teaching file
NASA Astrophysics Data System (ADS)
Cao, Fei; Sickles, Edward A.; Huang, H. K.; Zhou, Xiaoqiang
1998-06-01
The digital breast imaging teaching file developed during the last two years in our laboratory has been used successfully at UCSF (University of California, San Francisco) as a routine teaching tool for training radiology residents and fellows in mammography. Building on this success, we have ported the teaching file from an old Pixar imaging/Sun SPARC 470 display system to our newly designed telemammography display workstation (Ultra SPARC 2 platform with two DOME Md5/SBX display boards). The old Pixar/Sun 470 system, although adequate for fast and high-resolution image display, is 4-year-old technology, expensive to maintain and difficult to upgrade. The new display workstation is more cost-effective and is also compatible with the digital image format from a full-field direct digital mammography system. The digital teaching file is built on a sophisticated computer-aided instruction (CAI) model, which simulates the management sequences used in imaging interpretation and work-up. Each user can be prompted to respond by making his/her own observations, assessments, and work-up decisions as well as the marking of image abnormalities. This effectively replaces the traditional 'show-and-tell' teaching file experience with an interactive, response-driven type of instruction.
Executive Council lists and general practitioner files
Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.
1974-01-01
An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79.2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0.8% (assignation of sex) to 68.6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
... Change Relating to Market-Maker Continuous Quoting Obligations February 15, 2013. Pursuant to Section 19... relating to Market-Maker continuous quoting obligations. The text of the proposed rule change is available... Trading System (the ``System'').\\14\\ Their system computations also factor in their market risk models...
NASA Technical Reports Server (NTRS)
1980-01-01
Detailed software and hardware documentation for the Cardiopulmonary Data Acquisition System is presented. General wiring and timing diagrams are given including those for the LSI-11 computer control panel and interface cables. Flowcharts and complete listings of system programs are provided along with the format of the floppy disk file.
Company's Data Security - Case Study
NASA Astrophysics Data System (ADS)
Stera, Piotr
This paper describes a computer network and data security problems in an existing company. Two main issues were pointed out: data loss protection and uncontrolled data copying. A security system was designed and implemented. The system consists of many dedicated programs. This system protects against data loss and detects unauthorized file copying from the company's server by a dishonest employee.
2006-01-01
This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on the CD-ROM. Computers and software may import the data without transcription from the Portable Document Format files (.pdf files) of the text by the reader. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).
Klett, T.R.; Le, P.A.
2006-01-01
This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on the CD-ROM. Computers and software may import the data without transcription from the Portable Document Format files (.pdf files) of the text by the reader. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).
Klett, T.R.; Le, P.A.
2007-01-01
This chapter describes data used in support of the assessment process. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD–ROM. Computers and software may import the data without transcription from the portable document format (.pdf) files of the text by the reader. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).
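Since the tabular data in these chapters are distributed as tab-delimited text, a minimal sketch of loading such a .tab file with Python's standard csv module (the file name shown is hypothetical) could be:

```python
# Minimal sketch: read a tab-delimited NOGA-style data table into a list of dicts.
# "assessment_results.tab" is a hypothetical file name used for illustration.
import csv

def load_tab(path: str) -> list:
    with open(path, newline="") as fh:
        return list(csv.DictReader(fh, delimiter="\t"))

# rows = load_tab("assessment_results.tab")
# print(len(rows), "records, columns:", list(rows[0]) if rows else [])
```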
Measurements over distributed high performance computing and storage systems
NASA Technical Reports Server (NTRS)
Williams, Elizabeth; Myers, Tom
1993-01-01
A strawman proposal is given for a framework for presenting a common set of metrics for supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well.
Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr
2010-10-28
Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4 but they require access to dedicated Linux computer clusters. Also no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina in bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina jobs distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All results files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via Ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users, with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can just be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors, and a potential maximum speed-up of 10x, the parallel algorithm of MOLA performed with a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.
1990-01-01
PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.
Mamede-Neto, Iussif; Borges, Álvaro Henrique; Alencar, Ana Helena Gonçalves; Duarte, Marco Antonio Hungaro; Sousa Neto, Manoel Damião; Estrela, Carlos
2018-01-01
Objective: To evaluate transportation (T) and centering ability (CA) of root canal preparations using continuous or reciprocating nickel-titanium endodontic files. Materials and Methods: Ninety-six mesiobuccal root canals of mandibular first and second molars were randomly divided into 6 groups (n=16) according to the rotary file used: 1. ProTaper Next; 2. ProTaper Gold; 3. Mtwo; 4. BioRaCe; 5. WaveOne Gold; 6. Reciproc. Root canals were prepared according to manufacturer’s instructions. Cone beam computed tomography scans were obtained before and after root canal preparation. Measurements were made at six different reference points: 2, 3 and 4 mm from the apex and 2, 3 and 4 mm below furcation in different directions. Results: The greatest Mesiodistal (MD) Transportation (T) was found for Reciproc files (p<0.05), and the greatest buccolingual (BL) T, for Reciproc, ProTaper Gold and ProTaper Next files (p<0.05). The greatest Mesiodistal (MD) Centering Ability (CA) was found for BioRaCe files (p<0.05), and the greatest Buccolingual (BL) CA, for BioRaCe and Mtwo files (p<0.05). Conclusion: All systems produced root canal transportation. No file system achieved perfect CA of root preparation. Reciproc files had the greatest MD T and BL T. BioRaCe files had the greatest MD CA, whereas BL CA was similar for BioRaCe and Mtwo files. PMID:29456772
ERIC Educational Resources Information Center
Emmens, Carol E.
1982-01-01
Describes "Maggie's Place," the library computer system of the Pikes Peak Library District, Colorado Springs, Colorado, noting its use as an electronic card catalog and community information file, accessibility by home users and library users, and terminal considerations. (EJS)
A SARA Timeseries Utility supports analysis and management of time-varying environmental data including listing, graphing, computing statistics, computing meteorological data and saving in a WDM or text file. File formats supported include WDM, HSPF Binary (.hbn), USGS RDB, and T...
Schools (Students) Exchanging CAD/CAM Files over the Internet.
ERIC Educational Resources Information Center
Mahoney, Gary S.; Smallwood, James E.
This document discusses how students and schools can benefit from exchanging computer-aided design/computer-aided manufacturing (CAD/CAM) files over the Internet, explains how files are exchanged, and examines the problem of selected hardware/software incompatibility. Key terms associated with information search services are defined, and several…
MISR HDF-to-Binary Converter and Radiance/BRF Calculation Tools
Atmospheric Science Data Center
2013-04-01
... to have the HDF and HDF-EOS libraries for the target computer. The HDF libraries are available from The HDF Group (THG) . The ... and the HDF-EOS include and library files on the target computer. The following files are included in the distribution tar file for ...
Associative programming language and virtual associative access manager
NASA Technical Reports Server (NTRS)
Price, C.
1978-01-01
APL provides convenient associative data manipulation functions in a high level language. Six statements were added to PL/1 via a preprocessor: CREATE, INSERT, FIND, FOR EACH, REMOVE, and DELETE. They allow complete control of all data base operations. During execution, data base management programs perform the functions required to support the APL language. VAAM is the data base management system designed to support the APL language. APL/VAAM is used by CADANCE, an interactive graphic computer system. VAAM is designed to support heavily referenced files. Virtual memory files, which utilize the paging mechanism of the operating system, are used. VAAM supports a full network data structure. The two basic blocks in a VAAM file are entities and sets. Entities are the basic information element and correspond to PL/1 based structures defined by the user. Sets contain the relationship information and are implemented as arrays.
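The abstract names the six data-manipulation statements and the two building blocks (entities and sets). A rough Python analogue, assuming a simplified in-memory model rather than VAAM's paged virtual-memory files and PL/1 preprocessor, could be:

```python
# Rough analogue of APL's CREATE / INSERT / FIND / FOR EACH / REMOVE / DELETE
# over entities (records) and sets (relationships). Purely illustrative; the
# real system generated PL/1 via a preprocessor and stored data in VAAM files.
class Database:
    def __init__(self):
        self.entities = {}   # id -> attribute dict
        self.sets = {}       # set name -> list of entity ids
        self._next = 0

    def create(self, **attrs):                 # CREATE an entity
        self._next += 1
        self.entities[self._next] = attrs
        return self._next

    def insert(self, set_name, eid):           # INSERT an entity into a set
        self.sets.setdefault(set_name, []).append(eid)

    def find(self, set_name, **criteria):      # FIND matching members of a set
        return [e for e in self.sets.get(set_name, [])
                if all(self.entities[e].get(k) == v for k, v in criteria.items())]

    def remove(self, set_name, eid):           # REMOVE an entity from a set
        self.sets[set_name].remove(eid)

    def delete(self, eid):                     # DELETE the entity itself
        self.entities.pop(eid, None)
        for members in self.sets.values():
            if eid in members:
                members.remove(eid)

db = Database()
line = db.create(kind="line", length=4.2)
db.insert("drawing_1", line)
print(db.find("drawing_1", kind="line"))       # FOR EACH-style iteration over a set
```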
DOE Office of Scientific and Technical Information (OSTI.GOV)
KURTZER, GREGORY; MURIKI, KRISHNA
Singularity is a container solution designed to facilitate mobility of compute across systems and HPC infrastructures. It does this by creating minimal containers that are defined by a specfile; files from the host system are used to build the container. The resulting container can then be launched by any Linux computer with Singularity installed, regardless of whether the programs inside the container are present on the target system, are a different version, or are even incompatible versions. Singularity achieves extreme portability without sacrificing usability, thus solving the need for mobility of compute. Singularity containers can be executed within a normal/standard command line process flow.
A brain computer interface-based explorer.
Bai, Lijuan; Yu, Tianyou; Li, Yuanqing
2015-04-15
In recent years, various applications of brain computer interfaces (BCIs) have been studied. In this paper, we present a hybrid BCI combining P300 and motor imagery to operate an explorer. Our system is mainly composed of a BCI mouse, a BCI speller and an explorer. Through this system, the user can access his computer and manipulate (open, close, copy, paste, and delete) files such as documents, pictures, music, movies and so on. The system has been tested with five subjects, and the experimental results show that the explorer can be successfully operated according to subjects' intentions. Copyright © 2014 Elsevier B.V. All rights reserved.
2014-09-01
becoming a more and more prevalent technology in the business world today. According to Syal and Goswami (2012), cloud technology is seen as a...use of computing resources, applications, and personal files without reliance on a single computer or system ( Syal & Goswami, 2012). By operating in...cloud services largely being web-based, which can be retrieved through most systems with access to the Internet ( Syal & Goswami, 2012). The end user can
The Computer: An Effective Research Assistant
Gancher, Wendy
1984-01-01
The development of software packages such as data management systems and statistical packages has made it possible to process large amounts of research data. Data management systems make the organization and manipulation of such data easier. Floppy disks ease the problem of storing and retrieving records. Patient information can be kept confidential by limiting access to computer passwords linked with research files, or by using floppy disks. These attributes make the microcomputer essential to modern primary care research. PMID:21279042
1997-01-01
GeoChange is an online data system providing access to research results and data generated by the U.S. Geological Survey's Global Change Research Program. Researchers in this program study climate history and the causes of climatic variations, as well as providing baseline data sets on contemporary atmospheric chemistry, high-resolution meteorology, and dust deposition. Research results are packaged as data sets, groups of digital files containing scientific observations, documentation, and interpretation. The data sets are arranged in a consistent manner using standard file formats so that users of a variety of computer systems can access and use them.
Design and application of PDF model for extracting
NASA Astrophysics Data System (ADS)
Xiong, Lei
2013-07-01
In order to reduce the submission workflow in the editorial department system from two steps to one, this paper proposes transplanting the PDF information-extraction technology found in PDF readers into the IEEE Xplore contribution system and combining it with batch uploading, so that editors can upload about 1 GB of PDF files in a single batch. The computer then automatically extracts the title, author, address, e-mail, abstract, and keywords of each paper for later retrieval, saving considerable labor, material, and cost for the editorial department.
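As a hedged illustration of the extraction step, a minimal sketch using the pypdf library (an assumption here; the paper's actual tooling is not stated, and the upload folder name is hypothetical) could be:

```python
# Minimal sketch: pull document-information fields out of submitted PDFs so
# they can be indexed for later retrieval. The use of pypdf is an assumption;
# the editorial system described in the paper may use different tooling.
from pathlib import Path
from pypdf import PdfReader

def extract_metadata(pdf_path: Path) -> dict:
    meta = PdfReader(pdf_path).metadata
    return {
        "title": getattr(meta, "title", None),
        "author": getattr(meta, "author", None),
        "subject": getattr(meta, "subject", None),
    }

folder = Path("submissions")        # hypothetical batch-upload folder
if folder.is_dir():
    for pdf in folder.glob("*.pdf"):
        print(pdf.name, extract_metadata(pdf))
```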
FEQinput—An editor for the full equations (FEQ) hydraulic modeling system
Ancalle, David S.; Ancalle, Pablo J.; Domanski, Marian M.
2017-10-30
The Full Equations Model (FEQ) is a computer program that solves the full, dynamic equations of motion for one-dimensional unsteady hydraulic flow in open channels and through control structures. As a result, hydrologists have used FEQ to design and operate flood-control structures, delineate inundation maps, and analyze peak-flow impacts. To aid in fighting floods, hydrologists are using the software to develop a system that uses flood-plain models to simulate real-time streamflow. Input files for FEQ are composed of text files that contain large amounts of parameters, data, and instructions that are written in a format exclusive to FEQ. Although documentation exists that can aid in the creation and editing of these input files, new users face a steep learning curve in order to understand the specific format and language of the files. FEQinput provides a set of tools to help a new user overcome the steep learning curve associated with creating and modifying input files for the FEQ hydraulic model and the related utility tool, Full Equations Utilities (FEQUTL).
LVFS: A Big Data File Storage Bridge for the HPC Community
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.
2015-12-01
Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS) currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS) to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed for the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies such as Seagate's Kinetic disks and Intel's 3DXPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and nearly doubling of storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevent any disruption in workflows, architecture design, or tool usage. We will show how LVFS will convert HDF data produced by applying machine learning algorithms to Xco2 Level 2 data from the OCO-2 satellite to produce CO2 surface fluxes into GeoTIFF for visualization.
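The abstract does not spell out the LVFS plugin API; a minimal sketch of what such a pluggable storage-backend interface could look like, with hypothetical class and method names, is:

```python
# Hypothetical sketch in the spirit of the LVFS design: the virtual file system
# exposes one interface while plugins map it onto local disk, remote stores, or
# object/key-value devices. Names here are illustrative, not the LVFS API.
from abc import ABC, abstractmethod

class StoragePlugin(ABC):
    @abstractmethod
    def read(self, path: str) -> bytes: ...
    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

class LocalDiskPlugin(StoragePlugin):
    def read(self, path):
        with open(path, "rb") as fh:
            return fh.read()
    def write(self, path, data):
        with open(path, "wb") as fh:
            fh.write(data)

class VirtualFileSystem:
    def __init__(self):
        self.mounts = {}                      # path prefix -> plugin

    def mount(self, prefix: str, plugin: StoragePlugin) -> None:
        self.mounts[prefix] = plugin

    def open_read(self, path: str) -> bytes:
        for prefix, plugin in self.mounts.items():
            if path.startswith(prefix):
                return plugin.read(path)
        raise FileNotFoundError(path)

vfs = VirtualFileSystem()
vfs.mount("/local/", LocalDiskPlugin())       # other backends would mount here too
```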
REPHLEX II: An information management system for the ARS Water Data Base
NASA Astrophysics Data System (ADS)
Thurman, Jane L.
1993-08-01
The REPHLEX II computer system is an on-line information management system which allows scientists, engineers, and other researchers to retrieve data from the ARS Water Data Base using asynchronous communications. The system features two phone lines handling baud rates from 300 to 2400, customized menus to facilitate browsing, help screens, direct access to information and data files, electronic mail processing, file transfers using the XMODEM protocol, and log-in procedures which capture information on new users, process passwords, and log activity for a permanent audit trail. The primary data base on the REPHLEX II system is the ARS Water Data Base which consists of rainfall and runoff data from experimental agricultural watersheds located in the United States.
The Air Force Academy Instructor Workstation (IWS): I. Design and Implementation.
ERIC Educational Resources Information Center
Gist, Thomas E.; And Others
1989-01-01
Discusses the design and implementation of a computer-controlled instructor workstation (IWS), including a videodisc player, that was developed at the Air Force Academy. System capabilities for lesson presentation, administrative functions, an authoring system, and a file server for courseware maintenance are explained. (seven references) (LRW)
Multiyear Interactive Computer Almanac (MICA)
From the U.S. Naval Observatory. MICA can compute quantities including delta T, and twilight, rise, set, and transit times for major solar system bodies and selected bright
Bill Action & Status Inquiry System (BASIS): Each of the LIOs has computer access to information on floor and committee action as it occurs via BASIS (Bill Action Status Inquiry System). For each bill introduced, a file is created in BASIS
An Automated Circulation System for a Small Technical Library.
ERIC Educational Resources Information Center
Culnan, Mary J.
The traditional manually-controlled circulation records of the Burroughs Corporation Library in Goleta, California, presented problems of inaccuracies, time-consuming searches, and lack of use statistics. An automated system with the capacity to do file maintenance and statistical record-keeping was implemented on a Burroughs B1700 computer.…
Integration experiences and performance studies of A COTS parallel archive systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-bung; Scott, Cody; Grider, Gary
2010-01-01
Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.
Integration experiments and performance studies of a COTS parallel archive system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-bung; Scott, Cody; Grider, Gary
2010-06-16
Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of future archival storage systems.
2018-01-18
processing. Specifically, the method described herein uses wgrib2 commands along with a Python script or program to produce tabular text files that in...It makes use of software that is readily available and can be implemented on many computer systems combined with relatively modest additional...example), extracts appropriate information, and lists the extracted information in a readable tabular form. The Python script used here is described in
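As a hedged illustration of the workflow described (the report's exact commands, match patterns, and file names are not reproduced here), a Python script could invoke wgrib2's CSV output and list the extracted records in tabular text form:

```python
# Illustrative sketch: extract selected records from a GRIB2 file with wgrib2
# and list them as tab-separated text. "forecast.grb2" and the ":TMP:" match
# pattern are hypothetical; the availability of wgrib2 with its -match and
# -csv options on the system is assumed.
import csv
import subprocess

subprocess.run(
    ["wgrib2", "forecast.grb2", "-match", ":TMP:", "-csv", "tmp_records.csv"],
    check=True,
)

with open("tmp_records.csv", newline="") as fh:
    for row in csv.reader(fh):
        print("\t".join(row))    # simple tab-separated listing
```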
Byers, J A
1992-09-01
A compiled program, JCE-REFS.EXE (coded in the QuickBASIC language), for use on IBM-compatible personal computers is described. The program converts a DOS text file of current B-I-T-S (BIOSIS Information Transfer System) or BIOSIS Previews references into a DOS file of citations, including abstracts, in a general style used by scientific journals. The latter file can be imported directly into a word processor or the program can convert the file into a random access data base of the references. The program can search the data base for up to 40 text strings with Boolean logic. Selected references in the data base can be exported as a DOS text file of citations. Using the search facility, articles in theJournal of Chemical Ecology from 1975 to 1991 were searched for certain key words in regard to semiochemicals, taxa, methods, chemical classes, and biological terms to determine trends in usage over the period. Positive trends were statistically significant in the use of the words: semiochemical, allomone, allelochemic, deterrent, repellent, plants, angiosperms, dicots, wind tunnel, olfactometer, electrophysiology, mass spectrometry, ketone, evolution, physiology, herbivore, defense, and receptor. Significant negative trends were found for: pheromone, vertebrates, mammals, Coleoptera, Scolytidae,Dendroctonus, lactone, isomer, and calling.
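A minimal sketch of the kind of multi-string Boolean search the program performs over citation records (record contents and search terms here are made up) could be:

```python
# Minimal sketch: AND/OR search of citation records for key words, in the
# spirit of JCE-REFS' 40-term Boolean search. Records and terms are invented.
records = [
    "Pheromone-mediated aggregation in Dendroctonus bark beetles ...",
    "Wind tunnel assays of host-plant volatiles and repellents ...",
]

def matches(text: str, all_of=(), any_of=()) -> bool:
    text = text.lower()
    return (all(t.lower() in text for t in all_of)
            and (not any_of or any(t.lower() in text for t in any_of)))

hits = [r for r in records if matches(r, all_of=["pheromone"], any_of=["beetle", "scolytidae"])]
print(hits)
```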
Recent developments in user-job management with Ganga
NASA Astrophysics Data System (ADS)
Currie, R.; Elmsheuser, J.; Fay, R.; Owen, P. H.; Richards, A.; Slater, M.; Sutcliffe, W.; Williams, M.
2015-12-01
The Ganga project was originally developed for use by LHC experiments and has been used extensively throughout Run1 in both LHCb and ATLAS. This document describes some of the most recent developments within the Ganga project. There have been improvements in the handling of large scale computational tasks in the form of a new GangaTasks infrastructure. Improvements in file handling through a new IGangaFile interface make handling files largely transparent to the end user. In addition to this, the performance and usability of Ganga have both been addressed through the development of a new queues system that allows for parallel processing of job-related tasks.
Automated quality control in a file-based broadcasting workflow
NASA Astrophysics Data System (ADS)
Zhang, Lina
2014-04-01
Benefiting from the development of information and internet technologies, television broadcasting is transforming from inefficient tape-based production and distribution to integrated file-based workflows. However, no matter how many changes have taken place, successful broadcasting still depends on the ability to deliver a consistent, high-quality signal to audiences. After the transition from tape to file, traditional methods of manual quality control (QC) become inadequate, subjective, and inefficient. Based on China Central Television's full file-based workflow in the new site, this paper introduces an automated quality control test system for accurate detection of hidden troubles in media contents. It discusses the system framework and workflow control when automated QC is added. It puts forward a QC criterion and presents QC software that follows this criterion. It also reports experiments on QC speed using parallel processing and distributed computing. The performance of the test system shows that the adoption of automated QC can make production effective and efficient, and help the station to achieve a competitive advantage in the media market.
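The paper reports speeding up QC through parallel and distributed processing; a minimal single-machine sketch of running independent checks over media files in parallel worker processes (the check logic and ingest folder are placeholders, not the CCTV system's actual checks) could be:

```python
# Minimal sketch: run an independent QC check on each media file in parallel
# worker processes. The real system distributes work across machines and runs
# far richer checks; the empty-file test below is only a placeholder.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def qc_check(path: Path) -> tuple:
    size = path.stat().st_size
    return (path.name, "FAIL: empty file" if size == 0 else "PASS")

if __name__ == "__main__":
    ingest = Path("ingest")                              # hypothetical ingest folder
    files = sorted(ingest.glob("*.mxf")) if ingest.is_dir() else []
    with ProcessPoolExecutor() as pool:
        for name, verdict in pool.map(qc_check, files):
            print(name, verdict)
```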
PC-based Multiple Information System Interface (PC/MISI) detailed design and implementation plan
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Hall, Philip P.
1985-01-01
The design plan for the personal computer multiple information system interface (PC/MISI) project is discussed. The document is intended to be used as a blueprint for the implementation of the system. Each component is described in the detail necessary to allow programmers to implement the system. A description of the system data flow and system file structures is given.
CARE3MENU- A CARE III USER FRIENDLY INTERFACE
NASA Technical Reports Server (NTRS)
Pierce, J. L.
1994-01-01
CARE3MENU generates an input file for the CARE III program. CARE III is used for reliability prediction of complex, redundant, fault-tolerant systems including digital computers, aircraft, nuclear and chemical control systems. The CARE III input file often becomes complicated and is not easily formatted with a text editor. CARE3MENU provides an easy, interactive method of creating an input file by automatically formatting a set of user-supplied inputs for the CARE III system. CARE3MENU provides detailed on-line help for most of its screen formats. The reliability model input process is divided into sections using menu-driven screen displays. Each stage, or set of identical modules comprising the model, must be identified and described in terms of number of modules, minimum number of modules for stage operation, and critical fault threshold. The fault handling and fault occurence models are detailed in several screens by parameters such as transition rates, propagation and detection densities, Weibull or exponential characteristics, and model accuracy. The system fault tree and critical pairs fault tree screens are used to define the governing logic and to identify modules affected by component failures. Additional CARE3MENU screens prompt the user for output options and run time control values such as mission time and truncation values. There are fourteen major screens, many with default values and HELP options. The documentation includes: 1) a users guide with several examples of CARE III models, the dialog required to input them to CARE3MENU, and the output files created; and 2) a maintenance manual for assistance in changing the HELP files and modifying any of the menu formats or contents. CARE3MENU is written in FORTRAN 77 for interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985.
Leonardi, Rosalia; Maiorana, Francesco; Giordano, Daniela
2008-06-01
Many of us use and maintain files on more than 1 computer--a desktop part of the time, and a notebook, a palmtop, or removable devices at other times. It can be easy to forget which device contains the latest version of a particular file, and time-consuming searches often ensue. One way to solve this problem is to use software that synchronizes the files. This allows users to maintain updated versions of the same file in several locations.
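A minimal sketch of the kind of synchronization such tools perform, keeping whichever copy has the newer modification time (the directory paths shown are hypothetical), could be:

```python
# Minimal sketch: per-file sync that keeps the newer of two copies, similar in
# spirit to the synchronization utilities the article discusses. Paths are
# hypothetical examples.
import shutil
from pathlib import Path

def sync_pair(a: Path, b: Path) -> None:
    rel_paths = ({p.relative_to(a) for p in a.rglob("*") if p.is_file()} |
                 {p.relative_to(b) for p in b.rglob("*") if p.is_file()})
    for rel in rel_paths:
        src, dst = a / rel, b / rel
        if not dst.exists() or (src.exists() and src.stat().st_mtime > dst.stat().st_mtime):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)       # copy the newer (or only) copy to b
        elif not src.exists() or dst.stat().st_mtime > src.stat().st_mtime:
            src.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(dst, src)       # copy the newer (or only) copy to a

sync_pair(Path("desktop_docs"), Path("notebook_docs"))
```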
Indexed Retrieval System for Navy Experimental Diving Unit Research and Evaluation Reports.
KWIC computer programs developed by the International Business Machine Corporation (IBM) were so successful in this application that they are now being applied to all of NEDU’s microfilmed research files. (Author)
UniGene Tabulator: a full parser for the UniGene format.
Lenzi, Luca; Frabetti, Flavia; Facchin, Federica; Casadei, Raffaella; Vitale, Lorenza; Canaider, Silvia; Carinci, Paolo; Zannotti, Maria; Strippoli, Pierluigi
2006-10-15
UniGene Tabulator 1.0 provides a solution for full parsing of the UniGene flat file format; it implements a structured graphical representation of each data field present in UniGene following import into a common database management system usable on a personal computer. This database includes related tables for sequence, protein similarity, sequence-tagged site (STS) and transcript map interval (TXMAP) data, plus a summary table where each record represents a UniGene cluster. UniGene Tabulator enables full local management of UniGene data, allowing parsing, querying, indexing, retrieving, exporting and analysis of UniGene data in a relational database form, usable on computers running Macintosh (OS X 10.3.9 or later) or Windows (2000 with Service Pack 4, or XP with Service Pack 2 or later) operating systems. The current release, including both the FileMaker runtime applications, is freely available at http://apollo11.isto.unibo.it/software/
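A rough sketch of the kind of flat-file parsing the tool performs, assuming records terminated by "//" and composed of tagged lines (the tag names and file name shown are assumptions, not the exact UniGene specification), could be:

```python
# Rough sketch: parse a UniGene-style flat file into one dict per cluster.
# Records are assumed to be terminated by "//" and composed of "TAG value"
# lines (e.g. ID, TITLE, SEQUENCE); the exact tag set is an assumption here.
def parse_flatfile(path: str):
    record = {}
    with open(path) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("//"):
                if record:
                    yield record
                record = {}
            elif line.strip():
                tag, _, value = line.partition(" ")
                record.setdefault(tag, []).append(value.strip())
    if record:
        yield record

# for cluster in parse_flatfile("Hs.data"):   # hypothetical UniGene data file
#     print(cluster.get("ID"), cluster.get("TITLE"))
```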
I/O Router Placement and Fine-Grained Routing on Titan to Support Spider II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, Matthew A; Dillow, David; Oral, H Sarp
2014-01-01
The Oak Ridge Leadership Computing Facility (OLCF) introduced the concept of Fine-Grained Routing in 2008 to improve I/O performance between the Jaguar supercomputer and Spider, OLCF's center-wide Lustre file system. Fine-grained routing organizes I/O paths to minimize congestion. Jaguar has since been upgraded to Titan, providing more than a ten-fold improvement in peak performance. To support the center's increased computational capacity and I/O demand, the Spider file system has been replaced with Spider II. Building on the lessons learned from Spider, an improved method for placing LNET routers was developed and implemented for Spider II. The fine-grained routing scripts and configuration have been updated to provide additional optimizations and better match the system setup. This paper presents a brief history of fine-grained routing at OLCF, an introduction to the architectures of Titan and Spider II, methods for placing routers in Titan, and details about the fine-grained routing configuration.
Optical mass memory system (AMM-13). AMM/DBMS interface control document
NASA Technical Reports Server (NTRS)
Bailey, G. A.
1980-01-01
The baseline for external interfaces of a 10 to the 13th power bit, optical archival mass memory system (AMM-13) is established. The types of interfaces addressed include data transfer; AMM-13, Data Base Management System, NASA End-to-End Data System computer interconnect; data/control input and output interfaces; test input data source; file management; and facilities interface.
UNIX-BASED DATA MANAGEMENT SYSTEM FOR PROPAGATION EXPERIMENTS
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
This collection of programs comprises The UNIX Based Data Management System for the Pilot Field Experiment (PiFEx) which is an attempt to mimic the Mobile Satellite (MSAT) scenario. The major purposes of PiFEx are to define the mobile communications channels and test the workability of new concepts used to design various components of the receiver system. The results of the PiFex experiment are large amounts of raw data which must be accessed according to a researcher's needs. This package provides a system to manage the PiFEx data in an interactive way. The system not only provides the file handling necessary to retrieve the desired data, but also several FORTRAN programs to generate some standard results pertaining to propagation data. This package assumes that the data file initially generated from the experiment has been already converted from binary to ASCII format. The Data Management system described here consists of programs divided into two categories: those programs that handle the PiFEx generated files and those that are used for number-crunching of these files. Five FORTRAN programs and one UNIX shell script file are used for file manipulation purposes. These activities include: calibration of the acquired data; and parsing of the large data file into datasets concerned with different aspects of the experiment such as the specific calibrated propagation data, dynamic and static loop error data, statistical data, and temperature and spatial data on the hardware used in the experiment. The five remaining FORTRAN programs are used to generate usable information about the data. Signal level probability, probability density of the signal fitting the Rician density function, frequency of the data's fade duration, and the Fourier transform of the data can all be generated from these data manipulation programs. In addition, a program is provided which generates a downloadable file from the signal levels and signal phases files for use with the plotting routine AKPLOT (NPO-16931). All programs in this package are written in either FORTRAN-77 or UNIX shell-scripts. The package does not include test data. The programs were developed in 1987 for use with a UNIX operating system on a DEC MicroVAX computer.
The Library and Human Memory Simulation Studies. Reports on File Organization Studies.
ERIC Educational Resources Information Center
Reilly, Kevin D.
This report describes digital computer simulation efforts in a study of memory systems for two important cases: that of the individual, the brain; and that of society, the library. A neural system model is presented in which a complex system is produced by connecting simple hypothetical neurons whose states change under application of a…
Common Database Interface for Heterogeneous Software Engineering Tools.
1987-12-01
Subject terms: Database Management Systems; Programming (Computers); Computer Files; Information Transfer; Interfaces. Thesis presented to the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Information Systems. Contents include: Literature; System 690 Configuration; Database Functions; Software Engineering Environments; Data Manager.
Ahmetoglu, Fuat; Keles, Ali; Simsek, Neslihan; Ocak, M Sinan; Yologlu, Saim
2015-01-01
This study aimed to use micro-computed tomography (μ-CT) to evaluate the canal shaping properties of three nickel-titanium instruments, Self-Adjusting File (SAF), Reciproc, and Revo-S rotary file, in maxillary first molars. Thirty maxillary molars were scanned preoperatively using micro-computed tomography (μ-CT) scans at 13.68 μm resolution. The teeth were randomly assigned to three groups (n = 10). The root canals were shaped with SAF, Reciproc, and Revo-S, respectively. The shaped root canals were rescanned. Changes in canal volumes and surface areas were compared with preoperative values. The data were analyzed using Kruskal-Wallis and Conover's post hoc tests, with p < .05 denoting a statistically significant difference. Preoperatively, canal volumes and surface areas were statistically similar among the three groups (p > .05). There were statistically significant differences in all measures comparing preoperative and postoperative canal models (p = 0.0001). The post-instrumentation differences showed no statistically significant difference in volume among the three experimental groups (p > .05). Surface area changed similarly in the buccal canals for each of the three techniques, whereas no statistically significant difference in surface area was detected between the SAF and Revo-S groups in the palatal (P) canal. Each of the three shaping systems produced similar volume changes in all canals, but SAF and Revo-S shaped the P canal more effectively than Reciproc. © Wiley Periodicals, Inc.
Gray, A J; Beecher, D E; Olson, M V
1984-01-01
A stand-alone, interactive computer system has been developed that automates the analysis of ethidium bromide-stained agarose and acrylamide gels on which DNA restriction fragments have been separated by size. High-resolution digital images of the gels are obtained using a camera that contains a one-dimensional, 2048-pixel photodiode array that is mechanically translated through 2048 discrete steps in a direction perpendicular to the gel lanes. An automatic band-detection algorithm is used to establish the positions of the gel bands. A color-video graphics system, on which both the gel image and a variety of operator-controlled overlays are displayed, allows the operator to visualize and interact with critical stages of the analysis. The principal interactive steps involve defining the regions of the image that are to be analyzed and editing the results of the band-detection process. The system produces a machine-readable output file that contains the positions, intensities, and descriptive classifications of all the bands, as well as documentary information about the experiment. This file is normally further processed on a larger computer to obtain fragment-size assignments. Images PMID:6320097
Klett, T.R.; Le, P.A.
2013-01-01
This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD–ROM. The data may be imported by computers and software without the reader having to transcribe it from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).
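Since the tabular data are distributed as tab-delimited .tab files, they can be read directly by most software; the short Python sketch below shows one way to load such a file with the standard csv module. The file name and the assumption that the first row holds column names are illustrative, since the actual table layouts vary by chapter.

# Illustrative sketch: read a tab-delimited .tab data file into a list of
# row dictionaries. The file name and header assumption are hypothetical.
import csv

with open("assessment_results.tab", newline="") as src:
    reader = csv.reader(src, delimiter="\t")
    header = next(reader)                 # assume the first row holds column names
    rows = [dict(zip(header, row)) for row in reader]

print(header)
print(rows[0] if rows else "no data rows")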
A Geometry Based Infra-structure for Computational Analysis and Design
NASA Technical Reports Server (NTRS)
Haimes, Robert
1997-01-01
The computational steps traditionally taken for most engineering analysis (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels on from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis or using the above procedure for design become prohibitive.
NASA Astrophysics Data System (ADS)
Pescaru, A.; Oanta, E.; Axinte, T.; Dascalescu, A.-D.
2015-11-01
Computer-aided engineering is based on models of phenomena which are expressed as algorithms. The implementations of the algorithms are usually software applications that process a large volume of numerical data, regardless of the size of the input data. For example, finite element method applications used to have an input data generator that created the entire volume of geometrical data, starting from the initial geometrical information and the parameters stored in the input data file. Moreover, there were several data processing stages, such as: renumbering of the nodes to minimize the bandwidth of the system of equations to be solved, computation of the equivalent nodal forces, computation of the element stiffness matrices, assembly of the system of equations, solution of the system of equations, and computation of the secondary variables. Modern software applications use pre-processing and post-processing programs to handle the information easily. Beyond this example, CAE applications use various stages of complex computation, and the accuracy of the final results is of particular interest. Over time, the development of CAE applications has been a constant concern of the authors, and the accuracy of the results has been a very important target. The paper presents the various computing techniques that were devised and implemented in the resulting applications: finite element method programs, finite difference element method programs, general numerical methods applications, data generators, graphical applications, and experimental data reduction programs. In this context, the use of extended-precision data types was one of the solutions, the limitation being the size of the memory that may be allocated. To avoid memory-related problems, the data was stored in files. To minimize execution time, part of the file was accessed using dynamic memory allocation facilities. One of the most important outcomes of the paper is the design of a library that includes the optimized solutions previously tested, which may be used for the easy development of original cross-platform CAE applications. Last but not least, besides the generality of the data-type solutions, the work targets the development of a software library for the easy development of node-based CAE applications, in which each node has several known or unknown parameters and the system of equations is automatically generated and solved.
Network issues for large mass storage requirements
NASA Technical Reports Server (NTRS)
Perdue, James
1992-01-01
File servers and supercomputing environments need high-performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution which permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); matches the throughput of the emerging high-performance disk technologies, such as RAID, parallel head transfer devices, and software striping; supports standard network and file system applications using a sockets-based application program interface, such as FTP, rcp, rdump, etc.; supports access to the Network File System (NFS) and large aggregate bandwidth for large NFS usage; provides access to a distributed, hierarchical data server capability using the DISCOS UniTree product; supports file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.
The Use Of Videography For Three-Dimensional Motion Analysis
NASA Astrophysics Data System (ADS)
Hawkins, D. A.; Hawthorne, D. L.; DeLozier, G. S.; Campbell, K. R.; Grabiner, M. D.
1988-02-01
Special video path editing capabilities, with custom hardware and software, have been developed for use in conjunction with existing video acquisition hardware and firmware. This system has simplified the task of quantifying the kinematics of human movement. A set of retro-reflective markers is secured to a subject performing a given task (e.g., walking, throwing, swinging a golf club). Multiple cameras, a video processor, and a computer workstation collect video data while the task is performed. Software has been developed to edit video files, create centroid data, and identify marker paths. Multi-camera path files are combined to form a 3D path file using the DLT method of cinematography. A separate program converts the 3D path file into kinematic data by creating a set of local coordinate axes and performing a series of coordinate transformations from one local system to the next. The kinematic data is then displayed for appropriate review and/or comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.; Raffenetti, C.
NJE is communications software developed to enable a VAX VMS system to participate as an end-node in a standard IBM network by emulating the Network Job Entry (NJE) protocol. NJE supports job networking for the operating systems used on most large IBM-compatible computers (e.g., VM/370, MVS with JES2 or JES3, SVS, MVT with ASP or HASP). Files received by the VAX can be printed or saved in user-selected disk files. Files sent to the network can be routed to any network node for printing, punching, or job submission, or to a VM/370 user's virtual reader. Files sent from the VAX are queued and transmitted asynchronously. No changes are required to the IBM software. DEC VAX11/780; VAX-11 FORTRAN 77 (99%) and MACRO-11 (1%); VMS 2.5; VAX11/780 with DUP-11 UNIBUS interface and 9600 baud synchronous modem.
NASA Astrophysics Data System (ADS)
Gipson, John
2011-07-01
I describe the proposed data structure for storing, archiving and processing VLBI data. In this scheme, most VLBI data is stored in NetCDF files. NetCDF has the advantage that there are interfaces to most common computer languages including Fortran, Fortran-90, C, C++, Perl, etc., and the most common operating systems including Linux, Windows and Mac. The data files for a particular session are organized by special ASCII "wrapper" files which contain pointers to the data files. This allows great flexibility in the processing and analysis of VLBI data, and also allows for extending the types of data used, e.g., source maps. I discuss the use of the new format in calc/solve and other VLBI analysis packages. I also discuss plans for transitioning to the new structure.
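A minimal sketch of the wrapper-plus-NetCDF idea, assuming a simple line-oriented wrapper format: each non-comment line names one NetCDF file belonging to the session, and the netCDF4 Python module reads the variables from each file. The wrapper syntax, comment character, and file name here are assumptions for illustration, not the actual VLBI wrapper specification.

# Hypothetical sketch: read an ASCII "wrapper" file that lists the NetCDF
# data files for one VLBI session, then load each file's variables with the
# netCDF4 module. Wrapper format and file names are illustrative assumptions.
from netCDF4 import Dataset

def load_session(wrapper_path):
    data = {}
    with open(wrapper_path) as wrapper:
        for line in wrapper:
            line = line.strip()
            if not line or line.startswith("!"):      # skip blanks and comments
                continue
            ds = Dataset(line)                        # open the referenced NetCDF file
            data[line] = {name: var[:] for name, var in ds.variables.items()}
            ds.close()
    return data

session = load_session("session_2011_07_04.wrp")      # hypothetical wrapper file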
A computer case definition for sudden cardiac death.
Chung, Cecilia P; Murray, Katherine T; Stein, C Michael; Hall, Kathi; Ray, Wayne A
2010-06-01
To facilitate studies of medications and sudden cardiac death, we developed and validated a computer case definition for these deaths. The study of community-dwelling Tennessee Medicaid enrollees 30-74 years of age utilized a linked database with Medicaid inpatient/outpatient files, state death certificate files, and a state 'all-payers' hospital discharge file. The computerized case definition was developed from a retrospective cohort study of sudden cardiac deaths occurring between 1990 and 1993. Medical records for 926 potential cases had been adjudicated for this study to determine if they met the clinical definition for sudden cardiac death occurring in the community and were likely to be due to ventricular tachyarrhythmias. The computerized case definition included deaths with (1) no evidence of a terminal hospital admission/nursing home stay in any of the data sources; (2) an underlying cause of death code consistent with sudden cardiac death; and (3) no terminal procedures inconsistent with unresuscitated cardiac arrest. This definition was validated in an independent sample of 174 adjudicated deaths occurring between 1994 and 2005. The positive predictive value of the computer case definition was 86.0% in the development sample and 86.8% in the validation sample. The positive predictive value did not vary materially for deaths coded according to the ICD-9 (1994-1998, positive predictive value = 85.1%) or ICD-10 (1999-2005, 87.4%) systems. A computerized Medicaid database, linked with death certificate files and a state hospital discharge database, can be used for a computer case definition of sudden cardiac death. Copyright (c) 2009 John Wiley & Sons, Ltd.
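A hedged sketch of how the three-part case definition could be applied to a linked analysis file with pandas is shown below. The column names, code lists, and file name are placeholders invented for illustration; they are not the study's actual variables or ICD code sets.

# Hypothetical sketch of the three-part computerized case definition applied
# to a linked deaths table. All column names and code lists are illustrative
# assumptions, not the study's actual data dictionary.
import pandas as pd

deaths = pd.read_csv("linked_deaths.csv")            # hypothetical linked file

scd_codes = {"410", "427.5", "I46.1", "I49.0"}       # placeholder cause-of-death codes
excluded_procedures = {"PROLONGED_CPR", "ECMO"}      # placeholder terminal procedures

candidate = deaths[
    (deaths["terminal_inpatient_stay"] == 0)                     # (1) no terminal admission/nursing home stay
    & (deaths["underlying_cause"].isin(scd_codes))               # (2) cause of death consistent with SCD
    & (~deaths["terminal_procedure"].isin(excluded_procedures))  # (3) no procedures inconsistent with unresuscitated arrest
]
print(f"{len(candidate)} potential sudden cardiac deaths")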
Definition Of Touch-Sensitive Zones For Graphical Displays
NASA Technical Reports Server (NTRS)
Monroe, Burt L., III; Jones, Denise R.
1988-01-01
Touch zones defined simply by touching, while editing done automatically. Development of touch-screen interactive computing system, tedious task. Interactive Editor for Definition of Touch-Sensitive Zones computer program increases efficiency of human/machine communications by enabling user to define each zone interactively, minimizing redundancy in programming and eliminating need for manual computation of boundaries of touch areas. Information produced during editing process written to data file, to which access gained when needed by application program.
Lefkoff, L.J.; Gorelick, S.M.
1987-01-01
A FORTRAN-77 computer program that helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The computer program creates the input files to be used by the optimization program. These files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the computer program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the computer program in conjunction with the response matrix optimization method. A unit stress is applied at each decision well and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)
User's Guide for SHIPINT - A Computer Program to Compute Two Ship Interaction in Waves
1996-08-01
Contractor report (document 500693, unclassified), Defence Research Establishment Atlantic, Halifax, Nova Scotia, Canada B3J 2X4. Contents: Summary; 1 Introduction; 2 Coordinate Systems and Two Ship Motions; 3 Flow Chart; 4 Input Data File Description; 4.1 shipint.in; 4.2 paneLa.in
ERDC MSRC Resource. High Performance Computing for the Warfighter. Spring 2006
2006-01-01
... named Ruby, and the HP/Compaq SC45, named Emerald, continue to add their unique sparkle to the ERDC MSRC computer infrastructure. ERDC invited the ... configuration on B-52H ... purchased additional memory for the login nodes so that this part of the solution process could be done as a preprocessing step. On ... application and system services. Of the service nodes, 10 are login nodes and 23 are input/output (I/O) server nodes for the Lustre file system (i.e., the ...
Teaching "Filing Rules"--Via Computer-Aided Instruction.
ERIC Educational Resources Information Center
Agneberg, Craig
A computer software package has been developed to teach and test students on the Rules for Alphabetical Filing of the Association of Records Managers and Administrators (ARMA). The following computer assisted instruction principles were used in developing the program: gaining attention, stating objectives, providing direction, reviewing…
The new CMS DAQ system for run-2 of the LHC
Bawej, Tomasz; Behrens, Ulf; Branson, James; ...
2015-05-21
The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: Firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts. Secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for a reliable transport between custom electronics and commercial computing hardware. A Clos network based on 56 Gb/s FDR Infiniband has been chosen for the event builder with a throughput of ~ 4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT accepted events and monitoring meta-data are stored into a global file system. As a result, this paper presents the requirements, technical choices, and performance of the new system.
Comprehensive analysis of a Radiology Operations Management computer system.
Arenson, R L; London, J W
1979-11-01
The Radiology Operations Management computer system at the Hospital of the University of Pennsylvania is discussed. The scheduling and file room modules are based on the system at Massachusetts General Hospital. Patient delays are indicated by the patient tracking module. A reporting module allows CRT/keyboard entry by transcriptionists, entry of standard reports by radiologists using bar code labels, and entry by radiologists using a specially designed diagnostic reporting terminal. Time-flow analyses demonstrate a significant improvement in scheduling, patient waiting, retrieval of radiographs, and report delivery. Recovery of previously lost billing contributes to the proven cost-effectiveness of this system.
Fast probabilistic file fingerprinting for big data
2013-01-01
Background Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. Results We present an efficient method for calculating file uniqueness for large scientific data files, that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Conclusions Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff. PMID:23445565
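The core idea, hashing a fixed number of pseudo-randomly sampled regions so that cost does not grow with file size, can be sketched in a few lines of Python. This is a simplified illustration under assumed parameters (sample count, chunk size, seed handling), not the published pfff algorithm or its command-line tool.

# Minimal sketch of probabilistic file fingerprinting: hash a fixed number of
# pseudo-randomly chosen byte samples instead of the whole file, so the cost
# is independent of file size. Parameters and file name are assumptions.
import hashlib
import os
import random

def probabilistic_fingerprint(path, samples=1024, chunk=64, seed=0):
    size = os.path.getsize(path)
    rng = random.Random(seed)                 # fixed seed -> comparable fingerprints
    digest = hashlib.sha256()
    digest.update(str(size).encode())         # mix the file size into the digest
    with open(path, "rb") as f:
        for _ in range(samples):
            offset = rng.randrange(max(size - chunk, 1))
            f.seek(offset)
            digest.update(f.read(chunk))      # hash a small chunk at each sampled offset
    return digest.hexdigest()

print(probabilistic_fingerprint("large_dataset.bam"))   # hypothetical file

Because the sampling is seeded deterministically, identical files always yield identical fingerprints; two differing files collide only if every sampled region happens to match, which the authors argue is negligibly likely for high-variation biological data.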
Al JABBARI, Youssef S.; TSAKIRIDIS, Peter; ELIADES, George; AL-HADLAQ, Solaiman M.; ZINELIS, Spiros
2012-01-01
Objective The aim of this study was to quantify the surface area, volume and specific surface area of endodontic files employing quantitative X-ray micro computed tomography (mXCT). Material and Methods Three sets (six files each) of the Flex-Master Ni-Ti system (Nº 20, 25 and 30, taper .04) were utilized in this study. The files were scanned by mXCT. The surface area and volume of all files were determined from the cutting tip up to 16 mm. The data from the surface area, volume and specific area were statistically evaluated using the one-way ANOVA and SNK multiple comparison tests at α=0.05, employing the file size as a discriminating variable. The correlation between the surface area and volume with nominal ISO sizes were tested employing linear regression analysis. Results The surface area and volume of Nº 30 files showed the highest value followed by Nº 25 and Nº 20 and the differences were statistically significant. The Nº 20 files showed a significantly higher specific surface area compared to Nº 25 and Nº 30. The increase in surface and volume towards higher file sizes follows a linear relationship with the nominal ISO sizes (r2=0.930 for surface area and r2=0.974 for volume respectively). Results indicated that the surface area and volume demonstrated an almost linear increase while the specific surface area exhibited an abrupt decrease towards higher sizes. Conclusions This study demonstrates that mXCT can be effectively applied to discriminate very small differences in the geometrical features of endodontic micro-instruments, while providing quantitative information for their geometrical properties. PMID:23329248
A Data Handling System for Modern and Future Fermilab Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Illingworth, R. A.
2014-01-01
Current and future Fermilab experiments such as Minerva, NOνA, and MicroBoone are now using an improved version of the Fermilab SAM data handling system. SAM was originally used by the CDF and D0 experiments for Run II of the Fermilab Tevatron to provide file metadata and location cataloguing, uploading of new files to tape storage, dataset management, file transfers between global processing sites, and processing history tracking. However, SAM was heavily tailored to the Run II environment and required complex and hard-to-deploy client software, which made it hard to adapt to new experiments. The Fermilab Computing Sector has progressively updated SAM to use modern, standardized technologies in order to more easily deploy it for current and upcoming Fermilab experiments, and to support the data preservation efforts of the Run II experiments.
Inertial Manifolds for Navier-Stokes Equations and Related Dynamical Systems
1991-05-31
... Graphics IRIS (SGI). The RLE files for the animation are loaded to an Abekas and recorded to tape by Betacam. This computational work was done by using the ... scripts and comments, are loaded to the Abekas-A60 digital image storage device, and then recorded to the Betacam BVW-75 analog tape recorder. Static ... interfacing, huge data files are output to the Data Vault in parallel with little cost. In addition to the SGIs, Abekas, Betacam and Solitaire, the ...
Environment Behavior Models for Real-Time Reactive System Testing Automation
2006-09-01
Thesis by Muharrem Ugur Aksu, Naval Postgraduate School, Computer Science Department, September 2006. Co-Advisors: Mikhail Auguston and Man-Tak Shing. Approved for public release; distribution is unlimited. ... Date: 20 July 2006. File Name: Shuttle.cpp
NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations
NASA Astrophysics Data System (ADS)
Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.
2010-09-01
The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance. Program summaryProgram title: NWChem Catalogue identifier: AEGI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Open Source Educational Community License No. of lines in distributed program, including test data, etc.: 11 709 543 No. of bytes in distributed program, including test data, etc.: 680 696 106 Distribution format: tar.gz Programming language: Fortran 77, C Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines Operating system: Linux, OS X, Windows Has the code been vectorised or parallelized?: Code is parallelized Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13 Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of many-electron Hamiltonian, analysis of the potential energy surface, and dynamics. Solution method: Ground and excited solutions of many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions or a combination thereof with classical descriptions are then used to analyze potential energy surface and perform dynamical simulations. Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, complexity of the method, number of cpu's and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulation on hundreds of atoms.
TFaNS-Tone Fan Noise Design/Prediction System: Users' Manual TFaNS Version 1.5
NASA Technical Reports Server (NTRS)
Topol, David A.; Huff, Dennis L. (Technical Monitor)
2003-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Glenn. The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. The first version of this design system was developed under a previous NASA contract. Several improvements have been made to TFaNS. This users' manual shows how to run this new system. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; the CUP3D Fan Noise Coupling Code, which reads these files, solves the coupling problem, and outputs the desired noise predictions; and the AWAKEN CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This report provides information on code input and file structure essential for potential users of TFaNS.
Umeda, Akira; Iwata, Yasushi; Okada, Yasumasa; Shimada, Megumi; Baba, Akiyasu; Minatogawa, Yasuyuki; Yamada, Takayasu; Chino, Masao; Watanabe, Takafumi; Akaishi, Makoto
2004-12-01
The high cost of digital echocardiographs and the large size of data files hinder the adoption of remote diagnosis of digitized echocardiography data. We have developed a low-cost digital filing system for echocardiography data. In this system, data from a conventional analog echocardiograph are captured using a personal computer (PC) equipped with an analog-to-digital converter board. Motion picture data are promptly compressed using a moving pictures expert group (MPEG) 4 codec. The digitized data with preliminary reports obtained in a rural hospital are then sent to cardiologists at distant urban general hospitals via the internet. The cardiologists can evaluate the data using widely available movie-viewing software (Windows Media Player). The diagnostic accuracy of this double-check system was confirmed by comparison with ordinary super-VHS videotapes. We have demonstrated that digitization of echocardiography data from a conventional analog echocardiograph and MPEG 4 compression can be performed using an ordinary PC-based system, and that this system enables highly efficient digital storage and remote diagnosis at low cost.
Code of Federal Regulations, 2013 CFR
2013-07-01
Section 966.6: Filing of documents; computation of time; representation of parties (Postal Service, United States Postal Service). (a) Filing. All documents required under this part must be filed by ... Filing hours are between 8:45 a.m. and 4:45 p.m., eastern standard or daylight saving time as appropriate ...
Code of Federal Regulations, 2014 CFR
2014-07-01
Section 966.6: Filing of documents; computation of time; representation of parties (Postal Service, United States Postal Service). (a) Filing. All documents required under this part must be filed by ... Filing hours are between 8:45 a.m. and 4:45 p.m., eastern standard or daylight saving time as appropriate ...
High performance data transfer
NASA Astrophysics Data System (ADS)
Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.
2017-10-01
The exponentially increasing need for high-speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high-performance data transfer and encryption in a scalable, balanced way that is easy to deploy and use, while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved between clusters almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000-mile 100 Gbps link.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ostlund, Neil
This research showed the feasibility of applying the concepts of the Semantic Web to Computational Chemistry. We have created the first web portal (www.chemsem.com) that allows data created in quantum chemistry calculations, and other such chemistry calculations, to be placed on the web in a way that makes the data accessible to scientists in a semantic form never before possible. The semantic web nature of the portal allows data to be searched, found, and used as an advance over the usual approach of a relational database. The semantic data on our portal has the nature of a Giant Global Graph (GGG) that can be easily merged with related data and searched globally via the SPARQL Protocol and RDF Query Language (SPARQL), which makes global searches for data easier than with traditional methods. Our Semantic Web Portal requires that the data be understood by a computer and hence defined by an ontology (vocabulary). This ontology is used by the computer in understanding the data. We have created such an ontology for computational chemistry (purl.org/gc) that encapsulates a broad knowledge of the field of computational chemistry. We refer to this ontology as the Gainesville Core. While it is perhaps the first ontology for computational chemistry and is used by our portal, it is only a start of what must be a long multi-partner effort to define computational chemistry. In conjunction with the above efforts, we have defined a new potential file standard (Common Standard for eXchange – CSX – for computational chemistry data). This CSX file is the precursor of data in the Resource Description Framework (RDF) form that the semantic web requires. Our portal translates CSX files (as well as other computational chemistry data files) into RDF files that are part of the graph database that the semantic web employs. We propose a CSX file as a convenient way to encapsulate computational chemistry data.
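To make the querying idea concrete, the sketch below loads an RDF export into rdflib and runs a SPARQL query over it. The file name, the gc: namespace URI, and the property names are assumptions for illustration; they are not necessarily the actual Gainesville Core terms or the layout of the data on the chemsem.com portal.

# Illustrative sketch only: load an RDF graph of computational-chemistry
# results and query it with SPARQL via rdflib. Namespace and property names
# (e.g. gc:hasEnergy) are assumptions, not confirmed Gainesville Core terms.
import rdflib

g = rdflib.Graph()
g.parse("calculation_results.rdf")        # hypothetical RDF export of a calculation

query = """
PREFIX gc: <http://purl.org/gc/>
SELECT ?molecule ?energy
WHERE {
    ?calc gc:hasMolecule ?molecule ;
          gc:hasEnergy   ?energy .
}
"""
for molecule, energy in g.query(query):
    print(molecule, energy)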
NASA ARCH- A FILE ARCHIVAL SYSTEM FOR THE DEC VAX
NASA Technical Reports Server (NTRS)
Scott, P. J.
1994-01-01
The function of the NASA ARCH system is to provide a permanent storage area for files that are infrequently accessed. The NASA ARCH routines were designed to provide a simple mechanism by which users can easily store and retrieve files. The user treats NASA ARCH as the interface to a black box where files are stored. There are only five NASA ARCH user commands, even though NASA ARCH employs standard VMS directives and the VAX BACKUP utility. Special care is taken to provide the security needed to ensure file integrity over a period of years. The archived files may exist in any of three storage areas: a temporary buffer, the main buffer, and a magnetic tape library. When the main buffer fills up, it is transferred to permanent magnetic tape storage and deleted from disk. Files may be restored from any of the three storage areas. A single file, multiple files, or entire directories can be stored and retrieved. Archived entities retain the same name, extension, version number, and VMS file protection scheme as they had in the user's account prior to archival. NASA ARCH is capable of handling up to 7 directory levels. Wildcards are supported. User commands include TEMPCOPY, DISKCOPY, DELETE, RESTORE, and DIRECTORY. The DIRECTORY command searches a directory of savesets covering all three archival areas, listing matches according to area, date, filename, or other criteria supplied by the user. The system manager commands include 1) ARCHIVE - to transfer the main buffer to duplicate magnetic tapes, 2) REPORT - to determine when the main buffer is full enough to archive, 3) INCREMENT - to back up the partially filled main buffer, and 4) FULLBACKUP - to back up the entire main buffer. On-line help files are provided for all NASA ARCH commands. NASA ARCH is written in DEC VAX DCL for interactive execution and has been implemented on a DEC VAX computer operating under VMS 4.X. This program was developed in 1985.
Fortran Program for X-Ray Photoelectron Spectroscopy Data Reformatting
NASA Technical Reports Server (NTRS)
Abel, Phillip B.
1989-01-01
A FORTRAN program has been written for use on an IBM PC/XT, AT, or compatible microcomputer (personal computer, PC) that converts a column of ASCII-format numbers into a binary-format file suitable for interactive analysis on a Digital Equipment Corporation (DEC) computer running the VGS-5000 Enhanced Data Processing (EDP) software package. The incompatible floating-point number representations of the two computers were compared, and a subroutine was created to store floating-point numbers on the IBM PC in a form that can be directly read by the DEC computer. Any file transfer protocol having provision for binary data can be used to transmit the resulting file from the PC to the DEC machine. The data file header required by the EDP programs for an x-ray photoelectron spectrum is also written to the file. The user is prompted for the relevant experimental parameters, which are then properly coded into the format used internally by all of the VGS-5000 series EDP packages.
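The general ASCII-to-binary conversion idea can be sketched in Python (rather than the original FORTRAN) as follows: read a column of ASCII numbers, then write a small header followed by packed binary floating-point values. The header layout, field sizes, and file names are illustrative assumptions; the real program wrote the specific header and VAX floating-point format expected by the VGS-5000 EDP software, which is not reproduced here.

# Minimal sketch, under assumed header layout and file names: convert a
# column of ASCII numbers into a binary file of 32-bit floats with a short
# text header and a point count. Not the actual VGS-5000 EDP format.
import struct

def ascii_to_binary(ascii_path, binary_path, comment="XPS spectrum"):
    with open(ascii_path) as src:
        values = [float(line) for line in src if line.strip()]
    with open(binary_path, "wb") as dst:
        dst.write(comment.encode().ljust(64, b"\0"))          # fixed-width text header
        dst.write(struct.pack("<i", len(values)))             # point count, little-endian
        dst.write(struct.pack(f"<{len(values)}f", *values))   # packed 32-bit floats

ascii_to_binary("spectrum.txt", "spectrum.bin")                # hypothetical file names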
Turbomachinery Forced Response Prediction System (FREPS): User's Manual
NASA Technical Reports Server (NTRS)
Morel, M. R.; Murthy, D. V.
1994-01-01
The turbomachinery forced response prediction system (FREPS), version 1.2, is capable of predicting the aeroelastic behavior of axial-flow turbomachinery blades. This document is meant to serve as a guide in the use of the FREPS code with specific emphasis on its use at NASA Lewis Research Center (LeRC). A detailed explanation of the aeroelastic analysis and its development is beyond the scope of this document, and may be found in the references. FREPS has been developed by the NASA LeRC Structural Dynamics Branch. The manual is divided into three major parts: an introduction, the preparation of input, and the procedure to execute FREPS. Part 1 includes a brief background on the necessity of FREPS, a description of the FREPS system, the steps needed to be taken before FREPS is executed, an example input file with instructions, presentation of the geometric conventions used, and the input/output files employed and produced by FREPS. Part 2 contains a detailed description of the command names needed to create the primary input file that is required to execute the FREPS code. Also, Part 2 has an example data file to aid the user in creating their own input files. Part 3 explains the procedures required to execute the FREPS code on the Cray Y-MP, a computer system available at the NASA LeRC.
Management Tools for the 80's.
ERIC Educational Resources Information Center
Seivert, Dick; Thomas, Frank B.
Two functions necessary for managing a computer center (organizing and controlling) are discussed with a focus on a nomenclature that was found to be useful for identifying parts of a system and was compatible with the definitions of most operational systems. For example, it is necessary to identify files, reports, and programs used in daily…
Library Information System Time-Sharing (LISTS) Project. Final Report.
ERIC Educational Resources Information Center
Black, Donald V.
The Library Information System Time-Sharing (LISTS) experiment was based on three innovations in data processing technology: (1) the advent of computer time-sharing on third-generation machines, (2) the development of general-purpose file-management software and (3) the introduction of large, library-oriented data bases. The main body of the…
Logical Aspects of Question-Answering by Computer.
ERIC Educational Resources Information Center
Kuhns, J. L.
The problem of computerized question-answering is discussed in this paper from the point of view of certain technical, although elementary, notions of logic. Although the work reported herein has general application to the design of information systems, it is specifically motivated by the RAND Relational Data File. This system, for which a…
Peregrine System | High-Performance Computing | NREL
... and longer-term (/projects) storage. These file systems are mounted on all nodes. Peregrine has three ... E5-2670 Xeon processors and 64 GB of memory. In addition to mounting the /home, /nopt, and /projects file systems ... (the original page's node-configuration table, listing cores/node, memory/node, and peak double-precision performance per node for the Intel Xeon E5-2670 "Sandy Bridge" nodes, did not survive extraction).
Inside EUREKA. The California Career Information System.
ERIC Educational Resources Information Center
Banaghan, Bill; And Others
A computerized career information system named EUREKA has been developed for California. It originated in 1975-76 under the direction of the Bay Area Computer Educators and since that time has received state and VEA funding. It consists of two major components, Quest and information files. Quest asks users twenty-one questions in order to…
Processing EOS MLS Level-2 Data
NASA Technical Reports Server (NTRS)
Snyder, W. Van; Wu, Dong; Read, William; Jiang, Jonathan; Wagner, Paul; Livesey, Nathaniel; Schwartz, Michael; Filipiak, Mark; Pumphrey, Hugh; Shippony, Zvi
2006-01-01
A computer program performs level-2 processing of thermal-microwave-radiance data from observations of the limb of the Earth by the Earth Observing System (EOS) Microwave Limb Sounder (MLS). The purpose of the processing is to estimate the composition and temperature of the atmosphere versus altitude from about 8 to 90 km. "Level-2" as used here is a specialists' term signifying both vertical profiles of geophysical parameters along the measurement track of the instrument and processing performed by this or other software to generate such profiles. Designed to be flexible, the program is controlled via a configuration file that defines all aspects of processing, including contents of state and measurement vectors, configurations of forward models, measurement and calibration data to be read, and the manner of inverting the models to obtain the desired estimates. The program can operate in a parallel form in which one instance of the program acts as a master, coordinating the work of multiple slave instances on a cluster of computers, each slave operating on a portion of the data. Optionally, the configuration file can be made to instruct the software to produce files of simulated radiances based on state vectors formed from sets of geophysical data-product files taken as input.
[In vitro study of shaping ability of single-file techniques in curved canals].
Zeng, Yajie; Gu, Lisha; Cai, Yanling; Chen, Dian; Wei, Xi
2014-11-01
To compare the shaping quality in curved canals of two single-file technique systems with two other traditional full-sequential systems. Eighty mature molar canals with curvatures between 20 and 45 degrees were randomly divided into four groups. Specimens in each group were prepared to size 25 at working length using A (Reciproc), B (OneShape), C (MTwo) and D (Revo S), respectively. Each canal was scanned by micro-computed tomography before and after preparation. Parameters including changes in dentine volume, percentage of uninstrumented area, and degree and tendency of transportation were analyzed. The operating time was also recorded. Over the full canal length, there was no difference in canal dentine removal, instrumented percentage, and transportation degree among the four groups (P > 0.05). In the apical 4 mm region, group A removed more dentine [(2.14±0.76) mm² of canal surface area and (0.38 ± 0.15) mm³ of canal volume] than groups B and C (P < 0.05). At the 1 mm level, the median transportation degree of group A was 0.05 (0.03) mm, which was smaller than in the other groups (P < 0.05). Groups A and B took (86.3±24.6) s and (85.9±21.3) s, while groups C and D took (147.4±28.3) s and (126.3±27.7) s, respectively, to finish preparation. Single-file techniques were significantly faster than the two full-sequential systems (P < 0.01). Compared with the continuous rotary systems, the reciprocating single-file system A showed better apical shaping ability. Both single-file techniques were more efficient than full-sequential systems for curved canal preparation. Single-file techniques appear to be an effective and efficient method for curved canal preparation.
Using the Parallel Computing Toolbox with MATLAB on the Peregrine System
... parallel pool took %g seconds.\n', toc)
% "single program multiple data"
spmd
  fprintf('Worker %d says Hello World!\n', labindex)
end
delete(gcp); % close the parallel pool
exit

To run the script on a compute node, create the file helloWorld.sub:

#!/bin/bash
#PBS -l walltime=05:00
#PBS -l nodes=1
#PBS -N ...
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Experiences From NASA/Langley's DMSS Project
NASA Technical Reports Server (NTRS)
1996-01-01
There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at the NASA Langley Research Center (LaRC) has placed such a system into production use. This paper will present the experiences, both good and bad, we have had with this system since putting it into production usage. The system is comprised of: 1) National Storage Laboratory (NSL)/UniTree 2.1, 2) IBM 9570 HIPPI attached disk arrays (both RAID 3 and RAID 5), 3) IBM RS6000 server, 4) HIPPI/IPI3 third party transfers between the disk array systems and the supercomputer clients, a CRAY Y-MP and a CRAY 2, 5) a "warm spare" file server, 6) transition software to convert from CRAY's Data Migration Facility (DMF) based system to DMSS, 7) an NSC PS32 HIPPI switch, and 8) a STK 4490 robotic library accessed from the IBM RS6000 block mux interface. This paper will cover: the performance of the DMSS in the areas of file transfer rates, migration and recall, and file manipulation (listing, deleting, etc.); the appropriateness of a workstation class of file server for NSL/UniTree with LaRC's present storage requirements in mind; the role of the third party transfers between the supercomputers and the DMSS disk array systems; a detailed comparison (both in performance and functionality) between the DMF and DMSS systems; LaRC's enhancements to the NSL/UniTree system administration environment; the mechanism for DMSS to provide file server redundancy; the statistics on the availability of DMSS; and the design and experiences with the locally developed transparent transition software which allowed us to make over 1.5 million DMF files available to NSL/UniTree with minimal system outage.
NetpathXL - An Excel Interface to the Program NETPATH
Parkhurst, David L.; Charlton, Scott R.
2008-01-01
NetpathXL is a revised version of NETPATH that runs under Windows operating systems. NETPATH is a computer program that uses inverse geochemical modeling techniques to calculate net geochemical reactions that can account for changes in water composition between initial and final evolutionary waters in hydrologic systems. The inverse models also can account for the isotopic composition of waters and can be used to estimate radiocarbon ages of dissolved carbon in ground water. NETPATH relies on an auxiliary database program, DB, to enter the chemical analyses and to perform speciation calculations that define total concentrations of elements, charge balance, and redox state of aqueous solutions that are then used in inverse modeling. Instead of DB, NetpathXL relies on Microsoft Excel to enter the chemical analyses. The speciation calculation formerly included in DB is implemented within the program NetpathXL. A program DBXL can be used to translate files from the old DB format (.lon files) to NetpathXL spreadsheets, or to create new NetpathXL spreadsheets. Once users have a NetpathXL spreadsheet with the proper format, new spreadsheets can be generated by copying or saving NetpathXL spreadsheets. In addition, DBXL can convert NetpathXL spreadsheets to PHREEQC input files. New capabilities in PHREEQC (version 2.15) allow solution compositions to be written to a .lon file, and inverse models developed in PHREEQC to be written as NetpathXL .pat and model files. NetpathXL can open NetpathXL spreadsheets, NETPATH-format path files (.pat files), and NetpathXL-format path files (.pat files). Once the speciation calculations have been performed on a spreadsheet file or a .pat file has been opened, the NetpathXL calculation engine is identical to the original NETPATH. Development of models and viewing results in NetpathXL rely on keyboard entry as in NETPATH.
Cyber Contingency Analysis version 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
A contingency-analysis-based approach for quantifying and examining the resiliency of a cyber system with respect to confidentiality, integrity, and availability. A graph representing an organization's cyber system and related resources is used for the availability contingency analysis. The mission-critical paths associated with an organization are used to determine the consequences of a potential contingency. A node (or combination of nodes) is removed from the graph to analyze a particular contingency. The values of all mission-critical paths that are disrupted by that contingency are used to quantify its severity. A total severity score can be calculated based on the complete list of all these contingencies. A simple n-1 analysis can be done in which only one node is removed at a time. We can also compute an n-k analysis, where k is the number of nodes to remove simultaneously for analysis. A contingency risk score can also be computed, which takes the probability of the contingencies into account. In addition to availability, we can also quantify confidentiality and integrity scores for the system. These treat user accounts as potential contingencies. The amount (and type) of files that an account can read is used to compute the confidentiality score. The amount (and type) of files that an account can write is used to compute the integrity score. As with availability analysis, we can use this information to compute total severity scores in regard to confidentiality and integrity. We can also take probability into account to compute associated risk scores.
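A hypothetical sketch of the n-1 and n-k availability analysis described above, using the networkx package: each contingency removes one or more nodes from the graph, and its severity is the summed value of the mission-critical paths that the removal disrupts. The example graph, path list, and values are invented for illustration and are not the tool's actual data model.

# Hypothetical sketch of n-1 / n-k availability contingency analysis with
# networkx. Node names, mission-critical paths, and values are assumptions.
import itertools
import networkx as nx

G = nx.Graph()
G.add_edges_from([("user", "web"), ("web", "app"), ("app", "db"), ("app", "backup_db")])

# Each mission-critical path is (source, target, value of the mission it supports).
critical_paths = [("user", "db", 10.0), ("user", "backup_db", 5.0)]

def contingency_severity(graph, removed_nodes):
    g = graph.copy()
    g.remove_nodes_from(removed_nodes)
    severity = 0.0
    for src, dst, value in critical_paths:
        if src not in g or dst not in g or not nx.has_path(g, src, dst):
            severity += value            # this mission path is disrupted
    return severity

# n-1 analysis: every single-node contingency.
for node in G.nodes:
    print(node, contingency_severity(G, [node]))

# n-k analysis (k = 2): every pair of simultaneous node failures.
for pair in itertools.combinations(G.nodes, 2):
    print(pair, contingency_severity(G, pair))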
A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system
NASA Astrophysics Data System (ADS)
Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.
2014-06-01
The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.
Parkhurst, David L.; Kipp, Kenneth L.; Engesgaard, Peter; Charlton, Scott R.
2004-01-01
The computer program PHAST simulates multi-component, reactive solute transport in three-dimensional saturated ground-water flow systems. PHAST is a versatile ground-water flow and solute-transport simulator with capabilities to model a wide range of equilibrium and kinetic geochemical reactions. The flow and transport calculations are based on a modified version of HST3D that is restricted to constant fluid density and constant temperature. The geochemical reactions are simulated with the geochemical model PHREEQC, which is embedded in PHAST. PHAST is applicable to the study of natural and contaminated ground-water systems at a variety of scales ranging from laboratory experiments to local and regional field scales. PHAST can be used in studies of migration of nutrients, inorganic and organic contaminants, and radionuclides; in projects such as aquifer storage and recovery or engineered remediation; and in investigations of the natural rock-water interactions in aquifers. PHAST is not appropriate for unsaturated-zone flow, multiphase flow, density-dependent flow, or waters with high ionic strengths. A variety of boundary conditions are available in PHAST to simulate flow and transport, including specified-head, flux, and leaky conditions, as well as the special cases of rivers and wells. Chemical reactions in PHAST include (1) homogeneous equilibria using an ion-association thermodynamic model; (2) heterogeneous equilibria between the aqueous solution and minerals, gases, surface complexation sites, ion exchange sites, and solid solutions; and (3) kinetic reactions with rates that are a function of solution composition. The aqueous model (elements, chemical reactions, and equilibrium constants), minerals, gases, exchangers, surfaces, and rate expressions may be defined or modified by the user. A number of options are available to save results of simulations to output files. The data may be saved in three formats: a format suitable for viewing with a text editor; a format suitable for exporting to spreadsheets and post-processing programs; or in Hierarchical Data Format (HDF), which is a compressed binary format. Data in the HDF file can be visualized on Windows computers with the program Model Viewer and extracted with the utility program PHASTHDF; both programs are distributed with PHAST. Operator splitting of the flow, transport, and geochemical equations is used to separate the three processes into three sequential calculations. No iterations between transport and reaction calculations are implemented. A three-dimensional Cartesian coordinate system and finite-difference techniques are used for the spatial and temporal discretization of the flow and transport equations. The non-linear chemical equilibrium equations are solved by a Newton-Raphson method, and the kinetic reaction equations are solved by a Runge-Kutta or an implicit method for integrating ordinary differential equations. The PHAST simulator may require large amounts of memory and long Central Processing Unit (CPU) times. To reduce the long CPU times, a parallel version of PHAST has been developed that runs on a multiprocessor computer or on a collection of computers that are networked. The parallel version requires Message Passing Interface, which is currently (2004) freely available. The parallel version is effective in reducing simulation times. This report documents the use of the PHAST simulator, including running the simulator, preparing the input files, selecting the output files, and visualizing the results. 
It also presents four examples that verify the numerical method and demonstrate the capabilities of the simulator. PHAST requires three input files. Only the flow and transport file is described in detail in this report. The other two files, the chemistry data file and the database file, are identical to PHREEQC files and the detailed description of these files is found in the PHREEQC documentation.
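As a rough illustration of the operator-splitting scheme described above (transport and chemistry advanced as separate, sequential calculations within each time step), the following minimal Python sketch performs an explicit upwind advection step and then a kinetic reaction step on each cell. The grid size, velocity, and first-order rate law are placeholder assumptions, not values or methods taken from PHAST itself.

    import numpy as np

    def transport_step(c, velocity, dx, dt):
        # Explicit upwind advection of a single-component concentration field (1-D grid).
        cn = c.copy()
        cn[1:] -= velocity * dt / dx * (c[1:] - c[:-1])
        return cn

    def reaction_step(c, rate_constant, dt):
        # Kinetic reaction integrated cell by cell; a simple first-order decay stands in
        # for the Newton-Raphson / ODE chemistry that PHREEQC solves inside PHAST.
        return c * np.exp(-rate_constant * dt)

    c = np.zeros(100)
    c[0] = 1.0                                   # solute pulse at the inflow boundary
    for _ in range(200):                         # split: transport first, then reaction
        c = transport_step(c, velocity=1.0, dx=1.0, dt=0.5)
        c = reaction_step(c, rate_constant=0.01, dt=0.5)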
A framework for development of an intelligent system for design and manufacturing of stamping dies
NASA Astrophysics Data System (ADS)
Hussein, H. M. A.; Kumar, S.
2014-07-01
An integration of computer-aided design (CAD), computer-aided process planning (CAPP) and computer-aided manufacturing (CAM) is required for the development of an intelligent system to design and manufacture stamping dies in the sheet metal industry. In this paper, a framework for the development of an intelligent system for design and manufacturing of stamping dies is proposed. In the proposed framework, the intelligent system is structured in the form of various expert system modules for the different activities of die design and manufacturing, and all modules are integrated with each other. The proposed system takes its input in the form of a CAD file of the sheet metal part, and the system modules then automate all tasks related to the design and manufacturing of stamping dies. The modules are coded in Visual Basic (VB) and developed on the AutoCAD platform.
1985-08-01
...from the mainframe to the terminals is approximately 56k bits per second (21:3). Score: 8. Expandability. The number of terminals available to the ... the systems controllers may access any files. For modem link-up, a callback system is to be implemented to prevent unauthorized off-post access (10:2...
Virus Alert: Ten Steps to Safe Computing.
ERIC Educational Resources Information Center
Gunter, Glenda A.
1997-01-01
Discusses computer viruses and explains how to detect them; discusses virus protection and the need to update antivirus software; and offers 10 safe computing tips, including scanning floppy disks and commercial software, how to safely download files from the Internet, avoiding pirated software copies, and backing up files. (LRW)
DIRAC File Replica and Metadata Catalog
NASA Astrophysics Data System (ADS)
Tsaregorodtsev, A.; Poss, S.
2012-12-01
File replica and metadata catalogs are essential parts of any distributed data management system, and they largely determine its functionality and performance. A new File Catalog (DFC) was developed in the framework of the DIRAC Project that combines both replica and metadata catalog functionality. The DFC design is based on the practical experience with the data management system of the LHCb Collaboration. It is optimized for the most common patterns of catalog usage in order to achieve maximum performance from the user perspective. The DFC supports bulk operations for replica queries and allows quick analysis of the storage usage, both globally and for each Storage Element separately. It supports flexible ACL rules with plug-ins for the various policies that can be adopted by a particular community. The DFC allows various types of metadata to be stored with files and directories and supports efficient queries for data based on complex metadata combinations. Definition of file ancestor-descendant relation chains is also possible. The DFC is implemented in the general DIRAC distributed computing framework following the standard grid security architecture. In this paper we describe the design of the DFC and its implementation details. The performance measurements are compared with other grid file catalog implementations. The experience of using the DFC in the CLIC detector project is discussed.
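The combined replica/metadata model described above can be pictured with a small, purely illustrative sketch; the catalog structure, field names, and query helper below are invented for this example and are not the DIRAC DFC API.

    # Toy catalog: each logical file name (LFN) carries its replicas and free-form metadata.
    catalog = {
        "/vo/data/run1234/file001": {
            "replicas": ["SE-A", "SE-B"],
            "metadata": {"run": 1234, "type": "RAW", "year": 2012},
        },
        "/vo/data/run1235/file007": {
            "replicas": ["SE-A"],
            "metadata": {"run": 1235, "type": "DST", "year": 2012},
        },
    }

    def find_files(conditions):
        # Return LFNs whose metadata satisfy every (key, predicate) condition.
        return [lfn for lfn, entry in catalog.items()
                if all(pred(entry["metadata"].get(key)) for key, pred in conditions)]

    # A query over a combination of metadata, e.g. all RAW files recorded in 2012.
    hits = find_files([("year", lambda v: v == 2012), ("type", lambda v: v == "RAW")])
    print(hits)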
ChemEngine: harvesting 3D chemical structures of supplementary data from PDF files.
Karthikeyan, Muthukumarasamy; Vyas, Renu
2016-01-01
Digital access to chemical journals has resulted in a vast array of molecular information that is now available in supplementary material files in PDF format. However, extracting this molecular information from the PDF document format is a daunting task. Here we present an approach to harvest 3D molecular data from the supporting information of scientific research articles that are normally available from the publisher's resources. In order to demonstrate the feasibility of extracting truly computable molecules from PDF file formats in a fast and efficient manner, we have developed a Java-based application, namely ChemEngine. This program recognizes textual patterns from the supplementary data and generates standard molecular structure data (bond matrix, atomic coordinates) that can be subjected to a multitude of computational processes automatically. The methodology has been demonstrated via several case studies on different formats of coordinate data stored in supplementary information files, wherein ChemEngine selectively harvested the atomic coordinates and interpreted them as molecules with high accuracy. The reusability of the extracted molecular coordinate data was demonstrated by computing single point energies that were in close agreement with the original computed data provided with the articles. It is envisaged that the methodology will enable large-scale conversion of molecular information from supplementary files available in PDF format into a collection of ready-to-compute molecular data to create an automated workflow for advanced computational processes. The software, along with source code and instructions, is available at https://sourceforge.net/projects/chemengine/files/?source=navbar.
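The pattern-recognition step described above, turning lines of extracted text into atomic coordinates, can be illustrated with a minimal sketch; the regular expression and the sample text are placeholders rather than ChemEngine's actual patterns.

    import re

    # Matches lines such as "C   0.0000   1.3990  -0.7120" (element symbol plus three floats).
    ATOM_LINE = re.compile(
        r"^\s*([A-Z][a-z]?)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s*$")

    def harvest_coordinates(text):
        # Collect (element, x, y, z) tuples from free text extracted from a PDF.
        atoms = []
        for line in text.splitlines():
            m = ATOM_LINE.match(line)
            if m:
                element = m.group(1)
                x, y, z = (float(v) for v in m.group(2, 3, 4))
                atoms.append((element, x, y, z))
        return atoms

    sample = "Optimized geometry:\nC  0.0000  1.3990 -0.7120\nH  0.0000  2.4850 -0.7120\n"
    print(harvest_coordinates(sample))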
ERIC Educational Resources Information Center
Aaron, Lynn S.; Roche, Catherine M.
2012-01-01
"Cloud computing" refers to the use of computing resources on the Internet instead of on individual personal computers. The field is expanding and has significant potential value for educators. This is discussed with a focus on four main functions: file storage, file synchronization, document creation, and collaboration--each of which has…
Code of Federal Regulations, 2012 CFR
2012-04-01
....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...
Code of Federal Regulations, 2010 CFR
2010-04-01
....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...
Code of Federal Regulations, 2014 CFR
2014-04-01
....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...
Code of Federal Regulations, 2011 CFR
2011-04-01
....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...
Code of Federal Regulations, 2013 CFR
2013-04-01
....223 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES COLLECTION, MAINTENANCE, USE, AND DISSEMINATION OF RECORDS OF IDENTIFIABLE PERSONAL... the Commission's systems of records on magnetic tape or disks, or computer files, copies of the...
Rapid Prototyping Enters Mainstream Manufacturing.
ERIC Educational Resources Information Center
Winek, Gary
1996-01-01
Explains rapid prototyping, a process that uses computer-assisted design files to create a three-dimensional object automatically, speeding the industrial design process. Five commercially available systems and two emerging types--the 3-D printing process and repetitive masking and depositing--are described. (SK)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-24
... models underlying STANS, time series of proportional changes in implied volatilities for a range of... computer systems used by OCC to calculate daily margin requirements. OCC has proposed at this time to clear...
Active Storage with Analytics Capabilities and I/O Runtime System for Petascale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, Alok
Computational scientists must understand results from experimental, observational and computational simulation generated data to gain insights and perform knowledge discovery. As systems approach the petascale range, problems that were unimaginable a few years ago are within reach. With the increasing volume and complexity of data produced by ultra-scale simulations and high-throughput experiments, understanding the science is largely hampered by the lack of comprehensive I/O, storage, acceleration of data manipulation, analysis, and mining tools. Scientists require techniques, tools and infrastructure to facilitate better understanding of their data, in particular the ability to effectively perform complex data analysis, statistical analysis and knowledge discovery. The goal of this work is to enable more effective analysis of scientific datasets through the integration of enhancements in the I/O stack, from active storage support at the file system layer to the MPI-IO and high-level I/O library layers. We propose to provide software components to accelerate data analytics, mining, I/O, and knowledge discovery for large-scale scientific applications, thereby increasing the productivity of both scientists and the systems. Our approaches include: 1) designing the interfaces in high-level I/O libraries, such as parallel netCDF, for applications to activate data mining operations at the lower I/O layers; 2) enhancing MPI-IO runtime systems to incorporate the functionality developed as part of the runtime system design; 3) developing parallel data mining programs as part of the runtime library for the server-side file system in the PVFS file system; and 4) prototyping an active storage cluster, which will utilize multicore CPUs, GPUs, and FPGAs to carry out the data mining workload.
1982-06-01
TABLE OF CONTENTS - APPENDIX: SCOPE OF WORK; B: MERGE AND COST PROGRAM DOCUMENTATION; C: FATSCO... PROGRAM TO COMPUTE TIME SERIES FREQUENCY RELATIONSHIPS; D: HEC-DSS - TIME SERIES DATA FILE MANAGEMENT SYSTEM; E: PLAN 1 - TIME SERIES DATA PLOTS AND ANNUAL... University of Minnesota, utilized an early version of the Hydrologic Engineering Center's (HEC) HEC-5C Computer Program. HEC is a Corps of Engineers...
EQS Goes R: Simulations for SEM Using the Package REQS
ERIC Educational Resources Information Center
Mair, Patrick; Wu, Eric; Bentler, Peter M.
2010-01-01
The REQS package is an interface between the R environment of statistical computing and the EQS software for structural equation modeling. The package consists of 3 main functions that read EQS script files and import the results into R, call EQS script files from R, and run EQS script files from R and import the results after EQS computations.…
Code of Federal Regulations, 2010 CFR
2010-07-01
... business hours are between 8:15 a.m. and 4:45 p.m., eastern standard or daylight saving time as appropriate...; computation of time; representation of parties. 966.6 Section 966.6 Postal Service UNITED STATES POSTAL... time; representation of parties. (a) Filing. All documents required under this part must be filed by...
Code of Federal Regulations, 2011 CFR
2011-07-01
... business hours are between 8:15 a.m. and 4:45 p.m., eastern standard or daylight saving time as appropriate...; computation of time; representation of parties. 966.6 Section 966.6 Postal Service UNITED STATES POSTAL... time; representation of parties. (a) Filing. All documents required under this part must be filed by...
Code of Federal Regulations, 2012 CFR
2012-07-01
... business hours are between 8:15 a.m. and 4:45 p.m., eastern standard or daylight saving time as appropriate...; computation of time; representation of parties. 966.6 Section 966.6 Postal Service UNITED STATES POSTAL... time; representation of parties. (a) Filing. All documents required under this part must be filed by...
Infrastructure for Training and Partnershipes: California Water and Coastal Ocean Resources
NASA Technical Reports Server (NTRS)
Siegel, David A.; Dozier, Jeffrey; Gautier, Catherine; Davis, Frank; Dickey, Tommy; Dunne, Thomas; Frew, James; Keller, Arturo; MacIntyre, Sally; Melack, John
2000-01-01
The purpose of this project was to advance the existing ICESS/Bren School computing infrastructure to allow scientists, students, and research trainees the opportunity to interact with environmental data and simulations in near-real time. Improvements made with the funding from this project have helped to strengthen the research efforts within both units, fostered graduate research training, and helped fortify partnerships with government and industry. With this funding, we were able to expand our computational environment in which computer resources, software, and data sets are shared by ICESS/Bren School faculty researchers in all areas of Earth system science. All of the graduate and undergraduate students associated with the Donald Bren School of Environmental Science and Management and the Institute for Computational Earth System Science have benefited from the infrastructure upgrades accomplished by this project. Additionally, the upgrades fostered a significant number of research projects (attached is a list of the projects that benefited from the upgrades). As originally proposed, funding for this project provided the following infrastructure upgrades: 1) a modern file management system capable of interoperating UNIX and NT file systems that can scale to 6.7 TB, 2) a Qualstar 40-slot tape library with two AIT tape drives and Legato Networker backup/archive software, 3) previously unavailable import/export capability for data sets on Zip, Jaz, DAT, 8mm, CD, and DLT media in addition to a 622 Mb/s Internet2 connection, 4) network switches capable of 100 Mbps to 128 desktop workstations, 5) a Portable Batch System (PBS) computational task scheduler, and 6) two Compaq/Digital Alpha XP1000 compute servers each with 1.5 GB of RAM along with an SGI Origin 2000 (purchased partially using funds from this project along with funding from various other sources) to be used for very large computations, as required for simulation of mesoscale meteorology or climate.
Template based parallel checkpointing in a massively parallel computer system
Archer, Charles Jens [Rochester, MN]; Inglett, Todd Alan [Rochester, MN]
2009-01-13
A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
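A minimal sketch of the rsync-like comparison described above: a node's checkpoint is split into blocks, each block's checksum is compared against the checksums of a previously stored template, and only the differing blocks are kept for transmission. The block size and hash function are illustrative assumptions, not details from the patent.

    import hashlib

    BLOCK_SIZE = 64 * 1024  # illustrative block size

    def block_checksums(data):
        # Checksum every fixed-size block of a checkpoint image.
        return [hashlib.sha1(data[i:i + BLOCK_SIZE]).hexdigest()
                for i in range(0, len(data), BLOCK_SIZE)]

    def delta_blocks(node_checkpoint, template_checksums):
        # Return (block index, block bytes) pairs for blocks that differ from the template.
        deltas = []
        for i in range(0, len(node_checkpoint), BLOCK_SIZE):
            block = node_checkpoint[i:i + BLOCK_SIZE]
            idx = i // BLOCK_SIZE
            if idx >= len(template_checksums) or \
               hashlib.sha1(block).hexdigest() != template_checksums[idx]:
                deltas.append((idx, block))
        return deltas

    template = block_checksums(b"A" * 300000)
    changed = delta_blocks(b"A" * 200000 + b"B" * 100000, template)
    print([idx for idx, _ in changed])   # only the differing blocks would be transmitted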
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: the vk4 file produced by the Keyence VK software, custom analysis, no off-the-shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data directly in MATLAB, binary output at the beginning of the height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image and laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.
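The slides describe locating image data through offsets stored in the file header; the Python sketch below shows that general pattern of reading an offset and seeking to the data it points at. The header layout, offset position, and dimension fields used here are hypothetical placeholders, not the actual vk4 format.

    import struct
    import numpy as np

    def read_height_image(path):
        # Illustrative binary reader: fetch an offset from a header, seek, read an image.
        with open(path, "rb") as f:
            header = f.read(64)                                       # assumed header size
            (height_offset,) = struct.unpack_from("<I", header, 32)   # little-endian uint32
            f.seek(height_offset)
            width, height = struct.unpack("<II", f.read(8))           # assumed dimension fields
            data = np.frombuffer(f.read(width * height * 4), dtype="<u4")
            return data.reshape(height, width)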
Data Management System for the National Energy-Water System (NEWS) Assessment Framework
NASA Astrophysics Data System (ADS)
Corsi, F.; Prousevitch, A.; Glidden, S.; Piasecki, M.; Celicourt, P.; Miara, A.; Fekete, B. M.; Vorosmarty, C. J.; Macknick, J.; Cohen, S. M.
2015-12-01
Aiming at providing a comprehensive assessment of the water-energy nexus, the National Energy-Water System (NEWS) project requires the integration of data to support a modeling framework that links climate, hydrological, power production, transmission, and economic models. Large amounts of georeferenced data have to be streamed to the components of the inter-disciplinary model to explore future challenges and tradeoffs in US power production, based on climate scenarios, power plant locations and technologies, available water resources, ecosystem sustainability, and economic demand. We used open source and in-house built software components to build a system that addresses two major data challenges: on-the-fly re-projection, re-gridding, interpolation, extrapolation, nodata patching, merging, and temporal and spatial aggregation of static and time series datasets in virtually any file format and file structure, and for any geographic extent, for the models' I/O directly at run time; and comprehensive data management based on metadata cataloguing and discovery in repositories utilizing the MAGIC Table (Manipulation and Geographic Inquiry Control database). This innovative concept allows models to access data on-the-fly by data ID, irrespective of file path, file structure, and file format, and regardless of its GIS specifications. In addition, a web-based information and computational system is being developed to control the I/O of spatially distributed Earth system, climate, hydrological, power grid, and economic data flow within the NEWS framework. The system allows scenario building, data exploration, visualization, querying, and manipulation of any loaded gridded, point, and vector polygon dataset. The system has demonstrated its potential for applications in other fields of Earth science modeling, education, and outreach. Over time, this implementation of the system will provide near real-time assessment of various current and future scenarios of the water-energy nexus.
Computer program for maintenance of individual animal records in a nonhuman primate colony.
Kuehl, T J; Dukelow, W R
1977-06-01
A computer program was developed to maintain animal records for a nonhuman primate colony used in research. The program was designed for use with an existing laboratory notebook system. The computer program identifies each notebook entry containing information about each animal and keeps other information, including animal name, sex, species, projects to which the animal is assigned, location of the animal, dates and body weights. The program is interactive and easy to use. Information stored in the system is readily accessible to all investigators using the animals. In 17 months of use, 1382 master file entries were developed for 113 monkeys.
A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF
NASA Astrophysics Data System (ADS)
Deatrich, D. C.; Liu, S. X.; Tafirout, R.
2010-04-01
We describe in this paper the design and implementation of Tapeguy, a high-performance, non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities that will be performed on the Worldwide LHC Computing Grid infrastructure continuously. Tapeguy is Perl-based. It controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism, and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata information for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold, or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests using an elevator algorithm, avoiding unnecessary tape loading and unloading. Implementation of priorities will guarantee file delivery to all clients in a timely manner.
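A minimal sketch of the elevator-style reordering mentioned above: pending read requests are grouped by tape and served in order of position on the tape, so each mounted tape is swept in one direction instead of seeking back and forth. The request fields are illustrative, not Tapeguy's actual schema.

    from collections import defaultdict

    def order_read_requests(requests):
        # requests: list of dicts such as {"tape": "T0001", "position": 512, "file": "a.dat"}.
        # Returns the requests grouped per tape and sorted by position (one ascending sweep).
        by_tape = defaultdict(list)
        for req in requests:
            by_tape[req["tape"]].append(req)
        schedule = []
        for tape in sorted(by_tape):                # mount each tape only once
            schedule.extend(sorted(by_tape[tape], key=lambda r: r["position"]))
        return schedule

    pending = [
        {"tape": "T0002", "position": 900, "file": "b.dat"},
        {"tape": "T0001", "position": 512, "file": "a.dat"},
        {"tape": "T0001", "position": 10,  "file": "c.dat"},
    ]
    for req in order_read_requests(pending):
        print(req["tape"], req["position"], req["file"])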
Crystallographic and general use programs for the XDS Sigma 5 computer
NASA Technical Reports Server (NTRS)
Snyder, R. L.
1973-01-01
Programs in basic FORTRAN 4 are described, which fall into three categories: (1) interactive programs to be executed under time sharing (BTM); (2) non-interactive programs which are executed in batch processing mode (BPM); and (3) large non-interactive programs which require more memory than is available in the normal BPM/BTM operating system and must be run overnight on a special system called XRAY, which releases about 45,000 words of memory to the user. Programs in categories (1) and (2) are stored as FORTRAN source files in the account FSNYDER. Programs in category (3) are stored in the XRAY system as load modules. The type of file in account FSNYDER is identified by the first two letters in the name.
An integrated software system for geometric correction of LANDSAT MSS imagery
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Esilva, A. J. F. M.; Camara-Neto, G.; Serra, P. R. M.; Desousa, R. C. M.; Mitsuo, Fernando Augusta, II
1984-01-01
A system for geometrically correcting LANDSAT MSS imagery includes all phases of processing, from receiving a raw computer compatible tape (CCT) to the generation of a corrected CCT (or UTM mosaic). The system comprises modules for: (1) control of the processing flow; (2) calculation of satellite ephemeris and attitude parameters; (3) generation of uncorrected files from raw CCT data; (4) creation, management and maintenance of a ground control point library; (5) determination of the image correction equations, using attitude and ephemeris parameters and existing ground control points; (6) generation of the corrected LANDSAT file, using the equations determined beforehand; (7) union of LANDSAT scenes to produce a UTM mosaic; and (8) generation of the output tape, in super-structure format.
Securing the User's Work Environment
NASA Technical Reports Server (NTRS)
Cardo, Nicholas P.
2004-01-01
High performance computing at the Numerical Aerospace Simulation Facility at NASA Ames Research Center includes C90's, J90's and Origin 2000's. Not only is it necessary to protect these systems from outside attacks, but also to provide a safe working environment on the systems. With the right tools, security anomalies in the user's work environment can be detected and corrected. Validating proper ownership of files against users' permissions will reduce the risk of inadvertent data compromise. The detection of extraneous directories and files hidden amongst user home directories is important for identifying potential compromises. The first runs of these utilities detected over 350,000 files with problems. With periodic scans, automated correction of problems takes only minutes. Tools for detecting these types of problems, as well as their development techniques, will be discussed with emphasis on consistency, portability and efficiency for both UNICOS and IRIX.
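The ownership-validation scan described above can be sketched as a walk over home directories that flags files not owned by the directory's user. This is a generic Unix example; the scanned root and the expected-owner convention are assumptions, not the NAS utilities themselves.

    import os
    import pwd

    def find_ownership_anomalies(home_root="/home"):
        # Flag files under each user's home directory that are not owned by that user.
        anomalies = []
        for user in os.listdir(home_root):
            home = os.path.join(home_root, user)
            try:
                expected_uid = pwd.getpwnam(user).pw_uid
            except KeyError:
                continue                          # directory without a matching account
            for dirpath, _dirnames, filenames in os.walk(home):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        if os.lstat(path).st_uid != expected_uid:
                            anomalies.append(path)
                    except OSError:
                        pass                      # unreadable or vanished file
        return anomalies

    print(len(find_ownership_anomalies()))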
DOCU-TEXT: A tool before the data dictionary
NASA Technical Reports Server (NTRS)
Carter, B.
1983-01-01
DOCU-TEXT, a proprietary software package that aids in the production of documentation for a data processing organization and can be installed and operated only on IBM computers is discussed. In organizing information that ultimately will reside in a data dictionary, DOCU-TEXT proved to be a useful documentation tool in extracting information from existing production jobs, procedure libraries, system catalogs, control data sets and related files. DOCU-TEXT reads these files to derive data that is useful at the system level. The output of DOCU-TEXT is a series of user selectable reports. These reports can reflect the interactions within a single job stream, a complete system, or all the systems in an installation. Any single report, or group of reports, can be generated in an independent documentation pass.
UNIX: A Tool for Information Management.
ERIC Educational Resources Information Center
Frey, Dean
1989-01-01
Describes UNIX, a computer operating system that supports multi-task and multi-user operations. Characteristics that make it especially suitable for library applications are discussed, including a hierarchical file structure and utilities for text processing, database activities, and bibliographic work. Sources of information on hardware…
THE EPANET PROGRAMMER'S TOOLKIT FOR ANALYSIS OF WATER DISTRIBUTION SYSTEMS
The EPANET Programmer's Toolkit is a collection of functions that helps simplify computer programming of water distribution network analyses. The functions can be used to read in a pipe network description file, modify selected component properties, and run multiple hydraulic and water quality analyses...
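A minimal sketch of driving the toolkit from Python through ctypes, assuming the toolkit has been built as a shared library named libepanet2.so and that an input file network.inp exists (both names are assumptions). ENopen, ENsolveH, ENreport, and ENclose are documented toolkit functions that return zero on success.

    import ctypes

    # Assumed library name/path; on Windows this would typically be epanet2.dll.
    en = ctypes.cdll.LoadLibrary("libepanet2.so")

    inp = b"network.inp"   # pipe network description file (placeholder name)
    rpt = b"network.rpt"   # report file
    out = b"network.out"   # binary output file

    assert en.ENopen(inp, rpt, out) == 0      # read the network description
    assert en.ENsolveH() == 0                 # run a complete hydraulic analysis
    assert en.ENreport() == 0                 # write results to the report file
    en.ENclose()                              # release the toolkit's data structures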
ERIC Educational Resources Information Center
Beckstead, David
1996-01-01
Explores the educational possibilities inherent in combining Musical Instrument Digital Interface (MIDI) with communications technology. A MIDI system (a combination synthesizer and computer) allows students to compose, record, experiment, and correct at one site. A MIDI file can be sent via e-mail to others for comments. (MJP)
A computer program for two-particle generalized coefficients of fractional parentage
NASA Astrophysics Data System (ADS)
Deveikis, A.; Juodagalvis, A.
2008-10-01
We present a FORTRAN90 program GCFP for the calculation of the generalized coefficients of fractional parentage (generalized CFPs or GCFP). The approach is based on the observation that the multi-shell CFPs can be expressed in terms of single-shell CFPs, while the latter can be readily calculated employing a simple enumeration scheme of antisymmetric A-particle states and an efficient method of construction of the idempotent matrix eigenvectors. The program provides fast calculation of GCFPs for a given particle number and produces results possessing numerical uncertainties below the desired tolerance. A single j-shell is defined by four quantum numbers, (e,l,j,t). A supplemental C++ program parGCFP allows calculation to be done in batches and/or in parallel.
Program summary
Program title: GCFP, parGCFP
Catalogue identifier: AEBI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 17 199
No. of bytes in distributed program, including test data, etc.: 88 658
Distribution format: tar.gz
Programming language: FORTRAN 77/90 (GCFP), C++ (parGCFP)
Computer: Any computer with suitable compilers. The program GCFP requires a FORTRAN 77/90 compiler. The auxiliary program parGCFP requires a GNU-C++ compatible compiler, while its parallel version additionally requires MPI-1 standard libraries.
Operating system: Linux (Ubuntu, Scientific) (all programs), also checked on Windows XP (GCFP, serial version of parGCFP).
RAM: The memory demand depends on the computation and output mode. If this mode is not 4, the program GCFP demands the following amounts of memory on a computer with a Linux operating system. It requires around 2 MB of RAM for the A=12 system at E⩽2. Computation of the A=50 particle system requires around 60 MB of RAM at E=0 and ~70 MB at E=2 (note, however, that the calculation of this system will take a very long time). If the computation and output mode is set to 4, the memory demands by GCFP are significantly larger. Calculation of GCFPs of the A=12 system at E=1 requires 145 MB. The program parGCFP requires an additional 2.5 and 4.5 MB of memory for the serial and parallel versions, respectively.
Classification: 17.18
Nature of problem: The program GCFP generates a list of two-particle coefficients of fractional parentage for several j-shells with isospin.
Solution method: The method is based on the observation that multishell coefficients of fractional parentage can be expressed in terms of single-shell CFPs [1]. The latter are calculated using the algorithm [2,3] for a spectral decomposition of an antisymmetrization operator matrix Y. The coefficients of fractional parentage are those eigenvectors of the antisymmetrization operator matrix Y that correspond to unit eigenvalues. A computer code for these coefficients is available [4]. The program GCFP offers computation of two-particle multishell coefficients of fractional parentage. The program parGCFP allows a batch calculation using one input file. Sets of GCFPs are independent and can be calculated in parallel.
Restrictions: A<86 when E=0 (due to the memory constraints); small numbers of particles allow significantly higher excitations, though the shell with j⩾11/2 cannot get full (it is an implementation constraint).
Unusual features: Using the program GCFP it is possible to determine allowed particle configurations without the GCFP computation. The GCFPs can be calculated either for all particle configurations at once or for a specified particle configuration. The values of GCFPs can be printed out with a complete specification either in one file or with the parent and daughter configurations printed in separate files. The latter output mode requires additional time and RAM. It is possible to restrict the (J,T) values of the considered particle configurations. (Here J is the total angular momentum and T is the total isospin of the system.) The program parGCFP produces several result files, the number of which equals the number of particle configurations. To work correctly, the program GCFP needs to be compiled to read parameters from the standard input (the default setting).
Running time: It depends on the size of the problem. The minimum time is required if the computation and output mode (CompMode) is not 4, but the resulting file is larger. A system with A=12 particles at E=0 (all 9411 GCFPs) took around 1 sec on a Pentium 4 2.8 GHz processor with 1 MB L2 cache. The program required about 14 min to calculate all 1.3×10 GCFPs of E=1. The time for all 5.5×10 GCFPs of E=2 was about 53 hours. For this number of particles, the calculation time of both E=0 and E=1 with CompMode = 1 and 4 is nearly the same when no other processes are running. The case of E=2 could not be calculated with CompMode = 4, because the RAM was insufficient. In general, the latter CompMode requires a longer computation time, although the resulting files are smaller in size. The program parGCFP adds virtually no time overhead. Its parallel version speeds up the calculation. However, the results need to be collected from the several files created for each configuration.
References: [1] J. Levinsonas, Works of Lithuanian SSR Academy of Sciences 4 (1957) 17. [2] A. Deveikis, A. Bončkus, R. Kalinauskas, Lithuanian Phys. J. 41 (2001) 3. [3] A. Deveikis, R.K. Kalinauskas, B.R. Barrett, Ann. Phys. 296 (2002) 287. [4] A. Deveikis, Comput. Phys. Comm. 173 (2005) 186. (CPC Catalogue ID. ADWI_v1_0)
Information Metacatalog for a Grid
NASA Technical Reports Server (NTRS)
Kolano, Paul
2007-01-01
SWIM is a Software Information Metacatalog that gathers detailed information about the software components and packages installed on a grid resource. Information is currently gathered for Executable and Linking Format (ELF) executables and shared libraries, Java classes, shell scripts, and Perl and Python modules. SWIM is built on top of the POUR framework, which is described in the preceding article. SWIM consists of a set of Perl modules for extracting software information from a system, an XML schema defining the format of data that can be added by users, and a POUR XML configuration file that describes how these elements are used to generate periodic, on-demand, and user-specified information. Periodic software information is derived mainly from the package managers used on each system. SWIM collects information from native package managers in FreeBSD, Solaris, and IRIX as well as the RPM, Perl, and Python package managers on multiple platforms. Because not all software is available or installed in package form, SWIM also crawls the set of relevant paths from the File System Hierarchy Standard that defines the standard file system structure used by all major UNIX distributions. Using these two techniques, the vast majority of software installed on a system can be located. SWIM computes the same information gathered by the periodic routines for specific files on specific hosts, and locates software on a system given only its name and type.
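For software that no package manager knows about, the crawl described above amounts to walking standard directories and classifying files; the sketch below identifies ELF executables and shared libraries by their magic bytes. The directory list is an assumption drawn from the File System Hierarchy Standard, not SWIM's actual configuration.

    import os

    ELF_MAGIC = b"\x7fELF"
    STANDARD_PATHS = ["/bin", "/sbin", "/usr/bin", "/usr/lib", "/usr/local/bin"]

    def find_elf_files(paths=STANDARD_PATHS):
        # Walk standard FHS directories and collect ELF executables and shared libraries.
        found = []
        for root in paths:
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as f:
                            if f.read(4) == ELF_MAGIC:
                                found.append(path)
                    except OSError:
                        pass                  # permission errors, dangling symlinks, etc.
        return found

    print(len(find_elf_files()))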
NASA Langley Research Center's distributed mass storage system
NASA Technical Reports Server (NTRS)
Pao, Juliet Z.; Humes, D. Creig
1993-01-01
There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existent mass storage system to the DMSS.
Peregrine System User Basics | High-Performance Computing | NREL
Describes the basics of using the Peregrine system: connecting to peregrine.hpc.nrel.gov or to one of the login nodes, example commands for accessing Peregrine from a Linux or Mac OS X system (for example, $ ssh -Y ...), and a code example that creates a file called hello.F90 containing a short Fortran program; user-supplied values are indicated by enclosing them in angle brackets < >.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-04
... Operations regarding a TRACE-Eligible Security when such security is not in the TRACE system, and to... using any facility or system that FINRA operates or controls.\\8\\ The fee is similar to the Computer-to... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-64364; File No. SR-FINRA-2011-012] Self...
NASA Technical Reports Server (NTRS)
Paul, C. K.; Landini, A. J.; Diegert, C.
1975-01-01
The Santa Monica mountains of Los Angeles consist primarily of complexly folded sedimentary marine strata with igneous and metamorphic rocks at the eastern end of the mountains. With the increased development of the Santa Monicas, a study was conducted to determine the critical land use data items in the mountains. Two information systems developed in parallel are described. One capitalizes on the City's present computer line printer system, and the second utilizes map overlay techniques on an interactive computer terminal. Results concerning population, housing, and land improvement illustrate the successful linking of ordinal and nominal data files in the interactive system.
An Advanced Commanding and Telemetry System
NASA Astrophysics Data System (ADS)
Hill, Maxwell G. G.
The Loral Instrumentation System 500 configured as an Advanced Commanding and Telemetry System (ACTS) supports the acquisition of multiple telemetry downlink streams, and simultaneously supports multiple uplink command streams for today's satellite vehicles. By using industry and federal standards, the system is able to support, without relying on a host computer, a true distributed dataflow architecture that is complemented by state-of-the-art RISC-based workstations and file servers.
Astrochem: Abundances of chemical species in the interstellar medium
NASA Astrophysics Data System (ADS)
Maret, Sébastien; Bergin, Edwin A.
2015-07-01
Astrochem computes the abundances of chemical species in the interstellar medium as a function of time. It can be used to study the chemistry in a variety of astronomical objects, including diffuse clouds, dense clouds, photodissociation regions, prestellar cores, protostars, and protostellar disks. Astrochem reads a network of chemical reactions from a text file, builds up a system of kinetic rate equations, and solves it using a state-of-the-art stiff ordinary differential equation (ODE) solver. The Jacobian matrix of the system is computed implicitly, so the resolution of the system is extremely fast: large networks containing several thousands of reactions are usually solved in a few seconds. A variety of gas-phase processes are considered, as well as simple gas-grain interactions, such as freeze-out and desorption via several mechanisms (thermal desorption, cosmic-ray desorption and photo-desorption). The computed abundances are written to an HDF5 file and can be plotted in different ways with the tools provided with Astrochem. Chemical reactions and their rates are written in a format which is meant to be easy to read and to edit. A tool to convert chemical networks from the OSU and KIDA databases into this format is also provided. Astrochem is written in C, and its source code is distributed under the terms of the GNU General Public License (GPL).
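The core numerical task described above, turning a reaction list into a system of kinetic rate equations and integrating it with a stiff ODE solver, can be sketched as follows. The two-reaction network and rate coefficients are invented for illustration; real networks come from files in the OSU/KIDA formats.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy network with species indices 0, 1, 2 standing for A, B, and AB.
    # Each reaction: (reactant indices, product indices, rate coefficient).
    reactions = [
        ([0, 1], [2], 1.0e-10),     # A + B -> AB
        ([2], [0, 1], 1.0e-12),     # AB -> A + B (e.g. photodissociation)
    ]

    def rhs(_t, n):
        # Assemble dn/dt from the reaction list (one- and two-body rates).
        dndt = np.zeros_like(n)
        for reactants, products, k in reactions:
            rate = k * np.prod(n[reactants])
            for i in reactants:
                dndt[i] -= rate
            for i in products:
                dndt[i] += rate
        return dndt

    n0 = np.array([1.0e4, 1.0e4, 0.0])                       # initial densities (cm^-3)
    sol = solve_ivp(rhs, (0.0, 1.0e13), n0, method="BDF")    # stiff solver, time in seconds
    print(sol.y[:, -1])                                      # final abundances of A, B, AB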
Spectrum orbit utilization program technical manual SOUP5 Version 3.8
NASA Technical Reports Server (NTRS)
Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.
1984-01-01
The underlying engineering and mathematical models as well as the computational methods used by the SOUP5 analysis programs, which are part of the R2BCSAT-83 Broadcast Satellite Computational System, are described. Included are the algorithms used to calculate the technical parameters and references to the relevant technical literature. The system provides the following capabilities: requirements file maintenance, data base maintenance, elliptical satellite beam fitting to service areas, plan synthesis from specified requirements, plan analysis, and report generation/query. Each of these functions are briefly described.
Smart command recognizer (SCR) - For development, test, and implementation of speech commands
NASA Technical Reports Server (NTRS)
Simpson, Carol A.; Bunnell, John W.; Krones, Robert R.
1988-01-01
The SCR, a rapid prototyping system for the development, testing, and implementation of speech commands in a flight simulator or test aircraft, is described. A single unit performs all functions needed during these three phases of system development, while the use of common software and speech command data structure files greatly reduces the preparation time for successive development phases. As a smart peripheral to a simulation or flight host computer, the SCR interprets the pilot's spoken input and passes command codes to the simulation or flight computer.
NASA Technical Reports Server (NTRS)
Rogers, James L.; Feyock, Stefan; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
The purpose of this research effort is to investigate the benefits that might be derived from applying artificial intelligence tools in the area of conceptual design. Therefore, the emphasis is on the artificial intelligence aspects of conceptual design rather than structural and optimization aspects. A prototype knowledge-based system, called STRUTEX, was developed to initially configure a structure to support point loads in two dimensions. This system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user by integrating a knowledge base interface and inference engine, a data base interface, and graphics while keeping the knowledge base and data base files separate. The system writes a file which can be input into a structural synthesis system, which combines structural analysis and optimization.
Developing a Hadoop-based Middleware for Handling Multi-dimensional NetCDF
NASA Astrophysics Data System (ADS)
Li, Z.; Yang, C. P.; Schnase, J. L.; Duffy, D.; Lee, T. J.
2014-12-01
Climate observations and model simulations are generating vast amounts of climate data, and these data are accumulating at a rapid pace. Effectively managing and analyzing these data is essential for climate change studies. Hadoop, a distributed storage and processing framework for large data sets, has attracted increasing attention for dealing with the Big Data challenge. The maturity of Infrastructure as a Service (IaaS) in cloud computing further accelerates the adoption of Hadoop in solving Big Data problems. However, Hadoop is designed to process unstructured data such as texts, documents and web pages, and cannot effectively handle scientific data formats such as array-based NetCDF files and other binary formats. In this paper, we propose to build a Hadoop-based middleware for transparently handling big NetCDF data by 1) designing a distributed climate data storage mechanism based on a POSIX-enabled parallel file system to enable parallel big data processing with MapReduce, as well as to support data access by other systems; 2) modifying the Hadoop framework to transparently process NetCDF data in parallel without sequencing the data, converting it into other file formats, or loading it into HDFS; and 3) seamlessly integrating Hadoop, cloud computing and climate data in a highly scalable and fault-tolerant framework.
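The storage-side idea, processing an array-based NetCDF variable in parallel chunks instead of converting it or loading it into HDFS, can be illustrated with a small sketch using the netCDF4 Python library and a process pool. The file name, variable name, and chunking along the time axis are assumptions made for the example.

    from multiprocessing import Pool
    import numpy as np
    from netCDF4 import Dataset

    PATH, VAR = "tas_monthly.nc", "tas"        # hypothetical file and variable names

    def chunk_mean(time_range):
        # Each worker opens the file itself and reduces one slab along the time axis.
        t0, t1 = time_range
        with Dataset(PATH) as ds:
            slab = ds.variables[VAR][t0:t1]    # reads only this chunk from disk
        return float(np.asarray(slab).mean())

    if __name__ == "__main__":
        with Dataset(PATH) as ds:
            ntime = ds.variables[VAR].shape[0]
        ranges = [(t, min(t + 120, ntime)) for t in range(0, ntime, 120)]
        with Pool(4) as pool:
            partial = pool.map(chunk_mean, ranges)     # map step over chunks
        # Simple reduce step (approximate if the last chunk is shorter than the others).
        print(sum(partial) / len(partial))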
Interactive display of molecular models using a microcomputer system
NASA Technical Reports Server (NTRS)
Egan, J. T.; Macelroy, R. D.
1980-01-01
A simple, microcomputer-based, interactive graphics display system has been developed for the presentation of perspective views of wire frame molecular models. The display system is based on a TERAK 8510a graphics computer system with a display unit consisting of microprocessor, television display and keyboard subsystems. The operating system includes a screen editor, file manager, PASCAL and BASIC compilers and command options for linking and executing programs. The graphics program, written in UCSD PASCAL, involves the centering of the coordinate system, the transformation of centered model coordinates into homogeneous coordinates, the construction of a viewing transformation matrix to operate on the coordinates, clipping of invisible points, perspective transformation and scaling to screen coordinates; commands available include ZOOM, ROTATE, RESET, and CHANGEVIEW. The data file structure was chosen to minimize the amount of disk storage space. Despite the inherent slowness of the system, its low cost and flexibility suggest general applicability.
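The display pipeline summarized above (center the model, apply a viewing transformation in homogeneous coordinates, clip, perform the perspective division, and scale to screen coordinates) reduces to a few lines of matrix arithmetic. The numpy sketch below uses an arbitrary viewing distance and screen size, not anything from the TERAK implementation.

    import numpy as np

    def project_points(xyz, distance=10.0, screen=512):
        # Perspective-project Nx3 model coordinates to 2-D screen coordinates.
        pts = xyz - xyz.mean(axis=0)                      # center the coordinate system
        homo = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
        view = np.eye(4)
        view[2, 3] = -distance                            # push the model down the -z axis
        cam = homo @ view.T
        z = -cam[:, 2]
        visible = z > 1e-6                                # clip points behind the eye
        ndc = cam[visible, :2] / z[visible, None]         # perspective division
        return (ndc - ndc.min()) / np.ptp(ndc) * (screen - 1)   # scale to screen pixels

    atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
    print(project_points(atoms))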
Processing of meteorological data with ultrasonic thermoanemometers
NASA Astrophysics Data System (ADS)
Telminov, A. E.; Bogushevich, A. Ya.; Korolkov, V. A.; Botygin, I. A.
2017-11-01
The article describes a software system intended to support scientific research on the atmosphere by processing data gathered by multi-level ultrasonic complexes for automated monitoring of meteorological and turbulent parameters in the ground layer of the atmosphere. The system processes files containing instantaneous values of temperature, the three orthogonal components of wind speed, humidity, and pressure. Processing is carried out in multiple stages. During the first stage, the system executes the researcher's query for meteorological parameters. At the second stage, the system computes a series of standard statistical properties of the meteorological fields, such as averages, dispersion, standard deviation, asymmetry (skewness) coefficients, excess (kurtosis), correlation, etc. The third stage prepares for computing the parameters of atmospheric turbulence. The computation results are displayed to the user and stored on the hard drive.
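The second-stage statistics listed above are standard moment and correlation estimates; a brief numpy/scipy sketch over a synthetic temperature series illustrates them. The sampling rate and array names are placeholders, not part of the described system.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    temperature = 20.0 + 0.5 * rng.standard_normal(6000)      # e.g. 10 min of 10 Hz data
    w_wind = 0.1 * rng.standard_normal(6000) + 0.05 * (temperature - 20.0)

    summary = {
        "mean": temperature.mean(),
        "dispersion": temperature.var(ddof=1),                 # sample variance
        "standard deviation": temperature.std(ddof=1),
        "asymmetry (skewness)": stats.skew(temperature),
        "excess (kurtosis)": stats.kurtosis(temperature),
        "T-w correlation": np.corrcoef(temperature, w_wind)[0, 1],
    }
    for name, value in summary.items():
        print(f"{name}: {value:.4f}")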
ERIC Educational Resources Information Center
Batt, Russell H., Ed.
1989-01-01
Describes two chemistry computer programs: (1) "Eureka: A Chemistry Problem Solver" (problem files may be written by the instructor, MS-DOS 2.0, IBM with 384K); and (2) "PC-File+" (database management, IBM with 416K and two floppy drives). (MVL)
The growth of the UniTree mass storage system at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Tarshish, Adina; Salmon, Ellen
1993-01-01
In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPU's on the facility's Cray YMP (the primary data source for UniTree), and the necessity for an aggressive regimen for repacking sparse tapes and hierarchical 'vaulting' of old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.
2013-09-01
...to an XML file, a code that Bonine in [21] developed for a similar purpose. Using the StateRover XML log file import tool, we are able to generate a... C. Bonine, M. Shing, T.W. Otani, "Computer-aided process and tools for mobile software acquisition," NPS, Monterey, CA, Tech. Rep. NPS-SE-13...C10P07R05-075, 2013. [21] C. Bonine, "Specification, validation and verification of mobile application behavior," M.S. thesis, Dept. Comp. Science, NPS
Understanding Customer Dissatisfaction with Underutilized Distributed File Servers
NASA Technical Reports Server (NTRS)
Riedel, Erik; Gibson, Garth
1996-01-01
An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew file system (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.
Developing Intranets: Practical Issues for Implementation and Design.
ERIC Educational Resources Information Center
Trowbridge, Dave
1996-01-01
An intranet is a system which has "domesticated" the technologies of the Internet for specific organizational settings and goals. Although the adaptability of Hypertext Markup Language to intranets is sometimes limited, implementing various protocols and technologies enable organizations to share files among heterogeneous computers,…
Knowledge Representation: A Brief Review.
ERIC Educational Resources Information Center
Vickery, B. C.
1986-01-01
Reviews different structures and techniques of knowledge representation: the structure of database records and files, data structures in computer programming, the syntactic and semantic structure of natural language, knowledge representation in artificial intelligence, and models of human memory. A prototype expert system that makes use of some of these…
Visual behavior characterization for intrusion and misuse detection
NASA Astrophysics Data System (ADS)
Erbacher, Robert F.; Frincke, Deborah
2001-05-01
As computer and network intrusions become more and more of a concern, the need for better capabilities to assist in the detection and analysis of intrusions also increases. System administrators typically rely on log files to analyze usage and detect misuse. However, as a consequence of the amount of data collected by each machine, multiplied by the tens or hundreds of machines under the system administrator's auspices, the entirety of the data available is neither collected nor analyzed. This is compounded by the need to analyze network traffic data as well. We propose a methodology for analyzing network and computer log information visually based on the analysis of user behavior. Each user's behavior is the key to determining their intent and overriding activity, whether they attempt to hide their actions or not. Proficient hackers will attempt to hide their ultimate activities, which hinders the reliability of log file analysis. Visually analyzing the users' behavior, however, is much more adaptable and difficult to counteract.
NASA Astrophysics Data System (ADS)
Casajus, A.; Ciba, K.; Fernandez, V.; Graciani, R.; Hamar, V.; Mendez, V.; Poss, S.; Sapunov, M.; Stagni, F.; Tsaregorodtsev, A.; Ubeda, M.
2012-12-01
The DIRAC Project was initiated to provide a data processing system for the LHCb Experiment at CERN. It provides all the necessary functionality and performance to satisfy the current and projected future requirements of the LHCb Computing Model. A considerable restructuring of the DIRAC software was undertaken in order to turn it into a general purpose framework for building distributed computing systems that can be used by various user communities in High Energy Physics and other scientific application domains. The CLIC and ILC-SID detector projects started to use DIRAC for their data production system. The Belle Collaboration at KEK, Japan, has adopted the Computing Model based on the DIRAC system for its second phase starting in 2015. The CTA Collaboration uses DIRAC for data analysis tasks. A large number of other experiments are starting to use DIRAC or are evaluating this solution for their data processing tasks. DIRAC services are included as part of the production infrastructure of the GISELA Latin America grid. Similar services are provided for the users of the France-Grilles and IBERGrid National Grid Initiatives in France and Spain, respectively. The new communities using DIRAC have started to provide important contributions to its functionality. Recent additions include support for Amazon EC2 computing resources as well as other cloud management systems; a versatile File Replica Catalog with File Metadata capabilities; and support for running MPI jobs in the pilot-based Workload Management System. Integration with existing application Web portals, like WS-PGRADE, is demonstrated. In this paper we describe the current status of the DIRAC Project, recent developments of its framework and functionality, as well as the status of the rapidly evolving community of DIRAC users.
Interactive real-time media streaming with reliable communication
NASA Astrophysics Data System (ADS)
Pan, Xunyu; Free, Kevin M.
2014-02-01
Streaming media is a recent technique for delivering multimedia information from a source provider to an end-user over the Internet. The major advantage of this technique is that the media player can start playing a multimedia file even before the entire file is transmitted. Most streaming media applications are currently implemented based on the client-server architecture, where a server system hosts the media file and a client system connects to this server system to download the file. Although the client-server architecture is successful in many situations, it may not be ideal to rely on such a system to provide the streaming service, as users may be required to register an account using personal information in order to use the service. This is troublesome if a user wishes to watch a movie simultaneously while interacting with a friend in another part of the world over the Internet. In this paper, we describe a new real-time media streaming application implemented on a peer-to-peer (P2P) architecture in order to overcome these challenges within a mobile environment. When using the peer-to-peer architecture, streaming media is shared directly between end-users, called peers, with minimal or no reliance on a dedicated server. Based on the proposed software pɛvμa (pronounced [revma]), named for the Greek word meaning stream, we can host a media file on any computer and directly stream it to a connected partner. To accomplish this, pɛvμa utilizes the Microsoft .NET Framework and Windows Presentation Framework, which are widely available on various types of Windows-compatible personal computers and mobile devices. With specially designed multi-threaded algorithms, the application can stream HD video at speeds upwards of 20 Mbps using the User Datagram Protocol (UDP). Streaming and playback are handled using synchronized threads that communicate with one another once a connection is established. Alteration of playback, such as pausing playback or tracking to a different spot in the media file, will be reflected in all media streams. These techniques are designed to allow users at different locations to simultaneously view a full-length HD video and interactively control the media streaming session. To create a sustainable media stream with high quality, our system supports UDP packet loss recovery at high transmission speed using custom File-Buffers. Traditional real-time streaming protocols such as Real-time Transport Protocol/RTP Control Protocol (RTP/RTCP) provide no such error recovery mechanism. Finally, the system also features an Instant Messenger that allows users to perform social interactions with one another while they enjoy a media file. The ultimate goal of the application is to offer users a hassle-free way to watch a media file over long distances without having to upload any personal information into a third party database. Moreover, users can communicate with each other and stream media directly from one mobile device to another while remaining independent of the traditional sign-up required by most streaming services.
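The custom loss-recovery idea mentioned above relies on numbering datagrams so the receiver can detect gaps and ask for missing pieces; the sketch below shows only that bare mechanism with a simple sequence-number header. The port, chunk size, and buffering policy are illustrative assumptions, not details of the pɛvμa implementation.

    import socket
    import struct

    CHUNK = 1200                      # payload bytes per datagram (placeholder)
    HEADER = struct.Struct("!I")      # 4-byte big-endian sequence number

    def send_file(path, addr=("127.0.0.1", 5005)):
        # Send a media file as numbered UDP datagrams.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        with open(path, "rb") as f:
            seq = 0
            while True:
                payload = f.read(CHUNK)
                if not payload:
                    break
                sock.sendto(HEADER.pack(seq) + payload, addr)
                seq += 1
        sock.close()

    def receive(sock, expected):
        # Collect datagrams keyed by sequence number; gaps mark lost packets that a
        # real implementation would ask the sender to retransmit.
        buffers = {}
        while len(buffers) < expected:
            packet, _ = sock.recvfrom(HEADER.size + CHUNK)
            (seq,) = HEADER.unpack_from(packet)
            buffers[seq] = packet[HEADER.size:]
        return b"".join(buffers[i] for i in sorted(buffers))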
Information management and analysis system for groundwater data in Thailand
NASA Astrophysics Data System (ADS)
Gill, D.; Luckananurung, P.
1992-01-01
The Ground Water Division of the Thai Department of Mineral Resources maintains a large archive of groundwater data with information on some 50,000 water wells. Each well file contains information on well location, well completion, borehole geology, water levels, water quality, and pumping tests. In order to enable efficient use of this information a computer-based system for information management and analysis was created. The project was sponsored by the United Nations Development Program and the Thai Department of Mineral Resources. The system was designed to serve users who lack prior training in automated data processing. Access is through a friendly user/system dialogue. Tasks are segmented into a number of logical steps, each of which is managed by a separate screen. Selective retrieval is possible by four different methods of area definition and by compliance with user-specified constraints on any combination of database variables. The main types of outputs are: (1) files of retrieved data, screened according to users' specifications; (2) an assortment of pre-formatted reports; (3) computed geochemical parameters and various diagrams of water chemistry derived therefrom; (4) bivariate scatter diagrams and linear regression analysis; (5) posting of data and computed results on maps; and (6) hydraulic aquifer characteristics as computed from pumping tests. Data are entered directly from formatted screens. Most records can be copied directly from hand-written documents. The database-management program performs data integrity checks in real time, enabling corrections at the time of input. The system software can be grouped into: (1) database administration and maintenance—these functions are carried out by the SIR/DBMS software package; (2) user communication interface for task definition and execution control—the interface is written in the operating system command language (VMS/DCL) and in FORTRAN 77; and (3) scientific data-processing programs, written in FORTRAN 77. The system was implemented on a DEC MicroVAX II computer.
NASA Technical Reports Server (NTRS)
Jefferies, K.
1994-01-01
OFFSET is a ray-tracing computer code for optical analysis of a solar collector. The code models the flux distributions within the receiver cavity produced by reflections from the solar collector. It was developed to model the offset solar collector of the solar dynamic electric power system being developed for Space Station Freedom. OFFSET has been used to improve the understanding of the collector-receiver interface and to guide the efforts of NASA contractors also researching the optical components of the power system. The collector for Space Station Freedom consists of 19 hexagonal panels, each containing 24 triangular, reflective facets. Current research is geared toward optimizing flux distribution inside the receiver via changes in collector design and receiver orientation. OFFSET offers many options for experimenting with the design of the system. The offset parabolic collector model configuration is determined by an input file of facet corner coordinates. The user may choose other configurations by changing this file, but simulating collectors with other than 19 groups of 24 triangular facets would require modification of the FORTRAN code. Each of the roughly 500 facets in the assembled collector may be independently aimed to smooth out, or tailor, the flux distribution on the receiver's wall. OFFSET simulates the effects of design changes such as receiver aperture location, tilt angle, and collector facet contour. Unique features of OFFSET include: 1) equations developed to pseudo-randomly select ray-originating sources on the Sun which appear evenly distributed and include solar limb darkening; 2) a cone-optics technique used to add surface specular error to the ray-originating sources to determine the apparent ray sources of the reflected Sun; 3) a choice of facet reflective surface contour (spherical, ideal parabolic, or toroidal); 4) Gaussian distributions of radial and tangential components of surface slope error added to the surface normals at the ten nodal points on each facet; and 5) color contour plots of receiver incident flux distribution generated by PATRAN processing of the FORTRAN code's output. OFFSET output includes a file of input data for confirmation, a PATRAN results file containing the values necessary to plot the flux distribution at the receiver surface, a PATRAN results file containing the intensity distribution on a 40 x 40 cm area of the receiver aperture plane, a data file containing calculated information on the system configuration, a file including the X-Y coordinates of the target points of each collector facet on the aperture opening, and twelve P/PLOT input data files to allow X-Y plotting of various results. OFFSET is written in FORTRAN (70%) for the IBM VM operating system. The code contains PATRAN statements (12%) and P/PLOT statements (18%) for generating plots. Once the program has been run on VM (or an equivalent system), the PATRAN and P/PLOT files may be transferred to a DEC VAX (or equivalent system) with access to PATRAN for PATRAN post-processing. OFFSET was written in 1988 and last updated in 1989. PATRAN is a registered trademark of PDA Engineering. IBM is a registered trademark of International Business Machines Corporation. DEC VAX is a registered trademark of Digital Equipment Corporation.
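OFFSET's own equations are not reproduced in the abstract; the following Python sketch only illustrates the general idea behind two of the listed features: pseudo-random sampling of ray sources over a limb-darkened solar disk, and a Gaussian slope-error perturbation in the spirit of the cone-optics step. The limb-darkening coefficient, error magnitude, and function names are assumptions, not OFFSET's FORTRAN implementation.

import math
import random

SUN_HALF_ANGLE = 4.65e-3  # apparent solar radius in radians, as seen from Earth

def sample_sun_offsets(limb_u=0.8):
    """Return small angular offsets (dx, dy) of a ray source on the solar disk.

    The weight 1 - u*(1 - sqrt(1 - r**2)) is a standard linear limb-darkening
    law used here only for illustration.
    """
    while True:
        r = math.sqrt(random.random())                    # uniform over disk area
        weight = 1.0 - limb_u * (1.0 - math.sqrt(1.0 - r * r))
        if random.random() < weight:                      # accept with limb-darkened brightness
            phi = random.uniform(0.0, 2.0 * math.pi)
            return (r * SUN_HALF_ANGLE * math.cos(phi),
                    r * SUN_HALF_ANGLE * math.sin(phi))

def add_slope_error(dx, dy, sigma=2.0e-3):
    """Perturb a reflected-ray direction by a Gaussian surface slope error (radians)."""
    return dx + random.gauss(0.0, sigma), dy + random.gauss(0.0, sigma)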
29 CFR 4000.1 - What are these filing rules about?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 What are these filing rules about? 4000.1 Section 4000.1 Labor Regulations Relating to Labor (Continued) PENSION BENEFIT GUARANTY CORPORATION GENERAL FILING, ISSUANCE, COMPUTATION OF TIME, AND RECORD RETENTION Filing Rules § 4000.1 What are these filing rules about...
An analysis of file migration in a UNIX supercomputing environment
NASA Technical Reports Server (NTRS)
Miller, Ethan L.; Katz, Randy H.
1992-01-01
The supercomputer center at the National Center for Atmospheric Research (NCAR) migrates large numbers of files to and from its mass storage system (MSS) because there is insufficient space to store them on the Cray supercomputer's local disks. This paper presents an analysis of file migration data collected over two years. The analysis shows that requests to the MSS are periodic, with one-day and one-week periods. Read requests to the MSS account for the majority of this periodicity, as write requests are relatively constant over the course of a week. Additionally, reads fluctuate far more than writes over a day and a week, since reads are driven by human users while writes are machine-driven.
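The excerpt does not describe the authors' statistical method, but one generic way to confirm the reported one-day and one-week periods in a trace of hourly request counts is to inspect an FFT power spectrum, as in this Python/NumPy sketch; the trace in the example is synthetic.

import numpy as np

def dominant_periods(hourly_counts, top=3):
    """Return the strongest periods (in hours) in a series of hourly request counts."""
    x = np.asarray(hourly_counts, dtype=float)
    x = x - x.mean()                                 # remove the constant (DC) component
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0)           # cycles per hour
    order = np.argsort(power[1:])[::-1] + 1          # rank bins, skipping zero frequency
    return [1.0 / freqs[i] for i in order[:top]]

# A 24-hour period should dominate a trace with a strong daily read cycle:
t = np.arange(24 * 7 * 4)                            # four weeks of hourly bins
demo = 100 + 40 * np.sin(2 * np.pi * t / 24) + 10 * np.sin(2 * np.pi * t / (24 * 7))
print(dominant_periods(demo))                        # expected: 24 h first, 168 h next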
Manual for GetData Version 3.1: A FORTRAN Utility Program for Time History Data
NASA Technical Reports Server (NTRS)
Maine, Richard E.
1987-01-01
This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.
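GetData itself is a FORTRAN program whose subroutine interface standard is defined in the report, not here; the Python sketch below merely illustrates the kind of operations the abstract describes: selecting signals, clipping a time segment, and interpolating signals sampled on different time grids onto common output times. The data layout and function name are assumptions.

import numpy as np

def extract(signals, names, t_start, t_stop, dt):
    """Select signals, clip them to [t_start, t_stop], and resample to common times.

    signals -- dict mapping a signal name to a (times, values) pair of arrays;
               this layout only loosely mirrors GetData's selection options.
    """
    t_out = np.arange(t_start, t_stop + dt / 2, dt)
    out = {"time": t_out}
    for name in names:
        t, v = signals[name]
        out[name] = np.interp(t_out, t, v)            # linear interpolation to common times
    return out

# Example: two signals sampled on different time grids merged onto one grid
signals = {
    "alpha": (np.linspace(0.0, 10.0, 101), np.sin(np.linspace(0.0, 10.0, 101))),
    "q":     (np.linspace(0.0, 10.0, 51),  np.cos(np.linspace(0.0, 10.0, 51))),
}
merged = extract(signals, ["alpha", "q"], 2.0, 8.0, 0.1)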