Sample records for shared file system

  1. Reliable file sharing in distributed operating system using WebRTC

    NASA Astrophysics Data System (ADS)

    Dukiya, Rajesh

    2017-12-01

    Since the evolution of distributed operating systems, the distributed file system has emerged as an important part of the operating system. P2P is a reliable approach to file sharing in a distributed operating system. Introduced in 1999, it later became a topic of intense research interest. A peer-to-peer network is a type of network in which peers share the network workload and related tasks. A P2P network can be a temporary connection, where a group of computers connected by USB (Universal Serial Bus) ports transfer files or enable disk sharing, i.e. file sharing. Currently, P2P requires a special network designed in a P2P fashion. Nowadays, browsers have a large influence on our lives. In this project we study the file sharing mechanism of distributed operating systems in web browsers, where we try to find performance bottlenecks; our research aims to improve the performance and scalability of file sharing in distributed file systems. Additionally, we discuss the scope of WebTorrent file sharing and free-riding in peer-to-peer networks.

  2. Parallel file system with metadata distributed across partitioned key-value store

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
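
    A minimal sketch of the partitioning idea this record describes, assuming mpi4py; the key layout, the CRC-based partitioning, and the collective exchange are illustrative inventions, not the actual PLFS or MDHIM interfaces.

    ```python
    # Sketch: metadata for sub-files of one shared file, partitioned across
    # MPI ranks. The key names and hash partitioning are hypothetical.
    from mpi4py import MPI
    import zlib

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Metadata this rank produced for its portion (sub-file) of the shared file.
    entries = {f"shared_file/subfile-{rank}/offset": rank * 1024 * 1024,
               f"shared_file/subfile-{rank}/length": 1024 * 1024}

    # Route each key to the rank owning its partition; CRC32 rather than
    # Python's salted hash() so every rank agrees on the owner of a key.
    outgoing = [[] for _ in range(size)]
    for key, value in entries.items():
        outgoing[zlib.crc32(key.encode()) % size].append((key, value))

    # One collective exchange delivers every entry to its owning partition.
    incoming = comm.alltoall(outgoing)
    local_store = {k: v for batch in incoming for k, v in batch}
    print(f"rank {rank} holds {len(local_store)} metadata entries")
    ```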

  3. An Ephemeral Burst-Buffer File System for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Moody, Adam; Yu, Weikuan

    BurstFS is a distributed file system for node-local burst buffers on high performance computing systems. BurstFS presents a shared file system space across the burst buffers so that applications that use shared files can access the highly scalable burst buffers without modification.

  4. Spindle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-04-04

    Spindle is software infrastructure that solves file system scalability problems associated with starting dynamically linked applications in HPC environments. When an HPC application starts up thousands of processes at once, and those processes simultaneously access a shared file system to look for shared libraries, it can cause significant performance problems for both the application and other users. Spindle scalably coordinates the distribution of shared libraries to an application to avoid hammering the shared file system.

  5. The Jade File System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rao, Herman Chung-Hwa

    1991-01-01

    File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is in the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its design. The prototype consists of interfaces to the Unix File System, the Sun Network File System, and the File Transfer Protocol.

  7. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
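
    The block-size rule in this record is concrete enough for a small worked sketch: the dynamically determined block size is the total data divided by the number of processes, and each process's surplus or deficit is what gets exchanged. The function below simulates that computation in a single process; the names are illustrative, not the patent's code.

    ```python
    # Sketch of the dynamic block-size rule: block size = total bytes / nprocs.
    def rebalance(per_process_bytes):
        """Return the dynamic block size and, per process, the surplus (+) or
        deficit (-) of bytes it must exchange to end up with one full block."""
        nprocs = len(per_process_bytes)
        block_size = sum(per_process_bytes) // nprocs
        deltas = [have - block_size for have in per_process_bytes]
        return block_size, deltas

    # Four processes produced uneven amounts of data for the shared object.
    block_size, deltas = rebalance([900, 1100, 1500, 500])
    print(block_size, deltas)  # 1000 [-100, 100, 500, -500]
    ```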

  8. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; O'Keefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility such that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies, whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.

  9. Policy enabled information sharing system

    DOEpatents

    Jorgensen, Craig R.; Nelson, Brian D.; Ratheal, Steve W.

    2014-09-02

    A technique for dynamically sharing information includes executing a sharing policy indicating when to share a data object responsive to the occurrence of an event. The data object is created by formatting a data file to be shared with a receiving entity. The data object includes a file data portion and a sharing metadata portion. The data object is encrypted and then automatically transmitted to the receiving entity upon occurrence of the event. The sharing metadata portion includes metadata characterizing the data file and referenced in connection with the sharing policy to determine when to automatically transmit the data object to the receiving entity.
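
    A minimal sketch of the flow this record describes, assuming the `cryptography` package for encryption; the field names, policy shape, and transport stub are hypothetical, not the patented implementation.

    ```python
    # Sketch: wrap a file plus sharing metadata into one data object, encrypt
    # it, and transmit automatically when a policy-matching event occurs.
    import json
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # shared with the receiving entity out of band

    def send_to_receiver(blob: bytes):
        # Hypothetical transport stub standing in for the real transmission.
        print(f"transmitting {len(blob)} encrypted bytes")

    def build_data_object(path: str, metadata: dict) -> bytes:
        # A data object pairs the file data portion with a sharing metadata portion.
        with open(path, "rb") as f:
            obj = {"file_data": f.read().hex(), "sharing_metadata": metadata}
        return Fernet(key).encrypt(json.dumps(obj).encode())

    def on_event(event: str, policy: dict, path: str, metadata: dict):
        # The metadata is checked against the sharing policy to decide whether
        # this event triggers automatic transmission to the receiving entity.
        if event in policy.get(metadata.get("category", ""), []):
            send_to_receiver(build_data_object(path, metadata))
    ```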

  10. Please Move Inactive Files Off the /projects File System | High-Performance Computing | NREL

    Science.gov Websites

    January 11, 2018. The /projects file system is a shared resource. This year this has created a space crunch: the file system is now about 90% full and we need your help.

  11. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
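
    A minimal sketch of the abstract storage interface pattern this record describes; the class and method names are illustrative, not the patent's actual software interface module.

    ```python
    # Sketch: metadata servers reach any shared persistent key-value store
    # through one small interface, so backends can be swapped freely.
    from abc import ABC, abstractmethod

    class KeyValueMetadataStore(ABC):
        """Key-value interface for persistent storage of file system metadata."""

        @abstractmethod
        def put(self, key: bytes, value: bytes) -> None: ...

        @abstractmethod
        def get(self, key: bytes) -> bytes: ...

    class InMemoryStore(KeyValueMetadataStore):
        # Stand-in backend; a real deployment would target a shared low-latency
        # persistent store accessed by many metadata servers at once.
        def __init__(self):
            self._data = {}

        def put(self, key, value):
            self._data[key] = value

        def get(self, key):
            return self._data[key]

    store = InMemoryStore()
    store.put(b"/shared/file.txt:inode", b"serialized-metadata")
    print(store.get(b"/shared/file.txt:inode"))
    ```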

  12. Students' Acceptance of File Sharing Systems as a Tool for Sharing Course Materials: The Case of Google Drive

    ERIC Educational Resources Information Center

    Sadik, Alaa

    2017-01-01

    Students' perceptions about both ease of use and usefulness are fundamental factors in determining their acceptance and successful use of technology in higher education. File sharing systems are one of these technologies and can be used to manage and deliver course materials and coordinate virtual teams. The aim of this study is to explore how…

  13. Parallel checksumming of data chunks of a shared data object using a log-structured file system

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-09-06

    Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
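
    The write/read path in this record maps directly onto a few lines of code. A minimal sketch, using SHA-256 in place of whatever checksum the patented implementation employs; the dict stands in for the storage node.

    ```python
    # Sketch: the client checksums each chunk on write and verifies on read.
    import hashlib

    def write_chunk(store, chunk_id, data: bytes):
        # The client computes the checksum and ships it with the data chunk.
        store[chunk_id] = (data, hashlib.sha256(data).hexdigest())

    def read_chunk(store, chunk_id) -> bytes:
        data, recorded = store[chunk_id]
        # Recompute on read to verify the integrity of the data.
        if hashlib.sha256(data).hexdigest() != recorded:
            raise IOError(f"chunk {chunk_id}: checksum mismatch")
        return data

    shared_object = {}  # stand-in for the shared data object on a storage node
    write_chunk(shared_object, 0, b"simulation output, rank 0")
    assert read_chunk(shared_object, 0) == b"simulation output, rank 0"
    ```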

  14. SHARAF: The Canadian Shared Authority File Project.

    ERIC Educational Resources Information Center

    MacIntosh, Helen

    1982-01-01

    Describes history, operating procedures, and current activities of group of users of the University of Toronto Library Automation System (UTLAS) who cooperated with each other, the bibliographic utility, and the National Library of Canada to produce an automated authority control system, termed Shared Authority File (SHARAF). Five references are…

  15. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    NASA Astrophysics Data System (ADS)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon by a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on StorNext shared file system software in the Chang'E-3 mission, and the system's performance fully meets the data storage requirements of the Miyun ground station. StorNext is a high-performance shared file system that allows multiple servers running different operating systems to access the file system at the same time, and supports access to data over a variety of topologies, such as SAN and LAN. StorNext focuses on data protection and big data management; Quantum has announced that more than 70,000 StorNext licenses have been sold worldwide and that its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 downlink data and the management of local data storage. The station completes exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control of the data receiving equipment. The ground station applied the StorNext-based SAN storage network system to receive and manage data reliably. The computer system at the Miyun ground station is composed of operational servers, application workstations, and storage equipment, so the storage system needs a shared file system that supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s, so the network throughput of the file system must be no less than 240 MB/s. At the same time, the maximum size of each data file is up to 810 GB. As integrated, the sharing system can provide 1020 MB/s of simultaneous write bandwidth. When the master storage server fails, the backup storage server takes over normal service; client reads and writes are not affected, and the switchover time is less than 5 s. The design and the integrated storage system meet user requirements. However, an all-fibre SAN is expensive, and SCSI hard disk transfer rates may still be the bottleneck of the entire storage system. StorNext provides users with efficient sharing, management, and automatic archiving of large numbers of files, together with hardware solutions, and occupies a leading position in big data management, but it has drawbacks. First, StorNext software is expensive and is licensed per site, so when the network scale is large the purchase cost is very high. Second, tuning StorNext's parameters places high demands on the skills of technical staff, and when a problem occurs it is difficult to diagnose.

  16. Digital Libraries: The Next Generation in File System Technology.

    ERIC Educational Resources Information Center

    Bowman, Mic; Camargo, Bill

    1998-01-01

    Examines file sharing within corporations that use wide-area, distributed file systems. Applications and user interactions strongly suggest that the addition of services typically associated with digital libraries (content-based file location, strongly typed objects, representation of complex relationships between documents, and extrinsic…

  17. P2P watch: personal health information detection in peer-to-peer file-sharing networks.

    PubMed

    Sokolova, Marina; El Emam, Khaled; Arbuckle, Luk; Neri, Emilio; Rose, Sean; Jonker, Elizabeth

    2012-07-09

    Users of peer-to-peer (P2P) file-sharing networks risk the inadvertent disclosure of personal health information (PHI). In addition to potentially causing harm to the affected individuals, this can heighten the risk of data breaches for health information custodians. Automated PHI detection tools that crawl the P2P networks can identify PHI and alert custodians. While there has been previous work on the detection of personal information in electronic health records, there has been a dearth of research on the automated detection of PHI in heterogeneous user files. Our objective was to build a system that accurately detects PHI in files sent through P2P file-sharing networks. The system, which we call P2P Watch, uses a pipeline of text processing techniques to automatically detect PHI in files exchanged through P2P networks. P2P Watch processes unstructured texts regardless of the file format, document type, and content. We developed P2P Watch to extract and analyze PHI in text files exchanged on P2P networks. We labeled texts as PHI if they contained identifiable information about a person (eg, name and date of birth) and specifics of the person's health (eg, diagnosis, prescriptions, and medical procedures). We evaluated the system's performance through its efficiency and effectiveness on 3924 files gathered from three P2P networks. P2P Watch successfully processed 3924 P2P files of unknown content. A manual examination of 1578 randomly selected files marked by the system as non-PHI confirmed that these files indeed did not contain PHI, making the false-negative detection rate equal to zero. Of 57 files marked by the system as PHI, all contained both personally identifiable information and health information: 11 files were PHI disclosures, and 46 files contained organizational materials such as unfilled insurance forms, job applications by medical professionals, and essays. PHI can be successfully detected in free-form textual files exchanged through P2P networks. Once the files with PHI are detected, affected individuals or data custodians can be alerted to take remedial action.
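
    The labeling rule the study states (a file is PHI only if it contains both identifying details and health specifics) can be illustrated with a toy filter; the patterns below are deliberately crude stand-ins, nothing like P2P Watch's actual pipeline.

    ```python
    # Toy sketch of the PHI rule: identifying information AND health specifics.
    import re

    DOB = re.compile(r"\b(19|20)\d{2}-\d{2}-\d{2}\b")        # date-like strings
    NAME = re.compile(r"\b(patient|name)\s*:\s*\S+", re.I)   # labelled names
    HEALTH = re.compile(r"\b(diagnosis|prescription|procedure)\b", re.I)

    def looks_like_phi(text: str) -> bool:
        identifying = bool(DOB.search(text) or NAME.search(text))
        health = bool(HEALTH.search(text))
        return identifying and health

    print(looks_like_phi("Name: Doe, J. DOB 1970-01-02. Diagnosis: ..."))  # True
    print(looks_like_phi("Unfilled insurance form, no patient data"))      # False
    ```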

  18. SHOEBOX: A Personal File Handling System for Textual Data. Information System Language Studies, Number 23.

    ERIC Educational Resources Information Center

    Glantz, Richard S.

    Until recently, the emphasis in information storage and retrieval systems has been towards batch-processing of large files. In contrast, SHOEBOX is designed for the unformatted, personal file collection of the computer-naive individual. Operating through display terminals in a time-sharing, interactive environment on the IBM 360, the user can…

  19. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach, which instigates lock contention problems on parallel file systems, and having one file per process, which generates a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune the parallel I/O performance of this feature on parallel file systems of the Cray XC40 system at NERSC (Cori), which include burst buffer storage and Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show performance benefits of 1.2x to 6x with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets used to store files, as optimization parameters for obtaining superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of the subfiling feature.
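
    The compromise subfiling strikes (groups of processes sharing one file each, instead of one shared file or one file per process) can be sketched with a split communicator. This uses h5py's MPI-IO driver directly and is not the HDF5 subfiling feature's actual API; it assumes an MPI-enabled h5py build.

    ```python
    # Sketch: every group of ranks_per_subfile ranks shares one HDF5 subfile,
    # reducing lock contention without producing one file per process.
    from mpi4py import MPI
    import h5py      # must be built with parallel (MPI) support
    import numpy as np

    comm = MPI.COMM_WORLD
    ranks_per_subfile = 4
    color = comm.Get_rank() // ranks_per_subfile
    sub = comm.Split(color=color, key=comm.Get_rank())  # communicator per subfile

    with h5py.File(f"output.subfile{color}.h5", "w", driver="mpio", comm=sub) as f:
        dset = f.create_dataset("data", (sub.Get_size(), 1024), dtype="f8")
        dset[sub.Get_rank()] = np.random.rand(1024)  # each rank writes one row
    ```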

  20. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File System techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
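
    A minimal sketch of the compress-on-write, decompress-on-read path this record describes, with zlib standing in for the codec and a dict standing in for the storage node.

    ```python
    # Sketch: the client compresses each chunk before it reaches the shared
    # object, and decompresses when the chunk is read back.
    import zlib

    def client_write(storage, chunk_id, data: bytes):
        # Compression happens on the compute or burst buffer node.
        storage[chunk_id] = zlib.compress(data, level=6)

    def client_read(storage, chunk_id) -> bytes:
        return zlib.decompress(storage[chunk_id])

    storage_node = {}  # stand-in for the shared data object on a storage node
    client_write(storage_node, 0, b"checkpoint block " * 1000)
    assert client_read(storage_node, 0) == b"checkpoint block " * 1000
    ```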

  1. P2P Watch: Personal Health Information Detection in Peer-to-Peer File-Sharing Networks

    PubMed Central

    El Emam, Khaled; Arbuckle, Luk; Neri, Emilio; Rose, Sean; Jonker, Elizabeth

    2012-01-01

    Background Users of peer-to-peer (P2P) file-sharing networks risk the inadvertent disclosure of personal health information (PHI). In addition to potentially causing harm to the affected individuals, this can heighten the risk of data breaches for health information custodians. Automated PHI detection tools that crawl the P2P networks can identify PHI and alert custodians. While there has been previous work on the detection of personal information in electronic health records, there has been a dearth of research on the automated detection of PHI in heterogeneous user files. Objective To build a system that accurately detects PHI in files sent through P2P file-sharing networks. The system, which we call P2P Watch, uses a pipeline of text processing techniques to automatically detect PHI in files exchanged through P2P networks. P2P Watch processes unstructured texts regardless of the file format, document type, and content. Methods We developed P2P Watch to extract and analyze PHI in text files exchanged on P2P networks. We labeled texts as PHI if they contained identifiable information about a person (eg, name and date of birth) and specifics of the person’s health (eg, diagnosis, prescriptions, and medical procedures). We evaluated the system’s performance through its efficiency and effectiveness on 3924 files gathered from three P2P networks. Results P2P Watch successfully processed 3924 P2P files of unknown content. A manual examination of 1578 randomly selected files marked by the system as non-PHI confirmed that these files indeed did not contain PHI, making the false-negative detection rate equal to zero. Of 57 files marked by the system as PHI, all contained both personally identifiable information and health information: 11 files were PHI disclosures, and 46 files contained organizational materials such as unfilled insurance forms, job applications by medical professionals, and essays. Conclusions PHI can be successfully detected in free-form textual files exchanged through P2P networks. Once the files with PHI are detected, affected individuals or data custodians can be alerted to take remedial action. PMID:22776692

  2. Sharing lattice QCD data over a widely distributed file system

    NASA Astrophysics Data System (ADS)

    Amagasa, T.; Aoki, S.; Aoki, Y.; Aoyama, T.; Doi, T.; Fukumura, K.; Ishii, N.; Ishikawa, K.-I.; Jitsumoto, H.; Kamano, H.; Konno, Y.; Matsufuru, H.; Mikami, Y.; Miura, K.; Sato, M.; Takeda, S.; Tatebe, O.; Togawa, H.; Ukawa, A.; Ukita, N.; Watanabe, Y.; Yamazaki, T.; Yoshie, T.

    2015-12-01

    JLDG is a data grid for the lattice QCD (LQCD) community in Japan. Several large research groups in Japan have been working on lattice QCD simulations using supercomputers distributed over distant sites. The JLDG provides such collaborations with an efficient method of data management and sharing. File servers installed at 9 sites are connected to the NII SINET VPN and are bound into a single file system with Gfarm. The file system looks the same from any site, so users can run analyses on a supercomputer at one site using data generated and stored in the JLDG at a different site. We present a brief description of the hardware and software of the JLDG, including a recently developed subsystem for cooperating with the HPCI shared storage, and report the performance and statistics of the JLDG. As of April 2015, 15 research groups (61 users) store their daily research data, amounting to 4.7 PB including replicas and 68 million files in total. The number of publications for works which used the JLDG is 98. The large number of publications and the recent rapid increase of disk usage convince us that the JLDG has grown into a useful infrastructure for the LQCD community in Japan.

  3. Solving data-at-rest for the storage and retrieval of files in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Knobler, Ron; Scheffel, Peter; Williams, Jonathan; Gaj, Kris; Kaps, Jens-Peter

    2013-05-01

    Based on current trends for both military and commercial applications, the use of mobile devices (e.g., smartphones and tablets) is greatly increasing. Several military applications consist of secure peer-to-peer file sharing without a centralized authority. For these military applications, if one or more of these mobile devices are lost or compromised, sensitive files can be compromised by adversaries, since COTS devices and operating systems are used. Complete system files cannot be stored on a single device, since after compromising a device, an adversary can attack the data at rest and eventually obtain the original file. Also, after a device is compromised, the existing peer-to-peer system devices must still be able to access all system files. McQ has teamed with the Cryptographic Engineering Research Group at George Mason University to develop a custom distributed file sharing system that provides a complete solution to the data-at-rest problem for resource-constrained embedded systems and mobile devices. This innovative approach scales very well to a large number of network devices, without a single point of failure. We have implemented the approach on representative mobile devices and developed an extensive system simulator to benchmark expected system performance based on detailed modeling of the network/radio characteristics, CONOPS, and secure distributed file system functionality. The simulator is highly customizable for the purpose of determining expected system performance for other network topologies and CONOPS.

  4. 75 FR 68657 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... LLC To Increase the Maximum Order Size Accepted by Floor Broker Systems From 25,000,000 Shares to 99... order size accepted by Floor broker systems from 25,000,000 shares to 99,000,000 shares. The text of the... systems shall accept a maximum order size of 99,000,000, an increase from the current 25,000,000 share...

  5. 75 FR 68656 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... Shares to 99,000,000 Shares November 2, 2010. Pursuant to Section 19(b)(1) \\1\\ of the Securities Exchange... size accepted by Floor broker systems from 25,000,000 shares to 99,000,000 shares. The text of the... systems shall accept a maximum order size of 99,000,000, an increase from the current 25,000,000 share...

  6. The eGenVar data management system—cataloguing and sharing sensitive data and metadata for the life sciences

    PubMed Central

    Razick, Sabry; Močnik, Rok; Thomas, Laurent F.; Ryeng, Einar; Drabløs, Finn; Sætrom, Pål

    2014-01-01

    Systematic data management and controlled data sharing aim at increasing reproducibility, reducing redundancy in work, and providing a way to efficiently locate complementing or contradicting information. One method of achieving this is collecting data in a central repository or in a location that is part of a federated system and providing interfaces to the data. However, certain data, such as data from biobanks or clinical studies, may, for legal and privacy reasons, often not be stored in public repositories. Instead, we describe a metadata cataloguing system and a software suite for reporting the presence of data from the life sciences domain. The system stores three types of metadata: file information, file provenance and data lineage, and content descriptions. Our software suite includes both graphical and command line interfaces that allow users to report and tag files with these different metadata types. Importantly, the files remain in their original locations with their existing access-control mechanisms in place, while our system provides descriptions of their contents and relationships. Our system and software suite thereby provide a common framework for cataloguing and sharing both public and private data. Database URL: http://bigr.medisin.ntnu.no/data/eGenVar/ PMID:24682735

  7. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  8. Library Information System Time-Sharing (LISTS) Project. Final Report.

    ERIC Educational Resources Information Center

    Black, Donald V.

    The Library Information System Time-Sharing (LISTS) experiment was based on three innovations in data processing technology: (1) the advent of computer time-sharing on third-generation machines, (2) the development of general-purpose file-management software and (3) the introduction of large, library-oriented data bases. The main body of the…

  9. Lessons Learned in Deploying the World's Largest Scale Lustre File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillow, David A; Fuller, Douglas; Wang, Feiyi

    2010-01-01

    The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed that of our previously deployed systems by a factor of 6x - 240 GB/sec, and 17x - 10 Petabytes, respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, along with our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.

  10. Characterizing parallel file-access patterns on a large-scale multiprocessor

    NASA Technical Reports Server (NTRS)

    Purakayastha, Apratim; Ellis, Carla Schlatter; Kotz, David; Nieuwejaar, Nils; Best, Michael

    1994-01-01

    Rapid increases in the computational speeds of multiprocessors have not been matched by corresponding performance enhancements in the I/O subsystem. To satisfy the large and growing I/O requirements of some parallel scientific applications, we need parallel file systems that can provide high-bandwidth and high-volume data transfer between the I/O subsystem and thousands of processors. Design of such high-performance parallel file systems depends on a thorough grasp of the expected workload. So far there have been no comprehensive usage studies of multiprocessor file systems. Our CHARISMA project intends to fill this void. The first results from our study involve an iPSC/860 at NASA Ames. This paper presents results from a different platform, the CM-5 at the National Center for Supercomputing Applications. The CHARISMA studies are unique because we collect information about every individual read and write request and about the entire mix of applications running on the machines. The results of our trace analysis lead to recommendations for parallel file system design. First, the file system should support efficient concurrent access to many files, and I/O requests from many jobs under varying load conditions. Second, it must efficiently manage large files kept open for long periods. Third, it should expect to see small requests, predominantly sequential access patterns, application-wide synchronous access, no concurrent file-sharing between jobs, appreciable byte and block sharing between processes within jobs, and strong interprocess locality. Finally, the trace data suggest that node-level write caches and collective I/O request interfaces may be useful in certain environments.

  11. 47 CFR 27.1180 - The cost-sharing formula.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-sharing formula. (a) An AWS licensee that relocates a BRS system with which it interferes is entitled to... this section. (b) C is the actual cost of relocating the system, and includes, but is not limited to... (design/path survey); installation; systems testing; FCC filing costs; site acquisition and civil works...

  12. 47 CFR 27.1180 - The cost-sharing formula.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... § 27.1180 The cost-sharing formula. (a) An AWS licensee that relocates a BRS system with which it... forth in paragraph (b) of this section. (b) C is the actual cost of relocating the system, and includes... equipment; engineering costs (design/path survey); installation; systems testing; FCC filing costs; site...

  13. 47 CFR 27.1180 - The cost-sharing formula.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-sharing formula. (a) An AWS licensee that relocates a BRS system with which it interferes is entitled to... this section. (b) C is the actual cost of relocating the system, and includes, but is not limited to... (design/path survey); installation; systems testing; FCC filing costs; site acquisition and civil works...

  14. Experimental Directory Structure (Exdir): An Alternative to HDF5 Without Introducing a New File Format

    PubMed Central

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E.; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders

    2018-01-01

    Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from interdisciplinary collaboration. With the publication of Exdir, we invite the scientific community to join the development to create an open specification that will serve as many needs as possible and as a foundation for open access to and exchange of data. PMID:29706879
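
    The layout Exdir specifies (directories for the hierarchy, YAML for metadata, NumPy binary files for datasets) is simple enough to sketch by hand, without the exdir reference library; the file and attribute names below are illustrative.

    ```python
    # Sketch of an Exdir-style layout built directly on the file system.
    from pathlib import Path
    import numpy as np
    import yaml  # pip install pyyaml

    group = Path("experiment.exdir") / "session1"   # directories = hierarchy
    group.mkdir(parents=True, exist_ok=True)

    # Human-readable metadata lives in YAML beside the data it describes.
    (group / "attributes.yaml").write_text(
        yaml.safe_dump({"subject": "rat-42", "date": "2018-01-01"}))

    # Datasets are plain binary NumPy files inside the directory hierarchy,
    # so version control systems and external tools can track them directly.
    np.save(group / "spike_times.npy", np.array([0.01, 0.5, 1.2]))
    ```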

  15. Experimental Directory Structure (Exdir): An Alternative to HDF5 Without Introducing a New File Format.

    PubMed

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders

    2018-01-01

    Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from interdisciplinary collaboration. With the publication of Exdir, we invite the scientific community to join the development to create an open specification that will serve as many needs as possible and as a foundation for open access to and exchange of data.

  16. Usage analysis of user files in UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1987-01-01

    Presented is a user-oriented analysis of short term file usage in a 4.2 BSD UNIX environment. The key aspect of this analysis is a characterization of users and files, which is a departure from the traditional approach of analyzing file references. Two characterization measures are employed: accesses-per-byte (combining fraction of a file referenced and number of references) and file size. This new approach is shown to distinguish differences in files as well as users, which can be used in efficient file system design, and in creating realistic test workloads for simulations. A multi-stage gamma distribution is shown to closely model the file usage measures. Even though overall file sharing is small, some files belonging to a bulletin board system are accessed by many users, simultaneously and otherwise. Over 50% of users referenced files owned by other users, and over 80% of all files were involved in such references. Based on the differences in files and users, suggestions to improve the system performance were also made.

  17. Highway Safety Information System guidebook for the Maine state data files. Volume 2 : single variable tabulations

    DOT National Transportation Integrated Search

    2012-10-01

    The United States and European Union (EU) share many of the same transportation research issues, challenges, and goals. They also share a belief that cooperative vehicle (also termed connected vehicle) systems, based on vehicle-to-vehicle and vehicle...

  18. Advancing Collaboration through Hydrologic Data and Model Sharing

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.

    2015-12-01

    HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative's Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called "BagIt". HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud based computation operating on resources for the execution of hydrologic models and analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis is supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare's content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.

  19. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.

  20. How strong are passwords used to protect personal health information in clinical trials?

    PubMed

    El Emam, Khaled; Moreau, Katherine; Jonker, Elizabeth

    2011-02-11

    Findings and statements about how securely personal health information is managed in clinical research are mixed. The objective of our study was to evaluate the security of practices used to transfer and share sensitive files in clinical trials. Two studies were performed. First, 15 password-protected files that were transmitted by email during regulated Canadian clinical trials were obtained. Commercial password recovery tools were used on these files to try to crack their passwords. Second, interviews with 20 study coordinators were conducted to understand file-sharing practices in clinical trials for files containing personal health information. We were able to crack the passwords for 93% of the files (14/15). Among these, 13 files contained thousands of records with sensitive health information on trial participants. The passwords tended to be relatively weak, using common names of locations, animals, car brands, and obvious numeric sequences. Patient information is commonly shared by email in the context of query resolution. Files containing personal health information are shared by email and, by posting them on shared drives with common passwords, to facilitate collaboration. If files containing sensitive patient information must be transferred by email, mechanisms to encrypt them and to ensure that password strength is high are necessary. More sophisticated collaboration tools are required to allow file sharing without password sharing. We provide recommendations to implement these practices.

  1. How Strong are Passwords Used to Protect Personal Health Information in Clinical Trials?

    PubMed Central

    Moreau, Katherine; Jonker, Elizabeth

    2011-01-01

    Background Findings and statements about how securely personal health information is managed in clinical research are mixed. Objective The objective of our study was to evaluate the security of practices used to transfer and share sensitive files in clinical trials. Methods Two studies were performed. First, 15 password-protected files that were transmitted by email during regulated Canadian clinical trials were obtained. Commercial password recovery tools were used on these files to try to crack their passwords. Second, interviews with 20 study coordinators were conducted to understand file-sharing practices in clinical trials for files containing personal health information. Results We were able to crack the passwords for 93% of the files (14/15). Among these, 13 files contained thousands of records with sensitive health information on trial participants. The passwords tended to be relatively weak, using common names of locations, animals, car brands, and obvious numeric sequences. Patient information is commonly shared by email in the context of query resolution. Files containing personal health information are shared by email and, by posting them on shared drives with common passwords, to facilitate collaboration. Conclusion If files containing sensitive patient information must be transferred by email, mechanisms to encrypt them and to ensure that password strength is high are necessary. More sophisticated collaboration tools are required to allow file sharing without password sharing. We provide recommendations to implement these practices. PMID:21317106

  2. Social Influences on User Behavior in Group Information Repositories

    ERIC Educational Resources Information Center

    Rader, Emilee Jeanne

    2009-01-01

    Group information repositories are systems for organizing and sharing files kept in a central location that all group members can access. These systems are often assumed to be tools for storage and control of files and their metadata, not tools for communication. The purpose of this research is to better understand user behavior in group…

  3. A Next-Generation Parallel File System Environment for the OLCF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillow, David A; Fuller, Douglas; Gunasekaran, Raghul

    2012-01-01

    When deployed in 2008/2009, the Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) was the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, Spider has since become a blueprint for shared Lustre environments deployed worldwide. Designed to support the parallel I/O requirements of the Jaguar XT5 system and other smaller-scale platforms at the OLCF, the upgrade to the Titan XK6 heterogeneous system will begin to push the limits of Spider's original design by mid 2013. With a doubling in total system memory and a 10x increase in FLOPS, Titan will require both higher bandwidth and larger total capacity. Our goal is to provide a 4x increase in total I/O bandwidth, from over 240 GB/sec today to 1 TB/sec, and a doubling in total capacity. While aggregate bandwidth and total capacity remain important capabilities, an equally important goal in our efforts is dramatically increasing metadata performance, currently the Achilles heel of parallel file systems at leadership scale. We present in this paper an analysis of our current I/O workloads, our operational experiences with the Spider parallel file systems, the high-level design of our Spider upgrade, and our efforts in developing benchmarks that synthesize our performance requirements based on our workload characterization studies.

  4. 75 FR 21080 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-22

    ... the new fee schedule will continue to reward those who have been using the MatchPoint system for share... into the MatchPoint system. The proposed fee schedule, which will be effective upon filing, rewards all...Point\\SM\\ System April 15, 2010. Pursuant to Section 19(b)(1) \\1\\ of the Securities Exchange Act of 1934...

  5. Data sharing system for lithography APC

    NASA Astrophysics Data System (ADS)

    Kawamura, Eiichi; Teranishi, Yoshiharu; Shimabara, Masanori

    2007-03-01

    We have developed a simple and cost-effective data sharing system between fabs for lithography advanced process control (APC). Lithography APC requires process flow, inter-layer information, history information, mask information and so on, so an inter-APC data sharing system has become necessary when lots are to be processed in multiple fabs (usually two fabs). The development cost and maintenance cost also have to be taken into account. The system handles the minimum information necessary to make trend predictions for the lots. Three types of data have to be shared for precise trend prediction. The first is device information for the lots, e.g., the process flow of the device and inter-layer information. The second is mask information from mask suppliers, e.g., pattern characteristics and pattern widths. The last is history data for the lots. Device information is an electronic file and easy to handle; the electronic file is common between APCs and is uploaded into the database. For mask information sharing, mask information described in a common format is obtained from the mask vendor via a Wide Area Network (WAN) and stored in the mask-information data server. This information is periodically transferred to one specific lithography-APC server and compiled into the database; this lithography-APC server then periodically delivers the mask information to every other lithography-APC server. The process-history data sharing system mainly consists of a function for delivering process-history data: when production lots are shipped to another fab, the product-related process-history data is delivered by the lithography-APC server at the shipping site. We have confirmed the function and effectiveness of the data sharing system.

  6. Collaborative Sharing of Multidimensional Space-time Data Using HydroShare

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Idaszak, R.; Yi, H.; Blanton, B.

    2015-12-01

    HydroShare is a collaborative environment being developed for sharing hydrological data and models. It includes capability to upload data in many formats as resources that can be shared. The HydroShare data model for resources uses a specific format for the representation of each type of data and specifies metadata common to all resource types as well as metadata unique to specific resource types. The Network Common Data Form (NetCDF) was chosen as the format for multidimensional space-time data in HydroShare. NetCDF is widely used in hydrological and other geoscience modeling because it contains self-describing metadata and supports the creation of array-oriented datasets that may include three spatial dimensions, a time dimension and other user defined dimensions. For example, NetCDF may be used to represent precipitation or surface air temperature fields that have two dimensions in space and one dimension in time. This presentation will illustrate how NetCDF files are used in HydroShare. When a NetCDF file is loaded into HydroShare, header information is extracted using the "ncdump" utility. Python functions developed for the Django web framework on which HydroShare is based, extract science metadata present in the NetCDF file, saving the user from having to enter it. Where the file follows Climate Forecast (CF) convention and Attribute Convention for Dataset Discovery (ACDD) standards, metadata is thus automatically populated. Users also have the ability to add metadata to the resource that may not have been present in the original NetCDF file. HydroShare's metadata editing functionality then writes this science metadata back into the NetCDF file to maintain consistency between the science metadata in HydroShare and the metadata in the NetCDF file. This further helps researchers easily add metadata information following the CF and ACDD conventions. Additional data inspection and subsetting functions were developed, taking advantage of Python and command line libraries for working with NetCDF files. We describe the design and implementation of these features and illustrate how NetCDF files from a modeling application may be curated in HydroShare and thus enhance reproducibility of the associated research. We also discuss future development planned for multidimensional space-time data in HydroShare.
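
    A minimal sketch of the harvesting step described above, assuming the netCDF4 package: read a file's global attributes and dimensions so that CF/ACDD-style metadata can pre-populate the resource's science metadata. HydroShare's actual Django functions differ, and the input filename is hypothetical.

    ```python
    # Sketch: pull global attributes and dimensions out of a NetCDF file.
    from netCDF4 import Dataset  # pip install netCDF4

    def harvest_metadata(path):
        with Dataset(path, "r") as nc:
            # Global attributes (e.g., title, summary, keywords under ACDD).
            attrs = {name: getattr(nc, name) for name in nc.ncattrs()}
            dims = {name: len(dim) for name, dim in nc.dimensions.items()}
        return {"attributes": attrs, "dimensions": dims}

    meta = harvest_metadata("precipitation.nc")  # hypothetical input file
    print(meta["attributes"].get("title"), meta["dimensions"])
    ```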

  7. File-Sharing among College Students: Moral and Legal Implications

    ERIC Educational Resources Information Center

    Cockrum, Colton Dwayne

    2010-01-01

    This study was designed to explore the phenomenon of college students who illegally file-share. The main research question was, "What are the experiences of college students who file-share and what are their perspectives on the moral and legal implications for doing so?" Data were collected from six students using interviews, focus…

  8. Integrating hydrologic modeling web services with online data sharing to prepare, store, and execute models in hydrology

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.

    2017-12-01

    Web-based apps, web services, and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, transparency, and reproducibility of modeling workflows and results. However, challenges still exist in the real application of these capabilities and in the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality to support reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, models, and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser-based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat, or modify the modeling work. This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing system to enable model development and execution. The entire system, comprising the HydroShare app, HydroShare, and HydroDS web services, is open source and contributes to the capability for web-based modeling research.
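
    The request flow described above (an app calls a processing service, then stores the result in the sharing system) can be illustrated with a short hedged sketch; every URL, endpoint, parameter, and token below is a hypothetical stand-in, not the real HydroDS or HydroShare API.

      # Illustrative pattern only: call a data-preparation web service,
      # then register its output in a data-sharing system. All names here
      # are hypothetical assumptions, not actual HydroDS/HydroShare calls.
      import requests

      HYDRODS = "https://hydrods.example.org/api"        # hypothetical
      HYDROSHARE = "https://hydroshare.example.org/api"  # hypothetical

      def prepare_and_share(watershed_id, token):
          # Ask the modeling service to prepare snowmelt-model inputs.
          job = requests.post(f"{HYDRODS}/prepare-inputs",
                              json={"watershed": watershed_id},
                              headers={"Authorization": f"Bearer {token}"})
          job.raise_for_status()
          output_url = job.json()["output_url"]

          # Stage the generated inputs into the sharing system as a resource.
          res = requests.post(f"{HYDROSHARE}/resources",
                              json={"title": f"Model inputs for {watershed_id}",
                                    "source": output_url},
                              headers={"Authorization": f"Bearer {token}"})
          res.raise_for_status()
          return res.json()["resource_id"]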

  9. A Document-Based EHR System That Controls the Disclosure of Clinical Documents Using an Access Control List File Based on the HL7 CDA Header.

    PubMed

    Takeda, Toshihiro; Ueda, Kanayo; Nakagawa, Akito; Manabe, Shirou; Okada, Katsuki; Mihara, Naoki; Matsumura, Yasushi

    2017-01-01

    Electronic health record (EHR) systems are necessary for the sharing of medical information between care delivery organizations (CDOs). We developed a document-based EHR system in which all of the PDF documents that are stored in our electronic medical record system can be disclosed to selected target CDOs. An access control list (ACL) file was designed based on the HL7 CDA header to manage the information that is disclosed.
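
    A toy sketch of the disclosure check this record describes might look as follows; the ACL layout and header field names are illustrative assumptions, not the paper's actual file design.

      # Hedged sketch: an access control list keyed by fields taken from an
      # HL7 CDA-style document header. Field names and ACL layout are
      # assumptions made for illustration.
      ACL = {
          # document_type -> set of CDO identifiers allowed to view it
          "DischargeSummary": {"cdo-001", "cdo-002"},
          "LabReport": {"cdo-002"},
      }

      def may_disclose(cda_header, requesting_cdo):
          """Allow disclosure only if the requesting CDO is listed for this type."""
          doc_type = cda_header.get("document_type")
          return requesting_cdo in ACL.get(doc_type, set())

      print(may_disclose({"document_type": "LabReport"}, "cdo-002"))  # True
      print(may_disclose({"document_type": "LabReport"}, "cdo-001"))  # False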

  10. Social Networking Adapted for Distributed Scientific Collaboration

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    Share is a social networking site with novel, specially designed feature sets to enable simultaneous remote collaboration and sharing of large data sets among scientists. The site will include not only the standard features found on popular consumer-oriented social networking sites such as Facebook and Myspace, but also a number of powerful tools to extend its functionality to a science collaboration site. A Virtual Observatory is a promising technology for making data accessible from various missions and instruments through a Web browser. Sci-Share augments services provided by Virtual Observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase science returns from NASA missions. Sci-Share also enables better utilization of NASA s high-performance computing resources by providing an easy and central mechanism to access and share large files on users space or those saved on mass storage. The most common means of remote scientific collaboration today remains the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. Each of these tools has well-known limitations. Sci-Share transforms the social networking paradigm into a scientific collaboration environment by offering powerful tools for cooperative discourse and digital content sharing. Sci-Share differentiates itself by serving as an online repository for users digital content with the following unique features: a) Sharing of any file type, any size, from anywhere; b) Creation of projects and groups for controlled sharing; c) Module for sharing files on HPC (High Performance Computing) sites; d) Universal accessibility of staged files as embedded links on other sites (e.g. Facebook) and tools (e.g. e-mail); e) Drag-and-drop transfer of large files, replacing awkward e-mail attachments (and file size limitations); f) Enterprise-level data and messaging encryption; and g) Easy-to-use intuitive workflow.

  11. Hierarchical Data Distribution Scheme for Peer-to-Peer Networks

    NASA Astrophysics Data System (ADS)

    Bhushan, Shashi; Dave, M.; Patel, R. B.

    2010-11-01

    In the past few years, peer-to-peer (P2P) networks have become an extremely popular mechanism for large-scale content sharing. P2P systems have focused on specific application domains (e.g. music files, video files) or on providing file system like capabilities. P2P is a powerful paradigm, which provides a large-scale and cost-effective mechanism for data sharing, and a P2P system may be used for storing data globally. Can a conventional database be implemented on a P2P system? Successful implementations of conventional databases on P2P systems are yet to be reported. In this paper we present a mathematical model for the replication of partitions and a hierarchy-based data distribution scheme for P2P networks. We also analyze the resource utilization and throughput of the P2P system with respect to availability, when a conventional database is implemented over the P2P system with variable query rate. Simulation results show that database partitions placed on the peers with a higher availability factor perform better. Degradation index, throughput, and resource utilization are the parameters evaluated with respect to the availability factor.
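
    The role of the availability factor in replica placement can be made concrete with standard reliability arithmetic: a partition replicated on k independently available peers is unreachable only if every replica is offline at once. The sketch below is generic reliability math, not the paper's specific model.

      # Availability of a replicated partition: if peer i is online with
      # probability a_i (independently), the partition is available unless
      # all replicas are offline, so A = 1 - prod(1 - a_i).
      from math import prod

      def partition_availability(peer_availabilities):
          return 1.0 - prod(1.0 - a for a in peer_availabilities)

      # Placing replicas on higher-availability peers pays off quickly:
      # three 0.9 peers beat three 0.6 peers by a wide margin.
      print(partition_availability([0.9, 0.9, 0.9]))  # 0.999
      print(partition_availability([0.6, 0.6, 0.6]))  # 0.936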

  12. Fair-share scheduling algorithm for a tertiary storage system

    NASA Astrophysics Data System (ADS)

    Jakl, Pavel; Lauret, Jérôme; Šumbera, Michal

    2010-04-01

    Any experiment facing petabyte-scale problems is in need of a highly scalable mass storage system (MSS) to keep a permanent copy of its valuable data. But beyond the permanent storage aspects, the sheer amount of data makes complete data-set availability on live storage (centralized or aggregated space such as that provided by Scalla/Xrootd) cost prohibitive, implying that a dynamic population from MSS to faster storage is needed. One of the most challenging aspects of dealing with an MSS is the robotic tape component. If a robotic system is used as the primary storage solution, the intrinsically long access times (latencies) can dramatically affect overall performance. To speed the retrieval of such data, one could organize the requests according to criteria aimed at delivering maximal data throughput. However, such approaches are often orthogonal to fair resource allocation, and a trade-off between quality of service, responsiveness, and throughput is necessary for achieving an optimal and practical implementation of a truly fair-share oriented file restore policy. Starting from an explanation of the key criteria of such a policy, we present evaluations and comparisons of three different MSS file restoration algorithms which meet fair-share requirements, and discuss their respective merits. We quantify their impact on a typical file restoration cycle for the RHIC/STAR experimental setup, within a development, analysis, and production environment relying on a shared MSS service [1].
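
    One plausible shape of a fair-share restore policy is sketched below for illustration only; it is not one of the three algorithms the paper evaluates. The idea: serve the least-served user first, and among that user's requests prefer files on the currently mounted tape to limit remount latency.

      # Toy fair-share restore scheduler: fairness picks the user, tape
      # locality picks which of that user's requests to serve next.
      from collections import defaultdict

      def next_request(pending, served_bytes, mounted_tape):
          """pending: {user: [(tape_id, size), ...]}; returns (user, request)."""
          candidates = [u for u, reqs in pending.items() if reqs]
          # Fair-share: the least-served user goes first.
          user = min(candidates, key=lambda u: served_bytes[u])
          reqs = pending[user]
          # Throughput: among that user's requests, prefer the mounted tape,
          # then the smallest file.
          reqs.sort(key=lambda r: (r[0] != mounted_tape, r[1]))
          return user, reqs.pop(0)

      pending = {"alice": [("T1", 5), ("T2", 1)], "bob": [("T1", 3)]}
      served = defaultdict(int)
      print(next_request(pending, served, mounted_tape="T1"))
      # ('alice', ('T1', 5)) -- alice is least served; T1 avoids a remount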

  13. The inadvertent disclosure of personal health information through peer-to-peer file sharing programs

    PubMed Central

    Neri, Emilio; Jonker, Elizabeth; Sokolova, Marina; Peyton, Liam; Neisa, Angelica; Scassa, Teresa

    2010-01-01

    Objective There has been a consistent concern about the inadvertent disclosure of personal information through peer-to-peer file sharing applications, such as Limewire and Morpheus. Examples of personal health and financial information being exposed have been published. We wanted to estimate the extent to which personal health information (PHI) is being disclosed in this way, and compare that to the extent of disclosure of personal financial information (PFI). Design After careful review and approval of our protocol by our institutional research ethics board, files were downloaded from peer-to-peer file sharing networks and manually analyzed for the presence of PHI and PFI. The geographic region of the IP addresses was determined, and classified as either USA or Canada. Measurement We estimated the proportion of files that contain personal health and financial information for each region. We also estimated the proportion of search terms that return files with personal health and financial information. We ascertained and discuss the ethical issues related to this study. Results Approximately 0.4% of Canadian IP addresses had PHI, as did 0.5% of US IP addresses. There was more disclosure of financial information, at 1.7% of Canadian IP addresses and 4.7% of US IP addresses. An analysis of search terms used in these file sharing networks showed that a small percentage of the terms would return PHI and PFI files (ie, there are people successfully searching for PFI and PHI on the peer-to-peer file sharing networks). Conclusion There is a real risk of inadvertent disclosure of PHI through peer-to-peer file sharing networks, although the risk is not as large as for PFI. Anyone keeping PHI on their computers should avoid installing file sharing applications on their computers, or if they have to use such tools, actively manage the risks of inadvertent disclosure of their, their family's, their clients', or patients' PHI. PMID:20190057

  14. Dagik: A Quick Look System of the Geospace Data in KML format

    NASA Astrophysics Data System (ADS)

    Yoshida, D.; Saito, A.

    2007-12-01

    Dagik (Daily Geospace data in KML) is a quick-look plot sharing system using Google Earth as a data browser. It provides daily data lists that contain network links to the KML/KMZ files of various geospace data. KML is a markup language to display data on Google Earth, and KMZ is a compressed form of KML. Users can browse the KML/KMZ files with the following procedure: 1) download "dagik.kml" from the Dagik homepage (http://www-step.kugi.kyoto-u.ac.jp/dagik/) and open it with Google Earth, 2) select a date, 3) select the data type to browse. Dagik is a collection of network links to KML/KMZ files. The daily Dagik files are available since 1957, though they contain only geomagnetic index data in the early periods. There are three activities of Dagik. The first is the generation of the daily data lists, the second is to provide several useful tools, such as observatory lists, and the third is to assist researchers in making KML/KMZ data plots. To make plot browsing easy, there are three rules for the Dagik plot format: 1) one file contains one UT day of data, 2) use a common plot panel size, 3) share the data list. There are three steps to join Dagik as a plot provider: 1) make KML/KMZ files of the data, 2) put the KML/KMZ files on the Web, 3) notify the Dagik group of the URL address and description of the files. The KML/KMZ files will then be included in the Dagik data list. As of September 2007, quick looks of several geospace data, such as GPS total electron content data, ionosonde data, magnetometer data, FUV imaging data by a satellite, ground-based airglow data, and satellite footprint data, are available. The system of Dagik is introduced in the presentation.
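
    The daily-list mechanism is essentially a KML file of network links, one per dataset per UT day. A minimal generator is sketched below; the dataset names and plot URLs are hypothetical.

      # Sketch of the Dagik pattern: a daily list file that network-links
      # to per-dataset KML/KMZ plots. URLs and names are hypothetical.
      def daily_dagik_kml(date, entries):
          """entries: list of (name, url) pairs for one UT day."""
          links = "\n".join(
              f"""  <NetworkLink>
          <name>{name} {date}</name>
          <Link><href>{url}</href></Link>
        </NetworkLink>""" for name, url in entries)
          return f"""<?xml version="1.0" encoding="UTF-8"?>
      <kml xmlns="http://www.opengis.net/kml/2.2">
      <Document>
      {links}
      </Document>
      </kml>"""

      print(daily_dagik_kml("2007-09-15",
            [("GPS TEC", "https://example.org/tec_20070915.kmz")]))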

  15. 76 FR 49520 - Self-Regulatory Organizations; Chicago Stock Exchange, Inc.; Notice of Filing of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-10

    ...- Institutional Broker units should not use trading or order management systems which permit them to share... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-65035; File No. SR-CHX-2011-20] Self-Regulatory... Unit Within the Firm August 4, 2011. Pursuant to Section 19(b)(1) of the Securities Exchange Act of...

  16. 77 FR 3527 - Self-Regulatory Organizations; Chicago Stock Exchange, Inc.; Notice of Filing of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-24

    ... order management systems which permit them to share information about orders or transactions being... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-66177; File No. SR-CHX-2012-02] Self-Regulatory.... Pursuant to Section 19(b)(1) of the Securities Exchange Act of 1934 (``Act'') \\1\\ and Rule 19b-4 thereunder...

  17. The Scalable Checkpoint/Restart Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.

  18. A peer-to-peer music sharing system based on query-by-humming

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Chang, Xinglong; Zhao, Zheng; Zhang, Yebin; Shi, Qingwei

    2007-09-01

    Today, the main traffic in peer-to-peer (P2P) networks is still multimedia files, including large numbers of music files. The study of Music Information Retrieval (MIR) has brought many encouraging achievements to the music search area. Nevertheless, research on music search based on MIR in P2P networks is still insufficient. Query by Humming (QBH) is one MIR technology studied for years. In this paper, we present a server-based P2P music sharing system which is based on QBH and integrated with a Hierarchical Index Structure (HIS) to enhance the relation between surface data and potential information. HIS evolves automatically depending on the music-related items carried by each peer, such as MIDI files, lyrics, and so forth. Instead of adding a large amount of redundancy, the system generates a small index for multiple search inputs, which greatly improves on the traditional keyword-based text search mode. As network bandwidth, speed, etc. are no longer a bottleneck of internet service, the accessibility and accuracy of information provided by the internet are of increasing concern to end users.

  19. 47 CFR 27.1164 - The cost-sharing formula.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... control equipment; engineering costs (design/path survey); installation; systems testing; FCC filing costs... plant upgrade (if required); electrical grounding systems; Heating Ventilation and Air Conditioning (HVAC) (if required); alternate transport equipment; and leased facilities. Increased recurring costs...

  20. 47 CFR 27.1164 - The cost-sharing formula.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... control equipment; engineering costs (design/path survey); installation; systems testing; FCC filing costs... plant upgrade (if required); electrical grounding systems; Heating Ventilation and Air Conditioning (HVAC) (if required); alternate transport equipment; and leased facilities. Increased recurring costs...

  1. Considerations of persistence and security in CHOICES, an object-oriented operating system

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Madany, Peter W.

    1990-01-01

    The current design of the CHOICES persistent object implementation is summarized, and research in progress is outlined. CHOICES is implemented as an object-oriented system, and persistent objects appear to simplify and unify many functions of the system. It is demonstrated that persistent data can be accessed through an object-oriented file system model as efficiently as by an existing optimized commercial file system. The object-oriented file system can be specialized to provide an object store for persistent objects. The problems that arise in building an efficient persistent object scheme in a 32-bit virtual address space that only uses paging are described. Despite its limitations, the solution presented allows quite large numbers of objects to be active simultaneously, and permits sharing and efficient method calls.

  2. 47 CFR 27.1164 - The cost-sharing formula.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... systems; Heating Ventilation and Air Conditioning (HVAC) (if required); alternate transport equipment; and.../path survey); installation; systems testing; FCC filing costs; site acquisition and civil works; zoning... defined as the actual costs associated with providing a replacement system, such as equipment and...

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haynes, R.A.

    The Network File System (NFS) is used in UNIX-based networks to provide transparent file sharing between heterogeneous systems. Although NFS is well-known for being weak in security, it is widely used and has become a de facto standard. This paper examines the user authentication shortcomings of NFS and the approach Sandia National Laboratories has taken to strengthen it with Kerberos. The implementation on a Cray Y-MP8/864 running UNICOS is described and resource/performance issues are discussed. 4 refs., 4 figs.

  4. Project BALLOTS: Bibliographic Automation of Large Library Operations Using a Time-Sharing System. Progress Report (3/27/69 - 6/26/69).

    ERIC Educational Resources Information Center

    Veaner, Allen B.

    Project BALLOTS is a large-scale library automation development project of the Stanford University Libraries which has demonstrated the feasibility of conducting on-line interactive searches of complex bibliographic files, with a large number of users working simultaneously in the same or different files. This report documents the continuing…

  5. Discovery in a World of Mashups

    NASA Astrophysics Data System (ADS)

    King, T. A.; Ritschel, B.; Hourcle, J. A.; Moon, I. S.

    2014-12-01

    When the first digital information was stored electronically, discovery of what existed was through file names and the organization of the file system. With the advent of networks, digital information was shared on a wider scale, but discovery remained based on file and folder names. With a growing number of information sources, name-based discovery quickly became ineffective. The keyword-based search engine was one of the first types of mashup in the world of Web 1.0. Embedding links from one document to another established prescribed relationships between files, and the world of Web 2.0 was formed. Search engines like Google used the links to improve search results, and a worldwide mashup was formed. While a vast improvement, the need for semantic (meaning-rich) discovery was clear, especially for the discovery of scientific data. In response, every science discipline defined schemas to describe their type of data. Some core schemas were shared, but most schemas are custom tailored even though they share many common concepts. As with the networking of information sources, science increasingly relies on data from multiple disciplines. So there is a need to bring together multiple sources of semantically rich information. We explore how harvesting, conceptual mapping, facet-based search engines, search term promotion, and style sheets can be combined to create the next generation of mashups in the emerging world of Web 3.0. We use NASA's Planetary Data System and NASA's Heliophysics Data Environment to illustrate how to create a multi-discipline mashup.

  6. 47 CFR 24.243 - The cost-sharing formula.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...; monitoring or control equipment; engineering costs (design/path survey); installation; systems testing; FCC filing costs; site acquisition and civil works; zoning costs; training; disposal of old equipment; test...

  7. 78 FR 11258 - Self-Regulatory Organizations; Chicago Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-15

    ....0022/share in all Derivative Securities Products priced $1.00/share or more executed in the Regular....0022/share in all Derivative Securities Products priced $1.00/share or more executed in the Regular... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-68894; File No. SR-CHX-2013-06] Self-Regulatory...

  8. 47 CFR 27.1164 - The cost-sharing formula.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...); installation; systems testing; FCC filing costs; site acquisition and civil works; zoning costs; training... upgrades for interference control; power plant upgrade (if required); electrical grounding systems; Heating Ventilation and Air Conditioning (HVAC) (if required); alternate transport equipment; and leased facilities...

  9. Space vehicle field unit and ground station system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judd, Stephen; Dallmann, Nicholas; Delapp, Jerry

    A field unit and ground station may use commercial off-the-shelf (COTS) components and share a common architecture, where differences in functionality are governed by software. The field units and ground stations may be easy to deploy, relatively inexpensive, and be relatively easy to operate. A novel file system may be used where datagrams of a file may be stored across multiple drives and/or devices. The datagrams may be received out of order and reassembled at the receiving device.

  10. Space vehicle field unit and ground station system

    DOEpatents

    Judd, Stephen; Dallmann, Nicholas; Delapp, Jerry; Proicou, Michael; Seitz, Daniel; Michel, John; Enemark, Donald

    2016-10-25

    A field unit and ground station may use commercial off-the-shelf (COTS) components and share a common architecture, where differences in functionality are governed by software. The field units and ground stations may be easy to deploy, relatively inexpensive, and be relatively easy to operate. A novel file system may be used where datagrams of a file may be stored across multiple drives and/or devices. The datagrams may be received out of order and reassembled at the receiving device.
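
    The datagram reassembly described in these two records can be illustrated with a short sketch; the (sequence, total, payload) datagram layout is an assumption made for the example, not the patented format.

      # Illustrative reassembly: datagrams of a file arrive out of order
      # (possibly from different drives or devices) and are put back in
      # sequence-number order at the receiver.
      def reassemble(datagrams):
          """datagrams: iterable of (seq, total, payload) tuples, any order."""
          received = {}
          total = None
          for seq, n, payload in datagrams:
              total = n
              received[seq] = payload
          if total is None or len(received) != total:
              raise ValueError("file incomplete: missing datagrams")
          return b"".join(received[i] for i in range(total))

      # Datagrams arrive out of order but reassemble to the original bytes.
      print(reassemble([(2, 3, b"c"), (0, 3, b"a"), (1, 3, b"b")]))  # b'abc'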

  11. Balancing data sharing requirements for analyses with data sensitivity

    USGS Publications Warehouse

    Jarnevich, C.S.; Graham, J.J.; Newman, G.J.; Crall, A.W.; Stohlgren, T.J.

    2007-01-01

    Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species' current distributions from data sharing is critical for the creation of watch lists and an early warning/rapid response system, and for model generation for the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' to quarter-quadrangle grid cells in mapping applications and downloaded files, but the actual locations are available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data. © 2006 Springer Science+Business Media B.V.
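
    The 'fuzzing' step can be illustrated by snapping coordinates to the corner of their containing grid cell before display or download, while keeping the true coordinates for analyses. The sketch below assumes a quarter-quadrangle is a 3.75-arcminute cell; that cell size is the sketch's assumption, not a detail from the paper.

      # Snap a precise location to the southwest corner of its grid cell.
      import math

      CELL_DEG = 3.75 / 60.0  # assumed quarter-quad: 3.75 arcmin in degrees

      def fuzz(lat, lon, cell=CELL_DEG):
          """Return the southwest corner of the cell containing (lat, lon)."""
          return (math.floor(lat / cell) * cell,
                  math.floor(lon / cell) * cell)

      true_location = (40.01234, -105.25678)  # kept privately for analyses
      print(fuzz(*true_location))             # published, cell-resolution point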

  12. On-Line Systems: Promise and Pitfalls

    ERIC Educational Resources Information Center

    Cuadra, Carlos A.

    1971-01-01

    The virtues of interactive systems are speed, intimacy, and - if time-sharing is involved - economy. The major problems are the cost of the large computers and files necessary for bibliographic data, the still-high cost of communications, and the generally poor design of the user-system interfaces. (Author)

  13. Steganography on multiple MP3 files using spread spectrum and Shamir's secret sharing

    NASA Astrophysics Data System (ADS)

    Yoeseph, N. M.; Purnomo, F. A.; Riasti, B. K.; Safiie, M. A.; Hidayat, T. N.

    2016-11-01

    The purpose of steganography is to hide data in another medium. In order to increase the security of the data, steganography is often combined with cryptography. The weakness of this combined technique is that the data are centralized. Therefore, a steganography technique was developed using a combination of spread spectrum and a secret sharing technique. In steganography with secret sharing, shares of the data are created and hidden in several media; the media used to conceal the shares were MP3 files. The hiding technique used was spread spectrum, and the secret sharing scheme used was Shamir's Secret Sharing. The results showed that steganography using spread spectrum combined with Shamir's Secret Sharing, with MP3 files as the medium, produces a technique that can hide data in several covers. To extract and reconstruct the data hidden in the stego objects, a number of stego objects greater than or equal to the threshold is needed. Furthermore, the stego objects were imperceptible and robust.
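
    The threshold behavior comes from Shamir's Secret Sharing, which splits a secret into n shares such that any k of them reconstruct it by Lagrange interpolation over a prime field. A compact sketch of the arithmetic (without the MP3 spread-spectrum embedding) follows; the prime and parameters are illustrative.

      # Shamir's (k, n) secret sharing over a prime field.
      import random

      P = 2**61 - 1  # a Mersenne prime large enough for small secrets

      def make_shares(secret, k, n):
          """Split secret into n shares, any k of which reconstruct it."""
          coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
          def f(x):
              acc = 0
              for c in reversed(coeffs):  # Horner evaluation of the polynomial
                  acc = (acc * x + c) % P
              return acc
          return [(x, f(x)) for x in range(1, n + 1)]

      def reconstruct(shares):
          """Lagrange interpolation at x = 0 recovers the secret."""
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, -1, P)) % P
          return secret

      shares = make_shares(123456789, k=3, n=5)
      print(reconstruct(shares[:3]))  # 123456789, from any 3 of the 5 shares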

  14. 76 FR 56248 - Self-Regulatory Organizations; Chicago Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... this tiered schedule, there were three volume-based Tiers and the rate of applicable take fees and provide credits varied based upon the Tier into which a Participant falls. \\5\\ Through its filing on....0026/share to $0.0025/share for the lowest Tier of activity, from $0.0028/share to $0.0027/share in the...

  15. 47 CFR 27.1164 - The cost-sharing formula.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... control equipment; engineering costs (design/path survey); installation; systems testing; FCC filing costs; site acquisition and civil works; zoning costs; training; disposal of old equipment; test equipment... a replacement system, such as equipment and engineering expenses. C may not exceed $250,000 per...

  16. Expanding Software Productivity and Power while Reducing Costs.

    ERIC Educational Resources Information Center

    Winer, Ellen N.

    1988-01-01

    Microcomputer efficiency and software economy can be achieved through file transfer and data sharing. Costs can be reduced by purchasing computer systems that allow for expansion and portability of data. (MLF)

  17. 75 FR 14478 - Self-Regulatory Organizations; The Options Clearing Corporation; Notice of Filing of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-61692; File No. SR-OCC-2010-03] Self[dash]Regulatory Organizations; The Options Clearing Corporation; Notice of Filing of Proposed Rule Change Relating to ETFS Palladium Shares and ETFS Platinum Shares Correction In notice document 2010-5914 beginning...

  18. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
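
    The strided layout in the claim is simple offset arithmetic: process r of N writes its i-th block at offset (i*N + r) * block_size in the shared file, and the burst buffer later drains the file to disk sequentially. A worked illustration with arbitrary example values:

      # Offsets at which one rank writes its blocks in a shared checkpoint
      # file under a strided layout. Block size and counts are examples.
      def strided_offsets(rank, nprocs, block_size, nblocks):
          """Offsets in the shared file where `rank` writes its blocks."""
          return [(i * nprocs + rank) * block_size for i in range(nblocks)]

      # With 4 processes and 1 MiB blocks, rank 2 writes at 2, 6, 10 MiB.
      print(strided_offsets(rank=2, nprocs=4, block_size=1 << 20, nblocks=3))
      # [2097152, 6291456, 10485760]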

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan

    MapReduce is increasingly becoming a popular framework and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).

  20. Barriers to success: physical separation optimizes event-file retrieval in shared workspaces.

    PubMed

    Klempova, Bibiana; Liepelt, Roman

    2017-07-08

    Sharing tasks with other persons can simplify our work and life, but seeing and hearing other people's actions may also be very distracting. The joint Simon effect (JSE) is a standard measure of referential response coding when two persons share a Simon task. Sequential modulations of the joint Simon effect (smJSE) are interpreted as a measure of event-file processing containing stimulus information, response information, and information about the currently relevant control state in a given social situation. This study tested effects of physical (Experiment 1) and virtual (Experiment 2) separation of shared workspaces on referential coding and event-file processing using a joint Simon task. In Experiment 1, participants performed this task in individual (go-nogo), joint, and standard Simon task conditions with and without a transparent curtain (physical separation) placed along the imagined vertical midline of the monitor. In Experiment 2, participants performed the same tasks with and without background music (virtual separation). For response times, physical separation enhanced event-file retrieval, indicated by a larger smJSE in the joint Simon task with the curtain than without it (Experiment 1), but did not change referential response coding. In line with this, we also found evidence for enhanced event-file processing through physical separation in the joint Simon task for error rates. Virtual separation neither impacted event-file processing nor referential coding, but generally slowed down response times in the joint Simon task. For errors, virtual separation hampered event-file processing in the joint Simon task. For the cognitively more demanding standard two-choice Simon task, we found music to have a degrading effect on event-file retrieval for response times. Our findings suggest that adding a physical separation optimizes event-file processing in shared workspaces, while music seems to lead to a more relaxed task processing mode under shared task conditions. In addition, music had an interfering impact on joint error processing and, more generally, when dealing with a more complex task in isolation.

  1. Mapping DICOM to OpenDocument format

    NASA Astrophysics Data System (ADS)

    Yu, Cong; Yao, Zhihong

    2009-02-01

    In order to enhance the readability, extensibility, and sharing of DICOM files, we have introduced XML into the DICOM file system (SPIE Volume 5748)[1] and the multilayer tree structure into DICOM (SPIE Volume 6145)[2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), for it is also based on XML. As a result, the new format realizes the separation of content (including text content and images) and display style. Meanwhile, since OpenDocument files take the format of a ZIP compressed archive, the new kind of DICOM files can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.

  2. Strategies for Sharing Seismic Data Among Multiple Computer Platforms

    NASA Astrophysics Data System (ADS)

    Baker, L. M.; Fletcher, J. B.

    2001-12-01

    Seismic waveform data is readily available from a variety of sources, but it often comes in a distinct, instrument-specific data format. For example, data may be from portable seismographs, such as those made by Refraction Technology or Kinemetrics, from permanent seismograph arrays, such as the USGS Parkfield Dense Array, from public data centers, such as the IRIS Data Center, or from personal communication with other researchers through e-mail or ftp. A computer must be selected to import the data - usually whichever is the most suitable for reading the originating format. However, the computer best suited for a specific analysis may not be the same. When copies of the data are then made for analysis, a proliferation of copies of the same data results, in possibly incompatible, computer-specific formats. In addition, if an error is detected and corrected in one copy, or some other change is made, all the other copies must be updated to preserve their validity. Keeping track of what data is available, where it is located, and which copy is authoritative requires an effort that is easy to neglect. We solve this problem by importing waveform data to a shared network file server that is accessible to all our computers on our campus LAN. We use a Network Appliance file server running Sun's Network File System (NFS) software. Using an NFS client software package on each analysis computer, waveform data can then be read by our MatLab or Fortran applications without first copying the data. Since there is a single copy of the waveform data in a single location, the NFS file system hierarchy provides an implicit complete waveform data catalog and the single copy is inherently authoritative. Another part of our solution is to convert the original data into a blocked-binary format (known historically as USGS DR100 or VFBB format) that is interpreted by MatLab or Fortran library routines available on each computer so that the idiosyncrasies of each machine are not visible to the user. Commercial software packages, such as MatLab, also have the ability to share data in their own formats across multiple computer platforms. Our Fortran applications can create plot files in Adobe PostScript, Illustrator, and Portable Document Format (PDF) formats. Vendor support for reading these files is readily available on multiple computer platforms. We will illustrate by example our strategies for sharing seismic data among our multiple computer platforms, and we will discuss our positive and negative experiences. We will include our solutions for handling the different byte ordering, floating-point formats, and text file ``end-of-line'' conventions on the various computer platforms we use (6 different operating systems on 5 processor architectures).

  3. Application of XML in DICOM

    NASA Astrophysics Data System (ADS)

    You, Xiaozhen; Yao, Zhihong

    2005-04-01

    As a standard of communication and storage for medical digital images, DICOM has been playing a very important role in the integration of hospital information. In DICOM, tags are expressed by numbers, and only standard data elements can be shared by looking up the Data Dictionary, while private tags can not. As such, a DICOM file's readability and extensibility are limited. In addition, reading DICOM files needs special software. In our research, we introduced XML into DICOM, defining an XML-based DICOM special transfer format, XML-DCM, and a DICOM storage format, X-DCM, as well as developing a program package to realize format interchange among DICOM, XML-DCM, and X-DCM. XML-DCM is based on the DICOM structure while replacing numeric tags with accessible XML character string tags. The merits are as follows: a) every character string tag of XML-DCM has an explicit meaning, so users can understand standard data elements and private data elements easily without looking up the Data Dictionary; in this way, the readability and data sharing of DICOM files are greatly improved. b) According to requirements, users can set new character string tags with explicit meaning in their own system to extend the capacity of data elements. c) Users can read the medical image and associated information conveniently through IE, ultimately enlarging the scope of data sharing. The application of the storage format X-DCM will reduce data redundancy and save storage memory. The result of practical application shows that XML-DCM favors the integration and sharing of medical image data among different systems or devices.

  4. A President Tries To Settle the Controversy over File Sharing.

    ERIC Educational Resources Information Center

    Carlson, Scott

    2003-01-01

    Describes how Graham B. Spanier, president of Pennsylvania State University, wants to end the dispute over file sharing on college campuses. One of his suggestions involves a deal with the music industry. (EV)

  5. Reducing I/O variability using dynamic I/O path characterization in petascale storage systems

    DOE PAGES

    Son, Seung Woo; Sehrish, Saba; Liao, Wei-keng; ...

    2016-11-01

    In petascale systems with a million CPU cores, scalable and consistent I/O performance is becoming increasingly difficult to sustain, mainly because of I/O variability. This variability is caused by concurrently running processes/jobs competing for I/O, or by a RAID rebuild when a disk drive fails. We present a mechanism that stripes across a selected subset of I/O nodes with the lightest workload at runtime to achieve the highest I/O bandwidth available in the system. In this paper, we propose a probing mechanism to enable application-level dynamic file striping to mitigate I/O variability. We also implement the proposed mechanism in the high-level I/O library that enables memory-to-file data layout transformation and allows transparent file partitioning using subfiling. Subfiling is a technique that partitions data into a set of files of smaller size and manages file access to them, making the data appear to users as a single, normal file. We demonstrate that our bandwidth probing mechanism can successfully identify temporally slower I/O nodes without noticeable runtime overhead. Experimental results on NERSC's systems also show that our approach isolates I/O variability effectively on shared systems and improves overall collective I/O performance with less variation.
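
    Subfiling as defined above can be illustrated with a toy sketch: split a byte stream across several smaller files plus a manifest, so readers still see one logical file. This shows the concept only; it is not the paper's library implementation.

      # Toy subfiling: write N subfiles plus a manifest; read them back as
      # one logical byte stream.
      import json
      from pathlib import Path

      def write_subfiled(data: bytes, nsub: int, prefix: str):
          """Split `data` into nsub subfiles plus a manifest describing them."""
          size = -(-len(data) // nsub)  # ceiling division
          parts = []
          for i in range(nsub):
              name = f"{prefix}.{i:04d}.sub"
              Path(name).write_bytes(data[i * size:(i + 1) * size])
              parts.append(name)
          Path(f"{prefix}.manifest.json").write_text(
              json.dumps({"total_bytes": len(data), "subfiles": parts}))

      def read_subfiled(prefix: str) -> bytes:
          manifest = json.loads(Path(f"{prefix}.manifest.json").read_text())
          return b"".join(Path(p).read_bytes() for p in manifest["subfiles"])

      write_subfiled(b"x" * 1000, nsub=4, prefix="/tmp/demo")
      assert read_subfiled("/tmp/demo") == b"x" * 1000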

  6. 76 FR 27114 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-10

    ... CRD Processing Fee, the NASD Annual System Processing Fee, and the NYSE Arca Transfer/Re-license... Fees, the NASD Annual System Processing Fee, and the NYSE Arca Transfer/Re-license Individual Fee. Fees... Options Regulatory Surveillance Authority (``ORSA'') national market system plan and in doing so shares...

  7. ISA-TAB-Nano: a specification for sharing nanomaterial research data in spreadsheet-based format.

    PubMed

    Thomas, Dennis G; Gaheen, Sharon; Harper, Stacey L; Fritts, Martin; Klaessig, Fred; Hahn-Dantona, Elizabeth; Paik, David; Pan, Sue; Stafford, Grace A; Freund, Elaine T; Klemm, Juli D; Baker, Nathan A

    2013-01-14

    The high-throughput genomics communities have been successfully using standardized spreadsheet-based formats to capture and share data within labs and among public repositories. The nanomedicine community has yet to adopt similar standards to share the diverse and multi-dimensional types of data (including metadata) pertaining to the description and characterization of nanomaterials. Owing to the lack of standardization in representing and sharing nanomaterial data, most of the data currently shared via publications and data resources are incomplete, poorly-integrated, and not suitable for meaningful interpretation and re-use of the data. Specifically, in its current state, data cannot be effectively utilized for the development of predictive models that will inform the rational design of nanomaterials. We have developed a specification called ISA-TAB-Nano, which comprises four spreadsheet-based file formats for representing and integrating various types of nanomaterial data. Three file formats (Investigation, Study, and Assay files) have been adapted from the established ISA-TAB specification; while the Material file format was developed de novo to more readily describe the complexity of nanomaterials and associated small molecules. In this paper, we have discussed the main features of each file format and how to use them for sharing nanomaterial descriptions and assay metadata. The ISA-TAB-Nano file formats provide a general and flexible framework to record and integrate nanomaterial descriptions, assay data (metadata and endpoint measurements) and protocol information. Like ISA-TAB, ISA-TAB-Nano supports the use of ontology terms to promote standardized descriptions and to facilitate search and integration of the data. The ISA-TAB-Nano specification has been submitted as an ASTM work item to obtain community feedback and to provide a nanotechnology data-sharing standard for public development and adoption.

  8. ISA-TAB-Nano: A Specification for Sharing Nanomaterial Research Data in Spreadsheet-based Format

    PubMed Central

    2013-01-01

    Background and motivation The high-throughput genomics communities have been successfully using standardized spreadsheet-based formats to capture and share data within labs and among public repositories. The nanomedicine community has yet to adopt similar standards to share the diverse and multi-dimensional types of data (including metadata) pertaining to the description and characterization of nanomaterials. Owing to the lack of standardization in representing and sharing nanomaterial data, most of the data currently shared via publications and data resources are incomplete, poorly-integrated, and not suitable for meaningful interpretation and re-use of the data. Specifically, in its current state, data cannot be effectively utilized for the development of predictive models that will inform the rational design of nanomaterials. Results We have developed a specification called ISA-TAB-Nano, which comprises four spreadsheet-based file formats for representing and integrating various types of nanomaterial data. Three file formats (Investigation, Study, and Assay files) have been adapted from the established ISA-TAB specification; while the Material file format was developed de novo to more readily describe the complexity of nanomaterials and associated small molecules. In this paper, we have discussed the main features of each file format and how to use them for sharing nanomaterial descriptions and assay metadata. Conclusion The ISA-TAB-Nano file formats provide a general and flexible framework to record and integrate nanomaterial descriptions, assay data (metadata and endpoint measurements) and protocol information. Like ISA-TAB, ISA-TAB-Nano supports the use of ontology terms to promote standardized descriptions and to facilitate search and integration of the data. The ISA-TAB-Nano specification has been submitted as an ASTM work item to obtain community feedback and to provide a nanotechnology data-sharing standard for public development and adoption. PMID:23311978

  9. Privacy Impact Assessment for the eDiscovery Service

    EPA Pesticide Factsheets

    This system collects Logical Evidence Files, which include data from workstations, laptops, SharePoint and document repositories. Learn how the data is collected, used, who has access, the purpose of data collection, and record retention policies.

  10. Smartfiles: An OO approach to data file interoperability

    NASA Technical Reports Server (NTRS)

    Haines, Matthew; Mehrotra, Piyush; Vanrosendale, John

    1995-01-01

    Data files for scientific and engineering codes typically consist of a series of raw data values whose descriptions are buried in the programs that interact with these files. In this situation, making even minor changes in the file structure or sharing files between programs (interoperability) can only be done after careful examination of the data file and the I/O statements of the programs interacting with this file. In short, scientific data files lack self-description, and other self-describing data techniques are not always appropriate or useful for scientific data files. By applying an object-oriented methodology to data files, we can add the intelligence required to improve data interoperability and provide an elegant mechanism for supporting complex, evolving, or multidisciplinary applications, while still supporting legacy codes. As a result, scientists and engineers should be able to share datasets with far greater ease, simplifying multidisciplinary applications and greatly facilitating remote collaboration between scientists.
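
    The idea can be illustrated by prepending a self-describing header to the raw values, so a program other than the one that wrote the file can interpret it. The header layout below is an invented example of the approach, not the paper's actual smartfile format.

      # Self-describing data file: a length-prefixed JSON header (field
      # names, units, types) followed by packed binary rows.
      import json, struct

      def write_smartfile(path, fields, rows):
          """fields: [(name, unit), ...]; rows: list of float tuples."""
          header = json.dumps({"fields": [{"name": n, "unit": u, "type": "f8"}
                                          for n, u in fields]}).encode()
          with open(path, "wb") as f:
              f.write(struct.pack("<I", len(header)))  # header length prefix
              f.write(header)
              for row in rows:
                  f.write(struct.pack(f"<{len(row)}d", *row))

      def read_smartfile(path):
          with open(path, "rb") as f:
              (hlen,) = struct.unpack("<I", f.read(4))
              meta = json.loads(f.read(hlen))
              width = len(meta["fields"])
              rows = []
              while chunk := f.read(8 * width):
                  rows.append(struct.unpack(f"<{width}d", chunk))
          return meta, rows

      write_smartfile("/tmp/wing.dat", [("mach", "-"), ("lift", "N")],
                      [(0.8, 1200.0), (0.85, 1350.0)])
      print(read_smartfile("/tmp/wing.dat"))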

  11. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks, and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
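
    A toy sketch of the queue-merging behavior described in this record follows; the fixed interactive-to-batch weight stands in for whatever allocation the virtual machine monitor would actually provide.

      # Drain an interactive queue and a batch queue into one metadata
      # queue according to a resource-allocation weight.
      from collections import deque

      def merge_queues(interactive, batch, interactive_weight=3):
          """Take `interactive_weight` interactive requests per batch request."""
          merged = deque()
          while interactive or batch:
              for _ in range(interactive_weight):
                  if interactive:
                      merged.append(interactive.popleft())
              if batch:
                  merged.append(batch.popleft())
          return merged

      inter = deque(["stat /a", "open /b", "stat /c", "open /d"])
      batch = deque(["create /ckpt.0", "create /ckpt.1"])
      print(list(merge_queues(inter, batch)))
      # ['stat /a', 'open /b', 'stat /c', 'create /ckpt.0',
      #  'open /d', 'create /ckpt.1']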

  12. Files synchronization from a large number of insertions and deletions

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Kumari, Savera

    2017-11-01

    Synchronization between different versions of files is becoming a major issue that most applications are facing. To make applications more efficient, an economical algorithm is developed from the previously used "File Loading Algorithm". I extend this algorithm in three ways: first, it deals with non-binary files; second, a backup is generated for uploaded files; and lastly, files are synchronized across insertions and deletions. A user can reconstruct a file from the former version with minimal error, and the system provides interactive communication without disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created, stored in a backup database, and efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with a suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to local versions. Similarly, synchronization tools are necessary in mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
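
    Delta synchronization of two file versions can be sketched with Python's difflib: compute the edit operations that turn version Y into version X, ship only those, and rebuild X remotely. Production sync tools use rolling-hash deltas, but the principle is the same.

      # Compute a delta from old -> new, then apply it on the other side.
      import difflib

      def make_delta(old: str, new: str):
          ops = difflib.SequenceMatcher(None, old, new).get_opcodes()
          # For 'equal' spans the receiver reuses its own copy of `old`,
          # so only replaced/inserted text is shipped.
          return [(tag, i1, i2, "" if tag == "equal" else new[j1:j2])
                  for tag, i1, i2, j1, j2 in ops]

      def apply_delta(old: str, delta):
          out = []
          for tag, i1, i2, text in delta:
              out.append(old[i1:i2] if tag == "equal" else text)
          return "".join(out)

      old = "the quick brown fox"
      new = "the slow brown fox jumps"
      delta = make_delta(old, new)
      assert apply_delta(old, delta) == new
      print(delta)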

  13. An Open Software Platform for Sharing Water Resource Models, Code and Data

    NASA Astrophysics Data System (ADS)

    Knox, Stephen; Meier, Philipp; Mohamed, Khaled; Korteling, Brett; Matrosov, Evgenii; Huskova, Ivana; Harou, Julien; Rosenberg, David; Tilmant, Amaury; Medellin-Azuara, Josue; Wicks, Jon

    2016-04-01

    The modelling of managed water resource systems requires new approaches in the face of increasing future uncertainty. Water resources management models, even if applied to diverse problem areas, use common approaches such as representing the problem as a network of nodes and links. We propose a data management software platform, called Hydra, that uses this commonality to allow multiple models using a node-link structure to be managed and run using a single software system. Hydra's user interface allows users to manage network topology and associated data. Hydra feeds this data directly into a model, importing from and exporting to different file formats using Apps. An App connects Hydra to a custom model, a modelling system such as GAMS or MATLAB or to different file formats such as MS Excel, CSV and ESRI Shapefiles. Hydra allows users to manage their data in a single, consistent place. Apps can be used to run domain-specific models and allow users to work with their own required file formats. The Hydra App Store offers a collaborative space where model developers can publish, review and comment on Apps, models and data. Example Apps and open-source libraries are available in a variety of languages (Python, Java and .NET). The App Store can act as a hub for water resource modellers to view and share Apps, models and data easily. This encourages an ecosystem of development using a shared platform, resulting in more model integration and potentially greater unity within resource modelling communities. www.hydraplatform.org www.hydraappstore.com

  14. Doing Your Science While You're in Orbit

    NASA Astrophysics Data System (ADS)

    Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.

    2010-11-01

    Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA)-based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user-configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. The secure Orbiter SOA authentication and authorization is achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) data repository NeXus file field/class name full-text search capabilities within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group-defined and shared metadata for data repository files, and (e) user, group, repository, and Web 2.0-based global positioning with additional service capabilities are currently available. The SNS-based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best practice implementations is presented.

  15. Purple L1 Milestone Review Panel GPFS Functionality and Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loewe, W E

    2006-12-01

    The GPFS deliverable for the Purple system requires the functionality and performance necessary for ASC I/O needs. The functionality includes POSIX and MPIIO compatibility, and multi-TB file capability across the entire machine. The bandwidth performance required is 122.15 GB/s, as necessary for productive and defensive I/O requirements, and the metadata performance requirement is 5,000 file stats per second. To determine success for this deliverable, several tools are employed. For functionality testing of POSIX, 10TB-file, and high-node-count capability, the parallel file system bandwidth performance test IOR is used. IOR is an MPI-coordinated application that can write and then read a single shared file or an individual file per process and check the data integrity of the file(s). The MPIIO functionality is tested with the MPIIO test suite from the MPICH library. Bandwidth performance is tested using IOR for the required 122.15 GB/s sustained write. All IOR tests are performed with data checking enabled. Metadata performance is tested after "aging" the file system with 80% data block usage and 20% inode usage. The fdtree metadata test is expected to create/remove a large directory/file structure in under 20 minutes, akin to interactive metadata usage. Multiple (10) instances of "ls -lR", each performing over 100K stats, are run concurrently in different large directories to demonstrate 5,000 stats/sec.

  16. Design and development of an international clinical data exchange system: the international layer function of the Dolphin Project

    PubMed Central

    Zhou, Tian-shu; Chu, Jian; Araki, Kenji; Yoshihara, Hiroyuki

    2011-01-01

    Objective At present, most clinical data are exchanged between organizations within a regional system. However, people traveling abroad may need to visit a hospital, which would make international exchange of clinical data very useful. Background Since 2007, a collaborative effort to achieve clinical data sharing has been carried out at Zhejiang University in China and Kyoto University and Miyazaki University in Japan; each is running a regional clinical information center. Methods An international layer system named Global Dolphin was constructed with several key services, sharing patients' health information between countries using a medical markup language (MML). The system was piloted with 39 test patients. Results The three regions above have records for 966,000 unique patients, which are available through Global Dolphin. Data successfully exchanged from Japan to China for the 39 study patients include 1001 MML files and 152 images. The MML files contained 197 free-text paragraphs that needed human translation. Discussion The pilot test of Global Dolphin demonstrates that patient information can be shared across countries through international health data exchange. To achieve cross-border sharing of clinical data, some key issues had to be addressed: establishment of a super directory service across countries; data transformation; and one-language translation. Privacy protection was also taken into account. The system is now ready for live use. Conclusion The project demonstrates a means of achieving worldwide accessibility of medical data, by which the integrity and continuity of patients' health information can be maintained. PMID:21571747

  17. 78 FR 13726 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Amendments No. 1 and No. 2...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-28

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-68973; File No. SR-NYSEArca-2012-66] Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Amendments No. 1 and No. 2 and Order Granting Accelerated Approval of a Proposed Rule Change as Modified by Amendments No. 1 and No. 2 To List and Trade Shares of the iShares Copper Trust...

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duro, Francisco Rodrigo; Garcia Blas, Javier; Isaila, Florin

    This paper explores novel techniques for improving the performance of many-task workflows based on the Swift scripting language. We propose novel programmer options for automated distributed data placement and task scheduling. These options trigger a data placement mechanism used for distributing intermediate workflow data over the servers of Hercules, a distributed key-value store that can be used to cache file system data. We demonstrate that these new mechanisms can significantly improve the aggregated throughput of many-task workflows by up to 86x, reduce contention on the shared file system, exploit data locality, and trade off locality and load balance.
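
    The placement idea can be illustrated with a toy key-value cache standing in for Hercules; the store interface below is hypothetical, and a real deployment would use a distributed memcached-like service rather than an in-process dict.

        # Toy key-value cache standing in for Hercules (illustrative only).
        class KVStore:
            def __init__(self):
                self._data = {}

            def put(self, key, value):
                self._data[key] = value

            def get(self, key):
                return self._data[key]

        def run_task(store, upstream_key, out_key):
            data = store.get(upstream_key)   # read intermediate input from the cache
            result = data.upper()            # placeholder for the real computation
            store.put(out_key, result)       # keep output off the shared file system

        store = KVStore()
        store.put("stage0/part0", b"raw input bytes")
        run_task(store, "stage0/part0", "stage1/part0")
        print(store.get("stage1/part0"))     # b'RAW INPUT BYTES'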

  19. 78 FR 6382 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ... shares other than 100.\\8\\ Moreover, the concept of listing and trading parallel options products of... terms of the Options Disclosure Document. With regard to the impact of this proposal on system capacity... Authority (``OPRA'') have the necessary systems capacity to handle the potential additional traffic...

  20. 78 FR 6391 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ... shares other than 100.\\8\\ Moreover, the concept of listing and trading parallel options products of... system capacity, the Exchange has analyzed its capacity and represents that it and the Options Price Reporting Authority (``OPRA'') have the necessary systems capacity to handle the potential additional...

  1. Incorporating Brokers within Collaboration Environments

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; de Torcy, A.

    2013-12-01

    A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients into the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within the hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage the transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assembles the required input files, automates the execution of the workflow, automatically tracks the provenance of the workflow, and shares the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
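
    The broker pattern itself is compact; the Python sketch below is a generic stand-in (the driver classes and method names are invented, not iRODS APIs) showing how one unified namespace can dispatch to protocol-specific drivers.

        # Generic broker sketch; driver classes and methods are invented.
        from abc import ABC, abstractmethod

        class Driver(ABC):
            @abstractmethod
            def fetch(self, name: str) -> bytes: ...

        class ThreddsDriver(Driver):
            def fetch(self, name):
                # A real driver would speak OPeNDAP/HTTP to a THREDDS server.
                return b"subset of " + name.encode()

        class LocalPosixDriver(Driver):
            def fetch(self, name):
                with open(name, "rb") as f:
                    return f.read()

        class Broker:
            """Dispatch a unified-namespace URI to a protocol-specific driver."""
            def __init__(self):
                self.drivers = {"thredds:": ThreddsDriver(),
                                "file:": LocalPosixDriver()}

            def get(self, uri: str) -> bytes:
                for prefix, driver in self.drivers.items():
                    if uri.startswith(prefix):
                        return driver.fetch(uri[len(prefix):])
                raise ValueError("no driver for " + uri)

        print(Broker().get("thredds:climate/precip.nc"))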

  2. Secure Peer-to-Peer Networks for Scientific Information Sharing

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    The most common means of remote scientific collaboration today includes the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. With the growth of broadband Internet, there has been a desire to share large files (movies, files, scientific data files) over the Internet. E-mail has limits on the size of files that can be attached and transmitted. FTP is often used to share large files, but this requires the user to set up an FTP site for which it is hard to set group privileges, the process is not straightforward for everyone, and the content is not searchable. Peer-to-peer (P2P) technology, which has been overwhelmingly successful in popular content distribution, is the basis for development of a scientific collaboratory called the Scientific Peer Network (SciPerNet). This technology combines social networking with P2P file sharing. SciPerNet will be a standalone application, written in Java and Swing, thus ensuring portability to a number of different platforms. Some of the features include user authentication, search capability, seamless integration with a data center, the ability to create groups and social networks, and online chat. In contrast to P2P networks such as Gnutella, BitTorrent, and others, SciPerNet incorporates three design elements that are critical to the application of P2P for scientific purposes: user authentication, data integrity validation, and reliable searching. SciPerNet also provides a complementary solution to virtual observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase scientific returns from NASA missions. As such, SciPerNet can serve a two-fold purpose for NASA: a cost-saving software as well as a productivity tool for scientists working with data from NASA missions.

  3. Representing Hydrologic Models as HydroShare Resources to Facilitate Model Sharing and Collaboration

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Goodall, J. L.; Mbewe, P.

    2013-12-01

    The CUAHSI HydroShare project is a collaborative effort that aims to provide software for sharing data and models within the hydrologic science community. One of the early focuses of this work has been establishing metadata standards for describing models and model-related data as HydroShare resources. By leveraging this metadata definition, a prototype extension has been developed to create model resources that can be shared within the community using the HydroShare system. The extension uses a general model metadata definition to create resource objects, and was designed so that model-specific parsing routines can extract and populate metadata fields from model input and output files. The long-term goal is to establish a library of supported models where, for each model, the system has the ability to extract key metadata fields automatically, thereby establishing standardized model metadata that will serve as the foundation for model sharing and collaboration within HydroShare. The Soil and Water Assessment Tool (SWAT) is used to demonstrate this concept through a case study application.
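
    One way to picture the planned parser library is a registry of model-specific extraction routines that populate a common metadata dictionary; the SWAT parsing and field names below are hypothetical, not the prototype's actual code.

        # Hypothetical parser registry populating a common metadata dict.
        PARSERS = {}

        def register(model_type):
            def wrap(fn):
                PARSERS[model_type] = fn
                return fn
            return wrap

        @register("SWAT")
        def parse_swat(cio_path):
            meta = {"model": "SWAT"}
            with open(cio_path) as f:
                for line in f:
                    if "NBYR" in line:               # years simulated, as one example
                        meta["simulation_years"] = int(line.split()[0])
            return meta

        def extract_metadata(model_type, path):
            return PARSERS[model_type](path)

        # extract_metadata("SWAT", "file.cio") -> {"model": "SWAT", "simulation_years": ...}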

  4. Global Data Spatially Interrelate System for Scientific Big Data Spatial-Seamless Sharing

    NASA Astrophysics Data System (ADS)

    Yu, J.; Wu, L.; Yang, Y.; Lei, X.; He, W.

    2014-04-01

    A good data sharing system with spatially seamless services will spare scientists the tedious, boring, and time-consuming work of spatial transformation, and hence encourage the use of scientific data and increase scientific innovation. Having been adopted as the framework for Earth datasets by the Group on Earth Observations (GEO), the Earth System Spatial Grid (ESSG) has the potential to serve as the spatial reference for Earth datasets. Based on an implementation of the ESSG called SDOG-ESSG, a data sharing system named the Global Data Spatially Interrelate System (GASE) was designed to make data sharing spatially seamless. The architecture of GASE is introduced, and the implementation of its two key components, V-Pools and the interrelating engine, is presented along with a prototype. Any dataset is first resampled into SDOG-ESSG, divided into small blocks, and then mapped into the hierarchical directory structure of the distributed file system in V-Pools, which together allows data to be served at a uniform spatial reference with high efficiency. In addition, datasets from different data centres are interrelated by the interrelating engine at the uniform spatial reference of SDOG-ESSG, which enables the system to share open datasets on the Internet in a spatially seamless manner.
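
    The block-to-hierarchy mapping can be sketched in a few lines, assuming a hypothetical block addressing scheme; only the idea of one uniform layout for all datasets is taken from the paper.

        # Map a resampled grid block to a stable hierarchical path (illustrative).
        def block_path(dataset, level, i, j, k):
            # Every dataset shares the same level/index directory layout,
            # so clients can locate any block from its grid address alone.
            return f"/vpools/{dataset}/L{level:02d}/{i:04d}/{j:04d}/{k:04d}.blk"

        print(block_path("modis_lst", 6, 12, 40, 3))
        # /vpools/modis_lst/L06/0012/0040/0003.blk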

  5. [Study of sharing platform of web-based enhanced extracorporeal counterpulsation hemodynamic waveform data].

    PubMed

    Huang, Mingbo; Hu, Ding; Yu, Donglan; Zheng, Zhensheng; Wang, Kuijian

    2011-12-01

    Enhanced extracorporeal counterpulsation (EECP) information consists of both text and hemodynamic waveform data. At present, EECP text information has been successfully managed through Web browsers, while the management and sharing of hemodynamic waveform data over the Internet has not yet been solved. In order to manage EECP information completely, and based on an in-depth analysis of the digital imaging and communications in medicine (DICOM) format of EECP hemodynamic waveform files and its disadvantages for Internet sharing, we proposed using the extensible markup language (XML), currently the popular data exchange standard on the Internet, as the storage specification for sharing EECP waveform data. We then designed a Web-based sharing system for EECP hemodynamic waveform data on the ASP.NET 2.0 platform. We also describe the four main system function modules and their implementation methods: the DICOM-to-XML conversion module, the EECP waveform data management module, the EECP waveform retrieval and display module, and the security mechanism of the system.
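
    A minimal sketch of the DICOM-to-XML conversion step, assuming the pydicom library; the XML element layout and file names are illustrative, not the paper's exact storage specification.

        # Illustrative DICOM-to-XML conversion using pydicom and ElementTree.
        import xml.etree.ElementTree as ET
        import pydicom

        def dicom_to_xml(path):
            ds = pydicom.dcmread(path)
            root = ET.Element("EECPWaveform", sourceFile=path)
            for elem in ds:
                if elem.keyword == "PixelData":   # bulk payload handled separately
                    continue
                attr = ET.SubElement(root, "attr", tag=str(elem.tag),
                                     name=elem.keyword or "unknown")
                attr.text = str(elem.value)
            return root

        root = dicom_to_xml("eecp_waveform.dcm")          # hypothetical input file
        ET.ElementTree(root).write("eecp_waveform.xml", encoding="utf-8")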

  6. Privacy-Preserving and Secure Sharing of PHR in the Cloud.

    PubMed

    Zhang, Leyou; Wu, Qing; Mu, Yi; Zhang, Jingxia

    2016-12-01

    As a new summarized record of an individual's medical data and information, a Personal Health Record (PHR) can be accessed online. The owner can fully control which users, such as doctors, clinic agents, and friends, his/her PHR files are shared with. However, in an open network environment like the Cloud, this sensitive private information may be obtained by unauthorized parties and users. In this paper, we consider how to achieve PHR data confidentiality and provide fine-grained access control of PHR files in the public Cloud based on Attribute-Based Encryption (ABE). Differing from previous works, we also consider the privacy of the receivers, since the attributes of the receivers relate to their identity or medical information, which could expose some sensitive data to third-party services. Anonymous ABE (AABE) not only enforces the security of the owners' PHRs but also preserves the privacy of the receivers. However, a normal AABE with a single private key generation (PKG) center may not match a PHR system with a hierarchical architecture. Therefore, we discuss the construction of a PHR sharing system based not only on AABE but also on hierarchical AABE. The proposed schemes (especially those based on hierarchical AABE) have many advantages over existing ones, such as short public keys and constant-size private keys, which overcome weaknesses in existing works. In the standard model, the introduced schemes achieve compact security in prime order groups.

  7. Using Python on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    The default system Python was not designed for use in a shared computing environment; instead, a new conda environment can be created in which Python is run. For example, an environment.yml file can be created on the developer's laptop and used on the Peregrine system to recreate the same environment.

  8. SSeCloud: Using secret sharing scheme to secure keys

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Huang, Yang; Yang, Disheng; Zhang, Yuzhen; Liu, Hengchang

    2017-08-01

    With the use of cloud storage services, one of the concerns is how to protect sensitive data securely and privately. While users enjoy the convenience of data storage provided by semi-trusted cloud storage providers, they are confronted with all kinds of risks at the same time. In this paper, we present SSeCloud, a secure cloud storage system that improves security and usability by applying a secret sharing scheme to secure keys. The system encrypts uploaded files on the client side and splits the encryption keys into three shares. The shares are stored by the user, the cloud storage provider, and an alternative trusted third party, respectively. Any two of the parties can reconstruct the keys. Evaluation results on a prototype system show that SSeCloud provides high security without much performance penalty.
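
    The (2, 3) key-splitting described above can be illustrated with textbook Shamir secret sharing over a prime field; the self-contained Python sketch below is not SSeCloud's implementation, just the underlying scheme.

        # Textbook (2, 3) Shamir sharing over a prime field (not SSeCloud code).
        import secrets

        P = 2**127 - 1                      # prime modulus larger than the secret

        def split(secret):
            a1 = secrets.randbelow(P)       # random line f(x) = secret + a1*x mod P
            return [(x, (secret + a1 * x) % P) for x in (1, 2, 3)]

        def reconstruct(share_a, share_b):
            (x1, y1), (x2, y2) = share_a, share_b
            slope = (y2 - y1) * pow(x2 - x1, -1, P) % P
            return (y1 - slope * x1) % P    # evaluate the line at x = 0

        key = secrets.randbelow(P)
        user, provider, third_party = split(key)
        assert reconstruct(user, third_party) == key    # any two shares suffice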

  9. Ada 9X Project Revision Request Report. Supplement 1

    DTIC Science & Technology

    1990-01-01

    Non-portable use of operating system primitives or of Ada run time system internals. POSSIBLE SOLUTIONS: Mandate that compilers recognize tasks that...complex than a simple operating system file, the compiler vendor must provide routines to manipulate it (create, copy, move, etc.) as a single entity... system, to support fault tolerance, load sharing, change of system operating mode, etc. It is highly desirable that such important software be written in

  10. Illegal File Sharing 101

    ERIC Educational Resources Information Center

    Wada, Kent

    2008-01-01

    Much of higher education's unease arises from the cost of dealing with illegal file sharing. Illinois State University, for example, calculated a cost of $76 to process a first claim of copyright infringement and $146 for a second. Responses range from simply passing along claims to elaborate programs architected with specific goals in mind.…

  11. 75 FR 47333 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-05

    ... 100 shares) orders in a separate, dedicated system, but will trade them on the Display Book system... Rule 501(a)--NYSE Amex Equities, the Exchange proposes to amend the definition of the term ``Closing... proposes to add language to conform the definition of a ``Stop Order'' for Nasdaq Securities with that for...

  12. Taking advantage of HTML5 browsers to realize the concepts of session state and workflow sharing in web-tool applications

    NASA Astrophysics Data System (ADS)

    Suftin, I.; Read, J. S.; Walker, J.

    2013-12-01

    Scientists prefer not having to be tied down to a specific machine or operating system in order to analyze local and remote data sets or publish work. Increasingly, analysis has been migrating to decentralized web services and data sets, using web clients to provide the analysis interface. While simplifying workflow access, analysis, and publishing of data, the move does bring with it its own unique set of issues. Web clients used for analysis typically offer workflows geared towards a single user, with steps and results that are often difficult to recreate and share with others. Furthermore, workflow results often may not be easily used as input for further analysis. Older browsers further complicate things by having no way to maintain larger chunks of information, often offloading the job of storage to the back-end server or trying to squeeze it into a cookie. It has been difficult to provide a concept of "session storage" or "workflow sharing" without complex back-end orchestration relying on either a centralized file system or a database. With the advent of HTML5, browsers gained the ability to store more information through the use of the Web Storage API (a browser cookie holds a maximum of 4 kilobytes). Web Storage gives us the ability to store megabytes of arbitrary data in-browser, either with an expiration date or just for a session. This allows scientists to create, update, persist, and share their workflow without depending on the back-end to store session information, providing the flexibility for new web-based workflows to emerge. In the DSASWeb portal ( http://cida.usgs.gov/DSASweb/ ), using these techniques, the representation of every step in the analyst's workflow is stored as plain-text serialized JSON, which we can generate as a text file and provide to the analyst as a download. This file may then be shared with others and loaded back into the application, restoring the application to the state it was in when the session file was generated. A user may then view results produced during that session or go back and alter input parameters, creating new results and producing new, unique sessions which they can then again share. This technique not only provides independence for the user to manage their session as they like, but also allows much greater freedom for the application provider to scale out without having to worry about carrying over user information or maintaining it in a central location.
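
    The session-file idea is easy to picture outside the browser. The Python sketch below serializes workflow steps to a plain-text JSON file and restores state from it; the actual DSASweb client does the equivalent in JavaScript with the Web Storage API, and the step fields here are invented.

        # Serialize workflow steps to a shareable session file and restore them.
        import json

        session = {"steps": [
            {"tool": "shoreline_import", "params": {"source": "upload.zip"}},
            {"tool": "rate_calculation", "params": {"method": "linear_regression"}},
        ]}

        with open("session.json", "w") as f:   # the file the analyst downloads
            json.dump(session, f, indent=2)

        with open("session.json") as f:        # a collaborator loads it back
            restored = json.load(f)
        assert restored == session             # application state fully recreated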

  13. CephFS: a new generation storage platform for Australian high energy physics

    NASA Astrophysics Data System (ADS)

    Borges, G.; Crosby, S.; Boland, L.

    2017-10-01

    This paper presents an implementation of a Ceph file system (CephFS) use case at the ARC Centre of Excellence for Particle Physics at the Terascale (CoEPP). CoEPP's CephFS provides a POSIX-like file system on top of a Ceph RADOS object store, deployed on commodity hardware and without single points of failure. By delivering a unique file system namespace at the CoEPP centres spread across Australia, local HEP researchers can store, process and share data independently of their geographical locations. CephFS is also used as the back-end file system for a WLCG ATLAS user area at the Australian Tier-2. Dedicated SRM and XROOTD services, deployed on top of CoEPP's CephFS, integrate it into ATLAS distributed data operations. This setup, while allowing Australian HEP researchers to trigger data movement via ATLAS grid tools, also enables local POSIX-like read access, giving scientists greater control of their data flows. In this article we present details of CoEPP's Ceph/CephFS implementation and report I/O performance metrics collected during the testing/tuning phase of the system.

  14. Peer-to-Peer Content Distribution and Over-The-Top TV: An Analysis of Value Networks

    NASA Astrophysics Data System (ADS)

    de Boever, Jorn; de Grooff, Dirk

    The convergence of Internet and TV, i.e., the Over-The-Top TV (OTT TV) paradigm, created opportunities for P2P content distribution, as these systems reduce bandwidth expenses for media companies. This resulted in the arrival of legal, commercial P2P systems, which increases the importance of studying the economic aspects of these business operations. This chapter examines the value networks of three cases (Kontiki, Zattoo and BitTorrent) in order to compare how different actors position and distinguish themselves from competitors by creating value in different ways. The value networks of legal systems have different compositions depending on their market orientation - Business-to-Business (B2B) and/or Business-to-Consumer (B2C). In addition, legal systems differ from illegal systems in that legal companies are not inclined to grant control to users, whereas users have most control in the value networks of illegal, self-organizing file sharing communities. In conclusion, the OTT TV paradigm made P2P technology a partner for the media industry rather than an enemy. However, we argue that the lack of control granted to users will remain a seed-bed for the success of illegal P2P file sharing communities.

  15. Developing Intranets: Practical Issues for Implementation and Design.

    ERIC Educational Resources Information Center

    Trowbridge, Dave

    1996-01-01

    An intranet is a system which has "domesticated" the technologies of the Internet for specific organizational settings and goals. Although the adaptability of Hypertext Markup Language to intranets is sometimes limited, implementing various protocols and technologies enables organizations to share files among heterogeneous computers,…

  16. Efficient Access to Massive Amounts of Tape-Resident Data

    NASA Astrophysics Data System (ADS)

    Yu, David; Lauret, Jérôme

    2017-10-01

    Randomly restoring files from tape degrades read performance, primarily due to frequent tape mounts. High-latency, time-consuming tape mounts and dismounts are a major issue when accessing massive amounts of data from tape storage. BNL's mass storage system currently holds more than 80 PB of data on tape, managed by HPSS. To restore files from HPSS, we make use of a scheduler called ERADAT. This scheduler was originally based on code from Oak Ridge National Lab developed in the early 2000s. After major modifications and enhancements, ERADAT now provides advanced HPSS resource management, priority queuing, resource sharing, web-browser visibility of real-time staging activities, and advanced real-time statistics and graphs. ERADAT is also integrated with ACSLS and HPSS for near-real-time mount statistics and resource control in HPSS. ERADAT is also the interface between HPSS and other applications such as the locally developed Data Carousel, providing fair resource-sharing policies and related capabilities. ERADAT has demonstrated great performance at BNL.
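
    The core scheduling idea, reducing mounts by grouping and ordering requests per tape, can be sketched as follows; the request fields are hypothetical rather than ERADAT's actual schema.

        # Group pending restores by tape so each tape is mounted only once,
        # then read files in on-tape order to avoid costly seeks.
        from collections import defaultdict

        requests = [
            {"file": "/hpss/run42.dat", "tape": "T0103", "position": 512},
            {"file": "/hpss/run07.dat", "tape": "T0099", "position": 12},
            {"file": "/hpss/run43.dat", "tape": "T0103", "position": 9},
        ]

        by_tape = defaultdict(list)
        for req in requests:
            by_tape[req["tape"]].append(req)

        for tape, reqs in by_tape.items():          # one mount per tape ...
            reqs.sort(key=lambda r: r["position"])  # ... read in on-tape order
            print(tape, [r["file"] for r in reqs])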

  17. Measuring a year of child pornography trafficking by U.S. computers on a peer-to-peer network.

    PubMed

    Wolak, Janis; Liberatore, Marc; Levine, Brian Neil

    2014-02-01

    We used data gathered via investigative "RoundUp" software to measure a year of online child pornography (CP) trafficking activity by U.S. computers on the Gnutella peer-to-peer network. The data include millions of observations of Internet Protocol addresses sharing known CP files, identified as such in previous law enforcement investigations. We found that 244,920 U.S. computers shared 120,418 unique known CP files on Gnutella during the study year. More than 80% of these computers shared fewer than 10 such files during the study year or shared files for fewer than 10 days. However, less than 1% of computers (n=915) made high annual contributions to the number of known CP files available on the network (100 or more files). If law enforcement arrested the operators of these high-contribution computers and took their files offline, the number of distinct known CP files available in the P2P network could be reduced by as much as 30%. Our findings indicate widespread low-level CP trafficking by U.S. computers in one peer-to-peer network, while a small percentage of computers made high contributions to the problem. However, our measures were not comprehensive and should be considered lower-bound estimates. Nonetheless, our findings show that data can be systematically gathered and analyzed to develop an empirical grasp of the scope and characteristics of CP trafficking on peer-to-peer networks. Such measurements can be used to combat the problem. Further, investigative software tools can be used strategically to help law enforcement prioritize investigations. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Research Using In Vivo Simulation of Meta-Organizational Shared Decision Making (SDM). Task 3: Testing the Shared Decision Making Framework in Vivo

    DTIC Science & Technology

    2011-12-01

    developed to address the two main research questions (see Annex A). Exact wording of the questions varied during interviews to accommodate the...centre at DMS 3rd floor. All electronic files (including digital audio and video recordings) with participant data are being encrypted and password...locked filing cabinet at the University of Ottawa. Electronic files will remain encrypted, password protected and stored on a server to which only the

  19. Issues to be resolved in Torrents—Future Revolutionised File Sharing

    NASA Astrophysics Data System (ADS)

    Thanekar, Sachin Arun

    2010-11-01

    Torrenting is a highly popular peer-to-peer file sharing activity that allows participants to send and receive files from other computers. Although it is advantageous compared with traditional client-server file sharing in terms of time, cost, and speed, it also has drawbacks. Content unavailability, lack of anonymity, leechers, cheaters, and download speed consistency are the major problems to sort out. Efforts are needed to resolve these problems and make this a better application. Legal issues are also one of the major factors to consider. BitTorrent metafiles themselves do not store copyrighted data. Whether the publishers of BitTorrent metafiles violate copyrights by linking to copyrighted material is controversial. Various countries have taken legal action against websites that host BitTorrent trackers, e.g., Supernova.org and Torrentspy. Efforts are also needed to make such a useful protocol legal.

  20. Cryptonite: A Secure and Performant Data Repository on Public Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor

    2012-06-29

    Cloud storage has become immensely popular for maintaining synchronized copies of files and for sharing documents with collaborators. However, there is heightened concern about the security and privacy of Cloud-hosted data due to the shared infrastructure model and an implicit trust in the service providers. Emerging needs of secure data storage and sharing for domains like Smart Power Grids, which deal with sensitive consumer data, require the persistence and availability of Cloud storage but with client-controlled security and encryption, low key management overhead, and minimal performance costs. Cryptonite is a secure Cloud storage repository that addresses these requirements using a StrongBox model for shared key management. We describe the Cryptonite service and desktop client, discuss performance optimizations, and provide an empirical analysis of the improvements. Our experiments show that Cryptonite clients achieve a 40% improvement in file upload bandwidth over plaintext storage using the Azure Storage Client API despite the added security benefits, while our file download performance is 5 times faster than the baseline for files greater than 100 MB.
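
    As a hedged illustration of client-side encryption before upload (not Cryptonite's actual StrongBox code), the sketch below uses the cryptography library's Fernet recipe; the plaintext and the upload step are placeholders.

        # Encrypt on the client; only ciphertext ever reaches the Cloud.
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()          # stays with the client, never the Cloud
        box = Fernet(key)

        plaintext = b"sensitive smart grid consumer data"
        ciphertext = box.encrypt(plaintext)  # this is what gets uploaded

        # ... upload ciphertext to Cloud storage and fetch it back later ...

        assert box.decrypt(ciphertext) == plaintext  # only key holders can read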

  1. 36 CFR 223.118 - Appeal process for small business timber sale set-aside program share recomputation decisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the Chief may designate. (e) Filing procedures. In order to file an appeal under this section, an... interested party in response to an appeal must be filed within 15 days after the close of the appeal filing... filing an appeal; however, when the filing period would expire on a Saturday, Sunday, or Federal holiday...

  2. 47 CFR 27.1180 - The cost-sharing formula.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...); towers and/or modifications; back-up power equipment; monitoring or control equipment; engineering costs (design/path survey); installation; systems testing; FCC filing costs; site acquisition and civil works... as equipment and engineering expenses. There is no cap on the actual costs of relocation. (c) An AWS...

  3. How Do We Know What Information Sharing Is Really Worth? Exploring Methodologies to Measure the Value of Information Sharing and Fusion Efforts

    DTIC Science & Technology

    2014-01-01

    maintained by the Criminal Justice Information Services (CJIS) section of the FBI and the Law Enforcement Online (LEO) system. The Regional...2013, pp. 20–26. Davis, Lois M., Michael Pollard, Kevin Ward, Jeremy M. Wilson, Danielle M. Varda, Lydia Hansell, and Paul S. Steinberg, Long-Term...default/files/2010_ITACG_Report_ Final_30Nov10.pdf Rhodes, William, Meg Chapman, Michael Shively, Christina Dyous, Dana Hunt, and Kristin Wheeler

  4. 78 FR 14867 - Self-Regulatory Organizations; National Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... how the NSX System may execute certain types of Zero Display Reserve Orders \\4\\ that are pegged to the midpoint between the Protected BBO in subpennies. NSX Rule 11.3(c) provides that a Zero Display Reserve... the System rounds executions in securities priced less than $1.00 per share resulting from a Zero...

  5. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; home directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory, and all nodes are connected to the high-speed InfiniBand network.

  6. 75 FR 13169 - Self-Regulatory Organizations; The Options Clearing Corporation; Notice of Filing of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-18

    ... interpretation with respect to the treatment and clearing of options and security futures on SPDR Gold Shares.\\2... amended the interpretation to extend similar treatment to options and security futures on iShares[supreg... rule filing SR-OCC-2009-20, which extended similar treatment to options and security futures on ETFS...

  7. 78 FR 25502 - Self-Regulatory Organizations; Miami International Securities Exchange LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-01

    ... Effectiveness of Proposed Rule Change To Increase the Position and Exercise Limits for Options on iShares MSCI... filing a proposal to amend its rules to increase the position and exercise limits for options on iShares... and Policies .01 to increase position and exercise limits, respectively, for EEM options. Position...

  8. 77 FR 38875 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-29

    ... participants to react to the execution (an effect known as ``market impact'' or ``information leakage''). As a... available shares and routing to other venues' shares will avoid the deleterious effect of market impact...-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing of Proposed Rule Change To Amend Rule...

  9. Professors Join the Fray as Supreme Court Hears Arguments in File-Sharing Case

    ERIC Educational Resources Information Center

    Foster, Andrea L.

    2005-01-01

    U.S. Supreme Court justices struggled in a lively debate with how to balance the competing interests of the entertainment industry and developers of file-sharing technology. Some justices sharply questioned whether it was fair to hold inventors of a distribution technology liable for copyright infringement, while others suggested that it was wrong…

  10. How Higher Education and Industry Can Move Forward on File Sharing

    ERIC Educational Resources Information Center

    Chronicle of Higher Education, 2008

    2008-01-01

    How should colleges deal with incidents of illegal file sharing on their campuses? At the Technology Forum, aspects of that question were discussed by Cheryl A. Elzy, dean of university libraries at Illinois State University; Jim Gibson, an associate professor of law at the University of Richmond; Stewart McLaurin, executive vice president for…

  11. Crystallographic and general use programs for the XDS Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Snyder, R. L.

    1973-01-01

    Programs in basic FORTRAN 4 are described, which fall into three categories: (1) interactive programs to be executed under time sharing (BTM); (2) non-interactive programs which are executed in batch processing mode (BPM); and (3) large non-interactive programs which require more memory than is available in the normal BPM/BTM operating system and must be run overnight on a special system called XRAY which releases about 45,000 words of memory to the user. Programs in categories (1) and (2) are stored as FORTRAN source files in the account FSNYDER. Programs in category (3) are stored in the XRAY system as load modules. The type of file in account FSNYDER is identified by the first two letters in the name.

  12. DAM-ing the Digital Flood

    ERIC Educational Resources Information Center

    Raths, David

    2008-01-01

    With the widespread digitization of art, photography, and music, plus the introduction of streaming video, many colleges and universities are realizing that they must develop or purchase systems to preserve their school's digitized objects; that they must create searchable databases so that researchers can find and share copies of digital files;…

  13. 75 FR 69717 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-15

    ... new Nasdaq product offerings, pending the resolution to this matter. Thus, offering a Managed Data... trading systems (``ATSs''), including dark pools and electronic communication networks (``ECNs''). Each.... A proliferation of dark pools and other ATSs operate profitably with fragmentary shares of...

  14. From files to SAF: 3D endodontic treatment is possible at last.

    PubMed

    Metzger, Zvi

    2011-01-01

    3D cleaning, shaping and obturation of root canals has always been the desired goal of endodontic treatment, which in many cases is difficult to attain. The introduction of NiTi rotary files made a major change in endodontic practice, making treatment easier, safer and faster. Nevertheless, after 16 years of intensive development, most of these instruments still share several drawbacks, the major one being the inability to three-dimensionally clean and shape oval root canals. The Self-Adjusting File (SAF) System was designed to overcome many of the current drawbacks of rotary file systems. It is based on a hollow, highly compressible file that adapts itself three-dimensionally to the shape of a given root canal, including its cross section. The file is operated with a vibratory in-and-out motion, with continuous irrigation delivered by a peristaltic pump through the hollow file. A uniform layer of dentin is removed from the whole circumference of the root canal, thus achieving the main goals of root canal treatment while preserving the remaining root dentin. The 3D scrubbing effect of the file, combined with the always-fresh irrigant, results in unprecedentedly clean canals, which in turn facilitate better obturation. More effective disinfection of flat-oval root canals is another goal that is simultaneously attained. The safety of root-canal treatment is also greatly enhanced by the high mechanical stability of the SAF and by a new concept of no-pressure irrigation. The SAF System brings the operator much closer to the long-desired goal of 3D root-canal treatment.

  15. [A new tool for retrieving clinical data from various sources].

    PubMed

    Nielsen, Erik Waage; Hovland, Anders; Strømsnes, Oddgeir

    2006-02-23

    A doctor's tool for extracting clinical data on groups of hospital patients from various sources into one file has been in demand. For this purpose we evaluated Qlikview. Based on clinical information required by two cardiologists, an IT specialist with thorough knowledge of the hospital's data system (www.dips.no) spent 30 days assembling one Qlikview file. Data were also assembled from a pre-hospital ambulance system. The 13 MB Qlikview file held various information on 12,430 patients admitted to the cardiac unit 26,287 times over the last 21 years. Also included were 530,912 clinical laboratory analyses from these patients during the past five years. Some information required by the cardiologists was inaccessible due to lack of coding or data storage. Some databases could not export their data; others were encrypted by the software company. A major part of the required data could be extracted to Qlikview. Searches were fast in spite of the huge amount of data. Qlikview could assemble clinical information for doctors from different data systems. Doctors from different hospitals could share and further refine empty Qlikview files for their own use. Once the file is assembled, doctors can, on their own, search for answers to constantly changing clinical questions, also at odd hours.

  16. Integration of DICOM and openEHR standards

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Yao, Zhihong; Liu, Lei

    2011-03-01

    The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval, and exchange of health data in electronic health records. Considering that the integration of DICOM and openEHR is beneficial to information sharing, we developed, on the basis of an XML-based DICOM format, a method of creating a DICOM imaging archetype in openEHR to enable the integration of the two standards. Each DICOM file contains abundant imaging information, but because reading a DICOM file involves looking up the DICOM Data Dictionary, its readability is limited. openEHR has innovatively adopted a two-level modeling method, dividing clinical information into a lower level, the information model, and an upper level, archetypes and templates. One critical challenge posed to the development of openEHR, however, is information sharing, especially imaging information sharing; for example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of DICOM files and the semantic interoperability of openEHR files, we developed a method of mapping a DICOM file to an openEHR file by adopting the archetype form defined in openEHR. Because an archetype has a tree structure, after mapping a DICOM file to an openEHR file, the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange between the two standards without losing imaging information.
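
    As a hedged sketch of the mapping idea, assuming the pydicom library, the code below folds DICOM elements into a nested, archetype-like tree; real openEHR archetypes are defined in ADL and are far richer than this dictionary stand-in, and the archetype id shown is invented.

        # Fold DICOM elements into a nested, archetype-like tree (illustrative).
        import pydicom

        def to_archetype_tree(path):
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            return {
                "archetype_id": "openEHR-EHR-OBSERVATION.imaging_exam.v1",  # invented
                "items": [{"name": elem.keyword or str(elem.tag),
                           "value": str(elem.value)} for elem in ds],
            }

        tree = to_archetype_tree("study.dcm")   # hypothetical input file
        print(tree["items"][:3])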

  17. Supporting geoscience with graphical-user-interface Internet tools for the Macintosh

    NASA Astrophysics Data System (ADS)

    Robin, Bernard

    1995-07-01

    This paper describes a suite of Macintosh graphical-user-interface (GUI) software programs that can be used in conjunction with the Internet to support geoscience education. These software programs allow science educators to access and retrieve a large body of resources from an increasing number of network sites, taking advantage of the intuitive, simple-to-use Macintosh operating system. With these tools, educators can easily locate, download, and exchange not only text files but also sound resources, video movie clips, and software application files from their desktop computers. Another major advantage of these software tools is that they are available at no cost and may be distributed freely. The following GUI software tools are described, including examples of how they can be used in an educational setting: ∗ Eudora—an e-mail program ∗ NewsWatcher—a newsreader ∗ TurboGopher—a Gopher program ∗ Fetch—a software application for easy File Transfer Protocol (FTP) ∗ NCSA Mosaic—a worldwide hypertext browsing program. An explosive growth of online archives is currently underway as new electronic sites are continuously added to the Internet. Many of these resources may be of interest to science educators, who will find they can share not only ASCII text files but also graphic image files, sound resources, QuickTime movie clips, and hypermedia projects with colleagues from locations around the world. These powerful yet simple-to-learn GUI software tools are providing a revolution in how knowledge can be accessed, retrieved, and shared.

  18. 75 FR 18554 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Amendment No. 1 and Order...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-12

    ... Approval of a Proposed Rule Change, as Modified by Amendment No. 1 Thereto, Relating to the Listing of Mars...-4 thereunder,\\2\\ a proposed rule change to list and trade shares (``Shares'') of the Mars Hill... with ``Mars Hill Global Relative Value ETF'' and that all references in the filing to ``HTE Asset...

  19. Determinants of unlawful file sharing: a scoping review.

    PubMed

    Watson, Steven James; Zizzo, Daniel John; Fleming, Piers

    2015-01-01

    We employ a scoping review methodology to consider and assess the existing evidence on the determinants of unlawful file sharing (UFS) transparently and systematically. Based on the evidence, we build a simple conceptual framework to model the psychological decision to engage in UFS, purchase legally or do nothing. We identify social, moral, experiential, technical, legal and financial utility sources of the decision to purchase or to file share. They interact in complex ways. We consider the strength of evidence within these areas and note patterns of results. There is good evidence for influences on UFS within each of the identified determinants, particularly for self-reported measures, with more behavioral research needed. There are also indications that the reasons for UFS differ across media; more studies exploring media other than music are required.

  20. Determinants of Unlawful File Sharing: A Scoping Review

    PubMed Central

    Watson, Steven James; Zizzo, Daniel John; Fleming, Piers

    2015-01-01

    We employ a scoping review methodology to consider and assess the existing evidence on the determinants of unlawful file sharing (UFS) transparently and systematically. Based on the evidence, we build a simple conceptual framework to model the psychological decision to engage in UFS, purchase legally or do nothing. We identify social, moral, experiential, technical, legal and financial utility sources of the decision to purchase or to file share. They interact in complex ways. We consider the strength of evidence within these areas and note patterns of results. There is good evidence for influences on UFS within each of the identified determinants, particularly for self-reported measures, with more behavioral research needed. There are also indications that the reasons for UFS differ across media; more studies exploring media other than music are required. PMID:26030384

  1. Sharing digital micrographs and other data files between computers.

    PubMed

    Entwistle, A

    2004-01-01

    It ought to be easy to exchange digital micrographs and other computer data files with a colleague, even one on another continent. In practice, this often is not the case. The advantages and disadvantages of various methods that are available for exchanging data files between computers are discussed. When possible, data should be transferred through computer networking. When data are to be exchanged locally between computers with similar operating systems, the use of a local area network is recommended. For computers in commercial or academic environments that have dissimilar operating systems or are more widely separated, the use of FTP is recommended. Failing this, posting the data on a website and transferring by hypertext transfer protocol is suggested. If peer-to-peer exchange between computers in domestic environments is needed, the use of messenger services such as Microsoft Messenger or Yahoo Messenger is the method of choice. When it is not possible to transfer the data files over the internet, single-use writable CD-ROMs are the best media for transferring data. If for some reason this is not possible, DVD-R/RW, DVD+R/RW, 100 MB ZIP disks and USB flash media are potentially useful media for exchanging data files.

  2. Casimage project: a digital teaching files authoring environment.

    PubMed

    Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman

    2004-04-01

    The goal of the Casimage project is to offer an authoring and editing environment integrated with Picture Archiving and Communication Systems (PACS) for creating image-based electronic teaching files. The software is based on a client/server architecture that allows users remote access to a central database. This authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools needed to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections into the RSNA web network dedicated to teaching files. The software can be installed on any PACS workstation, allowing users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including a thoracic imaging collection that was subsequently made available on a CD-ROM, on our web site, and through the MIRC network for public access.

  3. Dataworks for GNSS: Software for Supporting Data Sharing and Federation of Geodetic Networks

    NASA Astrophysics Data System (ADS)

    Boler, F. M.; Meertens, C. M.; Miller, M. M.; Wier, S.; Rost, M.; Matykiewicz, J.

    2015-12-01

    Continuously-operating Global Navigation Satellite System (GNSS) networks are increasingly being installed globally for a wide variety of science and societal applications. GNSS enables Earth science research in areas including tectonic plate interactions, crustal deformation in response to loading by tectonics, magmatism, water and ice, and the dynamics of water - and thereby energy transfer - in the atmosphere at regional scale. The many individual scientists and organizations that set up GNSS stations globally are often open to sharing data, but lack the resources or expertise to deploy systems and software to manage and curate data and metadata and provide user tools that would support data sharing. UNAVCO previously gained experience in facilitating data sharing through the NASA-supported development of the Geodesy Seamless Archive Centers (GSAC) open source software. GSAC provides web interfaces and simple web services for data and metadata discovery and access, supports federation of multiple data centers, and simplifies transfer of data and metadata to long-term archives. The NSF supported the dissemination of GSAC to multiple European data centers forming the European Plate Observing System. To expand upon GSAC to provide end-to-end, instrument-to-distribution capability, UNAVCO developed Dataworks for GNSS with NSF funding to the COCONet project, and deployed this software on systems that are now operating as Regional GNSS Data Centers as part of the NSF-funded TLALOCNet and COCONet projects. Dataworks consists of software modules written in Python and Java for data acquisition, management and sharing. There are modules for GNSS receiver control and data download, a database schema for metadata, tools for metadata handling, ingest software to manage file metadata, data file management scripts, GSAC, scripts for mirroring station data and metadata from partner GSACs, and extensive software and operator documentation. UNAVCO plans to provide a cloud VM image of Dataworks that would allow standing up a Dataworks-enabled GNSS data center without requiring upfront investment in server hardware. By enabling data creators to organize their data and metadata for sharing, Dataworks helps scientists expand their data curation awareness and responsibility, and enhances data access for all.
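
    Since GSAC exposes simple web services, a collaborator might query a federated data center with a few lines of Python; the host, path, parameter names, and response fields below are placeholders, not a documented Dataworks endpoint.

        # Query a GSAC-style site-search service (placeholder host and params).
        import requests

        resp = requests.get(
            "https://example-datacenter.org/gsacws/gsacapi/site/search",
            params={"output": "site.json", "site.code": "ABMF"},
            timeout=30,
        )
        resp.raise_for_status()
        for site in resp.json():                 # assumed: a JSON list of site records
            print(site.get("shortName"), site.get("latitude"), site.get("longitude"))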

  4. A trace-driven analysis of name and attribute caching in a distributed system

    NASA Technical Reports Server (NTRS)

    Shirriff, Ken W.; Ousterhout, John K.

    1992-01-01

    This paper presents the results of simulating file name and attribute caching on client machines in a distributed file system. The simulation used trace data gathered on a network of about 40 workstations. Caching was found to be advantageous: a cache on each client containing just 10 directories had a 91 percent hit rate on name lookups. Entry-based name caches (holding individual directory entries) had poorer performance for several reasons, resulting in a maximum hit rate of about 83 percent. File attribute caching obtained a 90 percent hit rate with a cache on each machine of the attributes for 30 files. The simulations show that maintaining cache consistency between machines is not a significant problem; only 1 in 400 name component lookups required invalidation of a remotely cached entry. Process migration to remote machines had little effect on caching. Caching was less successful in heavily shared and modified directories such as /tmp, but there were not enough references to /tmp overall to affect the results significantly. We estimate that adding name and attribute caching to the Sprite operating system could reduce server load by 36 percent and the number of network packets by 30 percent.
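
    The directory-cache result can be appreciated with a small replay simulation; the sketch below runs a synthetic trace (not the Sprite traces) through a fixed-size LRU cache and reports the hit rate.

        # Replay a lookup trace against a fixed-size LRU directory cache.
        from collections import OrderedDict
        import random

        def hit_rate(trace, capacity=10):
            cache, hits = OrderedDict(), 0
            for directory in trace:
                if directory in cache:
                    hits += 1
                    cache.move_to_end(directory)        # mark most recently used
                else:
                    cache[directory] = True
                    if len(cache) > capacity:
                        cache.popitem(last=False)       # evict least recently used
            return hits / len(trace)

        random.seed(0)
        trace = random.choices(["/tmp", "/usr/bin", "/home/a", "/home/b", "/etc"],
                               weights=[5, 3, 3, 2, 1], k=10_000)
        print(f"hit rate: {hit_rate(trace):.2%}")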

  5. A database for TMT interface control documents

    NASA Astrophysics Data System (ADS)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
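
    A JSON-style component description of the kind ingested by the database might look like the following sketch; the field names are modeled on the description above (commands received, events published and subscribed) but are not the actual TMT schema.

        # Illustrative JSON-style component description and a minimal ingest step.
        import json

        component = {
            "subsystem": "TCS",
            "component": "PointingKernel",
            "receive": [{"name": "setTarget", "args": ["ra", "dec"]}],
            "publish": {"events": ["currentPosition"], "alarms": ["limitWarning"]},
            "subscribe": {"events": ["M1CS.status"]},
        }

        def ingest(doc):
            for field in ("subsystem", "component"):    # minimal validation
                if field not in doc:
                    raise ValueError("missing " + field)
            print("ingested " + doc["subsystem"] + "." + doc["component"])

        ingest(json.loads(json.dumps(component)))       # round-trip as from a file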

  6. Quarterly Update, January-March 1992

    DTIC Science & Technology

    1992-03-01

    representations to support exploiting that commonality. The project used the Feature-Oriented Domain Analysis (FODA) method, developed by the project in 1990, in... FODA: feature-oriented domain analysis... FTP: file transfer protocol... concentrations, and product market share for 23 countries. Along with other SEI staff, members of the Rate Monotonic Analysis for Real-Time Systems (RMARTS

  7. 77 FR 1759 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-11

    ..., which Items have been prepared by the Exchange. The Commission is publishing this notice to solicit... Customer Gateway (``CCG'') that accesses the equity trading systems that it shares with its affiliates... increasing connectivity costs, including additional costs based on gateway software and hardware enhancements...

  8. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid, cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  9. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Bockelman, B.; Blomer, J.

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  10. Data Hemorrhages in the Health-Care Sector

    NASA Astrophysics Data System (ADS)

    Johnson, M. Eric

    Confidential data hemorrhaging from health-care providers pose financial risks to firms and medical risks to patients. We examine the consequences of data hemorrhages including privacy violations, medical fraud, financial identity theft, and medical identity theft. We also examine the types and sources of data hemorrhages, focusing on inadvertent disclosures. Through an analysis of leaked files, we examine data hemorrhages stemming from inadvertent disclosures on internet-based file sharing networks. We characterize the security risk for a group of health-care organizations using a direct analysis of leaked files. These files contained highly sensitive medical and personal information that could be maliciously exploited by criminals seeking to commit medical and financial identity theft. We also present evidence of the threat by examining user-issued searches. Our analysis demonstrates both the substantial threat and vulnerability for the health-care sector and the unique complexity exhibited by the US health-care system.

  11. Inadvertent Exposure to Pornography on the Internet: Implications of Peer-to-Peer File-Sharing Networks for Child Development and Families

    ERIC Educational Resources Information Center

    Greenfield, P.M.

    2004-01-01

    This essay comprises testimony to the Congressional Committee on Government Reform. The Committee's concern was the possibility of exposure to pornography when children and teens participate in peer-to-peer file-sharing networks, which are extremely popular in these age groups. A review of the relevant literature led to three major conclusions:…

  12. Storage resource manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Bakken, J.; Petravick, D.

    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing for uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing clogging of storage systems with "orphan" files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.
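
    A toy sketch of the limited-lifetime reservation idea described above: expired reservations are reclaimed, so "orphan" allocations cannot clog the store. This illustrates the concept only; it is not the SRM protocol or any Fermilab implementation.

        # Toy illustration of SRM-style space reservations with limited
        # lifetimes: expired reservations are garbage-collected on the next
        # request, freeing space automatically.
        import time

        class Reservation:
            def __init__(self, space_bytes, lifetime_s):
                self.space_bytes = space_bytes
                self.expires_at = time.time() + lifetime_s

        class ToySRM:
            def __init__(self, capacity):
                self.capacity = capacity
                self.reservations = []

            def reserve(self, space_bytes, lifetime_s):
                self._collect_expired()
                used = sum(r.space_bytes for r in self.reservations)
                if used + space_bytes > self.capacity:
                    raise RuntimeError("insufficient space")
                r = Reservation(space_bytes, lifetime_s)
                self.reservations.append(r)
                return r

            def _collect_expired(self):
                now = time.time()
                self.reservations = [r for r in self.reservations
                                     if r.expires_at > now]

        srm = ToySRM(capacity=10**12)
        r = srm.reserve(space_bytes=10**9, lifetime_s=3600)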

  13. Globus Identity, Access, and Data Management: Platform Services for Collaborative Science

    NASA Astrophysics Data System (ADS)

    Ananthakrishnan, R.; Foster, I.; Wagner, R.

    2016-12-01

    Globus is software-as-a-service for research data management, developed at, and operated by, the University of Chicago. Globus, accessible at www.globus.org, provides high speed, secure file transfer; file sharing directly from existing storage systems; and data publication to institutional repositories. 40,000 registered users have used Globus to transfer tens of billions of files totaling hundreds of petabytes between more than 10,000 storage systems within campuses and national laboratories in the US and internationally. Web, command line, and REST interfaces support both interactive use and integration into applications and infrastructures. An important component of the Globus system is its foundational identity and access management (IAM) platform service, Globus Auth. Both Globus research data management and other applications use Globus Auth for brokering authentication and authorization interactions between end-users, identity providers, resource servers (services), and a range of clients, including web, mobile, and desktop applications, and other services. Compliant with important standards such as OAuth, OpenID, and SAML, Globus Auth provides mechanisms required for an extensible, integrated ecosystem of services and clients for the research and education community. It underpins projects such as the US National Science Foundation's XSEDE system, NCAR's Research Data Archive, and the DOE Systems Biology Knowledge Base. Current work is extending Globus services to be compliant with FEDRAMP standards for security assessment, authorization, and monitoring for cloud services. We will present Globus IAM solutions and give examples of Globus use in various projects for federated access to resources. We will also describe how Globus Auth and Globus research data management capabilities enable rapid development and low-cost operations of secure data sharing platforms that leverage Globus services and integrate them with local policy and security.
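
    As a sketch of the brokered-authorization pattern the abstract describes, the snippet below shows a generic OAuth2 authorization-code exchange in Python. The token URL and credentials are placeholders, not actual Globus Auth endpoints.

        # Minimal sketch of the OAuth2 authorization-code exchange a client
        # of a broker like Globus Auth performs. The endpoint URL and client
        # credentials are placeholders, not guaranteed Globus values.
        import requests

        TOKEN_URL = "https://auth.example.org/v2/oauth2/token"  # placeholder

        def exchange_code_for_token(code, client_id, client_secret, redirect_uri):
            resp = requests.post(TOKEN_URL, data={
                "grant_type": "authorization_code",
                "code": code,
                "redirect_uri": redirect_uri,
            }, auth=(client_id, client_secret))
            resp.raise_for_status()
            # The bearer token is then presented to resource servers.
            return resp.json()["access_token"]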

  14. easyDAS: Automatic creation of DAS servers

    PubMed Central

    2011-01-01

    Background The Distributed Annotation System (DAS) has proven to be a successful way to publish and share biological data. Although there are more than 750 active registered servers from around 50 organizations, setting up a DAS server comprises a fair amount of work, making it difficult for many research groups to share their biological annotations. Given the clear advantage that the generalized sharing of relevant biological data offers the research community, it would be desirable to facilitate the sharing process. Results Here we present easyDAS, a web-based system enabling anyone to publish biological annotations with just some clicks. The system, available at http://www.ebi.ac.uk/panda-srv/easydas, is capable of reading different standard data file formats, processing the data, and creating a new publicly available DAS source in a completely automated way. The created sources are hosted on the EBI systems and can take advantage of its high storage capacity and network connection, freeing the data provider from any network management work. easyDAS is an open source project under the GNU LGPL license. Conclusions easyDAS is an automated DAS source creation system which can help many researchers in sharing their biological data, potentially increasing the amount of relevant biological data available to the scientific community. PMID:21244646
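
    A minimal sketch of the kind of ingestion step an automated source builder performs: parse a standard tabular annotation file into feature records ready to serve. The column layout is an assumption for illustration, not easyDAS's actual parser.

        # Sketch: read a tab-separated annotation file and turn rows into
        # feature records of the kind a DAS source serves. Column names are
        # illustrative assumptions.
        import csv

        def read_annotations(path):
            features = []
            with open(path, newline="") as f:
                for row in csv.DictReader(f, delimiter="\t"):
                    features.append({
                        "segment": row["chromosome"],
                        "start": int(row["start"]),
                        "end": int(row["end"]),
                        "label": row["label"],
                    })
            return features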

  15. 78 FR 32487 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-30

    ... and trade the shares of the following under NYSE Arca Equities Rule 8.600 (``Managed Fund Shares... proposes to list and trade the shares (``Shares'') of the PowerShares China A-Share Portfolio (``Fund... with the Commission as an open-end management investment company.\\6\\ \\4\\ A Managed Fund Share is a...

  16. Information Metacatalog for a Grid

    NASA Technical Reports Server (NTRS)

    Kolano, Paul

    2007-01-01

    SWIM is a Software Information Metacatalog that gathers detailed information about the software components and packages installed on a grid resource. Information is currently gathered for Executable and Linkable Format (ELF) executables and shared libraries, Java classes, shell scripts, and Perl and Python modules. SWIM is built on top of the POUR framework, which is described in the preceding article. SWIM consists of a set of Perl modules for extracting software information from a system, an XML schema defining the format of data that can be added by users, and a POUR XML configuration file that describes how these elements are used to generate periodic, on-demand, and user-specified information. Periodic software information is derived mainly from the package managers used on each system. SWIM collects information from native package managers in FreeBSD, Solaris, and IRIX as well as the RPM, Perl, and Python package managers on multiple platforms. Because not all software is available or installed in package form, SWIM also crawls the set of relevant paths from the File System Hierarchy Standard, which defines the standard file system structure used by all major UNIX distributions. Using these two techniques, the vast majority of software installed on a system can be located. SWIM computes the same information gathered by the periodic routines for specific files on specific hosts, and locates software on a system given only its name and type.
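
    The abstract describes two discovery techniques: querying package managers and crawling standard filesystem paths. The sketch below illustrates only the second, classifying files by extension and by the ELF magic number; the paths and rules are simplified assumptions, not SWIM's Perl implementation.

        # Sketch of filesystem crawling for software discovery: walk standard
        # paths and classify files as Python modules or ELF binaries.
        import os

        STANDARD_PATHS = ["/usr/bin", "/usr/local/bin"]   # simplified subset

        def classify(path):
            if path.endswith(".py"):
                return "python-module"
            try:
                with open(path, "rb") as f:
                    if f.read(4) == b"\x7fELF":      # ELF magic number
                        return "elf-binary-or-library"
            except OSError:
                pass
            return None

        def crawl(roots=STANDARD_PATHS):
            inventory = {}
            for root in roots:
                for dirpath, _dirs, files in os.walk(root):
                    for name in files:
                        kind = classify(os.path.join(dirpath, name))
                        if kind:
                            inventory.setdefault(kind, []).append(name)
            return inventory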

  17. 78 FR 2981 - Combined Notice of Filings #2

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-15

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 2 Take notice that the Commission received the following electric rate filings: Docket Numbers: ER13-388-001. Applicants: Sky River LLC. Description: Sky River LLC Request to Defer Action on Shared Facilities Agreement...

  18. 47 CFR 25.130 - Filing requirements for transmitting earth stations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... such earth station license applications must be filed electronically through the International Bureau... CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Earth Stations § 25.130 Filing... with § 25.203 shall be provided for earth stations transmitting in the frequency bands shared with...

  19. 47 CFR 25.130 - Filing requirements for transmitting earth stations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... such earth station license applications must be filed electronically through the International Bureau... CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Earth Stations § 25.130 Filing... with § 25.203 shall be provided for earth stations transmitting in the frequency bands shared with...

  20. 47 CFR 25.130 - Filing requirements for transmitting earth stations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... such earth station license applications must be filed electronically through the International Bureau... CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Earth Stations § 25.130 Filing... with § 25.203 shall be provided for earth stations transmitting in the frequency bands shared with...

  1. Advancing the Implementation of Hydrologic Models as Web-based Applications

    NASA Astrophysics Data System (ADS)

    Dahal, P.; Tarboton, D. G.; Castronova, A. M.

    2017-12-01

    Advanced computer simulations are required to understand hydrologic phenomena such as rainfall-runoff response, groundwater hydrology, and snow hydrology. Building a hydrologic model instance to simulate a watershed requires investment in data (diverse geospatial datasets such as terrain and soil) and computer resources, typically demands a wide skill set from the analyst, and the workflow involved is often difficult to reproduce. This work introduces a web-based prototype infrastructure in the form of a web application that provides researchers with easy-to-use access to complete hydrological modeling functionality. This includes creating the necessary geospatial and forcing data, preparing input files for a model by applying complex data preprocessing, running the model for a user-defined watershed, and saving the results to a web repository. The open source Tethys Platform was used to develop the web app front-end graphical user interface (GUI). We used HydroDS, a web service that provides data preparation processing capability to support backend computations used by the app. Results are saved in HydroShare, a hydrologic information system that supports the sharing of hydrologic data, models, and analysis tools. The TOPographic Kinematic APproximation and Integration (TOPKAPI) model served as the example for which we developed a complete hydrologic modeling service to demonstrate the approach. The final product is a complete modeling system accessible through the web to create input files and run the TOPKAPI hydrologic model for a watershed of interest. We are investigating similar functionality for the preparation of input to the Regional Hydro-Ecological Simulation System (RHESSys). Key Words: hydrologic modeling, web services, hydrologic information system, HydroShare, HydroDS, Tethys Platform
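
    To make the workflow concrete, here is a hedged sketch of the prepare-run-save sequence as a REST client. Every URL, endpoint, and parameter name is a hypothetical stand-in; the real HydroDS and HydroShare APIs differ.

        # Hedged sketch of a prepare-run-save modeling workflow over HTTP.
        # All URLs and parameter names are hypothetical placeholders.
        import requests

        HYDRO_DS = "https://hydro-ds.example.org/api"   # hypothetical service

        def prepare_and_run(outlet_x, outlet_y, start, end):
            # 1. Ask the data service to derive watershed inputs from terrain data.
            job = requests.post(f"{HYDRO_DS}/watershed", json={
                "outlet": [outlet_x, outlet_y],
                "start_date": start, "end_date": end,
            }).json()
            # 2. Run the model with the prepared inputs.
            result = requests.post(f"{HYDRO_DS}/run-topkapi",
                                   json={"inputs": job["inputs"]}).json()
            # 3. Persist outputs to a sharing repository (placeholder endpoint).
            requests.post("https://repo.example.org/resources", json=result)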

  2. Twiddlenet: Metadata Tagging and Data Dissemination in Mobile Device Networks

    DTIC Science & Technology

    2007-09-01

    hosting a distributed data dissemination application. Stated simply, there are a multitude of handheld devices on the market that can communicate in...content (UGC) across a network of distributed devices. This sharing is accomplished through the use of descriptive metadata tags that are assigned to a...file once it has been shared. These metadata files are uploaded to a centralized portal and arranged for efficient UGC location and searching

  3. The collaboratory for MS3D: a new cyberinfrastructure for the structural elucidation of biological macromolecules and their assemblies using mass spectrometry-based approaches.

    PubMed

    Yu, Eizadora T; Hawkins, Arie; Kuntz, Irwin D; Rahn, Larry A; Rothfuss, Andrew; Sale, Kenneth; Young, Malin M; Yang, Christine L; Pancerella, Carmen M; Fabris, Daniele

    2008-11-01

    Modern biomedical research is evolving with the rapid growth of diverse data types, biophysical characterization methods, computational tools and extensive collaboration among researchers spanning various communities and having complementary backgrounds and expertise. Collaborating researchers are increasingly dependent on shared data and tools made available by other investigators with common interests, thus forming communities that transcend the traditional boundaries of the single research laboratory or institution. Barriers, however, remain to the formation of these virtual communities, usually due to the steep learning curve associated with becoming familiar with new tools, or with the difficulties associated with transferring data between tools. Recognizing the need for shared reference data and analysis tools, we are developing an integrated knowledge environment that supports productive interactions among researchers. Here we report on our current collaborative environment, which focuses on bringing together structural biologists working in the area of mass spectrometry-based methods for the analysis of tertiary and quaternary macromolecular structures (MS3D), called the Collaboratory for MS3D (C-MS3D). C-MS3D is a Web-portal designed to provide collaborators with a shared work environment that integrates data storage and management with data analysis tools. Files are stored and archived along with pertinent metadata in such a way as to allow file handling to be tracked (data provenance) and data files to be searched using keywords and modification dates. While at this time the portal is designed around a specific application, the shared work environment is a general approach to building collaborative work groups. The goal is not only to provide a common data sharing and archiving system, but also to assist in building new collaborations and to spur the development of new tools and technologies.
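
    A small sketch of the provenance-tracking store the abstract describes: uploads carry metadata, every action is appended to a provenance trail, and files can be searched by keyword. Class and field names are invented for illustration.

        # Toy provenance-aware file store: metadata on upload, an append-only
        # action trail, and keyword search. Names are illustrative.
        import datetime

        class ProvenanceStore:
            def __init__(self):
                self.files = {}

            def add(self, name, data, keywords):
                self.files[name] = {
                    "data": data,
                    "keywords": set(keywords),
                    "modified": datetime.date.today(),
                    "provenance": [("created", datetime.datetime.now())],
                }

            def record(self, name, action):
                self.files[name]["provenance"].append(
                    (action, datetime.datetime.now()))

            def search(self, keyword):
                return [n for n, m in self.files.items()
                        if keyword in m["keywords"]]

        store = ProvenanceStore()
        store.add("crosslinks.csv", b"...", ["MS3D", "crosslinking"])
        store.record("crosslinks.csv", "downloaded by collaborator")
        print(store.search("MS3D"))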

  4. The key image and case log application: new radiology software for teaching file creation and case logging that incorporates elements of a social network.

    PubMed

    Rowe, Steven P; Siddiqui, Adeel; Bonekamp, David

    2014-07-01

    Our aim was to create novel radiology key image software that is easy to use for novice users, incorporates elements adapted from social networking Web sites, facilitates resident and fellow education, and can serve as the engine for departmental sharing of interesting cases and follow-up studies. Using open-source programming languages and software, radiology key image software (the key image and case log application, KICLA) was developed. This system uses a lightweight interface with the institutional picture archiving and communications systems and enables the storage of key images, image series, and cine clips. It was designed to operate with minimal disruption to the radiologists' daily workflow. Many features of the user interface have been inspired by social networking Web sites, including image organization into private or public folders, flexible sharing with other users, and integration of departmental teaching files into the system. We also review the performance, usage, and acceptance of this novel system. KICLA was implemented at our institution and achieved widespread popularity among radiologists. A large number of key images have been transmitted to the system since it became available. After this early experience period, the most commonly encountered radiologic modalities are represented. A survey distributed to users revealed that most of the respondents found the system easy to use (89%) and fast at allowing them to record interesting cases (100%). One hundred percent of respondents also stated that they would recommend a system such as KICLA to their colleagues. The system described herein represents a significant upgrade to the Digital Imaging and Communications in Medicine teaching file paradigm with efforts made to maximize its ease of use and inclusion of characteristics inspired by social networking Web sites that allow the system additional functionality such as individual case logging. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  5. The national drug abuse treatment clinical trials network data share project: website design, usage, challenges, and future directions.

    PubMed

    Shmueli-Blumberg, Dikla; Hu, Lian; Allen, Colleen; Frasketi, Michael; Wu, Li-Tzy; Vanveldhuisen, Paul

    2013-01-01

    There are many benefits of data sharing, including the promotion of new research from effective use of existing data, replication of findings through re-analysis of pooled data files, meta-analysis using individual patient data, and reinforcement of open scientific inquiry. A randomized controlled trial is considered the 'gold standard' for establishing treatment effectiveness, but clinical trial research is very costly, and sharing data is an opportunity to expand the investment of the clinical trial beyond its original goals at minimal cost. We describe the goals, development, and usage of the Data Share website (http://www.ctndatashare.org) for the National Drug Abuse Treatment Clinical Trials Network (CTN) in the United States, including lessons learned, limitations, major revisions, and considerations for future directions to improve data sharing. Data management and programming procedures were conducted to produce uniform and Health Insurance Portability and Accountability Act (HIPAA)-compliant de-identified research data files from the completed trials of the CTN for archiving, managing, and sharing on the Data Share website. Since its inception in 2006 and through October 2012, nearly 1700 downloads of data from 27 clinical trials have been made from the Data Share website, with use increasing over the years. Individuals from 31 countries have downloaded data from the website, and there have been at least 13 publications derived from analyzing data through the public Data Share website. Minimal control over data requests and usage has resulted in little information and lack of control regarding how the data from the website are used. Lack of uniformity in data elements collected across CTN trials has limited cross-study analyses. The Data Share website offers researchers easy access to de-identified data files with the goal of promoting additional research and identifying new findings from completed CTN studies. To maximize the utility of the website, ongoing collaborative efforts are needed to standardize the core measures used for data collection in the CTN studies with the goal of increasing their comparability and facilitating the ability to pool data files for cross-study analyses.
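
    As an illustration of the de-identification step, the sketch below drops direct identifiers and coarsens quasi-identifiers (HIPAA's Safe Harbor rule, for example, requires ages over 89 to be aggregated). The field names are illustrative; a real pipeline handles many more identifier categories.

        # Toy de-identification transform: drop direct identifiers, keep only
        # birth year, and top-code extreme ages. Field names are illustrative.
        DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone"}

        def deidentify(record):
            clean = {k: v for k, v in record.items()
                     if k not in DIRECT_IDENTIFIERS}
            if "birth_date" in clean:                 # keep year only
                clean["birth_year"] = clean.pop("birth_date")[:4]
            if clean.get("age", 0) > 89:              # aggregate extreme ages
                clean["age"] = 90
            return clean

        print(deidentify({"name": "J. Doe", "age": 93,
                          "birth_date": "1920-05-01", "site": "A"}))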

  6. The National Drug Abuse Treatment Clinical Trials Network Data Share Project: Website Design, Usage, Challenges and Future Directions

    PubMed Central

    Shmueli-Blumberg, Dikla; Hu, Lian; Allen, Colleen; Frasketi, Michael; Wu, Li-Tzy; VanVeldhuisen, Paul

    2014-01-01

    Background There are many benefits of data sharing, including the promotion of new research from effective use of existing data, replication of findings through re-analysis of pooled data files, meta-analysis using individual patient data, and reinforcement of open scientific inquiry. A randomized controlled trial is considered the “gold standard” for establishing treatment effectiveness, but clinical trial research is very costly, and sharing data is an opportunity to expand the investment of the clinical trial beyond its original goals at minimal cost. Purpose We describe the goals, development, and usage of the Data Share website (www.ctndatashare.org) for the National Drug Abuse Treatment Clinical Trials Network (CTN) in the US, including lessons learned, limitations, major revisions, and considerations for future directions to improve data sharing. Methods Data management and programming procedures were conducted to produce uniform and Health Insurance Portability and Accountability Act (HIPAA)-compliant de-identified research data files from the completed trials of the CTN for archiving, managing, and sharing on the Data Share website. Results Since its inception in 2006 and through October 2012, nearly 1700 downloads of data from 27 clinical trials have been made from the Data Share website, with use increasing over the years. Individuals from 31 countries have downloaded data from the website, and there have been at least 13 publications derived from analyzing data through the public Data Share website. Limitations Minimal control over data requests and usage has resulted in little information and lack of control regarding how the data from the website are used. Lack of uniformity in data elements collected across CTN trials has limited cross-study analyses. Conclusions The Data Share website offers researchers easy access to de-identified data files with the goal of promoting additional research and identifying new findings from completed CTN studies. To maximize the utility of the website, ongoing collaborative efforts are needed to standardize the core measures used for data collection in the CTN studies with the goal of increasing their comparability and facilitating the ability to pool data files for cross-study analyses. PMID:24085772

  7. 75 FR 71158 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ...://www.cfe.cboe.com ), automated quotation systems, published or other public sources, or on-line... and last-sale information regarding the Shares will be disseminated through the facilities of the... Optimized Portfolio Value (``IOPV'') will be calculated. The IOPV is an indicator of the value of the VIX...

  8. 78 FR 16344 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-14

    ... within the meaning of Regulation NMS (i.e. ``dark venues'' or ``dark pools''). XCST orders, pursuant to Rule 3315(a)(1)(A)(ix), check the System for available shares and simultaneously route to select dark... Web site ( http://www.sec.gov/rules/sro.shtml ). Copies of the submission, all subsequent amendments...

  9. 78 FR 15988 - Self-Regulatory Organizations; the NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-13

    .... ``dark venues'' or ``dark pools''). QCST orders, pursuant to Rule 4758(a)(1)(A)(xiii), check the System for available shares and simultaneously route to select dark venues and to certain low cost exchanges... Web site ( http://www.sec.gov/rules/sro.shtml ). Copies of the submission, all subsequent amendments...

  10. 78 FR 21632 - International Mail Product

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-11

    ... of United States Postal Service Filing of a Functionally Equivalent International Business Reply...' Decision No. 08-24; and Attachment 4--an application for non-public treatment of materials filed under seal... equivalent to the baseline agreement filed in Docket No. CP2011-59 because it shares similar cost and market...

  11. 77 FR 6833 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-09

    ... Reference Asset. The term ``Currency,'' as used in the proposed rule, means one or more currencies, or.... Description Proposed Rule 5711(e)(iii) provides that the term ``Currency Trust Shares'' as used in these...-Based Trust Shares; Currency Trust Shares; Commodity Index Trust Shares; Commodity Futures Trust Shares...

  12. Analyzing Sliding Stability of Structures Using the Modified Computer Program GWALL. Revision,

    DTIC Science & Technology

    1983-11-01

    ANALYZING SLIDING STABILITY OF STRUCTURES USING THE MODIFIED COMPUTER PRO..(U) ARMY ENGINEER WATERWAYS EXPERIMENT STATION VICKSBURG MS...GWALL and/or the graphics software package, Graphics Compatibility System (GCS). Input Features 4. GWALL is very easy to use because it allows the...Prepared Data File 9. Time-sharing computer systems do not always respond quickly to the user's commands, especially when there are many users

  13. 78 FR 7835 - Self-Regulatory Organizations; BOX Options Exchange LLC; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-04

    ... at $145 per share would carry a total deliverable value of $145,000, and the strike price would be... Jumbo option strike price of $145 was trading at $146 per share, the intrinsic $1 per share value would... Shares Deliverable Upon Exercise: 100 shares / 1,000 shares; Strike Price if underlying is 45: 45 / $45...

  14. 76 FR 28493 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change To List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-17

    ... of ProShares Short VIX Short-Term Futures ETF, ProShares Short VIX Mid-Term Futures ETF, ProShares Ultra VIX Short-Term Futures ETF, ProShares Ultra VIX Mid- Term Futures ETF, ProShares UltraShort VIX Short-Term Futures ETF, and ProShares UltraShort VIX Mid-Term Futures ETF Under NYSE Arca Equities Rule...

  15. Logic Design of a Shared Disk System in a Multi-Micro Computer Environment.

    DTIC Science & Technology

    1983-06-01

    overall system, is given. An exhaustive description of each device can be found in the cited references. A. INTEL 8086. The INTEL 8086 is a high...either could be accomplished, it was necessary to understand both the existing system architecture and software. The last chapter addressed that...to be adapted: the loader program and the boot ROM program. The loader program is a simplified version of CP/M-86 and contains only enough file

  16. 78 FR 41462 - Self-Regulatory Organizations; BATS Exchange, Inc.; Notice of Filing of a Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ... Asset. The term ``Currency,'' as used in the proposed rule, means one or more currencies, or currency...; Commodity-Based Trust Shares; Currency Trust Shares; Commodity Index Trust Shares; Commodity Futures Trust Shares; Partnership Units; Trust Units; Managed Trust Securities; and Currency Warrants. Specifically...

  17. Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    1998-01-01

    This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, and vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.

  18. 24 CFR 266.626 - Notice of default and filing an insurance claim.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Notice of default and filing an... AND OTHER AUTHORITIES HOUSING FINANCE AGENCY RISK-SHARING PROGRAM FOR INSURED AFFORDABLE MULTIFAMILY PROJECT LOANS Contract Rights and Obligations Claim Procedures § 266.626 Notice of default and filing an...

  19. 75 FR 22889 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-30

    ... transactions. Currently, the First Trust ISE Water ETF (``FIW''), the Claymore China Technology ETF (``CQQQ''), the ProShares UltraPro Short Dow30 (``SDOW''), the ProShares UltraPro Dow30 (``UDOW''), the ProShares UltraPro Short MidCap400 (``SMDD''), the ProShares UltraPro MidCap400 (``UMDD''), the ProShares UltraPro...

  20. 75 FR 32531 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-08

    .... Currently, the First Trust ISE Water ETF (``FIW''), the Claymore China Technology ETF (``CQQQ''), the ProShares UltraPro Short Dow30 (``SDOW''), the ProShares UltraPro Dow30 (``UDOW''), the ProShares UltraPro Short MidCap400 (``SMDD''), the ProShares UltraPro MidCap400 (``UMDD''), the ProShares UltraPro Short...

  1. 75 FR 14236 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-24

    ... transactions. Currently, the First Trust ISE Water ETF (``FIW'') and the Claymore China Technology ETF (``CQQQ... options on the ProShares UltraPro Short Dow30 (``SDOW''), the ProShares UltraPro Dow30 (``UDOW''), the ProShares UltraPro Short MidCap400 (``SMDD''), the ProShares UltraPro MidCap400 (``UMDD''), the ProShares...

  2. ``The Legal Bit's in Russian'': Making Sense of Downloaded Music

    NASA Astrophysics Data System (ADS)

    Kibby, Marjorie D.

    Peer-to-peer sharing of music files grew in the face of consumer dissatisfaction with the compact disc and the absence of any real alternative. Many users were more or less “forced” to turn to illegal file sharing to access single tracks, back catalogues, and niche genres. Recently the almost simultaneous arrival of broadband internet and the iPod has seen music downloading become a respectable activity and a multi-billion dollar industry.

  3. 77 FR 31899 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-30

    ... Trading of iShares Strategic Beta U.S. Large Cap Fund and iShares Strategic Beta U.S. Small Cap Fund Under... Fund Shares''): iShares Strategic Beta U.S. Large Cap Fund and iShares Strategic Beta U.S. Small Cap...: \\3\\ iShares Strategic Beta U.S. Large Cap Fund and iShares Strategic Beta U.S. Small Cap Fund (each...

  4. 76 FR 617 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-05

    ... orders in the Exchange's system. This schedule is known as the DMM Capital Commitment Schedule (``CCS'').\\9\\ CCS provides the Display Book [supreg] \\10\\ with the amount of shares that the DMM is willing to trade at price points outside, at and inside the Exchange Best Bid or Best Offer (``BBO''). CCS interest...

  5. 75 FR 14221 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-24

    ... in the Exchange's system. This schedule is known as the DMM Capital Commitment Schedule (``CCS'').\\10\\ CCS provides the Display Book[reg] \\11\\ with the amount of shares that the DMM is willing to trade at price points outside, at and inside the Exchange BBO. CCS interest is separate and distinct from other...

  6. 75 FR 2180 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-14

    ... (``SLP'') program.\\4\\ The Exchange proposes to establish a system of credits payable to SLPs when they... add liquidity to the Exchange in securities with a per share price of $1.00 or more, if the SLP meets... the SLP does not meet the 3% average or more quoting requirement in an assigned security pursuant to...

  7. 78 FR 15999 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-13

    ... NMS (i.e. ``dark venues'' or ``dark pools''). BCST orders, pursuant to Rule 4758(a)(1)(A)(ix), check the System for available shares and simultaneously route to select dark venues and to certain low cost... charged for such executions, including its own costs. As a general matter, BX believes that the proposed...

  8. 75 FR 6077 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-05

    ... NYSE or NYSE Amex opening or closing processes. DOT orders do not check the NASDAQ book prior to... thereafter check the NASDAQ book for available shares and are then converted into SCAN or STGY orders... book and destinations on the DOTI System routing table and then are sent to NYSE or NYSE Amex. Such...

  9. A Gossip-Based Optimistic Replication for Efficient Delay-Sensitive Streaming Using an Interactive Middleware Support System

    NASA Astrophysics Data System (ADS)

    Mavromoustakis, Constandinos X.; Karatza, Helen D.

    2010-06-01

    When resources are shared among many clients, efficiency is substantially degraded because the requested resources are scarce. Availability is often further strained by factors such as temporal constraints on availability or node flooding by the requested replicated file chunks. Replicated file chunks should therefore be efficiently disseminated in order to enable resource availability on demand by mobile users. This work considers a cross-layered middleware support system for efficient delay-sensitive streaming that uses each device's connectivity and social interactions. The collaborative streaming is achieved through an epidemic file chunk replication policy which uses a transition-based approach of a chained infectious-disease model with susceptible, infected, recovered, and dead states. The gossip-based stateful model directs each mobile node to host a file chunk or, when the chunk is no longer needed, to purge it. The proposed model is thoroughly evaluated through experimental simulation, measuring the effective throughput Eff as a function of the packet loss parameter and contrasting it with the effectiveness of the gossip-based replication policy.
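
    A toy simulation of the chained infection model sketched above, with susceptible, infected, recovered, and dead states for a file chunk. The transition probabilities are made-up illustrative values, not parameters from the paper.

        # Toy SIRD-style simulation of chunk replication: Susceptible (no
        # chunk), Infected (hosts and spreads), Recovered (hosts, no longer
        # spreads), Dead (purged the chunk).
        import random

        random.seed(1)
        S, I, R, D = "S", "I", "R", "D"
        nodes = [I] + [S] * 49          # one initial holder of the file chunk

        def step(nodes, p_spread=0.3, p_recover=0.1, p_purge=0.05):
            nxt = list(nodes)
            spreaders = nodes.count(I)
            for i, state in enumerate(nodes):
                p_infect = 1 - (1 - p_spread / len(nodes)) ** spreaders
                if state == S and random.random() < p_infect:
                    nxt[i] = I          # gossip delivered the chunk
                elif state == I and random.random() < p_recover:
                    nxt[i] = R          # keeps chunk, stops gossiping
                elif state == R and random.random() < p_purge:
                    nxt[i] = D          # chunk no longer needed: purge
            return nxt

        for _ in range(30):
            nodes = step(nodes)
        print({s: nodes.count(s) for s in (S, I, R, D)})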

  10. 77 FR 52776 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ... vendors at least once per day. Information regarding market price and trading volume of the Shares will be... and Trading of iShares 2018 S&P AMT-Free Municipal Series and iShares 2019 S&P AMT-Free Municipal... Equities Rule 5.2(j)(3), Commentary .02, the shares of the following two series of iShares Trust: iShares...

  11. [Improving the physician-dental surgeon relationship to improve patient care].

    PubMed

    Tenenbaum, Annabelle; Folliguet, Marysette; Berdougo, Brice; Hervé, Christian; Moutel, Grégoire

    2008-04-01

    This study had two aims: to assess the nature of the relationship between general practitioners (GPs) and dental surgeons in relation to patient care and to evaluate qualitatively their interest in the changes that health networks and shared patient medical files could bring. Questionnaires were completed by 12 GPs belonging to ASDES, a private practitioner-hospital health network that seeks to promote a partnership between physicians and dental surgeons, and by 13 private dental surgeons in the network catchment area. The GPs and dentists had quite different perceptions of their relationship. Most dentists rated their relationship with GPs as "good" to "excellent" and did not wish to modify it, while GPs rated their relationship with dentists as nonexistent and expressed a desire to change the situation. Some GPs and some dentists supported data exchange by sharing personal medical files through the network. Many obstacles hinder communication between GPs and dentists. There is insufficient coordination between professionals. Health professionals must be made aware of how changes in the health care system (health networks, personal medical files, etc) can help to provide patients with optimal care. Technical innovations in medicine will not be beneficial to patients unless medical education and training begins to include interdisciplinary and holistic approaches to health care and preventive care.

  12. Data Sharing Interviews with Crop Sciences Faculty: Why They Share Data and How the Library Can Help

    ERIC Educational Resources Information Center

    Williams, Sarah C.

    2013-01-01

    This study was designed to generate a deeper understanding of data sharing by targeting faculty members who had already made data publicly available. During interviews, crop scientists at the University of Illinois at Urbana-Champaign were asked why they decided to share data, why they chose a data sharing method (e. g., supplementary file,…

  13. 75 FR 1093 - Self-Regulatory Organizations; The Options Clearing Corporation; Notice of Filing of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-08

    ... any option or any futures contracts on ETFS Physical Swiss Gold Shares and ETFS Physical Silver Shares... jurisdictional status of options or security futures on ETFS Physical Swiss Gold Shares or ETFS Physical Silver... approving a proposed rule change clarifying that options and securities futures on SPDR Gold Shares are...

  14. 75 FR 47043 - Self-Regulatory Organizations; BATS Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-04

    ... Educational Services Inc 280 TSL Trina Solar Ltd 332 NKE NIKE Inc 282 EWW iShares MSCI Mexico 335 FIS Fidelity...-Regulatory Organizations; BATS Exchange, Inc.; Notice of Filing and Immediate Effectiveness of a Proposed...\\ notice is hereby given that on July 26, 2010, BATS Exchange, Inc. (the ``Exchange'' or ``BATS'') filed...

  15. 75 FR 47664 - Self-Regulatory Organizations; NASDAQ OMX PHLX, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ..../The 331 ESI ITT Educational Services Inc. 280 TSL Trina Solar Ltd. 332 NKE NIKE Inc. 282 EWW iShares...-Regulatory Organizations; NASDAQ OMX PHLX, Inc.; Notice of Filing and Immediate Effectiveness of Proposed..., notice is hereby given that on July 22, 2010, NASDAQ OMX PHLX, Inc. (``Phlx'' or ``Exchange'') filed with...

  16. Active Learning in the Online Environment: The Integration of Student-Generated Audio Files

    ERIC Educational Resources Information Center

    Bolliger, Doris U.; Armier, David Des, Jr.

    2013-01-01

    Educators have integrated instructor-produced audio files in a variety of settings and environments for purposes such as content presentation, lecture reviews, student feedback, and so forth. Few instructors, however, require students to produce audio files and share them with peers. The purpose of this study was to obtain empirical data on…

  17. Derived virtual devices: a secure distributed file system mechanism

    NASA Technical Reports Server (NTRS)

    VanMeter, Rodney; Hotz, Steve; Finn, Gregory

    1996-01-01

    This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve Input/Output efficiency by allowing user processes to perform I/O operations directly from devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide for efficient data transfer in a typical NAP environment. We show how DVDs facilitate third-party transfer, and provide the security required in a more open network environment.
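
    A conceptual sketch of the checks the abstract attributes to a derived virtual device: user authentication, block-range (resource boundary) enforcement, and restricted operations such as read-only access. Class and method names are invented for illustration.

        # Toy derived-virtual-device wrapper enforcing owner checks, block
        # boundaries, and read-only restriction at the device.
        class DerivedVirtualDevice:
            def __init__(self, device, owner, first_block, last_block,
                         read_only=True):
                self.device, self.owner = device, owner
                self.first_block, self.last_block = first_block, last_block
                self.read_only = read_only

            def _check(self, user, block, writing):
                if user != self.owner:
                    raise PermissionError("user not authorized for this DVD")
                if not (self.first_block <= block <= self.last_block):
                    raise ValueError("block outside DVD boundary")
                if writing and self.read_only:
                    raise PermissionError("DVD is read-only")

            def read(self, user, block):
                self._check(user, block, writing=False)
                return self.device.read_block(block)

        class DummyDisk:
            def read_block(self, block):
                return b"\x00" * 512

        dvd = DerivedVirtualDevice(DummyDisk(), owner="alice",
                                   first_block=0, last_block=1023)
        print(len(dvd.read(user="alice", block=10)))   # 512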

  18. Emerging Geospatial Sharing Technologies in Earth and Space Science Informatics

    NASA Astrophysics Data System (ADS)

    Singh, R.; Bermudez, L. E.

    2013-12-01

    The Open Geospatial Consortium (OGC) mission is to serve as a global forum for the collaboration of developers and users of spatial data products and services, and to advance the development of international standards for geospatial interoperability. The OGC coordinates with over 400 institutions in the development of geospatial standards. In recent years two main trends have been disrupting geospatial applications: mobile devices and context sharing. People now have more and more mobile devices to support their work and personal lives. Mobile devices are intermittently connected to the internet and have smaller computing capacity than a desktop computer. Based on this trend, a new OGC file format standard called GeoPackage will enable greater geospatial data sharing on mobile devices. GeoPackage is perhaps best understood as the natural evolution of Shapefiles, which have been the predominant lightweight geodata sharing format for two decades. However, the Shapefile format is extremely limited. Four major shortcomings are that only vector points, lines, and polygons are supported; property names are constrained by the dBASE format; multiple files are required to encode a single data set; and multiple Shapefiles are required to encode multiple data sets. A more modern lingua franca for geospatial data is long overdue. GeoPackage fills this need with support for vector data, image tile matrices, and raster data. And it builds upon a database container - SQLite - that's self-contained, single-file, cross-platform, serverless, transactional, and open source. A GeoPackage, in essence, is a set of SQLite database tables whose content and layout is described in the candidate GeoPackage Implementation Specification available at https://portal.opengeospatial.org/files/?artifact_id=54838&version=1. The second trend is sharing client 'contexts'. When a user is looking into an article or a product on the web, they can easily share this information with colleagues or friends via an email that includes URLs (links to web resources) and attachments (inline data). In the case of geospatial information, a user would like to share a map created from different OGC sources, which may include, for example, WMS and WFS links, and GML and KML annotations. The emerging OGC file format is called the OGC Web Services Context Document (OWS Context), which allows clients to reproduce a map previously created by someone else. Context sharing is important in a variety of domains, from emergency response, where fire, police, and emergency medical personnel need to work off a common map, to multi-national military operations, where coalition forces need to share common data sources but have cartographic displays in different languages and symbology sets. OWS Contexts can be written in XML (building upon the Atom Syndication Format) or JSON. This presentation will provide an introduction to GeoPackage and OWS Context and how they can be used to advance the sharing of Earth and space science information.
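
    To show the "database container" idea concretely, the sketch below creates a single SQLite file holding both a contents table and a feature table. The schema is a simplified illustration, not the full GeoPackage specification.

        # Minimal sketch of a GeoPackage-like SQLite container: one file,
        # self-describing contents table plus a feature table. Simplified;
        # real GeoPackages store WKB geometry and more metadata.
        import sqlite3

        conn = sqlite3.connect("example.gpkg")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS gpkg_contents (
            table_name TEXT PRIMARY KEY,
            data_type  TEXT NOT NULL,          -- 'features' or 'tiles'
            identifier TEXT
        );
        CREATE TABLE IF NOT EXISTS points (
            id   INTEGER PRIMARY KEY,
            name TEXT,
            lon  REAL,
            lat  REAL
        );
        """)
        conn.execute("INSERT OR REPLACE INTO gpkg_contents "
                     "VALUES ('points', 'features', 'demo')")
        conn.execute("INSERT INTO points (name, lon, lat) "
                     "VALUES ('site-a', -155.5, 19.8)")
        conn.commit()
        conn.close()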

  19. Security on Cloud Revocation Authority using Identity Based Encryption

    NASA Astrophysics Data System (ADS)

    Rajaprabha, M. N.

    2017-11-01

    In the era of cloud computing, most people save their documents, files, and other data in cloud storage. Security in the cloud is therefore important, because confidential material resides there. To overcome public key infrastructure (PKI) issues, revocable Identity-Based Encryption (IBE) techniques have been introduced that eliminate the need for a PKI. One such technique, the key-update cloud service provider, has two drawbacks: high computation and communication costs, and poor scalability. To overcome these problems we propose a system in which a Cloud Revocation Authority (CRA) holds only the secret key for each user. The secret key is encrypted with Advanced Encryption Standard (AES) security and sent to the CRA to authenticate the person who wants to share data or files or to communicate. Only through that key can another user access the file; if a user applies an invalid key to a particular file, information about that user and file is sent to the administrator, who has the right to block or blacklist that person from using the system services.
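
    A sketch of the abstract's AES step: encrypting a user's key before handing it to the revocation authority, here with AES-GCM from the widely used 'cryptography' package. Key distribution and the IBE scheme itself are out of scope, and all names are placeholders.

        # Sketch: encrypt a user's time-limited key for transport to the CRA
        # using AES-GCM from the 'cryptography' package.
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        transport_key = AESGCM.generate_key(bit_length=256)  # shared with the CRA
        aesgcm = AESGCM(transport_key)

        user_time_key = b"example-ibe-time-key"               # placeholder payload
        nonce = os.urandom(12)
        ciphertext = aesgcm.encrypt(nonce, user_time_key, b"user-id:alice")

        # The CRA decrypts with the same transport key and associated data.
        assert aesgcm.decrypt(nonce, ciphertext, b"user-id:alice") == user_time_key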

  20. Distributed Systems Technology Survey.

    DTIC Science & Technology

    1987-03-01

    and protocols. 2. Hardware Technology. Economic factors were a major reason for the proliferation of distributed systems. Processors, memory, and magnetic and optical...destined messages and perform the appropriate forwarding. There is no agreement that a lightweight process mechanism is essential to support commonly used...Xerox PARC environment [31]. Shared file servers, discussed below, are essential to the success of such a scheme. 11. Security. A distributed

  1. 76 FR 28252 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-16

    ... liquidity provider rebate is $0.0020 per share executed for displayed quotes/orders and $0.0010 per share... determining whether a member qualifies for its highest rebate tier of $0.0015 per share executed for non-displayed quotes/orders and $0.00295 per share executed for displayed quotes/orders. Currently, a member's...

  2. 76 FR 56249 - Self-Regulatory Organizations; BATS Exchange, Inc.; Notice of Filing of Proposed Rule Change by...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... sell orders at the price at which the most shares would execute. BATS Auction Feed In addition to the... Shares and the Reference Sell Shares as determined at each price level within the Reference Price Range... be used. Indicative Price. The Indicative Price will be the price at which the most shares from the...

  3. 76 FR 805 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-06

    ... Trading Shares of the SPDR Nuveen S&P High Yield Municipal Bond ETF December 30, 2010. Pursuant to Section... Change The Exchange proposes to list and trade shares of the SPDR Nuveen S&P High Yield Municipal Bond... for, the Proposed Rule Change 1. Purpose The Exchange proposes to list and trade shares (``Shares...

  4. 78 FR 76669 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Amendment No. 1 and Order...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-18

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-71067; File No. SR-NYSEArca-2013-105] Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Amendment No. 1 and Order Granting Accelerated Approval of a Proposed Rule Change, as Modified by Amendment No. 1, To List and Trade Shares of the SPDR...

  5. 78 FR 56970 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change To List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-16

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-70356; File No. SR-NYSEArca-2013-86] Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change To List and Trade Shares.... Pursuant to Section 19(b)(1) of the Securities Exchange Act of 1934 (``Act'') \\1\\ and Rule 19b-4 thereunder...

  6. Mass spectrometer output file format mzML.

    PubMed

    Deutsch, Eric W

    2010-01-01

    Mass spectrometry is an important technique for analyzing proteins and other biomolecular compounds in biological samples. Each of the vendors of these mass spectrometers uses a different proprietary binary output file format, which has hindered data sharing and the development of open source software for downstream analysis. The solution has been to develop, with the full participation of academic researchers as well as software and hardware vendors, an open XML-based format for encoding mass spectrometer output files, and then to write software to use this format for archiving, sharing, and processing. This chapter presents the various components and information available for this format, mzML. In addition to the XML schema that defines the file structure, a controlled vocabulary provides clear terms and definitions for the spectral metadata, and a semantic validation rules mapping file allows the mzML semantic validator to ensure that an mzML document complies with one of several levels of requirements. Complete documentation and example files ensure that the format may be uniformly implemented. At the time of release, there already existed several implementations of the format, and vendors have committed to supporting the format in their products.
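
    To illustrate the controlled-vocabulary mechanism, the sketch below reads cvParam metadata from a hand-written fragment in the spirit of mzML; real files are namespaced and validated against the full schema, and the accession numbers here should be treated as illustrative.

        # Sketch: extract controlled-vocabulary parameters from an mzML-style
        # spectrum element. The XML fragment is hand-written for illustration.
        import xml.etree.ElementTree as ET

        doc = """<spectrum id="scan=1">
          <cvParam accession="MS:1000511" name="ms level" value="1"/>
          <cvParam accession="MS:1000128" name="profile spectrum" value=""/>
        </spectrum>"""

        spectrum = ET.fromstring(doc)
        params = {p.get("name"): p.get("value")
                  for p in spectrum.iter("cvParam")}
        print(spectrum.get("id"), params)   # scan=1 {'ms level': '1', ...}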

  7. STARS 2.0: 2nd-generation open-source archiving and query software

    NASA Astrophysics Data System (ADS)

    Winegar, Tom

    2008-07-01

    The Subaru Telescope is in the process of developing an open-source alternative to the 1st-generation software and databases (STARS 1) used for archiving and query. For STARS 2, we have chosen PHP and Python for scripting and MySQL as the database software. We have collected feedback from staff and observers, and used this feedback to significantly improve the design and functionality of our future archiving and query software. Archiving - We identified two weaknesses in 1st-generation STARS archiving software: a complex and inflexible table structure and uncoordinated system administration for our business model: taking pictures from the summit and archiving them in both Hawaii and Japan. We adopted a simplified and normalized table structure with passive keyword collection, and we are designing an archive-to-archive file transfer system that automatically reports real-time status and error conditions and permits error recovery. Query - We identified several weaknesses in 1st-generation STARS query software: inflexible query tools, poor sharing of calibration data, and no automatic file transfer mechanisms to observers. We are developing improved query tools and sharing of calibration data, and multi-protocol unassisted file transfer mechanisms for observers. In the process, we have redefined a 'query': from an invisible search result that can only transfer once in-house right now, with little status and error reporting and no error recovery - to a stored search result that can be monitored, transferred to different locations with multiple protocols, reporting status and error conditions and permitting recovery from errors.
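
    A sketch of the transfer behavior described above: report status as the copy proceeds and recover from transient errors by retrying with backoff. The copy function and file names are placeholders for the real archive-to-archive mover.

        # Sketch of a status-reporting, error-recovering transfer loop.
        # The file names are placeholders.
        import shutil
        import time

        def transfer_with_recovery(copy_fn, src, dst, attempts=3, backoff_s=5):
            for attempt in range(1, attempts + 1):
                try:
                    copy_fn(src, dst)
                    print(f"OK   {src} -> {dst} (attempt {attempt})")
                    return True
                except OSError as exc:
                    print(f"FAIL {src}: {exc} (attempt {attempt})")
                    time.sleep(backoff_s * attempt)   # escalating backoff
            return False

        transfer_with_recovery(shutil.copyfile, "frame0001.fits",
                               "frame0001.copy.fits", attempts=2, backoff_s=1)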

  8. Distributed Virtual System (DIVIRS) Project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
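
    As a sketch of the virtual-configuration idea common to these reports, the snippet below lets programs address virtual node ids while a manager owns the mapping to physical nodes, so placement can change without program changes. All names are invented for illustration.

        # Toy virtual-to-physical processor mapping: applications address
        # virtual ids; the manager hides physical placement.
        class VirtualConfiguration:
            def __init__(self, n_virtual):
                self.mapping = {}                 # virtual id -> physical node
                self.n_virtual = n_virtual

            def allocate(self, physical_nodes):
                for v in range(self.n_virtual):   # round-robin placement policy
                    self.mapping[v] = physical_nodes[v % len(physical_nodes)]

            def send(self, v_dst, message):
                node = self.mapping[v_dst]        # hide location from the app
                print(f"deliver {message!r} to virtual {v_dst} on {node}")

        vc = VirtualConfiguration(n_virtual=8)
        vc.allocate(["node-a", "node-b", "node-c"])
        vc.send(5, "hello")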

  9. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1994-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  10. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, Clifford B.

    1995-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  11. Distributed Virtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  12. POSIX and Object Distributed Storage Systems Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing infrastructure has become an intensive, dynamic system used for first-hand data collection and analysis, producing a dense collection of data output. As we have transitioned to our current state, inefficient and limited storage systems have become an impediment to fast feedback to online shift crews. In this environment, a centrally accessible, scalable, and redundant distributed storage system became a necessity. OpenStack Swift Object Storage and Ceph Object Storage are two promising technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests emulating real-world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX-compliant file system mounted similarly to an NFS share, was of particular interest as it aligned with our requirements and was retained as our solution. I/O performance tests run against the Ceph POSIX file system presented surprising results, indicating real potential for fast I/O and reliability. STAR's online compute farm has historically been used for job submission and first-hand data analysis. Reusing it both to maintain a storage cluster and to handle job submission will make efficient use of the current infrastructure.
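
    The following is a minimal sketch of the kind of parallel write test described, timing aggregate throughput against a POSIX mount point; the CephFS mount path, file count, and block size are illustrative assumptions.

    ```python
    # A minimal parallel write benchmark against a POSIX mount point.
    import os
    import time
    from concurrent.futures import ThreadPoolExecutor

    MOUNT = "/mnt/cephfs"                  # hypothetical CephFS mount
    BLOCK = b"x" * (4 * 1024 * 1024)       # 4 MiB writes
    N_FILES, N_BLOCKS = 8, 64

    def writer(i):
        path = os.path.join(MOUNT, f"iotest_{i}.dat")
        with open(path, "wb") as f:
            for _ in range(N_BLOCKS):
                f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())           # force data out to the storage cluster

    start = time.time()
    with ThreadPoolExecutor(max_workers=N_FILES) as pool:
        list(pool.map(writer, range(N_FILES)))
    elapsed = time.time() - start
    total_mib = N_FILES * N_BLOCKS * len(BLOCK) / 2**20
    print(f"{total_mib:.0f} MiB in {elapsed:.1f} s = {total_mib/elapsed:.1f} MiB/s")
    ```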

  13. Public census data on CD-ROM at Lawrence Berkeley Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrill, D.W.

    The Comprehensive Epidemiologic Data Resource (CEDR) and Populations at Risk to Environmental Pollution (PAREP) projects, of the Information and Computing Sciences Division (ICSD) at Lawrence Berkeley Laboratory (LBL), are using public socio-economic and geographic data files which are available to CEDR and PAREP collaborators via LBL's computing network. At this time 70 CD-ROM discs (approximately 36 gigabytes) are on line via the Unix file server cedrcd.lbl.gov. Most of the files are from the US Bureau of the Census, and most pertain to the 1990 Census of Population and Housing. All the CD-ROM discs contain documentation in the form of ASCII text files. Printed documentation for most files is available for inspection at University of California Data and Technical Assistance (UC DATA), or the UC Documents Library. Many of the CD-ROM discs distributed by the Census Bureau contain software for PC-compatible computers for easily accessing the data. Shared access to the data is maintained through a collaboration among the CEDR and PAREP projects at LBL, UC DATA, and the UC Documents Library. Via the Sun Network File System (NFS), these data can be exported to Internet computers for direct access by the user's application program(s).

  15. CAD-CAM database management at Bendix Kansas City

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witte, D.R.

    1985-05-01

    The Bendix Kansas City Division of Allied Corporation began integrating mechanical CAD-CAM capabilities into its operations in June 1980. The primary capabilities include a wireframe modeling application, a solid modeling application, and the Bendix Integrated Computer Aided Manufacturing (BICAM) System application, a set of software programs and procedures which provides user-friendly access to graphic applications and data, and user-friendly sharing of data between applications and users. BICAM also provides for enforcement of corporate/enterprise policies. Three access categories, private, local, and global, are realized through the implementation of data-management metaphors: the desk, reading rack, file cabinet, and library are for the storage, retrieval, and sharing of drawings and models. Access is provided through menu selections; searching for designs is done by a paging method or a search-by-attribute-value method. The sharing of designs between all users of Part Data is key. The BICAM System supports 375 unique users per quarter and manages over 7500 drawings and models. The BICAM System demonstrates the need for generalized models, a high-level system framework, prototyping, information-modeling methods, and an understanding of the entire enterprise. Future BICAM System implementations are planned to take advantage of this knowledge.

  16. NASA Scientific and Technical Information System (STI) and New Directory of Numerical Data Bases

    NASA Technical Reports Server (NTRS)

    Wilson, J.

    1984-01-01

    The heart of NASA's STI system is a collection of scientific and technical information gathered from worldwide sources. Currently containing over 2.2 million items, the data base is growing at the rate of 140,000 items per year. In addition to announcement journals, information is disseminated through the NASA RECON on-line bibliographic search system. One part of RECON is NALNET, which lists journals and books held by the NASA Centers. Another service now accessible by RECON is a directory of numerical data bases (DND) which can be shared by NASA staff and contractors. The DND describes each data base and gives the name and phone number of a contact person. A NASA-wide integrated library system is being developed for the Center libraries which will include an on-line catalog and subsystems for acquisition, circulation control, information retrieval, management information, and an authority file. These subsystems can interact with on-line bibliographic, patron, and vendor files.

  17. Xgrid admin guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, Charlie E M

    2010-01-01

    Xgrid, with a capital X, is the name of Apple's grid computing system; with a lower-case x, xgrid is the name of the command-line utility that clients can use, among other ways, to submit jobs to a controller. An Xgrid divides into three logical components: Agent, Controller, and Client. Client computers submit jobs (a set of tasks) they want run to a Controller computer. The Controller queues the Client jobs and distributes tasks to Agent computers. Agent computers run the tasks and report their output and status back to the Controller, where it is stored until deleted by the Client. The Clients can asynchronously query the Controller about the status of a job and the results. Any OS X computer can be any of these, and a single Mac can be more than one: it is possible to be Agent, Controller, and Client at the same time. There is one Controller per grid; Clients can submit jobs to Controllers of different grids, and Agents can work for more than one grid. Xgrid's setup has a pleasantly small palette of choices. The first two decisions to make are the kind of authentication and authorization to use and whether a shared file system is needed. A shared file system that all the Agents can access can be very beneficial for many computing problems, but it is not appropriate for every network.
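
    As an illustration only, the following Python fragment drives the xgrid client to submit a job to a controller. The command-line flags follow our recollection of Apple's xgrid(1) documentation and should be treated as assumptions; the controller host and password are placeholders.

    ```python
    # A sketch of submitting a job through the xgrid command-line client.
    # Flags are recalled from Apple's documentation, not verified here.
    import subprocess

    controller = "controller.example.org"   # placeholder controller host
    submit = subprocess.run(
        ["xgrid", "-h", controller, "-p", "secret", "-job", "submit",
         "/usr/bin/cal", "2010"],
        capture_output=True, text=True, check=True)
    print(submit.stdout)   # xgrid reports the new job identifier here

    # Later, the Client asynchronously queries status and fetches results, e.g.:
    #   xgrid -h <controller> -p <password> -job results -id <jobid>
    ```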

  18. 77 FR 62285 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-12

    ... other markets and thereby allows market participants to react to the execution (an effect known as... BX available shares and routing to other venues' shares will avoid the deleterious effect of market... Orders To Simultaneously Execute Against Exchange Available Shares and Route to Other Markets for...

  19. 78 FR 76337 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-17

    ... under the Market Depth Tier 1 from $0.0032 per share to $0.00325 per share and amend the criteria... Depth Tier 1 from $0.0032 per share to $0.00325 per share and amend the criteria necessary to achieve... Depth Tier 1 The Exchange proposes to amend its Fee Schedule to increase the rebate to add liquidity...

  20. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    PubMed

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
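
    The core idea, selecting data through queries rather than ad hoc file parsing, can be sketched with the standard library's SQLite bindings; the table layout and file paths below are illustrative assumptions, not the authors' schema.

    ```python
    # A minimal sketch of query-driven data selection for analysis.
    import sqlite3

    con = sqlite3.connect("fmri.db")
    con.execute("""CREATE TABLE IF NOT EXISTS timeseries (
        subject TEXT, run INTEGER, roi TEXT, path TEXT)""")
    con.executemany(
        "INSERT INTO timeseries VALUES (?, ?, ?, ?)",
        [("s01", 1, "STG", "/data/s01/run1_stg.nii"),
         ("s01", 2, "STG", "/data/s01/run2_stg.nii")])

    # A complex selection (all STG runs for a subject) becomes one query from
    # which a cluster or Grid job list can be built directly.
    rows = con.execute(
        "SELECT path FROM timeseries WHERE subject=? AND roi=?", ("s01", "STG"))
    for (path,) in rows:
        print(path)
    ```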

  1. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  2. 77 FR 76023 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ..., UNS Electric, Inc., UniSource Energy Development Company. Description: Triennial Market Power Update... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice... County Wind Energy, LLC. Description: Gray County Wind and Ensign Wind Shared Facilities Agreement to be...

  3. Protecting Data Privacy in Structured P2P Networks

    NASA Astrophysics Data System (ADS)

    Jawad, Mohamed; Serrano-Alvarado, Patricia; Valduriez, Patrick

    P2P systems are increasingly used for efficient, scalable data sharing. Popular applications focus on massive file sharing. However, advanced applications such as online communities (e.g., medical or research communities) need to share private or sensitive data. Currently, in P2P systems, untrusted peers can easily violate data privacy by using data for malicious purposes (e.g., fraud, profiling). To prevent such behavior, the well-accepted Hippocratic database principle states that data owners should specify the purpose for which their data will be collected. In this paper, we apply these principles, as well as reputation techniques, to support purpose and trust in structured P2P systems. Hippocratic databases enforce purpose-based privacy while reputation techniques guarantee trust. We propose a P2P data privacy model which combines the Hippocratic principles and the trust notions. We also present the algorithms of PriServ, a DHT-based P2P privacy service which supports this model and prevents data privacy violation. We show, in a performance evaluation, that PriServ introduces a small overhead.
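
    The purpose-and-trust check at the heart of this model can be sketched as follows; the data structures and thresholds are illustrative assumptions, not PriServ's actual algorithms.

    ```python
    # A minimal sketch of purpose-based access control with a reputation check.
    from dataclasses import dataclass

    @dataclass
    class SharedData:
        owner: str
        payload: bytes
        allowed_purposes: frozenset   # e.g. {"research", "diagnosis"}
        min_trust: float              # reputation threshold set by the owner

    def request(data, requester_trust, purpose):
        if purpose not in data.allowed_purposes:
            raise PermissionError("purpose not granted by owner")
        if requester_trust < data.min_trust:
            raise PermissionError("requester reputation too low")
        return data.payload

    item = SharedData("alice", b"...", frozenset({"research"}), 0.7)
    print(request(item, requester_trust=0.9, purpose="research"))
    ```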

  4. Advances in a distributed approach for ocean model data interoperability

    USGS Publications Warehouse

    Signell, Richard P.; Snowden, Derrick P.

    2014-01-01

    An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
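
    A minimal sketch of this standards-based access pattern uses the xarray library to open a CF-convention dataset over the network and select a variable by its CF metadata; the OPeNDAP URL and the standard_name searched for are placeholder assumptions.

    ```python
    # A minimal sketch of opening a remote CF dataset and finding a field by
    # its CF metadata rather than by file layout.
    import xarray as xr

    url = "http://example.org/thredds/dodsC/ocean_model_aggregation"  # hypothetical
    ds = xr.open_dataset(url)
    # CF attributes let tools locate fields by meaning, not file structure.
    temp = next(v for v in ds.data_vars.values()
                if v.attrs.get("standard_name") == "sea_water_temperature")
    print(temp.dims, temp.attrs.get("units"))
    ```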

  5. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  6. Networking CD-ROMs: A Tutorial Introduction.

    ERIC Educational Resources Information Center

    Perone, Karen

    1996-01-01

    Provides an introduction to CD-ROM networking. Highlights include LAN (local area network) architectures for CD-ROM networks, peer-to-peer networks, shared file and dedicated file servers, commercial software/vendor solutions, problems, multiple hardware platforms, and multimedia. Six figures illustrate network architectures and a sidebar contains…

  7. Interoperability Outlook in the Big Data Future

    NASA Astrophysics Data System (ADS)

    Kuo, K. S.; Ramachandran, R.

    2015-12-01

    The establishment of distributed active archive centers (DAACs) as data warehouses and the standardization of file formats by NASA's Earth Observing System Data Information System (EOSDIS) doubtlessly propelled the interoperability of NASA Earth science data to unprecedented heights in the 1990s. However, two decades later the result still leaves much to be desired. We believe the inadequate interoperability we experience results from the current practice in which data are first packaged into files before distribution, and only the metadata of these files are cataloged into databases and made searchable. Data therefore cannot be efficiently filtered. Any extensive study thus requires downloading large volumes of data files to a local system for processing and analysis. The need to download data not only creates duplication and inefficiency but also further impedes interoperability, because the analysis has to be performed locally by individual researchers in individual institutions. Each institution or researcher often has its or their own preference in the choice of data management practice as well as programming languages. Analysis results (derived data) so produced are thus subject to the differences of these practices, which later form formidable barriers to interoperability. A number of Big Data technologies are currently being examined and tested to address Big Earth Data issues. These technologies share one common characteristic: exploiting compute and storage affinity to more efficiently analyze large volumes and great varieties of data. Distributed active "archive" centers are likely to evolve into distributed active "analysis" centers, which not only archive data but also provide analysis service right where the data reside; "analysis" will become the more visible function of these centers. It is thus reasonable to expect interoperability to improve because analysis, in addition to data, becomes more centralized. Within a "distributed active analysis center" interoperability is almost guaranteed because data, analysis, and results can all be readily shared and reused. Effectively, with the establishment of "distributed active analysis centers", interoperation turns from a many-to-many problem into a less complicated few-to-few problem and becomes easier to solve.

  8. The Open Microscopy Environment: open image informatics for the biological sciences

    NASA Astrophysics Data System (ADS)

    Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.

    2016-07-01

    Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
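
    A minimal sketch of remote access through OMERO's Python gateway follows; the API names reflect our reading of the OMERO documentation and should be treated as assumptions, and the credentials and host are placeholders.

    ```python
    # A sketch of listing images held in an OMERO server via its Python
    # gateway. API names are assumptions based on OMERO's documentation.
    from omero.gateway import BlitzGateway

    conn = BlitzGateway("user", "password", host="omero.example.org", port=4064)
    conn.connect()
    try:
        # The same metadata store serves all original file formats,
        # read on the server side through Bio-Formats.
        for image in conn.getObjects("Image"):
            print(image.getId(), image.getName())
    finally:
        conn.close()
    ```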

  9. 75 FR 47330 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-05

    ...-Cumulative Perpetual Preferred Stock, $100,000 liquidation preference per share (the ``Depositary Shares... included on the subject line if e-mail is used. To help the Commission process and review your comments...

  10. Risk, Benefit, and Moderators of the Affect Heuristic in a Widespread Unlawful Activity: Evidence from a Survey of Unlawful File-Sharing Behavior.

    PubMed

    Watson, Steven J; Zizzo, Daniel J; Fleming, Piers

    2017-06-01

    Increasing the perception of legal risk via publicized litigation and lobbying for copyright law enforcement has had limited success in reducing unlawful content sharing by the public. We consider the extent to which engaging in file sharing online is motivated by the perceived benefits of this activity as opposed to perceived legal risks. Moreover, we explore moderators of the relationship between perceived risk and perceived benefits; namely, trust in industry and legal regulators, and perceived online anonymity. We examine these questions via a large two-part survey of consumers of music (n = 658) and eBooks (n = 737). We find that perceptions of benefit, but not of legal risk, predict stated file-sharing behavior. An affect heuristic is employed: as perceived benefit increases, perceived risk falls. This relationship is increased under high regulator and industry trust (which actually increases perceived risk in this study) and low anonymity (which also increases perceived risk). We propose that, given the limited impact of perceived legal risk upon unlawful downloading, it would be better for the media industries to target enhancing the perceived benefit and availability of lawful alternatives. © 2016 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.

  11. 75 FR 54928 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-09

    ... (``LOC'') orders executed in the NYSE Closing Auction. For stocks with a per share stock price of $1.00... stocks with a per share stock price less than $1.00 per share, the fee will change from (A) the lesser of... LOC orders executed in the NYSE Closing Auction. For stocks with a per share stock price of $1.00 or...

  12. 78 FR 67420 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... decrease the rebate to add liquidity under the Market Depth Tier 1 from $0.0033 per share to $0.0032 per... Market Depth Tier 1 from $0.0033 per share to $0.0032 per share. Footnote 1 of the Fee Schedule currently provides that Members may qualify for the Market Depth Tier 1 and receive a rebate of $0.0033 per share for...

  13. 76 FR 75932 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-05

    ... and Trading of Shares of the WisdomTree Emerging Markets Inflation Protection Bond Fund Under NYSE... Change The Exchange proposes to list and trade the shares of the following fund of the WisdomTree Trust (``Trust'') under NYSE Arca Equities Rule 8.600 (``Managed Fund Shares''): WisdomTree Emerging Markets...

  14. 76 FR 54275 - Self-Regulatory Organizations; Chicago Mercantile Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    ... Change To Reflect Differences in Proprietary Trading Exchange Fees Based on Ownership of CME Group Shares.... equity member firm. Clearing members with shares are those clearing members that maintain CME Group Class... members that maintain CME Group Class A shares in accordance with CME Rule 106.J. Equity Member Firm...

  15. Shared Modular Build of Warships: How a Shared Build Can Support Future Shipbuilding

    DTIC Science & Technology

    2011-01-01

    design (CAD) files used for the design were not shared through this environment (Tang and Molas-Gillart, 2009). As the case studies show, shipyards...undated web page. As of December 20, 2010: http://www.fas.org/programs/ssp/man/uswpns/navy/submarines/ssn774_virginia.html Tang, Puay, and Jordi Molas

  16. 76 FR 67234 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-31

    ... Proposed Rule Change To Modify the Description of the Nasdaq Daily Share Volume Service October 25, 2011... Basis for, the Proposed Rule Change 1. Purpose This proposal pertains to the Nasdaq Daily Share Volume... the share volume information provided. Thus, the rule change will make it clear that the Service is...

  17. 77 FR 75468 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Amendment No. 1 and Order...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-20

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-68440; File No. SR-NYSEArca-2012-28] Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Amendment No. 1 and Order Granting Accelerated Approval of a Proposed Rule Change as Modified by Amendment No. 1 To List and Trade Shares of the JPM XF Physical Copper Trust Pursuant to NYSE...

  18. Standardized data sharing in a paediatric oncology research network--a proof-of-concept study.

    PubMed

    Hochedlinger, Nina; Nitzlnader, Michael; Falgenhauer, Markus; Welte, Stefan; Hayn, Dieter; Koumakis, Lefteris; Potamias, George; Tsiknakis, Manolis; Saraceno, Davide; Rinaldi, Eugenia; Ladenstein, Ruth; Schreier, Günter

    2015-01-01

    Data that have been collected in the course of clinical trials are potentially valuable for additional scientific research questions in so-called secondary use scenarios. This is of particular importance in rare disease areas like paediatric oncology. If data from several research projects need to be connected, so-called Core Datasets can be used to define which information needs to be extracted from every involved source system. In this work, the utility of the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM) as a format for Core Datasets was evaluated, and a web tool was developed which receives source ODM XML files and, via Extensible Stylesheet Language Transformation (XSLT), generates standardized Core Dataset ODM XML files. Using this tool, data from different source systems were extracted and pooled for joint analysis in a proof-of-concept study, facilitating both basic syntactic and semantic interoperability.
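
    The transformation step can be sketched in a few lines with the lxml library; the stylesheet and file names are placeholders, not the tool described here.

    ```python
    # A minimal sketch of applying an XSLT stylesheet to a source ODM file.
    from lxml import etree

    source = etree.parse("source_odm.xml")                 # hypothetical source ODM
    transform = etree.XSLT(etree.parse("core_dataset.xslt"))
    core = transform(source)                               # standardized Core Dataset ODM
    with open("core_dataset_odm.xml", "wb") as f:
        f.write(etree.tostring(core, xml_declaration=True, encoding="UTF-8"))
    ```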

  19. Folksonomical P2P File Sharing Networks Using Vectorized KANSEI Information as Search Tags

    NASA Astrophysics Data System (ADS)

    Ohnishi, Kei; Yoshida, Kaori; Oie, Yuji

    We present the concept of folksonomical peer-to-peer (P2P) file sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web from the point of view that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information as search tags indicates what participants feel about their files and is assigned by the participant to each of their files. A search query takes the same form as the search tags and indicates what participants want to feel about files that they will eventually obtain. A method that enables file search using vectorized Kansei information is the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files having search tags that are similar to the query. The similarity between the search query and the search tags is measured in terms of their dot product. The simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection differ among the peers. The simulation results show that the Kansei query-forwarding method and a random-walk-based query-forwarding method, used for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based one in terms of search speed.
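
    A minimal sketch of the forwarding rule follows, assuming each peer knows the tag vectors of the files its neighbours hold: score each neighbour by dot products against the query and forward with probability proportional to the score, falling back to a random walk when no neighbour matches. The data structures are illustrative, not the paper's simulator.

    ```python
    # A sketch of dot-product-weighted probabilistic query forwarding.
    import random

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def choose_next_peer(query, neighbors):
        """neighbors: {peer_id: [tag_vector, ...]} of locally known file tags."""
        scores = {p: sum(max(dot(query, t), 0.0) for t in tags)
                  for p, tags in neighbors.items()}
        total = sum(scores.values())
        if total == 0:                        # fall back to a random walk
            return random.choice(list(neighbors))
        r = random.uniform(0, total)          # roulette-wheel selection
        for peer, s in scores.items():
            r -= s
            if r <= 0:
                return peer

    neighbors = {"peerA": [[0.9, 0.1], [0.7, 0.3]], "peerB": [[0.1, 0.9]]}
    print(choose_next_peer([1.0, 0.0], neighbors))
    ```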

  20. The HydroShare Collaborative Repository for the Hydrology Community

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.

    2017-12-01

    HydroShare is an online collaboration system for sharing hydrologic data, analytical tools, and models. It supports the sharing of, and collaboration around, "resources", which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: share your data and models with colleagues; manage who has access to the content that you share; share, access, visualize, and manipulate a broad set of hydrologic data types and models; use the web services application programming interface (API) to program automated and client access; publish data and models and obtain a citable digital object identifier (DOI); aggregate your resources into collections; discover and access data and models published by others; and use web apps to visualize, analyze, and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare, highlighting our approach to making this system easy to use and serving the needs of the hydrology community represented by the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc. (CUAHSI). Metadata for uploaded files are harvested automatically or captured using easy-to-use web user interfaces. Users are encouraged to add or create resources in HydroShare early in the data life cycle. To encourage this, we allow users to share and collaborate on HydroShare resources privately among individual users or groups, entering metadata while doing the work. HydroShare also provides enhanced functionality for users through web apps that provide tools and computational capability for actions on resources. HydroShare's architecture broadly comprises: (1) resource storage, (2) a resource exploration website, and (3) web apps for actions on resources. System components are loosely coupled and interact through APIs, which enhances robustness, as components can be upgraded and advanced relatively independently. The full power of this paradigm is the extensibility it supports. Web apps are hosted on separate servers, which may be third-party servers. They are registered in HydroShare using a web app resource that configures the connectivity for them to be discovered and launched directly from the resource types they are associated with.
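
    A minimal sketch of programmatic access through the web services API mentioned above; the endpoint path and query parameter follow our reading of HydroShare's REST documentation and should be treated as assumptions. Authentication is omitted, as would suit public resources.

    ```python
    # A sketch of discovering resources through HydroShare's REST API.
    # Endpoint path and response fields are assumptions to verify against
    # the current API documentation.
    import requests

    resp = requests.get("https://www.hydroshare.org/hsapi/resource/",
                        params={"subject": "hydrology"})
    resp.raise_for_status()
    for res in resp.json().get("results", []):
        print(res.get("resource_id"), res.get("resource_title"))
    ```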

  1. 77 FR 46144 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-02

    ... share credit per transaction when adding liquidity, if the SLP meets quoting requirements pursuant to... an equity per share credit per transaction when adding liquidity, if the SLP does not meet the...

  2. QuakeSim Project Networking

    NASA Astrophysics Data System (ADS)

    Kong, D.; Donnellan, A.; Pierce, M. E.

    2012-12-01

    QuakeSim is an online computational framework focused on using remotely sensed geodetic imaging data to model and understand earthquakes. With the rise in online social networking over the last decade, many tools and concepts have been developed that are useful to research groups. In particular, QuakeSim is interested in the ability for researchers to post, share, and annotate files generated by modeling tools in order to facilitate collaboration. To accomplish this, features were added to the preexisting QuakeSim site that include single sign-on, automated saving of output from modeling tools, and a personal user space to manage sharing permissions on these saved files. These features use OpenID and Lightweight Directory Access Protocol (LDAP) technologies to manage files across several different servers, including a web server running Drupal and other servers hosting the computational tools themselves.

  3. HAZPAC; an interactive map of Pacific Rim natural hazards, population, and infrastructure

    USGS Publications Warehouse

    Bemis, B.L.; Goss, H.V.; Yurkovich, E.S.; Perron, T.J.; Howell, D.G.

    2002-01-01

    This is an online version of a CD-ROM publication. The text files that describe using this publication make reference to software provided on the disc. For this online version the software can be downloaded for free from Adobe Systems and Environmental Systems Research Institute, Inc. (ESRI). Welcome to HAZPAC! HAZPAC is an interactive map about natural hazard risk in the Pacific Rim region. It is intended to communicate to a broad audience the ideas of 'Crowding the Rim,' which is an international, public-private partnership that fosters collaborative solutions for regional risks. HAZPAC, which stands for 'HAZards of the PACific,' uses Geographic Information System (GIS) technology to help people visualize the socioeconomic connections and shared hazard vulnerabilities among Pacific Rim countries, as well as to explore the general nature of risk. Please refer to the 'INTRODUCTION TO HAZPAC' section of the readme file below to determine which HAZPAC project will be right for you. Once you have decided which HAZPAC project is suitable for you, please refer to the 'GETTING STARTED' sections in the readme file for some basic information that will help you begin using HAZPAC. Also, we highly recommend that you follow the Tutorial exercises in the project-specific HAZPAC User Guides. The User Guides are PDF (Portable Document Format) files that must be read with Adobe Acrobat Reader (a free copy of Acrobat Reader is available using the link near the bottom of this page).

  4. Collaborative Workspaces within Distributed Virtual Environments.

    DTIC Science & Technology

    1996-12-01

    such as a text document, a 3D model, or a captured image using a collaborative workspace called the InPerson Whiteboard. The Whiteboard contains a...commands for editing objects drawn on the screen. Finally, when the call is completed, the Whiteboard can be saved to a file for future use. IRIS Annotator... use, and a shared whiteboard that includes a number of multimedia annotation tools. Both systems are also mindful of bandwidth limitations and can

  5. Interoperable Data Sharing for Diverse Scientific Disciplines

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Daniel; Martinez, Santa; Law, Emily; Hardman, Sean

    2016-04-01

    For diverse scientific disciplines to interoperate, they must be able to exchange information based on a shared understanding. To capture this shared understanding, we have developed a knowledge representation framework using ontologies and ISO-level archive and metadata registry reference models. This framework provides multi-level governance, evolves independently of implementation technologies, and promotes agile development, namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. The knowledge representation framework is populated through knowledge acquisition from discipline experts. It is also extended to meet specific discipline requirements. The result is a formalized and rigorous knowledge base that addresses data representation, integrity, provenance, context, quantity, and their relationships within the community. The contents of the knowledge base are translated and written to files in appropriate formats to configure system software and services, provide user documentation, validate ingested data, and support data analytics. This presentation will provide an overview of the framework, present the Planetary Data System's PDS4 as a use case that has been adopted by the international planetary science community, describe how the framework is being applied to other disciplines, and share some important lessons learned.

  6. 30 CFR 1212.51 - Records and files maintenance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... INTERIOR Natural Resources Revenue RECORDS AND FILES MAINTENANCE Oil, Gas, and OCS Sulphur-General § 1212..., royalties, net profit shares, and other payments related to offshore and onshore Federal and Indian oil and gas leases are in compliance with lease terms, regulations, and orders. Records covered by this...

  7. 77 FR 58985 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-25

    .... Applicants: Pacific Wind Lessee, LLC, Catalina Solar, LLC. Description: Shared Transmission Facilities Agreement of Pacific Wind Lessee LLC & Catalina Solar LLC to be effective 11/15/2012. Filed Date: 9/14/12...: Pacific Gas and Electric Company. Description: E&P Agreement for SKIC Solar, LLC to be effective 9/ 17...

  8. 78 FR 21633 - International Mail Product

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-11

    ... of United States Postal Service Filing of a Functionally Equivalent International Business Reply...); Attachment 3--a copy of Governors' Decision No. 08-24; and Attachment 4--an application for non-public... equivalent to the baseline agreement filed in Docket No. CP2011-59 because it shares similar cost and market...

  9. Multiple Robots Localization Via Data Sharing

    DTIC Science & Technology

    2015-09-01

    multiple humans, each with specialized skills complementing each other, work to create the solution. Hence, there is a motivation to think in terms of...pygame.Color(255,255,255) COLORBLACK = pygame.Color(0,0,0) F. AUTOMATE.PY The automate.py file is a helper file to assist in running multiple simulation

  10. Exploring the use of I/O nodes for computation in a MIMD multiprocessor

    NASA Technical Reports Server (NTRS)

    Kotz, David; Cai, Ting

    1995-01-01

    As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost-effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between 'compute' and 'I/O' nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O. Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high-performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.

  11. 78 FR 67427 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... Change Proposing to Amend the Rule Governing the Listing and Trading of Shares of the WisdomTree Global... change to the means of achieving the investment objective applicable to the WisdomTree Global Real Return... Rule 8.600 \\4\\ (``Managed Fund Shares'').\\5\\ The Shares are offered by the WisdomTree Trust (``Trust...

  12. 75 FR 76056 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ... Change Relating to the Calculation of Net Asset Value for the iShares[supreg] Gold Trust November 30... iShares[supreg] Gold Trust (``Trust''), which is currently listed on the Exchange, will value the gold owned by the iShares Gold Trust on the basis of the London PM Fix instead of the COMEX settlement...

  13. 77 FR 65920 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-31

    ... Trading of Shares of the Pring Turner Business Cycle ETF Under NYSE Arca Equities Rule 8.600 October 25... Turner Business Cycle ETF. The text of the proposed rule change is available on the Exchange's Web site... and trade shares (``Shares'') of the Pring Turner Business Cycle ETF (``Fund'') under NYSE Arca...

  14. 75 FR 168 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-04

    ... to the Consolidated Tape for Closing Transactions That Exceed 99,999,999 Shares December 23, 2009... for closing transactions that exceed 99,999,999 shares. The text of the proposed rule change is... closing transactions that exceed 99,999,999 shares. Currently, pursuant to NYSE Rules 116.40(c) and 123C(3...

  15. 77 FR 76135 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ... 500 Index option series in the pilot: (1) A time series analysis of open interest; and (2) an analysis... issue's total market share value, which is the share price times the number of shares outstanding. These... other series. Strike price intervals would be set no less than 5 points apart. Consistent with existing...

  16. 78 FR 27265 - Self-Regulatory Organizations; EDGA Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-09

    ... rate that Direct Edge ECN LLC (d/b/a DE Route) (``DE Route''), the Exchange's affiliated routing broker... daily average volume of at least 50,000 shares from $0.0002 per share to $0.0005 per share). The... the financial markets. The Exchange believes that its proposal to pass through a rebate of $0.0005 per...

  17. Data and Information Exchange System for the "Reindeer Mapper" Project

    NASA Technical Reports Server (NTRS)

    Maynard, Nancy; Yurchak, Boris

    2005-01-01

    During this past year, the Reindeer Mapper Intranet system has been set up on the NASA system, 8 team members have been established, a Reindeer Mapper reference list containing 696 items has been entered, 6 PowerPoint presentations have been put on line for review among team members, 304 satellite images have been catalogued (including 16 Landsat images, 288 NDVI 10-day composited images and an anomaly series from May 1998 to December 2002, and 56 SAR files in CEOS SAR format), schedules and meeting dates are being shared, students at the Nordic Sami Institute are experimenting with the system for reindeer herder indigenous knowledge sharing, and an "address book" is being developed. Several documents and presentations have been translated and made available in Russian for our Russian colleagues. This has enabled our Russian partners to utilize documents and presentations for use in their research (e.g., SAR imagery comparisons with Russian GIS of specific study areas) and discussion with local colleagues.

  18. Can administrative claim file review be used to gather physical therapy, occupational therapy, and psychology payment data and functional independence measure scores? Implications for rehabilitation providers in the private health sector.

    PubMed

    Riis, Viivi; Jaglal, Susan; Boschen, Kathryn; Walker, Jan; Verrier, Molly

    2011-01-01

    Rehabilitation costs for spinal-cord injury (SCI) are increasingly borne by Canada's private health system. Because of poor outcomes, payers are questioning the value of their expenditures, but there is a paucity of data informing analysis of rehabilitation costs and outcomes. This study evaluated the feasibility of using administrative claim file review to extract rehabilitation payment data and functional status for a sample of persons with work-related SCI. Researchers reviewed 28 administrative e-claim files for persons who sustained a work-related SCI between 1996 and 2000. Payment data were extracted for physical therapy (PT), occupational therapy (OT), and psychology services. Functional Independence Measure (FIM) scores were targeted as a surrogate measure for functional outcome. Feasibility was tested using an existing approach for evaluating health services data. The process of administrative e-claim file review was not practical for extraction of the targeted data. While administrative claim files contain some rehabilitation payment and outcome data, in their present form the data are not suitable to inform rehabilitation services research. A new strategy to standardize collection, recording, and sharing of data in the rehabilitation industry should be explored as a means of promoting best practices.

  19. XML-BSPM: an XML format for storing Body Surface Potential Map recordings.

    PubMed

    Bond, Raymond R; Finlay, Dewar D; Nugent, Chris D; Moore, George

    2010-05-14

    The Body Surface Potential Map (BSPM) is an electrocardiographic method for recording and displaying the electrical activity of the heart from a spatial perspective. The BSPM has been deemed more accurate for assessing certain cardiac pathologies when compared to the 12-lead ECG. Nevertheless, the 12-lead ECG remains the most popular ECG acquisition method for non-invasively assessing the electrical activity of the heart. Although data from the 12-lead ECG can be stored and shared using open formats such as SCP-ECG, no open formats currently exist for storing and sharing the BSPM. As a result, an innovative format for storing BSPM datasets has been developed within this study. The XML vocabulary was chosen for implementation, as opposed to binary, for the purpose of human readability. There are currently no standards to dictate the number of electrodes and electrode positions for recording a BSPM. In fact, there are at least 11 different BSPM electrode configurations in use today. Therefore, in order to support these BSPM variants, the XML-BSPM format was made versatile. Hence, the format supports the storage of custom torso diagrams using SVG graphics. This diagram can then be used in a 2D coordinate system for retaining electrode positions. This XML-BSPM format has been successfully used to store the Kornreich-117 BSPM dataset and the Lux-192 BSPM dataset. The resulting file sizes were in the region of 277 kilobytes for each BSPM recording and can be deemed suitable, for example, for use with any telemonitoring application. Moreover, there is potential for file sizes to be further reduced using basic compression algorithms, e.g., the deflate algorithm. Finally, these BSPM files have been parsed and visualised within a convenient time period using a web-based BSPM viewer. This format, if widely adopted, could promote BSPM interoperability, knowledge sharing and data mining. This work could also be used to provide conceptual solutions and inspire existing formats such as DICOM, SCP-ECG and aECG to support the storage of BSPMs. In summary, this research provides initial groundwork for creating a complete BSPM management system.
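
    The suggested size reduction is easy to sketch with the standard library's deflate implementation; the input file name is a placeholder.

    ```python
    # A minimal sketch of deflate compression of an XML-BSPM document.
    import zlib

    with open("recording.bspm.xml", "rb") as f:
        raw = f.read()
    packed = zlib.compress(raw, level=9)     # deflate at maximum compression
    print(f"{len(raw)} -> {len(packed)} bytes "
          f"({100 * len(packed) / len(raw):.0f}% of original)")
    ```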

  20. COMBINE archive and OMEX format: one file to share all information to reproduce a modeling project.

    PubMed

    Bergmann, Frank T; Adams, Richard; Moodie, Stuart; Cooper, Jonathan; Glont, Mihai; Golebiewski, Martin; Hucka, Michael; Laibe, Camille; Miller, Andrew K; Nickerson, David P; Olivier, Brett G; Rodriguez, Nicolas; Sauro, Herbert M; Scharm, Martin; Soiland-Reyes, Stian; Waltemath, Dagmar; Yvon, Florent; Le Novère, Nicolas

    2014-12-14

    With the ever-increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. We describe the Open Modeling EXchange format (OMEX). Together with the use of other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive, a single file that supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, listing the content of the archive, an optional metadata file adding information about the archive and its content, and the files describing the model. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. Several tools that support the COMBINE Archive are available, either as independent libraries or embedded in modeling software. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails. We anticipate that the COMBINE Archive will become a significant help for modellers, as the domain moves to larger, more complex experiments such as multi-scale models of organs, digital organisms, and bioengineering.
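
    Assembling such an archive can be sketched with the standard library; the manifest namespace and format identifiers below follow our reading of the OMEX specification and should be checked against it, and the SBML model file is a placeholder.

    ```python
    # A sketch of building a COMBINE Archive: a ZIP container holding the
    # model plus a manifest listing the contents. Identifiers are assumptions
    # to verify against the OMEX specification.
    import zipfile

    MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
    <omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">
      <content location="." format="http://identifiers.org/combine.specifications/omex"/>
      <content location="./model.xml"
               format="http://identifiers.org/combine.specifications/sbml"/>
    </omexManifest>
    """

    with zipfile.ZipFile("experiment.omex", "w") as omex:
        omex.write("model.xml")                    # the model being shared
        omex.writestr("manifest.xml", MANIFEST)    # required content listing
    ```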

  1. 12 CFR 745.202 - Appeal.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS SHARE INSURANCE AND APPENDIX Payment of Share Insurance and Appeals § 745.202 Appeal. (a) Time for filing. Within 60 days after issuance of an initial determination, or of the determination on a request for reconsideration by the...

  2. 75 FR 62615 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Order Granting Accelerated...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-12

    ... Silver Trust \\8\\ and ETFS Gold Trust.\\9\\ The Commission also has previously approved listing on the Exchange of shares of the Sprott Physical Gold Trust, streetTRACKS Gold Trust, and iShares COMEX Gold Trust...

  3. 77 FR 40647 - Toward Innovative Spectrum-Sharing Technologies: Wireless Spectrum Research and Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... this important step forward. Submitted by the National Science Foundation for the National Coordination... NATIONAL SCIENCE FOUNDATION Toward Innovative Spectrum-Sharing Technologies: Wireless Spectrum.... Suzanne H. Plimpton, Reports Clearance Officer, National Science Foundation. [FR Doc. 2012-16804 Filed 7-9...

  4. Confidentiality and spatially explicit data: Concerns and challenges

    PubMed Central

    VanWey, Leah K.; Rindfuss, Ronald R.; Gutmann, Myron P.; Entwisle, Barbara; Balk, Deborah L.

    2005-01-01

    Recent theoretical, methodological, and technological advances in the spatial sciences create an opportunity for social scientists to address questions about the reciprocal relationship between context (spatial organization, environment, etc.) and individual behavior. This emerging research community has yet to adequately address the new threats to the confidentiality of respondent data in spatially explicit social survey or census data files, however. This paper presents four sometimes conflicting principles for the conduct of ethical and high-quality science using such data: protection of confidentiality, the social–spatial linkage, data sharing, and data preservation. The conflict among these four principles is particularly evident in the display of spatially explicit data through maps combined with the sharing of tabular data files. This paper reviews these two research activities and shows how current practices favor one of the principles over the others and do not satisfactorily resolve the conflict among them. Maps are indispensable for the display of results but also reveal information on the location of respondents and sampling clusters that can then be used in combination with shared data files to identify respondents. The current practice of sharing modified or incomplete data sets or using data enclaves is not ideal for either the advancement of science or the protection of confidentiality. Further basic research and open debate are needed to advance both understanding of and solutions to this dilemma. PMID:16230608

  5. Oak Ridge Institutional Cluster Autotune Test Drive Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jibonananda, Sanyal; New, Joshua Ryan

    2014-02-01

    The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for ORNL staff to run computation-heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program, and the overall software environment was evaluated against challenges anticipated with resources such as the shared-memory Nautilus (JICS) and Titan (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional, desktop-focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable for running ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system, which the literature indicates to be efficient.

  6. 78 FR 51251 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-20

    ... described below apply to transactions in stocks with a per share stock price of $1.00 or more. The Exchange... from the Exchange) that are not otherwise specified in the Price List are charged $0.0024 per share per... otherwise specified on the Price List (i.e., the proposed $.0022 and $0.0020 per share rates) because d...

  7. 78 FR 21681 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-11

    ... calculate an estimated intraday NAV. Such traders understand what the intrinsic per-share price is, hedge... granting the Existing Relief rests on the premise that the prices of ETP shares closely track their per... number of shares of the ETP that are outstanding. The Annual Fee ranges from $5,000 to $55,000. \\6\\ The...

  8. 75 FR 54665 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing of Proposed Rule Change To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-08

    ...; (d) Two market makers; and (e) A minimum initial listing price of $0.25 per share for securities... $0.05 per share bid price. Further, with respect to companies not previously listed on a national..., to be eligible to list with a $0.25 per share price. The Exchange believes it appropriate to consider...

  9. 77 FR 73500 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-10

    ... Trading of Shares of the Horizons S&P 500 Covered Call ETF, Horizons S&P Financial Select Sector Covered Call ETF, and Horizons S&P Energy Select Sector Covered Call ETF Under NYSE Arca Equities Rule 5.2(j)(3... and trade shares (``Shares'') of the Horizons S&P 500 Covered Call ETF, Horizons S&P Financial Select...

  10. 78 FR 47041 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... methodology developed by NationsShares, a firm that develops proprietary derivatives-based indexes and options... Organizations; International Securities Exchange, LLC; Notice of Filing of Proposed Rule Change To List Options... the Exchange of options on the Nations VolDex index, a new index that measures changes in implied...

  11. The Music Industry as a Vehicle for Economic Analysis

    ERIC Educational Resources Information Center

    Klein, Christopher C.

    2015-01-01

    Issues arising in the music industry in response to the availability of digital music files provide an opportunity for exposing undergraduate students to economic analyses rarely covered in the undergraduate economics curriculum. Three of these analyses are covered here: the optimal copyright term, the effect of piracy or illegal file sharing, and…

  12. 75 FR 25010 - Self-Regulatory Organizations; Stock Clearing Corporation of Philadelphia; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-06

    ...-025, The NASDAQ Stock Market LLC (``NASDAQ Exchange'') sought and received Commission approval to... requirements apply to elections of directors and were not amended. Each share of common stock has one vote,\\8...-Regulatory Organizations; Stock Clearing Corporation of Philadelphia; Notice of Filing and Immediate...

  13. 77 FR 71020 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-28

    ... Products with that of derivative securities products, like ETFs, that are listed on the Exchange. In this regard, the Exchange believes that derivative securities products and Structured Products share certain... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-68280; File No. SR-NYSEArca-2012-127] Self...

  14. NeuronDepot: keeping your colleagues in sync by combining modern cloud storage services, the local file system, and simple web applications

    PubMed Central

    Rautenberg, Philipp L.; Kumaraswamy, Ajayrama; Tejero-Cantero, Alvaro; Doblander, Christoph; Norouzian, Mohammad R.; Kai, Kazuki; Jacobsen, Hans-Arno; Ai, Hiroyuki; Wachtler, Thomas; Ikeno, Hidetoshi

    2014-01-01

    Neuroscience today deals with a “data deluge” derived from the availability of high-throughput sensors of brain structure and brain activity, and increased computational resources for detailed simulations with complex output. We report here (1) a novel approach to data sharing between collaborating scientists that brings together file system tools and cloud technologies, (2) a service implementing this approach, called NeuronDepot, and (3) an example application of the service to a complex use case in the neurosciences. The main drivers for our approach are to facilitate collaborations with a transparent, automated data flow that shields scientists from having to learn new tools or data structuring paradigms. Using NeuronDepot is simple: one-time data assignment from the originator and cloud-based syncing make experimental and modeling data available across the collaboration with minimum overhead. Since data sharing is cloud based, our approach opens up the possibility of using the new software developments and hardware scalability associated with elastic cloud computing. We provide an implementation that relies on existing synchronization services and is usable from all devices via a reactive web interface. We motivate our solution by solving the practical problems of the GinJang project, a collaboration of three universities across eight time zones with a complex workflow encompassing data from electrophysiological recordings, imaging, morphological reconstructions, and simulations. PMID:24971059

  15. NeuronDepot: keeping your colleagues in sync by combining modern cloud storage services, the local file system, and simple web applications.

    PubMed

    Rautenberg, Philipp L; Kumaraswamy, Ajayrama; Tejero-Cantero, Alvaro; Doblander, Christoph; Norouzian, Mohammad R; Kai, Kazuki; Jacobsen, Hans-Arno; Ai, Hiroyuki; Wachtler, Thomas; Ikeno, Hidetoshi

    2014-01-01

    Neuroscience today deals with a "data deluge" derived from the availability of high-throughput sensors of brain structure and brain activity, and increased computational resources for detailed simulations with complex output. We report here (1) a novel approach to data sharing between collaborating scientists that brings together file system tools and cloud technologies, (2) a service implementing this approach, called NeuronDepot, and (3) an example application of the service to a complex use case in the neurosciences. The main drivers for our approach are to facilitate collaborations with a transparent, automated data flow that shields scientists from having to learn new tools or data structuring paradigms. Using NeuronDepot is simple: one-time data assignment from the originator and cloud-based syncing make experimental and modeling data available across the collaboration with minimum overhead. Since data sharing is cloud based, our approach opens up the possibility of using the new software developments and hardware scalability associated with elastic cloud computing. We provide an implementation that relies on existing synchronization services and is usable from all devices via a reactive web interface. We motivate our solution by solving the practical problems of the GinJang project, a collaboration of three universities across eight time zones with a complex workflow encompassing data from electrophysiological recordings, imaging, morphological reconstructions, and simulations.

  16. Sharing chemical structures with peer-reviewed publications. Are we there yet?

    EPA Science Inventory

    In the domain of chemistry one of the greatest benefits to publishing research is that data are shared. Unfortunately, the vast majority of chemical structure data remain locked up in document form, primarily as PDF files. Despite the explosive growth of online chemical databases...

  17. 76 FR 30417 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-25

    ... Change Amending Rule 7.31(h)(5) To Reduce the Minimum Order Entry Size of a Mid-Point Passive Liquidity... order entry size of a Mid-Point Passive Liquidity Order (``MPL Order'') from 100 shares to one share...

  18. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements; these optimizations are critical enablers of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
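
    The chunked, compressed storage layout credited above for fast selective access can be illustrated with a short h5py sketch. The dataset name, array shapes, and chunk sizes below are toy values chosen for illustration, not OpenMSI's actual layout.

        # Chunked, compressed HDF5 storage so that selective reads (one ion
        # image, one spectrum) touch only a few chunks on disk.
        import h5py
        import numpy as np

        nx, ny, nmz = 50, 50, 1000                       # toy image grid and m/z axis
        data = np.random.rand(nx, ny, nmz).astype("float32")

        with h5py.File("msi_demo.h5", "w") as f:
            f.create_dataset("msidata", data=data,
                             chunks=(10, 10, 100),       # aligned with access patterns
                             compression="gzip")         # trades CPU for storage

        with h5py.File("msi_demo.h5", "r") as f:
            image = f["msidata"][:, :, 500]              # one ion image
            spectrum = f["msidata"][25, 25, :]           # one spectrum at a pixel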

  19. Reducing financial barriers to emergency obstetric care: experience of cost-sharing mechanism in a district hospital in Burkina Faso.

    PubMed

    Richard, F; Ouédraogo, C; Compaoré, J; Dubourg, D; De Brouwere, V

    2007-08-01

    To describe the implementation of a cost-sharing system for emergency obstetric care in an urban health district of Ouagadougou, Burkina Faso, and analyse its results after 1 year of activity. Service availability and use, service quality, knowledge of the cost-sharing system in the community, and financial viability of the system were measured before and after the system was implemented. Different sources of data were used: a community survey, an anthropological study, routine data from hospital files and registers, and specific data collected on major obstetric interventions (MOI) in all the hospitals utilized by the district population. Direct costs of MOI were collected for each patient through an individual form and monitored during the year 2005. Rates of MOI for absolute maternal indications (AMI) were calculated for the period 2003-2005. The direct cost of an MOI was on average 136 US$, including referral cost. Through the cost-sharing system this amount was shared between families (46 US$), health centres (15 US$), the Ministry of Health (38 US$) and the local authority (37 US$). The scheme was started in January 2005. The rate of cost recovery was 91.3% and the balance at the end of 2005 was slightly positive (4.7% of the total contribution). The number of emergency referrals by health centres increased from 84 in 2004 to 683 in 2005. MOI per 100 expected births increased from 1.95% in 2003 to 3.56% in 2005, and MOI for AMI increased from 0.75% to 1.42%. The dramatic increase in MOI suggests that the cost-sharing scheme decreased financial and geographical barriers to emergency obstetric care. Other positive effects on quality of care were documented, but the sustainability of such a system remains uncertain in the dynamic context of Burkina Faso (decentralization).

  20. Automatic image database generation from CAD for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.

    1993-06-01

    The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of providing physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file system (NFS), the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, advantages, and limitations of using CAD data and conclusions from using such a scheme are also presented.
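
    The paper generates object views at desired orientations on the unit Gaussian sphere but does not spell out a sampling scheme here; as one common way to obtain roughly uniform view directions, a Fibonacci sphere is sketched below in Python. This is a stand-in technique for illustration, not necessarily the authors' method.

        # Sample n roughly uniform camera directions on the unit sphere,
        # one per candidate aspect (view) in the model image database.
        import math

        def fibonacci_sphere(n):
            """Return n roughly uniform unit vectors (view directions)."""
            golden = math.pi * (3.0 - math.sqrt(5.0))
            points = []
            for i in range(n):
                y = 1.0 - 2.0 * (i + 0.5) / n      # y in (-1, 1)
                r = math.sqrt(1.0 - y * y)
                theta = golden * i
                points.append((r * math.cos(theta), y, r * math.sin(theta)))
            return points

        views = fibonacci_sphere(64)   # 64 candidate aspects for rendering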

  1. 20180318 - Sharing chemical structures with peer-reviewed publications. Are we there yet? (ACS Spring)

    EPA Science Inventory

    In the domain of chemistry one of the greatest benefits to publishing research is that data are shared. Unfortunately, the vast majority of chemical structure data remain locked up in document form, primarily as PDF files. Despite the explosive growth of online chemical databases...

  2. 26 CFR 1.936-7 - Manner of making election under section 936 (h)(5); special election for export sales; revocation...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... election to use the cost sharing method or profit split method? A. 1: A possessions corporation makes an election to use the cost sharing or profit split method by filing Form 5712-A (“Election and Verification of the Cost Sharing or Profit Split Method Under Section 936(h)(5)”) and attaching it to its tax...

  3. 75 FR 14646 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-26

    ... Organizations; International Securities Exchange, LLC; Notice of Filing of Proposed Rule Change To List and... ``Commission'') authorized ISE to list and trade options on the SPDR Gold Trust,\\3\\ the iShares COMEX Gold... Exchange proposes to list and trade options on the ETFS Palladium Trust and the ETFS Platinum Trust. \\3...

  4. 76 FR 71089 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ... Receipts/SPDRs (``SPY''); (ii) the PowerShares QQQ Trust (``QQQ'') [reg]; (iii) Apple, Inc. (``AAPL''); (iv... initially filed a proposed rule change \\6\\ to pay a different Customer Complex Order Rebate to Add Liquidity... the Exchange from continuing to increase its order flow. Currently, the Exchange pays a Customer...

  5. HydroShare for iUTAH: Collaborative Publication, Interoperability, and Reuse of Hydrologic Data and Models for a Large, Interdisciplinary Water Research Project

    NASA Astrophysics Data System (ADS)

    Horsburgh, J. S.; Jones, A. S.

    2016-12-01

    Data and models used within the hydrologic science community are diverse. New research data and model repositories have succeeded in making data and models more accessible, but have been, in most cases, limited to particular types or classes of data or models, and they lack the collaborative and iterative functionality needed to enable shared data collection and modeling workflows. File sharing systems currently used within many scientific communities for private sharing of preliminary and intermediate data and modeling products do not support collaborative data capture, description, visualization, and annotation. More recently, hydrologic datasets and models have been cast as "social objects" that can be published, collaborated around, annotated, discovered, and accessed. Yet it can be difficult using existing software tools to achieve the kind of collaborative workflows and data/model reuse that many envision. HydroShare is a new, web-based system for sharing hydrologic data and models with specific functionality aimed at making collaboration easier and achieving new levels of interactive functionality and interoperability. Within HydroShare, we have developed new functionality for creating datasets, describing them with metadata, and sharing them with collaborators. HydroShare is enabled by a generic data model and content packaging scheme that supports describing and sharing diverse hydrologic datasets and models. Interoperability among the diverse types of data and models used by hydrologic scientists is achieved through consistent storage, management, sharing, publication, and annotation within HydroShare. In this presentation, we highlight and demonstrate how the flexibility of HydroShare's data model and packaging scheme, HydroShare's access control and sharing functionality, and its versioning and publication capabilities have enabled the sharing and publication of research datasets for a large, interdisciplinary water research project called iUTAH (innovative Urban Transitions and Arid-region Hydro-sustainability). We discuss the experiences of iUTAH researchers now using HydroShare to collaboratively create, curate, and publish datasets and models in a way that encourages collaboration, promotes reuse, and meets funding agency requirements.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindsey, Nicholas C.

    The growth of additive manufacturing as a disruptive technology poses nuclear proliferation concerns worthy of serious consideration. Additive manufacturing began in the early 1980s with technological advances in polymer manipulation, computer capabilities, and computer-aided design (CAD) modeling. It was originally limited to rapid prototyping; however, it eventually developed into a complete means of production that has slowly penetrated the consumer market. Today, additive manufacturing machines can produce complex and unique items in a vast array of materials including plastics, metals, and ceramics. These capabilities have democratized the manufacturing industry, allowing almost anyone to produce items as simple as cup holders or as complex as jet fuel nozzles. Additive manufacturing, or three-dimensional (3D) printing as it is commonly called, relies on CAD files created or shared by individuals with additive manufacturing machines to produce a 3D object from a digital model. This sharing of files means that a 3D object can be scanned or rendered as a CAD model in one country, and then downloaded and printed in another country, allowing items to be shared globally without physically crossing borders. The sharing of CAD files online has been a challenging task for the export controls regime to manage over the years, and additive manufacturing could make these transfers more common. In this sense, additive manufacturing is a disruptive technology not only within the manufacturing industry but also within the nuclear nonproliferation world. This paper provides an overview of the proliferation concerns raised by additive manufacturing.

  7. Applying Service-Oriented Architecture on The Development of Groundwater Modeling Support System

    NASA Astrophysics Data System (ADS)

    Li, C. Y.; WANG, Y.; Chang, L. C.; Tsai, J. P.; Hsiao, C. T.

    2016-12-01

    Groundwater simulation has become an essential step in groundwater resources management and assessment. There are many stand-alone pre- and post-processing software packages to alleviate the model simulation load, but these stand-alone packages neither provide centralized management of data and simulation results nor offer network sharing functions. Hence, it is difficult to share and reuse the data and knowledge (simulation cases) systematically within or across companies. Therefore, this study develops a centralized, network-based groundwater modeling support system to assist model construction. The system is based on service-oriented architecture and allows remote users to develop their modeling cases on the internet. The data and cases (knowledge) are thus easy to manage centrally. MODFLOW, the most popular groundwater model in the world, is the modeling engine of the system. The system provides a data warehouse to store groundwater observations, along with a MODFLOW Support Service, MODFLOW Input File & Shapefile Convert Service, MODFLOW Service, and Expert System Service to assist researchers in building models. Since the system architecture is service-oriented, it is scalable and flexible. The system can easily be extended to include scenario analysis and knowledge management to facilitate the reuse of groundwater modeling knowledge.

  8. 77 FR 21120 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change to List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-09

    ... Contracts Overlying 10 Shares of a Security (``Mini-Options Contracts'') and Implementing Rule Text... contracts'') and implement rule text necessary to distinguish mini-options contracts from option contracts overlying 100 shares of a security (``standard contracts''). The text of the proposed rule change is...

  9. 75 FR 22874 - Claymore Exchange-Traded Fund Trust 3, et al.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-30

    ... SECURITIES AND EXCHANGE COMMISSION [Investment Company Act Release No. 29256; File No. 812-13534... the Investment Company Act of 1940 (``Act'') for an exemption from sections 2(a)(32), 5(a)(1), 22(d... management investment companies to issue shares (``Shares'') redeemable in large aggregations only...

  10. 25 CFR 227.4 - Sale of oil and gas leases.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND... first year's rental, and his share of the advertising costs, and shall file with the superintendent the... bidder or bidders will be required to pay his or their share of the advertising costs. Amounts received...

  11. 25 CFR 227.4 - Sale of oil and gas leases.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND... first year's rental, and his share of the advertising costs, and shall file with the superintendent the... bidder or bidders will be required to pay his or their share of the advertising costs. Amounts received...

  12. 25 CFR 227.4 - Sale of oil and gas leases.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND... first year's rental, and his share of the advertising costs, and shall file with the superintendent the... bidder or bidders will be required to pay his or their share of the advertising costs. Amounts received...

  13. 25 CFR 227.4 - Sale of oil and gas leases.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND... first year's rental, and his share of the advertising costs, and shall file with the superintendent the... bidder or bidders will be required to pay his or their share of the advertising costs. Amounts received...

  14. 25 CFR 227.4 - Sale of oil and gas leases.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEASING OF CERTAIN LANDS IN WIND... first year's rental, and his share of the advertising costs, and shall file with the superintendent the... bidder or bidders will be required to pay his or their share of the advertising costs. Amounts received...

  15. 78 FR 35338 - Self-Regulatory Organizations; BOX Options Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-12

    ... trading option contracts overlying 1,000 SPDR[supreg] S&P 500[supreg] exchange-traded fund shares (``SPY''),\\3\\ or (``Jumbo SPY Options'').\\4\\ Whereas standard options contracts represent a deliverable of 100... the number of deliverable shares, Jumbo SPY Options have the same terms and contract characteristics...

  16. 77 FR 34117 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-08

    ... other high credit quality, short-term fixed-income or similar securities (including shares of money market funds, bank deposits, bank money market accounts, certain variable rate-demand notes, and...- income or similar securities (including shares of money market funds, bank deposits, bank money market...

  17. [A method for auditing medical records quality: audit of 467 medical records within the framework of the medical information systems project quality control].

    PubMed

    Boulay, F; Chevallier, T; Gendreike, Y; Mailland, V; Joliot, Y; Sambuc, R

    1998-03-01

    Future hospital accreditation could take into account the quality of medical files. The objective of this study is to test a method for auditing and evaluating the quality of the handling of medical files. We conducted a retrospective regional audit, based on the frame of reference of the National Agency for Medical Development and Evaluation, using a sample of cases stratified by establishment. In our region, the global budgets of 47 public and private hospitals participating in the public hospital service are adjusted with the medicalised activity data (PMSI) in mind. This audit was proposed to the doctors of the Department of Medical Information on the occasion of the regulatory PMSI quality control. A total of 467 questionnaires were returned by 39 of the 47 solicited hospitals (83%). The methodological aspects (questionnaire, cooperative approach...) are discussed. The make-up of medical files can also be improved by raising the percentage of the presence of important data or documents such as the reason for admission (74.1%), the surgery report (83.2%), and the hospitalisation report (66.6%). A system for classifying paraclinical results is shared and systematic throughout the service or hospital in only 73.2% of cases. The quality of the handling of medical files seems problematic in our hospitals, and actions for improving quality should be undertaken as a priority.

  18. 77 FR 71644 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-03

    ... exercise limits for options on the iShares MSCI Emerging Markets Index Fund (``EEM'') to 500,000 contracts... exercise limits for EEM options to 500,000 contracts.\\3\\ There is precedent for establishing position...\\ \\3\\ By virtue of Rule 1002, which is not being amended by this filing, the exercise limit for EEM...

  19. From Jefferson to Metallica to Your Campus: Copyright Issues in Student Peer-to-Peer File Sharing

    ERIC Educational Resources Information Center

    Cesarini, Lisa McHugh; Cesarini, Paul

    2008-01-01

    When Lars Ulrich, drummer for the rock group Metallica, testified before Congress about his group's lawsuit against Napster in 2000, many people who followed copyright issues in the music industry were not surprised (Ulrich, 2000). Ever since downloading audio files became as easy as clicking a few buttons on a personal computer, charges of…

  20. 75 FR 47651 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Order Approving a Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-62605; File No. SR-NASDAQ-2010-068] Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Order Approving a Proposed Rule Change to Establish a Revenue Sharing Program With Correlix, Inc. July 30, 2010. On June 8, 2010, The NASDAQ Stock Market LLC (``NASDAQ'' or the ``Exchange'') filed wit...

  1. Enhanced K-means clustering with encryption on cloud

    NASA Astrophysics Data System (ADS)

    Singh, Iqjot; Dwivedi, Prerna; Gupta, Taru; Shynu, P. G.

    2017-11-01

    This paper tries to solve the problem of storing and managing big files over the cloud by implementing hashing on Hadoop, and to ensure security while uploading and downloading files. Cloud computing is a paradigm that emphasizes sharing data and facilitates the sharing of infrastructure and resources [10]. Hadoop is open-source software that allows big files to be stored and managed on the cloud according to our needs. The K-means clustering algorithm calculates the distances between cluster centroids and data points. Hashing is a technique for storing and retrieving data with hash keys; the hash function maps the original data to a key under which the data are later retrieved [17]. Encryption is a process that transforms electronic data into a non-readable form known as ciphertext. Decryption is the opposite process: it transforms the ciphertext into plaintext that the end user can read and understand. For encryption and decryption, a symmetric-key cryptographic algorithm is used; specifically, the DES algorithm provides secure storage of the files [3].
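
    As a concrete illustration of the K-means step described above (distances between cluster centroids and data points), here is a minimal NumPy sketch of one assignment-and-update iteration. It is generic K-means, not the paper's Hadoop implementation.

        # One K-means iteration: assign each point to its nearest centroid,
        # then recompute centroids as the mean of their assigned points.
        import numpy as np

        def assign_clusters(points, centroids):
            # distances[i, j] = Euclidean distance from point i to centroid j
            distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
            return distances.argmin(axis=1)

        points = np.random.rand(100, 2)
        centroids = points[np.random.choice(len(points), 3, replace=False)]
        labels = assign_clusters(points, centroids)
        # Update step (empty clusters are ignored here for brevity).
        centroids = np.array([points[labels == k].mean(axis=0) for k in range(3)])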

  2. Development of a user-friendly system for image processing of electron microscopy by integrating a web browser and PIONE with Eos.

    PubMed

    Tsukamoto, Takafumi; Yasunaga, Takuo

    2014-11-01

    Eos (Extensible object-oriented system) is one of the more powerful applications for image processing of electron micrographs. Ordinarily, Eos offers only character user interfaces (CUI) under operating systems (OS) such as OS X or Linux, which is not user-friendly. Users of Eos therefore need to be expert at image processing of electron micrographs and to have some knowledge of computer science as well, yet not everyone who needs Eos is an expert with a CUI. We therefore extended Eos into an OS-independent web system with graphical user interfaces (GUI) by integrating a web browser. The advantage of using a web browser is not only that it extends Eos with a GUI, but also that it lets Eos work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the web browser. Eos has more than 400 commands related to image processing for electron microscopy, and the usage of each command differs from the others. Since the beginning of development, Eos has managed its user interfaces through an interface definition file called "OptionControlFile", written in CSV (Comma-Separated Values) format; each command has an OptionControlFile, which holds the information needed to generate its interface and describe its usage. The developed GUI system, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also accesses OptionControlFile and produces a web user interface automatically, because this mechanism is mature and convenient. The basic actions of the client-side system were implemented properly and support auto-generation of web forms, with functions for execution, image preview, and file upload to a web server. Thus the system can execute Eos commands with the options unique to each command and carry out image analysis. Problems remain concerning the image file format for visualization and the workspace for analysis: image file format information is useful for checking whether an input/output file is correct, and a common workspace for analysis is needed because the client is physically separated from the server. We solved the file format problem by extending the rules of the Eos OptionControlFile. To solve the workspace problem, we developed two systems. The first uses only the local environment: the user runs a web server provided by Eos, accesses a web client through a web browser, and manipulates local files through the GUI in the browser. The second employs PIONE (Process-rule for Input/Output Negotiation Environment), our platform for heterogeneous distributed environments. Users can put their resources, such as microscopic images and text files, into the server-side environment supported by PIONE, and experts can write PIONE rule definitions that describe image-processing workflows. PIONE then runs each image-processing step on a suitable computer, following the defined rules. PIONE supports interactive manipulation, so a user can try a command with various setting values. In this setting, our contribution is the auto-generation of a GUI for a PIONE workflow. As advanced functionality, we have developed a module to log user actions. The logs include information such as the setting values used in image processing, the sequence of commands, and so on. Used effectively, these logs offer many advantages: for example, when an expert discovers some image-processing know-how, other users can share the logs containing it, and analysis of the logs may yield recommended workflows for image analysis. To implement a social platform of image processing for electron microscopists, we have developed the system infrastructure as well.
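
    The OptionControlFile mechanism described above, a CSV interface definition from which Zephyr auto-generates web forms, can be illustrated with a short Python sketch. The column layout used here (option name, type, default, description) is invented for illustration; real Eos OptionControlFiles may differ.

        # Parse an OptionControlFile-style CSV and emit a simple form spec
        # that a web layer could render as HTML inputs.
        import csv
        import io

        option_control = io.StringIO(
            "-i,filename,,input image file\n"
            "-o,filename,,output image file\n"
            "-sigma,float,1.5,Gaussian smoothing width\n"
        )

        form_fields = []
        for name, ftype, default, desc in csv.reader(option_control):
            widget = "file" if ftype == "filename" else "text"
            form_fields.append({"name": name, "widget": widget,
                                "default": default, "label": desc})

        for field in form_fields:
            print(field)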

  3. Modeling Tools for Propulsion Analysis and Computational Fluid Dynamics on the Internet

    NASA Technical Reports Server (NTRS)

    Muss, J. A.; Johnson, C. W.; Gotchy, M. B.

    2000-01-01

    The existing RocketWeb(TM) Internet Analysis System (http://www.johnsonrockets.com/rocketweb) provides an integrated set of advanced analysis tools that can be securely accessed over the Internet. Since these tools consist of both batch and interactive analysis codes, the system includes convenient methods for creating input files and evaluating the resulting data. The RocketWeb(TM) system also contains many features that permit data sharing which, when further developed, will facilitate real-time, geographically diverse, collaborative engineering within a designated work group. Adding work group management functionality while simultaneously extending and integrating the system's set of design and analysis tools will create a system providing rigorous, controlled design development, reducing design cycle time and cost.

  4. Grid data access on widely distributed worker nodes using scalla and SRM

    NASA Astrophysics Data System (ADS)

    Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.

    2008-07-01

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on cheap disks attached to processing nodes, as such a model is extremely beneficial compared with expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities, such as dynamic space allocation (lifetime of spaces), file management on shared storage (lifetime of files, pinning of files), storage policies, or uniform access to heterogeneous storage solutions, is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing 350 TB of Storage Elements, and the experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and the approach to making access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past two years. Integration details, future plans, and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  5. Centralized automated cataloging of health science materials in the MLC/SUNY/OCLC shared cataloging service.

    PubMed Central

    Raper, J E

    1977-01-01

    Since February 1976, The Medical Library Center of New York, with the assistance of the SUNY/OCLC Network, has offered, on a subscription basis, a centralized automated cataloging service to health science libraries in the greater metropolitan New York area. By using workforms and prints of OCLC records (amended by the subscribing participants), technical services personnel at the center have fed cataloging data, via a CRT terminal, into the OCLC system, which provides (1) catalog cards, received in computer filing order; (2) book card, spine, and pocket labels; (3) accessions lists; and (4) data for eventual production of book catalogs and union catalogs. The experience of the center in the development, implementation, operation, and budgeting of its shared cataloging service is discussed. PMID:843650

  6. Centralized automated cataloging of health science materials in the MLC/SUNY/OCLC shared cataloging service.

    PubMed

    Raper, J E

    1977-04-01

    Since February 1976, The Medical Library Center of New York, with the assistance of the SUNY/OCLC Network, has offered, on a subscription basis, a centralized automated cataloging service to health science libraries in the greater metropolitan New York area. By using workforms and prints of OCLC records (amended by the subscribing participants), technical services personnel at the center have fed cataloging data, via a CRT terminal, into the OCLC system, which provides (1) catalog cards, received in computer filing order; (2) book card, spine, and pocket labels; (3) accessions lists; and (4) data for eventual production of book catalogs and union catalogs. The experience of the center in the development, implementation, operation, and budgeting of its shared cataloging service is discussed.

  7. Enabling the democratization of the genomics revolution with a fully integrated web-based bioinformatics platform, Version 1.5 and 1.x.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chain, Patrick; Lo, Chien-Chi; Li, Po-E

    EDGE bioinformatics was developed to help biologists process Next Generation Sequencing data (in the form of raw FASTQ files), even if they have little to no bioinformatics expertise. EDGE is a highly integrated and interactive web-based platform that is capable of running many of the standard analyses that biologists require for viral, bacterial/archaeal, and metagenomic samples. EDGE provides the following analytical workflows: quality trimming and host removal, assembly and annotation, comparisons against known references, taxonomy classification of reads and contigs, whole genome SNP-based phylogenetic analysis, and PCR analysis. EDGE provides an intuitive web-based interface for user input, allows users to visualize and interact with selected results (e.g. the JBrowse genome browser), and generates a final detailed PDF report. Results in the form of tables, text files, graphic files, and PDFs can be downloaded. A user management system allows tracking of an individual's EDGE runs, along with the ability to share, post publicly, delete, or archive their results.

  8. The virtual microscopy database-sharing digital microscope images for research and education.

    PubMed

    Lee, Lisa M J; Goldman, Haviva M; Hortsch, Michael

    2018-02-14

    Over the last 20 years, virtual microscopy has become the predominant modus of teaching the structural organization of cells, tissues, and organs, replacing the use of optical microscopes and glass slides in a traditional histology or pathology laboratory setting. Although virtual microscopy image files can easily be duplicated, creating them requires not only quality histological glass slides but also an expensive whole slide microscopic scanner and massive data storage devices. These resources are not available to all educators and researchers, especially at new institutions in developing countries. This leaves many schools without access to virtual microscopy resources. The Virtual Microscopy Database (VMD) is a new resource established to address this problem. It is a virtual image file-sharing website that allows researchers and educators easy access to a large repository of virtual histology and pathology image files. With the support from the American Association of Anatomists (Bethesda, MD) and MBF Bioscience Inc. (Williston, VT), registration and use of the VMD are currently free of charge. However, the VMD site is restricted to faculty and staff of research and educational institutions. Virtual Microscopy Database users can upload their own collection of virtual slide files, as well as view and download image files for their own non-profit educational and research purposes that have been deposited by other VMD clients.

  9. CERN data services for LHC computing

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.

    2017-10-01

    Dependability, resilience, adaptability, and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent, complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services, and S3 functionality; and AFS for legacy distributed-file-system services. In this paper we summarise the experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment, with pluggable protocols, tuneable QoS, sharing capabilities, and fine-grained ACL management, while continuing to guarantee dependable and robust services.

  10. Efficient File Sharing by Multicast - P2P Protocol Using Network Coding and Rank Based Peer Selection

    NASA Technical Reports Server (NTRS)

    Stoenescu, Tudor M.; Woo, Simon S.

    2009-01-01

    In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P) highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users under the challenges imposed by space network environments.
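
    As a toy illustration of the network coding idea referenced above, the sketch below forms random linear combinations of source blocks over GF(2) (i.e., XOR); a receiver can decode once it collects enough linearly independent coded packets. This is a generic illustration, not the protocol's actual coding scheme.

        # Random linear network coding over GF(2): each coded packet is an XOR
        # of a random subset of source blocks, tagged with its coefficient
        # vector. Decoding is Gaussian elimination over GF(2) (not shown).
        import random

        def encode(blocks):
            """Return (coefficients, coded_block) for one random combination."""
            coeffs = [random.randint(0, 1) for _ in blocks]
            if not any(coeffs):
                coeffs[random.randrange(len(coeffs))] = 1  # avoid all-zero vector
            coded = bytes(len(blocks[0]))
            for c, block in zip(coeffs, blocks):
                if c:
                    coded = bytes(a ^ b for a, b in zip(coded, block))
            return coeffs, coded

        blocks = [b"ABCD", b"EFGH", b"IJKL"]          # equally sized source blocks
        packets = [encode(blocks) for _ in range(5)]  # redundancy tolerates churn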

  11. QoS support for end users of I/O-intensive applications using shared storage systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2011-01-19

    I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While the performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems, because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change in time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to a meaningful performance guarantee such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.
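
    A hedged sketch of the translation step described above: learn a mapping from an application's I/O throughput to its execution time from profiling runs, then invert it to obtain the throughput bound implied by a user's execution-time goal. The abstract does not specify the learning technique; ordinary least squares on an inverse-throughput model is used here purely as a stand-in, and all numbers are made up.

        # Fit runtime ~ a / throughput + b (I/O-bound part plus fixed part),
        # then invert the model to get the throughput needed for a time goal.
        import numpy as np

        throughput = np.array([20.0, 40.0, 60.0, 80.0, 100.0])   # MB/s (profiled)
        runtime = np.array([410.0, 220.0, 155.0, 120.0, 100.0])  # seconds

        A = np.column_stack([1.0 / throughput, np.ones_like(throughput)])
        a, b = np.linalg.lstsq(A, runtime, rcond=None)[0]

        goal = 150.0                          # user's execution-time goal (s)
        required_throughput = a / (goal - b)  # instantaneous throughput bound
        print(f"need about {required_throughput:.1f} MB/s to finish in {goal:.0f} s")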

  12. I/O Performance Characterization of Lustre and NASA Applications on Pleiades

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Rappleye, Jason; Chang, Johnny; Barker, David Peter; Biswas, Rupak; Mehrotra, Piyush

    2012-01-01

    In this paper we study the performance of the Lustre file system using five scientific and engineering applications representative of the NASA workload on large-scale supercomputing systems such as NASA's Pleiades. In order to facilitate the collection of Lustre performance metrics, we have developed a software tool that exports a wide variety of client- and server-side metrics using SGI's Performance Co-Pilot (PCP), and generates a human-readable report on key metrics at the end of a batch job. These performance metrics are (a) the amount of data read and written, (b) the number of files opened and closed, and (c) the remote procedure call (RPC) size distribution (4 KB to 1024 KB, in powers of 2) for I/O operations. The RPC size distribution measures the efficiency of the Lustre client and can pinpoint problems such as small write sizes, disk fragmentation, etc. These extracted statistics are useful in determining the I/O pattern of an application and can assist in identifying possible improvements to users' applications. Information on the number of file operations enables a scientist to optimize the I/O performance of their applications. The amount of I/O data helps users choose the optimal stripe size and stripe count to enhance I/O performance. In this paper, we demonstrate the usefulness of this tool on Pleiades for five production-quality NASA scientific and engineering applications. We compare the latency of read and write operations under Lustre to that with NFS by tracing system calls and signals. We also investigate the read and write policies and study the effect of page cache size on I/O operations. We examine the performance impact of Lustre stripe size and stripe count, along with a performance evaluation of file-per-process and single-shared-file access by all processes for the NASA workload using the parameterized IOR benchmark.
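
    The RPC size distribution reported by the tool (4 KB to 1024 KB, in powers of 2) amounts to a power-of-two histogram over observed RPC sizes. The short Python sketch below illustrates the binning with made-up sample sizes; it is an illustration of the metric, not the tool's actual code.

        # Bucket observed RPC sizes into powers of two from 4 KB to 1024 KB.
        from collections import Counter

        buckets = [4 * 2**i for i in range(9)]   # 4, 8, ..., 1024 (KB)

        def bucket_for(size_kb):
            for b in buckets:
                if size_kb <= b:
                    return b
            return buckets[-1]

        rpc_sizes_kb = [4, 64, 1024, 1024, 512, 16, 1024]   # hypothetical trace
        histogram = Counter(bucket_for(s) for s in rpc_sizes_kb)
        for b in buckets:
            print(f"{b:>5} KB: {histogram.get(b, 0)}")
        # A pile-up in small buckets relative to 1024 KB can indicate small
        # writes or fragmentation, as noted above.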

  13. 77 FR 68873 - Self-Regulatory Organizations; National Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-16

    ... calculation for Auto-Ex Mode, (ii) provide a fixed per share rebate for Midpoint Peg Zero Display Reserve... NMS stocks with quoted prices less than one dollar, (ii) create a fixed per share rebate for Midpoint Peg Zero Display Reserve Orders,\\3\\ and (iii) correct typographical inconsistencies within the Fee...

  14. 77 FR 36599 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-19

    ... Change Relating to the Accuvest Global Long Short ETF (Formerly the Mars Hill Global Relative Value ETF...) applicable to, the Accuvest Global Long Short ETF (``Fund'') (formerly known as the Mars Hill Global Relative... the Exchange of shares (``Shares'') of the Mars Hill Global Relative Value ETF, a series of Advisor...

  15. 77 FR 24233 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-23

    ... Change Relating to the Peritus High Yield ETF April 17, 2012. Pursuant to Section 19(b)(1) of the... Change The Exchange proposes to reflect a change to the holdings of the Peritus High Yield ETF to achieve... Exchange shares (``Shares'') of the Peritus High Yield ETF (``Fund'') under [[Page 24234

  16. 75 FR 16217 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-31

    ..., Inc. Regarding the Listing of the ProShares Ultra MSCI Mexico Investable Market Fund March 24, 2010... Ultra MSCI Mexico Investable Market. The text of the proposed rule change is available at the Exchange... (``ICUs''): \\4\\ ProShares Ultra MSCI Mexico Investable Market (the ``Fund''). \\4\\ An Investment Company...

  17. 77 FR 9281 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-16

    ... and Trading of the PIMCO Global Advantage Inflation-Linked Bond Strategy Fund Under NYSE Arca Equities...''): PIMCO Global Advantage Inflation-Linked Bond Strategy Fund. The text of the proposed rule change is... Shares \\3\\ (``Shares'') under NYSE Arca Equities Rule 8.600: PIMCO Global Advantage Inflation-Linked Bond...

  18. Beta Coefficient and Market Share: Downloading and Processing Data from DIALOG to LOTUS 1-2-3.

    ERIC Educational Resources Information Center

    Popovich, Charles J.

    This article briefly describes the topics "beta coefficient"--a measurement of the price volatility of a company's stock in relationship to the overall stock market--and "market share"--an average measurement for the overall stock market based on a specified group of stocks. It then selectively recommends a database (file) on…

  19. 12 CFR 239.22 - Charter amendments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... this part, shall be approved at the time of adoption, if adopted without change and filed with the... shares may be issued from time to time as authorized by the board of directors without further approval... of a series of capital stock to vote as a separate class or series or to more than one vote per share...

  20. KiMoSys: a web-based repository of experimental data for KInetic MOdels of biological SYStems

    PubMed Central

    2014-01-01

    Background The kinetic modeling of biological systems is mainly composed of three steps that proceed iteratively: model building, simulation and analysis. In the first step, it is usually required to set initial metabolite concentrations and to assign kinetic rate laws, estimating parameter values from kinetic data through optimization when these are not known. Although the rapid development of high-throughput methods has generated much omics data, experimentalists typically publish only a summary of the obtained results; the underlying experimental data files are rarely submitted to any public repository, or are simply not available at all. To automate the steps of building kinetic models as far as possible, there is a growing need in the systems biology community to easily exchange data in combination with models, and this need is the main motivation for the development of KiMoSys. Description KiMoSys is a user-friendly platform that includes a public data repository of published experimental data, containing concentration data for metabolites and enzymes as well as flux data. It was designed to ensure data management, storage and sharing for the wider systems biology community. This community repository offers a web-based interface and upload facility to turn available data into publicly accessible, centralized and structured-format data files. Moreover, it compiles and integrates available kinetic models associated with the data. KiMoSys also integrates tools to facilitate the construction of kinetic models of large-scale metabolic networks, especially for systems biologists performing computational research. Conclusions KiMoSys is a web-based system that integrates a public data repository and associated model(s) with computational tools, providing the systems biology community with a novel application that facilitates data storage and sharing, thus supporting the construction of ODE-based kinetic models and collaborative research projects. The web application, implemented using the Ruby on Rails framework, is freely available at http://kimosys.org, along with its full documentation. PMID:25115331

  1. Automating Data Submission to a National Archive

    NASA Astrophysics Data System (ADS)

    Work, T. T.; Chandler, C. L.; Groman, R. C.; Allison, M. D.; Gegg, S. R.; Biological; Chemical Oceanography Data Management Office

    2010-12-01

    In late 2006, the U.S. National Science Foundation (NSF) funded the Biological and Chemical Oceanographic Data Management Office (BCO-DMO) at Woods Hole Oceanographic Institution (WHOI) to work closely with investigators to manage oceanographic data generated from their research projects. One of the final data management tasks is to ensure that the data are permanently archived at the U.S. National Oceanographic Data Center (NODC) or other appropriate national archiving facility. In the past, BCO-DMO submitted data to NODC as an email with attachments, including a PDF file (a manually completed metadata record) and one or more data files. This method is no longer feasible given the rate at which data sets are contributed to BCO-DMO. Working with collaborators at NODC, a more streamlined and automated workflow was developed to keep up with the increased volume of data that must be archived at NODC. We will describe our new workflow: a semi-automated approach for contributing data to NODC that includes a Federal Geographic Data Committee (FGDC) compliant Extensible Markup Language (XML) metadata file accompanied by comma-delimited data files. The FGDC XML file is populated from information stored in a MySQL database. A crosswalk described by an Extensible Stylesheet Language Transformation (XSLT) is used to transform the XML-formatted MySQL result set into an FGDC-compliant XML metadata file. To ensure data integrity, the MD5 algorithm is used to generate a checksum and manifest of the files submitted to NODC for permanent archive. The revised system supports preparation of detailed, standards-compliant metadata that facilitate data sharing and enable accurate reuse of multidisciplinary information. The approach is generic enough to be adapted for use by other data management groups.
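
    The checksum-and-manifest step is straightforward to reproduce. Below is a minimal Python sketch that computes an MD5 digest for each data file in a submission package and writes a manifest; the directory name and manifest format are illustrative assumptions, not BCO-DMO's actual conventions.

      import hashlib
      from pathlib import Path

      def md5sum(path, chunk_size=1 << 20):
          """Stream the file in 1 MB chunks so large data files fit in memory."""
          digest = hashlib.md5()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(chunk_size), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      def write_manifest(package_dir, manifest_name="manifest.md5"):
          """Write '<md5>  <relative path>' lines, mirroring md5sum(1) output."""
          package = Path(package_dir)
          lines = []
          for path in sorted(package.rglob("*")):
              if path.is_file() and path.name != manifest_name:
                  lines.append(f"{md5sum(path)}  {path.relative_to(package)}")
          (package / manifest_name).write_text("\n".join(lines) + "\n")
          return lines

      if __name__ == "__main__":
          for line in write_manifest("submission_package"):  # hypothetical directory
              print(line)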

  2. A Working Framework for Enabling International Science Data System Interoperability

    NASA Astrophysics Data System (ADS)

    Hughes, J. Steven; Hardman, Sean; Crichton, Daniel J.; Martinez, Santa; Law, Emily; Gordon, Mitchell K.

    2016-07-01

    For diverse scientific disciplines to interoperate they must be able to exchange information based on a shared understanding. To capture this shared understanding, we have developed a knowledge representation framework that leverages ISO level reference models for metadata registries and digital archives. This framework provides multi-level governance, evolves independent of the implementation technologies, and promotes agile development, namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. The knowledge representation is captured in an ontology through a process of knowledge acquisition. Discipline experts in the role of stewards at the common, discipline, and project levels work to design and populate the ontology model. The result is a formal and consistent knowledge base that provides requirements for data representation, integrity, provenance, context, identification, and relationship. The contents of the knowledge base are translated and written to files in suitable formats to configure system software and services, provide user documentation, validate input, and support data analytics. This presentation will provide an overview of the framework, present a use case that has been adopted by an entire science discipline at the international level, and share some important lessons learned.

  3. An Interactive Web System for Field Data Sharing and Collaboration

    NASA Astrophysics Data System (ADS)

    Weng, Y.; Sun, F.; Grigsby, J. D.

    2010-12-01

    A Web 2.0 system is designed and developed to facilitate data collection for field studies in the Geological Sciences department at Ball State University. The system provides a student-centered learning platform that enables users to upload their collected data in various formats, interact and collaborate dynamically online, and ultimately create a shared digital repository of field experiences. The data types considered for the system, and their corresponding formats and requirements, are listed in the accompanying "Data Requirements" table (not reproduced in this record). The system has six main functionalities: (1) only registered users can access the system, with a confidential identification and password; (2) each user can upload, revise, or delete data in various formats such as image, audio, video, and text files; (3) interested users are allowed to co-edit content and join the collaboration whiteboard for further discussion; (4) the system integrates with Google, Yahoo, or Flickr to search for similar photos with the same tags; (5) users can search the web system by specific keywords; and (6) photos with recorded GPS readings can be mashed up and mapped to Google Maps/Earth for visualization. Application of the system to geology field trips at Ball State University will be demonstrated to assess its usability.

  4. A new microfluidic approach for the one-step capture, amplification and label-free quantification of bacteria from raw samples

    PubMed Central

    Pereiro, Iago; Bendali, Amel; Tabnaoui, Sanae; Alexandre, Lucile; Srbova, Jana; Bilkova, Zuzana; Deegan, Shane; Joshi, Lokesh; Viovy, Jean-Louis; Malaquin, Laurent

    2017-01-01

    A microfluidic method to specifically capture and detect infectious bacteria based on immunorecognition and proliferative power is presented. It involves a microscale fluidized bed in which magnetic and drag forces are balanced to retain antibody-functionalized superparamagnetic beads in a chamber during sample perfusion. Captured cells are then cultivated in situ by infusing nutritionally-rich medium. The system was validated by the direct one-step detection of Salmonella Typhimurium in undiluted unskimmed milk, without pre-treatment. The growth of bacteria induces an expansion of the fluidized bed, mainly due to the volume occupied by the newly formed bacteria. This expansion can be observed with the naked eye, providing simple low-cost detection of only a few bacteria in a few hours. The time to expansion can also be measured with a low-cost camera, allowing quantitative detection down to 4 cfu (colony forming units), with a dynamic range of 10⁰ to 10⁷ cfu ml⁻¹ in 2 to 8 hours, depending on the initial concentration. This mode of operation is an equivalent of quantitative PCR, with which it shares a high dynamic range and outstanding sensitivity and specificity, operating at the live-cell rather than the DNA level. Specificity was demonstrated by controls performed in the presence of a 500× excess of non-pathogenic Lactococcus lactis. The system's versatility was demonstrated by its successful application to the detection and quantitation of Escherichia coli O157:H15 and Enterobacter cloacae. This new technology allows fast, low-cost, portable and automated bacteria detection for various applications in food, environment, security and clinics. PMID:28626552

  5. Serial interpolation for secure membership testing and matching in a secret-split archive

    DOEpatents

    Kroeger, Thomas M.; Benson, Thomas R.

    2016-12-06

    The various technologies presented herein relate to analyzing a plurality of shares stored at a plurality of repositories to determine whether a secret from which the shares were formed matches a term in a query. A threshold number of shares are formed with a generating polynomial operating on the secret. A process of serially interpolating the threshold number of shares can be conducted whereby a contribution of a first share is determined, a contribution of a second share is determined while seeded with the contribution of the first share, etc. A value of a final share in the threshold number of shares can be determined and compared with the search term. In the event of the value of the final share and the search term matching, the search term matches the secret in the file from which the shares are formed.
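
    The patented scheme builds on threshold secret sharing: a secret is split into shares via a random generating polynomial and can only be recovered, here by stepwise Lagrange interpolation, once a threshold number of shares are combined. The Python sketch below shows the underlying split-and-reconstruct arithmetic over a prime field and a membership test by reconstruction; it illustrates plain Shamir sharing, not the patent's serial, seeded interpolation protocol.

      import random

      P = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo P

      def split(secret, threshold, n_shares):
          """Shamir split: evaluate a random degree-(threshold-1) polynomial."""
          coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
          def poly(x):
              acc = 0
              for c in reversed(coeffs):  # Horner's rule
                  acc = (acc * x + c) % P
              return acc
          return [(x, poly(x)) for x in range(1, n_shares + 1)]

      def reconstruct(shares):
          """Lagrange interpolation at x=0 recovers the secret."""
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num = den = 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              # pow(den, -1, P) is the modular inverse (Python 3.8+).
              secret = (secret + yi * num * pow(den, -1, P)) % P
          return secret

      def matches(shares, term):
          """Membership test: does the reconstructed secret equal the query term?"""
          return reconstruct(shares) == term

      if __name__ == "__main__":
          shares = split(secret=42, threshold=3, n_shares=5)
          print(matches(shares[:3], 42))   # True: threshold met, term matches
          print(matches(shares[:3], 99))   # False: term differs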

  6. Data on fossil fuel availability for Shared Socioeconomic Pathways.

    PubMed

    Bauer, Nico; Hilaire, Jérôme; Brecha, Robert J; Edmonds, Jae; Jiang, Kejun; Kriegler, Elmar; Rogner, Hans-Holger; Sferra, Fabio

    2017-02-01

    The data files contain the assumptions and results for the construction of cumulative availability curves for coal, oil and gas for the five Shared Socioeconomic Pathways. The files include the maximum availability (also known as cumulative extraction cost curves) and the assumptions that are applied to construct the SSPs. The data is differentiated into twenty regions. The resulting cumulative availability curves are plotted and the aggregate data as well as cumulative availability curves are compared across SSPs. The methodology, the data sources and the assumptions are documented in a related article (N. Bauer, J. Hilaire, R.J. Brecha, J. Edmonds, K. Jiang, E. Kriegler, H.-H. Rogner, F. Sferra, 2016) [1] under DOI: http://dx.doi.org/10.1016/j.energy.2016.05.088.

  7. Data rescue of NASA First ISLSCP (International Satellite Land Surface Climatology Project) Field Experiment (FIFE) aerial observations

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S. K.; Boyer, A.; Deb, D.; Beaty, T.; Wei, Y.; Wei, Z.

    2017-12-01

    The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) for biogeochemical dynamics is one of the NASA Earth Observing System Data and Information System (EOSDIS) data centers. The ORNL DAAC (https://daac.ornl.gov) is responsible for data archival, product development and distribution, and user support for biogeochemical and ecological data and models. In particular, the ORNL DAAC has provided data management support for NASA's terrestrial ecology field campaign programs for the last several decades. Field campaigns combine ground, aircraft, and satellite-based measurements in specific ecosystems over multi-year time periods. The data collected during NASA field campaigns are archived at the ORNL DAAC (https://daac.ornl.gov/get_data/). This paper describes the effort of the ORNL DAAC team to rescue a First ISLSCP Field Experiment (FIFE) dataset containing airborne and satellite observations from the 1980s. The data collected during the FIFE campaign include high-resolution aerial imagery collected over Kansas. A data rescue workflow was prepared to test for successful recovery of the data from a CD-ROM and to ensure that the data are usable and preserved for the future. The imagery contains spectral reflectance data that can be used as a historical benchmark to examine climatological and ecological changes in the Kansas region since the 1980s. The key steps taken to convert the files to modern standards were as follows. The images were decompressed using the custom compression software provided with the data; the compression algorithm, created for MS-DOS in the 1980s, had to be set up to run on modern computer systems. The decompressed files were geo-referenced using metadata stored in separate compressed header files. Standardized file names were applied (file names and details were described in separate readme documents). The image files were converted to GeoTIFF format with embedded georeferencing information. Finally, Open Geospatial Consortium (OGC) Web services were leveraged to provide dynamic data transformation and visualization. We will describe these steps in detail and share lessons learned during the AGU session.
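
    The georeference-and-convert step can be sketched with the GDAL Python bindings. The snippet below assigns corner coordinates (as recovered from the header files) and writes a compressed GeoTIFF; the file names, EPSG code, and coordinates are illustrative assumptions, not values from the FIFE recovery.

      from osgeo import gdal

      def to_geotiff(src_path, dst_path, ulx, uly, lrx, lry, epsg="EPSG:4326"):
          """Convert a decompressed raster to GeoTIFF with embedded georeferencing.

          The upper-left/lower-right corner coordinates would come from the
          recovered header metadata; EPSG:4326 is an illustrative choice.
          """
          src = gdal.Open(src_path)
          if src is None:
              raise IOError(f"cannot open {src_path}")
          gdal.Translate(
              dst_path,
              src,
              format="GTiff",
              outputSRS=epsg,
              outputBounds=[ulx, uly, lrx, lry],  # maps to gdal_translate -a_ullr
              creationOptions=["COMPRESS=DEFLATE", "TILED=YES"],
          )

      if __name__ == "__main__":
          # Hypothetical file names and Kansas-area corner coordinates.
          to_geotiff("fife_scene.raw", "fife_scene.tif",
                     ulx=-96.6, uly=39.1, lrx=-96.5, lry=39.0)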

  8. VisIO: enabling interactive visualization of ultra-scale, time-series data via high-bandwidth distributed I/O systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Christopher J; Ahrens, James P; Wang, Jun

    2010-10-15

    Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network are often a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide the necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for integration with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to achieve linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing of VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation, which showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.
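
    The co-location idea, assigning each reader process to a node that already holds a replica of its data block, can be illustrated with a simple greedy assignment. The Python sketch below is a toy model of such a scheduler under assumed inputs (a block-to-replica-nodes map and per-node slot counts, as an HDFS namenode would report); it is not the actual VisIO algorithm.

      def colocate(block_replicas, node_slots):
          """Greedily assign each data block's reader to a node holding a replica.

          block_replicas: {block_id: [node, ...]} replica locations (as in HDFS)
          node_slots:     {node: free_process_slots}
          Returns {block_id: node}; falls back to any free node (remote read)
          when all replica holders are full.
          """
          assignment = {}
          # Place the most constrained blocks (fewest replicas) first.
          for block, replicas in sorted(block_replicas.items(),
                                        key=lambda kv: len(kv[1])):
              local = [n for n in replicas if node_slots.get(n, 0) > 0]
              if local:
                  node = max(local, key=node_slots.get)       # most free slots
              else:
                  node = max(node_slots, key=node_slots.get)  # remote fallback
              assignment[block] = node
              node_slots[node] -= 1
          return assignment

      if __name__ == "__main__":
          replicas = {"b0": ["n1", "n2"], "b1": ["n2"], "b2": ["n1", "n3"]}
          slots = {"n1": 1, "n2": 1, "n3": 1}
          print(colocate(replicas, slots))  # {'b1': 'n2', 'b0': 'n1', 'b2': 'n3'}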

  9. Data Publishing and Sharing Via the THREDDS Data Repository

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Caron, J.; Davis, E.; Baltzer, T.

    2007-12-01

    The terms "Team Science" and "Networked Science" have been coined to describe a virtual organization of researchers tied together by some intellectual challenge, but often located in different organizations and locations. A critical component of these endeavors is publishing and sharing of content, including scientific data. Imagine pointing your web browser to a web page that interactively lets you upload data and metadata to a repository residing on a remote server, which can then be accessed by others in a secure fashion via the web. While any content can be added to this repository, it is designed particularly for storing and sharing scientific data and metadata. Server support includes uploading of data files that can subsequently be subsetted, aggregated, and served in NetCDF or other scientific data formats. Metadata can be associated with the data and interactively edited. The THREDDS Data Repository (TDR) is a server that provides client-initiated, on-demand, location-transparent storage for data of any type, which can then be served by the THREDDS Data Server (TDS). The TDR provides functionality to:
    * securely store and "own" data files and associated metadata
    * upload files via HTTP and GridFTP
    * upload a collection of data as a single file
    * modify and restructure repository contents
    * incorporate metadata provided by the user
    * generate additional metadata programmatically
    * edit individual metadata elements
    The TDR can exist separately from a TDS, serving content via HTTP. It can also work in conjunction with the TDS, which includes functionality to provide:
    * access to data in a variety of formats via OPeNDAP, the OGC Web Coverage Service (for gridded datasets), and bulk HTTP file transfer
    * a NetCDF view of datasets in NetCDF, OPeNDAP, HDF-5, GRIB, and NEXRAD formats
    * serving of very large volume datasets, such as NEXRAD radar
    * aggregation into virtual datasets
    * subsetting via OPeNDAP and NetCDF subsetting services
    This talk will discuss TDR/TDS capabilities as well as how users can install this software to create their own repositories. A sketch of a client-initiated upload appears after this record.
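
    As a rough illustration of client-initiated upload, the sketch below posts a data file plus a metadata document over HTTP with the Python requests library. The endpoint URL, form field names, and token are entirely hypothetical; the record does not document the TDR's actual API.

      import requests

      TDR_URL = "https://example.edu/thredds/repository/upload"  # hypothetical endpoint

      def upload(data_path, metadata_path, token):
          """Upload one data file together with its metadata document over HTTP."""
          with open(data_path, "rb") as data, open(metadata_path, "rb") as meta:
              resp = requests.post(
                  TDR_URL,
                  files={"data": data, "metadata": meta},  # hypothetical field names
                  headers={"Authorization": f"Bearer {token}"},
                  timeout=300,
              )
          resp.raise_for_status()
          return resp.json()

      if __name__ == "__main__":
          print(upload("nexrad_scan.nc", "metadata.xml", token="..."))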

  10. Technological Networks

    NASA Astrophysics Data System (ADS)

    Mitra, Bivas

    The study of networks in the form of mathematical graph theory is one of the fundamental pillars of discrete mathematics. However, recent years have witnessed a substantial new movement in network research. The focus of the research is shifting away from the analysis of small graphs and the properties of individual vertices or edges toward consideration of statistical properties of large-scale networks. This new approach has been driven largely by the availability of technological networks like the Internet [12] and the World Wide Web [2] that allow us to gather and analyze data on a scale far larger than previously possible. At the same time, technological networks have evolved as socio-technological systems, as concepts from social systems based on self-organization theory have become unified in technological networks [13]. In today's society, we have simple and universal access to great amounts of information and services. These information services are based upon the infrastructure of the Internet and the World Wide Web. The Internet is the system composed of computers connected by cables or some other form of physical connection. Over this physical network, it is possible to exchange e-mails, transfer files, etc. The World Wide Web (commonly shortened to the Web), on the other hand, is a system of interlinked hypertext documents accessed via the Internet, where nodes represent web pages and links represent hyperlinks between the pages. Peer-to-peer (P2P) networks [26] have also recently become a popular medium through which huge amounts of data can be shared. P2P file sharing systems, in which files are searched and downloaded among peers without the help of central servers, have emerged as a major component of Internet traffic. An important advantage of P2P networks is that all clients provide resources, including bandwidth, storage space, and computing power. In this chapter, we discuss these technological networks in detail. The review is organized as follows. Section 2 presents an introduction to the Internet and the protocols related to it; this section also covers the socio-technological properties of the Internet, like scale invariance, the small-world property, and network resilience. Section 3 describes P2P networks, their categorization, and related issues like search and stability. Section 4 concludes the chapter.

  11. Using Instant Messaging Systems as a Platform for Electronic Voting

    NASA Astrophysics Data System (ADS)

    Meletiadou, Anastasia; Grimm, Rüdiger

    Many Instant Messaging (IM) systems like Skype or Spark offer extended services such as file sharing, VoIP, or a shared whiteboard. As the name suggests, IM applications are predominantly used for spontaneous text-based communication for private or business purposes. In this paper we explore their potential to serve as platforms for secure collaborative applications like electronic contract negotiation, e-payment or electronic voting. Such applications have to deal with challenges like time constraints (“instant” communication is desired), integration of media channels and the absence of one unifying “sphere of control” covering all participants. In this paper, we address these challenges by discussing one particular secure collaborative application: secure decision processes for small groups. We provide the following contributions: (1) we define three varying scenarios and corresponding security requirements, (2) we present an IM-based architecture implementing these scenarios, including a video-based authentication mechanism, and (3) we discuss potential attack patterns.

  12. 75 FR 37502 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-29

    ... Thereto Regarding Listing and Trading of the WisdomTree Emerging Markets Local Debt Fund June 22, 2010... following fund of the WisdomTree Trust (the ``Trust'') under NYSE Arca Equities Rule 8.600 (``Managed Fund Shares''): WisdomTree Emerging Markets Local Debt Fund (the ``Fund''). The shares of the Fund are...

  13. 77 FR 63406 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-16

    ... Liquidity Provider (``SLP'') for all assigned SLP securities in the aggregate (including shares of both a SLP proprietary trading unit (``SLP-Prop'') and a SLP market maker (``SLMM'') of the same member... per share price of $1.00 or more, if the SLP (i) meets the 10% average or more quoting requirement in...

  14. 76 FR 50529 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-15

    ... or more, to include criteria based on an SLP's Average Daily Volume (``ADV'') in added liquidity in... liquidity in the applicable month for all assigned SLP securities, as follows: \\5\\ \\5\\ See Securities... is more than 10 million shares but not more than 20 million shares.\\6\\ \\6\\ For all other SLP...

  15. 76 FR 56850 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-14

    ....S.C. 78f(b)(4). With respect to the reduction of fees for taking liquidity, the Exchange believes... trading of Nasdaq securities pursuant to UTP. Additionally, the approach for lowering fees for taking... for taking liquidity from $0.0014 per share to a rebate of $0.0006 per share.\\7\\ The Exchange further...

  16. 78 FR 17988 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-25

    ... minimum CMM quoting requirements based on a percentage of series or as a percentage of time achieves the... shares of underlying stock or exchange-traded fund shares. Long- term options are series with a time to... 60% of the non-adjusted options series that have a time to expiration of less than nine months); NYSE...

  17. 76 FR 2738 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-14

    ... Price List (``Price List'') for equity transactions in stocks with a per share stock price less than $1.00 to provide that the equity per share charge for all other transactions when taking liquidity from the Exchange per transaction will be the lesser of (i) 0.3% of the total dollar value of the...

  18. 77 FR 76132 - Self-Regulatory Organizations; BOX Options Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ... Rule Change To Increase the Position and Exercise Limits for Options on the iShares MSCI Emerging... exercise limits for options on the iShares MSCI Emerging Markets Index Fund (``EEM'') to 500,000 contracts... 3120 to increase the position and exercise limits for EEM options to 500,000 contracts.\\3\\ There is...

  19. 77 FR 8936 - Self-Regulatory Organizations; the NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-15

    ... defined as ``the ratio of (A) the total number of liquidity-providing orders entered by a member through... trading sessions. (3) The ratio between shares of liquidity provided through the MPID and total shares..., or pre-market and/or post- market hours; and to maintain a high ratio of liquidity provision to order...

  20. Access Control for Home Data Sharing: Attitudes, Needs and Practices

    DTIC Science & Technology

    2009-10-01

    cameras, mobile phones and portable music players make creating and interacting with this content easy. Home users are increasingly interested in...messages, photos, home videos, journal files and home musical recordings. Many participants considered unauthorized access by strangers, acquaintances...configuration does not allow users to share different subsets of music with different people. Facebook supplies rich, customizable access controls for

  1. 78 FR 1894 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-09

    ... whole months [sic]). In the case of spin-offs, the operating history of the spin-off will be considered... component price per share, (a) the highest price per share of a component was $661.15 (Google, Inc.), (b... top five highest weighted components was 40.78% (Apple Inc., Microsoft Corporation, Google Inc...

  2. 78 FR 62903 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ....0027 per share; (vi) lower the ADV threshold required to meet the MidPoint Match Volume Tier; and (vii...) lower the ADV threshold required to meet the MidPoint Match Volume Tier; and (vii) decrease the rebate... shares in average daily volume (``ADV'') on a daily basis, measured monthly; and (2) add at least 1,000...

  3. 78 FR 60348 - Self-Regulatory Organizations; Miami International Securities Exchange LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-01

    ... applicant must: (i) Be a Member in good standing of MIAX; (ii) qualify as an ``accredited investor'' as such... each unit (i) 101,695 shares of MIH common stock and (ii) warrants to purchase 2,182,639 shares of common stock of MIH in exchange for such participant Member's initial cash capital contribution of $508...

  4. 76 FR 56824 - Self-Regulatory Organizations; C2 Options Exchange, Incorporated: Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-14

    ... Exchange is proposing to eliminate the PULSe non-standard services fee. All of these changes, which are.... Currently the fee is set at $0.05 per executed contract or share equivalent. The Exchange is proposing to reduce the fee to $0.02 per contract or share equivalent. The second purpose of this proposed rule change...

  5. 75 FR 9272 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-01

    ... Transaction Exceeds 99,999,999 Shares February 22, 2010. Pursuant to Section 19(b)(1) \\1\\ of the Securities... Tape when a closing transaction exceeds 99,999,999 shares. The text of the proposed rule change is... report multiple closing prints to the Consolidated Tape when a closing transaction exceeds 99,999,999...

  6. 76 FR 4401 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... Proposed Rule Change To Impose a Quarterly Maximum on the Listing of Additional Shares Fees Payable by... Terms of Substance of the Proposed Rule Change Nasdaq proposes to impose a quarterly maximum on the.... 5910. The NASDAQ Global Market (a) No change. (b) Additional Shares (1)-(5) No change. (6) The maximum...

  7. 76 FR 70520 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing of Proposed Rule Change to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-14

    .... Surveillance for opening price manipulation and other existing surveillance patterns are utilized to monitor... exchanges, covered securities were required to have a closing market price of at least $7.50 per share for... proposing the $3 per share closing market price requirement and the five-day ``look back'' period that is...

  8. 75 FR 8774 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-25

    ... limits were introduced as a means of forestalling the potential manipulation of an equity's price by... significantly reduced concerns of market manipulation or disruption in the underlying markets. Shares in these... values on a per-share basis, the option strike prices result in being equal to \\1/ 100\\th of the...

  9. 77 FR 14843 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-13

    ... share executed for BTFY orders that execute at the New York Stock Exchange (``NYSE''), but NYSE charges... book for available shares only if so instructed by the entering firm and are thereafter routed out to... un-executed after routing, they are posted to the BX book and do not thereafter route out. The BTFY...

  10. Launching large computing applications on a disk-less cluster

    NASA Astrophysics Data System (ADS)

    Schwemmer, Rainer; Caicedo Carvajal, Juan Manuel; Neufeld, Niko

    2011-12-01

    The LHCb Event Filter Farm is based on a cluster of on the order of 1,500 disk-less Linux nodes. Each node runs one instance of the filtering application per core; the number of cores in our current production environment is 8 per machine for the old cluster and 12 per machine on the extension of the cluster. Each instance has to load about 1,000 shared libraries, weighing 200 MB, from several directory locations in a central repository. The repository is currently hosted on a SAN and exported via NFS. The libraries are all available in the local file system cache on every node. Loading a library still causes a huge number of requests to the server, though, because the loader will probe every available path. Measurements show there are between 100,000 and 200,000 calls per application instance start-up. Multiplied by the number of cores in the farm, this translates into a veritable DDoS attack on the servers, which lasts several minutes. Since the application is restarted frequently, a better solution had to be found. Rolling out the software to the nodes is out of the question, because they have no disks and the software in its entirety is too large to put into a RAM disk. To solve this problem we developed a FUSE-based file system which acts as a permanent, controllable cache that keeps the essential files in stock.
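
    A read-through caching layer of this kind can be prototyped in a few dozen lines with FUSE bindings. The sketch below, using the Python fusepy package, copies each file from a shared repository into a local cache (for example a RAM disk) on first read and serves all subsequent reads from the copy, so the loader's repeated probing never reaches the NFS server twice for the same file. Paths are hypothetical and the actual LHCb implementation is certainly more elaborate.

      import os
      import errno
      import shutil
      from fuse import FUSE, Operations  # pip install fusepy

      class ReadThroughCache(Operations):
          """Serve files from a slow shared repository, caching them locally."""

          def __init__(self, repo_dir, cache_dir):
              self.repo = repo_dir
              self.cache = cache_dir

          def _cached(self, path):
              src = os.path.join(self.repo, path.lstrip("/"))
              dst = os.path.join(self.cache, path.lstrip("/"))
              if not os.path.exists(dst):
                  if not os.path.exists(src):
                      raise OSError(errno.ENOENT, path)
                  os.makedirs(os.path.dirname(dst), exist_ok=True)
                  shutil.copy2(src, dst)  # one bulk copy instead of many small reads
              return dst

          def getattr(self, path, fh=None):
              st = os.lstat(os.path.join(self.repo, path.lstrip("/")))
              keys = ("st_mode", "st_size", "st_uid", "st_gid",
                      "st_atime", "st_mtime", "st_ctime", "st_nlink")
              return {k: getattr(st, k) for k in keys}

          def readdir(self, path, fh):
              return [".", ".."] + os.listdir(os.path.join(self.repo, path.lstrip("/")))

          def read(self, path, size, offset, fh):
              with open(self._cached(path), "rb") as f:
                  f.seek(offset)
                  return f.read(size)

      if __name__ == "__main__":
          # Hypothetical mount: cache an NFS repository into a RAM disk.
          FUSE(ReadThroughCache("/nfs/repo", "/dev/shm/libcache"),
               "/mnt/libs", foreground=True, ro=True)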

  11. GRIIDC: A Data Repository for Gulf of Mexico Science

    NASA Astrophysics Data System (ADS)

    Ellis, S.; Gibeaut, J. C.

    2017-12-01

    The Gulf of Mexico Research Initiative Information & Data Cooperative (GRIIDC) system is a data management solution appropriate for any researcher sharing Gulf of Mexico and oil spill science data. Our mission is to ensure a data and information legacy that promotes continual scientific discovery and public awareness of the Gulf of Mexico ecosystem. GRIIDC developed an open-source software solution to manage data from the Gulf of Mexico Research Initiative (GoMRI). The GoMRI program has over 2500 researchers from diverse fields of study with a variety of attitudes, experiences, and capacities for data sharing. The success of this solution is apparent through new partnerships to share data generated by RESTORE Act Centers of Excellence Programs, the National Academies of Science, and others. The GRIIDC data management system integrates dataset management planning, metadata creation, persistent identification, and data discoverability into an easy-to-use web application. No specialized software or program installations are required to support dataset submission or discovery. Furthermore, no data transformations are needed to submit data to GRIIDC; common file formats such as Excel, CSV, and text are all acceptable for submission. To ensure data are properly documented using the GRIIDC implementation of the ISO 19115-2 metadata standard, researchers submit detailed descriptive information through a series of interactive forms, and no knowledge of metadata or XML formats is required. Once a dataset is documented and submitted, the GRIIDC team reviews the dataset package. This review ensures that files can be opened and contain data, and that data are completely and accurately described. This review does not include performing quality assurance or control of data points, as GRIIDC expects scientists to perform these steps during the course of their work. Once approved, data are made public and searchable through the GRIIDC data discovery portal and the DataONE network.

  12. Grid Data Access on Widely Distributed Worker Nodes Using Scalla and SRM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakl, Pavel; /Prague, Inst. Phys.; Lauret, Jerome

    2011-11-10

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of their analysis model and now rely heavily on cheap disks attached to processing nodes, as such a model is extremely beneficial compared to expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storage (lifetime of files, file pinning), storage policies, or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing 350 TB of Storage Elements, and our experience of making such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and our approach to making access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools and implementations.

  13. FAST User Guide

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user-defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST. Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system administration.

  14. Meningococcal Photos

    MedlinePlus


  15. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    1999-01-01

    I have recently developed dmfs, a Data Migration File System, for NetBSD. This file system is based on the overlay file system, which is discussed in a separate paper, and provides kernel support for the data migration system being developed by my research group here at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. Our data migration system provides archiving and file migration services. System utilities scan the dmfs file system for recently modified files and archive them to two separate tape stores. Once a file has been doubly archived, files larger than a specified size will be truncated to that size, potentially freeing up large amounts of the underlying file store. Some sites will choose to retain none of the file (deleting its contents entirely from the file system) while others may choose to retain a portion, for instance a preamble describing the remainder of the file. The dmfs layer coordinates access to the file, retaining user-perceived access and modification times and file size, and restricting access to partially migrated files to the portion actually resident. When a user process attempts to read from the non-resident portion of a file, it is blocked and the dmfs layer sends a request to a system daemon to restore the file. As more of the file becomes resident, the user process is permitted to begin accessing the now-resident portions of the file. For simplicity, our data migration system divides a file into two portions, a resident portion followed by an optional non-resident portion. Also, a file is in one of three states: fully resident, fully resident and archived, and (partially) non-resident and archived. For a file which is only partially resident, any attempt to write or truncate the file, or to read a non-resident portion, will trigger a file restoration. Truncations and writes are blocked until the file is fully restored, so that a restoration which only partially succeeds does not leave the file in an indeterminate state with portions existing only on tape and other portions only in the disk file system. We chose layered file system technology as it permits us to focus on the data migration functionality, and permits end system administrators to choose the underlying file store technology. We chose the overlay layered file system instead of the null layer for two reasons: first, to permit our layer to better preserve metadata integrity, and second, to prevent even root processes from accessing migrated files. This is achieved as the underlying file store becomes inaccessible once the dmfs layer is mounted. We are quite pleased with how the layered file system has turned out. Of the 45 vnode operations in NetBSD, 20 (forty-four percent) required no intervention by our file layer - they are passed directly to the underlying file store. Of the twenty-five we do intercept, nine (such as vop_create()) are intercepted only to ensure metadata integrity. Most of the functionality was concentrated in five operations: vop_read, vop_write, vop_getattr, vop_setattr, and vop_fcntl. The first four are the core operations for controlling access to migrated files and preserving the user experience. vop_fcntl, a call generated for a certain class of fcntl codes, provides the command channel used by privileged user programs to communicate with the dmfs layer.
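
    The residency check that gates reads and writes can be modeled compactly. The Python sketch below mimics the behavior described above, blocking a read that touches the non-resident tail and asking a (stubbed) daemon to restore the file first; it is a toy model of the logic, not the NetBSD kernel code.

      from dataclasses import dataclass

      @dataclass
      class MigratedFile:
          """Toy model of a dmfs-managed file: a resident prefix plus archived tail."""
          data: bytes     # full contents once restored
          resident: int   # bytes currently on disk (prefix length)
          archived: bool  # True once both tape copies exist

          def restore(self):
              # Stand-in for the daemon that pulls the tail back from tape.
              print("restoring from tape ...")
              self.resident = len(self.data)

          def read(self, offset, length):
              # Reads confined to the resident prefix proceed immediately;
              # anything touching the non-resident tail blocks on a restore.
              if offset + length > self.resident:
                  if not self.archived:
                      raise IOError("non-resident but not archived: inconsistent")
                  self.restore()
              return self.data[offset:offset + length]

          def write(self, offset, payload):
              # Writes always force a full restore first, so a partial restore
              # can never leave the file split between disk and tape.
              if self.resident < len(self.data):
                  self.restore()
              d = bytearray(self.data)
              d[offset:offset + len(payload)] = payload
              self.data = bytes(d)

      f = MigratedFile(data=b"preamble|archived-body", resident=9, archived=True)
      print(f.read(0, 8))   # within the resident preamble: no restore
      print(f.read(9, 13))  # touches the tail: triggers a restore first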

  16. Online network of subspecialty aortic disease experts: Impact of "cloud" technology on management of acute aortic emergencies.

    PubMed

    Schoenhagen, Paul; Roselli, Eric E; Harris, C Martin; Eagleton, Matthew; Menon, Venu

    2016-07-01

    For the management of acute aortic syndromes, regional treatment networks have been established to coordinate diagnosis and treatment between local emergency rooms and central specialized centers. Triage of acute aortic syndromes requires definitive imaging, resulting in complex data files. Modern information technology network structures, specifically "cloud" technology, coupled with mobile communication, increasingly support sharing of these data among a network of experts using mobile, online access and communication. Although this network is technically complex, the potential benefit of online sharing of data files between professionals at multiple locations within a treatment network appears obvious; however, clinical experience is limited, and further evaluation is needed. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  17. [Design of visualized medical images network and web platform based on MeVisLab].

    PubMed

    Xiang, Jun; Ye, Qing; Yuan, Xun

    2017-04-01

    With the development of the "Internet +" trend, further requirements for the mobility of medical images have arisen in the medical field. In view of this demand, this paper presents a web-based visual medical imaging platform. First, the feasibility and key technical points of the platform are analyzed. CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images are reconstructed three-dimensionally with MeVisLab and packaged as X3D (Extensible 3D Graphics) files, as shown in the paper. Then, a B/S (Browser/Server) system specially designed for 3D images is built using HTML5 and a WebGL rendering engine library, and the X3D image files are parsed and rendered by this system. The results of this study show that the platform is suitable for multiple operating systems, realizing cross-platform use and mobilization of medical image data. The paper also discusses the further development of medical imaging platforms, noting that web application technology will not only promote the sharing of medical image data, but also facilitate image-based remote medical consultations and distance learning.

  18. Interactive real-time media streaming with reliable communication

    NASA Astrophysics Data System (ADS)

    Pan, Xunyu; Free, Kevin M.

    2014-02-01

    Streaming media is a recent technique for delivering multimedia information from a source provider to an end-user over the Internet. The major advantage of this technique is that the media player can start playing a multimedia file even before the entire file is transmitted. Most streaming media applications are currently implemented on the client-server architecture, where a server system hosts the media file and a client system connects to this server to download it. Although the client-server architecture is successful in many situations, it may not be ideal to rely on such a system to provide the streaming service, as users may be required to register an account using personal information in order to use the service. This is troublesome if a user wishes to watch a movie while simultaneously interacting with a friend in another part of the world over the Internet. In this paper, we describe a new real-time media streaming application implemented on a peer-to-peer (P2P) architecture in order to overcome these challenges within a mobile environment. When using the peer-to-peer architecture, streaming media is shared directly between end-users, called peers, with minimal or no reliance on a dedicated server. Based on the proposed software pɛvμa (pronounced [revma]), named for the Greek word meaning stream, we can host a media file on any computer and directly stream it to a connected partner. To accomplish this, pɛvμa utilizes the Microsoft .NET Framework and Windows Presentation Foundation, which are widely available on various types of Windows-compatible personal computers and mobile devices. With specially designed multi-threaded algorithms, the application can stream HD video at speeds upwards of 20 Mbps using the User Datagram Protocol (UDP). Streaming and playback are handled using synchronized threads that communicate with one another once a connection is established. Alterations of playback, such as pausing or seeking to a different spot in the media file, are reflected in all media streams. These techniques are designed to allow users at different locations to simultaneously view a full-length HD video and interactively control the media streaming session. To create a sustainable media stream with high quality, our system supports UDP packet loss recovery at high transmission speed using custom File-Buffers. Traditional real-time streaming protocols such as the Real-time Transport Protocol/RTP Control Protocol (RTP/RTCP) provide no such error recovery mechanism. Finally, the system also features an instant messenger that allows users to interact socially with one another while they enjoy a media file. The ultimate goal of the application is to offer users a hassle-free way to watch a media file over long distances without having to upload any personal information into a third-party database. Moreover, users can communicate with each other and stream media directly from one mobile device to another while maintaining independence from the traditional sign-up required by most streaming services.
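
    Loss recovery over UDP of the sort the File-Buffers provide is conventionally built from sequence-numbered datagrams plus retransmission requests. The short Python sketch below shows that skeleton (sender side: chunking with sequence headers; receiver side: detecting gaps to request again); it illustrates the general technique, not pɛvμa's .NET implementation.

      import socket
      import struct

      HEADER = struct.Struct("!I")   # 4-byte big-endian sequence number
      CHUNK = 1200                   # payload bytes per datagram (below typical MTU)

      def send_file(path, addr):
          """Send a file as sequence-numbered UDP datagrams."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          with open(path, "rb") as f:
              seq = 0
              while chunk := f.read(CHUNK):
                  sock.sendto(HEADER.pack(seq) + chunk, addr)
                  seq += 1

      def receive_one(sock, buffers):
          """Store one datagram in a buffer keyed by sequence number and return
          the set of missing sequence numbers seen so far; a real receiver would
          send that set back to the sender as a retransmission (NACK) request."""
          data, _ = sock.recvfrom(HEADER.size + CHUNK)
          (seq,) = HEADER.unpack_from(data)
          buffers[seq] = data[HEADER.size:]
          return {s for s in range(max(buffers) + 1) if s not in buffers}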

  19. LabKey Server NAb: A tool for analyzing, visualizing and sharing results from neutralizing antibody assays

    PubMed Central

    2011-01-01

    Background Multiple types of assays allow sensitive detection of virus-specific neutralizing antibodies. For example, the extent of antibody neutralization of HIV-1, SIV and SHIV can be measured in the TZM-bl cell line through the degree of luciferase reporter gene expression after infection. In the past, neutralization curves and titers for this standard assay have been calculated using an Excel macro. Updating all instances of such a macro with new techniques can be unwieldy and introduce non-uniformity across multi-lab teams. Using Excel also poses challenges in centrally storing, sharing and associating raw data files and results. Results We present LabKey Server's NAb tool for organizing, analyzing and securely sharing data, files and results for neutralizing antibody (NAb) assays, including the luciferase-based TZM-bl NAb assay. The customizable tool supports high-throughput experiments and includes a graphical plate template designer, allowing researchers to quickly adapt calculations to new plate layouts. The tool calculates the percent neutralization for each serum dilution based on luminescence measurements, fits a range of neutralization curves to titration results and uses these curves to estimate the neutralizing antibody titers for benchmark dilutions. Results, curve visualizations and raw data files are stored in a database and shared through a secure, web-based interface. NAb results can be integrated with other data sources based on sample identifiers. It is simple to make results public after publication by updating folder security settings. Conclusions Standardized tools for analyzing, archiving and sharing assay results can improve the reproducibility, comparability and reliability of results obtained across many labs. LabKey Server and its NAb tool are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. Many members of the HIV research community can also access the LabKey Server NAb tool without installing the software by using the Atlas Science Portal (https://atlas.scharp.org). Atlas is an installation of LabKey Server. PMID:21619655
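
    The core calculation the tool automates, percent neutralization per dilution followed by a curve fit and a titer estimate, looks roughly like the Python sketch below. The four-parameter logistic form and the 50% benchmark are common choices for such assays and are assumptions here, not a description of LabKey's exact code; the example data are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      def percent_neutralization(sample_lum, virus_ctrl, cell_ctrl):
          """Percent neutralization from luminescence, relative to controls:
          virus-only wells define 0%, uninfected cell wells define 100%."""
          return 100.0 * (virus_ctrl - sample_lum) / (virus_ctrl - cell_ctrl)

      def logistic4(x, bottom, top, midpoint, hill):
          """Four-parameter logistic dose-response curve."""
          return bottom + (top - bottom) / (1.0 + (x / midpoint) ** hill)

      def fit_titer(dilutions, neut, benchmark=50.0):
          """Fit the curve; return the dilution giving `benchmark`% neutralization."""
          popt, _ = curve_fit(logistic4, dilutions, neut,
                              p0=[0.0, 100.0, np.median(dilutions), 1.0],
                              maxfev=10000)
          bottom, top, midpoint, hill = popt
          # Invert the fitted logistic at the benchmark level.
          return midpoint * ((top - bottom) / (benchmark - bottom) - 1.0) ** (1.0 / hill)

      dilutions = np.array([20, 60, 180, 540, 1620, 4860], dtype=float)  # invented
      neut = np.array([95.0, 88.0, 70.0, 45.0, 20.0, 8.0])               # invented
      print(f"ID50 titer ~ 1:{fit_titer(dilutions, neut):.0f}")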

  20. Accessing files in an Internet: The Jade file system

    NASA Technical Reports Server (NTRS)

    Peterson, Larry L.; Rao, Herman C.

    1991-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file systems may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
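
    Jade's private name spaces, in which one logical space can mount other logical spaces as well as underlying file systems, amount to a longest-prefix lookup over a per-user mount table. The Python sketch below models that resolution step; the mount names and backend tags are assumptions for illustration, not Jade's implementation.

      class Namespace:
          """A private, logical name space: mount points map path prefixes to
          either an underlying file system tag or another Namespace."""

          def __init__(self, name):
              self.name = name
              self.mounts = {}  # prefix -> backend tag (str) or Namespace

          def mount(self, prefix, target):
              self.mounts[prefix] = target

          def resolve(self, path):
              """Longest-prefix match; recurse when the target is a namespace."""
              for prefix in sorted(self.mounts, key=len, reverse=True):
                  if path == prefix or path.startswith(prefix.rstrip("/") + "/"):
                      target = self.mounts[prefix]
                      rest = path[len(prefix.rstrip("/")):] or "/"
                      if isinstance(target, Namespace):
                          return target.resolve(rest)
                      return target, rest  # (file access protocol, remaining path)
              raise LookupError(f"{path} not mounted in {self.name}")

      shared = Namespace("group")     # a logical space mounted by others
      shared.mount("/papers", "AFS")  # hypothetical backends
      user = Namespace("alice")
      user.mount("/home", "UFS")
      user.mount("/remote", "NFS")
      user.mount("/group", shared)    # one logical name space mounts another
      print(user.resolve("/group/papers/jade.ps"))  # -> ('AFS', '/jade.ps')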

  1. Accessing files in an internet - The Jade file system

    NASA Technical Reports Server (NTRS)

    Rao, Herman C.; Peterson, Larry L.

    1993-01-01

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file systems may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.

  2. Distributed structure-searchable toxicity (DSSTox) public database network: a proposal.

    PubMed

    Richard, Ann M; Williams, ClarLynda R

    2002-01-29

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, Structure-Activity Relationship (SAR) model development, or building of chemical relational databases (CRD). The distributed structure-searchable toxicity (DSSTox) public database network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: (1) to adopt and encourage the use of a common standard file format (structure data file (SDF)) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; (2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data sources with potential users of these data from other disciplines (such as chemistry, modeling, and computer science); and (3) to engage public/commercial/academic/industry groups in contributing to and expanding this community-wide, public data sharing and distribution effort. The DSSTox project's overall aims are to effect the closer association of chemical structure information with existing toxicity data, and to promote and facilitate structure-based exploration of these data within a common chemistry-based framework that spans toxicological disciplines.
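
    An SDF record carries a connection table plus named property fields, which is what makes the format suitable for building chemical relational databases. As a small illustration, the Python sketch below reads structures and their properties from an SDF file with the open-source RDKit toolkit; the file name and property tags are hypothetical, not DSSTox's actual field names.

      from rdkit import Chem

      def load_records(sdf_path):
          """Yield (SMILES, properties) for each molecule in an SDF file."""
          for mol in Chem.SDMolSupplier(sdf_path):
              if mol is None:      # unparsable record: skip but keep going
                  continue
              yield Chem.MolToSmiles(mol), mol.GetPropsAsDict()

      for smiles, props in load_records("dsstox_sample.sdf"):  # hypothetical file
          # Property tags such as 'CASRN' or 'Carcinogenicity' are illustrative.
          print(smiles, props.get("CASRN"), props.get("Carcinogenicity"))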

  3. Meningococcal Disease: Prevention

    MedlinePlus


  4. Non-Infectious Meningitis

    MedlinePlus


  5. MIMS for TRIM

    EPA Pesticide Factsheets

    MIMS supports complex computational studies that use multiple interrelated models / programs, such as the modules within TRIM. MIMS is used by TRIM to run various models in sequence, while sharing input and output files.

  6. 76 FR 34112 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change To List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-10

    ... Shares of the WisdomTree Dreyfus Euro Debt Fund Under NYSE Arca Equities Rule 8.600 June 6, 2011... following fund of the WisdomTree Trust (the ``Trust'') under NYSE Arca Equities Rule 8.600 (``Managed Fund Shares''): WisdomTree Dreyfus Euro Debt Fund. The text of the proposed rule change is available at the...

  7. 78 FR 11932 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-20

    ... Change Relating to the WisdomTree Euro Debt Fund February 12, 2013. Pursuant to Section 19(b)(1) \\1\\ of... investment objective applicable to the WisdomTree Euro Debt Fund (the ``Fund''). The text of the proposed... trading of Managed Fund Shares on the Exchange.\\4\\ The Shares are offered by the WisdomTree Trust (``Trust...

  8. 75 FR 8164 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-23

    ...Tree Real Return Fund February 16, 2010. Pursuant to Section 19(b)(1) of the Securities Exchange Act of... proposes to list and trade the shares of the following fund of the WisdomTree Trust (the ``Trust'') under NYSE Arca Equities Rule 8.600: WisdomTree Real Return Fund (the ``Fund''). The shares of the Fund are...

  9. 76 FR 27127 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change To List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-10

    ... WisdomTree Global Real Return Fund May 5, 2011. Pursuant to Section 19(b)(1) of the Securities Exchange... Exchange proposes to list and trade the shares (``Shares'') of the following series of the WisdomTree Trust (``Trust'') under NYSE Arca Equities Rule 8.600: WisdomTree Global Real Return Fund (``Fund''). The text of...

  10. 76 FR 1656 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-11

    ... Exchange-listed securities priced at $1.00 or more. For such transactions in which the SLP also meets the 5... requirement''), the credit per share for the SLP will increase from the current rate of $0.0020 to $0.0027. For such transactions in which the SLP does not meet the 5% quoting requirement, the credit per share...

  11. 78 FR 16023 - Self-Regulatory Organizations; EDGA Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-13

    ..., qualifies for BX's volume tiered rebate of $0.0010 per share by adding an average of 25,000 shares but less...\\ The Exchange notes that to the extent DE Route does or does not achieve any volume tiered rebate on BX... ``Single MPID Step-up Add Tier'' by posting more than .10% of the Total Consolidated Volume (``TCV''), on a...

  12. 78 FR 21452 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-10

    .... First, the Exchange proposes to eliminate the first and third qualification requirements for Tier 4... Pilot Issues, plus executed ADV of Retail Orders of 0.3% of U.S. Equity Market Share Posted and Executed... and Non-Penny Pilot Issues Plus executed ADV of Retail Orders of 0.3% ADV of U.S. Equity Market Share...

  13. 77 FR 39767 - Self-Regulatory Organizations; National Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ... has executed on average per trading day (excluding partial trading days) in AutoEx or Order Delivery... (``AutoEx'') shall mean only those executed shares of the ETP Holder that are submitted in AutoEx mode... period, a combined ADV in both AutoEx and Order Delivery of at least 11.5 million shares, of which at...

  14. 75 FR 30078 - Self-Regulatory Organizations; Notice of Filing of Proposed Rule Change by NASDAQ OMX PHLX, Inc...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... example: Sanofi Aventis (SNY), which has recently traded at a low of about $33 per share, is not currently in the $1 Strike Program. This means that options on Sanofi Aventis are offered at strike price intervals of $2.50. If an investor desired to protect 100 shares of Sanofi Aventis in the event of a 10...

  15. 77 FR 28411 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-14

    ... liquidity from the Book to pay a reduced fee of $0.0029 per share if they directly execute providing volume... liquidity from the Book to pay a reduced fee of $0.0029 per share if they directly execute providing volume... B Step Up Tier allows ETP Holders and Market Makers that take liquidity from the Book to pay a...

  16. 77 FR 29419 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-17

    ...,000.\\3\\ ETP issuers also pay a graduated Annual Fee based on the number of shares of the ETP that are..., and the ETP therefore trades without an LMM assigned to it. The Exchange operates under the price-time... may be higher for certain ETPs with low volume and low shares outstanding because there are fewer...

  17. 77 FR 15819 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-16

    ... from the Book where the per share price is below $1.00, (ii) add three new Step Up Tiers and a new... market centers other than the New York Stock Exchange (``NYSE''). Below $1.00 Per Share Price Currently... removing liquidity from the Book, the Tape B Securities fee for orders routed outside the Book to any away...

  18. Integration of digital gross pathology images for enterprise-wide access.

    PubMed

    Amin, Milon; Sharma, Gaurav; Parwani, Anil V; Anderson, Ralph; Kolowitz, Brian J; Piccoli, Anthony; Shrestha, Rasu B; Lauro, Gonzalo Romero; Pantanowitz, Liron

    2012-01-01

    Sharing digital pathology images for enterprise-wide use into a picture archiving and communication system (PACS) is not yet widely adopted. We share our solution and 3-year experience of transmitting such images to an enterprise image server (EIS). Gross pathology images acquired by prosectors were integrated with clinical cases into the laboratory information system's image management module, and stored in JPEG2000 format on a networked image server. Automated daily searches for cases with gross images were used to compile an ASCII text file that was forwarded to a separate institutional Enterprise Digital Imaging and Communications in Medicine (DICOM) Wrapper (EDW) server. Concurrently, an HL7-based image order for these cases was generated, containing the locations of images and patient data, and forwarded to the EDW, which combined data in these locations to generate images with patient data, as required by DICOM standards. The image and data were then "wrapped" according to DICOM standards, transferred to the PACS servers, and made accessible on an institution-wide basis. In total, 26,966 gross images from 9,733 cases were transmitted over the 3-year period from the laboratory information system to the EIS. The average process time for cases with successful automatic uploads (n=9,688) to the EIS was 98 seconds. Only 45 cases (0.5%) failed, requiring manual intervention. Uploaded images were immediately available to institution-wide PACS users. Since inception, user feedback has been positive. Enterprise-wide PACS-based sharing of pathology images is feasible, provides useful services to clinical staff, and utilizes existing information system and telecommunications infrastructure. PACS-shared pathology images, however, require a "DICOM wrapper" for multisystem compatibility.

  19. Integration of digital gross pathology images for enterprise-wide access

    PubMed Central

    Amin, Milon; Sharma, Gaurav; Parwani, Anil V.; Anderson, Ralph; Kolowitz, Brian J; Piccoli, Anthony; Shrestha, Rasu B.; Lauro, Gonzalo Romero; Pantanowitz, Liron

    2012-01-01

    Background: Sharing digital pathology images for enterprise-wide use into a picture archiving and communication system (PACS) is not yet widely adopted. We share our solution and 3-year experience of transmitting such images to an enterprise image server (EIS). Methods: Gross pathology images acquired by prosectors were integrated with clinical cases into the laboratory information system's image management module, and stored in JPEG2000 format on a networked image server. Automated daily searches for cases with gross images were used to compile an ASCII text file that was forwarded to a separate institutional Enterprise Digital Imaging and Communications in Medicine (DICOM) Wrapper (EDW) server. Concurrently, an HL7-based image order for these cases was generated, containing the locations of images and patient data, and forwarded to the EDW, which combined data in these locations to generate images with patient data, as required by DICOM standards. The image and data were then “wrapped” according to DICOM standards, transferred to the PACS servers, and made accessible on an institution-wide basis. Results: In total, 26,966 gross images from 9,733 cases were transmitted over the 3-year period from the laboratory information system to the EIS. The average process time for cases with successful automatic uploads (n=9,688) to the EIS was 98 seconds. Only 45 cases (0.5%) failed, requiring manual intervention. Uploaded images were immediately available to institution-wide PACS users. Since inception, user feedback has been positive. Conclusions: Enterprise-wide PACS-based sharing of pathology images is feasible, provides useful services to clinical staff, and utilizes existing information system and telecommunications infrastructure. PACS-shared pathology images, however, require a “DICOM wrapper” for multisystem compatibility. PMID:22530178
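
    The DICOM-wrapping step described in both records, combining a stored image with patient data from an HL7 order into a standards-compliant object, can be pictured with a brief sketch. This is a generic illustration using the pydicom library with made-up patient values, not the EDW's actual code; in the production system the pixel data came from JPEG2000 files.

```python
# Sketch of a "DICOM wrapper": attach patient/order data to pixel data and
# save a standards-compliant object. Uses pydicom; all values are made up.
import numpy as np
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = FileDataset("gross.dcm", {}, file_meta=meta, preamble=b"\x00" * 128)
ds.SOPClassUID = meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.PatientName = "DOE^JANE"              # would come from the HL7 order
ds.PatientID = "S12-3456"                # hypothetical identifier
ds.Modality = "OT"
ds.StudyInstanceUID = generate_uid()
ds.SeriesInstanceUID = generate_uid()

pixels = np.zeros((480, 640), dtype=np.uint8)   # stand-in for decoded image
ds.Rows, ds.Columns = pixels.shape
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = "MONOCHROME2"
ds.BitsAllocated = ds.BitsStored = 8
ds.HighBit = 7
ds.PixelRepresentation = 0
ds.PixelData = pixels.tobytes()

ds.is_little_endian = True               # match the transfer syntax above
ds.is_implicit_VR = False
ds.save_as("gross.dcm", write_like_original=False)
```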

  20. 77 FR 13959 - National Consumer Protection Week, 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-08

    ....NCPW.gov . With the leadership of the Consumer Financial Protection Bureau (CFPB) and Director Richard... the CFPB. To share your own experience with consumer financial products, file a complaint, or find...

  1. Secure Federal File Sharing Act

    THOMAS, 111th Congress

    Sen. McCaskill, Claire [D-MO]

    2010-06-14

    Senate - 06/14/2010 Read twice and referred to the Committee on Homeland Security and Governmental Affairs. This bill has the status Introduced.

  2. Highway Safety Information System guidebook for the Minnesota state data files. Volume 1 : SAS file formats

    DOT National Transportation Integrated Search

    2001-02-01

    The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...

  3. An alternative model to distribute VO software to WLCG sites based on CernVM-FS: a prototype at PIC Tier1

    NASA Astrophysics Data System (ADS)

    Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.

    2011-12-01

    In a distributed computing model such as WLCG, experiment-specific application software has to be efficiently distributed to every site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) of the site through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution based on the CERN Virtual Machine File System (CernVM-FS) has been tried at PIC, the Spanish Tier1 site of WLCG. The test bed used and the results are presented in this paper.
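
    A minimal sketch of the client-side difference: with CernVM-FS the worker node only needs a read-only mount under /cvmfs, so a job can simply verify that its software release is visible instead of relying on a privileged install job. The repository and release names below are hypothetical.

```python
# Check that an experiment software release is visible on a worker node via
# the read-only CernVM-FS mount (repository/release names are hypothetical).
import os
import sys

REPO = "/cvmfs/experiment.example.org"        # mounted read-only on every WN
RELEASE = os.path.join(REPO, "releases", "v1.2.3", "setup.sh")

def release_available(path: str) -> bool:
    # CernVM-FS fetches and caches files on first access, so a simple
    # existence check is enough to confirm the release can be used.
    return os.path.isfile(path)

if not release_available(RELEASE):
    sys.exit(f"release not visible under {REPO}; check the cvmfs mount")
print("software release found; proceeding with job payload")
```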

  4. Long-Term file activity patterns in a UNIX workstation environment

    NASA Technical Reports Server (NTRS)

    Gibson, Timothy J.; Miller, Ethan L.

    1998-01-01

    As mass storage technology becomes more affordable for sites smaller than supercomputer centers, understanding their file access patterns becomes crucial for developing systems to store rarely used data on tertiary storage devices such as tapes and optical disks. This paper presents a new way to collect and analyze file system statistics for UNIX-based file systems. The collection system runs in user-space and requires no modification of the operating system kernel. The statistics package provides details about file system operations at the file level: creations, deletions, modifications, etc. The paper analyzes four months of file system activity on a university file system. The results confirm previously published results gathered from supercomputer file systems, but differ in several important areas. Files in this study were considerably smaller than those at supercomputer centers, and they were accessed less frequently. Additionally, the long-term creation rate on workstation file systems is sufficiently low so that all data more than a day old could be cheaply saved on a mass storage device, allowing the integration of time travel into every file system.
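
    As a flavor of the kind of user-space collection the paper describes, file-level statistics gathered without kernel changes, here is a small sketch using only portable system calls; the summary metrics are illustrative and are not the authors' package.

```python
# User-space survey of file ages and sizes in a tree, in the spirit of the
# paper's collection system (no kernel modification required).
import os
import time

def survey(root):
    now = time.time()
    sizes, ages_days = [], []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue                      # file vanished or unreadable
            sizes.append(st.st_size)
            ages_days.append((now - st.st_mtime) / 86400.0)
    return sizes, ages_days

sizes, ages = survey("/home")
old = sum(1 for a in ages if a > 1.0)
print(f"{len(sizes)} files, mean size {sum(sizes)/max(len(sizes),1):.0f} B")
print(f"{old} files ({100.0*old/max(len(ages),1):.1f}%) untouched for >1 day "
      "(candidates for tertiary storage)")
```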

  5. Share, steal, or buy? A social cognitive perspective of music downloading.

    PubMed

    LaRose, Robert; Kim, Junghyun

    2007-04-01

    The music downloading phenomenon presents a unique opportunity to examine normative influences on media consumption behavior. Downloaders face moral, legal, and ethical quandaries that can be conceptualized as normative influences within the self-regulatory mechanism of social cognitive theory. The music industry hopes to eliminate illegal file sharing and to divert illegal downloaders to pay services by asserting normative influence through selective prosecutions and public information campaigns. However, the deficient self-regulation of downloaders counters these efforts, maintaining file sharing as a persistent habit that defies attempts to establish normative control. The present research tests and extends the social cognitive theory of downloading on a sample of college students. The expected outcomes of downloading behavior and deficient self-regulation of that behavior were found to be important determinants of intentions to continue downloading. Consistent with social cognitive theory but in contrast to the theory of planned behavior, it was found that descriptive and prescriptive norms influenced deficient self-regulation but had no direct impact on behavioral intentions. Downloading intentions also had no direct relationship to either compact disc purchases or to subscription to online pay music services.

  6. 75 FR 27848 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... liquidity to the NYSE in securities with a per share price of $1.00 or more, and the SLP (i) meets the 3.... \\6\\ The Exchange currently has a three tier structure of rebates paid only to SLPs when the SLP... a per share price of $1.00 or more, and the SLP (i) meets the Quoting Requirement and (ii) adds...

  7. 76 FR 45885 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change To List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... Registration Statements, the Funds face the risk of non-performance by the counterparties to over-the- counter... the Fund in a fashion such that its per Share NAV will equal, in dollar terms, the spot price of a... intend to operate the Fund in a fashion such that its per Share NAV will equal, in dollar terms, the spot...

  8. 78 FR 9094 - Self-Regulatory Organizations; National Stock Exchange, Inc.; Notice of Filing of a Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-07

    ... is available on the Exchange's Web site at http://www.nsx.com , at the principal office of the...: Midpoint Peg x 500 (Auto-Ex mode/Dark) 134.50 x 400 (Order Delivery mode) 134.50 x 200 (Auto-Ex mode... shares priced at 134.50 would execute against the Midpoint Peg Dark Auto-Ex order of 500 shares at 134...

  9. Haemophilus influenzae Disease (Including Hib) Symptoms

    MedlinePlus


  10. 78 FR 13715 - Self-Regulatory Organizations; The Options Clearing Corporation; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-28

    ... Change To Provide Clarifying Language To Conform Interpretive Guidance Concerning Options Overlying Fund... providing clarifying language to conform interpretive guidance concerning options on fund shares with...

  11. Radiology education 2.0--on the cusp of change: part 2. eBooks; file sharing and synchronization tools; websites/teaching files; reference management tools and note taking applications.

    PubMed

    Bhargava, Puneet; Dhand, Sabeen; Lackey, Amanda E; Pandey, Tarun; Moshiri, Mariam; Jambhekar, Kedar

    2013-03-01

    Increasing use of smartphones and handheld computers has been accompanied by rapid growth in other related industries. Electronic books have revolutionized the centuries-old conventional book and magazine markets and have simplified publishing by reducing the cost and processing time required to create and distribute any given book. We are now able to read, review, store, and share various types of documents via several electronic tools, many of which are available free of charge. Additionally, this electronic revolution has resulted in an explosion of readily available Internet-based educational resources for residents and has paved the path for educators to reach out to a larger and more diverse student population. Published by Elsevier Inc.

  12. PSTOOLS - FOUR PROGRAMS THAT INTERPRET/FORMAT POSTSCRIPT FILES

    NASA Technical Reports Server (NTRS)

    Choi, D.

    1994-01-01

    PSTOOLS is a package of four programs that operate on files written in the page description language, PostScript. The programs include a PostScript previewer for the IRIS workstation, a PostScript driver for the Matrix QCRZ film recorder, a PostScript driver for the Tektronix 4693D printer, and a PostScript code beautifier that formats PostScript files to be more legible. The three programs PSIRIS, PSMATRIX, and PSTEK are similar in that they all interpret the PostScript language and output the graphical results to a device, and they support color PostScript images. The common code which is shared by these three programs is included as a library of routines. PSPRETTY formats a PostScript file by appropriately indenting procedures and code delimited by "saves" and "restores." PSTOOLS does not use Adobe fonts. PSTOOLS is written in C-language for implementation on SGI IRIS 4D series workstations running IRIX 3.2 or later. A README file and UNIX man pages provide information regarding the installation and use of the PSTOOLS programs. A six-page manual which provides slightly more detailed information may be purchased separately. The standard distribution medium for this package is one .25 inch streaming magnetic tape cartridge in UNIX tar format. PSIRIS (the largest program) requires 1.2Mb of main memory. PSMATRIX requires the "gpib" board (IEEE 488) available from Silicon Graphics. Inc. The programs with graphical interfaces require that the IRIS have at least 24 bit planes. This package was developed in 1990 and updated in 1991. SGI, IRIS 4D, and IRIX are trademarks of Silicon Graphics, Inc. Matrix QCRZ is a registered trademark of the AGFA Group. Tektronix 4693D is a trademark of Tektronix, Inc. Adobe is a trademark of Adobe Systems Incorporated. PostScript is a registered trademark of Adobe Systems Incorporated. UNIX is a registered trademark of AT&T Bell Laboratories.

  13. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Enhancement of water permeation across nanochannels by partial charges mimicked from biological channels

    NASA Astrophysics Data System (ADS)

    Gong, Xiao-Jing; Fang, Hai-Ping

    2008-07-01

    In biological water channel aquaporins (AQPs), it is believed that the bipolar orientation of the single-file water molecules inside the channel blocks proton permeation but not water transport. In this paper, the water permeation, and particularly the water-selective behaviour, across a single-walled carbon nanotube (SWNT) with two partial charges adjacent to the wall of the SWNT is studied by molecular dynamics simulations, in which the distance between the two partial charges is varied from 0.14 nm to 0.5 nm and the charges each have a quantity of 0.5 e. The two partial charges are used to mimic the charge distribution of the conserved NPA (asparagine-proline-alanine) regions in AQPs. Compared with a nanochannel in a system with one +1 e charge, water permeation across the nanochannel is greatly enhanced in a system with two +0.5 e charges when the charges are close to the nanotube; i.e., the two partial charges permit more rapid water diffusion and maintain better bipolar order along the water file when the distance between the two charges and the wall of the SWNT is smaller than about 0.05 nm. The bipolar orientation of the single-file water molecules is crucial for the exclusion of proton transfer. These findings may serve as guidelines for future nanodevices that use charges to transport water, and they have biological implications because membrane water channels share a similar single-file water chain and positively charged region at the centre, providing an insight into why two residues are required in the central region of water channel proteins.

  14. IRiS: construction of ARG networks at genomic scales.

    PubMed

    Javed, Asif; Pybus, Marc; Melé, Marta; Utro, Filippo; Bertranpetit, Jaume; Calafell, Francesc; Parida, Laxmi

    2011-09-01

    Given a set of extant haplotypes, IRiS first detects high-confidence recombination events in their shared genealogy. Next, using the local sequence topology defined by each detected event, it integrates these recombinations into an ancestral recombination graph. While the current system has been calibrated for human population data, it is easily extendible to other species as well. IRiS (Identification of Recombinations in Sequences) binary files are available for non-commercial use in both Linux and Microsoft Windows, 32- and 64-bit environments, from https://researcher.ibm.com/researcher/view_project.php?id=2303. Contact: parida@us.ibm.com.

  15. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    NASA Astrophysics Data System (ADS)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin vs hash partitioning vs range partitioning methods. Each has different characteristics in terms of spatial locality of data and the resultant degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of different approaches to deal effectively with such hotspots and alternative strategies for mitigating them.
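
    The partitioning trade-off in the last part of this abstract is easy to make concrete. The sketch below assigns dataset tiles to storage nodes under round-robin, hash, and range policies; it is a schematic illustration of the strategies being compared, not code from GEON or OpenTopography.

```python
# Three declustering policies for assigning tiles of a large dataset to nodes.
import hashlib

NODES = 4

def round_robin(tile_index: int) -> int:
    # Even spread, but destroys spatial locality entirely.
    return tile_index % NODES

def hash_partition(tile_key: str) -> int:
    # Even spread even for skewed keys; neighboring tiles land on unrelated nodes.
    digest = hashlib.md5(tile_key.encode()).hexdigest()
    return int(digest, 16) % NODES

def range_partition(x: float, x_min: float, x_max: float) -> int:
    # Preserves spatial locality, so a frequently accessed region becomes a
    # "hotspot" concentrated on a single node.
    span = (x_max - x_min) / NODES
    return min(int((x - x_min) / span), NODES - 1)

for i, key, x in [(0, "tile_0_0", 0.1), (1, "tile_0_1", 0.2), (9, "tile_3_1", 0.95)]:
    print(i, round_robin(i), hash_partition(key), range_partition(x, 0.0, 1.0))
```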

  16. Development of the Large-Scale Statistical Analysis System of Satellites Observations Data with Grid Datafarm Architecture

    NASA Astrophysics Data System (ADS)

    Yamamoto, K.; Murata, K.; Kimura, E.; Honda, R.

    2006-12-01

    In the Solar-Terrestrial Physics (STP) field, the amount of satellite observation data has been increasing every year. Three problems must be solved to achieve large-scale statistical analyses of such data. (i) More CPU power and larger memory and disk sizes are required; the total power of personal computers is not enough to analyze such amounts of data, while super-computers provide high-performance CPUs and rich memory but are usually separated from the Internet or connected only for programming or data file transfer. (ii) Most of the observation data files are managed at distributed data sites over the Internet, so users have to know where the data files are located. (iii) Since no common data format in the STP field is available now, users have to prepare a reading program for each dataset by themselves. To overcome problems (i) and (ii), we constructed a parallel and distributed data analysis environment based on the Gfarm reference implementation of the Grid Datafarm architecture. Gfarm shares computational resources and performs parallel distributed processing. In addition, Gfarm provides the Gfarm filesystem, which can be seen as a virtual directory tree spanning the nodes. The Gfarm environment is composed of three parts: a metadata server to manage distributed file information, filesystem nodes to provide computational resources, and a client that submits jobs to the metadata server and manages data processing schedules. In the present study, both data files and data processes are parallelized on the Gfarm with 6 filesystem nodes, each with a 1 GHz Pentium CPU, 256MB memory and 40GB disk. To evaluate the performance of the present Gfarm system, we scanned many data files, each about 300MB in size, using three processing methods: sequential processing in one node, sequential processing by each node, and parallel processing by each node. As a result, comparing the number of files against the elapsed time, parallel and distributed processing shortened the elapsed time to 1/5 of that of sequential processing. On the other hand, sequential processing was faster in another experiment in which each file was smaller than 100KB; in that case the elapsed time to scan one file is within one second, which implies that disk swapping took place during parallel processing. We note that the operation became unstable when the number of files exceeded 1000. To overcome problem (iii), we developed an original data class. This class supports reading data files with various formats by converting them into an original format: it defines schemata for every type of data and encapsulates the structure of data files. In addition, since this class provides a function of time re-sampling, users can easily convert multiple data arrays with different time resolutions into arrays with the same time resolution. Finally, using the Gfarm, we achieved a high-performance environment for large-scale statistical data analyses. It should be noted that the present method is effective only when each data file is large enough. At present, we are restructuring a new Gfarm environment with 8 nodes, each with an Athlon 64 X2 dual-core 2GHz CPU, 2GB memory and 1.2TB disk (using RAID0). Our original class is to be implemented on the new Gfarm environment.
In the present talk, we show the latest results of applying the present system to data analyses with a huge number of satellite observation data files.
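
    The time re-sampling function mentioned for the data class is essentially interpolation onto a common time base. Below is a minimal sketch with NumPy; it is not the authors' class, whose interface is not given in the abstract.

```python
# Re-sample two series with different time resolutions onto one time base.
import numpy as np

t_fast = np.arange(0.0, 10.0, 0.1)          # 0.1 s resolution instrument
v_fast = np.sin(t_fast)
t_slow = np.arange(0.0, 10.0, 1.0)          # 1 s resolution instrument
v_slow = np.cos(t_slow)

t_common = np.arange(0.0, 10.0, 0.5)        # target time base
fast_on_common = np.interp(t_common, t_fast, v_fast)
slow_on_common = np.interp(t_common, t_slow, v_slow)

# Both arrays now share one time axis and can enter the same statistics.
print(np.corrcoef(fast_on_common, slow_on_common)[0, 1])
```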

  17. 10 CFR 13.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... identity when filing documents and serving participants electronically through the E-Filing system, and... transmitted electronically from the E-Filing system to the submitter confirming receipt of electronic filing... presentation of the docket and a link to its files. E-Filing System means an electronic system that receives...

  18. Experiences and lessons learned from creating a generalized workflow for data publication of field campaign datasets

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S. K.; Ramachandran, R.; Deb, D.; Beaty, T.; Wright, D.

    2017-12-01

    This paper summarizes the workflow challenges of curating and publishing data produced from disparate data sources and provides a generalized workflow solution to efficiently archive data generated by researchers. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) for biogeochemical dynamics and the Global Hydrology Resource Center (GHRC) DAAC have been collaborating on the development of a generalized workflow solution to efficiently manage the data publication process. The generalized workflow presented here is built on lessons learned from implementations of the workflow system. Data publication consists of the following steps: (1) accepting the data package from the data providers and ensuring the full integrity of the data files; (2) identifying and addressing data quality issues; (3) assembling standardized, detailed metadata and documentation, including file-level details, processing methodology, and characteristics of data files; (4) setting up data access mechanisms; (5) setting up the data in data tools and services for improved data dissemination and user experience; (6) registering the dataset in online search and discovery catalogues; and (7) preserving the data location through Digital Object Identifiers (DOIs). We will describe the steps taken to automate and realize efficiencies in the above process. The goals of the workflow system are to reduce the time taken to publish a dataset, to increase the quality of documentation and metadata, and to track individual datasets through the data curation process. Utilities developed to achieve these goals will be described. We will also share the metrics-driven value of the workflow system and discuss future steps towards the creation of a common software framework.
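
    Step (1) above, verifying the full integrity of incoming data files, is commonly automated with checksum manifests. A generic sketch follows; the package directory and manifest layout are assumptions, not the DAACs' actual utility.

```python
# Verify a delivered data package against a provider-supplied manifest of
# SHA-256 checksums, one "<digest>  <filename>" pair per line.
import hashlib
import pathlib

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(package_dir, manifest="manifest.sha256"):
    base = pathlib.Path(package_dir)
    bad = []
    for line in (base / manifest).read_text().splitlines():
        digest, name = line.split(maxsplit=1)
        if sha256(base / name) != digest:
            bad.append(name)
    return bad

corrupt = verify("incoming/campaign_42")    # hypothetical package directory
print("all files intact" if not corrupt else f"re-request: {corrupt}")
```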

  19. Using GTO-Velo to Facilitate Communication and Sharing of Simulation Results in Support of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Signe K.; Purohit, Sumit; Boyd, Lauren W.

    The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of types of projects in a number of scientific domains. GTO-Velo is a customized version of the Velo Framework that is being used as the collaborative tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible and unstructured data storage system that allows for easy upload of files that can be in any format. Data files are organized in hierarchical folders and each folder and each file has a corresponding wiki page for metadata. The user interacts with Velo through a web browser based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder with write access limited only to the team members, where they can upload their simulation results. The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS benchmark problem is defined, the problem creator can provide a description using a template on the metadata page corresponding to the benchmark problem folder. Project documents, references and videos of the weekly online meetings are shared via GTO-Velo. A results comparison tool allows users to plot their uploaded simulation results on the fly, along with those of other teams, to facilitate weekly discussions of the benchmark problem results being generated by the teams. GTO-Velo is an invaluable tool providing the project coordinators and team members with a framework for collaboration among geographically dispersed organizations.

  20. dCache, Sync-and-Share for Big Data

    NASA Astrophysics Data System (ADS)

    Millar, AP; Fuhrmann, P.; Mkrtchyan, T.; Behrmann, G.; Bernardt, C.; Buchholz, Q.; Guelzow, V.; Litvintsev, D.; Schwank, K.; Rossi, A.; van der Reest, P.

    2015-12-01

    The availability of cheap, easy-to-use sync-and-share cloud services has split the scientific storage world into the traditional big data management systems and the very attractive sync-and-share services. With the former, the location of data is well understood, while the latter is mostly operated in the Cloud, resulting in a rather complex legal situation. Besides legal issues, those two worlds have little overlap in user authentication and access protocols. While traditional storage technologies, popular in HEP, are based on X.509, cloud services and sync-and-share software technologies are generally based on username/password authentication or mechanisms like SAML or Open ID Connect. Similarly, the data access models offered by the two are somewhat different, with sync-and-share services often using proprietary protocols. As both approaches are very attractive, dCache.org developed a hybrid system, providing the best of both worlds. To avoid reinventing the wheel, dCache.org decided to embed another Open Source project: OwnCloud. This offers the required modern access capabilities but does not support the managed data functionality needed for large capacity data storage. With this hybrid system, scientists can share files and synchronize their data with laptops or mobile devices as easily as with any other cloud storage service. On top of this, the same data can be accessed via established mechanisms, like GridFTP to serve the Globus Transfer Service or the WLCG FTS3 tool, or the data can be made available to worker nodes or HPC applications via a mounted filesystem. As dCache provides a flexible authentication module, the same user can access its storage via different authentication mechanisms; e.g., X.509 and SAML. Additionally, users can specify the desired quality of service or trigger media transitions as necessary, thus tuning data access latency to the planned access profile. Such features are a natural consequence of using dCache. We will describe the design of the hybrid dCache/OwnCloud system, report on several months of operations experience running it at DESY, and elucidate the future road-map.
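
    Since the hybrid system exposes one namespace through both sync-and-share and traditional protocols, a user can, for example, upload a file over HTTP/WebDAV and later read it back from a worker node over plain HTTP. A hedged illustration with the requests library; the endpoint, port, and credentials are made up.

```python
# Upload to and download from a dCache WebDAV door (hypothetical endpoint).
import requests

BASE = "https://dcache.example.org:2880/users/alice"   # made-up WebDAV door
AUTH = ("alice", "app-password")                       # stand-in credentials

# PUT the file through the sync-and-share/WebDAV interface...
with open("results.h5", "rb") as f:
    r = requests.put(f"{BASE}/results.h5", data=f, auth=AUTH)
    r.raise_for_status()

# ...and any client that speaks HTTP (e.g. on a worker node) can fetch it.
r = requests.get(f"{BASE}/results.h5", auth=AUTH)
r.raise_for_status()
open("results_copy.h5", "wb").write(r.content)
```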

  1. Measuring driver satisfaction with an urban arterial before and after deployment of an adaptive timing signal system

    DOT National Transportation Integrated Search

    2001-02-01


  2. Zebra: A striped network file system

    NASA Technical Reports Server (NTRS)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong to. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and the elimination of parity updates.
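
    Zebra's per-client striping with parity can be sketched in a few lines: a client's recent write stream is cut into stripe fragments, and the parity fragment is the XOR of the data fragments, so any single lost fragment is recoverable. A toy illustration follows; the fragment size and server count are arbitrary choices, not Zebra's parameters.

```python
# Toy Zebra-style stripe: split a client's byte stream into fragments and
# keep an XOR parity fragment so any single lost fragment is recoverable.
from functools import reduce

FRAG = 4          # bytes per fragment (tiny, for illustration)
N_DATA = 3        # data servers; one more server holds parity

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_stripe(stream: bytes):
    frags = [stream[i*FRAG:(i+1)*FRAG].ljust(FRAG, b"\x00") for i in range(N_DATA)]
    parity = reduce(xor, frags)
    return frags, parity

def recover(frags, parity, lost: int):
    # XOR of parity with the surviving data fragments rebuilds the lost one.
    survivors = [f for i, f in enumerate(frags) if i != lost]
    return reduce(xor, survivors, parity)

frags, parity = make_stripe(b"recent writes, any files mixed")
assert recover(frags, parity, lost=1) == frags[1]
print("fragment 1 rebuilt from parity + remaining servers")
```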

  3. DMFS: A Data Migration File System for NetBSD

    NASA Technical Reports Server (NTRS)

    Studenmund, William

    2000-01-01

    I have recently developed DMFS, a Data Migration File System, for NetBSD. This file system provides kernel support for the data migration system being developed by my research group at NASA/Ames. The file system utilizes an underlying file store to provide the file backing, and coordinates user and system access to the files. It stores its internal metadata in a flat file, which resides on a separate file system. This paper will first describe our data migration system to provide a context for DMFS, then it will describe DMFS. It also will describe the changes to NetBSD needed to make DMFS work. Then it will give an overview of the file archival and restoration procedures, and describe how some typical user actions are modified by DMFS. Lastly, the paper will present simple performance measurements which indicate that there is little performance loss due to the use of the DMFS layer.

  4. The TimeStudio Project: An open source scientific workflow system for the behavioral and brain sciences.

    PubMed

    Nyström, Pär; Falck-Ytter, Terje; Gredebäck, Gustaf

    2016-06-01

    This article describes a new open source scientific workflow system, the TimeStudio Project, dedicated to the behavioral and brain sciences. The program is written in MATLAB and features a graphical user interface for the dynamic pipelining of computer algorithms developed as TimeStudio plugins. TimeStudio includes both a set of general plugins (for reading data files, modifying data structures, visualizing data structures, etc.) and a set of plugins specifically developed for the analysis of event-related eyetracking data as a proof of concept. It is possible to create custom plugins to integrate new or existing MATLAB code anywhere in a workflow, making TimeStudio a flexible workbench for organizing and performing a wide range of analyses. The system also features an integrated sharing and archiving tool for TimeStudio workflows, which can be used to share workflows both during the data analysis phase and after scientific publication. TimeStudio thus facilitates the reproduction and replication of scientific studies, increases the transparency of analyses, and reduces individual researchers' analysis workload. The project website ( http://timestudioproject.com ) contains the latest releases of TimeStudio, together with documentation and user forums.
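
    TimeStudio itself is written in MATLAB, but the pipelining idea it describes, an ordered list of plugins each transforming a shared data structure, is language-independent. A minimal Python analogy follows; the names are invented and this is not TimeStudio's API.

```python
# Minimal workflow-pipeline analogy: each plugin is a function that takes and
# returns the experiment's data structure; a workflow is an ordered list.
def read_data(state):
    state["samples"] = [1.0, 2.0, 3.0, 4.0]   # stand-in for an eyetracking file
    return state

def detrend(state):
    mean = sum(state["samples"]) / len(state["samples"])
    state["samples"] = [s - mean for s in state["samples"]]
    return state

def summarize(state):
    state["report"] = {"n": len(state["samples"]), "max": max(state["samples"])}
    return state

workflow = [read_data, detrend, summarize]    # shareable as a recipe
state = {}
for plugin in workflow:
    state = plugin(state)
print(state["report"])
```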

  5. A resilient and secure software platform and architecture for distributed spacecraft

    NASA Astrophysics Data System (ADS)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, where information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in itself. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objectives of this layer.
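
    As a rough terrestrial analogy to constraining non-privileged processes, POSIX systems let a parent clamp a child's resource limits before handing over control. The sketch below uses Python's resource module; it only illustrates the general enforcement idea and is unrelated to the paper's operating system.

```python
# Clamp CPU time and address space for a non-privileged child process
# (POSIX-only; a loose analogy to platform-enforced resource constraints).
import resource
import subprocess

def limits():
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                  # 2 s CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 << 20, 256 << 20))   # 256 MB

# preexec_fn runs in the child after fork, before exec of the payload.
proc = subprocess.run(
    ["python3", "-c", "print('constrained app payload')"],
    preexec_fn=limits,
)
print("child exited with", proc.returncode)
```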

  6. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  7. Concierge: Personal Database Software for Managing Digital Research Resources

    PubMed Central

    Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro

    2007-01-01

    This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800

  8. Radiology Teacher: a free, Internet-based radiology teaching file server.

    PubMed

    Talanow, Roland

    2009-12-01

    Teaching files are an essential ingredient in residency education. The online program Radiology Teacher was developed to allow the creation of interactive and customized teaching files in real time. Online access makes it available anytime and anywhere, and it is free of charge, user tailored, and easy to use. No programming skills, additional plug-ins, or installations are needed, allowing its use even on protected intranets. Special effects for enhancing the learning experience as well as the linking and the source code are created automatically by the program. It may be used in different modes by individuals and institutions to share cases from multiple authors in a single database. Radiology Teacher is an easy-to-use automatic teaching file program that may enhance users' learning experiences by offering different modes of user-defined presentations.

  9. Evaluation of the Air Force Office of Special Investigations Conduct of Internet Based Operations and Investigations (REDACTED)

    DTIC Science & Technology

    2016-04-25

    transfer of computer files containing child pornography and (b) investigations concerning the use of the Internet for solicitation of a minor (under the...law enforcement agencies in the realm of the investigation of P2P file transfers of child pornography and the solicitation of minors for sexual...NCIS) special agent who launched a broad investigation into the sharing of child pornography on a peer-to-peer network by anyone in the State of

  10. The connectome viewer toolkit: an open source framework to manage, analyze, and visualize connectomes.

    PubMed

    Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric

    2011-01-01

    Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/

  11. The Connectome Viewer Toolkit: An Open Source Framework to Manage, Analyze, and Visualize Connectomes

    PubMed Central

    Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric

    2011-01-01

    Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit – a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/ PMID:21713110

  12. An Object-Relational Ifc Storage Model Based on Oracle Database

    NASA Astrophysics Data System (ADS)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, levels of collaboration across professionals attract more attention in the architecture, engineering and construction (AEC) industry. To adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, we establish the mapping rules between data types in the IFC specification and the Oracle database. Secondly, we design the IFC database according to the relationships among IFC entities. Thirdly, we parse the IFC file and extract IFC data. And lastly, we store the IFC data into the corresponding tables in the IFC database. In experiments, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics proves that IFC data are lossless during data exchange.
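
    IFC files are STEP text files whose instance lines look like `#12=IFCWALL('GUID',...);`. The parse-and-store step can be sketched generically; the sketch below uses SQLite in place of Oracle and a deliberately naive line parser, so it illustrates the pipeline rather than the paper's mapping rules.

```python
# Naive sketch: read IFC (STEP) instance lines and store entity rows in a
# relational table. SQLite stands in for Oracle; real IFC parsing is richer.
import re
import sqlite3

LINE = re.compile(r"^#(\d+)\s*=\s*([A-Z0-9]+)\s*\((.*)\);\s*$")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, type TEXT, args TEXT)")

sample = [
    "#1=IFCPROJECT('2O_qQ9Kqj0$Q8...',$,'Demo',$,$,$,$,(#9),#5);",
    "#2=IFCWALL('0u4wgLe6n0ABVaiXyikbkA',$,'Wall-001',$,$,#46,#51,$);",
]
for raw in sample:
    m = LINE.match(raw)
    if m:
        con.execute("INSERT INTO entity VALUES (?,?,?)",
                    (int(m.group(1)), m.group(2), m.group(3)))

for row in con.execute("SELECT id, type FROM entity WHERE type='IFCWALL'"):
    print(row)      # -> (2, 'IFCWALL')
```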

  13. Dealing with Processing Chapter 10 Files from Multiple Vendors

    NASA Technical Reports Server (NTRS)

    Knudtson, Kevin Mark

    2011-01-01

    This presentation discusses the experiences of the NASA Dryden Flight Research Center's (DFRC) Western Aeronautical Test Range (WATR) in dealing with the problems encountered while performing post-flight data processing, using the WATR's data collection/processing system, on Chapter 10 files from different Chapter 10 recorders. The transition to Chapter 10 recorders has brought with it an assortment of issues that must be addressed: the ambiguities of language in the Chapter 10 standard, the unrealistic near-term expectations of the Chapter 10 standard, the incompatibility of data products generated from Chapter 10 recorders, and the unavailability of mature Chapter 10 applications. Some of these issues properly belong to the users of Chapter 10 recorders, some to the manufacturers, and some to the flight test community at large. The goal of this presentation is to share the WATR's lessons learned in processing data products from various Chapter 10 recorder vendors. The WATR could benefit greatly from open-forum lessons-learned discussions with other members of the flight test community.

  14. EVA Wiki - Transforming Knowledge Management for EVA Flight Controllers and Instructors

    NASA Technical Reports Server (NTRS)

    Johnston, Stephanie S.; Alpert, Brian K.; Montalvo, Edwin James; Welsh, Lawrence Daren; Wray, Scott; Mavridis, Costa

    2016-01-01

    The EVA Wiki was recently implemented as the primary knowledge database to retain critical knowledge and skills in the EVA Operations group at NASA's Johnson Space Center by ensuring that information is recorded in a common, easy to search repository. Prior to the EVA Wiki, information required for EVA flight controllers and instructors was scattered across different sources, including multiple file share directories, SharePoint, individual computers, and paper archives. Many documents were outdated, and data was often difficult to find and distribute. In 2011, a team recognized that these knowledge management problems could be solved by creating an EVA Wiki using MediaWiki, a free and open-source software developed by the Wikimedia Foundation. The EVA Wiki developed into an EVA-specific Wikipedia on an internal NASA server. While the technical implementation of the wiki had many challenges, one of the biggest hurdles came from a cultural shift. Like many enterprise organizations, the EVA Operations group was accustomed to hierarchical data structures and individually-owned documents. Instead of sorting files into various folders, the wiki searches content. Rather than having a single document owner, the wiki harmonized the efforts of many contributors and established an automated revision control system. As the group adapted to the wiki, the usefulness of this single portal for information became apparent. It transformed into a useful data mining tool for EVA flight controllers and instructors, as well as hundreds of others that support the EVA. Program managers, engineers, astronauts, flight directors, and flight controllers in differing disciplines now have an easier-to-use, searchable system to find EVA data. This paper presents the benefits the EVA Wiki has brought to NASA's EVA community, as well as the cultural challenges it had to overcome.

  15. EVA Wiki - Transforming Knowledge Management for EVA Flight Controllers and Instructors

    NASA Technical Reports Server (NTRS)

    Johnston, Stephanie

    2016-01-01

    The EVA (Extravehicular Activity) Wiki was recently implemented as the primary knowledge database to retain critical knowledge and skills in the EVA Operations group at NASA's Johnson Space Center by ensuring that information is recorded in a common, searchable repository. Prior to the EVA Wiki, information required for EVA flight controllers and instructors was scattered across different sources, including multiple file share directories, SharePoint, individual computers, and paper archives. Many documents were outdated, and data was often difficult to find and distribute. In 2011, a team recognized that these knowledge management problems could be solved by creating an EVA Wiki using MediaWiki, free and open-source software developed by the Wikimedia Foundation. The EVA Wiki developed into an EVA-specific Wikipedia on an internal NASA server. While the technical implementation of the wiki had many challenges, one of the biggest hurdles came from a cultural shift. Like many enterprise organizations, the EVA Operations group was accustomed to hierarchical data structures and individually-owned documents. Instead of sorting files into various folders, the wiki searches content. Rather than having a single document owner, the wiki harmonized the efforts of many contributors and established an automated revision control system. As the group adapted to the wiki, the usefulness of this single portal for information became apparent. It transformed into a useful data mining tool for EVA flight controllers and instructors, and also for hundreds of other NASA and contract employees. Program managers, engineers, astronauts, flight directors, and flight controllers in differing disciplines now have an easier-to-use, searchable system to find EVA data. This paper presents the benefits the EVA Wiki has brought to NASA's EVA community, as well as the cultural challenges it had to overcome.

  16. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform, supporting rapid advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Radio data archiving system

    NASA Astrophysics Data System (ADS)

    Knapic, C.; Zanichelli, A.; Dovgan, E.; Nanni, M.; Stagni, M.; Righini, S.; Sponza, M.; Bedosti, F.; Orlati, A.; Smareglia, R.

    2016-07-01

    Radio astronomical data models are becoming very complex owing to the huge range of instrumental configurations available with modern radio telescopes. What was once the last frontier of data formats in terms of efficiency and flexibility is now evolving with new strategies and methodologies that enable the persistence of very complex, hierarchical, and multi-purpose information. Such an evolution of data models and data formats requires new data archiving techniques in order to guarantee data preservation, following the directives of the Open Archival Information System and the International Virtual Observatory Alliance for data sharing and publication. Currently, various formats (FITS, MBFITS, VLBI XML description files, and ancillary files) of data acquired with the Medicina and Noto radio telescopes can be stored and handled by a common Radio Archive, which is planned to be released to the (inter)national community by the end of 2016. This state-of-the-art archiving system for radio astronomical data aims at delegating to the software, as much as possible, the decisions of how and where the descriptors (metadata) are saved, while users perform user-friendly queries that the web interface translates into complex interrogations of the database to retrieve data. In this way, the Archive is ready to be Virtual Observatory compliant and as user-friendly as possible.

  18. 77 FR 70427 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... Number: 20121116-5122. Comments Due: 5 p.m. ET 12/7/12. Docket Numbers: ER13-388-000. Applicants: Sky River LLC. Description: Sky River LLC and North Sky River Energy, LLC Shared Facilities Agreement to be...

  19. Code 672 observational science branch computer networks

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Shirk, H. G.

    1988-01-01

    In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.

  1. 75 FR 50015 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-16

    ... Solar, Inc. (``FSLR''), Market Vectors ETF Gold Miners (``GDX''), SPDR Gold Trust (``GLD''), iShares DJ... still lower than fees charged by other options exchanges. PHLX, For example, currently charges Broker...

  2. Secure Federal File Sharing Act

    THOMAS, 111th Congress

    Rep. Towns, Edolphus [D-NY-10]

    2009-11-17

    Senate - 03/25/2010 Received in the Senate, read twice, and referred to the Committee on Homeland Security and Governmental Affairs. Tracker: This bill has the status Passed House.

  3. CAPRI (Computational Analysis PRogramming Interface): A Solid Modeling Based Infra-Structure for Engineering Analysis and Design Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Follen, Gregory J.

    1998-01-01

    CAPRI is a CAD-vendor neutral application programming interface designed for the construction of analysis and design systems. By allowing access to the geometry from within all modules (grid generators, solvers and post-processors) such tasks as meshing on the actual surfaces, node enrichment by solvers and defining which mesh faces are boundaries (for the solver and visualization system) become simpler. The overall reliance on file 'standards' is minimized. This 'Geometry Centric' approach makes multi-physics (multi-disciplinary) analysis codes much easier to build. By using the shared (coupled) surface as the foundation, CAPRI provides a single call to interpolate grid-node based data from the surface discretization in one volume to another. Finally, design systems are possible where the results can be brought back into the CAD system (and therefore manufactured) because all geometry construction and modification are performed using the CAD system's geometry kernel.

  4. Small file aggregation in a parallel computing system

    DOEpatents

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
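
    The offset/length bookkeeping described in this abstract can be illustrated in a few lines. The following is a toy sketch of the idea, not the patented implementation; all names are hypothetical.

        def aggregate(files):
            """Pack many small files into one aggregated blob, recording the
            offset and length of each piece as metadata."""
            blob, meta = b"", {}
            for name, data in files.items():
                meta[name] = {"offset": len(blob), "length": len(data)}
                blob += data
            return blob, meta

        def unpack(blob, meta, name):
            """Use the metadata to extract one file from the aggregated blob."""
            entry = meta[name]
            return blob[entry["offset"]:entry["offset"] + entry["length"]]

        blob, meta = aggregate({"rank0.out": b"alpha", "rank1.out": b"beta"})
        assert unpack(blob, meta, "rank1.out") == b"beta"
        print(meta)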

  5. DataSync - sharing data via filesystem

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Klump, Jens

    2014-05-01

    Research work is usually a cycle of hypothesizing, collecting data, corroborating the hypothesis, and finally publishing the results. At several points in this sequence one can build on the work of others. Perhaps candidate physical samples are already listed in the IGSN registry, so there is no need to go on an excursion to acquire new ones. With luck, the DataCite catalogue already lists metadata of datasets that meet the constraints of the hypothesis and are open for reappraisal. In any case, working with the measured data to corroborate the hypothesis involves new and proven methods as well as different software tools. A cohort of intermediate data is created that can be shared with colleagues to discuss the research progress and receive a first evaluation. Consequently, the intermediate data should be versioned, so that one can easily return to the last valid data after noticing one is on the wrong track. Things are different for project managers: they want to know what is currently being done, what has been done, and what the last valid data are in case somebody has to continue the work. To make life easier for members of small science projects, we developed DataSync [1], a tool for sharing and versioning data. DataSync is designed to synchronize directory trees between the different computers of a research team over the internet. The software is implemented as a Java application that watches a local directory tree for changes, which are replicated as eSciDoc-objects into an eSciDoc-infrastructure [2] using the eSciDoc REST API. Modifications to the local filesystem automatically create a new version of an eSciDoc-object inside the eSciDoc-infrastructure. In this way, individual folders can be shared between team members, while project managers can get a general idea of the current status by synchronizing whole project inventories. Additionally, XML metadata from separate files can be managed together with data files inside the eSciDoc-objects. While DataSync's major task is to distribute directory trees, we complement its functionality with the PHP-based application panMetaDocs [3]. panMetaDocs is the successor to panMetaWorks [4] and inherits most of its functionality. Through an internet browser, panMetaDocs provides a web-based overview of the datasets inside the eSciDoc-infrastructure. The software allows users to upload further data, to add and edit metadata using the metadata editor, and to disseminate metadata through various channels. In addition, previous versions of a file can be downloaded, and access rights can be defined on files and folders to control the visibility of files for users of both panMetaDocs and DataSync. panMetaDocs serves as a publication agent for datasets and as a registration agent for dataset DOIs. The application stack presented here allows sharing, versioning, and central storage of data from the very beginning of project activities by using the file synchronization service DataSync. The web application panMetaDocs complements the functionality of DataSync by providing a dataset publication agent and other tools to handle administrative tasks on the data. [1] http://github.com/ulbricht/datasync [2] http://github.com/escidoc [3] http://panmetadocs.sf.net [4] http://metaworks.pangaea.de
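
    The watch-and-replicate loop at the core of such a tool can be sketched as follows. DataSync itself is a Java application; this Python polling analogue only illustrates the idea, and the REST endpoint is hypothetical, not the actual eSciDoc API.

        import os, time, requests

        API = "https://escidoc.example.org/rest/objects"   # hypothetical endpoint

        def scan(root):
            """Map relative path -> mtime for every file under root."""
            state = {}
            for dirpath, _, names in os.walk(root):
                for n in names:
                    p = os.path.join(dirpath, n)
                    state[os.path.relpath(p, root)] = os.path.getmtime(p)
            return state

        def sync_forever(root, interval=10):
            seen = {}
            while True:
                state = scan(root)
                for rel, mtime in state.items():
                    if seen.get(rel) != mtime:    # new or modified file
                        with open(os.path.join(root, rel), "rb") as f:
                            # each upload creates a new version server-side
                            requests.put(f"{API}/{rel}", data=f)
                seen = state
                time.sleep(interval)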

  6. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. The scientific workflow enabled on these systems, and developed as part of the Ultra-High Resolution Climate Modeling Project, allows users of OLCF resources to efficiently share simulated data, often multi-terabyte in volume, as well as the results from the modeling experiments and various synthesized products derived from these simulations. The final objective in the exercise is to ensure that the simulation results and the enhanced understanding will serve the needs of a diverse group of stakeholders across the world, including our research partners in U.S. Department of Energy laboratories & universities, domain scientists, students (K-12 as well as higher education), resource managers, decision makers, and the general public.

  7. PANDA: A distributed multiprocessor operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubb, P.

    1989-01-01

    PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.

  8. Collective operations in a file system based execution model

    DOEpatents

    Shinde, Pravin; Van Hensbergen, Eric

    2013-02-12

    A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.
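
    The order of operations described in the abstract (a master writes a header-control block, then participants exchange operation messages through the same synthetic file) can be pictured with an in-memory analogue. This is a conceptual sketch only, not the patented file system module; the class and field names are hypothetical.

        import queue

        class MultiPipe:
            """In-memory stand-in for a multi-pipe synthetic file."""
            def __init__(self):
                self.header = None
                self.messages = queue.Queue()

            def write_header(self, op, msg_type, size):
                # The master writes the header-control block first.
                self.header = {"operation": op, "type": msg_type, "size": size}

            def write(self, payload):
                self.messages.put(payload)

            def read(self):
                return self.messages.get()

        pipe = MultiPipe()                      # master "creates" the synthetic file
        pipe.write_header("broadcast", "bytes", 5)
        pipe.write(b"hello")                    # master writes an operation message
        print(pipe.header, pipe.read())         # a participant opens and reads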

  9. Collective operations in a file system based execution model

    DOEpatents

    Shinde, Pravin; Van Hensbergen, Eric

    2013-02-19

    A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.

  10. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
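
    The graph data model can be pictured in a few lines of Python. This is a toy analogue, not QFS or its Quasar query language: files are nodes carrying user-defined attributes, relationships are named edges, and queries either filter on attributes or follow edges.

        # Files, user-defined attributes, and relationships as first-class objects.
        files = {
            "sim.out":  {"attrs": {"experiment": "run42"}, "links": {}},
            "plot.png": {"attrs": {"kind": "figure"},
                         "links": {"derived_from": ["sim.out"]}},
        }

        def query(predicate):
            """Return file names whose attributes satisfy the predicate."""
            return [n for n, node in files.items() if predicate(node["attrs"])]

        def related(name, rel):
            """Follow a named relationship edge, e.g. provenance."""
            return files[name]["links"].get(rel, [])

        print(query(lambda a: a.get("experiment") == "run42"))   # -> ['sim.out']
        print(related("plot.png", "derived_from"))               # -> ['sim.out']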

  11. Methods and apparatus for multi-resolution replication of files in a parallel computing system using semantic information

    DOEpatents

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-10-20

    Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
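
    A minimal sketch of the idea follows, assuming the semantic information simply names a set of decimation factors (the abstract also mentions varying the number of bits per element, which is omitted here). Names are hypothetical.

        def make_replicas(data, resolutions=(1, 2, 4)):
            """Each replica keeps every k-th element; the factors k come from
            the semantic information supplied with the file."""
            return {k: data[::k] for k in resolutions}

        samples = list(range(16))            # e.g. one variable from a sub-file
        for k, rep in sorted(make_replicas(samples).items()):
            print(f"1/{k} resolution, {len(rep)} elements: {rep}")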

  12. Reciproc versus Twisted file for root canal filling removal: assessment of apically extruded debris.

    PubMed

    Altunbas, Demet; Kutuk, Betul; Toyoglu, Mustafa; Kutlu, Gizem; Kustarci, Alper; Er, Kursat

    2016-01-01

    The aim of this study was to evaluate the amount of apically extruded debris during endodontic retreatment with different file systems. Sixty extracted human mandibular premolar teeth were used in this study. Root canals of the teeth were instrumented and filled before being randomly assigned to three groups. Gutta-percha was removed using the Reciproc system, the Twisted File system (TF), and Hedström files (H-file). Apically extruded debris was collected and dried in pre-weighed Eppendorf tubes. The amount of extruded debris was assessed with an electronic balance. Data were statistically analyzed using one-way ANOVA, Kruskal-Wallis, and Mann-Whitney U tests. The Reciproc and TF systems extruded significantly less debris than the H-file (p<0.05). However, no significant difference was found between the Reciproc and TF systems. All tested file systems caused apical extrusion of debris. Both the rotary file (TF) and the reciprocating single-file (Reciproc) systems were associated with less apical extrusion compared with the H-file.

  13. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
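
    The decoupling idea, a middleware layer that turns checkpoint files into named objects plus a manifest, can be sketched as follows. This is a toy stand-in, not PLFS; the chunk size and object naming scheme are assumptions.

        import hashlib

        def file_to_objects(data, chunk=4):
            """Split a checkpoint into objects; the manifest records their order."""
            manifest, store = [], {}
            for i in range(0, len(data), chunk):
                piece = data[i:i + chunk]
                key = hashlib.sha256(piece).hexdigest()[:12]   # object name
                store[key] = piece                             # goes to the cloud store
                manifest.append(key)
            return manifest, store

        def objects_to_file(manifest, store):
            """Reassemble the checkpoint from its objects on restart."""
            return b"".join(store[k] for k in manifest)

        ckpt = b"checkpoint-bytes"
        manifest, store = file_to_objects(ckpt)
        assert objects_to_file(manifest, store) == ckpt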

  14. Distributed PACS using distributed file system with hierarchical meta data servers.

    PubMed

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication Systems) that can integrate the several PACSs existing in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. With this mechanism, since the entire file need not always be accessed, operations such as finding files or changing titles can be performed at high speed. At the same time, because a distributed file system is utilized, access to image files also achieves high speed and high fault tolerance. The proposed system has a further significant advantage: the simplicity of integrating several PACSs, since only the metadata servers need to be integrated to construct an integrated system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of the system. To remedy this defect, hierarchical metadata servers are introduced. With this mechanism, not only is fault tolerance increased, but the scalability of file access is also improved. To evaluate the proposed system, a prototype was implemented using Gfarm, and its file search times were compared with those of NFS.
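
    A compact sketch of the metadata/image split follows, with sqlite3 standing in for the metadata server and a dict for the distributed file system. All names are hypothetical; the point is that searches touch only the small metadata table, never the bulky image data.

        import sqlite3

        meta_db = sqlite3.connect(":memory:")   # metadata server (could be hierarchical)
        meta_db.execute("CREATE TABLE dicom_meta "
                        "(uid TEXT, patient TEXT, modality TEXT, blob_path TEXT)")
        image_store = {}                        # stand-in for the distributed file system

        def ingest(uid, patient, modality, pixels):
            path = f"/dfs/{uid}.img"
            image_store[path] = pixels          # bulky image data goes to the DFS
            meta_db.execute("INSERT INTO dicom_meta VALUES (?,?,?,?)",
                            (uid, patient, modality, path))

        def find(patient):
            # Queries are answered from metadata alone.
            return meta_db.execute(
                "SELECT uid, blob_path FROM dicom_meta WHERE patient=?",
                (patient,)).fetchall()

        ingest("1.2.3", "DOE^JOHN", "CT", b"...pixel data...")
        print(find("DOE^JOHN"))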

  15. Toward information management in corporations (2)

    NASA Astrophysics Data System (ADS)

    Shibata, Mitsuru

    If the construction of in-house information management systems in an advanced information society is to be positioned alongside information management in society at large, its foundation begins with a review of current paper filing systems. Since the problems inherent in in-house information management systems using office automation (OA) equipment are also inherent in paper filing systems, the first step toward full-scale in-house information management should be to identify and solve the fundamental problems in current filing systems. This paper analyzes the fundamental problems in filing systems, discusses the creation of new types of offices, analyzes needs for improvement in filing systems, and offers some points for improving filing systems.

  16. DISTRIBUTED STRUCTURE-SEARCHABLE TOXICITY ...

    EPA Pesticide Factsheets

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, SAR model development, or building of chemical relational databases (CRD). The Distributed Structure-Searchable Toxicity (DSSTox) Public Database Network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: 1) to adopt and encourage the use of a common standard file format (SDF) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; 2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data s

  17. MPI-IO: A Parallel File I/O Interface for MPI Version 0.3

    NASA Technical Reports Server (NTRS)

    Corbett, Peter; Feitelson, Dror; Hsu, Yarsun; Prost, Jean-Pierre; Snir, Marc; Fineberg, Sam; Nitzberg, Bill; Traversat, Bernard; Wong, Parkson

    1995-01-01

    Thanks to MPI [9], writing portable message passing parallel programs is almost a reality. One of the remaining problems is file I/O. Although parallel file systems support similar interfaces, the lack of a standard makes developing a truly portable program impossible. Further, the closest thing to a standard, the UNIX file interface, is ill-suited to parallel computing. Working together, IBM Research and NASA Ames have drafted MPI-IO, a proposal to address the portable parallel I/O problem. In a nutshell, this proposal is based on the idea that I/O can be modeled as message passing: writing to a file is like sending a message, and reading from a file is like receiving a message. MPI-IO intends to leverage the relatively wide acceptance of the MPI interface in order to create a similar I/O interface. The above approach can be materialized in different ways. The current proposal represents the result of extensive discussions (and arguments), but is by no means finished. Many changes can be expected as additional participants join the effort to define an interface for portable I/O. This document is organized as follows. The remainder of this section includes a discussion of some issues that have shaped the style of the interface. Section 2 presents an overview of MPI-IO as it is currently defined. It specifies what the interface currently supports and states what would need to be added to the current proposal to make the interface more complete and robust. The next seven sections contain the interface definition itself. Section 3 presents definitions and conventions. Section 4 contains functions for file control, most notably open. Section 5 includes functions for independent I/O, both blocking and nonblocking. Section 6 includes functions for collective I/O, both blocking and nonblocking. Section 7 presents functions to support system-maintained file pointers, and shared file pointers. Section 8 presents constructors that can be used to define useful filetypes (the role of filetypes is explained in Section 2 below). Section 9 presents how the error handling mechanism of MPI is supported by the MPI-IO interface. All this is followed by a set of appendices, which contain information about issues that have not been totally resolved yet, and about design considerations. The reader can find there the motivation behind some of our design choices. More information on this would definitely be welcome and will be included in a further release of this document. The first appendix contains a description of MPI-IO's 'hints' structure which is used when opening a file. Appendix B is a discussion of various issues in the support for file pointers. Appendix C explains what we mean in talking about atomic access. Appendix D provides detailed examples of filetype constructors, and Appendix E contains a collection of arguments for and against various design decisions.
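
    The message-passing view of I/O survives in today's MPI standard. As a minimal sketch using the mpi4py bindings (the modern standard interface, not the 1995 draft described above), each rank writes its block at a rank-specific offset of one shared file, much as it would send a message:

        # Run with, e.g.: mpiexec -n 4 python mpiio_demo.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Writing is like sending: each rank delivers its block to the shared file.
        fh = MPI.File.Open(comm, "out.dat", MPI.MODE_CREATE | MPI.MODE_WRONLY)
        data = bytearray(f"rank {rank} payload\n".encode())
        fh.Write_at_all(rank * len(data), data)   # collective write, one block per rank
        fh.Close()                                # blocks are equal-sized for ranks 0-9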

  18. Adaptable, high recall, event extraction system with minimal configuration.

    PubMed

    Miwa, Makoto; Ananiadou, Sophia

    2015-01-01

    Biomedical event extraction has been a major focus of biomedical natural language processing (BioNLP) research since the first BioNLP shared task was held in 2009. Accordingly, a large number of event extraction systems have been developed. Most such systems, however, have been developed for specific tasks and/or incorporated task specific settings, making their application to new corpora and tasks problematic without modification of the systems themselves. There is thus a need for event extraction systems that can achieve high levels of accuracy when applied to corpora in new domains, without the need for exhaustive tuning or modification, whilst retaining competitive levels of performance. We have enhanced our state-of-the-art event extraction system, EventMine, to alleviate the need for task-specific tuning. Task-specific details are specified in a configuration file, while extensive task-specific parameter tuning is avoided through the integration of a weighting method, a covariate shift method, and their combination. The task-specific configuration and weighting method have been employed within the context of two different sub-tasks of BioNLP shared task 2013, i.e. Cancer Genetics (CG) and Pathway Curation (PC), removing the need to modify the system specifically for each task. With minimal task specific configuration and tuning, EventMine achieved the 1st place in the PC task, and 2nd in the CG, achieving the highest recall for both tasks. The system has been further enhanced following the shared task by incorporating the covariate shift method and entity generalisations based on the task definitions, leading to further performance improvements. We have shown that it is possible to apply a state-of-the-art event extraction system to new tasks with high levels of performance, without having to modify the system internally. Both covariate shift and weighting methods are useful in facilitating the production of high recall systems. These methods and their combination can adapt a model to the target data with no deep tuning and little manual configuration.

  19. American Telephone and Telegraph System V/MLS Release 1.1.2 Running on Unix System V Release 3.1.1

    DTIC Science & Technology

    1989-10-18

    ... what is specified in the /mls/passwd file. For a complete description of how this works, see page 62 ... from the publicly readable files /etc/passwd and /etc/group, to the protected files /mls/passwd and /mls/group. These protected files are ASCII files which are referred to as "shadow files". ... /mls/passwd contains the

  20. World Energy Projection System Plus Model Documentation: Commercial Module

    EIA Publications

    2016-01-01

    The Commercial Model of the World Energy Projection System Plus (WEPS+) is an energy demand modeling system of the world commercial end-use sector at a regional level. This report describes the version of the Commercial Model that was used to produce the commercial sector projections published in the International Energy Outlook 2016 (IEO2016). The Commercial Model is one of 13 components of the WEPS+ system. WEPS+ is a modular system, consisting of a number of separate energy models that communicate and work with each other through an integrated system model. The model components are each developed independently, but are designed with well-defined protocols for system communication and interactivity. The WEPS+ modeling system uses a shared database (the "restart" file) that allows all the models to communicate with each other when they are run in sequence over a number of iterations. The overall WEPS+ system uses an iterative solution technique that forces convergence of consumption and supply pressures to solve for an equilibrium price.
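
    The iterative solution technique can be illustrated with a toy two-module system sharing a restart structure. The equations and numbers below are purely illustrative, not WEPS+ model equations; the dict stands in for the shared "restart" file.

        # Each module reads what the others wrote into the shared "restart"
        # structure, and the system iterates until the price converges.
        restart = {"price": 50.0, "demand": 100.0}

        def demand_module(r):
            r["demand"] = 1000.0 / r["price"]      # demand falls as price rises

        def supply_module(r):
            r["price"] = 2.0 + 0.1 * r["demand"]   # price rises with demand

        for iteration in range(100):
            previous = restart["price"]
            demand_module(restart)
            supply_module(restart)
            if abs(restart["price"] - previous) < 1e-6:   # equilibrium reached
                break

        print(iteration, restart)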

  1. Computer Ethics and Cyber Laws to Mental Health Professionals

    PubMed Central

    Raveesh, B N; Pande, Sanjay

    2004-01-01

    The explosive growth of computer and communications technology raises new legal and ethical challenges that reflect tensions between individual rights and societal needs. For instance, should cracking into a computer system be viewed as a petty prank, as trespassing, as theft, or as espionage? Should placing copyrighted material onto a public file server be treated as freedom of expression or as theft? Should ordinary communications be encrypted using codes that make it impossible for law-enforcement agencies to perform wiretaps? As we develop shared understandings and norms of behaviour, we are setting standards that will govern the information society for decades to come. PMID:21408035

  2. Computer ethics and cyber laws to mental health professionals.

    PubMed

    Raveesh, B N; Pande, Sanjay

    2004-04-01

    The explosive growth of computer and communications technology raises new legal and ethical challenges that reflect tensions between individual rights and societal needs. For instance, should cracking into a computer system be viewed as a petty prank, as trespassing, as theft, or as espionage? Should placing copyrighted material onto a public file server be treated as freedom of expression or as theft? Should ordinary communications be encrypted using codes that make it impossible for law-enforcement agencies to perform wiretaps? As we develop shared understandings and norms of behaviour, we are setting standards that will govern the information society for decades to come.

  3. Dynamic Collaboration Infrastructure for Hydrologic Science

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data intensive modeling and analysis. It supports the sharing of and collaboration around "resources", which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud based computation for the execution of hydrologic models and the analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare is increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast collection of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services needed to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may need to process a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges, a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure enabling the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the results of this proof-of-concept prototype, which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure to address big problems in hydrology.

  4. 78 FR 21930 - Aquenergy Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-12

    ... Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application Document, and Approving Use of the Traditional Licensing Process a. Type of Filing: Notice of Intent to File License...: November 11, 2012. d. Submitted by: Aquenergy Systems, Inc., a fully owned subsidiary of Enel Green Power...

  5. Storing files in a parallel computing system based on user-specified parser function

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
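
    A toy rendition of the flow follows. The parser callable, the placement rule, and the metadata catalog are all hypothetical names standing in for the patented mechanism; the sketch only shows a parser filtering files on a semantic requirement and extracting metadata before storage.

        def parser(name, data):
            """Application-supplied parser: keep a file only if it meets a
            semantic requirement, and extract metadata for later searching."""
            if not data.startswith(b"#VALID"):
                return None                          # file is not stored at all
            return {"name": name, "bytes": len(data)}

        storage_nodes = {0: {}, 1: {}}               # stand-ins for storage nodes
        catalog = []                                 # extracted metadata, searchable

        def store(name, data):
            meta = parser(name, data)
            if meta is None:
                return
            node = hash(name) % len(storage_nodes)   # placement decision
            storage_nodes[node][name] = data
            catalog.append(meta)

        store("a.dat", b"#VALID payload")
        store("b.dat", b"junk")                      # rejected by the parser
        print(catalog, {k: list(v) for k, v in storage_nodes.items()})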

  6. Methods and apparatus for capture and storage of semantic information with sub-files in a parallel computing system

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-02-03

    Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.
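
    Sketching the idea: the user supplies a data-structure description, and the file is cut only on record boundaries so that each sub-file remains semantically meaningful. The record size and the description fields below are assumptions, not the patented format.

        def split_on_boundaries(data, record_size):
            """Split a file into sub-files on record boundaries, attaching the
            user-supplied semantic description to each piece."""
            subfiles = []
            for i in range(0, len(data), record_size):
                subfiles.append({
                    "description": {"type": "float64[]", "record_size": record_size},
                    "payload": data[i:i + record_size],
                })
            return subfiles

        data = bytes(range(24))                    # e.g. three 8-byte records
        for sf in split_on_boundaries(data, 8):
            print(sf["description"], sf["payload"])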

  7. Registered File Support for Critical Operations Files at SIRTF (Space Infrared Telescope Facility)

    NASA Technical Reports Server (NTRS)

    Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.

    2001-01-01

    The SIRTF Science Center's (SSC) Science Operations System (SOS) must manage nearly one hundred critical operations files, which it does through comprehensive file management services. The management is accomplished via the registered file system (otherwise known as TFS), which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and metadata in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command line client implementing this API has been developed as a client tool. This paper describes the architecture and current implementation, but more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.
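
    The registration pattern, in which every transfer is recorded in a database alongside the stored file, can be sketched as below. This is a stand-in illustration, not the actual TFS server or its API; all names are hypothetical.

        import hashlib, sqlite3, time

        registry = sqlite3.connect(":memory:")     # the file registration database
        registry.execute("""CREATE TABLE transactions
                            (name TEXT, sha256 TEXT, actor TEXT, tstamp REAL)""")
        repository = {}                            # stand-in for the virtual file system

        def register_put(name, data, actor):
            """Store a file and record the transaction plus its fingerprint."""
            repository[name] = data
            registry.execute("INSERT INTO transactions VALUES (?,?,?,?)",
                             (name, hashlib.sha256(data).hexdigest(),
                              actor, time.time()))

        register_put("ops/sequence_042.txt", b"observe target X", "pipeline")
        print(registry.execute("SELECT name, actor FROM transactions").fetchall())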

  8. panMetaDocs and DataSync - providing a convenient way to share and publish research data

    NASA Astrophysics Data System (ADS)

    Ulbricht, D.; Klump, J. F.

    2013-12-01

    In recent years research institutions, geological surveys, and funding organizations have started to build infrastructures to facilitate the re-use of research data from previous work. At present, several intermeshed activities are being coordinated to make data systems of the earth sciences interoperable and recorded data discoverable. Driven by governmental authorities, ISO 19115/19139 emerged as metadata standards for the discovery of data and services. Established metadata transport protocols like OAI-PMH and OGC-CSW are used to disseminate metadata to data portals. With persistent identifiers like DOI and IGSN, research data and corresponding physical samples can be given unambiguous names and thus become citable. In summary, these activities focus primarily on 'ready to give away' data, already stored in an institutional repository and described with appropriate metadata. Many datasets are not 'born' in this state but are produced in small and federated research projects. To make access and reuse of these 'small data' easier, such data should be centrally stored and version controlled from the very beginning of activities. We developed DataSync [1] as a supplemental application to the panMetaDocs [2] data exchange platform and as a data management tool for small science projects. DataSync is a Java application that runs on a local computer and synchronizes directory trees into an eSciDoc-repository [3] by creating eSciDoc-objects via eSciDoc's REST API. DataSync can be installed on multiple computers and is in this way able to synchronize the files of a research team over the internet. XML metadata can be added as separate files that are managed together with data files as versioned eSciDoc-objects. A project-customized instance of panMetaDocs is provided to show a web-based overview of the previously uploaded file collection and to allow further annotation with metadata inside the eSciDoc-repository. panMetaDocs is a PHP-based web application to assist the creation of metadata in any XML-based metadata schema. To reduce manual entry of metadata to a minimum and make use of contextual information in a project setting, metadata fields can be populated with static or dynamic content. Access rights can be defined to control visibility of and access to stored objects. Notifications about recently updated datasets are available by RSS and e-mail, and the entire inventory can be harvested via OAI-PMH. panMetaDocs is optimized to be harvested by panFMP [4]. panMetaDocs is able to mint dataset DOIs through DataCite and uses eSciDoc's REST API to transfer eSciDoc-objects from a non-public 'pending' status to the published status 'released', which makes the data and metadata of the published object available worldwide through the internet. The application scenario presented here shows the adoption of open source applications for the sharing and publication of data. An eSciDoc-repository is used as storage for data and metadata. DataSync serves as a file ingester and distributor, whereas panMetaDocs' main function is to annotate the dataset files with metadata to make them ready for publication and for sharing with one's own team or with the scientific community. [1] http://github.com/ulbricht/datasync [2] http://panmetadocs.sf.net [3] http://github.com/escidoc [4] http://www.panfmp.org

  9. Sharing electronic structure and crystallographic data with ETSF_IO

    NASA Astrophysics Data System (ADS)

    Caliste, D.; Pouillon, Y.; Verstraete, M. J.; Olevano, V.; Gonze, X.

    2008-11-01

    We present a library of routines whose main goal is to read and write exchangeable files (NetCDF file format) storing electronic structure and crystallographic information. It is based on the specification agreed inside the European Theoretical Spectroscopy Facility (ETSF). Accordingly, this library is nicknamed ETSF_IO. The purpose of this article is to give both an overview of the ETSF_IO library and a closer look at its usage. ETSF_IO is designed to be robust and easy to use, close to Fortran read and write routines. To facilitate its adoption, a complete documentation of the input and output arguments of the routines is available in the package, as well as six tutorials explaining in detail various possible uses of the library routines. Catalogue identifier: AEBG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Gnu Lesser General Public License No. of lines in distributed program, including test data, etc.: 63 156 No. of bytes in distributed program, including test data, etc.: 363 390 Distribution format: tar.gz Programming language: Fortran 95 Computer: All systems with a Fortran95 compiler Operating system: All systems with a Fortran95 compiler Classification: 7.3, 8 External routines: NetCDF, http://www.unidata.ucar.edu/software/netcdf Nature of problem: Store and exchange electronic structure data and crystallographic data independently of the computational platform, language and generating software Solution method: Implement a library based both on NetCDF file format and an open specification (http://etsf.eu/index.php?page=standardization)
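
    ETSF_IO itself is Fortran 95, but the NetCDF files it produces can be read or written from any language with NetCDF bindings. The Python sketch below uses the netCDF4 package; the variable and dimension names loosely follow the ETSF specification and should be checked against it before relying on them.

        # Requires the netCDF4 package (pip install netCDF4).
        from netCDF4 import Dataset
        import numpy as np

        nc = Dataset("crystal.nc", "w")            # NetCDF file, as used by ETSF_IO
        nc.createDimension("number_of_atoms", 2)
        nc.createDimension("number_of_cartesian_dimensions", 3)
        xyz = nc.createVariable("reduced_atom_positions", "f8",
                                ("number_of_atoms", "number_of_cartesian_dimensions"))
        xyz[:] = np.array([[0.0, 0.0, 0.0], [0.25, 0.25, 0.25]])
        nc.close()

        nc = Dataset("crystal.nc")                 # any reader sees the same names
        print(nc.variables["reduced_atom_positions"][:])
        nc.close()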

  10. Storing files in a parallel computing system using list-based index to identify replica files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy

    Improved techniques are provided for storing files in a parallel computing system using a list-based index to identify file replicas. A file and at least one replica of the file are stored in one or more storage nodes of the parallel computing system. An index for the file comprises at least one list comprising a pointer to a storage location of the file and a storage location of the at least one replica of the file. The file comprises one or more of a complete file and one or more sub-files. The index may also comprise a checksum value for one or more of the file and the replica(s) of the file. The checksum value can be evaluated to validate the file and/or the file replica(s). A query can be processed using the list.
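
    A minimal sketch of such an index follows (the layout is hypothetical): each entry lists the locations of the file and its replicas, together with a checksum used to validate whichever copy is read.

        import hashlib

        def sha(data):
            return hashlib.sha256(data).hexdigest()

        nodes = {"node1": {}, "node2": {}}
        nodes["node1"]["/a/f.dat"] = b"payload"
        nodes["node2"]["/a/f.dat"] = b"payload"        # the replica

        # List-based index: locations of the file and its replicas, plus a checksum.
        index = {"f.dat": {"locations": [("node1", "/a/f.dat"),
                                         ("node2", "/a/f.dat")],
                           "sha256": sha(b"payload")}}

        def read_valid(name):
            entry = index[name]
            for node, path in entry["locations"]:      # fall through to a replica
                data = nodes[node][path]
                if sha(data) == entry["sha256"]:       # checksum validates the copy
                    return data
            raise OSError("no valid copy of " + name)

        print(read_valid("f.dat"))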

  11. EDGE3: A web-based solution for management and analysis of Agilent two color microarray experiments

    PubMed Central

    Vollrath, Aaron L; Smith, Adam A; Craven, Mark; Bradfield, Christopher A

    2009-01-01

    Background The ability to generate transcriptional data on the scale of entire genomes has been a boon both in the improvement of biological understanding and in the amount of data generated. The latter, the amount of data generated, has implications when it comes to effective storage, analysis and sharing of these data. A number of software tools have been developed to store, analyze, and share microarray data. However, a majority of these tools do not offer all of these features nor do they specifically target the commonly used two color Agilent DNA microarray platform. Thus, the motivating factor for the development of EDGE3 was to incorporate the storage, analysis and sharing of microarray data in a manner that would provide a means for research groups to collaborate on Agilent-based microarray experiments without a large investment in software-related expenditures or extensive training of end-users. Results EDGE3 has been developed with two major functions in mind. The first function is to provide a workflow process for the generation of microarray data by a research laboratory or a microarray facility. The second is to store, analyze, and share microarray data in a manner that doesn't require complicated software. To satisfy the first function, EDGE3 has been developed as a means to establish a well defined experimental workflow and information system for microarray generation. To satisfy the second function, the software application utilized as the user interface of EDGE3 is a web browser. Within the web browser, a user is able to access the entire functionality, including, but not limited to, the ability to perform a number of bioinformatics based analyses, collaborate between research groups through a user-based security model, and access to the raw data files and quality control files generated by the software used to extract the signals from an array image. Conclusion Here, we present EDGE3, an open-source, web-based application that allows for the storage, analysis, and controlled sharing of transcription-based microarray data generated on the Agilent DNA platform. In addition, EDGE3 provides a means for managing RNA samples and arrays during the hybridization process. EDGE3 is freely available for download at http://edge.oncology.wisc.edu/. PMID:19732451

  12. EDGE(3): a web-based solution for management and analysis of Agilent two color microarray experiments.

    PubMed

    Vollrath, Aaron L; Smith, Adam A; Craven, Mark; Bradfield, Christopher A

    2009-09-04

    The ability to generate transcriptional data on the scale of entire genomes has been a boon both in the improvement of biological understanding and in the amount of data generated. The latter, the amount of data generated, has implications when it comes to effective storage, analysis and sharing of these data. A number of software tools have been developed to store, analyze, and share microarray data. However, a majority of these tools do not offer all of these features nor do they specifically target the commonly used two color Agilent DNA microarray platform. Thus, the motivating factor for the development of EDGE(3) was to incorporate the storage, analysis and sharing of microarray data in a manner that would provide a means for research groups to collaborate on Agilent-based microarray experiments without a large investment in software-related expenditures or extensive training of end-users. EDGE(3) has been developed with two major functions in mind. The first function is to provide a workflow process for the generation of microarray data by a research laboratory or a microarray facility. The second is to store, analyze, and share microarray data in a manner that doesn't require complicated software. To satisfy the first function, EDGE(3) has been developed as a means to establish a well defined experimental workflow and information system for microarray generation. To satisfy the second function, the software application utilized as the user interface of EDGE(3) is a web browser. Within the web browser, a user is able to access the entire functionality, including, but not limited to, the ability to perform a number of bioinformatics based analyses, collaborate between research groups through a user-based security model, and access to the raw data files and quality control files generated by the software used to extract the signals from an array image. Here, we present EDGE(3), an open-source, web-based application that allows for the storage, analysis, and controlled sharing of transcription-based microarray data generated on the Agilent DNA platform. In addition, EDGE(3) provides a means for managing RNA samples and arrays during the hybridization process. EDGE(3) is freely available for download at http://edge.oncology.wisc.edu/.

  13. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
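
    As a rough illustration of the decoupling pattern this record describes, the Python sketch below gathers per-process checkpoint files and pushes each one to an object store. The boto3 client, bucket name, and key layout are assumptions made for illustration; the patented middleware itself is a log-structured file system layer such as PLFS, not this loop.

        # Illustrative only: copy archived files into a cloud object store.
        import os
        import boto3  # any S3-compatible object store client would do

        def archive_checkpoints(checkpoint_dir, bucket, prefix):
            s3 = boto3.client("s3")
            for name in sorted(os.listdir(checkpoint_dir)):
                path = os.path.join(checkpoint_dir, name)
                if not os.path.isfile(path):
                    continue
                # one object per archived file; a log-structured middleware
                # would instead batch per-process writes into log objects
                with open(path, "rb") as f:
                    s3.put_object(Bucket=bucket, Key=f"{prefix}/{name}", Body=f)

        archive_checkpoints("/scratch/ckpt/step_100", "hpc-archive", "run42/step_100")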

  14. Client-side Medical Image Colorization in a Collaborative Environment.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2015-01-01

    The paper presents an application related to collaborative medicine using a browser-based medical visualization system, with a focus on the medical image colorization process and the underlying open-source web development technologies involved. Browser-based systems allow physicians to share medical data with their remotely located counterparts or medical students, assisting them during patient diagnosis, treatment monitoring, surgery planning, or for educational purposes. This approach brings forth the advantage of ubiquity. The system can be accessed from any device in order to process the images, ensuring independence from any specific proprietary operating system. The current work starts with the processing of DICOM (Digital Imaging and Communications in Medicine) files and ends with the rendering of the resulting bitmap images on an HTML5 (fifth revision of the HyperText Markup Language) canvas element. The application improves image visualization by emphasizing different tissue densities.
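
    The record's pipeline runs in the browser, but the colorization step itself is easy to prototype offline. The Python sketch below reads a DICOM file with pydicom, normalizes the pixel data, and applies a simple "hot"-style lookup table; the input filename and the full-range window are assumptions, and the paper's actual implementation renders to an HTML5 canvas rather than using Python.

        # Illustrative offline prototype of intensity-to-color mapping.
        import numpy as np
        import pydicom
        from PIL import Image

        ds = pydicom.dcmread("slice.dcm")        # hypothetical input file
        pixels = ds.pixel_array.astype(np.float32)

        # normalize intensities to 0..1 over the full data range
        lo, hi = float(pixels.min()), float(pixels.max())
        norm = (pixels - lo) / max(hi - lo, 1e-6)

        # simple "hot" LUT: black -> red -> yellow -> white
        r = np.clip(norm * 3, 0, 1)
        g = np.clip(norm * 3 - 1, 0, 1)
        b = np.clip(norm * 3 - 2, 0, 1)
        rgb = (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)

        Image.fromarray(rgb).save("slice_colorized.png")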

  15. Storing files in a parallel computing system based on user or application specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.

    2016-03-29

    Techniques are provided for storing files in a parallel computing system based on a user-specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.
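
    A minimal sketch of the specification idea, assuming a JSON rule list that maps filename patterns to storage-tier directories; the spec format, the tier paths, and the first-match semantics are all invented for illustration and are not taken from the patent.

        # Illustrative spec-driven placement across storage tiers.
        import fnmatch
        import json
        import shutil
        from pathlib import Path

        SPEC = json.loads("""
        {"rules": [
            {"pattern": "*.ckpt", "tier": "/burst_buffer"},
            {"pattern": "*.h5",   "tier": "/parallel_fs"},
            {"pattern": "*",      "tier": "/archive"}
        ]}
        """)

        def place(files):
            for f in map(Path, files):
                for rule in SPEC["rules"]:
                    if fnmatch.fnmatch(f.name, rule["pattern"]):
                        shutil.copy2(f, Path(rule["tier"]) / f.name)
                        break  # first matching rule wins

        place(["run/out.ckpt", "run/fields.h5", "run/log.txt"])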

  16. The tissue microarray OWL schema: An open-source tool for sharing tissue microarray data

    PubMed Central

    Kang, Hyunseok P.; Borromeo, Charles D.; Berman, Jules J.; Becich, Michael J.

    2010-01-01

    Background: Tissue microarrays (TMAs) are enormously useful tools for translational research, but incompatibilities in database systems between various researchers and institutions prevent the efficient sharing of data that could help realize their full potential. Resource Description Framework (RDF) provides a flexible method to represent knowledge in triples, which take the form Subject-Predicate-Object. All data resources are described using Uniform Resource Identifiers (URIs), which are global in scope. We present an OWL (Web Ontology Language) schema that expands upon the TMA data exchange specification to address this issue and assist in data sharing and integration. Methods: A minimal OWL schema was designed containing only concepts specific to TMA experiments. More general data elements were incorporated from predefined ontologies such as the NCI thesaurus. URIs were assigned using the Linked Data format. Results: We present examples of files utilizing the schema and conversion of XML data (similar to the TMA DES) to OWL. Conclusion: By utilizing predefined ontologies and global unique identifiers, this OWL schema provides a solution to the limitations of XML, which represents concepts defined in a localized setting. This will help increase the utilization of tissue resources, facilitating collaborative translational research efforts. PMID:20805954
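
    The triple-based representation is straightforward to reproduce with standard RDF tooling. In the Python sketch below, rdflib builds a tiny graph for one TMA core; the namespace URI and the class and property names are invented placeholders, not terms from the published schema.

        # Illustrative RDF triples for a single tissue-microarray core.
        from rdflib import Graph, Literal, Namespace, RDF

        TMA = Namespace("http://example.org/tma#")    # hypothetical namespace

        g = Graph()
        g.bind("tma", TMA)

        core = TMA["core_A17"]                        # Linked-Data-style URI
        g.add((core, RDF.type, TMA.TissueCore))
        g.add((core, TMA.anatomicSite, Literal("prostate")))
        g.add((core, TMA.stainResult, Literal("positive")))

        print(g.serialize(format="xml"))              # RDF/XML for OWL tooling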

  17. The computerized OMAHA system in microsoft office excel.

    PubMed

    Lai, Xiaobin; Wong, Frances K Y; Zhang, Peiqiang; Leung, Carenx W Y; Lee, Lai H; Wong, Jessica S Y; Lo, Yim F; Ching, Shirley S Y

    2014-01-01

    The OMAHA System was adopted as the documentation system in an interventional study. To systematically record client care and facilitate data analysis, two Office Excel files were developed. The first Excel file (File A) was designed to record problems, care procedures, and outcomes for individual clients according to the OMAHA System. It was used by the intervention nurses in the study. The second Excel file (File B) was a summary of all clients, automatically extracted from File A. Data in File B can be analyzed directly in Excel or imported into PASW for further analysis. Both files have four parts to record basic information and the three parts of the OMAHA System. The computerized OMAHA System simplified the documentation procedure and facilitated the management and analysis of data.
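
    The File A to File B extraction can be approximated with a few lines of scripting. The sketch below uses openpyxl to append each client workbook's rows to a shared summary sheet; the sheet layout and column names are assumptions, since the record does not publish the actual cell structure.

        # Illustrative aggregation of per-client workbooks into a summary file.
        from openpyxl import Workbook, load_workbook

        def summarize(client_files, summary_path):
            wb_out = Workbook()
            ws_out = wb_out.active
            ws_out.append(["source", "problem", "intervention", "outcome"])

            for path in client_files:
                ws = load_workbook(path, data_only=True).active
                for row in ws.iter_rows(min_row=2, values_only=True):
                    ws_out.append([path] + list(row))

            wb_out.save(summary_path)  # the File B equivalent

        summarize(["client_01.xlsx", "client_02.xlsx"], "summary.xlsx")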

  18. 75 FR 13623 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... ``immediate family'' shall include parents, mother-in-law or father-in-law, husband or wife, children or any... direct proportionate share limitation of paragraph (1)(A)(iii) are accounts of the immediate family of...

  19. 78 FR 38747 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-27

    ... unlisted trading privileges for which a ``Required Value,'' such as an intraday indicative value or... Managed Fund Shares listed on the Exchange if the Intraday Indicative Value or the index value applicable...

  20. 76 FR 62877 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-11

    .... This lower rate will be calculated on a daily basis. Market participants who share a trading acronym or... require that a market participant appropriately indicate his trading acronym and/or MPID in the...

  1. Permanent-File-Validation Utility Computer Program

    NASA Technical Reports Server (NTRS)

    Derry, Stephen D.

    1988-01-01

    Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.
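
    The core of such a validator is a bidirectional cross-check between the catalog and the track reservation table. The Python sketch below shows the general shape of that check with simplified stand-in structures; the real PFVAL works directly on CDC CYBER NOS system tables, permit sectors, and disk linkage bytes.

        # Generic linkage cross-check, loosely modeled on the idea above.
        def validate(catalog, track_reserved):
            errors, referenced = [], set()
            for entry, tracks in catalog.items():
                for t in tracks:
                    referenced.add(t)
                    if not track_reserved.get(t, False):
                        errors.append(f"{entry}: track {t} not reserved")
            for t, reserved in track_reserved.items():
                if reserved and t not in referenced:
                    errors.append(f"track {t} reserved but unreferenced")
            return errors

        catalog = {"FILE1": [3, 4], "FILE2": [9]}
        track_reserved = {3: True, 4: True, 7: True, 9: False}
        print(validate(catalog, track_reserved))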

  2. 76 FR 66695 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-27

    .... DWHS P04 System name: Reduction-In-Force Case Files (February 11, 2011, 76 FR 7825). Changes....'' * * * * * DWHS P04 System name: Reduction-In-Force Case Files. System location: Human Resources Directorate... system: Storage: Paper file folders. Retrievability: Filed alphabetically by last name. Safeguards...

  3. Personal File Management for the Health Sciences.

    ERIC Educational Resources Information Center

    Apostle, Lynne

    Written as an introduction to the concepts of creating a personal or reprint file, this workbook discusses both manual and computerized systems, with emphasis on the preliminary groundwork that needs to be done before starting any filing system. A file assessment worksheet is provided; considerations in developing a personal filing system are…

  4. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Random Selection International Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign...) For a description of file number information, see The International Bureau Filing System File Number... 47 Telecommunication 1 2013-10-01 2013-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  5. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2010-10-01 2010-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  6. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Random Selection International Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign...) For a description of file number information, see The International Bureau Filing System File Number... 47 Telecommunication 1 2012-10-01 2012-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  7. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2011-10-01 2011-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  8. 47 CFR 1.10008 - What are IBFS file numbers?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Random Selection International Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign...) For a description of file number information, see The International Bureau Filing System File Number... 47 Telecommunication 1 2014-10-01 2014-10-01 false What are IBFS file numbers? 1.10008 Section 1...

  9. A File Archival System

    NASA Technical Reports Server (NTRS)

    Fanselow, J. L.; Vavrus, J. L.

    1984-01-01

    ARCH, file archival system for DEC VAX, provides for easy offline storage and retrieval of arbitrary files on DEC VAX system. System designed to eliminate situations that tie up disk space and lead to confusion when different programmers develop different versions of same programs and associated files.

  10. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences

    PubMed Central

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099

  11. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.

    PubMed

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.

  12. 75 FR 65467 - Combined Notice of Filings No. 1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-25

    ...: Venice Gathering System, L.L.C. Description: Venice Gathering System, L.L.C. submits tariff filing per 154.203: Venice Gathering System Rate Settlement Compliance Filing to be effective 11/1/2010. Filed...

  13. 76 FR 39453 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-06

    ... Commitment Schedule (``CCS'').\\10\\ CCS provides the Display Book[supreg] \\11\\ with the amount of shares that... (``BBO''). CCS interest is separate and distinct from other DMM interest in that it serves as the...

  14. 17 CFR 260.7a-27 - Title of securities.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... given the full designation of the class of shares and, if not included therein, the par or stated value... Form T-1 and Form T-2, if the rate of interest is not determined at the time these forms are filed. (c...

  15. 17 CFR 260.7a-27 - Title of securities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... given the full designation of the class of shares and, if not included therein, the par or stated value... Form T-1 and Form T-2, if the rate of interest is not determined at the time these forms are filed. (c...

  16. 17 CFR 260.7a-27 - Title of securities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... given the full designation of the class of shares and, if not included therein, the par or stated value... Form T-1 and Form T-2, if the rate of interest is not determined at the time these forms are filed. (c...

  17. 17 CFR 260.7a-27 - Title of securities.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... given the full designation of the class of shares and, if not included therein, the par or stated value... Form T-1 and Form T-2, if the rate of interest is not determined at the time these forms are filed. (c...

  18. 17 CFR 260.7a-27 - Title of securities.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... given the full designation of the class of shares and, if not included therein, the par or stated value... Form T-1 and Form T-2, if the rate of interest is not determined at the time these forms are filed. (c...

  19. 77 FR 28912 - Self-Regulatory Organizations; NYSE Amex LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-16

    ... credit per transaction when adding liquidity, if the SLP meets quoting requirements pursuant to Rule 107B... $0.0030 equity per share credit per transaction when adding liquidity, if the SLP does not meet the...

  20. 75 FR 47327 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-05

    ... interest in a share of Series A Non-Cumulative Perpetual Preferred Stock, $100,000 liquidation preference... on the subject line if e-mail is used. To help the Commission process and review your comments more...

  1. Cyberspace: The Community Frontier.

    ERIC Educational Resources Information Center

    Albanese, Andrew Richard

    2002-01-01

    This interview with John Perry Barlow (Grateful Dead lyricist/technology expert) addresses issues concerning cyberspace, technology, and culture. Topics include the idea of community; the Internet; the Electronic Frontier Foundation; the role of libraries; print materials; concepts of information; peer-to-peer technology; file sharing; and…

  2. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
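
    The recovery idea can be modeled behaviorally in a few lines: reads verify a stored parity bit and, on a mismatch, restore the value from the mirror copy. The Python model below is a software analogy only; the patent implements this in pipeline hardware with a dedicated error recovery instruction.

        # Behavioral model of mirrored-register soft-error recovery.
        class MirroredRegisterFile:
            def __init__(self, size):
                self.primary = [0] * size
                self.mirror = [0] * size
                self.parity = [0] * size

            def write(self, i, value):
                self.primary[i] = value
                self.mirror[i] = value
                self.parity[i] = bin(value).count("1") & 1

            def read(self, i):
                value = self.primary[i]
                if bin(value).count("1") & 1 != self.parity[i]:  # soft error
                    value = self.mirror[i]       # recover from the mirror
                    self.primary[i] = value      # and repair the primary
                return value

        rf = MirroredRegisterFile(32)
        rf.write(5, 0b1011)
        rf.primary[5] ^= 0b0100      # inject a single-bit upset
        assert rf.read(5) == 0b1011  # the read transparently recovers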

  3. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  4. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  5. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  6. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  7. 48 CFR 304.803-70 - Contract/order file organization and use of checklists.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Contract/order file organization and use of checklists. 304.803-70 Section 304.803-70 Federal Acquisition Regulations System HEALTH... content of HHS contract and order files, OPDIVs shall use the folder filing system and accompanying file...

  8. To evaluate and compare the efficacy, cleaning ability of hand and two rotary systems in root canal retreatment.

    PubMed

    Shivanand, Sunita; Patil, Chetan R; Thangala, Venugopal; Kumar, Pabbati Ravi; Sachdeva, Jyoti; Krishna, Akash

    2013-05-01

    To evaluate and compare the efficacy and cleaning ability of hand and two rotary systems in root canal retreatment. Sixty extracted premolars were retreated with the following systems: Group 1-ProTaper Universal retreatment files, Group 2-ProFile system, Group 3-H-file. Specimens were split longitudinally, and the amount of remaining gutta-percha on the canal walls was assessed using direct visual scoring with the aid of a stereomicroscope. Results were statistically analyzed using the ANOVA test. Completely clean root canal walls were not achieved with any of the techniques investigated. However, all three systems proved to be effective for gutta-percha removal. A significant difference was found between the ProTaper Universal retreatment file and the H-file, and also between the ProFile and the H-file. Under the conditions of the present study, ProTaper Universal retreatment files left significantly less gutta-percha and sealer than the ProFile and H-file. Rotary systems in combination with gutta-percha solvents can perform superiorly compared with time-tested traditional hand instrumentation in root canal retreatment.

  9. Mission Operations Center (MOC) - Precipitation Processing System (PPS) Interface Software System (MPISS)

    NASA Technical Reports Server (NTRS)

    Ferrara, Jeffrey; Calk, William; Atwell, William; Tsui, Tina

    2013-01-01

    MPISS is an automatic file transfer system that implements a combination of standard and mission-unique transfer protocols required by the Global Precipitation Measurement Mission (GPM) Precipitation Processing System (PPS) to control the flow of data between the MOC and the PPS. The primary features of MPISS are file transfers (both with and without PPS-specific protocols), logging of file transfer and system events to local files and a standard messaging bus, short-term storage of data files to facilitate retransmissions, and generation of file transfer accounting reports. The system includes a graphical user interface (GUI) to control the system, allow manual operations, and display events in real time. The PPS-specific protocols are an enhanced version of those that were developed for the Tropical Rainfall Measuring Mission (TRMM). All file transfers between the MOC and the PPS use the SSH File Transfer Protocol (SFTP). For reports and data files generated within the MOC, no additional protocols are used when transferring files to the PPS. For observatory data files, an additional handshaking protocol of data notices and data receipts is used. For each observatory data file transmitted, MPISS generates and sends to the PPS a data notice containing the data start and stop times along with a checksum for the file. MPISS retrieves the PPS-generated data receipts that indicate the success or failure of the PPS to ingest the data file and/or notice. MPISS retransmits the appropriate files as indicated in the receipt when required. MPISS also automatically retrieves files from the PPS. The unique feature of this software is the use of both standard and PPS-specific protocols in parallel. The advantage of this capability is that it supports users who require the PPS protocol as well as those who do not. The system is highly configurable to accommodate the needs of future users.
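
    The notice/receipt handshake reduces to a small amount of bookkeeping. The sketch below packs a checksum and start/stop times into a notice, and queues a file for retransmission when its receipt reports failure; the JSON carrier and field names are assumptions for illustration, not the GPM interface definition.

        # Illustrative data-notice / data-receipt bookkeeping.
        import hashlib
        import json

        def make_notice(path, start, stop):
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            return json.dumps({"file": path, "start": start,
                               "stop": stop, "checksum": digest})

        def handle_receipt(receipt_json, retransmit_queue):
            receipt = json.loads(receipt_json)
            if receipt["status"] != "ingested":
                retransmit_queue.append(receipt["file"])  # resend later

        with open("obs_20130101.dat", "wb") as f:         # stand-in data file
            f.write(b"sample telemetry")
        print(make_notice("obs_20130101.dat", "2013-001T00:00", "2013-001T01:00"))

        queue = []
        handle_receipt('{"file": "obs_20130101.dat", "status": "failed"}', queue)
        print(queue)  # ['obs_20130101.dat']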

  10. Cause-and-effect analysis of risk management files to assess patient care in the emergency department.

    PubMed

    White, Andrew A; Wright, Seth W; Blanco, Roberto; Lemonds, Brent; Sisco, Janice; Bledsoe, Sandy; Irwin, Cindy; Isenhour, Jennifer; Pichert, James W

    2004-10-01

    Identifying the etiologies of adverse outcomes is an important first step in improving patient safety and reducing malpractice risks. However, relatively little is known about the causes of emergency department-related adverse outcomes. The objective was to describe a method for identification of common causes of adverse outcomes in an emergency department. This methodology potentially can suggest ways to improve care and might provide a model for identification of factors associated with adverse outcomes. This was a retrospective analysis of 74 consecutive files opened by a malpractice insurer between 1995 and 2000. Each risk-management file was analyzed to identify potential causes of adverse outcomes. The main outcomes were rater-assigned codes for alleged problems with care (e.g., failures of communication or problems related to diagnosis). About 50% of cases were related to injuries or abdominal complaints. A contributing cause was found in 92% of cases, and most had more than one contributing cause. The most frequent contributing categories included failure to diagnose (45%), supervision problems (31%), communication problems (30%), patient behavior (24%), administrative problems (20%), and documentation (20%). Specific related factors within these categories, such as a lack of timely resident supervision and failure to follow policies and procedures, were identified. This project documented that an aggregate analysis of risk-management files has the potential to identify shared causes related to real or perceived adverse outcomes. Several potentially correctable systems problems were identified using this methodology. These simple, descriptive management tools may be useful in identifying issues for problem solving and can be easily learned by physicians and managers.
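
    Because each file can carry several contributing-cause codes, category frequencies are reported per case and can sum to more than 100 percent. The short sketch below shows that tally; the codes and cases are invented examples, not the study's data.

        # Per-case frequency of (possibly multiple) contributing causes.
        from collections import Counter

        cases = [
            {"diagnosis", "communication"},
            {"supervision"},
            {"diagnosis", "documentation", "administrative"},
        ]

        counts = Counter(code for case in cases for code in case)
        for code, n in counts.most_common():
            print(f"{code}: {100 * n / len(cases):.0f}% of cases")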

  11. 47 CFR 1.10006 - Is electronic filing mandatory?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Is electronic filing mandatory? 1.10006 Section... International Bureau Filing System § 1.10006 Is electronic filing mandatory? Electronic filing is mandatory for... System (IBFS) form is available. Applications for which an electronic form is not available must be filed...

  12. Checkpoint-Restart in User Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CRUISE implements a user-space file system that stores data in main memory and transparently spills over to other storage, like local flash memory or the parallel file system, as needed. CRUISE also exposes file contents for remote direct memory access, allowing external tools to copy files to the parallel file system in the background with reduced CPU interruption.
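
    A toy model of the spill-over behavior: writes land in main memory until a budget is exhausted, then continue transparently to a slower backing path. The threshold and paths below are illustrative; CRUISE does this inside a user-space file system, not a Python class.

        # Toy write path: fill a memory budget, then spill to backing storage.
        import io

        class SpillBuffer:
            def __init__(self, mem_budget, spill_path):
                self.mem = io.BytesIO()
                self.budget = mem_budget
                self.spill = open(spill_path, "wb")  # flash or parallel FS

            def write(self, data):
                room = self.budget - self.mem.tell()
                self.mem.write(data[:room])          # fast path: main memory
                if len(data) > room:
                    self.spill.write(data[room:])    # overflow: spill target

        buf = SpillBuffer(mem_budget=1024, spill_path="/tmp/spill.bin")
        buf.write(b"x" * 2048)  # first 1 KiB stays in memory, rest spills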

  13. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  14. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  15. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  16. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  17. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical records...

  18. Geosoft eXecutables (GX's) Developed by the U.S. Geological Survey, Version 2.0, with Notes on GX Development from Fortran Code

    USGS Publications Warehouse

    Phillips, Jeffrey D.

    2007-01-01

    Introduction: Geosoft executables (GX's) are custom software modules for use with the Geosoft Oasis montaj geophysical data processing system, which currently runs under the Microsoft Windows 2000 or XP operating systems. The U.S. Geological Survey (USGS) uses Oasis montaj primarily for the processing and display of airborne geophysical data. The ability to add custom software modules to the Oasis montaj system is a feature employed by the USGS in order to take advantage of the large number of geophysical algorithms developed by the USGS during the past half century. The main part of this report, along with Appendix 1, describes Version 2.0 GX's developed by the USGS or specifically for the USGS by contractors. These GX's perform both basic and advanced operations. Version 1.0 GX's developed by the USGS were described by Phillips and others (2003), and are included in Version 2.0. Appendix 1 contains the help files for the individual GX's. Appendix 2 describes the new method that was used to create the compiled GX files, starting from legacy Fortran source code. Although the new method shares many steps with the approach presented in the Geosoft GX Developer manual, it differs from that approach in that it uses free, open-source Fortran and C compilers and avoids all Fortran-to-C conversion.

  19. Quantitative evaluation of apically extruded debris with different single-file systems: Reciproc, F360 and OneShape versus Mtwo.

    PubMed

    Bürklein, S; Benten, S; Schäfer, E

    2014-05-01

    To assess in a laboratory setting the amount of apically extruded debris associated with different single-file nickel-titanium instrumentation systems compared to one multiple-file rotary system. Eighty human mandibular central incisors were randomly assigned to four groups (n = 20 teeth per group). The root canals were instrumented according to the manufacturers' instructions using the reciprocating single-file system Reciproc, the single-file rotary systems F360 and OneShape, and the multiple-file rotary Mtwo instruments. The apically extruded debris was collected and dried in pre-weighed glass vials. The amount of debris was assessed with a microbalance and statistically analysed using ANOVA and the post hoc Student-Newman-Keuls test. The time required to prepare the canals with the different instruments was also recorded. Reciproc produced significantly more debris compared to all other systems (P < 0.05). No significant difference was noted between the two single-file rotary systems and the multiple-file rotary system (P > 0.05). Instrumentation with the three single-file systems was significantly faster than with Mtwo (P < 0.05). Under the conditions of this study, all systems caused apical debris extrusion. Rotary instrumentation was associated with less debris extrusion compared to reciprocating instrumentation. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
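
    The group comparison reported above is a standard one-way ANOVA across the four instrumentation systems. The sketch below shows the call with scipy; the debris masses are fabricated placeholder numbers to demonstrate usage, not values from the study.

        # One-way ANOVA over four instrumentation groups (placeholder data).
        from scipy.stats import f_oneway

        reciproc = [0.52, 0.61, 0.47, 0.58]   # mg, illustrative values only
        f360     = [0.31, 0.28, 0.35, 0.30]
        oneshape = [0.29, 0.33, 0.27, 0.31]
        mtwo     = [0.30, 0.34, 0.28, 0.32]

        stat, p = f_oneway(reciproc, f360, oneshape, mtwo)
        print(f"F = {stat:.2f}, p = {p:.4f}")  # small p -> group means differ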

  20. 78 FR 1286 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-08

    ... (``CCS'').\\12\\ CCS provides the Display Book[supreg] \\13\\ with the amount of shares that the DMM is willing to trade at price points outside, at and inside the Exchange Best Bid or Best Offer (``BBO''). CCS...
