Sample records for performance shared storage

  1. Shared Storage Usage Policy | High-Performance Computing | NREL

    Science.gov Websites

    Shared Storage Usage Policy Shared Storage Usage Policy To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. /projects NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource and project

  2. Cryptonite: A Secure and Performant Data Repository on Public Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor

    2012-06-29

    Cloud storage has become immensely popular for maintaining synchronized copies of files and for sharing documents with collaborators. However, there is heightened concern about the security and privacy of Cloud-hosted data due to the shared infrastructure model and an implicit trust in the service providers. Emerging needs of secure data storage and sharing for domains like Smart Power Grids, which deal with sensitive consumer data, require the persistence and availability of Cloud storage but with client-controlled security and encryption, low key management overhead, and minimal performance costs. Cryptonite is a secure Cloud storage repository that addresses these requirements using a StrongBox model for shared key management. We describe the Cryptonite service and desktop client, discuss performance optimizations, and provide an empirical analysis of the improvements. Our experiments show that Cryptonite clients achieve a 40% improvement in file upload bandwidth over plaintext storage using the Azure Storage Client API despite the added security benefits, while our file download performance is 5 times faster than the baseline for files greater than 100MB.

  3. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path and the management path, which solves the metadata bottleneck problem of traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligence of storage devices and security of data access. We use object-based storage in the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In this storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. According to the storage model, we present the architecture of a distributed remote sensing image application system based on object-based storage, and give some test results comparing the write performance of the traditional network storage model and the object-based storage model.

  4. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; O'Keefe, Matthew T.

    1996-01-01

    The Global File System (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility such that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies, whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.

  5. Digital Photograph Security: What Plastic Surgeons Need to Know.

    PubMed

    Thomas, Virginia A; Rugeley, Patricia B; Lau, Frank H

    2015-11-01

    Sharing and storing digital patient photographs occur daily in plastic surgery. Two major risks associated with the practice, data theft and Health Insurance Portability and Accountability Act (HIPAA) violations, have been dramatically amplified by high-speed data connections and digital camera ubiquity. The authors review what plastic surgeons need to know to mitigate those risks and provide recommendations for implementing an ideal, HIPAA-compliant solution for plastic surgeons' digital photography needs: smartphones and cloud storage. Through informal discussions with plastic surgeons, the authors identified the most common photograph sharing and storage methods. For each method, a literature search was performed to identify the risks of data theft and HIPAA violations. HIPAA violation risks were confirmed by the second author (P.B.R.), a compliance liaison and privacy officer. A comprehensive review of HIPAA-compliant cloud storage services was performed. When possible, informal interviews with cloud storage services representatives were conducted. The most common sharing and storage methods are not HIPAA compliant, and several are prone to data theft. The authors' review of cloud storage services identified six HIPAA-compliant vendors that have strong to excellent security protocols and policies. These options are reasonably priced. Digital photography and technological advances offer major benefits to plastic surgeons but are not without risks. A proper understanding of data security and HIPAA regulations needs to be applied to these technologies to safely capture their benefits. Cloud storage services offer efficient photograph sharing and storage with layers of security to ensure HIPAA compliance and mitigate data theft risk.

  6. SSeCloud: Using secret sharing scheme to secure keys

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Huang, Yang; Yang, Disheng; Zhang, Yuzhen; Liu, Hengchang

    2017-08-01

    With the use of cloud storage services, one of the concerns is how to protect sensitive data securely and privately. While users enjoy the convenience of data storage provided by semi-trusted cloud storage providers, they are confronted with all kinds of risks at the same time. In this paper, we present SSeCloud, a secure cloud storage system that improves security and usability by applying a secret sharing scheme to secure keys. The system encrypts uploaded files on the client side and splits encryption keys into three shares. Each share is respectively stored by the user, the cloud storage provider, and an alternative trusted third party. Any two of the parties can reconstruct the keys. Evaluation results of the prototype system show that SSeCloud provides high security without too much performance penalty.

  7. Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure

    DOE PAGES

    Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R.; ...

    2016-04-05

    While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink; for instance, the data storage infrastructure at both the source and sink and its interplay with the wide-area network are increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy and operating system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems by buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth and data transfer rates across the high speed networks. We also investigate the performance degradation problems of LADS due to I/O contention on the parallel file system (PFS) when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams while sharing the PFS, to minimize the I/O contention on the PFS. Finally, with our evaluations, we observe that LADS with meta-scheduling can further improve the performance by up to 14 percent relative to LADS without meta-scheduling.

  8. Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R.

    While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink; for instance, the data storage infrastructure at both the source and sink and its interplay with the wide-area network are increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy and operating system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems by buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth and data transfer rates across the high speed networks. We also investigate the performance degradation problems of LADS due to I/O contention on the parallel file system (PFS) when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams while sharing the PFS, to minimize the I/O contention on the PFS. Finally, with our evaluations, we observe that LADS with meta-scheduling can further improve the performance by up to 14 percent relative to LADS without meta-scheduling.

  9. QoS support for end users of I/O-intensive applications using shared storage systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2011-01-19

    I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While the performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems, because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change in time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to meaningful performance guarantees such as bounded program execution time. We propose a scheme supporting end-users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements and saves as much of the remaining I/O capacity as possible for best-effort programs.
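
    The paper's translation from an execution-time goal to throughput bounds uses a machine learning model; as a rough, hedged sketch of the idea only (not the authors' method), one can back out a throughput bound from assumed estimates of a program's compute time and total I/O volume:

        # Toy sketch: turn an end-user execution-time goal into a minimum
        # sustained I/O throughput. est_compute_s and est_io_bytes are
        # hypothetical estimates a real system would have to learn or profile.
        def required_throughput(target_runtime_s, est_compute_s, est_io_bytes):
            io_budget_s = target_runtime_s - est_compute_s
            if io_budget_s <= 0:
                raise ValueError("execution-time goal leaves no time for I/O")
            return est_io_bytes / io_budget_s   # bytes per second

        # Finish in 600 s with ~400 s of compute and 50 GB of I/O:
        # the storage system must sustain roughly 250 MB/s for this program.
        print(required_throughput(600.0, 400.0, 50 * 10**9))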

  10. The performance of disk arrays in shared-memory database machines

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared-memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
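
    As a hedged worked example of the data temperature metric (assuming the common definition of accesses per second per gigabyte of stored data; the paper may define it differently):

        # Data temperature = sustained accesses per second divided by capacity.
        def data_temperature(accesses_per_s, capacity_gb):
            return accesses_per_s / capacity_gb

        # A hypothetical 10-disk array of 2 GB drives sustaining 500 accesses/s
        # runs at a data temperature of 25 accesses/s per GB.
        print(data_temperature(500, 10 * 2))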

  11. Implementing Journaling in a Linux Shared Disk File System

    NASA Technical Reports Server (NTRS)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer system implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted; these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  12. Neuroinformatics Database (NiDB) – A Modular, Portable Database for the Storage, Analysis, and Sharing of Neuroimaging Data

    PubMed Central

    Anderson, Beth M.; Stevens, Michael C.; Glahn, David C.; Assaf, Michal; Pearlson, Godfrey D.

    2013-01-01

    We present a modular, high-performance, open-source database system that incorporates popular neuroimaging database features with novel peer-to-peer sharing and a simple installation. An increasing number of imaging centers have created a massive amount of neuroimaging data since fMRI became popular more than 20 years ago, with much of that data unshared. The Neuroinformatics Database (NiDB) provides a stable platform to store and manipulate neuroimaging data and addresses several of the impediments to data sharing presented by the INCF Task Force on Neuroimaging Datasharing, including 1) motivation to share data, 2) technical issues, and 3) standards development. NiDB solves these problems by 1) minimizing PHI use and providing a cost-effective, simple, locally stored platform, 2) storing and associating all data (including genome) with a subject and creating a peer-to-peer sharing model, and 3) defining a sample, normalized definition of a data storage structure that is used in NiDB. NiDB not only simplifies the local storage and analysis of neuroimaging data, but also enables simple sharing of raw data and analysis methods, which may encourage further sharing. PMID:23912507

  13. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    NASA Astrophysics Data System (ADS)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon by a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on the Stornext sharing software in the Chang'E-3 mission. System performance fully meets the data storage requirements of the Miyun ground station. The Stornext file system is a shared file system with high performance; it supports multiple servers running different operating systems accessing the file system at the same time, and supports access to data over a variety of topologies, such as SAN and LAN. Stornext focuses on data protection and big data management. Quantum has reportedly sold more than 70,000 licenses of the Stornext file system worldwide, and its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 satellite downlink data and the management of local data storage. The station mainly completes exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control functions for the data receiving equipment. The ground station applied the SAN storage network system based on Stornext shared software to receive and manage data reliably. The computer system in the Miyun ground station is composed of business servers, application workstations and other storage equipment, so the storage system needs a shared file system that supports heterogeneous, multi-operating-system access. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s; thus the network throughput of the file system must be no less than 240 MB/s. At the same time, the maximum capacity of each data file is up to 810 GB. The storage system as planned requires that 10 nodes simultaneously write data to the file system through 16 channels with 240 MB/s network throughput. As integrated, the sharing system can provide 1020 MB/s write speed simultaneously. When the master storage server fails, the backup storage server takes over the normal service; client reads and writes are not affected, and the switching time is less than 5 s. The designed and integrated storage system meets users' requirements. However, an all-fiber approach is expensive in a SAN, and the SCSI hard disk transfer rate may still be the bottleneck in the development of the entire storage system. Stornext can provide users with efficient sharing, management and automatic archiving of large numbers of files, together with hardware solutions, and it occupies a leading position in big data management. Stornext is popular sharing software, but it has drawbacks: first, the Stornext software is expensive and is licensed per site, so when the network scale is large the purchase cost will be very high; second, configuring the parameters of the Stornext software places high demands on the skills of technical staff, and if a problem occurs it is difficult to troubleshoot.

  14. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach that instigates the lock contention problems on parallel file systems and having one file per process, which results in generating a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature with the parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a performance benefit of 1.2X to 6X with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets used for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of using the subfiling feature.
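
    A minimal sketch of the subfiling idea itself (not HDF5's actual subfiling API): ranks are grouped so that each group writes to one of a small number of subfiles, trading the lock contention of a single shared file against the file-count explosion of file-per-process.

        # Hypothetical rank-to-subfile mapping; names and roles are illustrative.
        def subfile_assignment(rank, nranks, nsubfiles):
            ranks_per_subfile = (nranks + nsubfiles - 1) // nsubfiles
            subfile = rank // ranks_per_subfile
            # The lowest rank of each group could act as that subfile's aggregator.
            role = "aggregator" if rank % ranks_per_subfile == 0 else "writer"
            return subfile, role

        # 1024 ranks writing into 16 subfiles instead of 1 shared file or 1024 files.
        for r in (0, 63, 64, 1023):
            print(r, subfile_assignment(r, 1024, 16))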

  15. Fair-share scheduling algorithm for a tertiary storage system

    NASA Astrophysics Data System (ADS)

    Jakl, Pavel; Lauret, Jérôme; Šumbera, Michal

    2010-04-01

    Any experiment facing petabyte-scale problems is in need of a highly scalable mass storage system (MSS) to keep a permanent copy of its valuable data. But beyond the permanent storage aspects, the sheer amount of data makes complete data-set availability onto live storage (centralized or aggregated space such as the one provided by Scalla/Xrootd) cost prohibitive, implying that a dynamic population from the MSS to faster storage is needed. One of the most challenging aspects of dealing with an MSS is the robotic tape component. If a robotic system is used as the primary storage solution, the intrinsically long access times (latencies) can dramatically affect the overall performance. To speed the retrieval of such data, one could organize the requests according to criteria aimed at delivering maximal data throughput. However, such approaches are often orthogonal to fair resource allocation, and a trade-off between quality of service, responsiveness and throughput is necessary for achieving an optimal and practical implementation of a truly fair-share oriented file restore policy. Starting from an explanation of the key criteria of such a policy, we will present evaluations and comparisons of three different MSS file restoration algorithms which meet fair-share requirements, and discuss their respective merits. We will quantify their impact on a typical file restoration cycle for the RHIC/STAR experimental setup and this, within a development, analysis and production environment relying on a shared MSS service [1].
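
    None of the paper's three algorithms is spelled out in the abstract; as a hedged toy illustration of the underlying trade-off only, a scheduler can group pending restore requests by tape (to amortize mounts) while choosing the next tape for whichever user has received the least service so far (to stay fair):

        from collections import defaultdict

        def pick_next_tape(pending, served_bytes):
            """pending: list of (user, tape, size); served_bytes: user -> bytes already restored."""
            if not pending:
                return None
            # Most-starved user first (fairness), then the tape holding the most
            # of that user's pending bytes (throughput).
            user = min({u for u, _, _ in pending},
                       key=lambda u: served_bytes.get(u, 0))
            per_tape = defaultdict(int)
            for u, tape, size in pending:
                if u == user:
                    per_tape[tape] += size
            return max(per_tape, key=per_tape.get)

        served = {"alice": 0, "bob": 5_000_000_000}
        reqs = [("alice", "T1", 2e9), ("alice", "T2", 8e9), ("bob", "T3", 1e9)]
        print(pick_next_tape(reqs, served))  # -> "T2"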

  16. 40 CFR 60.433 - Performance test and compliance provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... facilities routinely share the same raw ink storage/handling system with existing facilities, then temporary measurement procedures for segregating the raw inks, related coatings, VOC solvent, and water used at the... the purpose of measuring bulk storage tank quantities of each color of raw ink and each related...

  17. 40 CFR 60.433 - Performance test and compliance provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... facilities routinely share the same raw ink storage/handling system with existing facilities, then temporary measurement procedures for segregating the raw inks, related coatings, VOC solvent, and water used at the... the purpose of measuring bulk storage tank quantities of each color of raw ink and each related...

  18. 40 CFR 60.433 - Performance test and compliance provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... facilities routinely share the same raw ink storage/handling system with existing facilities, then temporary measurement procedures for segregating the raw inks, related coatings, VOC solvent, and water used at the... the purpose of measuring bulk storage tank quantities of each color of raw ink and each related...

  19. Data Storage and sharing for the long tail of science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B.; Pouchard, L.; Smith, P. M.

    Research data infrastructure such as storage must now accommodate new requirements resulting from trends in research data management that require researchers to store their data for the long term and make it available to other researchers. We propose Data Depot, a system and service that provides capabilities for shared space within a group, shared applications, flexible access patterns and ease of transfer at Purdue University. We evaluate Depot as a solution for storing and sharing multi-terabytes of data produced in the long tail of science with a use case in soundscape ecology studies from the Human-Environment Modeling and Analysis Laboratory. We observe that with the capabilities enabled by Data Depot, researchers can easily deploy fine-grained data access control, manage data transfer and sharing, as well as integrate their workflows into a High Performance Computing environment.

  20. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.

  1. The Petascale Data Storage Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Garth; Long, Darrell; Honeyman, Peter

    2013-07-01

    Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, the National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratories, Los Alamos National Laboratory, the University of Michigan, and the University of California at Santa Cruz.

  2. LADS: Optimizing Data Transfers using Layout-Aware Data Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R

    While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink. Data storage infrastructure at both the source and sink and its interplay with the wide-area network are increasingly the bottleneck to achieving high performance. In this paper, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework called LADS for terabit networks. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to use zero-copy, OS-bypass hardware when available. It can further improve data transfer performance under congestion on the end systems using buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving I/O bandwidth and data transfer rates across the high speed networks.

  3. Dual-Wavelength Sensitized Photopolymer for Holographic Data Storage

    NASA Astrophysics Data System (ADS)

    Tao, Shiquan; Zhao, Yuxia; Wan, Yuhong; Zhai, Qianli; Liu, Pengfei; Wang, Dayong; Wu, Feipeng

    2010-08-01

    Novel photopolymers for holographic storage were investigated by combining acrylate monomers and/or vinyl monomers as recording media and liquid epoxy resins plus an amine hardener as binder. In order to improve the holographic performance of the material at the blue-green wavelength band, two novel dyes were used as sensitizers. The methods of evaluating the holographic performance of the material, including the shrinkage and noise characteristics, are described in detail. Preliminary experiments show that samples with an optimized composition have good holographic performance, and it is possible to record dual-wavelength holograms simultaneously in this photopolymer by sharing the same optical system; thus the storage density and data rate can be doubly increased.

  4. Cake: Enabling High-level SLOs on Shared Storage Systems

    DTIC Science & Technology

    2012-11-07

    Cake: Enabling High-level SLOs on Shared Storage Systems. Andrew Wang, Shivaram Venkataraman, Sara Alspaugh, Randy H. Katz, Ion Stoica.

  5. Establishment of key grid-connected performance index system for integrated PV-ES system

    NASA Astrophysics Data System (ADS)

    Li, Q.; Yuan, X. D.; Qi, Q.; Liu, H. M.

    2016-08-01

    In order to further promote integrated optimization operation of distributed new energy/energy storage/active load, this paper studies the integrated photovoltaic-energy storage (PV-ES) system connected to the distribution network, and analyzes the typical structure and configuration selection for an integrated PV-ES generation system. By combining practical grid-connected characteristics requirements and the technology standard specification of photovoltaic generation systems, this paper takes full account of the energy storage system, and then proposes several new grid-connected performance indexes such as paralleled current sharing characteristic, parallel response consistency, adjusting characteristic, virtual moment of inertia characteristic, on-grid/off-grid switch characteristic, and so on. A comprehensive and feasible grid-connected performance index system is then established to support grid-connected performance testing of the integrated PV-ES system.

  6. Mapping the developmental constraints on working memory span performance.

    PubMed

    Bayliss, Donna M; Jarrold, Christopher; Baddeley, Alan D; Gunn, Deborah M; Leigh, Eleanor

    2005-07-01

    This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related variance in complex span. Results showed that developmental improvements in complex span were driven by 2 age-related but separable factors: 1 associated with general speed of processing and 1 associated with storage ability. In addition, there was an age-related contribution shared between working memory, processing speed, and storage ability that was important for higher level cognition. These results pose a challenge for models of complex span performance that emphasize the importance of processing speed alone.

  7. Shared prefetching to reduce execution skew in multi-threaded systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichenberger, Alexandre E; Gunnels, John A

    Mechanisms are provided for optimizing code to perform prefetching of data into a shared memory of a computing device that is shared by a plurality of threads that execute on the computing device. A memory stream of a portion of code that is shared by the plurality of threads is identified. A set of prefetch instructions is distributed across the plurality of threads. Prefetch instructions are inserted into the instruction sequences of the plurality of threads such that each instruction sequence has a separate sub-portion of the set of prefetch instructions, thereby generating optimized code. Executable code is generated based on the optimized code and stored in a storage device. The executable code, when executed, performs the prefetches associated with the distributed set of prefetch instructions in a shared manner across the plurality of threads.

  8. Cricket: A Mapped, Persistent Object Store

    NASA Technical Reports Server (NTRS)

    Shekita, Eugene; Zwilling, Michael

    1996-01-01

    This paper describes Cricket, a new database storage system that is intended to be used as a platform for design environments and persistent programming languages. Cricket uses the memory management primitives of the Mach operating system to provide the abstraction of a shared, transactional single-level store that can be directly accessed by user applications. In this paper, we present the design and motivation for Cricket. We also present some initial performance results which show that, for its intended applications, Cricket can provide better performance than a general-purpose database storage system.

  9. Unbreakable distributed storage with quantum key distribution network and password-authenticated secret sharing

    PubMed Central

    Fujiwara, M.; Waseda, A.; Nojima, R.; Moriai, S.; Ogata, W.; Sasaki, M.

    2016-01-01

    Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers and channels to link them. In such a system, a secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored in each server. To reconstruct them, the data owner should gather plural pieces. Shamir's (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k pieces of them must be gathered for reconstruction, furnishes information theoretic security, that is, even if attackers could collect shares of less than the threshold k, they cannot get any information about the data, even with unlimited computing power. Behind this scenario, however, it is assumed that data transmission and authentication are perfectly secure, which is not trivial in practice. Here we propose a totally information theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km). PMID:27363566
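
    A minimal sketch of the Shamir (k, n)-threshold scheme referenced above, over a small prime field (a production system would use a large prime, a secure random source, and authenticated shares):

        import random

        P = 2_147_483_647  # prime field modulus, chosen for illustration only

        def split(secret, k, n):
            """Split secret into n shares; any k shares reconstruct it."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 from any k shares."""
            secret = 0
            for j, (xj, yj) in enumerate(shares):
                num = den = 1
                for m, (xm, _) in enumerate(shares):
                    if m != j:
                        num = num * (-xm) % P
                        den = den * (xj - xm) % P
                secret = (secret + yj * num * pow(den, P - 2, P)) % P
            return secret

        shares = split(123456789, k=3, n=5)
        print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123456789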

  10. Unbreakable distributed storage with quantum key distribution network and password-authenticated secret sharing.

    PubMed

    Fujiwara, M; Waseda, A; Nojima, R; Moriai, S; Ogata, W; Sasaki, M

    2016-07-01

    Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers and channels to link them. In such a system, a secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored in each server. To reconstruct them, the data owner should gather plural pieces. Shamir's (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k pieces of them must be gathered for reconstruction, furnishes information theoretic security, that is, even if attackers could collect shares of less than the threshold k, they cannot get any information about the data, even with unlimited computing power. Behind this scenario, however, it is assumed that data transmission and authentication are perfectly secure, which is not trivial in practice. Here we propose a totally information theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km).

  11. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be de-compressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
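
    A hedged sketch of the client-side step described above: each chunk is compressed before it leaves the compute or burst buffer node, and decompressed by the client on read. The in-memory dict stands in for whatever shared-object write path the storage node exposes.

        import zlib

        def write_chunk(store, object_id, offset, chunk: bytes):
            compressed = zlib.compress(chunk, level=6)
            store[(object_id, offset)] = compressed   # storage node sees only compressed bytes
            return len(compressed)

        def read_chunk(store, object_id, offset) -> bytes:
            return zlib.decompress(store[(object_id, offset)])

        store = {}
        data = b"checkpoint " * 1000
        print(write_chunk(store, "obj1", 0, data), "compressed bytes stored")
        assert read_chunk(store, "obj1", 0) == data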

  12. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.

  13. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
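
    As a hedged h5py sketch of the general chunking-and-compression idea (not OpenMSI's actual file layout), chunking an (x, y, m/z) cube so that one chunk holds one full spectrum makes single-spectrum reads touch a single compressed chunk:

        import h5py
        import numpy as np

        nx, ny, nmz = 64, 64, 4096  # hypothetical cube dimensions
        with h5py.File("msi_demo.h5", "w") as f:
            dset = f.create_dataset(
                "msi", shape=(nx, ny, nmz), dtype="f4",
                chunks=(1, 1, nmz),          # one chunk per spectrum
                compression="gzip")
            dset[0, 0, :] = np.random.rand(nmz)   # write one spectrum

        with h5py.File("msi_demo.h5", "r") as f:
            spectrum = f["msi"][0, 0, :]     # selective read of a single chunk
            print(spectrum.shape)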

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine

    The purpose of this report is to clarify the challenges associated with storage for secure enclaves. The major focus areas for the report are: - review of relevant parallel filesystem technologies to identify assets and gaps; - review of filesystem isolation/protection mechanisms, to include native filesystem capabilities and auxiliary/layered techniques; - definition of storage architectures that can be used for customizable compute enclaves (i.e., clarification of use-cases that must be supported for shared storage scenarios); - investigation of vendor products related to secure storage. This study provides technical details on the storage and filesystems used for HPC, with particular attention on elements that contribute to creating secure storage. We outline the pieces of a shared storage architecture that balances protection and performance by leveraging the isolation capabilities available in filesystems and virtualization technologies to maintain the integrity of the data. Key Points: There are a few existing and in-progress protection features in Lustre related to secure storage, which are discussed in Chapter 3.1. These include authentication capabilities like GSSAPI/Kerberos and the in-progress work for GSSAPI/Host-keys. The GPFS filesystem provides native support for encryption, which is not directly available in Lustre. Additionally, GPFS includes authentication/authorization mechanisms for inter-cluster sharing of filesystems (Chapter 3.2). The limitations of key importance for secure storage/filesystems are: (i) restricting sub-tree mounts for parallel filesystems (which is not directly supported in Lustre or GPFS), and (ii) segregation of hosts on the storage network and practical complications with dynamic additions to the storage network, e.g., LNET. A challenge for VM-based use cases will be to provide efficient IO forwarding of the parallel filesystem from the host to the guest (VM). There are promising options like para-virtualized filesystems to help with this issue, which are particular instances of the more general challenge of efficient host/guest IO that is the focus of interfaces like virtio. A collection of bridging technologies has been identified in Chapter 4, which can be helpful to overcome the limitations and challenges of supporting efficient storage for secure enclaves. The synthesis of native filesystem security mechanisms and bridging technologies led to an isolation-centric storage architecture that is proposed in Chapter 5, which leverages isolation mechanisms from different layers to facilitate secure storage for an enclave. Recommendations: The following highlights recommendations from the investigations done thus far. - The Lustre filesystem offers excellent performance but does not support some security-related features, e.g., encryption, that are included in GPFS. If encryption is of paramount importance, then GPFS may be a more suitable choice. - There are several possible Lustre-related enhancements that may provide functionality of use for secure enclaves. However, since these features are not currently integrated, the use of Lustre as a secure storage system may require more direct involvement (support). (*The network that connects the storage subsystem and users, e.g., Lustre's LNET.) - The use of OpenStack with GPFS will be more streamlined than with Lustre, as there are available drivers for GPFS. - The Manilla project offers Filesystem as a Service for OpenStack and is worth further investigation; Manilla has some support for GPFS. - The proposed Lustre enhancement of Dynamic-LNET should be further investigated to provide more dynamic changes to the storage network, which could be used to isolate hosts and their tenants. - The Linux namespaces offer a good solution for creating efficient restrictions to shared HPC filesystems. However, we still need to conduct a thorough round of storage/filesystem benchmarks. - Vendor products should be more closely reviewed, possibly to include evaluation of performance/protection of select products. (Note, we are investigating the option of evaluating equipment from Seagate/Xyratex.) Outline: The remainder of this report is structured as follows: - Section 1: Describes the growing importance of secure storage architectures and highlights some challenges for HPC. - Section 2: Provides background information on HPC storage architectures, relevant supporting technologies for secure storage and details on OpenStack components related to storage. Note that background material on HPC storage architectures in this chapter can be skipped if the reader is already familiar with Lustre and GPFS. - Section 3: A review of protection mechanisms in two HPC filesystems; details about available isolation, authentication/authorization and performance capabilities are discussed. - Section 4: Describes technologies that can be used to bridge gaps in HPC storage and filesystems to facilitate...

  15. A Secure and Efficient Audit Mechanism for Dynamic Shared Data in Cloud Storage

    PubMed Central

    2014-01-01

    With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data. PMID:24959630

  16. A secure and efficient audit mechanism for dynamic shared data in cloud storage.

    PubMed

    Kwon, Ohmin; Koo, Dongyoung; Shin, Yongjoo; Yoon, Hyunsoo

    2014-01-01

    With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data.

  17. FORCEnet Net Centric Architecture - A Standards View

    DTIC Science & Technology

    2006-06-01

    Architecture diagram layers: user-facing services; shared services; networking/communications; storage; computing platform; data interchange/integration; data management; applications (within an overall service platform and service framework).

  18. Parallel checksumming of data chunks of a shared data object using a log-structured file system

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-09-06

    Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
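
    A hedged sketch of the verification flow described above: the client computes a checksum per chunk, the checksum travels with the chunk to the shared object, and it is re-checked on read. The dict stands in for the storage node's interface.

        import zlib

        def put_chunk(store, object_id, offset, chunk: bytes):
            store[(object_id, offset)] = (chunk, zlib.crc32(chunk))

        def get_chunk(store, object_id, offset) -> bytes:
            chunk, crc = store[(object_id, offset)]
            if zlib.crc32(chunk) != crc:
                raise IOError("checksum mismatch: chunk failed integrity check")
            return chunk

        store = {}
        put_chunk(store, "ckpt", 0, b"x" * 4096)
        assert get_chunk(store, "ckpt", 0) == b"x" * 4096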

  19. Virtual memory support for distributed computing environments using a shared data object model

    NASA Astrophysics Data System (ADS)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together contribute a novel approach to the support for flexible coherence under application control.
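
    A minimal sketch of the memory-mapping technique the abstract builds on: a file standing in for a secondary-storage object is mapped into the address space, so the program reads and writes it like ordinary memory rather than through separate read()/write() system calls.

        import mmap

        with open("shared_object.bin", "wb") as f:
            f.truncate(4096)                      # size the backing object

        with open("shared_object.bin", "r+b") as f:
            mm = mmap.mmap(f.fileno(), 0)         # map the whole object
            mm[0:5] = b"hello"                    # write through memory
            mm.flush()                            # propagate to storage
            print(bytes(mm[0:5]))                 # read through memory
            mm.close()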

  20. Efficient Access to Massive Amounts of Tape-Resident Data

    NASA Astrophysics Data System (ADS)

    Yu, David; Lauret, Jérôme

    2017-10-01

    Randomly restoring files from tapes degrades the read performance primarily due to frequent tape mounts. The high latency of time-consuming tape mounts and dismounts is a major issue when accessing massive amounts of data from tape storage. BNL's mass storage system currently holds more than 80 PB of data on tapes, managed by HPSS. To restore files from HPSS, we make use of scheduler software called ERADAT. This scheduler system was originally based on code from Oak Ridge National Lab, developed in the early 2000s. After some major modifications and enhancements, ERADAT now provides advanced HPSS resource management, priority queuing, resource sharing, web-browser visibility of real-time staging activities, and advanced real-time statistics and graphs. ERADAT is also integrated with ACSLS and HPSS for near real-time mount statistics and resource control in HPSS. ERADAT is also the interface between HPSS and other applications such as the locally developed Data Carousel, providing fair resource-sharing policies and related capabilities. ERADAT has demonstrated great performance at BNL.

  1. Parallel-Vector Algorithm For Rapid Structural Analysis

    NASA Technical Reports Server (NTRS)

    Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.

    1993-01-01

    New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.

  2. A Point to Share: Streamlining Access Services Workflow through Online Collaboration, Communication, and Storage with Microsoft SharePoint

    ERIC Educational Resources Information Center

    Diffin, Jennifer; Chirombo, Fanuel; Nangle, Dennis; de Jong, Mark

    2010-01-01

    This article explains how the document management team (circulation and interlibrary loan) at the University of Maryland University College implemented Microsoft's SharePoint product to create a central hub for online collaboration, communication, and storage. Enhancing the team's efficiency, organization, and cooperation was the primary goal.…

  3. Policies | High-Performance Computing | NREL

    Science.gov Websites

    Use: Learn about the policy governing user accountability, resource use, and use by foreign nationals. Data Security: Learn about the data security policy, including data protection. Data Retention: Learn about the data retention policy, including project-centric and user-centric data. Shared Storage Usage: Learn about a policy

  4. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
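
    A minimal sketch of the strided write pattern described in the claim: each of N processes writes its checkpoint blocks at rank-interleaved offsets in the shared flash-resident file, which a later pass can migrate sequentially to disk. File layout and names here are purely illustrative.

    ```python
    import os

    def strided_offsets(rank, nprocs, block, total_blocks):
        """Byte offsets at which `rank` writes its blocks of size `block`
        into a shared checkpoint file, interleaved with the other ranks."""
        return [(i * nprocs + rank) * block for i in range(total_blocks)]

    def write_checkpoint(path, rank, nprocs, payload, block=1 << 20):
        # Split this rank's payload into fixed-size blocks and write each at its
        # strided offset; other ranks fill the remaining slots concurrently.
        blocks = [payload[i:i + block] for i in range(0, len(payload), block)]
        mode = "r+b" if os.path.exists(path) else "w+b"
        with open(path, mode) as f:
            for off, data in zip(strided_offsets(rank, nprocs, block, len(blocks)), blocks):
                f.seek(off)
                f.write(data)
    ```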

  5. Lead/acid batteries in systems to improve power quality

    NASA Astrophysics Data System (ADS)

    Taylor, P.; Butler, P.; Nerbun, W.

    Increasing dependence on computer technology is driving needs for extremely high-quality power to prevent loss of information, material, and workers' time that represent billions of dollars annually. This cost has motivated commercial and Federal research and development of energy storage systems that detect and respond to power-quality failures in milliseconds. Electrochemical batteries are among the storage media under investigation for these systems. Battery energy storage systems that employ either flooded lead/acid or valve-regulated lead/acid battery technologies are becoming commercially available to capture a share of this emerging market. Cooperative research and development between the US Department of Energy and private industry have led to installations of lead/acid-based battery energy storage systems to improve power quality at utility and industrial sites and commercial development of fully integrated, modular battery energy storage system products for power quality. One such system by AC Battery Corporation, called the PQ2000, is installed at a test site at Pacific Gas and Electric Company (San Ramon, CA, USA) and at a customer site at Oglethorpe Power Corporation (Tucker, GA, USA). The PQ2000 employs off-the-shelf power electronics in an integrated methodology to control the factors that affect the performance and service life of production-model, low-maintenance, flooded lead/acid batteries. This system, and other members of this first generation of lead/acid-based energy storage systems, will need to compete vigorously for a share of an expanding, yet very aggressive, power quality market.

  6. Visual and spatial working memory are not that dissociated after all: a time-based resource-sharing account.

    PubMed

    Vergauwe, Evie; Barrouillet, Pierre; Camos, Valérie

    2009-07-01

    Examinations of interference between visual and spatial materials in working memory have suggested domain- and process-based fractionations of visuo-spatial working memory. The present study examined the role of central time-based resource sharing in visuo-spatial working memory and assessed its role in obtained interference patterns. Visual and spatial storage were combined with both visual and spatial on-line processing components in computer-paced working memory span tasks (Experiment 1) and in a selective interference paradigm (Experiment 2). The cognitive load of the processing components was manipulated to investigate its impact on concurrent maintenance for both within-domain and between-domain combinations of processing and storage components. In contrast to both domain- and process-based fractionations of visuo-spatial working memory, the results revealed that recall performance was determined by the cognitive load induced by the processing of items, rather than by the domain to which those items pertained. These findings are interpreted as evidence for a time-based resource-sharing mechanism in visuo-spatial working memory.

  7. How much electrical energy storage do we need? A synthesis for the U.S., Europe, and Germany

    DOE PAGES

    Cebulla, Felix; Haas, Jannik; Eichman, Josh; ...

    2018-02-03

    Electrical energy storage (EES) is a promising flexibility source for prospective low-carbon energy systems. In the last couple of years, many studies for EES capacity planning have been produced. However, these resulted in a very broad range of power and energy capacity requirements for storage, making it difficult for policymakers to identify clear storage planning recommendations. Therefore, we studied 17 recent storage expansion studies pertinent to the U.S., Europe, and Germany. We then systemized the storage requirement per variable renewable energy (VRE) share and generation technology. Our synthesis reveals that with increasing VRE shares, the EES power capacity increases linearly; and the energy capacity, exponentially. Further, by analyzing the outliers, the EES energy requirements can be at least halved. It becomes clear that grids dominated by photovoltaic energy call for more EES, while large shares of wind rely more on transmission capacity. Taking into account the energy mix clarifies - to a large degree - the apparent conflict of the storage requirements between the existing studies. Finally, there might exist a negative bias towards storage because transmission costs are frequently optimistic (by neglecting execution delays and social opposition) and storage can cope with uncertainties, but these issues are rarely acknowledged in the planning process.
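
    As a hedged illustration of the reported trend only (the coefficients a, b, c are placeholders, not the studies' fitted values), the synthesis can be summarized as:

    ```latex
    % Illustration of the reported scaling only; a, b, c are not fitted values.
    % s is the VRE share of generation, 0 <= s <= 1.
    P_{\mathrm{EES}}(s) \;\approx\; a\, s
    \qquad\text{and}\qquad
    E_{\mathrm{EES}}(s) \;\approx\; b\, e^{c\, s}.
    ```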

  8. How much electrical energy storage do we need? A synthesis for the U.S., Europe, and Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cebulla, Felix; Haas, Jannik; Eichman, Josh

    Electrical energy storage (EES) is a promising flexibility source for prospective low-carbon energy systems. In the last couple of years, many studies for EES capacity planning have been produced. However, these resulted in a very broad range of power and energy capacity requirements for storage, making it difficult for policymakers to identify clear storage planning recommendations. Therefore, we studied 17 recent storage expansion studies pertinent to the U.S., Europe, and Germany. We then systemized the storage requirement per variable renewable energy (VRE) share and generation technology. Our synthesis reveals that with increasing VRE shares, the EES power capacity increases linearly; and the energy capacity, exponentially. Further, by analyzing the outliers, the EES energy requirements can be at least halved. It becomes clear that grids dominated by photovoltaic energy call for more EES, while large shares of wind rely more on transmission capacity. Taking into account the energy mix clarifies - to a large degree - the apparent conflict of the storage requirements between the existing studies. Finally, there might exist a negative bias towards storage because transmission costs are frequently optimistic (by neglecting execution delays and social opposition) and storage can cope with uncertainties, but these issues are rarely acknowledged in the planning process.

  9. Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    1998-01-01

    This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.

  10. Research and Development on the Storage Ring Vacuum System for the APS Upgrade Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stillwell, B.; Brajuskovic, B.; Carter, J.

    A number of research and development activities are underway at Argonne National Laboratory to build confidence in the designs for the storage ring vacuum system required for the Advanced Photon Source Up-grade project (APS-U) [1]. The predominant technical risks are: excessive residual gas pressures during operation; insufficient beam position monitor stability; excessive beam impedance; excessive heating by induced electrical surface currents; and insufficient operational reliability. Present efforts to mitigate these risks include: building and evaluating mockup assemblies; performing mechanical testing of chamber weld joints; developing computational tools; investigating design alternatives; and performing electrical bench measurements. Status of these activities and some of what has been learned to date will be shared.

  11. POSIX and Object Distributed Storage Systems Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing infrastructure has become an intensive, dynamic system used for first-hand data collection and analysis, resulting in a dense collection of data output. As we have transitioned to our current state, inefficient, limited storage systems have become an impediment to fast feedback to online shift crews. A centrally accessible, scalable, and redundant distributed storage system has become a necessity in this environment. OpenStack Swift Object Storage and Ceph Object Storage are two eye-opening technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests, emulating real-world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX-compliant file system mounted similarly to an NFS share, was of particular interest as it aligned with our requirements and was retained as our solution. I/O performance tests run against the Ceph POSIX file system have presented surprising results, indicating true potential for fast I/O and reliability. STAR's online compute farm has historically been used for job submission and first-hand data analysis. Reusing the online compute farm to support both a storage cluster and job submission will be an efficient use of the current infrastructure.
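
    A stripped-down example of the kind of single-client POSIX test described, sequentially writing and then reading a large file on a mounted path and reporting throughput, might look like the following; the path and sizes are placeholders, and this is not the benchmark the authors used.

    ```python
    import os, time

    def throughput_test(path, size_mb=1024, block_mb=4):
        """Write then read `size_mb` MiB at `path` and report MiB/s for each phase.
        Intended for a POSIX mount point (e.g. a CephFS or NFS directory)."""
        block = b"\0" * (block_mb << 20)
        nblocks = size_mb // block_mb

        t0 = time.time()
        with open(path, "wb") as f:
            for _ in range(nblocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())          # include the flush to stable storage
        write_mbps = size_mb / (time.time() - t0)

        t0 = time.time()
        with open(path, "rb") as f:
            while f.read(block_mb << 20):
                pass
        read_mbps = size_mb / (time.time() - t0)
        return write_mbps, read_mbps
    ```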

  12. Software Defined Cyberinfrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, Ian; Blaiszik, Ben; Chard, Kyle

    Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
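
    The if-trigger-then-action idea can be pictured with a minimal rule evaluator in which each rule pairs a trigger predicate over a storage event with an action; the event fields and actions below are illustrative and not the authors' notation.

    ```python
    # Minimal if-trigger-then-action sketch: rules are (trigger, action) pairs
    # evaluated against storage events such as "file created". Illustrative only.

    rules = [
        # trigger: a new FITS file appears        -> action: extract metadata and index it
        (lambda e: e["type"] == "create" and e["path"].endswith(".fits"),
         lambda e: print(f"index {e['path']}")),
        # trigger: any file modified under /shared -> action: replicate to the archive
        (lambda e: e["type"] == "modify" and e["path"].startswith("/shared/"),
         lambda e: print(f"replicate {e['path']}")),
    ]

    def on_event(event):
        for trigger, action in rules:
            if trigger(event):
                action(event)

    on_event({"type": "create", "path": "/shared/run42/image.fits"})
    ```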

  13. EOS developments

    NASA Astrophysics Data System (ADS)

    Sindrilaru, Elvin A.; Peters, Andreas J.; Adde, Geoffray M.; Duellmann, Dirk

    2017-10-01

    CERN has been developing and operating EOS as a disk storage solution successfully for over 6 years. The CERN deployment provides 135 PB and stores 1.2 billion replicas distributed over two computer centres. Deployment includes four LHC instances, a shared instance for smaller experiments and since last year an instance for individual user data as well. The user instance represents the backbone of the CERNBOX service for file sharing. New use cases like synchronisation and sharing, the planned migration to reduce AFS usage at CERN and the continuous growth have brought EOS to new challenges. Recent developments include the integration and evaluation of various technologies to do the transition from a single active in-memory namespace to a scale-out implementation distributed over many meta-data servers. The new architecture aims to separate the data from the application logic and user interface code, thus providing flexibility and scalability to the namespace component. Another important goal is to provide EOS as a CERN-wide mounted filesystem with strong authentication making it a single storage repository accessible via various services and front-ends (/eos initiative). This required new developments in the security infrastructure of the EOS FUSE implementation. Furthermore, there was a series of improvements targeting the end-user experience like tighter consistency and latency optimisations. In collaboration with Seagate as Openlab partner, EOS has a complete integration of OpenKinetic object drive cluster as a high-throughput, high-availability, low-cost storage solution. This contribution will discuss these three main development projects and present new performance metrics.

  14. Comparative Investigation of Shared Filesystems for the LHCb Online Cluster

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Neufeld, Niko

    2012-12-01

    This paper describes the investigative study undertaken to evaluate shared filesystem performance and suitability in the LHCb Online environment. Particular focus is given to the measurements and field tests designed and performed on an in-house OpenAFS setup; related comparisons with NFSv4 and GPFS (a clustered filesystem from IBM) are presented. The motivation for the investigation and the test setup arises from the need to serve common user-space like home directories, experiment software and control areas, and clustered log areas. Since the operational requirements on such user-space are stringent in terms of read-write operations (in frequency and access speed) and unobtrusive data relocation, test results are presented with emphasis on file-level performance, stability and “high-availability” of the shared filesystems. Use cases specific to the experiment operation in LHCb, including the specific handling of shared filesystems served to a cluster of 1500 diskless nodes, are described. Issues of prematurely expiring authenticated sessions are explicitly addressed, keeping in mind long-running analysis jobs on the Online cluster. In addition, quantitative test results are also presented with alternatives including NFSv4. Comparative measurements of filesystem performance benchmarks are presented, which are seen to be used as reference for decisions on potential migration of the current storage solution deployed in the LHCb online cluster.

  15. An International Review of the Development and Implementation of Shared Print Storage

    ERIC Educational Resources Information Center

    Genoni, Paul

    2013-01-01

    This article undertakes a review of the literature related to shared print storage and national repositories from 1980-2013. There is a separate overview of the relevant Australian literature. The coverage includes both relevant journal literature and major reports. In the process the article traces the developments in the theory and practice of…

  16. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; home directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory.

  17. Scaling to diversity: The DERECHOS distributed infrastructure for analyzing and sharing data

    NASA Astrophysics Data System (ADS)

    Rilee, M. L.; Kuo, K. S.; Clune, T.; Oloso, A.; Brown, P. G.

    2016-12-01

    Integrating Earth Science data from diverse sources such as satellite imagery and simulation output can be expensive and time-consuming, limiting scientific inquiry and the quality of our analyses. Reducing these costs will improve innovation and quality in science. The current Earth Science data infrastructure focuses on downloading data based on requests formed from the search and analysis of associated metadata. And while the data products provided by archives may use the best available data sharing technologies, scientist end-users generally do not have such resources (including staff) available to them. Furthermore, only once an end-user has received the data from multiple diverse sources and has integrated them can the actual analysis and synthesis begin. The cost of getting from idea to where synthesis can start dramatically slows progress. In this presentation we discuss a distributed computational and data storage framework that eliminates much of the aforementioned cost. The SciDB distributed array database is central as it is optimized for scientific computing involving very large arrays, performing better than less specialized frameworks like Spark. Adding spatiotemporal functions to the SciDB creates a powerful platform for analyzing and integrating massive, distributed datasets. SciDB allows Big Earth Data analysis to be performed "in place" without the need for expensive downloads and end-user resources. Spatiotemporal indexing technologies such as the hierarchical triangular mesh enable the compute and storage affinity needed to efficiently perform co-located and conditional analyses minimizing data transfers. These technologies automate the integration of diverse data sources using the framework, a critical step beyond current metadata search and analysis. Instead of downloading data into their idiosyncratic local environments, end-users can generate and share data products integrated from diverse multiple sources using a common shared environment, turning distributed active archive centers (DAACs) from warehouses into distributed active analysis centers.

  18. Drafting Recommendations for a Shared Statewide High-Density Storage Facility: Experiences with the State University Libraries of Florida Proposal

    ERIC Educational Resources Information Center

    Walker, Ben

    2008-01-01

    In August 2007, an $11.2 million proposal for a shared statewide high-density storage facility was submitted to the Board of Governors, the governing body of the State University System in Florida. The project was subsequently approved at a slightly lower level and funding was delayed until 2010/2011. The experiences of coordinating data…

  19. Hydrogen storage container

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Feng, Zhili; Zhang, Wei

    An apparatus and system are described for storing high-pressure fluids such as hydrogen. An inner tank and a pre-stressed concrete pressure vessel share the structural and/or pressure load on the inner tank. The system and apparatus provide a high-performance, low-cost container while mitigating hydrogen embrittlement of the metal tank. The system is useful for distributing hydrogen to a power grid or to a vehicle refueling station.

  20. Glycomic Analysis of Prostate Cancer

    DTIC Science & Technology

    2012-07-01

    allowed measurements of N-glycans and the Clinical Molecular Epidemiology Shared Resources which provided services for biological sample storage and...select N-glycans for the detection of prostate cancer. Aim3. Perform an exploratory study of N-glycans in urine of the participants and correlation of...cases. We have designed a pooled-unpooled study where initial discovery is conducted in smaller number of pooled samples followed by analysis of

  1. Set-up of a pump as turbine use in micro-pumped hydro energy storage: a case of study in Froyennes Belgium

    NASA Astrophysics Data System (ADS)

    Morabito, A.; Steimes, J.; Bontems, O.; Zohbi, G. Al; Hendrick, P.

    2017-04-01

    Its maturity makes pumped hydro energy storage (PHES) the most widely used energy storage technology. Micro-hydro plants (<100 kW) are emerging globally due to further increases in the share of renewable electricity production such as wind and solar power. This paper presents the design of a micro-PHES developed in Froyennes, Belgium, using a pump as turbine (PaT) coupled with a variable frequency drive (VFD). The methods adopted for selecting the most suitable pump for pumping and reverse modes are compared and discussed. Controlling and monitoring PaT performance is an essential design phase in the feasibility analysis of a PaT coupled with a VFD in a micro-PHES plant. This study aims to address technical research questions for a µ-PHES site operated with reversible pumps.

  2. Human Milk Handling and Storage Practices Among Peer Milk-Sharing Mothers.

    PubMed

    Reyes-Foster, Beatriz M; Carter, Shannon K; Hinojosa, Melanie Sberna

    2017-02-01

    Peer milk sharing, the noncommercial sharing of human milk from one parent or caretaker directly to another for the purposes of feeding a child, appears to be an increasing infant-feeding practice. Although the U.S. Food and Drug Administration has issued a warning against the practice, little is known about how people who share human milk handle and store milk and whether these practices are consistent with clinical safety protocols. Research aim: This study aimed to learn about the milk-handling practices of expressed human milk by milk-sharing donors and recipient caretakers. In this article, we explore the degree to which donors and recipients adhere to the Academy of Breastfeeding Medicine clinical recommendations for safe handling and storage. Online surveys were collected from 321 parents engaged in peer milk sharing. Univariate descriptive statistics were used to describe the safe handling and storage procedures for milk donors and recipients. A two-sample t-test was used to compare safety items common to each group. Multivariate ordinary least squares regression analysis was used to examine sociodemographic correlates of milk safety practices within the sample group. Findings indicate that respondents engaged in peer milk sharing report predominantly positive safety practices. Multivariate analysis did not reveal any relationship between safety practices and sociodemographic characteristics. The number of safe practices did not differ between donors and recipients. Parents and caretakers who participate in peer human milk sharing report engaging in practices that should reduce risk of bacterial contamination of expressed peer shared milk. More research on this particular population is recommended.

  3. Motor-cognitive dual-task performance: effects of a concurrent motor task on distinct components of visual processing capacity.

    PubMed

    Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P

    2018-01-01

    Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. 24 subjects of middle to higher age performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.

  4. A class Hierarchical, object-oriented approach to virtual memory management

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.

    1989-01-01

    The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.

  5. System and method for programmable bank selection for banked memory subsystems

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Hoenicke, Dirk; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan

    2010-09-07

    A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment to access memory storage distributed across the one or more memory storage structures.
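
    A behavioral sketch of the select logic described in the claim: a programmable set of physical-address bit positions is extracted, and the resulting value chooses which memory bank (storage structure) services the access. The bit positions used here are arbitrary examples, not values from the patent.

    ```python
    def make_bank_selector(bit_positions):
        """Return a function mapping a physical address to a bank index by
        extracting the programmed bit positions (LSB-first). Example positions only."""
        def select(addr):
            bank = 0
            for i, bit in enumerate(bit_positions):
                bank |= ((addr >> bit) & 1) << i
            return bank
        return select

    # Program the selector to use address bits 7 and 13 -> one of four banks.
    select_bank = make_bank_selector([7, 13])
    assert select_bank(0x0000) == 0
    assert select_bank(0x0080) == 1      # bit 7 set
    assert select_bank(0x2080) == 3      # bits 7 and 13 set
    ```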

  6. Cloud-based crowd sensing: a framework for location-based crowd analyzer and advisor

    NASA Astrophysics Data System (ADS)

    Aishwarya, K. C.; Nambi, A.; Hudson, S.; Nadesh, R. K.

    2017-11-01

    Cloud computing is an emerging field of computer science that integrates large, powerful computing and storage systems for personal as well as enterprise requirements. Mobile Cloud Computing extends this concept to mobile handheld devices. Crowdsensing, or more precisely Mobile Crowdsensing, is the process of sharing resources from an available group of mobile handheld devices that support sharing of different resources such as data, memory, and bandwidth to perform a single task for a collective purpose. In this paper, we propose a framework that uses crowdsensing to analyze crowds and advise the user on whether to visit a place. This is ongoing research in a new direction toward which cloud computing has shifted, and it is viable for further expansion in the near future.

  7. Social Networking Adapted for Distributed Scientific Collaboration

    NASA Technical Reports Server (NTRS)

    Karimabadi, Homa

    2012-01-01

    Sci-Share is a social networking site with novel, specially designed feature sets to enable simultaneous remote collaboration and sharing of large data sets among scientists. The site will include not only the standard features found on popular consumer-oriented social networking sites such as Facebook and Myspace, but also a number of powerful tools to extend its functionality to a science collaboration site. A Virtual Observatory is a promising technology for making data accessible from various missions and instruments through a Web browser. Sci-Share augments services provided by Virtual Observatories by enabling distributed collaboration and sharing of downloaded and/or processed data among scientists. This will, in turn, increase science returns from NASA missions. Sci-Share also enables better utilization of NASA's high-performance computing resources by providing an easy and central mechanism to access and share large files in users' space or those saved on mass storage. The most common means of remote scientific collaboration today remains the trio of e-mail for electronic communication, FTP for file sharing, and personalized Web sites for dissemination of papers and research results. Each of these tools has well-known limitations. Sci-Share transforms the social networking paradigm into a scientific collaboration environment by offering powerful tools for cooperative discourse and digital content sharing. Sci-Share differentiates itself by serving as an online repository for users' digital content with the following unique features: a) Sharing of any file type, any size, from anywhere; b) Creation of projects and groups for controlled sharing; c) Module for sharing files on HPC (High Performance Computing) sites; d) Universal accessibility of staged files as embedded links on other sites (e.g. Facebook) and tools (e.g. e-mail); e) Drag-and-drop transfer of large files, replacing awkward e-mail attachments (and file size limitations); f) Enterprise-level data and messaging encryption; and g) Easy-to-use intuitive workflow.

  8. dCache, Sync-and-Share for Big Data

    NASA Astrophysics Data System (ADS)

    Millar, AP; Fuhrmann, P.; Mkrtchyan, T.; Behrmann, G.; Bernardt, C.; Buchholz, Q.; Guelzow, V.; Litvintsev, D.; Schwank, K.; Rossi, A.; van der Reest, P.

    2015-12-01

    The availability of cheap, easy-to-use sync-and-share cloud services has split the scientific storage world into the traditional big data management systems and the very attractive sync-and-share services. With the former, the location of data is well understood while the latter is mostly operated in the Cloud, resulting in a rather complex legal situation. Beside legal issues, those two worlds have little overlap in user authentication and access protocols. While traditional storage technologies, popular in HEP, are based on X.509, cloud services and sync-and-share software technologies are generally based on username/password authentication or mechanisms like SAML or Open ID Connect. Similarly, data access models offered by both are somewhat different, with sync-and-share services often using proprietary protocols. As both approaches are very attractive, dCache.org developed a hybrid system, providing the best of both worlds. To avoid reinventing the wheel, dCache.org decided to embed another Open Source project: OwnCloud. This offers the required modern access capabilities but does not support the managed data functionality needed for large capacity data storage. With this hybrid system, scientists can share files and synchronize their data with laptops or mobile devices as easy as with any other cloud storage service. On top of this, the same data can be accessed via established mechanisms, like GridFTP to serve the Globus Transfer Service or the WLCG FTS3 tool, or the data can be made available to worker nodes or HPC applications via a mounted filesystem. As dCache provides a flexible authentication module, the same user can access its storage via different authentication mechanisms; e.g., X.509 and SAML. Additionally, users can specify the desired quality of service or trigger media transitions as necessary, thus tuning data access latency to the planned access profile. Such features are a natural consequence of using dCache. We will describe the design of the hybrid dCache/OwnCloud system, report on several months of operations experience running it at DESY, and elucidate the future road-map.

  9. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
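
    A rough sketch of the block-size rule stated above: each process targets a block of size total/P and writes it at its own offset after exchanging surplus data with neighbours (the exchange itself is elided). Names and the remainder-handling choice are illustrative.

    ```python
    def plan_cooperative_write(total_bytes, nprocs, rank):
        """Dynamically determined block size and the byte range that rank `rank`
        should end up writing, following the total-data / process-count rule."""
        block = total_bytes // nprocs          # dynamically determined block size
        start = rank * block
        # The last rank absorbs any remainder so the whole object is covered.
        end = total_bytes if rank == nprocs - 1 else start + block
        return block, (start, end)

    # Example: 10 GiB written by 64 processes -> 160 MiB blocks.
    block, (lo, hi) = plan_cooperative_write(10 << 30, 64, rank=0)
    ```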

  10. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 machines was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
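
    The reproducibility device mentioned, advancing random-number seeds so that results do not depend on how histories are divided among processors, can be illustrated with independent per-history streams; this is a generic sketch, not the KENO-Va generator.

    ```python
    import random

    def simulate_generation(n_histories, base_seed, workers):
        """Track the histories of one generation in parallel-friendly chunks.
        Each history has its own deterministic seed and a fixed result slot, so
        the final tally is identical no matter how work is divided."""
        results = [0.0] * n_histories

        def one_history(i):
            rng = random.Random(base_seed * 1_000_003 + i)  # independent per-history stream
            return rng.random()                             # placeholder for a tracked neutron

        # Each chunk could run on its own processor; results land in fixed slots.
        for w in range(workers):
            for i in range(w, n_histories, workers):
                results[i] = one_history(i)
        return sum(results)

    assert simulate_generation(1000, 42, workers=4) == simulate_generation(1000, 42, workers=7)
    ```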

  11. Resource Management and Risk Mitigation in Online Storage Grids

    ERIC Educational Resources Information Center

    Du, Ye

    2010-01-01

    This dissertation examines the economic value of online storage resources that could be traded and shared as potential commodities and the consequential investments and deployment of such resources. The value proposition of emergent business models such as Akamai and Amazon S3 in online storage grids is capacity provision and content delivery at…

  12. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities' clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and University clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read and write data from the federated storage.

  13. "Job-Sharing" Storage of Hydrogen in Ru/Li₂O Nanocomposites.

    PubMed

    Fu, Lijun; Tang, Kun; Oh, Hyunchul; Manickam, Kandavel; Bräuniger, Thomas; Chandran, C Vinod; Menzel, Alexander; Hirscher, Michael; Samuelis, Dominik; Maier, Joachim

    2015-06-10

    A "job-sharing" hydrogen storage mechanism is proposed and experimentally investigated in Ru/Li2O nanocomposites in which H(+) is accommodated on the Li2O side, while H(-) or e(-) is stored on the side of Ru. Thermal desorption-mass spectroscopy results show that after loading with D2, Ru/Li2O exhibits an extra desorption peak, which is in contrast to Ru nanoparticles or ball-milled Li2O alone, indicating a synergistic hydrogen storage effect due to the presence of both phases. By varying the ratio of the two phases, it is shown that the effect increases monotonically with the area of the heterojunctions, indicating interface related hydrogen storage. X-ray diffraction, Fourier transform infrared spectroscopy, and nuclear magnetic resonance results show that a weak LiO···D bond is formed after loading in Ru/Li2O nanocomposites with D2. The storage-pressure curve seems to favor H(+)/H(-) over H(+)/e(-) mechanism.

  14. Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul

    2002-07-29

    Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising the ease of use feature. Distributed memory models such as message passing or one-sided communication offer performance and scalability but they compromise the ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model, capabilities of the toolkit, and discusses its evolution.
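
    The programming style described, explicit transfers between a global address space and local buffers with locality under programmer control, follows a get/compute/put pattern; the sketch below uses NumPy stand-ins rather than the real Global Arrays API.

    ```python
    import numpy as np

    # Stand-in for a globally addressable array distributed over nodes. In the
    # real toolkit the get/put calls move data between remote and local memory;
    # here a NumPy array plays the "global" side.
    global_array = np.zeros((1024, 1024))

    def ga_get(lo, hi):
        """Copy a block of the global array into local storage (explicit locality)."""
        return global_array[lo[0]:hi[0], lo[1]:hi[1]].copy()

    def ga_put(lo, hi, block):
        """Write a locally computed block back to the global array."""
        global_array[lo[0]:hi[0], lo[1]:hi[1]] = block

    # Typical owner-computes step: fetch my patch, reuse it locally, write it back.
    lo, hi = (0, 0), (256, 256)
    local = ga_get(lo, hi)        # remote data is slower: fetch once...
    local += 1.0                  # ...compute on the local copy (data reuse)...
    ga_put(lo, hi, local)         # ...and publish the result.
    ```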

  15. Using GIS servers and interactive maps in spectral data sharing and administration: Case study of Ahvaz Spectral Geodatabase Platform (ASGP)

    NASA Astrophysics Data System (ADS)

    Karami, Mojtaba; Rangzan, Kazem; Saberi, Azim

    2013-10-01

    With the emergence of air-borne and space-borne hyperspectral sensors, spectroscopic measurements are gaining more importance in remote sensing. Therefore, the amount of available spectral reference data is constantly increasing. This rapid increase is often accompanied by poor data management, which leads to ultimate isolation of data on disk storage. Spectral data without precise description of the target, methods, environment, and sampling geometry cannot be used by other researchers. Moreover, existing spectral data (even when accompanied by good documentation) become virtually invisible or unreachable for researchers. Providing documentation and a data-sharing framework for spectral data, in which researchers are able to search for or share spectral data and documentation, would definitely improve the data lifetime. Relational Database Management Systems (RDBMS) are the main candidates for spectral data management, and their efficiency is proven by many studies and applications to date. In this study, a new approach to spectral data administration is presented based on the spatial identity of spectral samples. This method benefits from the scalability and performance of RDBMS for storage of spectral data, but uses GIS servers to provide users with interactive maps as an interface to the system. The spectral files, photographs and descriptive data are considered as belongings of a geospatial object. A spectral processing unit is responsible for evaluation of metadata quality and performing routine spectral processing tasks for newly-added data. As a result, by using internet browser software the users would be able to visually examine the availability of data and/or search for data based on descriptive attributes associated with it. The proposed system is scalable and, besides giving users a good sense of what data are available in the database, it facilitates participation of spectral reference data in producing geoinformation.

  16. Predictors of successful use of a web-based healthcare document storage and sharing system for pediatric cancer survivors: Cancer SurvivorLink™.

    PubMed

    Williamson, Rebecca; Meacham, Lillian; Cherven, Brooke; Hassen-Schilling, Leann; Edwards, Paula; Palgon, Michael; Espinoza, Sofia; Mertens, Ann

    2014-09-01

    Cancer SurvivorLink™, www.cancersurvivorlink.org , is a patient-controlled communication tool where survivors can electronically store and share documents with healthcare providers. Functionally, SurvivorLink serves as an electronic personal health record-a record of health-related information managed and controlled by the survivor. Recruitment methods to increase registration and the characteristics of registrants who completed each step of using SurvivorLink are described. Pediatric cancer survivors were recruited via mailings, survivor clinic, and community events. Recruitment method and Aflac Survivor Clinic attendance was determined for each registrant. Registration date, registrant type (parent vs. survivor), zip code, creation of a personal health record in SurvivorLink, storage of documents, and document sharing were measured. Logistic regression was used to determine the characteristics that predicted creation of a health record and storage of documents. To date, 275 survivors/parents have completed registration: 63 were recruited via mailing, 99 from clinic, 56 from community events, and 57 via other methods. Overall, 66.9 % registrants created a personal health record and 45.7 % of those stored a health document. There were no significant predictors for creating a personal health record. Attending a survivor clinic was the strongest predictor of document storage (p < 0.01). Of those with a document stored, 21.4 % shared with a provider. Having attended survivor clinic is the biggest predictor of registering and using SurvivorLink. Many survivors must advocate for their survivorship care. Survivor Link provides educational material and supports the dissemination of survivor-specific follow-up recommendations to facilitate shared clinical care decision making.

  17. Dynamic Collaboration Infrastructure for Hydrologic Science

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data intensive modeling and analysis. It supports the sharing of and collaboration around "resources" which are social objects defined to include both data and models in a structured standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud based computation for the execution of hydrologic models and analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure available that can be accessed from environments like HydroShare is increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast number of data and computing infrastructure without needing to correspondingly learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the results of this proof-of-concept prototype which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure in addressing big problems in hydrology.

  18. 40 CFR 63.1360 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... process unit. If the greatest input to and/or output from a shared storage vessel is the same for two or... not have an intervening storage vessel. If two or more PAI process units have the same input to or... process unit that sends the most material to or receives the most material from the storage vessel. If two...

  19. Virtualization - A Key Cost Saver in NASA Multi-Mission Ground System Architecture

    NASA Technical Reports Server (NTRS)

    Swenson, Paul; Kreisler, Stephen; Sager, Jennifer A.; Smith, Dan

    2014-01-01

    With science team budgets being slashed, and a lack of adequate facilities for science payload teams to operate their instruments, there is a strong need for innovative new ground systems that are able to provide necessary levels of capability: processing power, system availability, and redundancy, while maintaining a small footprint in terms of physical space, power utilization and cooling. The ground system architecture being presented is based on heritage from several other projects currently in development or operations at Goddard, but was designed and built specifically to meet the needs of the Science and Planetary Operations Control Center (SPOCC) as a low-cost payload command, control, planning and analysis operations center. However, this SPOCC architecture was designed to be generic enough to be re-used partially or in whole by other labs and missions (since its inception that has already happened in several cases!) The SPOCC architecture leverages a highly available VMware-based virtualization cluster with shared SAS Direct-Attached Storage (DAS) to provide an extremely high-performing, low-power-utilization and small-footprint compute environment that provides Virtual Machine resources shared among the various tenant missions in the SPOCC. The storage is also expandable, allowing future missions to chain up to 7 additional 2U chassis of storage at an extremely competitive cost if they require additional archive or virtual machine storage space. The software architecture provides a fully-redundant GMSEC-based message bus architecture based on the ActiveMQ middleware to track all health and safety status within the SPOCC ground system. All virtual machines utilize the GMSEC system agents to report system host health over the GMSEC bus, and spacecraft payload health is monitored using the Hammers Integrated Test and Operations System (ITOS) Galaxy Telemetry and Command (TC) system, which performs near-real-time limit checking and data processing on the downlinked data stream and injects messages into the GMSEC bus that are monitored to automatically page the on-call operator or Systems Administrator (SA) when an off-nominal condition is detected. This architecture, like the LTSP thin clients, is shared across all tenant missions. Other required IT security controls are implemented at the ground system level, including physical access controls, logical system-level authentication and authorization management, auditing and reporting, network management and a NIST 800-53 FISMA-Moderate IT Security Plan, Risk Assessment, and Contingency Plan, helping multiple missions share the cost of compliance with agency-mandated directives. The SPOCC architecture provides science payload control centers and backup mission operations centers with a cost-effective, standardized approach to virtualizing and monitoring resources that were traditionally multiple racks full of physical machines. The increased agility in deploying new virtual systems and thin client workstations can provide significant savings in personnel costs for maintaining the ground system. The cost savings in procurement, power, rack footprint and cooling as well as the shared multi-mission design greatly reduce upfront cost for missions moving into the facility. Overall, the authors hope that this architecture will become a model for how future NASA operations centers are constructed!

  20. EDGE3: A web-based solution for management and analysis of Agilent two color microarray experiments

    PubMed Central

    Vollrath, Aaron L; Smith, Adam A; Craven, Mark; Bradfield, Christopher A

    2009-01-01

    Background The ability to generate transcriptional data on the scale of entire genomes has been a boon both in the improvement of biological understanding and in the amount of data generated. The latter, the amount of data generated, has implications when it comes to effective storage, analysis and sharing of these data. A number of software tools have been developed to store, analyze, and share microarray data. However, a majority of these tools do not offer all of these features nor do they specifically target the commonly used two color Agilent DNA microarray platform. Thus, the motivating factor for the development of EDGE3 was to incorporate the storage, analysis and sharing of microarray data in a manner that would provide a means for research groups to collaborate on Agilent-based microarray experiments without a large investment in software-related expenditures or extensive training of end-users. Results EDGE3 has been developed with two major functions in mind. The first function is to provide a workflow process for the generation of microarray data by a research laboratory or a microarray facility. The second is to store, analyze, and share microarray data in a manner that doesn't require complicated software. To satisfy the first function, EDGE3 has been developed as a means to establish a well defined experimental workflow and information system for microarray generation. To satisfy the second function, the software application utilized as the user interface of EDGE3 is a web browser. Within the web browser, a user is able to access the entire functionality, including, but not limited to, the ability to perform a number of bioinformatics based analyses, collaborate between research groups through a user-based security model, and access to the raw data files and quality control files generated by the software used to extract the signals from an array image. Conclusion Here, we present EDGE3, an open-source, web-based application that allows for the storage, analysis, and controlled sharing of transcription-based microarray data generated on the Agilent DNA platform. In addition, EDGE3 provides a means for managing RNA samples and arrays during the hybridization process. EDGE3 is freely available for download at . PMID:19732451

  1. EDGE(3): a web-based solution for management and analysis of Agilent two color microarray experiments.

    PubMed

    Vollrath, Aaron L; Smith, Adam A; Craven, Mark; Bradfield, Christopher A

    2009-09-04

    The ability to generate transcriptional data on the scale of entire genomes has been a boon both in the improvement of biological understanding and in the amount of data generated. The latter, the amount of data generated, has implications when it comes to effective storage, analysis and sharing of these data. A number of software tools have been developed to store, analyze, and share microarray data. However, a majority of these tools do not offer all of these features nor do they specifically target the commonly used two color Agilent DNA microarray platform. Thus, the motivating factor for the development of EDGE(3) was to incorporate the storage, analysis and sharing of microarray data in a manner that would provide a means for research groups to collaborate on Agilent-based microarray experiments without a large investment in software-related expenditures or extensive training of end-users. EDGE(3) has been developed with two major functions in mind. The first function is to provide a workflow process for the generation of microarray data by a research laboratory or a microarray facility. The second is to store, analyze, and share microarray data in a manner that doesn't require complicated software. To satisfy the first function, EDGE3 has been developed as a means to establish a well defined experimental workflow and information system for microarray generation. To satisfy the second function, the software application utilized as the user interface of EDGE(3) is a web browser. Within the web browser, a user is able to access the entire functionality, including, but not limited to, the ability to perform a number of bioinformatics based analyses, collaborate between research groups through a user-based security model, and access to the raw data files and quality control files generated by the software used to extract the signals from an array image. Here, we present EDGE(3), an open-source, web-based application that allows for the storage, analysis, and controlled sharing of transcription-based microarray data generated on the Agilent DNA platform. In addition, EDGE(3) provides a means for managing RNA samples and arrays during the hybridization process. EDGE(3) is freely available for download at http://edge.oncology.wisc.edu/.

  2. Grid data access on widely distributed worker nodes using scalla and SRM

    NASA Astrophysics Data System (ADS)

    Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.

    2008-07-01

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model and now rely heavily on cheap disks attached to processing nodes, as such a model is extremely beneficial compared with expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storage (lifetime of files, file pinning), storage policies, or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing 350 TB of Storage Elements, and the experience of making such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and the approach taken to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past two years. Integration details, future plans, and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools and implementations.

  3. Novel Control Strategy for Multiple Run-of-the-River Hydro Power Plants to Provide Grid Ancillary Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob

    Electricity generated by Hydropower Plants (HPPs) contributes a considerable portion of bulk electricity generation and delivers it with a low carbon footprint. In fact, HPP generation provides the largest share among renewable energy resources, which include solar and wind energy. The increasing penetration of wind and solar lowers the inertia of the grid and hence poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments in power grids. This work proposes integrating multiple run-of-the-river (ROR) HPPs with scalable, multi-time-step energy storage so that their total output can be controlled. Although the size of any single energy storage unit is far smaller than that of a typical reservoir, multiple sets of energy storage distributed across different locations are managed cohesively, and the combined ratings of the storage units and the multiple ROR HPPs approximately equal the rating of a large, conventional HPP. The challenges associated with the system architecture and operation are described. Energy storage technologies such as supercapacitors, flywheels, and batteries can function as a dispatchable synthetic reservoir of scalable size. Supercapacitors, flywheels, and batteries are chosen to provide fast, medium, and slow responses, respectively, to support grid requirements. Various dynamic and transient power grid conditions are simulated, and the performance of the integrated ROR HPPs with energy storage is reported. The end goal of this research is to investigate the inertial equivalence of a large, conventional HPP with a unique set of multiple ROR HPPs and optimally rated energy storage systems.
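    The abstract does not spell out how the regulation burden is divided among the three storage types; as a minimal illustrative sketch (not the paper's controller), one common approach is to split the net power-imbalance signal by time scale with cascaded smoothing filters, so the battery absorbs the slow trend, the flywheel the medium band, and the supercapacitor the fast residual. The filter constants and the synthetic signal below are assumptions.

        # Illustrative sketch only: split a power-imbalance signal into slow,
        # medium, and fast components so battery, flywheel, and supercapacitor
        # each take the band suited to their response speed. Filter constants
        # are assumed values, not taken from the paper.
        import math
        import random

        def ema(signal, alpha):
            """Exponential moving average; larger alpha follows the signal faster."""
            out, state = [], signal[0]
            for x in signal:
                state = alpha * x + (1 - alpha) * state
                out.append(state)
            return out

        def allocate(imbalance_kw, alpha_slow=0.02, alpha_med=0.2):
            slow = ema(imbalance_kw, alpha_slow)                       # battery: slow trend
            med_total = ema(imbalance_kw, alpha_med)                   # slow + medium trend
            medium = [m - s for m, s in zip(med_total, slow)]          # flywheel: medium band
            fast = [x - m for x, m in zip(imbalance_kw, med_total)]    # supercapacitor: residual
            return slow, medium, fast

        if __name__ == "__main__":
            random.seed(1)
            # synthetic imbalance: slow ramp + oscillation + noise
            sig = [0.5 * t + 10 * math.sin(t / 5) + random.gauss(0, 2) for t in range(200)]
            slow, medium, fast = allocate(sig)
            # the three components sum back to the original signal at every step
            assert all(abs(s + m + f - x) < 1e-9
                       for s, m, f, x in zip(slow, medium, fast, sig))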

  4. From Physics to industry: EOS outside HEP

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Lamanna, M.

    2017-10-01

    In the competitive market for large-scale storage solutions, EOS, the current main disk storage system at CERN, has been showing its excellence in the multi-petabyte, high-concurrency regime. It has also shown disruptive potential in powering the sync-and-share service and in supporting innovative analysis environments alongside the storage of LHC data. EOS has likewise generated interest as a generic storage solution, ranging from university systems to very large installations for non-HEP applications.

  5. Integration of end-user Cloud storage for CMS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage, named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for distributed data analysis, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  6. Integration of end-user Cloud storage for CMS analysis

    DOE PAGES

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...

    2017-05-19

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage, named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for distributed data analysis, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  7. Application of XML in DICOM

    NASA Astrophysics Data System (ADS)

    You, Xiaozhen; Yao, Zhihong

    2005-04-01

    As a standard for the communication and storage of digital medical images, DICOM plays a very important role in the integration of hospital information. In DICOM, tags are expressed as numbers; only standard data elements can be interpreted by looking them up in the Data Dictionary, while private tags cannot. As a result, a DICOM file's readability and extensibility are limited. In addition, reading DICOM files requires special software. In our research, we introduced XML into DICOM, defining an XML-based DICOM transfer format, XML-DCM, and a DICOM storage format, X-DCM, and developing a program package to realize format interchange among DICOM, XML-DCM, and X-DCM. XML-DCM is based on the DICOM structure but replaces numeric tags with accessible XML character-string tags. The merits are as follows: a) every character-string tag of XML-DCM has an explicit meaning, so users can easily understand both standard and private data elements without looking up the Data Dictionary; in this way, the readability and data sharing of DICOM files are greatly improved. b) According to their requirements, users can define new character-string tags with explicit meanings for their own systems to extend the set of data elements. c) Users can read the medical image and associated information conveniently through IE, ultimately enlarging the scope of data sharing. The application of the storage format X-DCM will reduce data redundancy and save storage space. The results of practical application show that XML-DCM favors the integration and sharing of medical image data among different systems and devices.
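    To illustrate the central idea described above, the sketch below converts a DICOM data set into XML elements named after the Data Dictionary keywords instead of numeric tags. It is a minimal illustration only, not the paper's actual XML-DCM schema, and it assumes the third-party pydicom library plus the standard xml.etree module; private tags simply fall back to their numeric form.

        # Minimal sketch of the idea behind XML-DCM (not the paper's schema):
        # replace numeric DICOM tags with readable XML element names so files can
        # be inspected without a Data Dictionary. Assumes the pydicom library.
        import xml.etree.ElementTree as ET
        import pydicom

        def dicom_to_named_xml(path):
            ds = pydicom.dcmread(path)
            root = ET.Element("DicomObject")
            for elem in ds:
                if elem.VR == "SQ":      # skip nested sequences in this sketch
                    continue
                # elem.keyword is the dictionary name, e.g. "PatientName" for tag
                # (0010,0010); private tags have no keyword, so fall back to the
                # numeric tag.
                name = elem.keyword or f"Private_{elem.tag.group:04X}_{elem.tag.element:04X}"
                child = ET.SubElement(root, name, tag=str(elem.tag), vr=elem.VR)
                child.text = str(elem.value)
            return ET.tostring(root, encoding="unicode")

        # Usage with a hypothetical file name:
        # print(dicom_to_named_xml("image.dcm"))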

  8. Methods for Specifying Scientific Data Standards and Modeling Relationships with Applications to Neuroscience

    PubMed Central

    Rübel, Oliver; Dougherty, Max; Prabhat; Denes, Peter; Conant, David; Chang, Edward F.; Bouchard, Kristofer

    2016-01-01

    Neuroscience continues to experience a tremendous growth in data; in terms of the volume and variety of data, the velocity at which data is acquired, and in turn the veracity of data. These challenges are a serious impediment to sharing of data, analyses, and tools within and across labs. Here, we introduce BRAINformat, a novel data standardization framework for the design and management of scientific data formats. The BRAINformat library defines application-independent design concepts and modules that together create a general framework for standardization of scientific data. We describe the formal specification of scientific data standards, which facilitates sharing and verification of data and formats. We introduce the concept of Managed Objects, enabling semantic components of data formats to be specified as self-contained units, supporting modular and reusable design of data format components and file storage. We also introduce the novel concept of Relationship Attributes for modeling and use of semantic relationships between data objects. Based on these concepts we demonstrate the application of our framework to design and implement a standard format for electrophysiology data and show how data standardization and relationship-modeling facilitate data analysis and sharing. The format uses HDF5, enabling portable, scalable, and self-describing data storage and integration with modern high-performance computing for data-driven discovery. The BRAINformat library is open source, easy-to-use, and provides detailed user and developer documentation and is freely available at: https://bitbucket.org/oruebel/brainformat. PMID:27867355
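    The abstract's two key ideas, self-contained "managed" objects and explicit relationship attributes, can be illustrated with plain HDF5. The sketch below is a minimal illustration using the h5py library (group, dataset, and attribute names are invented for the example); it is not the BRAINformat API itself.

        # Minimal h5py sketch of (1) a self-describing managed group and
        # (2) a relationship attribute linking one dataset to another.
        # Names and values are illustrative assumptions, not BRAINformat.
        import numpy as np
        import h5py

        with h5py.File("ephys_example.h5", "w") as f:
            rec = f.create_group("recording_0")
            rec.attrs["object_type"] = "ElectrophysiologyRecording"
            rec.attrs["format_version"] = "0.1"

            voltages = rec.create_dataset("voltages", data=np.random.randn(4, 1000))
            voltages.attrs["units"] = "mV"
            voltages.attrs["sampling_rate_hz"] = 30000.0

            electrodes = rec.create_dataset("electrode_positions", data=np.random.rand(4, 3))
            electrodes.attrs["units"] = "mm"

            # relationship attribute: row i of /voltages was recorded at row i of
            # /electrode_positions; store the link as an HDF5 object reference so
            # tools can resolve it without hard-coded conventions.
            voltages.attrs["relationship_indexes"] = electrodes.ref

        with h5py.File("ephys_example.h5", "r") as f:
            v = f["recording_0/voltages"]
            linked = f[v.attrs["relationship_indexes"]]   # follow the reference
            print(v.shape, linked.shape)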

  9. Client/Server data serving for high performance computing

    NASA Technical Reports Server (NTRS)

    Wood, Chris

    1994-01-01

    This paper will attempt to examine the industry requirements for shared network data storage and sustained high-speed (tens to hundreds to thousands of megabytes per second) network data serving via the NFS and FTP protocol suite. It will discuss the current structural and architectural impediments to achieving these sorts of data rates cost-effectively today on many general-purpose servers, and will describe an architecture and resulting product family that addresses these problems. The sustained performance levels that were achieved in the lab will be shown, as well as a discussion of early customer experiences utilizing both the HIPPI-IP and ATM OC3-IP network interfaces.

  10. Intelligent Energy Management System for PV-Battery-based Microgrids in Future DC Homes

    NASA Astrophysics Data System (ADS)

    Chauhan, R. K.; Rajpurohit, B. S.; Gonzalez-Longatt, F. M.; Singh, S. N.

    2016-06-01

    This paper presents a novel intelligent energy management system (IEMS) for a DC microgrid connected to the public utility (PU), photovoltaics (PV), and a multi-battery bank (BB). The control objectives of the proposed IEMS are: (i) to ensure load sharing among the sources according to source capacity, (ii) to reduce power loss in the system (high efficiency), and (iii) to enhance system reliability and power quality. The proposed IEMS is novel because it follows the ideal characteristics of the battery (with some assumptions) for power sharing and selects the closest source to minimize power losses. The IEMS allows continuous and accurate monitoring with intelligent control of distribution system operations such as the battery bank energy storage (BBES) system, the PV system, and customer utilization of electric power. The proposed IEMS gives better operational performance across operating conditions in terms of load sharing, loss minimization, and reliability enhancement of the DC microgrid.
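    The first control objective, load sharing in proportion to source capacity, can be made concrete with a small worked example. The sketch below illustrates only that single objective under assumed source names and ratings; it is not the paper's IEMS logic, which additionally models ideal battery characteristics and closest-source selection.

        # Illustrative sketch: divide a DC-bus load among sources in proportion
        # to their capacities. Source names and ratings are assumptions.

        def share_load(load_kw, capacities_kw):
            total = sum(capacities_kw.values())
            if load_kw > total:
                raise ValueError("load exceeds combined source capacity")
            return {name: load_kw * cap / total for name, cap in capacities_kw.items()}

        if __name__ == "__main__":
            sources = {"pv": 3.0, "battery_bank": 5.0, "public_utility": 10.0}  # kW ratings
            print(share_load(9.0, sources))
            # {'pv': 1.5, 'battery_bank': 2.5, 'public_utility': 5.0}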

  11. Adapting federated cyberinfrastructure for shared data collection facilities in structural biology

    PubMed Central

    Stokes-Rees, Ian; Levesque, Ian; Murphy, Frank V.; Yang, Wei; Deacon, Ashley; Sliz, Piotr

    2012-01-01

    Early stage experimental data in structural biology is generally unmaintained and inaccessible to the public. It is increasingly believed that this data, which forms the basis for each macromolecular structure discovered by this field, must be archived and, in due course, published. Furthermore, the widespread use of shared scientific facilities such as synchrotron beamlines complicates the issue of data storage, access and movement, as does the increase of remote users. This work describes a prototype system that adapts existing federated cyberinfrastructure technology and techniques to significantly improve the operational environment for users and administrators of synchrotron data collection facilities used in structural biology. This is achieved through software from the Virtual Data Toolkit and Globus, bringing together federated users and facilities from the Stanford Synchrotron Radiation Lightsource, the Advanced Photon Source, the Open Science Grid, the SBGrid Consortium and Harvard Medical School. The performance and experience with the prototype provide a model for data management at shared scientific facilities. PMID:22514186

  12. Adapting federated cyberinfrastructure for shared data collection facilities in structural biology.

    PubMed

    Stokes-Rees, Ian; Levesque, Ian; Murphy, Frank V; Yang, Wei; Deacon, Ashley; Sliz, Piotr

    2012-05-01

    Early stage experimental data in structural biology is generally unmaintained and inaccessible to the public. It is increasingly believed that this data, which forms the basis for each macromolecular structure discovered by this field, must be archived and, in due course, published. Furthermore, the widespread use of shared scientific facilities such as synchrotron beamlines complicates the issue of data storage, access and movement, as does the increase of remote users. This work describes a prototype system that adapts existing federated cyberinfrastructure technology and techniques to significantly improve the operational environment for users and administrators of synchrotron data collection facilities used in structural biology. This is achieved through software from the Virtual Data Toolkit and Globus, bringing together federated users and facilities from the Stanford Synchrotron Radiation Lightsource, the Advanced Photon Source, the Open Science Grid, the SBGrid Consortium and Harvard Medical School. The performance and experience with the prototype provide a model for data management at shared scientific facilities.

  13. Grid Data Access on Widely Distributed Worker Nodes Using Scalla and SRM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakl, Pavel; /Prague, Inst. Phys.; Lauret, Jerome

    2011-11-10

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model and now rely heavily on cheap disks attached to processing nodes, as such a model is extremely beneficial compared with expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storage (lifetime of files, file pinning), storage policies, or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing 350 TB of Storage Elements, and the experience of making such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and the approach taken to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past two years. Integration details, future plans, and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools and implementations.

  14. A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction

    DOE PAGES

    Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...

    1995-01-01

    In this article, we present a program generation strategy of Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier Transforms and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated to high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
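    For readers unfamiliar with the algorithm itself, the sketch below shows the classical recursive Strassen scheme (seven half-size multiplications per level) in NumPy. It is a plain illustration of the algorithm, not the article's nonrecursive tensor-product formulation or its reduced-memory variant; the cutoff value is an arbitrary assumption.

        # Classical recursive Strassen multiplication for 2^n x 2^n matrices,
        # falling back to ordinary multiplication below a cutoff.
        import numpy as np

        def strassen(A, B, cutoff=64):
            n = A.shape[0]
            if n <= cutoff:
                return A @ B
            h = n // 2
            A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
            B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

            M1 = strassen(A11 + A22, B11 + B22, cutoff)
            M2 = strassen(A21 + A22, B11, cutoff)
            M3 = strassen(A11, B12 - B22, cutoff)
            M4 = strassen(A22, B21 - B11, cutoff)
            M5 = strassen(A11 + A12, B22, cutoff)
            M6 = strassen(A21 - A11, B11 + B12, cutoff)
            M7 = strassen(A12 - A22, B21 + B22, cutoff)

            C = np.empty_like(A)
            C[:h, :h] = M1 + M4 - M5 + M7   # C11
            C[:h, h:] = M3 + M5             # C12
            C[h:, :h] = M2 + M4             # C21
            C[h:, h:] = M1 - M2 + M3 + M6   # C22
            return C

        if __name__ == "__main__":
            A = np.random.rand(256, 256)
            B = np.random.rand(256, 256)
            assert np.allclose(strassen(A, B), A @ B)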

  15. A Combination Therapy of JO-I and Chemotherapy in Ovarian Cancer Models

    DTIC Science & Technology

    2013-10-01

    which consists of a 3PAR storage backend and is sharing data via a highly available NetApp storage gateway and 2 high throughput commodity storage...Environment is configured as self-service Enterprise cloud and currently hosts more than 700 virtual machines. The network infrastructure consists of...technology infrastructure and information system applications designed to integrate, automate, and standardize operations. These systems fuse state of

  16. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem architectures using PxFS and QFS were found to be incompatible with our software architecture, so sharing of data between systems is accomplished via traditional NFS. Linux was found to be limited in terms of deployment flexibility and consistency between versions. Despite the experimentation with various technologies, our current virtualized architecture is stable to the point of an average daily real time data return rate of 92.34% over the entire lifetime of the project to date.

  17. Effects of Scandinavian hydro power on storage needs in a fully renewable European power system for various transmission capacity scenarios

    NASA Astrophysics Data System (ADS)

    Kies, Alexander; Nag, Kabitri; von Bremen, Lueder; Lorenz, Elke; Heinemann, Detlev

    2015-04-01

    The penetration of renewable energies in the European power system has increased in the last decades (a 23.5% share of renewables in the gross electricity consumption of the EU-28 in 2012) and is expected to increase further, up to very high shares close to 100%. Planning and organizing this European energy transition towards sustainable power sources will be one of the major challenges of the 21st century. It is very likely that in a fully renewable European power system wind and photovoltaics (pv) will contribute the largest shares to the generation mix, followed by hydro power. However, feed-in from wind and pv is, due to the weather-dependent nature of their resources, fluctuating and non-controllable. To match generation and consumption, several solutions and their combinations have been proposed, such as very high backup capacities of conventional power generation (e.g. fossil or nuclear), storage, or the extension of the transmission grid. Apart from those options, hydro power can be used to counterbalance fluctuating wind and pv generation to some extent. In this work we investigate the effects of hydro power from Norway and Sweden on residual storage needs in Europe depending on the overlaying grid scenario. Weather data with high temporal and spatial resolution (7 x 7 km, 1 hour) were used to model the feed-in from wind and pv for the 34 investigated European countries for the years 2003-2012. Inflow into hydro storage and generation by run-of-river power plants were computed from ERA-Interim reanalysis runoff data at a spatial resolution of 0.75° x 0.75° and a daily temporal resolution. Power flows in a simplified transmission grid connecting the 34 European countries were modelled by minimizing dissipation using a DC-flow approximation. Previous work has shown that hydro power, namely in Norway and Sweden, can reduce storage needs in a renewable European power system to a large extent. A 15% share of hydro power in Europe can reduce storage needs by up to 50% with respect to stored energy. However, this requires large transmission capacities between the major hydro power producers in Scandinavia and the largest consumers of electrical energy in Western Europe. We show how Scandinavian hydro power can reduce storage needs, in dependence on the transmission grid, for two fully renewable scenarios: the first has its wind and pv generation capacities distributed according to an empirically derived approach, while the second has a spatial distribution of wind and pv generation capacities across Europe optimized to minimize storage needs. We show that in both cases hydro power, together with a well-developed transmission grid, has the potential to contribute a large share to the solution of the generation-consumption mismatch problem. The work is part of the RESTORE 2050 project (BMBF), which investigates the requirements for cross-country grid extensions, the usage of storage technologies and capacities, and the development of new balancing technologies.
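    The dissipation-minimizing DC-flow idea mentioned above can be illustrated compactly: with a node-link incidence matrix K and net injections P that sum to zero, the flow pattern that balances every node while minimizing the sum of squared link flows is the minimum-norm solution of K f = P. The sketch below is a toy example (four nodes, invented injections), not the project's actual grid model.

        # Toy dissipation-minimizing flow under a DC/transport approximation:
        # minimize sum(f**2) subject to K f = P, solved via the pseudoinverse.
        import numpy as np

        links = [(0, 1), (1, 2), (2, 3), (3, 0)]        # assumed 4-node ring
        K = np.zeros((4, len(links)))
        for e, (i, j) in enumerate(links):
            K[i, e] = 1.0     # flow on link e leaves node i ...
            K[j, e] = -1.0    # ... and arrives at node j

        P = np.array([2.0, -1.0, -3.0, 2.0])   # net exports (+) / imports (-); sums to zero

        f = np.linalg.pinv(K) @ P              # minimum-dissipation balanced flows
        assert np.allclose(K @ f, P)           # every node's injection is balanced
        print(dict(zip(links, np.round(f, 3))))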

  18. Cloud computing applications for biomedical science: A perspective.

    PubMed

    Navale, Vivek; Bourne, Philip E

    2018-06-01

    Biomedical research has become a digital data-intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research.

  19. Cloud computing applications for biomedical science: A perspective

    PubMed Central

    2018-01-01

    Biomedical research has become a digital data–intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research. PMID:29902176

  20. Dryden Flight Research Center Chemical Pharmacy Program

    NASA Technical Reports Server (NTRS)

    Davis, Bette

    1997-01-01

    The Dryden Flight Research Center (DFRC) Chemical Pharmacy "Crib" is a chemical sharing system which loans chemicals to users, rather than issuing them or having each individual organization or group purchasing the chemicals. This cooperative system of sharing chemicals eliminates multiple ownership of the same chemicals and also eliminates stockpiles. Chemical management duties are eliminated for each of the participating organizations. The chemical storage issues, hazards and responsibilities are eliminated. The system also ensures safe storage of chemicals and proper disposal practices. The purpose of this program is to reduce the total releases and transfers of toxic chemicals. The initial cost of the program to DFRC was $585,000. A savings of $69,000 per year has been estimated for the Center. This savings includes the reduced costs in purchasing, disposal and chemical inventory/storage responsibilities. DFRC has chemicals stored in 47 buildings and at 289 locations. When the program is fully implemented throughout the Center, there will be three chemical locations at this facility. The benefits of this program are the elimination of chemical management duties; elimination of the hazard associated with chemical storage; elimination of stockpiles; assurance of safe storage; assurance of proper disposal practices; assurance of a safer workplace; and more accurate emissions reports.

  1. Taking advantage of HTML5 browsers to realize the concepts of session state and workflow sharing in web-tool applications

    NASA Astrophysics Data System (ADS)

    Suftin, I.; Read, J. S.; Walker, J.

    2013-12-01

    Scientists prefer not having to be tied down to a specific machine or operating system in order to analyze local and remote data sets or publish work. Increasingly, analysis has been migrating to decentralized web services and data sets, using web clients to provide the analysis interface. While simplifying workflow access, analysis, and publishing of data, the move does bring with it its own unique set of issues. Web clients used for analysis typically offer workflows geared towards a single user, with steps and results that are often difficult to recreate and share with others. Furthermore, workflow results often may not be easily used as input for further analysis. Older browsers further complicate things by having no way to maintain larger chunks of information, often offloading the job of storage to the back-end server or trying to squeeze it into a cookie. It has been difficult to provide a concept of "session storage" or "workflow sharing" without a complex orchestration of the back-end for storage depending on either a centralized file system or database. With the advent of HTML5, browsers gained the ability to store more information through the use of the Web Storage API (a browser-cookie holds a maximum of 4 kilobytes). Web Storage gives us the ability to store megabytes of arbitrary data in-browser either with an expiration date or just for a session. This allows scientists to create, update, persist and share their workflow without depending on the backend to store session information, providing the flexibility for new web-based workflows to emerge. In the DSASWeb portal ( http://cida.usgs.gov/DSASweb/ ), using these techniques, the representation of every step in the analyst's workflow is stored as plain-text serialized JSON, which we can generate as a text file and provide to the analyst as an upload. This file may then be shared with others and loaded back into the application, restoring the application to the state it was in when the session file was generated. A user may then view results produced during that session or go back and alter input parameters, creating new results and producing new, unique sessions which they can then again share. This technique not only provides independence for the user to manage their session as they like, but also allows much greater freedom for the application provider to scale out without having to worry about carrying over user information or maintaining it in a central location.
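    The session-file mechanism described above boils down to serializing every workflow step as plain JSON, handing that text to the analyst, and later reloading it to restore the application state. The sketch below shows that round trip in Python under an invented step schema; the DSASweb portal itself performs the equivalent in the browser using the HTML5 Web Storage API and downloadable session files.

        # Minimal sketch of the session-file round trip (field names are
        # illustrative assumptions, not the DSASweb session format).
        import json

        def save_session(steps, path):
            with open(path, "w") as fh:
                json.dump({"version": 1, "steps": steps}, fh, indent=2)

        def load_session(path):
            with open(path) as fh:
                return json.load(fh)["steps"]

        if __name__ == "__main__":
            steps = [
                {"step": "load_shorelines", "params": {"source": "shorelines.shp"}},
                {"step": "cast_transects", "params": {"spacing_m": 50}},
                {"step": "compute_rates", "params": {"method": "linear_regression"}},
            ]
            save_session(steps, "session.json")
            assert load_session("session.json") == steps   # state restored exactly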

  2. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.
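    The trade-off the abstract describes, shared NFS transfer saturating the network for short jobs on large data versus co-locating data with computation, can be caricatured with a deliberately simplified wall-clock model. The formulas, bandwidths, and job parameters below are assumptions for illustration only; they are not the paper's validated theoretical models.

        # Toy wall-clock comparison: pull all data over a shared NFS link vs.
        # co-locate storage with computation so only a fraction of data moves.
        # All parameters are illustrative assumptions.

        def wallclock_nfs(n_jobs, data_gb, compute_s, nfs_gbps=10, cores=100):
            transfer_s = n_jobs * data_gb * 8 / nfs_gbps    # serialized on the shared link
            return transfer_s + n_jobs * compute_s / cores  # jobs compute in parallel

        def wallclock_colocated(n_jobs, data_gb, compute_s, local_fraction=0.9,
                                net_gbps=10, cores=100):
            remote_gb = n_jobs * data_gb * (1 - local_fraction)  # only non-local data moves
            return remote_gb * 8 / net_gbps + n_jobs * compute_s / cores

        if __name__ == "__main__":
            # "short" jobs on "large" data: transfer dominates and co-location wins
            print(wallclock_nfs(1000, data_gb=1.0, compute_s=60))        # 800 s transfer + 600 s compute
            print(wallclock_colocated(1000, data_gb=1.0, compute_s=60))  # 80 s transfer + 600 s compute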

  3. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  4. The open science grid

    NASA Astrophysics Data System (ADS)

    Pordes, Ruth; OSG Consortium; Petravick, Don; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Würthwein, Frank; Foster, Ian; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob

    2007-07-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing communities and engagement with new ones. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  5. 40 CFR 60.482-1a - Standards: General.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...

  6. 40 CFR 60.482-1a - Standards: General.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...

  7. 40 CFR 60.482-1a - Standards: General.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...

  8. 40 CFR 60.482-1a - Standards: General.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...

  9. 40 CFR 60.482-1a - Standards: General.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...

  10. The Shared Bibliographic Input Network (SBIN): A Summary of the Experiment.

    ERIC Educational Resources Information Center

    Cotter, Gladys A.

    As part of its mission to provide centralized services for the acquisition, storage, retrieval, and dissemination of scientific and technical information (STI) to support Department of Defense (DoD) research, development, and engineering studies programs, the Defense Technical Information Center (DTIC) sponsors the Shared Bibliographic Input…

  11. Secure key storage and distribution

    DOEpatents

    Agrawal, Punit

    2015-06-02

    This disclosure describes a distributed, fault-tolerant security system that enables the secure storage and distribution of private keys. In one implementation, the security system includes a plurality of computing resources that independently store private keys provided by publishers and encrypted using a single security system public key. To protect against malicious activity, the security system private key necessary to decrypt the publication private keys is not stored at any of the computing resources. Rather, portions, or shares, of the security system private key are stored at each of the computing resources within the security system, and multiple security systems must communicate and share partial decryptions in order to decrypt the stored private key.
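    The share-and-recombine idea described above is commonly realized with threshold secret sharing. The sketch below is a generic Shamir-style illustration over a prime field, splitting a key into five shares of which any three reconstruct it; it illustrates the concept only, not the scheme claimed in this patent, and the prime and parameters are assumptions.

        # Generic Shamir threshold secret-sharing sketch (illustrative only).
        # A real implementation would use the secrets module, not random.
        import random

        PRIME = 2**127 - 1   # a large Mersenne prime; the secret must be smaller than this

        def split(secret, n_shares, threshold):
            """Return n_shares points; any `threshold` of them recover the secret."""
            coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
            def poly(x):
                return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
            return [(x, poly(x)) for x in range(1, n_shares + 1)]

        def recover(shares):
            """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % PRIME
                        den = den * (xi - xj) % PRIME
                secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
            return secret

        if __name__ == "__main__":
            key = random.randrange(PRIME)           # stand-in for a private key
            shares = split(key, n_shares=5, threshold=3)
            assert recover(shares[:3]) == key       # any three shares suffice
            assert recover(shares[1:4]) == key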

  12. Composition and Realization of Source-to-Sink High-Performance Flows: File Systems, Storage, Hosts, LAN and WAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. To support such capabilities, significant progress has been made in various components including the deployment of 100 Gbps networks with future 1 Tbps bandwidth, increases in end-host capabilities with multiple cores and buses, capacity improvements in large disk arrays, and deployment of parallel file systems such as Lustre and GPFS. High-performance source-to-sink data flows must be composed of these component systems, which requires significant optimizations of the storage-to-host data and execution paths to match the edge and long-haul network connections. In particular, end systems are currently supported by 10-40 Gbps Network Interface Cards (NIC) and 8-32 Gbps storage Host Channel Adapters (HCAs), which carry the individual flows that collectively must reach network speeds of 100 Gbps and higher. Indeed, such data flows must be synthesized using multicore, multibus hosts connected to high-performance storage systems on one side and to the network on the other side. Current experimental results show that the constituent flows must be optimally composed and preserved from storage systems, across the hosts and the networks with minimal interference. Furthermore, such a capability must be made available transparently to the science users without placing undue demands on them to account for the details of underlying systems and networks. And, this task is expected to become even more complex in the future due to the increasing sophistication of hosts, storage systems, and networks that constitute the high-performance flows. The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to achieve source-to-sink high-performance flows, and (2) develop tools that provide these capabilities through simple interfaces to users and applications. In terms of the former, we propose to develop (1) optimization methods that align and transition multiple storage flows to multiple network flows on multicore, multibus hosts; and (2) edge and long-haul network path realization and maintenance using advanced provisioning methods including OSCARS and OpenFlow. We also propose synthesis methods that combine these individual technologies to compose high-performance flows using a collection of constituent storage-network flows, and realize them across the storage and local network connections as well as long-haul connections. We propose to develop automated user tools that profile the hosts, storage systems, and network connections; compose the source-to-sink complex flows; and set up and maintain the needed network connections. These solutions will be tested using (1) 100 Gbps connection(s) between Oak Ridge National Laboratory (ORNL) and Argonne National Laboratory (ANL) with storage systems supported by Lustre and GPFS file systems with an asymmetric connection to University of Memphis (UM); (2) ORNL testbed with multicore and multibus hosts, switches with OpenFlow capabilities, and network emulators; and (3) 100 Gbps connections from ESnet and their Openflow testbed, and other experimental connections. This proposal brings together the expertise and facilities of the two national laboratories, ORNL and ANL, and UM. It also represents a collaboration between DOE and the Department of Defense (DOD) projects at ORNL by sharing technical expertise and personnel costs, and leveraging the existing DOD Extreme Scale Systems Center (ESSC) facilities at ORNL.

  13. THE WIDE-AREA ENERGY STORAGE AND MANAGEMENT SYSTEM PHASE II Final Report - Flywheel Field Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Makarov, Yuri V.; Weimar, Mark R.

    2010-08-31

    This research was conducted by Pacific Northwest National Laboratory (PNNL), operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute, for the Bonneville Power Administration (BPA), the California Institute for Energy and Environment (CIEE), and the California Energy Commission (CEC). A wide-area energy management system (WAEMS) is a centralized control system that operates energy storage devices (ESDs) located in different places to provide energy and ancillary services that can be shared among balancing authorities (BAs). The goal of this research is to conduct flywheel field tests and investigate the technical characteristics and economics of combined hydro-flywheel regulation services that can be shared between the Bonneville Power Administration (BPA) and California Independent System Operator (CAISO) controlled areas. This report is the second interim technical report for Phase II of the WAEMS project. It presents: 1) the methodology for sharing regulation service between balancing authorities, 2) the algorithm for allocating the regulation signal between the flywheel and the hydro power plant to minimize wear-and-tear on the hydro power plants, 3) field results of the hydro-flywheel regulation service (conducted by Beacon Power), and 4) the performance metrics and economic analysis of the combined hydro-flywheel regulation service.

  14. An innovative privacy preserving technique for incremental datasets on cloud computing.

    PubMed

    Aldeen, Yousra Abdul Alsahib S; Salleh, Mazleena; Aljeroudi, Yazan

    2016-08-01

    Cloud computing (CC) is a service-based delivery model offering enormous computer processing power and data storage across connected communication channels. It has provided an overwhelming technological impetus to the internet (web)-mediated IT industry, where users can easily share private data for further analysis and mining. Furthermore, user-friendly CC services make it economical to deploy a wide range of applications. Meanwhile, simple data sharing has invited various phishing attacks and malware-assisted security threats. Some privacy-sensitive applications, such as health services on the cloud, that are built with several economic and operational benefits necessitate enhanced security. Thus, strong cyberspace security and mitigation against phishing attacks have become mandatory to protect overall data privacy. Typically, datasets from diverse applications are anonymized to give better privacy to their owners, but without providing the full secrecy requirements for newly added records. Some proposed techniques address this issue by re-anonymizing the datasets from scratch. Full privacy protection over incremental datasets on CC is therefore far from being achieved, and the distribution of huge data volumes across multiple storage nodes further limits privacy preservation. In this view, we propose a new anonymization technique to attain better privacy protection with high data utility over distributed and incremental datasets on CC. The effectiveness of data privacy preservation and improved confidentiality is demonstrated through performance evaluation. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Alternative Fuels Data Center: Installing B20 Equipment

    Science.gov Websites

    Guidance on installing B20 fueling equipment: invite nearby operations to share the fueling site, secure permits and adhere to state requirements, have the contractor register storage tanks with the state environmental agency, and coordinate between contractor and client to ensure the completed project meets expectations.

  16. Consolidated Storage Facilities: Camel's Nose or Shared Burden? - 13112

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, James M.

    2013-07-01

    The Blue Ribbon Commission (BRC) made a strong argument for why the reformulated nuclear waste program should make prompt efforts to develop one or more consolidated storage facilities (CSFs), and recommended the amendment of NWPA Section 145(b) (linking 'monitored retrievable storage' to repository development) as an essential means to that end. However, other than recommending that the siting of CSFs should be 'consent-based' and that spent nuclear fuel (SNF) at stranded sites should be first in line for removal, the Commission made few recommendations regarding how CSF development should proceed. Working with three other key Senators, Jeff Bingaman attempted in the 112th Congress to craft legislation (S. 3469) to put the BRC recommendations into legislative language. The key reason why the Nuclear Waste Administration Act of 2012 did not proceed was the inability of the four senators to agree on whether and how to amend NWPA Section 145(b). A brief review of efforts to site consolidated storage since the Nuclear Waste Policy Amendments Act of 1987 suggests a strong and consistent motivation to shift the burden to someone (anyone) else. This paper argues that modification of NWPA Section 145(b) should be accompanied by guidelines for regional development and operation of CSFs. After review of the BRC recommendations regarding CSFs, and the 'camel's nose' prospects if implementation is not accompanied by further guidelines, the paper outlines a proposal for implementation of CSFs on a regional basis, including priorities for removal from reactor sites and subsequently from CSFs to repositories. Rather than allowing repository siting to be prejudiced by the location of a single remote CSF, the regional approach limits transport for off-site acceptance and storage, increases the efficiency of removal operations, provides a useful basis for compensation to states and communities that accept CSFs, and gives states with shared circumstances a shared stake in storage and disposal in an integrated national program. (authors)

  17. The NASA Ames Life Sciences Data Archive: Biobanking for the Final Frontier

    NASA Technical Reports Server (NTRS)

    Rask, Jon; Chakravarty, Kaushik; French, Alison J.; Choi, Sungshin; Stewart, Helen J.

    2017-01-01

    The NASA Ames Institutional Scientific Collection involves the Ames Life Sciences Data Archive (ALSDA) and a biospecimen repository, which are responsible for archiving information and non-human biospecimens collected from spaceflight and matching ground control experiments. The ALSDA also manages a biospecimen sharing program, performs curation and long-term storage operations, and facilitates distribution of biospecimens for research purposes via a public website (https://lsda.jsc.nasa.gov). As part of our best practices, a tissue viability testing plan has been developed for the repository, which will assess the quality of samples subjected to long-term storage. We expect that the test results will confirm usability of the samples, enable broader science community interest, and verify operational efficiency of the archives. This work will also support NASA open science initiatives and guide development of NASA directives and policy for curation of biological collections.

  18. 2012 ARPA-E Energy Innovation Summit: Profiling City University of New York (CUNY): Reinventing Batteries for Grid Storage (Performer Video)

    ScienceCinema

    None Available

    2017-12-09

    The third annual ARPA-E Energy Innovation Summit was held in Washington D.C. in February, 2012. The event brought together key players from across the energy ecosystem - researchers, entrepreneurs, investors, corporate executives, and government officials - to share ideas for developing and deploying the next generation of energy technologies. A few videos were selected for showing during the Summit to attendees. These 'performer videos' highlight innovative research that is ongoing and related to the main topics of the Summit's sessions. Featured in this video are Sanjoy Banerjee, Director of CUNY Energy Institute and Dan Steingart (Assistant Professor of Chemical Engineering, CUNY). The City University of New York's Energy Institute, with the help of ARPA-E funding, is creating safe, low cost, rechargeable, long lifecycle batteries that could be used as modular distributed storage for the electrical grid. The batteries could be used at the building level or the utility level to offer benefits such as capture of renewable energy, peak shaving and microgridding, for a safer, cheaper, and more secure electrical grid.

  19. 2012 ARPA-E Energy Innovation Summit: Profiling City University of New York (CUNY): Reinventing Batteries for Grid Storage (Performer Video)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Sanjoy; Steingart, Dan

    The third annual ARPA-E Energy Innovation Summit was held in Washington, D.C. in February 2012. The event brought together key players from across the energy ecosystem - researchers, entrepreneurs, investors, corporate executives, and government officials - to share ideas for developing and deploying the next generation of energy technologies. A few videos were selected for showing during the Summit to attendees. These "performer videos" highlight innovative research that is ongoing and related to the main topics of the Summit's sessions. Featured in this video are Sanjoy Banerjee, Director of the CUNY Energy Institute, and Dan Steingart (Assistant Professor of Chemical Engineering, CUNY). The City University of New York's Energy Institute, with the help of ARPA-E funding, is creating safe, low-cost, rechargeable, long-lifecycle batteries that could be used as modular distributed storage for the electrical grid. The batteries could be used at the building level or the utility level to offer benefits such as capture of renewable energy, peak shaving, and microgridding, for a safer, cheaper, and more secure electrical grid.

  20. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    PubMed

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  1. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis

    PubMed Central

    Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis. PMID:25271953

  2. Integrating TRENCADIS components in gLite to share DICOM medical images and structured reports.

    PubMed

    Blanquer, Ignacio; Hernández, Vicente; Salavert, José; Segrelles, Damià

    2010-01-01

    The problem of sharing medical information among different centres has been tackled by many projects. Several of them target the specific problem of sharing DICOM images and structured reports (DICOM-SR), such as the TRENCADIS project. In this paper we propose sharing and organizing DICOM data and DICOM-SR metadata benefiting from existing deployed Grid infrastructures compliant with gLite, such as EGEE or the Spanish NGI. These infrastructures contribute a large amount of storage resources for creating knowledge databases and also provide metadata storage resources (such as AMGA) to semantically organize reports in a tree structure. First, in this paper, we present the extension of the TRENCADIS architecture to use gLite components (LFC, AMGA, SE) for the sake of increasing interoperability. Using the metadata from DICOM-SR, and maintaining its tree structure, enables federating different but compatible diagnostic structures and simplifies the definition of complex queries. This article describes how to do this in AMGA and it shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources.

  3. Simulating cloud environment for HIS backup using secret sharing.

    PubMed

    Kuroda, Tomohiro; Kimura, Eizen; Matsumura, Yasushi; Yamashita, Yoshinori; Hiramatsu, Haruhiko; Kume, Naoto

    2013-01-01

    In the face of a disaster, hospitals are expected to be able to continue providing efficient and high-quality care to patients. It is therefore crucial for hospitals to develop business continuity plans (BCPs) that identify their vulnerabilities and prepare procedures to overcome them. A key aspect of most hospitals' BCPs is creating the backup of the hospital information system (HIS) data at multiple remote sites. However, the need to keep the data confidential dramatically increases the costs of making such backups. Secret sharing is a method to split an original secret message so that individual pieces are meaningless, but putting a sufficient number of pieces together reveals the original message. It allows the creation of pseudo-redundant arrays of independent disks for privacy-sensitive data over the Internet. We developed a secret sharing environment for StarBED, a large-scale network experiment environment, and evaluated its potential and performance during disaster recovery. Simulation results showed that the entire main HIS database of Kyoto University Hospital could be retrieved within three days even if one of the distributed storage systems crashed during a disaster.
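
    Since the abstract above turns on the splitting idea, a minimal sketch may help: the toy below splits a byte string into n shares by XOR with random pads, so every individual share is noise and only the full set restores the record. This is an n-of-n illustration written for this summary, not the threshold scheme evaluated on StarBED, which tolerates the loss of a share; the function names and the sample record are invented.

        # Toy n-of-n secret splitting by XOR: each share alone is random noise,
        # and XOR-ing all shares together restores the original bytes.
        # (The paper's scheme is a threshold scheme; this sketch requires all shares.)
        import secrets
        from functools import reduce

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def split(secret: bytes, n: int) -> list[bytes]:
            """Split `secret` into n shares; every share is needed to recover it."""
            shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
            shares.append(reduce(xor_bytes, shares, secret))
            return shares

        def combine(shares: list[bytes]) -> bytes:
            """Recombine all shares to recover the original record."""
            return reduce(xor_bytes, shares)

        record = b"patient 0042: allergy to penicillin"   # invented sample record
        pieces = split(record, 3)            # e.g. send each piece to a remote site
        assert combine(pieces) == record     # any single piece reveals nothing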

  4. A Framework for Managing Inter-Site Storage Area Networks using Grid Technologies

    NASA Technical Reports Server (NTRS)

    Kobler, Ben; McCall, Fritz; Smorul, Mike

    2006-01-01

    The NASA Goddard Space Flight Center and the University of Maryland Institute for Advanced Computer Studies are studying mechanisms for installing and managing Storage Area Networks (SANs) that span multiple independent collaborating institutions using Storage Area Network Routers (SAN Routers). We present a framework for managing inter-site distributed SANs that uses Grid Technologies to balance the competing needs to control local resources, share information, delegate administrative access, and manage the complex trust relationships between the participating sites.

  5. Performances of multiprocessor multidisk architectures for continuous media storage

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases, capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes through bottleneck performance evaluation and simulation the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located closely to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.
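
    A back-of-the-envelope sketch of the kind of bottleneck evaluation described can make the trade-off concrete: the sustainable number of client streams is capped by whichever resource saturates first, the disk nodes or the shared bus. Only the 400 Mbytes/s bus figure comes from the abstract; the per-disk-node and per-stream rates below are illustrative assumptions.

        # Bottleneck model: the sustainable number of concurrent streams is
        # limited by whichever shared resource saturates first.
        def max_streams(n_disk_nodes: int,
                        disk_mb_s: float = 5.0,       # assumed per-disk-node rate
                        bus_mb_s: float = 400.0,      # shared-bus rate (abstract)
                        stream_mb_s: float = 0.5) -> dict:
            disk_limit = n_disk_nodes * disk_mb_s / stream_mb_s
            bus_limit = bus_mb_s / stream_mb_s
            return {
                "disk_limited_streams": int(disk_limit),
                "bus_limited_streams": int(bus_limit),
                "sustainable_streams": int(min(disk_limit, bus_limit)),
            }

        for nodes in (8, 64, 128):
            print(nodes, max_streams(nodes))
        # With these assumed rates the bus, not the disks, caps the stream count
        # beyond ~80 disk nodes, which is why a point-to-point layout scales better.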

  6. Measuring household consumption and waste in unmetered, intermittent piped water systems

    NASA Astrophysics Data System (ADS)

    Kumpel, Emily; Woelfle-Erskine, Cleo; Ray, Isha; Nelson, Kara L.

    2017-01-01

    Measurements of household water consumption are extremely difficult in intermittent water supply (IWS) regimes in low- and middle-income countries, where water is delivered for short durations, taps are shared, metering is limited, and household storage infrastructure varies widely. Nonetheless, consumption estimates are necessary for utilities to improve water delivery. We estimated household water use in Hubli-Dharwad, India, with a mixed-methods approach combining (limited) metered data, storage container inventories, and structured observations. We developed a typology of household water access according to infrastructure conditions based on the presence of an overhead storage tank and a shared tap. For households with overhead tanks, container measurements and metered data produced statistically similar consumption volumes; for households without overhead tanks, stored volumes underestimated consumption because of significant water use directly from the tap during delivery periods. Households that shared taps consumed much less water than those that did not. We used our water use calculations to estimate waste at the household level and in the distribution system. Very few households used more than 135 L/person/d, the Government of India design standard for urban systems. Most wasted little water even when unmetered; however, unaccounted-for water in the neighborhood distribution systems was around 50%. Thus, conservation efforts should target loss reduction in the network rather than at households.

  7. CERN data services for LHC computing

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.

    2017-10-01

    Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel our systems provide the platform for the continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; AFS for legacy distributed-file-system services. In this paper we summarise the experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.

  8. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine

    PubMed Central

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2016-01-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473
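
    As a rough illustration of the kind of wall-clock model the paper introduces (the constants and formulae below are simplified assumptions, not the validated models), the shared-NFS term grows as concurrent jobs divide the network bandwidth, while a data-local framework largely avoids it.

        # Sketch of a wall-clock model: with a shared NFS, concurrent jobs divide
        # the network bandwidth, so the transfer term grows with concurrency;
        # with co-located (data-local) storage that term largely vanishes.
        def wallclock_nfs(n_jobs, cores, job_cpu_s, data_mb, nfs_mb_s=1000.0):
            waves = -(-n_jobs // cores)                      # ceil(n_jobs / cores)
            per_job_transfer_s = data_mb / (nfs_mb_s / min(n_jobs, cores))
            return waves * (job_cpu_s + per_job_transfer_s)

        def wallclock_datalocal(n_jobs, cores, job_cpu_s, local_mb, local_mb_s=100.0):
            waves = -(-n_jobs // cores)
            return waves * (job_cpu_s + local_mb / local_mb_s)

        # Example: 1000 "short" jobs of 5 s CPU each reading a 500 MB volume,
        # on the 209-core configuration mentioned in the abstract.
        print("NFS       :", wallclock_nfs(1000, 209, job_cpu_s=5, data_mb=500))
        print("data-local:", wallclock_datalocal(1000, 209, job_cpu_s=5, local_mb=500))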

  9. Storage resource manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Bakken, J.; Petravick, D.

    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing for uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing clogging of storage systems with "orphan" files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.
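
    The lifetime-bounded reservation idea is easy to sketch: a reservation carries an expiry, writes are only accepted while it is live, and expired reservations are reclaimed so "orphan" files cannot accumulate. The class and method names below are hypothetical and stand in for, rather than reproduce, the SRM interface.

        # Toy lifetime-bounded space reservations in the spirit of SRM: a
        # reservation expires after its lifetime and its space is reclaimed.
        import time
        from dataclasses import dataclass, field

        @dataclass
        class Reservation:
            token: str
            size_bytes: int
            expires_at: float
            files: list = field(default_factory=list)

        class StorageElement:
            def __init__(self, capacity_bytes: int):
                self.capacity = capacity_bytes
                self.reservations: dict[str, Reservation] = {}

            def reserve(self, token: str, size_bytes: int, lifetime_s: float) -> Reservation:
                self._collect_expired()
                used = sum(r.size_bytes for r in self.reservations.values())
                if used + size_bytes > self.capacity:
                    raise RuntimeError("not enough free space")
                r = Reservation(token, size_bytes, time.time() + lifetime_s)
                self.reservations[token] = r
                return r

            def put(self, token: str, filename: str):
                r = self.reservations[token]
                if time.time() > r.expires_at:
                    raise RuntimeError("reservation expired")
                r.files.append(filename)

            def _collect_expired(self):
                now = time.time()
                for token in [t for t, r in self.reservations.items() if r.expires_at < now]:
                    del self.reservations[token]   # space (and its files) reclaimed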

  10. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    NASA Astrophysics Data System (ADS)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing of computing, storage and networking resources among different research areas (the largest resources being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options is available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.

  11. Hybrid lithium-ion capacitor with LiFePO4/AC composite cathode - Long term cycle life study, rate effect and charge sharing analysis

    NASA Astrophysics Data System (ADS)

    Shellikeri, A.; Yturriaga, S.; Zheng, J. S.; Cao, W.; Hagen, M.; Read, J. A.; Jow, T. R.; Zheng, J. P.

    2018-07-01

    Energy storage devices that can combine the advantages of a lithium-ion battery with those of an electric double-layer capacitor are of prime interest. Recently, composite cathodes, which combine a battery material with a capacitor material, have shown promise in enhancing cycle life and energy/power performance. The lithium-ion capacitor (LIC), with its unique charge storage mechanism of combining a pre-lithiated battery anode with a capacitor cathode, is one such device that has the potential to synergistically incorporate the composite cathode to enhance capacity and cycle life. We report here a hybrid LIC consisting of a lithium iron phosphate (LiFePO4-LFP)/activated carbon composite cathode in combination with a hard carbon anode, integrating the cycle life and capacity enhancing strategies of a dry method of electrode fabrication, anode pre-lithiation and a 3:1 anode-to-cathode capacity ratio. The cell demonstrates a long cycle life, and we elaborate on the charge sharing between the faradaic and non-faradaic mechanisms in the battery and capacitor materials, respectively, in the composite cathode. Excellent cell capacity retentions of 94% (1000 cycles at 1C) and 92% (100,000 cycles at 60C) were demonstrated, while retaining 78% (over 6000 cycles at 2.7C) and 67% (over 70,000 cycles at 43C) of the LFP capacity in the composite cathode.

  12. A web platform for integrated surface water - groundwater modeling and data management

    NASA Astrophysics Data System (ADS)

    Fatkhutdinov, Aybulat; Stefan, Catalin; Junghanns, Ralf

    2016-04-01

    Model-based decision support systems are considered to be reliable and time-efficient tools for resources management in various hydrology-related fields. However, searching for and acquiring the required data, preparing the data sets for simulations, as well as post-processing, visualizing and publishing the simulation results, often requires significantly more work and time than performing the modeling itself. The purpose of the developed software is to combine data storage facilities, data processing instruments and modeling tools in a single platform, which can potentially reduce the time required for performing simulations and hence for decision making. The system is developed within the INOWAS (Innovative Web Based Decision Support System for Water Sustainability under a Changing Climate) project. The platform integrates spatially distributed catchment-scale rainfall-runoff, infiltration and groundwater flow models with data storage, processing and visualization tools. The concept is implemented in the form of a web-GIS application and is built from free and open-source components, including the PostgreSQL database management system, the Python programming language for modeling purposes, Mapserver for visualizing and publishing the data, Openlayers for building the user interface, and others. The configuration of the system allows performing data input, storage, pre- and post-processing and visualization in a single uninterrupted workflow. In addition, realization of the decision support system in the form of a web service provides an opportunity to easily retrieve and share data sets as well as simulation results over the internet, which gives significant advantages for collaborative work on projects and can significantly increase the usability of the decision support system.

  13. A Study of Practical Proxy Reencryption with a Keyword Search Scheme considering Cloud Storage Structure

    PubMed Central

    Lee, Im-Yeong

    2014-01-01

    Data outsourcing services have emerged with the increasing use of digital information. They can be used to store data from various devices via networks that are easy to access. Unlike existing removable storage systems, storage outsourcing is available to many users because it has no storage limit and does not require a local storage medium. However, the reliability of storage outsourcing has become an important topic because many users employ it to store large volumes of data. To protect against unethical administrators and attackers, a variety of cryptography systems are used, such as searchable encryption and proxy reencryption. However, existing searchable encryption technology is inconvenient for use in storage outsourcing environments where users upload their data to be shared with others as necessary. In addition, some existing schemes are vulnerable to collusion attacks and have computing cost inefficiencies. In this paper, we analyze existing proxy re-encryption with keyword search. PMID:24693240

  14. A study of practical proxy reencryption with a keyword search scheme considering cloud storage structure.

    PubMed

    Lee, Sun-Ho; Lee, Im-Yeong

    2014-01-01

    Data outsourcing services have emerged with the increasing use of digital information. They can be used to store data from various devices via networks that are easy to access. Unlike existing removable storage systems, storage outsourcing is available to many users because it has no storage limit and does not require a local storage medium. However, the reliability of storage outsourcing has become an important topic because many users employ it to store large volumes of data. To protect against unethical administrators and attackers, a variety of cryptography systems are used, such as searchable encryption and proxy reencryption. However, existing searchable encryption technology is inconvenient for use in storage outsourcing environments where users upload their data to be shared with others as necessary. In addition, some existing schemes are vulnerable to collusion attacks and have computing cost inefficiencies. In this paper, we analyze existing proxy re-encryption with keyword search.
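
    For readers unfamiliar with the searchable-encryption building block the survey discusses, a minimal sketch: the data owner indexes documents under keyed HMACs of their keywords, so the storage provider can match an opaque search token without learning the keyword. This toy omits payload encryption, proxy re-encryption and the collusion-resistance concerns that are the paper's actual subject; all names and data are invented.

        # Minimal searchable-index idea: the provider sees only opaque tokens,
        # never the keywords, yet can answer keyword queries over outsourced data.
        import hmac, hashlib
        from collections import defaultdict

        def keyword_token(key: bytes, keyword: str) -> str:
            return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()

        class OutsourcedIndex:
            """What the (untrusted) storage provider stores: token -> document ids."""
            def __init__(self):
                self.index = defaultdict(set)

            def add(self, token: str, doc_id: str):
                self.index[token].add(doc_id)

            def search(self, token: str) -> set:
                return self.index.get(token, set())

        key = b"client-secret-key"          # never leaves the data owner
        server = OutsourcedIndex()
        server.add(keyword_token(key, "radiology"), "doc-17")
        server.add(keyword_token(key, "oncology"), "doc-17")
        server.add(keyword_token(key, "radiology"), "doc-42")

        # The client issues an opaque token; the server matches it blindly.
        print(server.search(keyword_token(key, "radiology")))   # {'doc-17', 'doc-42'}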

  15. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    PubMed

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
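
    The pattern of making the database query an integral part of the analysis can be sketched in a few lines; SQLite stands in for the database server here, and the schema and values are hypothetical rather than taken from the system described in the paper.

        # The 'analysis' begins as a query: per-voxel fMRI time-series rows are
        # filtered and aggregated in SQL before any numerical work is done.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE timeseries (
                            subject TEXT, condition TEXT, voxel INTEGER,
                            t INTEGER, bold REAL)""")
        rows = [("s01", "speech", 7, t, 100.0 + 0.5 * t) for t in range(10)] + \
               [("s01", "rest",   7, t, 100.0)           for t in range(10)]
        conn.executemany("INSERT INTO timeseries VALUES (?,?,?,?,?)", rows)

        # Mean BOLD signal per condition for one subject and voxel.
        query = """SELECT condition, AVG(bold)
                   FROM timeseries
                   WHERE subject = ? AND voxel = ?
                   GROUP BY condition"""
        for condition, mean_bold in conn.execute(query, ("s01", 7)):
            print(condition, round(mean_bold, 2))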

  16. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  17. Sequential data access with Oracle and Hadoop: a performance comparison

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Canali, Luca; Grancher, Eric

    2014-06-01

    The Hadoop framework has proven to be an effective and popular approach for dealing with "Big Data" and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's "shared nothing" architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions like MapReduce or HBase for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.

  18. What CFOs should know before venturing into the cloud.

    PubMed

    Rajendran, Janakan

    2013-05-01

    There are three major trends in the use of cloud-based services for healthcare IT: Cloud computing involves the hosting of health IT applications in a service provider cloud. Cloud storage is a data storage service that can involve, for example, long-term storage and archival of information such as clinical data, medical images, and scanned documents. Data center colocation involves rental of secure space in the cloud from a vendor, an approach that allows a hospital to share power capacity and proven security protocols, reducing costs.

  19. Final Test and Evaluation Results from the Solar Two Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradshaw, Robert W.; Dawson, Daniel B.; De La Rosa, Wilfredo

    Solar Two was a collaborative, cost-shared project between 11 U.S. industry and utility partners and the U.S. Department of Energy to validate molten-salt power tower technology. The Solar Two plant, located east of Barstow, CA, comprised 1926 heliostats, a receiver, a thermal storage system, a steam generation system, and a steam-turbine power block. Molten nitrate salt was used as the heat transfer fluid and storage medium. The steam generator powered a 10-MWe (megawatt electric), conventional Rankine cycle turbine. Solar Two operated from June 1996 to April 1999. The major objective of the test and evaluation phase of the project was to validate the technical characteristics of a molten salt power tower. This report describes the significant results from the test and evaluation activities, the operating experience of each major system, and overall plant performance. Tests were conducted to measure the power output (MW) of each major system; the efficiencies of the heliostat, receiver, thermal storage, and electric power generation systems; and the daily energy collected, daily thermal-to-electric conversion, and daily parasitic energy consumption. Also included are detailed test and evaluation reports.

  20. Extreme I/O on HPC for HEP using the Burst Buffer at NERSC

    NASA Astrophysics Data System (ADS)

    Bhimji, Wahid; Bard, Debbie; Burleigh, Kaylan; Daley, Chris; Farrell, Steve; Fasel, Markus; Friesen, Brian; Gerhardt, Lisa; Liu, Jialin; Nugent, Peter; Paul, Dave; Porter, Jeff; Tsulaia, Vakho

    2017-10-01

    In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware that is placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.

  1. Outlook and application analysis of energy storage in power system with high renewable energy penetration

    NASA Astrophysics Data System (ADS)

    Feng, Junshu; Zhang, Fuqiang

    2018-02-01

    To realize low-emission and low-carbon energy production and consumption, large-scale development and utilization of renewable energy has been put into practice in China. It has been recognized that a power system with a high share of renewable energy can operate more reliably with the participation of energy storage. Considering the significant role storage will play in the future power system, this paper focuses on the application of energy storage with high renewable energy penetration. Firstly, two application modes are given: a demand-side application mode and a centralized renewable energy farm application mode. Afterwards, a high renewable energy penetration scenario for the northwest region of China is designed, and its production simulation with the application of energy storage in 2050 has been calculated and analysed. Finally, a development path and outlook for energy storage is given.

  2. If I do not have enough water, then how could I bring additional water for toilet cleaning?! Addressing water scarcity to promote hygienic use of shared toilets in Dhaka, Bangladesh.

    PubMed

    Saxton, Ronald E; Yeasmin, Farzana; Alam, Mahbub-Ul; Al-Masud, Abdullah; Dutta, Notan Chandra; Yeasmin, Dalia; Luby, Stephen P; Unicomb, Leanne; Winch, Peter J

    2017-09-01

    Provision of toilets is necessary but not sufficient to impact health as poor maintenance may impair toilet function and discourage their consistent use. Water in urban slums is both scarce and a prerequisite for toilet maintenance behaviours. We describe the development of behaviour change communications and selection of low-cost water storage hardware to facilitate adequate flushing among users of shared toilets. We conducted nine focus group discussions and six ranking exercises with adult users of shared toilets (50 females, 35 males), then designed and implemented three pilot interventions to facilitate regular flushing and improve hygienic conditions of shared toilets. We conducted follow-up assessments 1 and 2 months post-pilot including nine in-depth interviews and three focus group discussions with adult residents (23 females, 15 males) and three landlords in the pilot communities. Periodic water scarcity was common in the study communities. Residents felt embarrassed to carry water for flushing. Reserving water adjacent to the shared toilet enabled slum residents to flush regularly. Signs depicting rules for toilet use empowered residents and landlords to communicate these expectations for flushing to transient tenants. Residents in the pilot reported improvements in cleanliness and reduced odour inside toilet cubicles. Our pilot demonstrates the potential efficacy of low-cost water storage and behaviour change communications to improve maintenance of and user satisfaction with shared toilets in urban slum settings. © 2017 John Wiley & Sons Ltd.

  3. Irrigation infrastructure and water appropriation rules for food security

    NASA Astrophysics Data System (ADS)

    Gohar, Abdelaziz A.; Amer, Saud A.; Ward, Frank A.

    2015-01-01

    In the developing world's irrigated areas, water management and planning is often motivated by the need for lasting food security. Two important policy measures to address this need are improving the flexibility of water appropriation rules and developing irrigation storage infrastructure. Little research to date has investigated the performance of these two policy measures in a single analysis while maintaining a basin wide water balance. This paper examines impacts of storage capacity and water appropriation rules on total economic welfare in irrigated agriculture, while maintaining a water balance. The application is to a river basin in northern Afghanistan. A constrained optimization framework is developed to examine economic consequences on food security and farm income resulting from each policy measure. Results show that significant improvements in both policy aims can be achieved through expanding existing storage capacity to capture up to 150 percent of long-term average annual water supplies when added capacity is combined with either a proportional sharing of water shortages or unrestricted water trading. An important contribution of the paper is to show how the benefits of storage and a changed water appropriation system operate under a variable climate. Results show that the hardship of droughts can be substantially lessened, with the largest rewards taking place in the most difficult periods. Findings provide a comprehensive framework for addressing future water scarcity, rural livelihoods, and food security in the developing world's irrigated regions.

  4. Efficient Management of Certificate Revocation Lists in Smart Grid Advanced Metering Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cebe, Mumin; Akkaya, Kemal

    Advanced Metering Infrastructure (AMI) forms a communication network for the collection of power data from smart meters in the Smart Grid. As the communication within an AMI needs to be secure, key management becomes an issue due to overhead and limited resources. While using public keys eliminates some of the overhead of key management, there are still challenges regarding the certificates that store and certify the public keys. In particular, distribution and storage of the certificate revocation list (CRL) is a major challenge due to the cost of distribution and storage in AMI networks, which typically consist of wireless multi-hop networks. Motivated by the need to keep CRL distribution and storage cost-effective and scalable, in this paper we present a distributed CRL management model utilizing the idea of distributed hash trees (DHTs) from peer-to-peer (P2P) networks. The basic idea is to share the burden of storage of CRLs among all the smart meters by exploiting the meshing capability of the smart meters among each other. Thus, using DHTs not only reduces the space requirements for CRLs but also makes CRL updates more convenient. We implemented this structure on ns-3 using the IEEE 802.11s mesh standard as a model for AMI and demonstrated its superior performance with respect to traditional methods of CRL management through extensive simulations.
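
    The placement step of such a scheme can be illustrated with a small consistent-hash ring that maps each revoked certificate serial to the meter responsible for storing it. This sketch shows only the DHT-style placement idea; the paper's actual protocol, messaging and security analysis are not reproduced, and all identifiers are invented.

        # Spreading CRL entries over the meters themselves: hash each revoked
        # certificate serial onto a ring and store it on the meter whose ring
        # position follows it.
        import hashlib
        from bisect import bisect_right

        def h(value: str) -> int:
            return int(hashlib.sha256(value.encode()).hexdigest(), 16)

        class CrlRing:
            def __init__(self, meter_ids):
                self.ring = sorted((h(m), m) for m in meter_ids)

            def responsible_meter(self, cert_serial: str) -> str:
                keys = [pos for pos, _ in self.ring]
                i = bisect_right(keys, h(cert_serial)) % len(self.ring)
                return self.ring[i][1]

        ring = CrlRing([f"meter-{i:03d}" for i in range(50)])
        for serial in ["0x3A9F01", "0x77B1C2", "0xC0DE55"]:      # invented serials
            print(serial, "->", ring.responsible_meter(serial))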

  5. Knowledge Management Initiatives Used to Maintain Regulatory Expertise in Transportation and Storage of Radioactive Materials - 12177

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindsay, Haile; Garcia-Santos, Norma; Saverot, Pierre

    2012-07-01

    The U.S. Nuclear Regulatory Commission (NRC) was established in 1974 with the mission to license and regulate the civilian use of nuclear materials for commercial, industrial, academic, and medical uses in order to protect public health and safety, and the environment, and promote the common defense and security. Currently, approximately half (∼49%) of the workforce at the NRC has been with the Agency for less than six years. As part of the Agency's mission, the NRC has partial responsibility for the oversight of the transportation and storage of radioactive materials. The NRC has experienced a significant level of expertise leaving the Agency due to staff attrition. Factors that contribute to this attrition include retirement of the experienced nuclear workforce and mobility of staff within or outside the Agency. Several knowledge management (KM) initiatives have been implemented within the Agency, with one of them including the formation of a Division of Spent Fuel Storage and Transportation (SFST) KM team. The team, which was formed in the fall of 2008, facilitates capturing, transferring, and documenting regulatory knowledge for staff to effectively perform their safety oversight of transportation and storage of radioactive materials, regulated under Title 10 of the Code of Federal Regulations (10 CFR) Part 71 and Part 72. In terms of KM, the SFST goal is to share critical information among the staff to reduce the impact from staff's mobility and attrition. KM strategies in place to achieve this goal are: (1) development of communities of practice (CoP) (SFST Qualification Journal and the Packaging and Storing Radioactive Material) in the on-line NRC Knowledge Center (NKC); (2) implementation of a SFST seminar program where the seminars are recorded and placed in the Agency's repository, Agency-wide Documents Access and Management System (ADAMS); (3) meeting of technical discipline group programs to share knowledge within specialty areas; (4) development of written guidance to capture 'administrative and technical' knowledge (e.g., office instructions (OIs), generic communications (e.g., bulletins, generic letters, regulatory issue summary), standard review plans (SRPs), interim staff guidance (ISGs)); (5) use of mentoring strategies for experienced staff to train new staff members; (6) use of Microsoft SharePoint portals in capturing, transferring, and documenting knowledge for staff across the Division from Division management and administrative assistants to the project managers, inspectors, and technical reviewers; and (7) development and implementation of a Division KM Plan. A discussion and description of the successes and challenges of implementing these KM strategies at the NRC/SFST will be provided. (authors)

  6. Informatics methods to enable sharing of quantitative imaging research data.

    PubMed

    Levy, Mia A; Freymann, John B; Kirby, Justin S; Fedorov, Andriy; Fennessy, Fiona M; Eschrich, Steven A; Berglund, Anders E; Fenstermacher, David A; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L; Brown, Bartley J; Braun, Terry A; Dekker, Andre; Roelofs, Erik; Mountz, James M; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L

    2012-11-01

    The National Cancer Institute Quantitative Imaging Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable data sharing and to promote reuse of quantitative imaging data in the community. We performed a survey of the current tools in use by the QIN member sites for representation and storage of their QIN research data, including images, image metadata and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image metadata across the network. As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Towards building a team of intelligent robots

    NASA Technical Reports Server (NTRS)

    Varanasi, Murali R.; Mehrotra, R.

    1987-01-01

    Topics addressed include: collision-free motion planning of multiple robot arms; two-dimensional object recognition; and pictorial databases (storage and sharing of the representations of three-dimensional objects).

  8. Sharing lattice QCD data over a widely distributed file system

    NASA Astrophysics Data System (ADS)

    Amagasa, T.; Aoki, S.; Aoki, Y.; Aoyama, T.; Doi, T.; Fukumura, K.; Ishii, N.; Ishikawa, K.-I.; Jitsumoto, H.; Kamano, H.; Konno, Y.; Matsufuru, H.; Mikami, Y.; Miura, K.; Sato, M.; Takeda, S.; Tatebe, O.; Togawa, H.; Ukawa, A.; Ukita, N.; Watanabe, Y.; Yamazaki, T.; Yoshie, T.

    2015-12-01

    JLDG is a data grid for the lattice QCD (LQCD) community in Japan. Several large research groups in Japan have been working on lattice QCD simulations using supercomputers distributed over distant sites. The JLDG provides such collaborations with an efficient method of data management and sharing. File servers installed at 9 sites are connected to the NII SINET VPN and are bound into a single file system with Gfarm. The file system looks the same from any site, so that users can run analyses on a supercomputer at one site using data generated and stored in the JLDG at a different site. We present a brief description of the hardware and software of the JLDG, including a recently developed subsystem for cooperating with the HPCI shared storage, and report the performance and statistics of the JLDG. As of April 2015, 15 research groups (61 users) store their daily research data of 4.7 PB, including replicas, and 68 million files in total. The number of publications for work which used the JLDG is 98. The large number of publications and the recent rapid increase in disk usage convince us that the JLDG has grown into a useful infrastructure for the LQCD community in Japan.

  9. Wide-area-distributed storage system for a multimedia database

    NASA Astrophysics Data System (ADS)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

    We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device which includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in the node are connected to a computer using fiber optic cables and communicate using fiber-channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe our proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.
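
    The redundancy idea behind the wide-area 'virtual disk' can be sketched as striping with XOR parity: a block is split across member disks at different sites plus one parity chunk, so the loss of any single site is recoverable from the survivors. This is an illustration of the RAID principle only, not the fiber-channel prototype; the helper names are invented.

        # Stripe a block across n_data chunks plus one XOR parity chunk; any one
        # lost chunk can be rebuilt by XOR-ing the remaining chunks together.
        from functools import reduce

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def stripe(block: bytes, n_data: int):
            """Split a block into n_data padded chunks plus one parity chunk."""
            size = -(-len(block) // n_data)                 # ceil division
            chunks = [block[i*size:(i+1)*size].ljust(size, b"\0") for i in range(n_data)]
            parity = reduce(xor, chunks)
            return chunks + [parity]

        def rebuild(chunks, lost_index: int) -> bytes:
            """Recover the chunk stored at the failed site from the remaining ones."""
            survivors = [c for i, c in enumerate(chunks) if i != lost_index]
            return reduce(xor, survivors)

        block = b"multimedia object spread over distant sites"
        placed = stripe(block, n_data=4)          # 4 data chunks + 1 parity chunk
        assert rebuild(placed, lost_index=2) == placed[2]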

  10. High-Pressure Oxygen Generation for Outpost EVA Study

    NASA Technical Reports Server (NTRS)

    Jeng, Frank F.; Conger, Bruce; Ewert, Michael K.; Anderson, Molly S.

    2009-01-01

    The amount of oxygen consumption for crew extravehicular activity (EVA) in future lunar exploration missions will be significant. Eight technologies to provide high pressure EVA O2 were investigated. They are: high pressure O2 storage, liquid oxygen (LOX) storage followed by vaporization, scavenging LOX from Lander followed by vaporization, LOX delivery followed by sorption compression, water electrolysis followed by compression, stand-alone high pressure water electrolyzer, Environmental Control and Life Support System (ECLSS) and Power Elements sharing a high pressure water electrolyzer, and ECLSS and In-Situ Resource Utilization (ISRU) Elements sharing a high pressure electrolyzer. A trade analysis was conducted comparing launch mass and equivalent system mass (ESM) of the eight technologies in open and closed ECLSS architectures. Technologies considered appropriate for the two architectures were selected and suggested for development.
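
    A worked sketch of an equivalent-system-mass comparison of the kind used in such trade studies is shown below: hardware mass plus mass-equivalents for the volume, power and cooling it demands, plus a crew-time charge. Both the equivalence factors and the option values are illustrative assumptions, not numbers from this study.

        # ESM = M + V*Veq + P*Peq + C*Ceq + CT*CTeq, with all factors and option
        # values below chosen purely for illustration.
        def esm(mass_kg, volume_m3, power_kw, cooling_kw, crewtime_h,
                v_eq=9.16, p_eq=87.0, c_eq=65.0, ct_eq=0.5):
            return (mass_kg + volume_m3 * v_eq + power_kw * p_eq
                    + cooling_kw * c_eq + crewtime_h * ct_eq)

        options = {
            "high-pressure O2 tanks": esm(mass_kg=450, volume_m3=1.2, power_kw=0.0,
                                          cooling_kw=0.0, crewtime_h=5),
            "electrolysis + compressor": esm(mass_kg=220, volume_m3=0.6, power_kw=2.5,
                                             cooling_kw=2.5, crewtime_h=20),
        }
        for name, value in sorted(options.items(), key=lambda kv: kv[1]):
            print(f"{name}: {value:.0f} kg-equivalent")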

  11. Extended outlook: description, utilization, and daily applications of cloud technology in radiology.

    PubMed

    Gerard, Perry; Kapadia, Neil; Chang, Patricia T; Acharya, Jay; Seiler, Michael; Lefkovitz, Zvi

    2013-12-01

    The purpose of this article is to discuss the concept of cloud technology, its role in medical applications and radiology, the role of the radiologist in using and accessing these vast resources of information, and privacy concerns and HIPAA compliance strategies. Cloud computing is the delivery of shared resources, software, and information to computers and other devices as a metered service. This technology has a promising role in the sharing of patient medical information and appears to be particularly suited for application in radiology, given the field's inherent need for storage and access to large amounts of data. The radiology cloud has significant strengths, such as providing centralized storage and access, reducing unnecessary repeat radiologic studies, and potentially allowing radiologic second opinions more easily. There are significant cost advantages to cloud computing because of a decreased need for infrastructure and equipment by the institution. Private clouds may be used to ensure secure storage of data and compliance with HIPAA. In choosing a cloud service, there are important aspects, such as disaster recovery plans, uptime, and security audits, that must be considered. Given that the field of radiology has become almost exclusively digital in recent years, the future of secure storage and easy access to imaging studies lies within cloud computing technology.

  12. An MPI-IO interface to HPSS

    NASA Technical Reports Server (NTRS)

    Jones, Terry; Mark, Richard; Martin, Jeanne; May, John; Pierce, Elsie; Stanberry, Linda

    1996-01-01

    This paper describes an implementation of the proposed MPI-IO (Message Passing Interface - Input/Output) standard for parallel I/O. Our system uses third-party transfer to move data over an external network between the processors where it is used and the I/O devices where it resides. Data travels directly from source to destination, without the need for shuffling it among processors or funneling it through a central node. Our distributed server model lets multiple compute nodes share the burden of coordinating data transfers. The system is built on the High Performance Storage System (HPSS), and a prototype version runs on a Meiko CS-2 parallel computer.
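
    From the application side, the MPI-IO pattern such an interface serves looks like the sketch below, written with mpi4py (an assumption; the paper's prototype targets HPSS on a Meiko CS-2): each rank writes its own disjoint slice of one shared file with a collective call, and the I/O layer handles the data movement.

        # Each rank writes its slice of one shared file with a collective MPI-IO
        # call; run e.g. with `mpiexec -n 4 python demo.py`.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        local = np.full(1024, rank, dtype=np.int32)           # this rank's slice
        fh = MPI.File.Open(comm, "shared_output.bin",
                           MPI.MODE_WRONLY | MPI.MODE_CREATE)
        offset = rank * local.nbytes                          # disjoint file regions
        fh.Write_at_all(offset, local)                        # collective write
        fh.Close()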

  13. Energy Storage via Polyvinylidene Fluoride Dielectric on the Counterelectrode of Dye-Sensitized Solar Cells.

    PubMed

    Huang, Xuezhen; Zhang, Xi; Jiang, Hongrui

    2014-02-15

    To study the fundamental energy storage mechanism of photovoltaically self-charging cells (PSCs) without involving light-responsive semiconductor materials such as Si powder and ZnO nanowires, we fabricate a two-electrode PSC with the dual functions of photocurrent output and energy storage by introducing a PVDF film dielectric on the counterelectrode of a dye-sensitized solar cell. A layer of ultrathin Au film used as a quasi-electrode establishes a shared interface for the I⁻/I₃⁻ redox reaction and for the contact between the electrolyte and the dielectric for the energy storage, and prohibits recombination during the discharging period because of its discontinuity. PSCs with a 10-nm-thick PVDF provide a steady photocurrent output and achieve a light-to-electricity conversion efficiency (η) of 3.38%, and simultaneously offer energy storage with a charge density of 1.67 C g⁻¹. Using this quasi-electrode design, optimized energy storage structures may be used in PSCs for high energy storage density.

  14. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    PubMed

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
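
    A hedged sketch of a local-versus-cloud decision of the kind the paper formalizes: compare serial wall-clock time on a lab machine with the elapsed time and dollar cost of fanning the same jobs out to on-demand instances. The formulae, prices and job sizes below are simplified assumptions, not the paper's validated cost/benefit models.

        # Compare serial local execution against parallel on-demand instances,
        # charging the cloud option by instance-hours.
        def local_serial(n_jobs, t_job_min):
            return {"hours": n_jobs * t_job_min / 60.0, "usd": 0.0}

        def cloud_parallel(n_jobs, t_job_min, n_instances,
                           usd_per_instance_hour=0.10, stage_in_min_per_job=1.0):
            waves = -(-n_jobs // n_instances)                  # ceil(n_jobs / instances)
            hours = waves * (t_job_min + stage_in_min_per_job) / 60.0
            return {"hours": hours, "usd": hours * n_instances * usd_per_instance_hour}

        # Example: 500 diffusion-processing jobs of 20 minutes each.
        print("local :", local_serial(500, 20))
        print("cloud :", cloud_parallel(500, 20, n_instances=50))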

  15. Performance management of high performance computing for medical image processing in Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  16. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services

    PubMed Central

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-01-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335

  17. 40 CFR 60.434 - Monitoring of operations and recordkeeping.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...

  18. 40 CFR 60.434 - Monitoring of operations and recordkeeping.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...

  19. 40 CFR 60.434 - Monitoring of operations and recordkeeping.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...

  20. A tissue retrieval and postharvest processing regimen for rodent reproductive tissues compatible with long-term storage on the international space station and postflight biospecimen sharing program.

    PubMed

    Gupta, Vijayalaxmi; Holets-Bondar, Lesya; Roby, Katherine F; Enders, George; Tash, Joseph S

    2015-01-01

    Collection and processing of tissues to preserve space flight effects from animals after return to Earth is challenging. Specimens must be harvested with minimal time after landing to minimize postflight readaptation alterations in protein expression/translation, posttranslational modifications, and expression, as well as changes in gene expression and tissue histological degradation after euthanasia. We report the development of a widely applicable strategy for determining the window of optimal species-specific and tissue-specific posteuthanasia harvest that can be integrated into multi-investigator Biospecimen Sharing Programs. We also determined methods for ISS-compatible long-term tissue storage (10 months at -80°C) that yield recovery of high-quality mRNA and protein for western analysis after sample return. Our focus was reproductive tissues. The time following euthanasia during which tissues could be collected and histological integrity was maintained varied with tissue and species, ranging between 1 and 3 hours. RNA quality was preserved in key reproductive tissues fixed in RNAlater up to 40 min after euthanasia. Postfixation processing was also standardized for safe shipment back to our laboratory. Our strategy can be adapted for other tissues under NASA's Biospecimen Sharing Program or similar multi-investigator tissue sharing opportunities.

  1. Storage battery market: profiles and trade opportunities

    NASA Astrophysics Data System (ADS)

    Stonfer, D.

    1985-04-01

    The export market for domestically produced storage batteries is a modest one, typically averaging 6 to 7% of domestic industry shipments. Exports in 1984 totalled about $167 million. Canada and Mexico were the largest export markets for US storage batteries in 1984, accounting for slightly more than half of the total. The United Kingdom, Saudi Arabia, and the Netherlands round out the top five export markets. Combined, these five markets accounted for two-thirds of all US exports of storage batteries in 1984. On a regional basis, the North American (Canada), Central American, and European markets accounted for three-quarters of total storage battery exports. Lead-acid batteries accounted for 42% of total battery exports. Battery parts followed lead-acid batteries with a 29% share. Nicad batteries accounted for 16% of the total while other batteries accounted for 13%.

  2. Analysis of Energy Storage System with Distributed Hydrogen Production and Gas Turbine

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Bartela, Łukasz; Dubiel-Jurgaś, Klaudia

    2017-12-01

    The paper presents the concept of an energy storage system based on power-to-gas-to-power (P2G2P) technology. The system consists of a gas turbine co-firing hydrogen, which is supplied from distributed electrolysis installations powered by wind farms located a short distance from the potential construction site of the gas turbine. In the paper, a location for this type of investment was selected. As part of the analyses, the area of wind farms covered by the storage system and the share of electricity production subjected to storage were varied. The dependence of the hydrogen production potential and the operating time of the gas turbine on these quantities was analyzed. Additionally, preliminary economic analyses of the proposed energy storage system were carried out.
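    As a rough illustration of how such a power-to-gas-to-power chain can be sized, the sketch below estimates the stored hydrogen mass and the full-load turbine operating hours obtainable from a given amount of surplus wind electricity. The efficiency values, turbine rating, and function names are assumptions for demonstration only, not figures from the paper.

```python
# Illustrative sizing sketch for a power-to-gas-to-power (P2G2P) storage chain.
# Efficiencies and the turbine rating are assumed values, not the paper's data.

H2_LHV_KWH_PER_KG = 33.3  # lower heating value of hydrogen (~33.3 kWh/kg)

def p2g2p_sketch(surplus_wind_mwh, electrolyzer_eff=0.65,
                 turbine_eff=0.38, turbine_rating_mw=50.0):
    """Estimate hydrogen production and gas-turbine operating time
    for surplus wind electricity routed into storage."""
    h2_energy_kwh = surplus_wind_mwh * 1_000 * electrolyzer_eff  # chemical energy stored in H2
    h2_mass_kg = h2_energy_kwh / H2_LHV_KWH_PER_KG               # stored hydrogen mass
    recovered_mwh = h2_energy_kwh * turbine_eff / 1_000          # electricity returned to the grid
    turbine_hours = recovered_mwh / turbine_rating_mw            # full-load operating hours
    return h2_mass_kg, recovered_mwh, turbine_hours

if __name__ == "__main__":
    mass, energy, hours = p2g2p_sketch(surplus_wind_mwh=500.0)
    print(f"H2 stored: {mass:,.0f} kg, recovered: {energy:.1f} MWh, "
          f"turbine runtime: {hours:.1f} h at full load")
```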

  3. Peer-to-peer architecture for multi-departmental distributed PACS

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Heuberger, Joris; Pysher, Lance; Ratib, Osman

    2006-03-01

    We have elected to explore peer-to-peer technology as an alternative to centralized PACS architecture to meet the increasing requirement for wide access to images inside and outside a radiology department, the goal being to allow users across the enterprise to access any study at any time without the need for prefetching or routing of images from a central archive. Images can be accessed between different workstations and local storage nodes. We implemented "Bonjour", a remote file access technology developed by Apple that allows applications to share data and files remotely with optimized data access and data transfer. Our open-source image display platform, OsiriX, was adapted to share local DICOM images by making each local SQL database directly accessible from any other OsiriX workstation over the network. A server version of the OsiriX Core Data database also allows distributed archive servers to be accessed in the same way. The infrastructure implemented allows fast and efficient access to any image anywhere, anytime, independently of the actual physical location of the data. It also benefits from the performance of distributed low-cost, high-capacity storage servers that can provide efficient caching of PACS data, which was found to be 10 to 20 times faster than accessing the same data from the central PACS archive. It is particularly suitable for large hospitals and academic environments where clinical conferences, interdisciplinary discussions and successive sessions of image processing are often part of complex workflows for patient management and decision making.

  4. Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D.; Baltzer, T.

    2005-12-01

    The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term on the order of hours to perhaps a week or two. In order to save interesting data in longer term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence is generally based on execution of a sequence of low level operating system specific commands with significant user involvement. The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying. In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back end storage media and interchangeable software components. The TDR interface will provide high level abstractions for long term storage, controlled, fast and reliable access, and data movement capabilities via a variety of technologies such as OPeNDAP and gridftp. The modular structure will allow substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as Replica Locater Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR. References: IDD: http://www.unidata.ucar.edu/content/software/idd/index.html THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html LEAD: http://lead.ou.edu/ RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf

  5. Author Correction: Decoupling electron and ion storage and the path from interfacial storage to artificial electrodes

    NASA Astrophysics Data System (ADS)

    Chen, Chia-Chin; Maier, Joachim

    2018-05-01

    In the version of this Perspective originally published, in the sentence "It is worthy of note that the final LiF-free situation characterized by MnO taking up the holes and the (F- containing) MnO surface taking up the lithium ions is also a subcase of the job-sharing concept23.", the word `holes' should have been `electrons'. This has now been corrected.

  6. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    NASA Astrophysics Data System (ADS)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing was first proposed by Google in the United States as an Internet-centred approach that provides standard, open network sharing services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of actual teaching needs. Cloud computing, which uses Internet technology to provide shared resources, has therefore become an important means of sharing digital education resources in current higher education. Based on a cloud computing environment, this paper analyzes the existing problems in the sharing of digital educational resources among independent colleges in Jiangxi Province. Drawing on the characteristics of cloud computing, namely mass storage, efficient operation and low cost, the author explores and studies the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the shared model is put into practical application.

  7. Methods to assess geological CO2 storage capacity: Status and best practice

    USGS Publications Warehouse

    Heidug, Wolf; Brennan, Sean T.; Holloway, Sam; Warwick, Peter D.; McCoy, Sean; Yoshimura, Tsukasa

    2013-01-01

    To understand the emission reduction potential of carbon capture and storage (CCS), decision makers need to understand the amount of CO2 that can be safely stored in the subsurface and the geographical distribution of storage resources. Estimates of storage resources need to be made using reliable and consistent methods. Previous estimates of CO2 storage potential for a range of countries and regions have been based on a variety of methodologies resulting in a correspondingly wide range of estimates. Consequently, there has been uncertainty about which of the methodologies were most appropriate in given settings, and whether the estimates produced by these methods were useful to policy makers trying to determine the appropriate role of CCS. In 2011, the IEA convened two workshops which brought together experts from six national survey organisations to review CO2 storage assessment methodologies and make recommendations on how to harmonise CO2 storage estimates worldwide. This report presents the findings of these workshops and an internationally shared guideline for quantifying CO2 storage resources.

  8. Above the cloud computing orbital services distributed data model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.
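    The data model described above tags each stored item with identification, integrity, and business-model (ownership, retention, resale-rights) metadata. The sketch below shows one way such a per-item record could look; the class, field names, and retention rule are hypothetical illustrations rather than the paper's actual model.

```python
# Hypothetical per-item record for a distributed orbital data store.
# Field names and the retention rule are illustrative, not taken from the paper.

from dataclasses import dataclass, field
from hashlib import sha256
import time

@dataclass
class OrbitalDataItem:
    owner_craft: str              # consumer craft that owns the data
    storing_craft: str            # provider craft currently holding it
    payload: bytes
    retention_until: float        # epoch seconds after which the holder may discard
    resale_allowed: bool = False  # may the storing craft resell or retransmit it?
    checksum: str = field(init=False, default="")

    def __post_init__(self):
        # integrity tag so any craft in the relay chain can verify the payload
        self.checksum = sha256(self.payload).hexdigest()

    def may_discard(self, now=None):
        return (now or time.time()) > self.retention_until

item = OrbitalDataItem(owner_craft="CONSUMER-7", storing_craft="PROVIDER-2",
                       payload=b"thermal image tile 42",
                       retention_until=time.time() + 3600)
print(item.checksum[:16], item.may_discard())
```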

  9. GTZ: a fast compression and cloud transmission tool optimized for FASTQ files.

    PubMed

    Xing, Yuting; Li, Gen; Wang, Zhenguo; Feng, Bolun; Song, Zhuo; Wu, Chengkun

    2017-12-28

    The dramatic development of DNA sequencing technology is generating real big data, craving more storage and bandwidth. To speed up data sharing and bring data to computing resources faster and cheaper, it is necessary to develop a compression tool that can support efficient compression and transmission of sequencing data onto cloud storage. This paper presents GTZ, a compression and transmission tool optimized for FASTQ files. As a reference-free lossless FASTQ compressor, GTZ treats different lines of FASTQ separately, utilizes adaptive context modelling to estimate their characteristic probabilities, and compresses data blocks with arithmetic coding. GTZ can also be used to compress multiple files or directories at once. Furthermore, as a tool to be used in the cloud computing era, it is capable of saving compressed data locally or transmitting data directly into the cloud by choice. We evaluated the performance of GTZ on some diverse FASTQ benchmarks. Results show that in most cases, it outperforms many other tools in terms of compression ratio, speed and stability. GTZ is a tool that enables efficient lossless FASTQ data compression and simultaneous data transmission onto the cloud. It emerges as a useful tool for NGS data storage and transmission in the cloud environment. GTZ is freely available online at https://github.com/Genetalks/gtz.
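    GTZ's key design choice of treating the different FASTQ line types separately can be illustrated with a minimal sketch: split the four-line records into ID, sequence, and quality streams and compress each stream on its own. In the sketch below, zlib merely stands in for GTZ's adaptive context modelling and arithmetic coding, and the helper names are invented for the example.

```python
# Minimal illustration of stream separation in reference-free FASTQ compressors:
# per-stream statistics compress better than interleaved records. zlib is only a
# stand-in for GTZ's context modelling + arithmetic coding.

import zlib

def split_streams(fastq_text):
    ids, seqs, quals = [], [], []
    lines = fastq_text.strip().splitlines()
    for i in range(0, len(lines), 4):
        ids.append(lines[i])        # '@' header line
        seqs.append(lines[i + 1])   # nucleotide sequence
        quals.append(lines[i + 3])  # quality string (the '+' separator line is dropped)
    return ids, seqs, quals

def compress_streams(fastq_text):
    return [zlib.compress("\n".join(s).encode(), 9) for s in split_streams(fastq_text)]

sample = "@read1\nACGTACGT\n+\nIIIIHHHH\n@read2\nTTGGCCAA\n+\nFFFFGGGG\n"
print([len(block) for block in compress_streams(sample)])  # per-stream compressed sizes
```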

  10. Server-side Log Data Analytics for I/O Workload Characterization and Coordination on Large Shared Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Gunasekaran, Raghul; Ma, Xiaosong

    2016-01-01

    Inter-application I/O contention and performance interference have been recognized as severe problems. In this work, we demonstrate, through measurement from Titan (the world's No. 3 supercomputer), that high I/O variance co-exists with the fact that individual storage units remain under-utilized for the majority of the time. This motivates us to propose AID, a system that performs automatic application I/O characterization and I/O-aware job scheduling. AID analyzes existing I/O traffic and batch job history logs, without any prior knowledge of applications or user/developer involvement. It identifies the small set of I/O-intensive candidates among all applications running on a supercomputer and subsequently mines their I/O patterns, using more detailed per-I/O-node traffic logs. Based on such auto-extracted information, AID provides online I/O-aware scheduling recommendations to steer I/O-intensive applications away from heavy ongoing I/O activities. We evaluate AID on Titan, using both real applications (with extracted I/O patterns validated by contacting users) and our own pseudo-applications. Our results confirm that AID is able to (1) identify I/O-intensive applications and their detailed I/O characteristics, and (2) significantly reduce these applications' I/O performance degradation/variance by jointly evaluating outstanding applications' I/O patterns and real-time system I/O load.
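    A toy version of the AID workflow — flag I/O-intensive applications from aggregate traffic logs, then advise the scheduler to delay them while system-wide I/O load is high — is sketched below. The log format, thresholds, and function names are invented for illustration; the real system mines Titan's server-side storage and batch logs.

```python
# Toy log-mining and scheduling-advice sketch in the spirit of AID.
# Log format and thresholds are invented for illustration.

from collections import defaultdict

def io_intensive_apps(job_log, gb_threshold=100.0):
    """job_log: iterable of (app_name, gigabytes_moved) for completed jobs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for app, gb in job_log:
        totals[app] += gb
        counts[app] += 1
    return {app for app in totals if totals[app] / counts[app] >= gb_threshold}

def schedule_advice(app, system_load_gbps, intensive, busy_gbps=20.0):
    # steer I/O-heavy work away from heavy ongoing I/O activity
    return "delay" if app in intensive and system_load_gbps > busy_gbps else "run"

log = [("climate_sim", 350.0), ("climate_sim", 410.0), ("post_proc", 2.5)]
heavy = io_intensive_apps(log)
print(heavy, schedule_advice("climate_sim", system_load_gbps=35.0, intensive=heavy))
```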

  11. Thermal energy storage for CSP (Concentrating Solar Power)

    NASA Astrophysics Data System (ADS)

    Py, Xavier; Sadiki, Najim; Olives, Régis; Goetz, Vincent; Falcoz, Quentin

    2017-07-01

    The major advantage of concentrating solar power over photovoltaics is the possibility to store thermal energy at large scale, allowing dispatchability. Thus, only CSP solar power plants that include thermal storage can be operated 24 h/day using exclusively the solar resource. Nevertheless, owing to the limited availability of mined nitrate salts, the currently mature two-tank molten salt technology cannot be scaled up to achieve the expected international share of power production by 2050. Alternative storage materials, such as natural rocks and recycled ceramics made from industrial wastes, are therefore under study. The present paper is a review of these alternative approaches.

  12. Energy storage at the threshold: Smart mobility and the grid of the future

    NASA Astrophysics Data System (ADS)

    Crabtree, George

    2018-01-01

    Energy storage is poised to drive transformations in transportation and the electricity grid that personalize access to mobility and energy services, not unlike the transformation of smart phones that personalized access to people and information. Storage will work with other emerging technologies such as electric vehicles, ride-sharing, self-driving and connected cars in transportation and with renewable generation, distributed energy resources and smart energy management on the grid to create mobility and electricity as services matched to customer needs replacing the conventional one-size-fits-all approach. This survey outlines the prospects, challenges and impacts of the coming mobility and electricity transformations.

  13. Storages Are Not Forever

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cambria, Erik; Chattopadhyay, Anupam; Linn, Eike

    Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. Here, we chose to look closely into one concern in this paper, namely the limited amount of data storage. By a simple extrapolatory analysis, it is shown that we are on the way to exhausting our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiatives in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tapping into the interplay between storage and computing to minimize storage allocation; thirdly, exploring ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness of the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.
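    The core of the paper's argument is a simple extrapolation: an exponentially growing volume of stored data eventually exceeds any finite manufacturable storage capacity when nothing is recycled. The toy calculation below reproduces the shape of that argument; the starting volume, growth rate, and capacity ceiling are illustrative assumptions, not the paper's numbers, and the result depends entirely on them.

```python
# Toy extrapolation: year at which exponentially growing data outgrows a fixed
# total storage capacity. All parameter values are illustrative assumptions.

import math

def exhaustion_year(start_year=2017, data_zb=16.0, growth_per_year=0.25,
                    capacity_ceiling_zb=1e6):
    """Solve data_zb * (1 + g)^t = ceiling for t (all volumes in zettabytes)."""
    years = math.log(capacity_ceiling_zb / data_zb) / math.log(1.0 + growth_per_year)
    return start_year + years

print(f"capacity exhausted around {exhaustion_year():.0f}")
```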

  14. Storages Are Not Forever

    DOE PAGES

    Cambria, Erik; Chattopadhyay, Anupam; Linn, Eike; ...

    2017-05-27

    Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. Here, we chose to look closely into one concern in this paper, namely the limited amount of data storage. By a simple extrapolatory analysis, it is shown that we are on the way to exhausting our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiatives in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tapping into the interplay between storage and computing to minimize storage allocation; thirdly, exploring ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness of the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.

  15. Comparative assessment of status and opportunities for carbon Dioxide Capture and storage and Radioactive Waste Disposal In North America

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oldenburg, C.; Birkholzer, J.T.

    Aside from the target storage regions being underground, geologic carbon sequestration (GCS) and radioactive waste disposal (RWD) share little in common in North America. The large volume of carbon dioxide (CO2) needing to be sequestered, along with its relatively benign health effects, presents a sharp contrast to the limited volumes and hazardous nature of high-level radioactive waste (RW). There is well-documented capacity in North America for 100 years or more of sequestration of CO2 from coal-fired power plants. Aside from economics, the challenges of GCS include the lack of a fully established legal and regulatory framework for ownership of injected CO2, the need for an expanded pipeline infrastructure, and public acceptance of the technology. As for RW, the USA had proposed the unsaturated tuffs of Yucca Mountain, Nevada, as the region's first high-level RWD site before removing it from consideration in early 2009. The Canadian RW program is currently evolving with options that range from geologic disposal to both decentralized and centralized permanent storage in surface facilities. Both the USA and Canada have established legal and regulatory frameworks for RWD. The most challenging technical issue for RWD is the need to predict repository performance on extremely long time scales (10^4-10^6 years). While attitudes toward nuclear power are rapidly changing as fossil-fuel costs soar and changes in climate occur, public perception remains the most serious challenge to opening RW repositories. Because of the many significant differences between RWD and GCS, there is little that can be shared between them from regulatory, legal, transportation, or economic perspectives. As for public perception, there is currently an opportunity to engage the public on the benefits and risks of both GCS and RWD as they learn more about the urgent energy-climate crisis created by greenhouse gas emissions from current fossil-fuel combustion practices.

  16. Tech Transfer Webinar: Amoeba Cysts as Natural Containers for the Transport and Storage of Pathogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El-Etr, Sahar

    2014-10-08

    Sahar El-Etr, Biomedical Scientist at the Lawrence Livermore National Laboratory, shares a unique method for transporting clinical samples from the field to a laboratory. The use of amoeba as “natural” containers for pathogens was utilized to develop the first living system for the transport and storage of pathogens. The amoeba system works at ambient temperature for extended periods of time—capabilities currently not available for biological sample transport.

  17. A thermal storage capacity market for non dispatchable renewable energies

    NASA Astrophysics Data System (ADS)

    Bennouna, El Ghali; Mouaky, Ammar; Arrad, Mouad; Ghennioui, Abdellatif; Mimet, Abdelaziz

    2017-06-01

    Due to the increasingly high capacity of wind power and solar PV in Germany and some other European countries, and the high share of variable renewable energy resources in comparison to fossil and nuclear capacity, a power reserve market structured by auction systems was created to facilitate the exchange of balancing power capacity between systems and even grid operators. Morocco has a large potential for both wind and solar energy and is engaged in a program to deploy 2000 MW of wind capacity by 2020 and 3000 MW of solar capacity by 2030. Although the competitiveness of wind energy is very strong, it appears clearly that the wind program could be even more ambitious than it is, especially when compared to the large exploitable potential. On the other hand, heavy investments in concentrated solar power plants equipped with thermal energy storage were initiated a few years ago, including the launch of the first part of the Nour Ouarzazate complex, the goal being to reach stable, dispatchable and affordable electricity, especially during evening peak hours. This paper aims to demonstrate the potential of shared thermal storage capacity between dispatchable and non-dispatchable renewable energies, particularly CSP and wind power, thus highlighting the importance of a storage capacity market in parallel to the power reserve market and how it could enhance the market penetration of both wind and CSP.

  18. Design of the transfer line from booster to storage ring at 3 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayar, C., E-mail: cafer.bayar@cern.ch; Ciftci, A. K., E-mail: abbas.kenan.ciftci@cern.ch

    The Synchrotron Booster Ring accelerates the e-beam up to 3 GeV, and particles are transported from the booster to the storage ring by a transfer line. In this study, two options are considered: the first is a long booster which shares the same tunnel with the storage ring, and the second is a compact booster. As a result, two transfer lines are designed based on the booster options. The optical design is constrained by the e-beam Twiss parameters entering and leaving the transfer line. The Twiss parameters at the extraction point of the booster are used for the entrance of the transfer line and are matched at the exit of the transfer line to the injection point of the storage ring.

  19. Vehicle-to-Grid Automatic Load Sharing with Driver Preference in Micro-Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yubo; Nazaripouya, Hamidreza; Chu, Chi-Cheng

    Integration of Electric Vehicles (EVs) with the power grid not only brings new challenges for load management, but also opportunities for distributed storage and generation. This paper comprehensively models and analyzes distributed Vehicle-to-Grid (V2G) for automatic load sharing with driver preference. In a micro-grid with limited communications, V2G EVs need to decide load sharing based on their own power and voltage profiles. A droop-based controller that takes driver preference into account is proposed in this paper to address the distributed control of EVs. Simulations are designed for three fundamental V2G automatic load sharing scenarios that include all system dynamics of such applications. Simulation results demonstrate that active power sharing is achieved proportionally among V2G EVs with consideration of driver preference. In addition, the results also verify the system stability and reactive power sharing analysis in the system modelling, which sheds light on large-scale V2G automatic load sharing in more complicated cases.
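    The droop idea underlying the controller — each EV derives its active-power contribution from its locally measured voltage, scaled by a driver-preference weight, without any communication — can be sketched in a few lines. The gains, limits, and preference model below are illustrative assumptions, not the paper's tuned controller, which also covers reactive power sharing and stability analysis.

```python
# Illustrative droop rule for V2G active-power sharing with a driver-preference
# weight. Gains and limits are assumed values for demonstration only.

def v2g_droop_power(v_meas, v_nom=1.0, droop_gain=10.0,
                    preference=1.0, p_max_kw=7.0):
    """Active power an EV contributes (kW, positive = discharging to the grid).

    preference in [0, 1]: 1.0 = driver fully willing to support the grid,
    0.0 = driver reserves the battery for driving.
    """
    p = droop_gain * (v_nom - v_meas) * preference * p_max_kw
    return max(-p_max_kw, min(p_max_kw, p))  # respect the charger rating

# Two EVs see the same under-voltage but have different driver preferences,
# so they pick up the load in proportion to those preferences.
for pref in (1.0, 0.4):
    print(pref, round(v2g_droop_power(v_meas=0.97, preference=pref), 2), "kW")
```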

  20. A new standardized data collection system for interdisciplinary thyroid cancer management: Thyroid COBRA.

    PubMed

    Tagliaferri, Luca; Gobitti, Carlo; Colloca, Giuseppe Ferdinando; Boldrini, Luca; Farina, Eleonora; Furlan, Carlo; Paiar, Fabiola; Vianello, Federica; Basso, Michela; Cerizza, Lorenzo; Monari, Fabio; Simontacchi, Gabriele; Gambacorta, Maria Antonietta; Lenkowicz, Jacopo; Dinapoli, Nicola; Lanzotti, Vito; Mazzarotto, Renzo; Russi, Elvio; Mangoni, Monica

    2018-07-01

    The big data approach offers a powerful alternative to evidence-based medicine. This approach could guide cancer management thanks to the application of machine learning to large-scale data. The aim of the Thyroid CoBRA (Consortium for Brachytherapy Data Analysis) project is to develop a standardized web data collection system focused on thyroid cancer. The Metabolic Radiotherapy Working Group of the Italian Association of Radiation Oncology (AIRO) endorsed the implementation of a consortium directed to thyroid cancer management and data collection. The agreement conditions, the ontology of the collected data and the related software services were defined by a multicentre ad hoc working group (WG). Six Italian cancer centres initially started the project and defined and signed the Thyroid COBRA consortium agreement. Three data set tiers were identified: Registry, Procedures and Research. The COBRA Storage System (C-SS) proved not to be time-consuming and to be privacy-respecting, as data can be extracted directly from each centre's storage platform through a secured connection that ensures reliable encryption of sensitive data. Automatic data archiving could be performed directly from the Image Hospital Storage System or the Radiotherapy Treatment Planning Systems. The C-SS architecture will allow "cloud storage" or "distributed learning" approaches for predictive model definition and the further development of clinical decision support tools. The development of the Thyroid COBRA data Storage System C-SS through a multicentre consortium approach appeared to be a feasible tool for setting up a complex and privacy-preserving data sharing system oriented to the management of thyroid cancer and, in the near future, of every cancer type. Copyright © 2018 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  1. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  2. An optimization method for speech enhancement based on deep neural network

    NASA Astrophysics Data System (ADS)

    Sun, Haixia; Li, Sikun

    2017-06-01

    This paper puts forward a deep neural network (DNN) model with a more credible data set and a more robust structure. First, we use two regularization techniques, dropout and a sparsity constraint, to strengthen the generalization ability of the model. In this way, the model not only achieves consistency between the pre-training model and the fine-tuning model, but also reduces resource consumption. Network compression through weight sharing and quantization is then applied to reduce storage cost. Finally, we evaluate the quality of the reconstructed speech according to different criteria. The results show that the improved framework performs well on speech enhancement and meets the requirements of speech processing.
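    Of the compression steps mentioned, weight sharing and quantization are the most mechanical: weights are mapped onto a small shared codebook so that only the codebook plus one small integer index per weight needs to be stored. The numpy sketch below uses a uniform codebook for simplicity; the codebook size, assignment rule, and names are illustrative choices, not the paper's method.

```python
# Minimal weight-sharing/quantization sketch: replace each float weight by the
# index of its nearest codebook entry. Codebook size is an assumed choice.

import numpy as np

def share_and_quantize(weights, n_levels=16):
    """Return (codebook, index array) so that codebook[idx] approximates weights."""
    codebook = np.linspace(weights.min(), weights.max(), n_levels)  # shared values
    idx = np.abs(weights[..., None] - codebook).argmin(axis=-1)     # nearest entry
    return codebook, idx.astype(np.uint8)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
codebook, idx = share_and_quantize(w)
print(f"{w.nbytes} -> {codebook.nbytes + idx.nbytes} bytes "
      f"({w.nbytes / (codebook.nbytes + idx.nbytes):.1f}x smaller)")
```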

  3. BRISK--research-oriented storage kit for biology-related data.

    PubMed

    Tan, Alan; Tripp, Ben; Daley, Denise

    2011-09-01

    In genetic science, large-scale international research collaborations represent a growing trend. These collaborations have demanding and challenging database, storage, retrieval and communication needs. These studies typically involve demographic and clinical data, in addition to the results from numerous genomic studies (omics studies) such as gene expression, eQTL, genome-wide association and methylation studies, which present numerous challenges, thus the need for data integration platforms that can handle these complex data structures. Inefficient methods of data transfer and access control still plague research collaboration. As science becomes more and more collaborative in nature, the need for a system that adequately manages data sharing becomes paramount. Biology-Related Information Storage Kit (BRISK) is a package of several web-based data management tools that provide a cohesive data integration and management platform. It was specifically designed to provide the architecture necessary to promote collaboration and expedite data sharing between scientists. The software, documentation, Java source code and demo are available at http://genapha.icapture.ubc.ca/brisk/index.jsp. BRISK was developed in Java, and tested on an Apache Tomcat 6 server with a MySQL database. denise.daley@hli.ubc.ca.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, C.W.; Giraud, K.M.

    Newcomer countries expected to develop new nuclear power programs by 2030 are being encouraged by the International Atomic Energy Agency to explore the use of shared facilities for spent fuel storage and geologic disposal. Multinational underground nuclear parks (M-UNPs) are an option for sharing such facilities. Newcomer countries with suitable bedrock conditions could volunteer to host M-UNPs. M-UNPs would include back-end fuel cycle facilities, in open or closed fuel cycle configurations, with sufficient capacity to enable M-UNP host countries to provide for-fee waste management services to partner countries, and to manage waste from the M-UNP power reactors. M-UNP potential advantagesmore » include: the option for decades of spent fuel storage; fuel-cycle policy flexibility; increased proliferation resistance; high margin of physical security against attack; and high margin of containment capability in the event of beyond-design-basis accidents, thereby reducing the risk of Fukushima-like radiological contamination of surface lands. A hypothetical M-UNP in crystalline rock with facilities for small modular reactors, spent fuel storage, reprocessing, and geologic disposal is described using a room-and-pillar reference-design cavern. Underground construction cost is judged tractable through use of modern excavation technology and careful site selection. (authors)« less

  5. Library Automation.

    ERIC Educational Resources Information Center

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  6. Design Considerations for a Web-based Database System of ELISpot Assay in Immunological Research

    PubMed Central

    Ma, Jingming; Mosmann, Tim; Wu, Hulin

    2005-01-01

    The enzyme-linked immunospot (ELISpot) assay has been a primary tool in immunological research (such as studies of HIV-specific T cell responses). Due to the huge amount of data involved in ELISpot assay testing, a database system is needed for efficient data entry, easy retrieval, secure storage, and convenient data processing. In addition, the NIH has recently issued a policy to promote the sharing of research data (see http://grants.nih.gov/grants/policy/data_sharing). A Web-based database system will clearly benefit data sharing among broad research communities. Here are some considerations for a database system for the ELISpot assay (DBSEA). PMID:16779326

  7. Minimum information required for a DMET experiment reporting.

    PubMed

    Kumuthini, Judit; Mbiyavanga, Mamana; Chimusa, Emile R; Pathak, Jyotishman; Somervuo, Panu; Van Schaik, Ron Hn; Dolzan, Vita; Mizzi, Clint; Kalideen, Kusha; Ramesar, Raj S; Macek, Milan; Patrinos, George P; Squassina, Alessio

    2016-09-01

    The aim is to provide pharmacogenomics reporting guidelines and the information and tools required for reporting to public omic databases. For effective DMET data interpretation, sharing, interoperability, reproducibility and reporting, we propose the Minimum Information required for a DMET Experiment (MIDE) reporting guideline. MIDE provides reporting guidelines and describes the information required for reporting, data storage and data sharing in the form of XML. The MIDE guidelines will benefit the scientific community conducting pharmacogenomics experiments, including the reporting of pharmacogenomics data from other technology platforms, with tools that ease and automate the generation of such reports using the standardized MIDE XML schema, facilitating the sharing, dissemination and reanalysis of datasets through accessible and transparent pharmacogenomics data reporting.
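    Because MIDE expresses the minimum reporting information as XML, a report generator can be very small. The sketch below emits a minimal MIDE-style document using only the Python standard library; the element and attribute names are placeholders, since the authoritative field set is defined by the actual MIDE XML schema.

```python
# Hedged sketch of a minimal MIDE-style XML report. Element and attribute names
# are placeholders; the real MIDE XML schema defines the required fields.

import xml.etree.ElementTree as ET

def build_mide_report(experiment_id, platform, samples):
    root = ET.Element("MIDEReport", attrib={"experimentID": experiment_id})
    ET.SubElement(root, "Platform").text = platform
    sample_list = ET.SubElement(root, "Samples")
    for sample_id, genotype in samples:
        s = ET.SubElement(sample_list, "Sample", attrib={"id": sample_id})
        ET.SubElement(s, "Genotype").text = genotype
    return ET.tostring(root, encoding="unicode")

print(build_mide_report("DMET-001", "DMET Plus",
                        [("S1", "CYP2D6*1/*4"), ("S2", "CYP2D6*1/*1")]))
```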

  8. Optimizing carbon storage and biodiversity protection in tropical agricultural landscapes.

    PubMed

    Gilroy, James J; Woodcock, Paul; Edwards, Felicity A; Wheeler, Charlotte; Medina Uribe, Claudia A; Haugaasen, Torbjørn; Edwards, David P

    2014-07-01

    With the rapidly expanding ecological footprint of agriculture, the design of farmed landscapes will play an increasingly important role for both carbon storage and biodiversity protection. Carbon and biodiversity can be enhanced by integrating natural habitats into agricultural lands, but a key question is whether benefits are maximized by including many small features throughout the landscape ('land-sharing' agriculture) or a few large contiguous blocks alongside intensive farmland ('land-sparing' agriculture). In this study, we are the first to integrate carbon storage alongside multi-taxa biodiversity assessments to compare land-sparing and land-sharing frameworks. We do so by sampling carbon stocks and biodiversity (birds and dung beetles) in landscapes containing agriculture and forest within the Colombian Chocó-Andes, a zone of high global conservation priority. We show that woodland fragments embedded within a matrix of cattle pasture hold less carbon per unit area than contiguous primary or advanced secondary forests (>15 years). Farmland sites also support less diverse bird and dung beetle communities than contiguous forests, even when farmland retains high levels of woodland habitat cover. Landscape simulations based on these data suggest that land-sparing strategies would be more beneficial for both carbon storage and biodiversity than land-sharing strategies across a range of production levels. Biodiversity benefits of land-sparing are predicted to be similar whether spared lands protect primary or advanced secondary forests, owing to the close similarity of bird and dung beetle communities between the two forest classes. Land-sparing schemes that encourage the protection and regeneration of natural forest blocks thus provide a synergy between carbon and biodiversity conservation, and represent a promising strategy for reducing the negative impacts of agriculture on tropical ecosystems. However, further studies examining a wider range of ecosystem services will be necessary to fully understand the links between land-allocation strategies and long-term ecosystem service provision. © 2014 John Wiley & Sons Ltd.

  9. Tech Transfer Webinar: Amoeba Cysts as Natural Containers for the Transport and Storage of Pathogens

    ScienceCinema

    El-Etr, Sahar

    2018-01-16

    Sahar El-Etr, Biomedical Scientist at the Lawrence Livermore National Laboratory, shares a unique method for transporting clinical samples from the field to a laboratory. The use of amoeba as “natural” containers for pathogens was utilized to develop the first living system for the transport and storage of pathogens. The amoeba system works at ambient temperature for extended periods of time—capabilities currently not available for biological sample transport.

  10. Research in Functionally Distributed Computer Systems Development. Volume XII. Design Considerations in Distributed Data Base Management Systems.

    DTIC Science & Technology

    1977-04-01

    task of data organization, management, and storage has been given to a select group of specialists. These specialists (the Data Base Administrators, report writers, etc.)... A distributed DBMS involves first identifying a set of two or more tasks blocking each other from a collection of shared records. Once the set of

  11. Design and Implementation of Telemedicine based on Java Media Framework

    NASA Astrophysics Data System (ADS)

    Xiong, Fengguang; Jia, Zhiyan

    After analyzing the importance of and the problems with telemedicine, this paper proposes a telemedicine system based on JMF to design and implement the capture, compression, storage, transmission, reception and playback of medical audio and video. The telemedicine system can solve existing problems such as unshared medical information, high platform dependence, software incompatibility and so on. Experimental data show that the system has low hardware cost, is easy to transmit and store, and is portable and powerful.

  12. Digital radiography: spatial and contrast resolution

    NASA Astrophysics Data System (ADS)

    Bjorkholm, Paul; Annis, M.; Frederick, E.; Stein, J.; Swift, R.

    1981-07-01

    The addition of digital image collection and storage to standard and newly developed x-ray imaging techniques has allowed spectacular improvements in some diagnostic procedures. There is no reason to expect that the developments in this area are yet complete. But no matter what further developments occur in this field, all the techniques will share a common element, digital image storage and processing. This common element alone determines some of the important imaging characteristics. These will be discussed using one system, the Medical MICRODOSE System as an example.

  13. Prototyping an online wetland ecosystem services model using open model sharing standards

    USGS Publications Warehouse

    Feng, M.; Liu, S.; Euliss, N.H.; Young, Caitlin; Mushet, D.M.

    2011-01-01

    Great interest currently exists for developing ecosystem models to forecast how ecosystem services may change under alternative land use and climate futures. Ecosystem services are diverse and include supporting services or functions (e.g., primary production, nutrient cycling), provisioning services (e.g., wildlife, groundwater), regulating services (e.g., water purification, floodwater retention), and even cultural services (e.g., ecotourism, cultural heritage). Hence, the knowledge base necessary to quantify ecosystem services is broad and derived from many diverse scientific disciplines. Building the required interdisciplinary models is especially challenging as modelers from different locations and times may develop the disciplinary models needed for ecosystem simulations, and these models must be identified and made accessible to the interdisciplinary simulation. Additional difficulties include inconsistent data structures, formats, and metadata required by geospatial models as well as limitations on computing, storage, and connectivity. Traditional standalone and closed network systems cannot fully support sharing and integrating interdisciplinary geospatial models from variant sources. To address this need, we developed an approach to openly share and access geospatial computational models using distributed Geographic Information System (GIS) techniques and open geospatial standards. We included a means to share computational models compliant with Open Geospatial Consortium (OGC) Web Processing Services (WPS) standard to ensure modelers have an efficient and simplified means to publish new models. To demonstrate our approach, we developed five disciplinary models that can be integrated and shared to simulate a few of the ecosystem services (e.g., water storage, waterfowl breeding) that are provided by wetlands in the Prairie Pothole Region (PPR) of North America.

  14. Optimisation of the Management of Higher Activity Waste in the UK - 13537

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Ciara; Buckley, Matthew

    2013-07-01

    The Upstream Optioneering project was created in the Nuclear Decommissioning Authority (UK) to support the development and implementation of significant opportunities to optimise activities across all the phases of the Higher Activity Waste management life cycle (i.e. retrieval, characterisation, conditioning, packaging, storage, transport and disposal). The objective of the Upstream Optioneering project is to work in conjunction with other functions within NDA and the waste producers to identify and deliver solutions to optimise the management of higher activity waste. Historically, optimisation may have occurred on aspects of the waste life cycle (considered here to include retrieval, conditioning, treatment, packaging, interim storage, transport to final end state, which may be geological disposal). By considering the waste life cycle as a whole, critical analysis of assumed constraints may lead to cost savings for the UK Tax Payer. For example, it may be possible to challenge the requirements for packaging wastes for disposal to deliver an optimised waste life cycle. It is likely that the challenges faced in the UK are shared in other countries. It is therefore likely that the opportunities identified may also apply elsewhere, with the potential for sharing information to enable value to be shared. (authors)

  15. A Workflow-based Intelligent Network Data Movement Advisor with End-to-end Performance Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Michelle M.; Wu, Chase Q.

    2013-11-07

    Next-generation eScience applications often generate large amounts of simulation, experimental, or observational data that must be shared and managed by collaborative organizations. Advanced networking technologies and services have been rapidly developed and deployed to facilitate such massive data transfer. However, these technologies and services have not been fully utilized, mainly because their use typically requires significant domain knowledge and in many cases application users are not even aware of their existence. By leveraging the functionalities of an existing Network-Aware Data Movement Advisor (NADMA) utility, we propose a new Workflow-based Intelligent Network Data Movement Advisor (WINDMA) with end-to-end performance optimization for this DOE-funded project. This WINDMA system integrates three major components: resource discovery, data movement, and status monitoring, and supports the sharing of common data movement workflows through account and database management. This system provides a web interface and interacts with existing data/space management and discovery services such as Storage Resource Management, transport methods such as GridFTP and GlobusOnline, and network resource provisioning brokers such as ION and OSCARS. We demonstrate the efficacy of the proposed transport-support workflow system in several use cases based on its implementation and deployment in DOE wide-area networks.

  16. Integration of Variable Speed Pumped Hydro Storage in Automatic Generation Control Systems

    NASA Astrophysics Data System (ADS)

    Fulgêncio, N.; Moreira, C.; Silva, B.

    2017-04-01

    Pumped storage power (PSP) plants are expected to be an important player in modern electrical power systems when dealing with increasing shares of new renewable energies (NRE) such as solar or wind power. The massive penetration of NRE and the consequent replacement of conventional synchronous units will significantly affect the controllability of the system. In order to evaluate the capability of variable speed PSP plants to participate in frequency restoration reserve (FRR) provision, taking into account the expected performance in terms of improved ramp response capability, a comparison with conventional hydro units is presented. To address this issue, a three-area test network was considered, together with the corresponding automatic generation control (AGC) systems, which are responsible for re-dispatching the generation units to re-establish the power interchange between areas as well as the nominal system frequency. The main issue under analysis in this paper is the benefit of the fast response of variable speed PSP with respect to its capability of providing fast power balancing in a control area.
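    The quantity an AGC system drives to zero is the area control error (ACE), which combines the tie-line interchange deviation with a frequency-bias term; the re-dispatch signal sent to a fast unit such as a variable-speed PSP plant is essentially an integral of the ACE, limited by the unit's ramp capability. The sketch below shows that relationship with illustrative gains and bias values, not the values used in the three-area study.

```python
# Illustrative area-control-error (ACE) calculation and a ramp-limited integral
# AGC step. Bias, gain, and ramp-limit values are assumptions for demonstration.

def area_control_error(p_tie_actual_mw, p_tie_sched_mw,
                       freq_hz, freq_nom_hz=50.0, bias_mw_per_hz=800.0):
    """ACE = (P_tie,actual - P_tie,scheduled) + B * (f - f_nom)."""
    return (p_tie_actual_mw - p_tie_sched_mw) + bias_mw_per_hz * (freq_hz - freq_nom_hz)

def agc_setpoint(prev_setpoint_mw, ace_mw, ki=0.05, ramp_limit_mw=30.0):
    # The ramp limit is where a fast-ramping variable-speed PSP unit differs from
    # a conventional hydro unit: it can accept a larger per-step change.
    delta = max(-ramp_limit_mw, min(ramp_limit_mw, -ki * ace_mw))
    return prev_setpoint_mw + delta

ace = area_control_error(p_tie_actual_mw=480.0, p_tie_sched_mw=500.0, freq_hz=49.95)
print(f"ACE = {ace:.1f} MW -> new setpoint {agc_setpoint(200.0, ace):.1f} MW")
```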

  17. Effect of glycine functionalization of 2D titanium carbide (MXene) on charge storage

    DOE PAGES

    Chen, Chi; Boota, Muhammad; Urbankowski, Patrick; ...

    2018-02-20

    Restacking of two-dimensional (2D) flakes reduces the accessibility of electrolyte ions and is a problem in energy storage and other applications. Organic molecules can be used to prevent restacking and keep the interlayer space open. In this paper, we report on a combined theoretical and experimental investigation of the interaction between 2D titanium carbide (MXene), Ti3C2Tx, and glycine. From first-principles calculations, we present the functionalization of the Ti3C2O2 surface with glycine, evidenced by the shared electrons between Ti and N atoms. To experimentally validate our predictions, we synthesized flexible freestanding films of Ti3C2Tx/glycine hybrids. X-ray diffraction and X-ray photoelectron spectroscopy confirmed the increased interlayer spacing and possible Ti-N bonding, respectively, which agree with our theoretical predictions. Finally, the Ti3C2Tx/glycine hybrid films exhibited improved rate and cycling performance compared to pristine Ti3C2Tx, possibly due to better charge percolation within the expanded Ti3C2Tx.

  18. Computational biology in the cloud: methods and new insights from computing at scale.

    PubMed

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  19. Cloud-enabled microscopy and droplet microfluidic platform for specific detection of Escherichia coli in water.

    PubMed

    Golberg, Alexander; Linshiz, Gregory; Kravets, Ilia; Stawski, Nina; Hillson, Nathan J; Yarmush, Martin L; Marks, Robert S; Konry, Tania

    2014-01-01

    We report an all-in-one platform - ScanDrop - for the rapid and specific capture, detection, and identification of bacteria in drinking water. The ScanDrop platform integrates droplet microfluidics, a portable imaging system, and cloud-based control software and data storage. The cloud-based control software and data storage enables robotic image acquisition, remote image processing, and rapid data sharing. These features form a "cloud" network for water quality monitoring. We have demonstrated the capability of ScanDrop to perform water quality monitoring via the detection of an indicator coliform bacterium, Escherichia coli, in drinking water contaminated with feces. Magnetic beads conjugated with antibodies to E. coli antigen were used to selectively capture and isolate specific bacteria from water samples. The bead-captured bacteria were co-encapsulated in pico-liter droplets with fluorescently-labeled anti-E. coli antibodies, and imaged with an automated custom designed fluorescence microscope. The entire water quality diagnostic process required 8 hours from sample collection to online-accessible results compared with 2-4 days for other currently available standard detection methods.

  20. A Techno-Economic Assessment of Hybrid Cooling Systems for Coal- and Natural-Gas-Fired Power Plants with and without Carbon Capture and Storage.

    PubMed

    Zhai, Haibo; Rubin, Edward S

    2016-04-05

    Advanced cooling systems can be deployed to enhance the resilience of thermoelectric power generation systems. This study developed and applied a new power plant modeling option for a hybrid cooling system at coal- or natural-gas-fired power plants with and without amine-based carbon capture and storage (CCS) systems. The results of the plant-level analyses show that the performance and cost of hybrid cooling systems are affected by a range of environmental, technical, and economic parameters. In general, when hot periods last the entire summer, the wet unit of a hybrid cooling system needs to share about 30% of the total plant cooling load in order to minimize the overall system cost. CCS deployment can lead to a significant increase in the water use of hybrid cooling systems, depending on the level of CO2 capture. Compared to wet cooling systems, widespread applications of hybrid cooling systems can substantially reduce water use in the electric power sector with only a moderate increase in the plant-level cost of electricity generation.

  1. Cloud-Enabled Microscopy and Droplet Microfluidic Platform for Specific Detection of Escherichia coli in Water

    PubMed Central

    Kravets, Ilia; Stawski, Nina; Hillson, Nathan J.; Yarmush, Martin L.; Marks, Robert S.; Konry, Tania

    2014-01-01

    We report an all-in-one platform – ScanDrop – for the rapid and specific capture, detection, and identification of bacteria in drinking water. The ScanDrop platform integrates droplet microfluidics, a portable imaging system, and cloud-based control software and data storage. The cloud-based control software and data storage enables robotic image acquisition, remote image processing, and rapid data sharing. These features form a “cloud” network for water quality monitoring. We have demonstrated the capability of ScanDrop to perform water quality monitoring via the detection of an indicator coliform bacterium, Escherichia coli, in drinking water contaminated with feces. Magnetic beads conjugated with antibodies to E. coli antigen were used to selectively capture and isolate specific bacteria from water samples. The bead-captured bacteria were co-encapsulated in pico-liter droplets with fluorescently-labeled anti-E. coli antibodies, and imaged with an automated custom designed fluorescence microscope. The entire water quality diagnostic process required 8 hours from sample collection to online-accessible results compared with 2–4 days for other currently available standard detection methods. PMID:24475107

  2. Sharing from Scratch: How To Network CD-ROM.

    ERIC Educational Resources Information Center

    Doering, David

    1998-01-01

    Examines common CD-ROM networking architectures: via existing operating systems (OS), thin server towers, and dedicated servers. Discusses digital video disc (DVD) and non-CD/DVD optical storage solutions and presents case studies of networks that work. (PEN)

  3. Parallel file system with metadata distributed across partitioned key-value store

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
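    The central idea — each compute node owns a partition of the shared file's per-sub-file metadata, and lookups are routed deterministically to the owning partition — can be shown with a toy in-memory model. In the sketch below, plain dictionaries stand in for MDHIM and the MPI transport, and the key layout and class name are invented for illustration.

```python
# Toy model of partitioned metadata for one logical shared file. Dictionaries
# stand in for MDHIM partitions; hash routing decides which partition owns a key.

from hashlib import md5

class PartitionedMetadataStore:
    def __init__(self, n_partitions):
        self.partitions = [dict() for _ in range(n_partitions)]

    def _owner(self, key):
        # deterministic hash routing, so every node agrees on the owning partition
        return int(md5(key.encode()).hexdigest(), 16) % len(self.partitions)

    def put(self, key, value):
        self.partitions[self._owner(key)][key] = value

    def get(self, key):
        return self.partitions[self._owner(key)].get(key)

store = PartitionedMetadataStore(n_partitions=4)
# metadata for sub-files written by different ranks into one logical shared file
store.put("shared.dat/rank0/offset0", {"length": 1 << 20, "oss": "oss-3"})
store.put("shared.dat/rank1/offset1048576", {"length": 1 << 20, "oss": "oss-7"})
print(store.get("shared.dat/rank1/offset1048576"))
```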

  4. Slycat™ User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crossno, Patricia J.; Gittinger, Jaxon; Hunt, Warren L.

    Slycat™ is a web-based system for performing data analysis and visualization of potentially large quantities of remote, high-dimensional data. Slycat™ specializes in working with ensemble data. An ensemble is a group of related data sets, which typically consists of a set of simulation runs exploring the same problem space. An ensemble can be thought of as a set of samples within a multi-variate domain, where each sample is a vector whose value defines a point in high-dimensional space. To understand and describe the underlying problem being modeled in the simulations, ensemble analysis looks for shared behaviors and common features across the group of runs. Additionally, ensemble analysis tries to quantify differences found in any members that deviate from the rest of the group. The Slycat™ system integrates data management, scalable analysis, and visualization. Results are viewed remotely on a user’s desktop via commodity web clients using a multi-tiered hierarchy of computation and data storage, as shown in Figure 1. Our goal is to operate on data as close to the source as possible, thereby reducing time and storage costs associated with data movement. Consequently, we are working to develop parallel analysis capabilities that operate on High Performance Computing (HPC) platforms, to explore approaches for reducing data size, and to implement strategies for staging computation across the Slycat™ hierarchy. Within Slycat™, data and visual analysis are organized around projects, which are shared by a project team. Project members are explicitly added, each with a designated set of permissions. Although users sign in to access Slycat™, individual accounts are not maintained. Instead, authentication is used to determine project access. Within projects, Slycat™ models capture analysis results and enable data exploration through various visual representations. Although for scientists each simulation run is a model of real-world phenomena given certain conditions, we use the term model to refer to our modeling of the ensemble data, not the physics. Different model types often provide complementary perspectives on data features when analyzing the same data set. Each model visualizes data at several levels of abstraction, allowing the user to range from viewing the ensemble holistically to accessing numeric parameter values for a single run. Bookmarks provide a mechanism for sharing results, enabling interesting model states to be labeled and saved.
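    The ensemble-analysis step described above — characterize the shared behaviour across runs and quantify members that deviate from it — can be approximated with a simple per-metric z-score screen. The sketch below only illustrates that idea; the threshold and the synthetic runs are invented, and Slycat™'s actual models use more sophisticated analyses.

```python
# Simplistic deviation screen for an ensemble of simulation runs: compare each
# run's metrics against the group mean. Threshold and data are illustrative.

import numpy as np

def flag_deviating_runs(ensemble, z_threshold=2.5):
    """ensemble: (n_runs, n_metrics) array. Returns indices of outlier runs."""
    mean = ensemble.mean(axis=0)
    std = ensemble.std(axis=0) + 1e-12        # avoid division by zero
    z = np.abs((ensemble - mean) / std)       # per-run, per-metric deviation
    return np.where(z.max(axis=1) > z_threshold)[0]

rng = np.random.default_rng(1)
runs = np.column_stack([rng.normal(1.0, 0.02, 10), rng.normal(5.2, 0.1, 10)])
runs[-1] = [1.8, 9.5]                         # one run far from the shared behaviour
print(flag_deviating_runs(runs))              # -> [9]
```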

  5. High Level Synthesis in ASP

    DTIC Science & Technology

    1986-08-19

    Thus in and g (X, Y) A and X share one element, and B and Y share another. Assigning a value to A (via its storage element) also assigns that value to X...functionality as well as generate it. i4 29 References [Ada] ’ADA as a Hardware Description Language: An Initial Report’ M.R. Bar- bacci, S. Grout, G ...1985; pp. 303-320. (Expert] ’An Expert-System Paradigm for Design’ Forrest D. Brewer, Daniel D. Gajski ; 23rd Design Automation Conference, 1986; pp

  6. Understanding the I/O Performance Gap Between Cori KNL and Haswell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jialin; Koziol, Quincey; Tang, Houjun

    2017-05-01

    The Cori system at NERSC has two compute partitions with different CPU architectures: a 2,004 node Haswell partition and a 9,688 node KNL partition, which ranked as the 5th most powerful and fastest supercomputer on the November 2016 Top 500 list. The compute partitions share a common storage configuration, and understanding the IO performance gap between them is important not only to NERSC/LBNL users and other national labs, but also to the relevant hardware vendors and software developers. In this paper, we have analyzed the performance of single-core and single-node IO comprehensively on the Haswell and KNL partitions, and have discovered the major bottlenecks, which include CPU frequencies and memory copy performance. We have also extended our performance tests to multi-node IO and revealed the IO cost difference caused by network latency, buffer size, and communication cost. Overall, we have developed a strong understanding of the IO gap between Haswell and KNL nodes, and the lessons learned from this exploration will guide us in designing optimal IO solutions in the many-core era.
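
    For reference, a single-process write-bandwidth probe of the kind used in such node-level IO comparisons can be sketched as below; the file path, transfer size, and buffer sizes are illustrative, and on a real Lustre filesystem one would additionally control striping, caching, and node placement.

    ```python
    # Minimal single-process write-bandwidth probe; results are only meaningful
    # when run on the filesystem and node type under study.
    import os, time

    def write_bandwidth(path, total_bytes=1 << 28, buffer_size=1 << 22):
        """Write total_bytes in buffer_size chunks and return MiB/s."""
        block = b"\0" * buffer_size
        start = time.perf_counter()
        with open(path, "wb", buffering=0) as f:
            written = 0
            while written < total_bytes:
                f.write(block)
                written += buffer_size
            f.flush()
            os.fsync(f.fileno())      # include flush-to-storage in the timing
        elapsed = time.perf_counter() - start
        os.remove(path)
        return (total_bytes / (1 << 20)) / elapsed

    if __name__ == "__main__":
        for buf in (1 << 16, 1 << 20, 1 << 22, 1 << 24):   # 64 KiB .. 16 MiB buffers
            mbps = write_bandwidth("/tmp/io_probe.dat", buffer_size=buf)
            print("buffer %8d B : %8.1f MiB/s" % (buf, mbps))
    ```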

  7. From storage to manipulation: How the neural correlates of verbal working memory reflect varying demands on inner speech.

    PubMed

    Marvel, Cherie L; Desmond, John E

    2012-01-01

    The ability to store and manipulate online information may be enhanced by an inner speech mechanism that draws upon motor brain regions. Neural correlates of this mechanism were examined using event-related functional magnetic resonance imaging (fMRI). Sixteen participants completed two conditions of a verbal working memory task. In both conditions, participants viewed one or two target letters. In the "storage" condition, these targets were held in mind across a delay. Then a probe letter was presented, and participants indicated by button press whether the probe matched the targets. In the "manipulation" condition, participants identified new targets by thinking two alphabetical letters forward of each original target (e.g., f→h). Participants subsequently indicated whether the probe matched the newly derived targets. Brain activity during the storage and manipulation conditions was examined specifically during the delay phase in order to directly compare manipulation versus storage processes. Activations that were common to both conditions, yet disproportionately greater with manipulation, were observed in the left inferior frontal cortex, premotor cortex, and anterior insula, bilaterally in the parietal lobes and superior cerebellum, and in the right inferior cerebellum. This network shares substrates with overt speech and may represent an inner speech pathway that increases activity with greater working memory demands. Additionally, an inverse correlation was observed between manipulation-related brain activity (on correct trials) and test accuracy in the left premotor cortex, anterior insula, and bilateral superior cerebellum. This inverse relationship may represent intensification of inner speech as one struggles to maintain performance levels. © 2011 Elsevier Inc. All rights reserved.

  8. Development of climate data storage and processing model

    NASA Astrophysics Data System (ADS)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on the socio-economic processes on local and regional scales. The model is based on a «shared nothing» distributed computing architecture and assumes using a computing network where each computing node is independent and self-sufficient. Each node holds dedicated software for the processing and visualization of geospatial data, providing programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data is represented by collections of netCDF files stored in a hierarchy of directories in the framework of a file system. To speed up data reading and processing, three approaches are proposed: a precalculation of intermediate products, a distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of the previously obtained products. For a fast search and retrieval of the required data, according to the data storage and processing model, a metadata database is developed. It contains descriptions of the space-time features of the datasets available for processing, their locations, as well as descriptions and run options of the software components for data analysis and visualization. The model and the metadata database together will provide a reliable technological basis for development of a high-performance virtual research environment for climatic and environmental monitoring.
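
    A metadata catalogue of the kind described can be sketched as an ordinary relational table that maps variables and their space-time extents to the node and path holding the netCDF collection; the schema, field names, and query below are illustrative, not the project's actual database.

    ```python
    # Illustrative metadata catalogue: which node/path holds which variable and extent.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE datasets (
            collection TEXT, variable TEXT,
            t_start TEXT, t_end TEXT,
            lon_min REAL, lon_max REAL, lat_min REAL, lat_max REAL,
            node TEXT, path TEXT
        )
    """)
    conn.execute(
        "INSERT INTO datasets VALUES (?,?,?,?,?,?,?,?,?,?)",
        ("ERA-Interim", "t2m", "1979-01-01", "2015-12-31",
         -180.0, 180.0, -90.0, 90.0, "node03", "/data/era/t2m/*.nc"),
    )

    # Find every collection of netCDF files that can serve a 2 m temperature request
    # for a given region and period, so processing can be routed to the node that
    # already holds the data.
    rows = conn.execute("""
        SELECT node, path FROM datasets
        WHERE variable = 't2m'
          AND t_start <= '2010-06-01' AND t_end >= '2010-08-31'
          AND lon_min <= 60 AND lon_max >= 140
          AND lat_min <= 50 AND lat_max >= 75
    """).fetchall()
    print(rows)
    ```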

  9. Patterns of Storage, Use, and Disposal of Opioids Among Cancer Outpatients

    PubMed Central

    de la Cruz, Maxine; Rodriguez, Eden Mae; Thames, Jessica; Wu, Jimin; Chisholm, Gary; Liu, Diane; Frisbee-Hume, Susan; Yennurajalingam, Sriram; Hui, David; Cantu, Hilda; Marin, Alejandra; Gayle, Vicki; Shinn, Nancy; Xu, Angela; Williams, Janet; Bruera, Eduardo

    2014-01-01

    Purpose. Improper storage, use, and disposal of prescribed opioids can lead to diversion or accidental poisoning. Our objective was to determine the patterns of storage, utilization, and disposal of opioids among cancer outpatients. Patients and Methods. We surveyed 300 adult cancer outpatients receiving opioids in our supportive care center and collected information regarding opioid use, storage, and disposal, along with scores on the CAGE (cut down, annoyed, guilty, eye-opener) alcoholism screening questionnaire. Unsafe use was defined as sharing or losing opioids; unsafe storage was defined as storing opioids in plain sight. Results. The median age was 57 years. CAGE was positive in 58 of 300 patients (19%), and 26 (9%) had a history of illicit drug use. Fifty-six (19%) stored opioids in plain sight, 208 (69%) kept opioids hidden but unlocked, and only 28 (9%) locked their opioids. CAGE-positive patients (p = .007) and those with a history of illicit drug use (p = .0002) or smoking (p = .03) were more likely to lock their opioids. Seventy-eight (26%) reported unsafe use by sharing (9%) or losing (17%) their opioids. Patients who were never married or single (odds ratio: 2.92; 95% confidence interval: 1.48–5.77; p = .006), were CAGE positive (40% vs. 21%; p = .003), or had a history of illicit drug use (42% vs. 23%; p = .031) were more likely to use opioids unsafely. Overall, 223 of 300 patients (74%) were unaware of proper opioid disposal methods, and 138 (46%) had unused opioids at home. Conclusion. A large proportion of cancer patients improperly and unsafely use, store, and dispose of opioids, highlighting the need for establishment of easily accessed patient education and drug take-back programs. PMID:24868100

  10. Secure Enclaves: An Isolation-centric Approach for Creating Secure High Performance Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system even when processing data at a lower security level and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state of the art in virtualization, reconfigurable network enclaving via Software Defined Networking (SDN), and storage architectures and bridging techniques for creating secure enclaves in HPC environments.

  11. Eighteen Years of Safe Storage and Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moren, Richard J.; Morton, M.

    2016-03-28

    The purpose of this paper is to share the status and condition of the six reactor buildings at the Hanford Site in Washington State that have been in SAFSTOR (safe storage) condition for between 4 and 18 years as of the summer of 2016.

  12. Frequency of unsafe storage, use, and disposal practices of opioids among cancer patients presenting to the emergency department.

    PubMed

    Silvestre, Julio; Reddy, Akhila; de la Cruz, Maxine; Wu, Jimin; Liu, Diane; Bruera, Eduardo; Todd, Knox H

    2017-12-01

    Approximately 75% of prescription opioid abusers obtain the drug from an acquaintance, which may be a consequence of improper opioid storage, use, disposal, and lack of patient education. We aimed to determine the opioid storage, use, and disposal patterns in patients presenting to the emergency department (ED) of a comprehensive cancer center. We surveyed 113 patients receiving opioids for at least 2 months upon presenting to the ED and collected information regarding opioid use, storage, and disposal. Unsafe storage was defined as storing opioids in plain sight, and unsafe use was defined as sharing or losing opioids. The median age was 53 years, 55% were female, 64% were white, and 86% had advanced cancer. Of those surveyed, 36% stored opioids in plain sight, 53% kept them hidden but unlocked, and only 15% locked their opioids. However, 73% agreed that they would use a lockbox if given one. Patients who reported that others had asked them for their pain medications (p = 0.004) and those who would use a lockbox if given one (p = 0.019) were more likely to keep them locked. Some 13 patients (12%) used opioids unsafely by either sharing (5%) or losing (8%) them. Patients who reported being prescribed more pain pills than required (p = 0.032) were more likely to practice unsafe use. Most (78%) were unaware of proper opioid disposal methods, 6% believed they were prescribed more medication than required, and 67% had unused opioids at home. Only 13% previously received education about safe disposal of opioids. Overall, 77% (87) of patients reported unsafe storage, unsafe use, or possessed unused opioids at home. Many cancer patients presenting to the ED improperly and unsafely store, use, or dispose of opioids, thus highlighting a need to investigate the impact of patient education on such practices.

  13. Decibel: The Relational Dataset Branching System

    PubMed Central

    Maddox, Michael; Goehring, David; Elmore, Aaron J.; Madden, Samuel; Parameswaran, Aditya; Deshpande, Amol

    2017-01-01

    As scientific endeavors and data analysis become increasingly collaborative, there is a need for data management systems that natively support the versioning or branching of datasets to enable concurrent analysis, cleaning, integration, manipulation, or curation of data across teams of individuals. Common practice for sharing and collaborating on datasets involves creating or storing multiple copies of the dataset, one for each stage of analysis, with no provenance information tracking the relationships between these datasets. This results not only in wasted storage, but also makes it challenging to track and integrate modifications made by different users to the same dataset. In this paper, we introduce the Relational Dataset Branching System, Decibel, a new relational storage system with built-in version control designed to address these shortcomings. We present our initial design for Decibel and provide a thorough evaluation of three versioned storage engine designs that focus on efficient query processing with minimal storage overhead. We also develop an exhaustive benchmark to enable the rigorous testing of these and future versioned storage engine designs. PMID:28149668
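
    The core idea of branch-aware versioned storage can be illustrated with a toy record store in which each branch tracks the set of record identifiers visible to it, inheriting from its parent at branch time; this is only a conceptual sketch, not Decibel's actual versioned storage engines.

    ```python
    # Toy versioned record store: branches share history but diverge after creation.
    class VersionedStore:
        def __init__(self):
            self.records = {}                   # record id -> payload (shared across branches)
            self.branches = {"master": set()}   # branch -> ids visible on that branch
            self.next_id = 0

        def branch(self, parent, name):
            # A new branch starts with a snapshot of the parent's visible records.
            self.branches[name] = set(self.branches[parent])

        def insert(self, branch, payload):
            rid = self.next_id
            self.next_id += 1
            self.records[rid] = payload
            self.branches[branch].add(rid)
            return rid

        def delete(self, branch, rid):
            self.branches[branch].discard(rid)  # only hides the record on this branch

        def scan(self, branch):
            return [self.records[r] for r in sorted(self.branches[branch])]

    store = VersionedStore()
    raw = store.insert("master", {"sample": 1, "label": "raw"})
    store.branch("master", "cleaning")
    store.delete("cleaning", raw)                             # cleaned branch drops the raw row
    store.insert("cleaning", {"sample": 1, "label": "clean"})
    print(store.scan("master"))    # master still sees the original record
    print(store.scan("cleaning"))  # cleaning branch sees only the cleaned record
    ```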

  14. The development of large-scale de-identified biomedical databases in the age of genomics-principles and challenges.

    PubMed

    Dankar, Fida K; Ptitsyn, Andrey; Dankar, Samar K

    2018-04-10

    Contemporary biomedical databases include a wide range of information types from various observational and instrumental sources. Among the most important features that unite biomedical databases across the field are high volume of information and high potential to cause damage through data corruption, loss of performance, and loss of patient privacy. Thus, issues of data governance and privacy protection are essential for the construction of data depositories for biomedical research and healthcare. In this paper, we discuss various challenges of data governance in the context of population genome projects. The various challenges along with best practices and current research efforts are discussed through the steps of data collection, storage, sharing, analysis, and knowledge dissemination.

  15. A plasma membrane sucrose-binding protein that mediates sucrose uptake shares structural and sequence similarity with seed storage proteins but remains functionally distinct.

    PubMed

    Overvoorde, P J; Chao, W S; Grimes, H D

    1997-06-20

    Photoaffinity labeling of a soybean cotyledon membrane fraction identified a sucrose-binding protein (SBP). Subsequent studies have shown that the SBP is a unique plasma membrane protein that mediates the linear uptake of sucrose in the presence of up to 30 mM external sucrose when ectopically expressed in yeast. Analysis of the SBP-deduced amino acid sequence indicates it lacks sequence similarity with other known transport proteins. Data presented here, however, indicate that the SBP shares significant sequence and structural homology with the vicilin-like seed storage proteins that organize into homotrimers. These similarities include a repeated sequence that forms the basis of the reiterated domain structure characteristic of the vicilin-like protein family. In addition, analytical ultracentrifugation and nonreducing SDS-polyacrylamide gel electrophoresis demonstrate that the SBP appears to be organized into oligomeric complexes with a Mr indicative of the existence of SBP homotrimers and homodimers. The structural similarity shared by the SBP and vicilin-like proteins provides a novel framework to explore the mechanistic basis of SBP-mediated sucrose uptake. Expression of the maize Glb protein (a vicilin-like protein closely related to the SBP) in yeast demonstrates that a closely related vicilin-like protein is unable to mediate sucrose uptake. Thus, despite sequence and structural similarities shared by the SBP and the vicilin-like protein family, the SBP is functionally divergent from other members of this group.

  16. e!DAL - a framework to store, share and publish research data

    PubMed Central

    2014-01-01

    Background The life-science community faces a major challenge in handling “big data”, highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the “big data life cycle”. The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. Results e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed “out-of-the-box” as an on-site repository. Conclusions e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK’s role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de. PMID:24958009

  17. e!DAL--a framework to store, share and publish research data.

    PubMed

    Arend, Daniel; Lange, Matthias; Chen, Jinbo; Colmsee, Christian; Flemming, Steffen; Hecht, Denny; Scholz, Uwe

    2014-06-24

    The life-science community faces a major challenge in handling "big data", highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the "big data life cycle". The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed "out-of-the-box" as an on-site repository. e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK's role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de.

  18. The challenges of archiving networked-based multimedia performances (Performance cryogenics)

    NASA Astrophysics Data System (ADS)

    Cohen, Elizabeth; Cooperstock, Jeremy; Kyriakakis, Chris

    2002-11-01

    Music archives and libraries have cultural preservation at the core of their charters. New forms of art often race ahead of the preservation infrastructure. The ability to stream multiple synchronized ultra-low latency streams of audio and video across a continent for a distributed interactive performance such as music and dance with high-definition video and multichannel audio raises a series of challenges for the architects of digital libraries and those responsible for cultural preservation. The archiving of such performances presents numerous challenges that go beyond simply recording each stream. Case studies of storage and subsequent retrieval issues for Internet2 collaborative performances are discussed. The development of shared reality and immersive environments generate issues about, What constitutes an archived performance that occurs across a network (in multiple spaces over time)? What are the families of necessary metadata to reconstruct this virtual world in another venue or era? For example, if the network exhibited changes in latency the performers most likely adapted. In a future recreation, the latency will most likely be completely different. We discuss the parameters of immersive environment acquisition and rendering, network architectures, software architecture, musical/choreographic scores, and environmental acoustics that must be considered to address this problem.

  19. Flexible operation of thermal plants with integrated energy storage technologies

    NASA Astrophysics Data System (ADS)

    Koytsoumpa, Efthymia Ioanna; Bergins, Christian; Kakaras, Emmanouil

    2017-08-01

    The energy system in the EU requires, today as well as towards 2030 and 2050, significant amounts of thermal power plant capacity in combination with the continuously increasing share of Renewable Energy Sources (RES) to assure grid stability, secure the electricity supply, and provide heat. The operation of the conventional fleet should be harmonised with the fluctuating renewable energy sources and their intermittent electricity production. Flexible thermal plants should be able to reach their lowest minimum load while keeping the efficiency drop moderate, and to increase their ramp-up and ramp-down rates. A novel approach is presented for integrating energy storage as an evolutionary measure to overcome many of the challenges that arise from increasing RES and balancing them with thermal power. Energy storage technologies such as Power to Fuel, Liquid Air Energy Storage and Batteries are investigated in conjunction with flexible power plants.

  20. Low delay and area efficient soft error correction in arbitration logic

    DOEpatents

    Sugawara, Yutaka

    2013-09-10

    There is provided an arbitration logic device for controlling an access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from a plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.
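
    In software terms, the idea of selecting a winner from stored requestor information while error detection proceeds separately (rather than on the critical selection path) can be sketched as follows; the priority rule and the parity-based check are illustrative and are not the patented circuit.

    ```python
    # Toy arbiter: pick the highest-priority pending requestor from stored state;
    # a separate parity check flags corrupted entries without delaying selection.
    def parity(bits):
        return sum(bits) % 2

    # Stored requestor info: (requestor id, priority, payload bits, stored parity bit).
    requests = [
        (0, 3, [1, 0, 1], 0),
        (1, 7, [1, 1, 0], 0),
        (2, 5, [0, 1, 1], 1),   # parity bit does not match -> simulated soft error
    ]

    # Winner selection uses only the stored priorities (no error check on this path).
    winner = max(requests, key=lambda r: r[1])

    # Error detection runs independently over the same stored entries.
    corrupted = [rid for rid, _, bits, p in requests if parity(bits) != p]

    print("granted requestor:", winner[0])
    print("entries with detected soft errors:", corrupted)
    ```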

  1. Thermal storage for industrial process and reject heat

    NASA Technical Reports Server (NTRS)

    Duscha, R. A.; Masica, W. J.

    1978-01-01

    Industrial production uses about 40 percent of the total energy consumed in the United States. The major share of this is derived from fossil fuel. Potential savings of scarce fuel is possible through the use of thermal energy storage (TES) of reject or process heat for subsequent use. Three especially significant industries where high-temperature TES appears attractive (paper and pulp, iron and steel, and cement) are discussed. Potential annual fuel savings, with large-scale implementation of near-term TES systems for these three industries, are nearly 9,000,000 bbl of oil.

  2. Emerging Security Mechanisms for Medical Cyber Physical Systems.

    PubMed

    Kocabas, Ovunc; Soyata, Tolga; Aktas, Mehmet K

    2016-01-01

    The following decade will witness a surge in remote health-monitoring systems that are based on body-worn monitoring devices. These Medical Cyber Physical Systems (MCPS) will be capable of transmitting the acquired data to a private or public cloud for storage and processing. Machine learning algorithms running in the cloud and processing this data can provide decision support to healthcare professionals. There is no doubt that the security and privacy of the medical data is one of the most important concerns in designing an MCPS. In this paper, we depict the general architecture of an MCPS consisting of four layers: data acquisition, data aggregation, cloud processing, and action. Due to the differences in hardware and communication capabilities of each layer, different encryption schemes must be used to guarantee data privacy within that layer. We survey conventional and emerging encryption schemes based on their ability to provide secure storage, data sharing, and secure computation. Our detailed experimental evaluation of each scheme shows that while the emerging encryption schemes enable exciting new features such as secure sharing and secure computation, they introduce several orders-of-magnitude computational and storage overhead. We conclude our paper by outlining future research directions to improve the usability of the emerging encryption schemes in an MCPS.

  3. The SERI solar energy storage program

    NASA Technical Reports Server (NTRS)

    Copeland, R. J.; Wright, J. D.; Wyman, C. E.

    1980-01-01

    The SERI solar energy storage program provides research on advanced technologies, systems analyses, and assessments of thermal energy storage for solar applications in support of the Thermal and Chemical Energy Storage Program of the DOE Division of Energy Storage Systems. Currently, research is in progress on direct contact latent heat storage and thermochemical energy storage and transport. Systems analyses are being performed of thermal energy storage for solar thermal applications, and surveys and assessments are being prepared of thermal energy storage in solar applications. A ranking methodology for comparing thermal storage systems (performance and cost) is presented. Research in latent heat storage and thermochemical storage and transport is reported.

  4. “It’s my blood”: ethical complexities in the use, storage and export of biological samples: perspectives from South African research participants

    PubMed Central

    2014-01-01

    Background The use of biological samples in research raises a number of ethical issues in relation to consent, storage, export, benefit sharing and re-use of samples. Participant perspectives have been explored in North America and Europe, with only a few studies reported in Africa. The amount of research being conducted in Africa is growing exponentially with volumes of biological samples being exported from the African continent. In order to investigate the perspectives of African research participants, we conducted a study at research sites in the Western Cape and Gauteng, South Africa. Methods Data were collected using a semi-structured questionnaire that captured both quantitative and qualitative information at 6 research sites in South Africa. Interviews were conducted in English and Afrikaans. Data were analysed both quantitatively and qualitatively. Results Our study indicates that while the majority of participants were supportive of providing samples for research, serious concerns were voiced about future use, benefit sharing and export of samples. While researchers view the provision of biosamples as a donation, participants believe that they still have ownership rights and are therefore in favour of benefit sharing. Almost half of the participants expressed a desire to be re-contacted for consent for future use of their samples. Interesting opinions were expressed with respect to export of samples. Conclusions Eliciting participant perspectives is an important part of community engagement in research involving biological sample collection, export, storage and future use. A tiered consent process appears to be more acceptable to participants in this study. Eliciting opinions of researchers and research ethics committee (REC) members would contribute multiple perspectives. Further research is required to interrogate the concept of ownership and the consent process in research involving biological samples. PMID:24447822

  5. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, by using standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck process. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree software will keep track of all files in the system, will automatically migrate the lesser used files to archive media, and will stage the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost the system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).
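
    The hierarchical migration behaviour described (frequently used files stay on magnetic disk, colder files move to optical or tape tiers) can be sketched with a simple age-based policy pass; the tier names and thresholds below are illustrative, not UniTree's actual policy engine.

    ```python
    # Toy hierarchical-storage migration pass: demote files by last-access age.
    import time
    from dataclasses import dataclass

    @dataclass
    class FileEntry:
        name: str
        size: int           # bytes
        last_access: float  # epoch seconds
        tier: str = "disk"

    TIERS = ["disk", "optical", "tape"]                        # fast/expensive -> slow/cheap
    AGE_LIMITS = {"disk": 7 * 86400, "optical": 90 * 86400}    # demote when older than this

    def migrate(catalog, now=None):
        now = now or time.time()
        for f in catalog:
            while f.tier in AGE_LIMITS and now - f.last_access > AGE_LIMITS[f.tier]:
                f.tier = TIERS[TIERS.index(f.tier) + 1]        # move one tier down
        return catalog

    catalog = [
        FileEntry("today.dat", 1 << 20, time.time()),
        FileEntry("last_month.dat", 1 << 30, time.time() - 30 * 86400),
        FileEntry("two_years_ago.dat", 1 << 30, time.time() - 730 * 86400),
    ]
    for f in migrate(catalog):
        print(f.name, "->", f.tier)
    ```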

  6. An Efficient Searchable Encryption Against Keyword Guessing Attacks for Sharable Electronic Medical Records in Cloud-based System.

    PubMed

    Wu, Yilun; Lu, Xicheng; Su, Jinshu; Chen, Peixin

    2016-12-01

    Preserving the privacy of electronic medical records (EMRs) is extremely important, especially when medical systems adopt cloud services to store patients' electronic medical records. Considering both the privacy and the utilization of EMRs, some medical systems apply searchable encryption to encrypt EMRs and enable authorized users to search over these encrypted records. Since individuals would like to share their EMRs with multiple persons, designing an efficient searchable encryption scheme for sharable EMRs remains very challenging. In this paper, we propose a cost-efficient secure channel free searchable encryption (SCF-PEKS) scheme for sharable EMRs. Compared with existing SCF-PEKS solutions, our scheme reduces the storage overhead and achieves better computation performance. Moreover, our scheme can guard against keyword guessing attacks, which are neglected by most of the existing schemes. Finally, we implement both our scheme and a recent medical-based scheme to evaluate the performance. The evaluation results show that our scheme achieves much better performance than the latter for sharable EMRs.
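
    The general search flow over encrypted records via keyword trapdoors can be illustrated with an HMAC-derived index; this toy sketch is not the paper's SCF-PEKS construction (it uses a shared symmetric key rather than public-key operations) and is meant only to show how a server can match queries without seeing keywords.

    ```python
    # Toy symmetric searchable index: the server stores placeholder ciphertexts plus
    # keyed keyword tags; a query is an HMAC "trapdoor" that matches tags without
    # revealing the keyword itself. Not the SCF-PEKS scheme from the paper.
    import hmac, hashlib, os

    KEY = os.urandom(32)   # assumed to be shared between patient and authorized user

    def tag(keyword):
        return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

    # "Encrypted" EMRs (placeholder ciphertexts) with their keyword tags.
    server_index = [
        {"ciphertext": b"<enc record 1>", "tags": {tag("diabetes"), tag("insulin")}},
        {"ciphertext": b"<enc record 2>", "tags": {tag("hypertension")}},
    ]

    def search(trapdoor):
        # The server only compares opaque tags; it never sees plaintext keywords.
        return [r["ciphertext"] for r in server_index if trapdoor in r["tags"]]

    print(search(tag("insulin")))    # -> [b'<enc record 1>']
    print(search(tag("oncology")))   # -> []
    ```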

  7. The architecture of the High Performance Storage System (HPSS)

    NASA Technical Reports Server (NTRS)

    Teaff, Danny; Watson, Dick; Coyne, Bob

    1994-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  8. Where the Cloud Meets the Commons

    ERIC Educational Resources Information Center

    Ipri, Tom

    2011-01-01

    Changes presented by cloud computing--shared computing services, applications, and storage available to end users via the Internet--have the potential to seriously alter how libraries provide services, not only remotely, but also within the physical library, specifically concerning challenges facing the typical desktop computing experience.…

  9. 40 CFR 60.434 - Monitoring of operations and recordkeeping.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... recordkeeping. 60.434 Section 60.434 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...

  10. 40 CFR 60.434 - Monitoring of operations and recordkeeping.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... recordkeeping. 60.434 Section 60.434 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...

  11. Developing regionally specific grazing practices to promote production, profitability, and environmental quality

    USDA-ARS?s Scientific Manuscript database

    Rangelands are valued for their capacity to provide diverse suites of ecosystem services, from food production to carbon storage to biological diversity. Although rangelands worldwide share common characteristics, differences among biogeographic regions result in differences in the types of opportun...

  12. Low-Carbon Computing

    ERIC Educational Resources Information Center

    Hignite, Karla

    2009-01-01

    Green information technology (IT) is grabbing more mainstream headlines--and for good reason. Computing, data processing, and electronic file storage collectively account for a significant and growing share of energy consumption in the business world and on higher education campuses. With greater scrutiny of all activities that contribute to an…

  13. The lysosomal storage disease continuum with ageing-related neurodegenerative disease.

    PubMed

    Lloyd-Evans, Emyr; Haslett, Luke J

    2016-12-01

    Lysosomal storage diseases and diseases of ageing share many features both at the physiological level and with respect to the mechanisms that underlie disease pathogenesis. Although the exact pathophysiology is not exactly the same, it is astounding how many similar pathways are altered in all of these diseases. The aim of this review is to provide a summary of the shared disease mechanisms, outlining the similarities and differences and how genetics, insight into rare diseases and functional research has changed our perspective on the causes underlying common diseases of ageing. The lysosome should no longer be considered as just the stomach of the cell or as a suicide bag; it has an emerging role in cellular signalling, nutrient sensing and recycling. The lysosome is of fundamental importance in the pathophysiology of diseases of ageing, and by comparing against the LSDs we identify not only common pathways but also therapeutic targets, so that ultimately more effective treatments can be developed for all neurodegenerative diseases. Copyright © 2016. Published by Elsevier B.V.

  14. Dynamic provisioning of local and remote compute resources with OpenStack

    NASA Astrophysics Data System (ADS)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
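
    Provisioning a worker virtual machine on a private OpenStack cloud of this kind can be sketched with the openstacksdk Python client; the cloud name, image, flavor, and network names below are placeholders for site-specific values, and a production setup would add key pairs, security groups, and contextualization.

    ```python
    # Sketch: boot one worker VM on a private OpenStack cloud via openstacksdk.
    # Cloud/image/flavor/network names are illustrative and site-specific.
    import openstack

    conn = openstack.connect(cloud="ekp-private-cloud")   # entry defined in clouds.yaml

    image = conn.compute.find_image("hep-worker-image")
    flavor = conn.compute.find_flavor("m1.large")
    network = conn.network.find_network("vm-internal")

    server = conn.compute.create_server(
        name="mc-worker-001",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.name, "is", server.status)
    ```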

  15. [Study of sharing platform of web-based enhanced extracorporeal counterpulsation hemodynamic waveform data].

    PubMed

    Huang, Mingbo; Hu, Ding; Yu, Donglan; Zheng, Zhensheng; Wang, Kuijian

    2011-12-01

    Enhanced extracorporeal counterpulsation (EECP) information consists of both text and hemodynamic waveform data. At present, EECP text information has been successfully managed through the Web browser, while the management and sharing of hemodynamic waveform data through the Internet has not been solved yet. In order to manage EECP information completely, and based on an in-depth analysis of EECP hemodynamic waveform files in the Digital Imaging and Communications in Medicine (DICOM) format and their disadvantages for Internet sharing, we proposed the use of the extensible markup language (XML), currently the popular data exchange standard on the Internet, as the storage specification for the sharing of EECP waveform data. We then designed a web-based sharing system for EECP hemodynamic waveform data on the ASP.NET 2.0 platform. We also specifically introduce the four main system function modules and their implementation methods, including the DICOM-to-XML conversion module, the EECP waveform data management module, the EECP waveform retrieval and display module, and the security mechanism of the system.
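
    A DICOM-to-XML conversion step of the kind this module performs can be sketched with pydicom and the standard library; the element selection and XML layout are illustrative (not the paper's actual converter), and bulk pixel data is skipped so that only header and waveform metadata is serialized.

    ```python
    # Sketch: dump non-pixel DICOM header elements to a simple XML document.
    import pydicom
    import xml.etree.ElementTree as ET

    def dicom_to_xml(dicom_path, xml_path):
        ds = pydicom.dcmread(dicom_path)
        root = ET.Element("dicom")
        for elem in ds:
            if elem.keyword == "PixelData":      # keep bulk image data out of the XML
                continue
            node = ET.SubElement(root, "element",
                                 tag=str(elem.tag), keyword=elem.keyword or "unknown")
            node.text = str(elem.value)
        ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)

    # Example (hypothetical file names):
    # dicom_to_xml("eecp_waveform.dcm", "eecp_waveform.xml")
    ```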

  16. Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation

    PubMed Central

    Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi

    2016-01-01

    After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely. PMID:27792784
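
    The polynomial-interpolation machinery that these schemes build on is standard Shamir secret sharing; the sketch below shows a basic (t, n) split and reconstruction over a prime field, without the threshold-changing and collusion-resistance extensions proposed in the paper, and with an illustrative prime.

    ```python
    # Basic (t, n) Shamir secret sharing over GF(P); the polynomial-interpolation
    # core that threshold-changeable schemes extend. Requires Python 3.8+ for pow(x, -1, P).
    import random

    P = 2**127 - 1  # a Mersenne prime, large enough for toy secrets

    def split(secret, t, n):
        """Create n shares, any t of which reconstruct the secret."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 modulo P."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i == j:
                    continue
                num = num * (-xj) % P
                den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(123456789, t=3, n=5)
    print(reconstruct(shares[:3]))                         # any 3 shares -> 123456789
    print(reconstruct([shares[0], shares[2], shares[4]]))  # a different 3 shares work too
    ```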

  17. Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation.

    PubMed

    Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi

    2016-01-01

    After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t', n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely.

  18. Use of HSM with Relational Databases

    NASA Technical Reports Server (NTRS)

    Breeden, Randall; Burgess, John; Higdon, Dan

    1996-01-01

    Hierarchical storage management (HSM) systems have evolved to become a critical component of large information storage operations. They are built on the concept of using a hierarchy of storage technologies to provide a balance in performance and cost. In general, they migrate data from expensive high performance storage to inexpensive low performance storage based on frequency of use. The predominant usage characteristic is that frequency of use is reduced with age and in most cases quite rapidly. The result is that HSM provides an economical means for managing and storing massive volumes of data. Inherent in HSM systems is system managed storage, where the system performs most of the work with minimum operations personnel involvement. This automation is generally extended to include: backup and recovery, data duplexing to provide high availability, and catastrophic recovery through use of off-site storage.

  19. Certification of Completion of Level-2 Milestone 464: Complete Phase 1 Integration of Site-Wide Global Parallel File System (SWGPFS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidelberg, S T; Fitzgerald, K J; Richmond, G H

    2006-01-24

    There has been substantial development of the Lustre parallel filesystem prior to the configuration described below for this milestone. The initial Lustre filesystems that were deployed were directly connected to the cluster interconnect, i.e. Quadrics Elan3. That is, the clients, Object Storage Servers (OSSes), and Meta-data Servers (MDS) were all directly connected to the cluster's internal high-speed interconnect. This configuration serves a single cluster very well, but does not provide sharing of the filesystem among clusters. LLNL funded the development of high-efficiency "portals router" code by CFS (the company that develops Lustre) to enable us to move the Lustre servers to a GigE-connected network configuration, thus making it possible to connect to the servers from several clusters. With portals routing available, here is what changes: (1) another storage-only cluster is deployed to front the Lustre storage devices (these become the Lustre OSSes and MDS), (2) this "Lustre cluster" is attached via GigE connections to a large GigE switch/router cloud, (3) a small number of compute-cluster nodes are designated as "gateway" or "portal router" nodes, and (4) the portals router nodes are GigE-connected to the switch/router cloud. The Lustre configuration is then changed to reflect the new network paths. A typical example of this is a compute cluster and a related visualization cluster: the compute cluster produces the data (writes it to the Lustre filesystem), and the visualization cluster consumes some of the data (reads it from the Lustre filesystem). This process can be expanded by aggregating several collections of Lustre backend storage resources into one or more "centralized" Lustre filesystems, and then arranging to have several "client" clusters mount these centralized filesystems. The "client clusters" can be any combination of compute, visualization, archiving, or other types of cluster. This milestone demonstrates the operation and performance of a scaled-down version of such a large, centralized, shared Lustre filesystem concept.

  20. Cost and performance of thermal storage concepts in solar thermal systems, Phase 2-liquid metal receivers

    NASA Astrophysics Data System (ADS)

    McKenzie, A. W.

    Cost and performance of various thermal storage concepts in a liquid metal receiver solar thermal power system application have been evaluated. The objectives of this study are to provide consistently calculated cost and performance data for thermal storage concepts integrated into solar thermal systems. Five alternative storage concepts are evaluated for a 100-MW(e) liquid metal-cooled receiver solar thermal power system for 1, 6, and 15 hours of storage: sodium 2-tank (reference system), molten draw salt 2-tank, sand moving bed, air/rock, and latent heat (phase change) with tube-intensive heat exchange (HX). The results indicate that the all sodium 2-tank thermal storage concept is not cost-effective for storage in excess of 3 or 4 hours; the molten draw salt 2-tank storage concept provides significant cost savings over the reference sodium 2-tank concept; and the air/rock storage concept with pressurized sodium buffer tanks provides the lowest evaluated cost of all storage concepts considered above 6 hours of storage.

  1. Battery Energy Storage Systems to Mitigate the Variability of Photovoltaic Power Generation

    NASA Astrophysics Data System (ADS)

    Gurganus, Heath Alan

    Methods of generating renewable energy, such as solar photovoltaic (PV) cells and wind turbines, offer great promise in terms of a reduced carbon footprint and overall impact on the environment. However, these methods also share the attribute of being highly stochastic, meaning they are variable in a way that is difficult to forecast with sufficient accuracy. While solar power currently constitutes a small amount of generating potential in most regions, the cost of photovoltaics continues to decline and a trend has emerged to build larger PV plants than was once feasible. This has brought the matter of increased variability to the forefront of research in the industry. Energy storage has been proposed as a means of mitigating this increased variability, and thus reducing the need to utilize traditional spinning reserves, as well as offering auxiliary grid services such as peak-shifting and frequency control. This thesis addresses the feasibility of using electrochemical storage methods (i.e. batteries) to decrease the ramp rates of PV power plants. By building a simulation of a grid-connected PV array and a typical Battery Energy Storage System (BESS) in the NetLogo simulation environment, I have created a parameterized tool that can be tailored to describe almost any potential PV setup. This thesis describes the design and function of this model, and makes a case for the accuracy of its measurements by comparing its simulated output to that of well-documented real-world sites. Finally, a set of recommendations for the design and operational parameters of such a system are put forth based on the results of several experiments performed using this model.
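
    A ramp-rate limiting control loop of the kind simulated in such studies can be sketched in a few lines; the ramp limit, battery capacity, and PV profile below are illustrative, and efficiency losses and converter power limits are ignored.

    ```python
    # Toy ramp-rate smoothing: the battery absorbs or supplies the difference
    # between raw PV output and a ramp-limited grid injection.
    def smooth(pv_kw, ramp_limit_kw_per_step, capacity_kwh, step_h=1/60):
        soc = capacity_kwh / 2          # start at half charge (assumption)
        grid = pv_kw[0]
        out = []
        for p in pv_kw:
            # Clamp the change in grid injection to the allowed ramp per step.
            target = max(grid - ramp_limit_kw_per_step, min(grid + ramp_limit_kw_per_step, p))
            battery_kw = p - target     # positive = charging, negative = discharging
            # Respect state-of-charge limits; otherwise pass PV straight through.
            if not (0 <= soc + battery_kw * step_h <= capacity_kwh):
                battery_kw, target = 0.0, p
            soc += battery_kw * step_h
            grid = target
            out.append((target, battery_kw, soc))
        return out

    # One-minute PV profile with a sharp cloud-induced drop (kW).
    pv = [50, 50, 48, 10, 12, 45, 50]
    for grid_kw, batt_kw, soc in smooth(pv, ramp_limit_kw_per_step=5, capacity_kwh=20):
        print("grid %5.1f kW  battery %+6.1f kW  SOC %5.2f kWh" % (grid_kw, batt_kw, soc))
    ```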

  2. Data publication and sharing using the SciDrive service

    NASA Astrophysics Data System (ADS)

    Mishin, Dmitry; Medvedev, D.; Szalay, A. S.; Plante, R. L.

    2014-01-01

    Despite the progress in scientific data storage over recent years, the problem of a public data storage and sharing system for relatively small scientific datasets remains. These are the collections forming the “long tail” of the power-law distribution of dataset sizes. The aggregated size of the long-tail data is comparable to the size of all data collections from large archives, and the value of the data is significant. The SciDrive project's main goal is to provide the scientific community with a place to reliably and freely store such data and to provide access to it for the broad scientific community. The primary target audience of the project is the astronomy community, and it will be extended to other fields. We are aiming to create a simple way of publishing a dataset, which can then be shared with other people. The data owner controls the permissions to modify and access the data and can assign a group of users or open the access to everyone. The data contained in the dataset will be automatically recognized by a background process. Known data formats will be extracted according to the user's settings. Currently, tabular data can be automatically extracted to the user's MyDB table, where the user can run SQL queries against the dataset and merge it with other public CasJobs resources. Other data formats can be processed using a set of plugins that upload the data or metadata to user-defined side services. The current implementation targets some of the data formats commonly used by the astronomy community, including FITS, ASCII and Excel tables, TIFF images, and YT simulation data archives. Along with generic metadata, format-specific metadata is also processed. For example, basic information about celestial objects is extracted from FITS files and TIFF images, if present. A 100TB implementation has just been put into production at Johns Hopkins University. The system features a public data storage REST service supporting the VOSpace 2.0 and Dropbox protocols, an HTML5 web portal, a command-line client, and a standalone Java client to synchronize a local folder with the remote storage. We use the VAO SSO (Single Sign-On) service from NCSA for user authentication, which provides free registration for everyone.
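
    Format-specific metadata extraction of the kind described for FITS uploads can be sketched with astropy; the selected header keywords are illustrative and, as in the service, are only extracted when present in the file.

    ```python
    # Sketch: pull basic object/celestial metadata from a FITS header for indexing.
    from astropy.io import fits

    def fits_metadata(path):
        header = fits.getheader(path)
        wanted = ["OBJECT", "TELESCOP", "INSTRUME", "DATE-OBS", "RA", "DEC", "NAXIS1", "NAXIS2"]
        # Keep only the keywords this particular file actually provides.
        return {key: header[key] for key in wanted if key in header}

    # Example (hypothetical file name):
    # print(fits_metadata("survey_tile_0042.fits"))
    ```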

  3. Birds of a Feather - Developments towards shared, regional geological disposal in the EU?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codee, H.D.K.; Verhoef, E.V.; McCombie, Ch.

    2008-07-01

    Geological disposal is an essential component of the long-term management of spent fuel, high level and other long-lived radioactive waste. In the EU, all 25 member states generate radioactive waste. Of course, there are large differences in type and quantity between the member states, but all of them need a long-term solution. Even a country with only lightning rods with radium will need a long-term solution for the disposal. The 1600 year half-life of radium does not fit in a solution with a span of control of just a few hundred years. Implementation of a suitable deep repository may, however, be difficult or impossible for countries with small volumes of waste, because of the high costs involved. Will economy of scale force these birds of a feather to wait to flock together and share a repository? Implementing a small repository and operating it for very long times is very costly. There are past and current examples of countries being prepared to accept radioactive waste from others if a better environmental solution is thus achieved and if the arrangements are fair for all parties involved. The need for supranational surveillance also points to shared solutions. Although the European Parliament and the Commission have both supported the concept of shared regional repositories in Europe, (national) political and societal constraints have hampered the realization of such facilities up to now. The first step in this staged process was the EC funded project, SAPIERR I. The project (2003 to 2005) studied the feasibility of shared regional storage facilities and geological repositories, for use by European countries. It showed that, if shared regional repositories are to be implemented even some decades ahead, efforts must already be increased now. The next step in the process is to develop a practical implementation strategy and organizational structures to work on shared EU radioactive waste storage and disposal activities. This is addressed in the EC funded project SAPIERR II (2006-2008). The paper gives an update of the SAPIERR II project and describes the progress achieved. (authors)

  4. A Distributed Multi-Agent System for Collaborative Information Management and Learning

    NASA Technical Reports Server (NTRS)

    Chen, James R.; Wolfe, Shawn R.; Wragg, Stephen D.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this paper, we present DIAMS, a system of distributed, collaborative agents to help users access, manage, share and exchange information. A DIAMS personal agent helps its owner find information most relevant to current needs. It provides tools and utilities for users to manage their information repositories with dynamic organization and virtual views. Flexible hierarchical display is integrated with indexed query search to support effective information access. Automatic indexing methods are employed to support user queries and communication between agents. Contents of a repository are kept in object-oriented storage to facilitate information sharing. Collaboration between users is aided by easy sharing utilities as well as automated information exchange. Matchmaker agents are designed to establish connections between users with similar interests and expertise. DIAMS agents provide needed services for users to share and learn information from one another on the World Wide Web.

  5. Facilitating a culture of responsible and effective sharing of cancer genome data.

    PubMed

    Siu, Lillian L; Lawler, Mark; Haussler, David; Knoppers, Bartha Maria; Lewin, Jeremy; Vis, Daniel J; Liao, Rachel G; Andre, Fabrice; Banks, Ian; Barrett, J Carl; Caldas, Carlos; Camargo, Anamaria Aranha; Fitzgerald, Rebecca C; Mao, Mao; Mattison, John E; Pao, William; Sellers, William R; Sullivan, Patrick; Teh, Bin Tean; Ward, Robyn L; ZenKlusen, Jean Claude; Sawyers, Charles L; Voest, Emile E

    2016-05-05

    Rapid and affordable tumor molecular profiling has led to an explosion of clinical and genomic data poised to enhance the diagnosis, prognostication and treatment of cancer. A critical point has now been reached at which the analysis and storage of annotated clinical and genomic information in unconnected silos will stall the advancement of precision cancer care. Information systems must be harmonized to overcome the multiple technical and logistical barriers to data sharing. Against this backdrop, the Global Alliance for Genomics and Health (GA4GH) was established in 2013 to create a common framework that enables responsible, voluntary and secure sharing of clinical and genomic data. This Perspective from the GA4GH Clinical Working Group Cancer Task Team highlights the data-aggregation challenges faced by the field, suggests potential collaborative solutions and describes how GA4GH can catalyze a harmonized data-sharing culture.

  6. From Rosalind Franklin to Barack Obama: Data Sharing Challenges and Solutions in Genomics and Personalised Medicine

    PubMed Central

    Lawler, Mark; Maughan, Tim

    2017-01-01

    The collection, storage and use of genomic and clinical data from patients and healthy individuals is a key component of personalised medicine enterprises such as the Precision Medicine Initiative, the Cancer Moonshot and the 100,000 Genomes Project. In order to maximise the value of this data, it is important to embed a culture within the scientific, medical and patient communities that supports the appropriate sharing of genomic and clinical information. However, this aspiration raises a number of ethical, legal and regulatory challenges that need to be addressed. The Global Alliance for Genomics and Health, a worldwide coalition of researchers, healthcare professionals, patients and industry partners, is developing innovative solutions to support the responsible and effective sharing of genomic and clinical data. This article identifies the challenges that a data sharing culture poses and highlights a series of practical solutions that will benefit patients, researchers and society. PMID:28517986

  7. From Rosalind Franklin to Barack Obama: Data Sharing Challenges and Solutions in Genomics and Personalised Medicine.

    PubMed

    Lawler, Mark; Maughan, Tim

    2017-04-01

    The collection, storage and use of genomic and clinical data from patients and healthy individuals is a key component of personalised medicine enterprises such as the Precision Medicine Initiative, the Cancer Moonshot and the 100,000 Genomes Project. In order to maximise the value of this data, it is important to embed a culture within the scientific, medical and patient communities that supports the appropriate sharing of genomic and clinical information. However, this aspiration raises a number of ethical, legal and regulatory challenges that need to be addressed. The Global Alliance for Genomics and Health, a worldwide coalition of researchers, healthcare professionals, patients and industry partners, is developing innovative solutions to support the responsible and effective sharing of genomic and clinical data. This article identifies the challenges that a data sharing culture poses and highlights a series of practical solutions that will benefit patients, researchers and society.

  8. Teleradiology mobile internet system with a new information security solution

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kusumoto, Masahiko; Kaneko, Masahiro; Moriyama, Noriyuki

    2014-03-01

    We have developed an external storage system that uses a secret sharing scheme and tokenization for regional medical cooperation, PHR services and information preservation. The use of mobile devices such as smartphones and tablets will accelerate PHR services, but it exposes confidential medical information to the risk of damage and interception. In this work we measured the transfer rates for sending and receiving data between a PACS and the external storage system over the Internet. The external storage system consists of data centers in Okinawa, Osaka, Sapporo and Tokyo that hold the data as secret shares. The PACS continuously transmitted 382 CT images, with a total size of about 200 MB, to the external data centers; the total transmission time was about 250 seconds. Because preservation relies on the secret sharing scheme, security is strong, but the transfer time is excessive. In our method, therefore, the DICOM data are anonymized by masking the header information. The anonymized DICOM data are preserved in the database in the hospital, while the header information, which contains personal information, is divided into two or more shares by the secret sharing scheme and preserved at two or more external data centers. The token that links the anonymized DICOM data to the externally preserved header information is strictly kept in a token server. The header information containing the patient's personal information amounts to only about 2% of the entire DICOM data, and its total transmission time was about 5 seconds. Other common solutions that protect computer communication networks from attacks are classified as cryptographic techniques or authentication techniques. An individual number IC card is linked to the electronic certification authority of the web medical image conference system and is issued only to persons authorized to operate that system.
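
    A minimal sketch of the header-splitting idea, assuming a toy XOR scheme in which every share is required (unlike the threshold secret sharing scheme actually used across the Okinawa, Osaka, Sapporo and Tokyo data centres); the header string, share count and function names are illustrative.

```python
import secrets

def split_secret(data: bytes, n_shares: int) -> list[bytes]:
    """Split `data` into n shares; XOR of all shares recovers it (all-or-nothing)."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n_shares - 1)]
    last = bytearray(data)
    for share in shares:
        for i, b in enumerate(share):
            last[i] ^= b
    shares.append(bytes(last))
    return shares

def combine_shares(shares: list[bytes]) -> bytes:
    """XOR all shares together to reconstruct the original bytes."""
    out = bytearray(len(shares[0]))
    for share in shares:
        for i, b in enumerate(share):
            out[i] ^= b
    return bytes(out)

if __name__ == "__main__":
    header = b"PatientName=TARO YAMADA|PatientID=0001"  # illustrative header fields
    shares = split_secret(header, 4)                    # e.g. one share per data centre
    assert combine_shares(shares) == header
    # Any incomplete subset looks like random noise (with overwhelming probability).
    assert combine_shares(shares[:3]) != header
```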

  9. Beating the tyranny of scale with a private cloud configured for Big Data

    NASA Astrophysics Data System (ADS)

    Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag

    2015-04-01

    The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks, and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware is a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment, ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively, even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. There are some limitations of the JASMIN environment: the high performance disk environment is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.
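
    As a rough illustration of a data-management task like the checksumming of one hundred million files mentioned above, the sketch below hashes files in parallel with Python's multiprocessing; the archive path and pool size are placeholders, and a production run on a parallel filesystem would rely on the centre's own tooling.

```python
import hashlib
from multiprocessing import Pool
from pathlib import Path

def sha256_of(path: str) -> tuple[str, str]:
    """Stream the file in 1 MiB chunks so memory use stays flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

if __name__ == "__main__":
    # Placeholder archive root; in practice the file list would come from a catalogue.
    files = [str(p) for p in Path("/archive/data").rglob("*") if p.is_file()]
    with Pool(processes=16) as pool:  # size the pool to the node, not the file count
        for path, digest in pool.imap_unordered(sha256_of, files, chunksize=64):
            print(digest, path)
```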

  10. CARGO: effective format-free compressed storage of genomic information

    PubMed Central

    Roguski, Łukasz; Ribeca, Paolo

    2016-01-01

    The recent super-exponential growth in the amount of sequencing data generated worldwide has brought techniques for compressed storage into focus. Most available solutions, however, are strictly tied to specific bioinformatics formats, sometimes inheriting from them suboptimal design choices; this hinders flexible and effective data sharing. Here, we present CARGO (Compressed ARchiving for GenOmics), a high-level framework to automatically generate software systems optimized for the compressed storage of arbitrary types of large genomic data collections. Straightforward applications of our approach to FASTQ and SAM archives require a few lines of code, produce solutions that match and sometimes outperform specialized format-tailored compressors and scale well to multi-TB datasets. All CARGO software components can be freely downloaded for academic and non-commercial use from http://bio-cargo.sourceforge.net. PMID:27131376

  11. Metabolic pathways in tropical dicotyledonous albuminous seeds: Coffea arabica as a case study

    PubMed Central

    Joët, Thierry; Laffargue, Andréina; Salmona, Jordi; Doulbeau, Sylvie; Descroix, Frédéric; Bertrand, Benoît; de Kochko, Alexandre; Dussert, Stéphane

    2009-01-01

    The genomic era facilitates the understanding of how transcriptional networks are interconnected to program seed development and filling. However, to date, little information is available regarding dicot seeds with a transient perisperm and a persistent, copious endosperm. Coffea arabica is the subject of increasing genomic research and is a model for nonorthodox albuminous dicot seeds of tropical origin. The aim of this study was to reconstruct the metabolic pathways involved in the biosynthesis of the main coffee seed storage compounds, namely cell wall polysaccharides, triacylglycerols, sucrose, and chlorogenic acids. For this purpose, we integrated transcriptomic and metabolite analyses, combining real-time RT-PCR performed on 137 selected genes (of which 79 were uncharacterized in Coffea) and metabolite profiling. Our map-drawing approach derived from model plants enabled us to propose a rationale for the peculiar traits of the coffee endosperm, such as its unusual fatty acid composition, remarkable accumulation of chlorogenic acid and cell wall polysaccharides. Comparison with the developmental features of exalbuminous seeds described in the literature revealed that the two seed types share important regulatory mechanisms for reserve biosynthesis, independent of the origin and ploidy level of the storage tissue. PMID:19207685

  12. A high reliability battery management system

    NASA Technical Reports Server (NTRS)

    Moody, M. H.

    1986-01-01

    Over a period of some 5 years Canadian Astronautics Limited (CAL) has developed a system to autonomously manage, and thus prolong the life of, secondary storage batteries. During the development, the system was aimed at the space vehicle application using nickel cadmium batteries, but is expected to be able to enhance the life and performance of any rechargeable electrochemical couple. The system handles the cells of a battery individually and thus avoids the problems of over- and under-drive that inevitably occur in a battery of cells managed by an averaging system. This individual handling also allows cells to be totally bypassed in the event of failure, thus avoiding the losses associated with low capacity, partial short circuit, and the catastrophe of open circuit. The system has an optional capability of managing redundant batteries simultaneously, adding the advantage of on-line reconditioning of one battery, while the other maintains the energy storage capability of the overall system. As developed, the system contains a dedicated, redundant, microprocessor, but the capability exists to have this computing capability time shared, or remote, and operating through a data link. As adjuncts to the basic management system CAL has developed high efficiency, polyphase, power regulators for charge and discharge power conditioning.

  13. Power Management Based Current Control Technique for Photovoltaic-Battery Assisted Wind-Hydro Hybrid System

    NASA Astrophysics Data System (ADS)

    Ram Prabhakar, J.; Ragavan, K.

    2013-07-01

    This article proposes a new power management based current control strategy for an integrated wind-solar-hydro system equipped with a battery storage mechanism. In this control technique, an indirect estimation of the load current is made through an energy balance model, DC-link voltage control and droop control. The system features a simpler energy management strategy and requires only a few power electronic converters, thereby minimizing the cost of the system. The generation-demand (G-D) management diagram is formulated based on the stochastic weather conditions and demand, which helps moderate the gap between the two. The features of the management strategy deploying the energy balance model include (1) regulating the DC-link voltage within specified tolerances, (2) isolated operation without relying on an external electric power transmission network, (3) indirect current control of the hydro turbine driven induction generator and (4) seamless transition between grid-connected and off-grid operation modes. Furthermore, structuring the hybrid system with an appropriate selection of control variables enables power sharing among the energy conversion systems and the battery storage mechanism. By addressing these intricacies, it is viable to regulate the frequency and voltage of the remote network at the load end. The performance of the proposed composite scheme is demonstrated through time-domain simulation in the MATLAB/Simulink environment.

  14. Research and implementation on improving I/O performance of streaming media storage system

    NASA Astrophysics Data System (ADS)

    Lu, Zheng-wu; Wang, Yu-de; Jiang, Guo-song

    2008-12-01

    In this paper, we study the special requirements of a particular storage system, the streaming media server, and propose a solution to improve the I/O performance of a RAID storage system that is suitable for streaming media applications. A streaming media storage subsystem includes the I/O interfaces, RAID arrays, I/O scheduling and device drivers. The solution is implemented on top of the storage subsystem I/O interface. The storage subsystem is the performance bottleneck of a streaming media system, and the I/O interface directly affects the performance of the storage subsystem. According to theoretical analysis, a 64 KB block size is most appropriate for streaming media applications. We carried out detailed experiments and verified that the proper block size is indeed 64 KB, in accordance with our analysis. The experimental results also show that by using a DMA controller, efficient memory management technology and a mailbox interface design mechanism, the streaming media storage system achieves high-speed data throughput.
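
    The 64 KB result can be sanity-checked with a simple benchmark that reads a test file sequentially at several block sizes. This is a minimal sketch, not the paper's DMA/mailbox implementation; the test file path is a placeholder and, on a real system, the page cache should be dropped between runs so repeated reads are not served from memory.

```python
import os
import time

def read_throughput(path: str, block_size: int) -> float:
    """Sequentially read `path` with the given block size; return MB/s."""
    total = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        while True:
            buf = os.read(fd, block_size)
            if not buf:
                break
            total += len(buf)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return total / elapsed / 1e6

if __name__ == "__main__":
    test_file = "/data/stream_test.bin"  # placeholder test file
    for kb in (4, 16, 64, 256, 1024):
        print(f"{kb:5d} KB blocks: {read_throughput(test_file, kb * 1024):8.1f} MB/s")
```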

  15. Novel Control Strategy for Multiple Run-of-the-River Hydro Power Plants to Provide Grid Ancillary Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob

    Hydropower plant (HPP) generation comprises a considerable portion of bulk electricity generation and is delivered with a low-carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include wind and solar. Increasing penetration levels of wind and solar lead to a lower inertia on the electric grid, which poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments of renewable energy resources on electric grids. If integrated with scalable, multi-time-step energy storage so that the total output can be controlled, multiple run-of-the-river (ROR) HPPs can be deployed. Although the size of a single energy storage system is much smaller than that of a typical reservoir, the ratings of storages and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. This paper proposes cohesively managing multiple sets of energy storage systems distributed in different locations. This paper also describes the challenges associated with ROR HPP system architecture and operation.

  16. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  17. Cyberinfrastructure for Open Science at the Montreal Neurological Institute

    PubMed Central

    Das, Samir; Glatard, Tristan; Rogers, Christine; Saigle, John; Paiva, Santiago; MacIntyre, Leigh; Safi-Harab, Mouna; Rousseau, Marc-Etienne; Stirling, Jordan; Khalili-Mahani, Najmeh; MacFarlane, David; Kostopoulos, Penelope; Rioux, Pierre; Madjar, Cecile; Lecours-Boucher, Xavier; Vanamala, Sandeep; Adalat, Reza; Mohaddes, Zia; Fonov, Vladimir S.; Milot, Sylvain; Leppert, Ilana; Degroot, Clotilde; Durcan, Thomas M.; Campbell, Tara; Moreau, Jeremy; Dagher, Alain; Collins, D. Louis; Karamchandani, Jason; Bar-Or, Amit; Fon, Edward A.; Hoge, Rick; Baillet, Sylvain; Rouleau, Guy; Evans, Alan C.

    2017-01-01

    Data sharing is becoming more of a requirement as technologies mature and as global research and communications diversify. As a result, researchers are looking for practical solutions, not only to enhance scientific collaborations, but also to acquire larger amounts of data, and to access specialized datasets. In many cases, the realities of data acquisition present a significant burden, therefore gaining access to public datasets allows for more robust analyses and broadly enriched data exploration. To answer this demand, the Montreal Neurological Institute has announced its commitment to Open Science, harnessing the power of making both clinical and research data available to the world (Owens, 2016a,b). As such, the LORIS and CBRAIN (Das et al., 2016) platforms have been tasked with the technical challenges specific to the institutional-level implementation of open data sharing, including:
      - Comprehensive linking of multimodal data (phenotypic, clinical, neuroimaging, biobanking, and genomics, etc.)
      - Secure database encryption, specifically designed for institutional and multi-project data sharing, ensuring subject confidentiality (using multi-tiered identifiers).
      - Querying capabilities with multiple levels of single study and institutional permissions, allowing public data sharing for all consented and de-identified subject data.
      - Configurable pipelines and flags to facilitate acquisition and analysis, as well as access to High Performance Computing clusters for rapid data processing and sharing of software tools.
      - Robust Workflows and Quality Control mechanisms ensuring transparency and consistency in best practices.
      - Long-term storage (and web access) of data, reducing loss of institutional data assets.
      - Enhanced web-based visualization of imaging, genomic, and phenotypic data, allowing for real-time viewing and manipulation of data from anywhere in the world.
      - Numerous modules for data filtering, summary statistics, and personalized and configurable dashboards.
    Implementing the vision of Open Science at the Montreal Neurological Institute will be a concerted undertaking that seeks to facilitate data sharing for the global research community. Our goal is to utilize the years of experience in multi-site collaborative research infrastructure to implement the technical requirements to achieve this level of public data sharing in a practical yet robust manner, in support of accelerating scientific discovery. PMID:28111547

  18. Cyberinfrastructure for Open Science at the Montreal Neurological Institute.

    PubMed

    Das, Samir; Glatard, Tristan; Rogers, Christine; Saigle, John; Paiva, Santiago; MacIntyre, Leigh; Safi-Harab, Mouna; Rousseau, Marc-Etienne; Stirling, Jordan; Khalili-Mahani, Najmeh; MacFarlane, David; Kostopoulos, Penelope; Rioux, Pierre; Madjar, Cecile; Lecours-Boucher, Xavier; Vanamala, Sandeep; Adalat, Reza; Mohaddes, Zia; Fonov, Vladimir S; Milot, Sylvain; Leppert, Ilana; Degroot, Clotilde; Durcan, Thomas M; Campbell, Tara; Moreau, Jeremy; Dagher, Alain; Collins, D Louis; Karamchandani, Jason; Bar-Or, Amit; Fon, Edward A; Hoge, Rick; Baillet, Sylvain; Rouleau, Guy; Evans, Alan C

    2016-01-01

    Data sharing is becoming more of a requirement as technologies mature and as global research and communications diversify. As a result, researchers are looking for practical solutions, not only to enhance scientific collaborations, but also to acquire larger amounts of data, and to access specialized datasets. In many cases, the realities of data acquisition present a significant burden, therefore gaining access to public datasets allows for more robust analyses and broadly enriched data exploration. To answer this demand, the Montreal Neurological Institute has announced its commitment to Open Science, harnessing the power of making both clinical and research data available to the world (Owens, 2016a,b). As such, the LORIS and CBRAIN (Das et al., 2016) platforms have been tasked with the technical challenges specific to the institutional-level implementation of open data sharing, including:
      - Comprehensive linking of multimodal data (phenotypic, clinical, neuroimaging, biobanking, and genomics, etc.)
      - Secure database encryption, specifically designed for institutional and multi-project data sharing, ensuring subject confidentiality (using multi-tiered identifiers).
      - Querying capabilities with multiple levels of single study and institutional permissions, allowing public data sharing for all consented and de-identified subject data.
      - Configurable pipelines and flags to facilitate acquisition and analysis, as well as access to High Performance Computing clusters for rapid data processing and sharing of software tools.
      - Robust Workflows and Quality Control mechanisms ensuring transparency and consistency in best practices.
      - Long-term storage (and web access) of data, reducing loss of institutional data assets.
      - Enhanced web-based visualization of imaging, genomic, and phenotypic data, allowing for real-time viewing and manipulation of data from anywhere in the world.
      - Numerous modules for data filtering, summary statistics, and personalized and configurable dashboards.
    Implementing the vision of Open Science at the Montreal Neurological Institute will be a concerted undertaking that seeks to facilitate data sharing for the global research community. Our goal is to utilize the years of experience in multi-site collaborative research infrastructure to implement the technical requirements to achieve this level of public data sharing in a practical yet robust manner, in support of accelerating scientific discovery.

  19. [Mobile phone-computer wireless interactive graphics transmission technology and its medical application].

    PubMed

    Huang, Shuo; Liu, Jing

    2010-05-01

    Application of clinical digital medical imaging has raised many tough issues to tackle, such as data storage, management, and information sharing. Here we investigated a mobile phone-based medical image management system which is capable of achieving personal medical imaging information storage, management and comprehensive health information analysis. The technologies related to the management system, spanning wireless transmission technology, the technical capabilities of the phone in mobile health care, and the management of a mobile medical database, were discussed. Taking medical infrared image transmission between phone and computer as an example, the working principle of the present system was demonstrated.

  20. Automatic Identification of Application I/O Signatures from Noisy Server-Side Traces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yang; Gunasekaran, Raghul; Ma, Xiaosong

    2014-01-01

    Competing workloads on a shared storage system cause I/O resource contention and application performance vagaries. This problem is already evident in today's HPC storage systems and is likely to become acute at exascale. We need more interaction between application I/O requirements and system software tools to help alleviate the I/O bottleneck, moving towards I/O-aware job scheduling. However, this requires rich techniques to capture application I/O characteristics, which remain evasive in production systems. Traditionally, I/O characteristics have been obtained using client-side tracing tools, with drawbacks such as non-trivial instrumentation/development costs, large trace traffic, and inconsistent adoption. We present a novel approach, I/O Signature Identifier (IOSI), to characterize the I/O behavior of data-intensive applications. IOSI extracts signatures from noisy, zero-overhead server-side I/O throughput logs that are already collected on today's supercomputers, without interfering with the compiling/execution of applications. We evaluated IOSI using the Spider storage system at Oak Ridge National Laboratory, the S3D turbulence application (running on 18,000 Titan nodes), and benchmark-based pseudo-applications. Through our experiments we confirmed that IOSI effectively extracts an application's I/O signature despite significant server-side noise. Compared to client-side tracing tools, IOSI is transparent, interface-agnostic, and incurs no overhead. Compared to alternative data alignment techniques (e.g., dynamic time warping), it offers higher signature accuracy and shorter processing time.
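
    IOSI's matching of a known signature against a noisy server-side log can be illustrated, very loosely, by a sliding normalized correlation; the paper's own alignment method is more sophisticated. The sketch below, assuming numpy, uses made-up throughput data.

```python
import numpy as np

def best_match(log: np.ndarray, signature: np.ndarray) -> tuple[int, float]:
    """Slide `signature` over `log`; return (offset, correlation) of the best fit."""
    sig = (signature - signature.mean()) / (signature.std() + 1e-12)
    best = (0, -1.0)
    for off in range(len(log) - len(signature) + 1):
        win = log[off:off + len(signature)]
        win = (win - win.mean()) / (win.std() + 1e-12)
        r = float(np.mean(sig * win))
        if r > best[1]:
            best = (off, r)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signature = np.array([0, 5, 5, 0, 8, 8, 8, 0], dtype=float)  # GB/s per interval (made up)
    log = rng.normal(1.0, 0.3, 500)                              # background noise from other jobs
    log[120:128] += signature                                    # the application's burst
    offset, corr = best_match(log, signature)
    print(f"signature most likely starts at interval {offset} (r = {corr:.2f})")
```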

  1. Complementing hydropower with PV and wind: optimal energy mix in a fully renewable Switzerland

    NASA Astrophysics Data System (ADS)

    Dujardin, Jérôme; Kahl, Annelen; Kruyt, Bert; Lehning, Michael

    2017-04-01

    Like several other countries, Switzerland plans to phase out its nuclear power production and will replace most or all of it by renewables. Switzerland has the chance to benefit from a large hydropower potential and has already exploited almost all of it. Currently about 60% of the Swiss electricity consumption is covered by hydropower, which will eventually leave a gap of about 40% to the other renewables, mainly composed of photovoltaics (PV) and wind. With its high flexibility, storage hydropower will play a major role in the future energy mix, providing valuable power and energy balance. Our work focuses on the interplay between PV, wind and storage hydropower, to analyze the dynamics of this complex system and to identify the best PV-wind mixing ratio. Given the current electricity consumption and the currently installed pumping capacity of the storage hydropower plants, it appears that the Swiss hydropower system can completely alleviate the intermittency of PV and wind. Some seasonal mismatch between production and demand will remain, but we show that oversizing the production from PV and wind or enlarging the reservoir capacity can keep it at an acceptable level or even eliminate it. We found that PV, wind and hydropower perform best together when the share of PV in the solar-wind mix is between 20 and 60%. These findings are quantitatively specific to Switzerland but qualitatively transferable to similar mountainous environments with abundant hydropower resources.

  2. Pollutant emissions from vehicles with regenerating after-treatment systems in regulatory and real-world driving cycles.

    PubMed

    Alvarez, Robert; Weilenmann, Martin; Novak, Philippe

    2008-07-15

    Regenerating exhaust after-treatment systems are increasingly employed in passenger cars in order to comply with regulatory emission standards. These systems include pollutant storage units that occasionally have to be regenerated. The regeneration strategy applied, the resultant emission levels and their share of the emission level during normal operation mode are key issues in determining realistic overall emission factors for these cars. In order to investigate these topics, test series with four cars featuring different types of such after-treatment systems were carried out. The emission performance in legislative and real-world cycles was monitored as well as at constant speeds. The extra emissions determined during regeneration stages are presented together with the methodology applied to calculate their impact on overall emissions. It can be concluded that exhaust after-treatment systems with storage units cause substantial overall extra emissions during regeneration mode and can appreciably affect the emission factors of cars equipped with such systems, depending on the frequency of regenerations. Considering that the fleet appearance of vehicles equipped with such after-treatment systems will increase due to the evolution of statutory pollutant emission levels, extra emissions originating from regenerations of pollutant storage units consequently need to be taken into account for fleet emission inventories. Accurately quantifying these extra emissions is achieved by either conducting sufficient repetitions of emission measurements with an individual car or by considerably increasing the size of the sample of cars with comparable after-treatment systems.

  3. Impact of Nisin-Activated Packaging on Microbiota of Beef Burgers during Storage

    PubMed Central

    Ferrocino, Ilario; Greppi, Anna; La Storia, Antonietta; Rantsiou, Kalliopi; Ercolini, Danilo

    2015-01-01

    Beef burgers were stored at 4°C in a vacuum in nisin-activated antimicrobial packaging. Microbial ecology analyses were performed on samples collected between days 0 and 21 of storage to discover the population diversity. Two batches were analyzed using RNA-based denaturing gradient gel electrophoresis (DGGE) and pyrosequencing. The active packaging retarded the growth of the total viable bacteria and lactic acid bacteria. Culture-independent analysis by pyrosequencing of RNA extracted directly from meat showed that Photobacterium phosphoreum, Lactococcus piscium, Lactobacillus sakei, and Leuconostoc carnosum were the major operational taxonomic units (OTUs) shared between control and treated samples. Beta diversity analysis of the 16S rRNA sequence data and RNA-DGGE showed a clear separation between two batches based on the microbiota. Control samples from batch B showed a significant high abundance of some taxa sensitive to nisin, such as Kocuria rhizophila, Staphylococcus xylosus, Leuconostoc carnosum, and Carnobacterium divergens, compared to control samples from batch A. However, only from batch B was it possible to find a significant difference between controls and treated samples during storage due to the active packaging. Predicted metagenomes confirmed differences between the two batches and indicated that the use of nisin-based antimicrobial packaging can determine a reduction in the abundance of specific metabolic pathways related to spoilage. The present study aimed to assess the viable bacterial communities in beef burgers stored in nisin-based antimicrobial packaging, and it highlights the efficacy of this strategy to prolong beef burger shelf life. PMID:26546424

  4. Team Leader: Tom Peters--TAP Information Services

    ERIC Educational Resources Information Center

    Library Journal, 2005

    2005-01-01

    Tom Peters packs 36 hours of work into the confines of a 24-hour day. Without breaking a sweat, he juggles multiple collaborative projects, which currently include an Illinois academic library shared storage facility; a multistate virtual reference and instruction service for blind and visually impaired individuals (InfoEyes); a virtual meeting…

  5. DEVELOPMENT OF THE U.S. EPA HEALTH EFFECTS RESEARCH LABORATORY FROZEN BLOOD CELL REPOSITORY PROGRAM

    EPA Science Inventory

    In previous efforts, we suggested that proper blood cell freezing and storage is necessary in longitudinal studies with reduced between-test error, for specimen sharing between laboratories and for convenient scheduling of assays. We continue to develop and upgrade programs for o...

  6. A Case for Data Commons

    PubMed Central

    Grossman, Robert L.; Heath, Allison; Murphy, Mark; Patterson, Maria; Wells, Walt

    2017-01-01

    Data commons collocate data, storage, and computing infrastructure with core services and commonly used tools and applications for managing, analyzing, and sharing data to create an interoperable resource for the research community. An architecture for data commons is described, as well as some lessons learned from operating several large-scale data commons. PMID:29033693

  7. Dish Stirling High Performance Thermal Storage FY14Q4 Quad Chart

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andraka, Charles E.

    2014-10-01

    The goals of this project are to demonstrate the feasibility of significant thermal storage for dish Stirling systems to leverage their existing high performance to greater capacity; demonstrate key components of a latent storage and transport system enabling on-dish storage with low energy losses; and provide a technology path to a 25 kWe system with 6 hours of storage.

  8. The Materials Data Facility: Data Services to Advance Materials Science Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaiszik, B.; Chard, K.; Pruyne, J.

    2016-07-06

    With increasingly strict data management requirements from funding agencies and institutions, expanding focus on the challenges of research replicability, and growing data sizes and heterogeneity, new data needs are emerging in the materials community. The materials data facility (MDF) operates two cloud-hosted services, data publication and data discovery, with features to promote open data sharing, self-service data publication and curation, and encourage data reuse, layered with powerful data discovery tools. The data publication service simplifies the process of copying data to a secure storage location, assigning data a citable persistent identifier, and recording custom (e.g., material, technique, or instrument specific) and automatically-extracted metadata in a registry while the data discovery service will provide advanced search capabilities (e.g., faceting, free text range querying, and full text search) against the registered data and metadata. The MDF services empower individual researchers, research projects, and institutions to (I) publish research datasets, regardless of size, from local storage, institutional data stores, or cloud storage, without involvement of third-party publishers; (II) build, share, and enforce extensible domain-specific custom metadata schemas; (III) interact with published data and metadata via representational state transfer (REST) application program interfaces (APIs) to facilitate automation, analysis, and feedback; and (IV) access a data discovery model that allows researchers to search, interrogate, and eventually build on existing published data. We describe MDF’s design, current status, and future plans.

  9. The Materials Data Facility: Data Services to Advance Materials Science Research

    NASA Astrophysics Data System (ADS)

    Blaiszik, B.; Chard, K.; Pruyne, J.; Ananthakrishnan, R.; Tuecke, S.; Foster, I.

    2016-08-01

    With increasingly strict data management requirements from funding agencies and institutions, expanding focus on the challenges of research replicability, and growing data sizes and heterogeneity, new data needs are emerging in the materials community. The materials data facility (MDF) operates two cloud-hosted services, data publication and data discovery, with features to promote open data sharing, self-service data publication and curation, and encourage data reuse, layered with powerful data discovery tools. The data publication service simplifies the process of copying data to a secure storage location, assigning data a citable persistent identifier, and recording custom (e.g., material, technique, or instrument specific) and automatically-extracted metadata in a registry while the data discovery service will provide advanced search capabilities (e.g., faceting, free text range querying, and full text search) against the registered data and metadata. The MDF services empower individual researchers, research projects, and institutions to (I) publish research datasets, regardless of size, from local storage, institutional data stores, or cloud storage, without involvement of third-party publishers; (II) build, share, and enforce extensible domain-specific custom metadata schemas; (III) interact with published data and metadata via representational state transfer (REST) application program interfaces (APIs) to facilitate automation, analysis, and feedback; and (IV) access a data discovery model that allows researchers to search, interrogate, and eventually build on existing published data. We describe MDF's design, current status, and future plans.

  10. DICOM relay over the cloud.

    PubMed

    Silva, Luís A Bastião; Costa, Carlos; Oliveira, José Luis

    2013-05-01

    Healthcare institutions worldwide have adopted picture archiving and communication system (PACS) for enterprise access to images, relying on Digital Imaging Communication in Medicine (DICOM) standards for data exchange. However, communication over a wider domain of independent medical institutions is not well standardized. A DICOM-compliant bridge was developed for extending and sharing DICOM services across healthcare institutions without requiring complex network setups or dedicated communication channels. A set of DICOM routers interconnected through a public cloud infrastructure was implemented to support medical image exchange among institutions. Despite the advantages of cloud computing, new challenges were encountered regarding data privacy, particularly when medical data are transmitted over different domains. To address this issue, a solution was introduced by creating a ciphered data channel between the entities sharing DICOM services. Two main DICOM services were implemented in the bridge: Storage and Query/Retrieve. The performance measures demonstrated it is quite simple to exchange information and processes between several institutions. The solution can be integrated with any currently installed PACS-DICOM infrastructure. This method works transparently with well-known cloud service providers. Cloud computing was introduced to augment enterprise PACS by providing standard medical imaging services across different institutions, offering communication privacy and enabling creation of wider PACS scenarios with suitable technical solutions.
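
    The paper does not include code; as a rough connectivity check against a DICOM node such as one of the bridge's routers, the sketch below issues a C-ECHO using the pynetdicom package (an assumption, not the paper's implementation). The host, port and AE title are placeholders.

```python
from pynetdicom import AE

# Placeholder address of a DICOM node (e.g., one of the bridge's routers).
ROUTER_HOST = "dicom-router.example.org"
ROUTER_PORT = 11112

ae = AE(ae_title="PORTAL_TEST")
# Request the Verification SOP Class (C-ECHO) by its UID.
ae.add_requested_context("1.2.840.10008.1.1")

assoc = ae.associate(ROUTER_HOST, ROUTER_PORT)
if assoc.is_established:
    status = assoc.send_c_echo()
    print("C-ECHO status: 0x{0:04X}".format(status.Status))
    assoc.release()
else:
    print("Association rejected, aborted or never connected")
```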

  11. The Contribution of Reservoirs to Global Land Surface Water Storage Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Tian; Nijssen, Bart; Gao, Huilin

    Man-made reservoirs play a key role in the terrestrial water system. They alter water fluxes at the land surface and impact surface water storage through water management regulations for diverse purposes such as irrigation, municipal water supply, hydropower generation, and flood control. Although most developed countries have established sophisticated observing systems for many variables in the land surface water cycle, long-term and consistent records of reservoir storage are much more limited and not always shared. Furthermore, most land surface hydrological models do not represent the effects of water management activities. Here, the contribution of reservoirs to seasonal water storage variations is investigated using a large-scale water management model to simulate the effects of reservoir management at basin and continental scales. The model was run from 1948 to 2010 at a spatial resolution of 0.25° latitude–longitude. A total of 166 of the largest reservoirs in the world with a total capacity of about 3900 km3 (nearly 60% of the globally integrated reservoir capacity) were simulated. The global reservoir storage time series reflects the massive expansion of global reservoir capacity; over 30 000 reservoirs have been constructed during the past half century, with a mean absolute interannual storage variation of 89 km3. The results indicate that the average reservoir-induced seasonal storage variation is nearly 700 km3 or about 10% of the global reservoir storage. For some river basins, such as the Yellow River, seasonal reservoir storage variations can be as large as 72% of combined snow water equivalent and soil moisture storage.
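
    The seasonal and interannual variation statistics reported above can be computed from a monthly storage series as in the sketch below, which assumes pandas and uses made-up data purely for illustration.

```python
import numpy as np
import pandas as pd

# Made-up monthly reservoir storage (km^3), for illustration only.
idx = pd.date_range("1948-01-01", "2010-12-01", freq="MS")
rng = np.random.default_rng(1)
storage = pd.Series(
    3000 + 350 * np.sin(2 * np.pi * (idx.month - 4) / 12) + rng.normal(0, 40, len(idx)),
    index=idx,
)

# Mean seasonal cycle: average storage by calendar month across all years.
seasonal_cycle = storage.groupby(storage.index.month).mean()
seasonal_variation = seasonal_cycle.max() - seasonal_cycle.min()
print(f"seasonal storage variation: {seasonal_variation:.0f} km^3")

# Interannual variation: mean absolute year-to-year change of the annual mean.
annual_mean = storage.resample("YS").mean()
print(f"mean absolute interannual change: {annual_mean.diff().abs().mean():.0f} km^3")
```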

  12. An Object-Relational Ifc Storage Model Based on Oracle Database

    NASA Astrophysics Data System (ADS)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professions attracts more attention in the architecture, engineering and construction (AEC) industry. In order to adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, we establish the mapping rules between data types in the IFC specification and the Oracle database. Secondly, we design the IFC database according to the relationships among IFC entities. Thirdly, we parse the IFC file and extract the IFC data. And lastly, we store the IFC data into the corresponding tables in the IFC database. In our experiments, three different building models are selected to demonstrate the effectiveness of the storage model. The comparison of experimental statistics proves that IFC data are lossless during data exchange.
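
    The parse-and-store workflow can be illustrated with a much-simplified sketch that loads STEP-format IFC instance lines into a single generic table; it uses SQLite rather than Oracle and is not the paper's object-relational schema. File names and the regular expression are illustrative, and multi-line instance definitions are not handled. A real object-relational mapping would instead create one typed table (or object type) per IFC entity class.

```python
import re
import sqlite3

# Matches STEP physical file lines like: #42=IFCWALL('2O2Fr$t4X7Zf8NOew3FLOH',#41,...);
ENTITY_RE = re.compile(r"^#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\);\s*$")

def load_ifc(path: str, db: sqlite3.Connection) -> int:
    """Store each IFC instance as (id, entity type, raw attribute list)."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS ifc_entity ("
        " id INTEGER PRIMARY KEY, type TEXT NOT NULL, attrs TEXT NOT NULL)"
    )
    n = 0
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = ENTITY_RE.match(line.strip())
            if m:  # single-line instances only; multi-line definitions are skipped
                db.execute(
                    "INSERT OR REPLACE INTO ifc_entity VALUES (?, ?, ?)",
                    (int(m.group(1)), m.group(2), m.group(3)),
                )
                n += 1
    db.commit()
    return n

if __name__ == "__main__":
    conn = sqlite3.connect("ifc_model.db")
    count = load_ifc("building_model.ifc", conn)  # illustrative file name
    print(count, "entities stored")
    for row in conn.execute(
        "SELECT type, COUNT(*) FROM ifc_entity GROUP BY type ORDER BY 2 DESC LIMIT 5"
    ):
        print(*row)
```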

  13. GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments

    NASA Astrophysics Data System (ADS)

    Chen, Zhanlong; Wu, Xin-cai; Wu, Liang

    2008-12-01

    Computation Grids enable the coordinated sharing of large-scale, distributed, heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. Integrating multi-source, heterogeneous spatial information, managing distributed spatial resources, and sharing spatial data and Grid services cooperatively are the key problems to solve in developing a Grid GIS. The spatial index mechanism is the key technology of a Grid GIS and spatial database, and its performance determines the overall performance of the GIS in Grid environments. To improve the efficiency of parallel processing of massive spatial data in a distributed parallel computing Grid environment, this paper presents GSHR-Tree, a new grid slot hash parallel spatial index. Based on a hash table and dynamic spatial slots, it improves the structure of the classical parallel R-tree index and combines the strengths of the R-tree and hash data structures, yielding a parallel spatial index that meets the needs of parallel Grid computing over massive spatial data in a distributed network. The algorithm splits space into multiple slots and maps these slots to sites in the distributed and parallel system; each site organizes the spatial objects in its slots into an R-tree. On the basis of this tree structure, the index data are distributed among multiple nodes in the Grid network using a large-node R-tree method, and load imbalance during processing can be quickly corrected by a dynamic adjustment algorithm. The structure accounts for the distribution, replication, and transfer of the spatial index in the Grid environment, ensures load balance in parallel computation, and is well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparison of spatial objects used in the original R-tree, the algorithm builds the spatial index with binary code operations, which execute more efficiently, and uses an extended dynamic hash code for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a full node must be split. We describe a more flexible allocation protocol that copes with a temporary shortage of storage resources. It uses a distributed, balanced, binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. An application manipulates the GSHR-Tree structure from a node in the Grid environment; the node addresses the tree through an image that splits can make outdated, which may generate addressing errors that are resolved by forwarding among the servers. A spatial index data distribution algorithm that limits the number of servers is also proposed, improving storage utilization at the cost of additional messages. The GSHR-Tree scheme is expected to fit the needs of new applications using ever larger sets of spatial data.
    Our proposal constitutes a flexible storage allocation method for a distributed spatial index. The insertion policy can be tuned dynamically to cope with periods of storage shortage; in such cases storage balancing should be favored for better space utilization, at the price of extra message exchanges between servers. The structure strikes a compromise between updating the duplicated index and transferring the spatial index data. To meet the needs of Grid computing, GSHR-Tree has a flexible structure that can satisfy new requirements in the future, and it provides R-tree capabilities for large spatial datasets stored over interconnected servers. The analysis, including the experiments, confirmed the efficiency of our design choices, and the scheme should fit the needs of new applications of spatial data using ever larger datasets. Using the system response time of the parallel spatial range query as the performance evaluation metric, the simulation experiments confirm the soundness of the design and the high performance of the proposed indexing structure.
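
    The slot-to-site mapping at the heart of GSHR-Tree can be illustrated with a toy sketch: bit-interleave quantized coordinates into a Z-order code, keep a prefix of that code as the spatial slot, and hash the slot to a server (each server would then index its slots with a local R-tree). The quantization, slot width and hashing below are illustrative assumptions, not the paper's algorithm.

```python
def morton_code(x: int, y: int, bits: int = 16) -> int:
    """Interleave the low `bits` of x and y into a Z-order (Morton) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def spatial_slot(lon: float, lat: float, slot_bits: int = 8) -> int:
    """Quantize a coordinate to a 16-bit grid, then keep a Z-order prefix as the slot id."""
    x = int((lon + 180.0) / 360.0 * (1 << 16))
    y = int((lat + 90.0) / 180.0 * (1 << 16))
    return morton_code(x, y) >> (32 - slot_bits)

def server_for(slot: int, n_servers: int) -> int:
    """Map a slot to a site; each site builds an R-tree over its own slots."""
    return hash(slot) % n_servers

if __name__ == "__main__":
    points = [(114.3, 30.6), (114.4, 30.5), (2.35, 48.86)]  # illustrative coordinates
    for lon, lat in points:
        s = spatial_slot(lon, lat)
        print(f"({lon:7.2f}, {lat:6.2f}) -> slot {s:3d} -> server {server_for(s, 8)}")
```

    Nearby points fall into the same slot, so locality is preserved, while distant slots spread across servers; this is the intuition behind distributing slot-local R-trees.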

  14. Data Transfer Study HPSS Archiving

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, James; Parete-Koon, Suzanne T; Mitchell, Quinn

    2015-01-01

    The movement of the large amounts of data produced by codes run in a High Performance Computing (HPC) environment can be a bottleneck for project workflows. To balance filesystem capacity and performance requirements, HPC centers enforce data management policies to purge old files to make room for new computation and analysis results. Users at Oak Ridge Leadership Computing Facility (OLCF) and many other HPC user facilities must archive data to avoid data loss during purges, therefore the time associated with data movement for archiving is something that all users must consider. This study observed the difference in transfer speed from the originating location on the Lustre filesystem to the more permanent High Performance Storage System (HPSS). The tests were done with a number of different transfer methods for files that spanned a variety of sizes and compositions that reflect OLCF user data. This data will be used to help users of Titan and other Cray supercomputers plan their workflow and data transfers so that they are most efficient for their project. We will also discuss best practice for maintaining data at shared user facilities.
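
    A simple way to collect such timings is to wrap the centre's archival tools in a timer, as in the sketch below; it assumes the standard HPSS clients hsi and htar are installed and configured, and all paths are placeholders rather than the study's actual test data.

```python
import subprocess
import time

def timed(cmd: list[str]) -> float:
    """Run a command and return its wall-clock time in seconds (raises on failure)."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Placeholder paths; both tools must already be configured for the centre's HPSS.
    src_dir = "/lustre/proj/run42/output"
    hpss_tar = "/hpss/proj/run42/output.tar"

    # Bundle a directory of many small files into one HPSS-resident archive.
    t_htar = timed(["htar", "-cvf", hpss_tar, src_dir])
    print(f"htar archive: {t_htar:.1f} s")

    # Copy a single large file with hsi.
    t_hsi = timed(["hsi", f"put {src_dir}/restart.bin : /hpss/proj/run42/restart.bin"])
    print(f"hsi single file: {t_hsi:.1f} s")
```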

  15. Telecommunication Platforms for Transmitting Sensor Data over Communication Networks-State of the Art and Challenges.

    PubMed

    Staniec, Kamil; Habrych, Marcin

    2016-07-19

    The importance of constructing wide-area sensor networks for holistic environmental state evaluation has been demonstrated. A general structure of such a network has been presented with a distinction of three segments: local (based on ZigBee, Ethernet and ModBus techniques), core (based on cellular technologies) and the storage/application. The implementation of these techniques requires knowledge of their technical limitations and electromagnetic compatibility issues. The former refer to ZigBee performance degradation in multi-hop transmission, whereas the latter are associated with sharing the electromagnetic spectrum with other existing technologies or with undesired radiated emissions generated by the radio modules of the sensor network. In many cases, it is also necessary to provide a measurement station with an autonomous energy source, such as solar power. As stems from measurements of the energetic efficiency of these sources, one should apply them with care and perform a detailed power budget, since their real performance may turn out to be far from expected. This, in turn, may negatively affect, in particular, the operation of chemical sensors implemented in the network, as they often require additional heating.

  16. Telecommunication Platforms for Transmitting Sensor Data over Communication Networks—State of the Art and Challenges

    PubMed Central

    Staniec, Kamil; Habrych, Marcin

    2016-01-01

    The importance of constructing wide-area sensor networks for holistic environmental state evaluation has been demonstrated. A general structure of such a network has been presented with a distinction of three segments: local (based on ZigBee, Ethernet and ModBus techniques), core (based on cellular technologies) and the storage/application. The implementation of these techniques requires knowledge of their technical limitations and electromagnetic compatibility issues. The former refer to ZigBee performance degradation in multi-hop transmission, whereas the latter are associated with sharing the electromagnetic spectrum with other existing technologies or with undesired radiated emissions generated by the radio modules of the sensor network. In many cases, it is also necessary to provide a measurement station with an autonomous energy source, such as solar power. As stems from measurements of the energetic efficiency of these sources, one should apply them with care and perform a detailed power budget, since their real performance may turn out to be far from expected. This, in turn, may negatively affect, in particular, the operation of chemical sensors implemented in the network, as they often require additional heating. PMID:27447633

  17. A modeling of dynamic storage assignment for order picking in beverage warehousing with Drive-in Rack system

    NASA Astrophysics Data System (ADS)

    Hadi, M. Z.; Djatna, T.; Sugiarto

    2018-04-01

    This paper develops a dynamic storage assignment model to solve the storage assignment problem (SAP) for beverage order picking in a drive-in rack warehousing system, determining the appropriate storage location and space for each beverage product dynamically so that the performance of the system can be improved. The study constructs a graph model to represent the drive-in rack storage positions and then combines association rule mining, class-based storage policies and an arrangement rule algorithm to determine an appropriate storage location and arrangement of the products according to dynamic orders from customers. The performance of the proposed model is measured as rule adjacency accuracy, travel distance for the picking process, and the probability that a product reaches its expiry date, using a Last Come First Serve (LCFS) queue approach. Finally, the proposed model is implemented through computer simulation and its performance is compared with that of other storage assignment methods. The results indicate that the proposed model outperforms the other storage assignment methods.

  18. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    NASA Astrophysics Data System (ADS)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, applicable ECC strength of SCM and NAND flash is evaluated independently by fixing ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with small correctable bit is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, strong and long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because large-capacity SCM improves the storage performance.

  19. Progress in preliminary studies at Ottana Solar Facility

    NASA Astrophysics Data System (ADS)

    Demontis, V.; Camerada, M.; Cau, G.; Cocco, D.; Damiano, A.; Melis, T.; Musio, M.

    2016-05-01

    The fast increasing share of distributed generation from non-programmable renewable energy sources, such as the strong penetration of photovoltaic technology in distribution networks, has generated several problems for the management and security of the whole power grid. In order to meet the challenge of a significant share of solar energy in the electricity mix, several actions aimed at increasing grid flexibility and hosting capacity, as well as at improving generation programmability, need to be investigated. This paper focuses on the ongoing preliminary studies at the Ottana Solar Facility, a new experimental power plant located in Sardinia (Italy) currently under construction, which will offer the possibility to progress in the study of the integration of solar plants in the power grid. The facility integrates a concentrating solar power (CSP) plant, including a thermal energy storage system and an organic Rankine cycle (ORC) unit, with a concentrating photovoltaic (CPV) plant and an electrical energy storage system. The main goal of the facility is to assess small-scale concentrating solar power technology in real operating conditions and to study the integration of the two technologies and the storage systems to produce programmable and controllable power profiles. A model of the CSP plant yield was developed to assess different operational strategies that significantly influence the plant's yearly yield and its global economic effectiveness. In particular, precise assumptions for the ORC module start-up behavior, based on discussions with the manufacturers and technical datasheets, will be described. Finally, the results of the analysis of the "solar driven", "weather forecast" and "combined storage state of charge (SOC)/weather forecast" operational strategies will be presented.

  20. The Dockstore: enabling modular, community-focused sharing of Docker-based genomics tools and workflows

    PubMed Central

    O'Connor, Brian D.; Yuen, Denis; Chung, Vincent; Duncan, Andrew G.; Liu, Xiang Kun; Patricia, Janice; Paten, Benedict; Stein, Lincoln; Ferretti, Vincent

    2017-01-01

    As genomic datasets continue to grow, the feasibility of downloading data to a local organization and running analysis on a traditional compute environment is becoming increasingly problematic. Current large-scale projects, such as the ICGC PanCancer Analysis of Whole Genomes (PCAWG), the Data Platform for the U.S. Precision Medicine Initiative, and the NIH Big Data to Knowledge Center for Translational Genomics, are using cloud-based infrastructure to both host and perform analysis across large data sets. In PCAWG, over 5,800 whole human genomes were aligned and variant called across 14 cloud and HPC environments; the processed data was then made available on the cloud for further analysis and sharing. If run locally, an operation at this scale would have monopolized a typical academic data centre for many months, and would have presented major challenges for data storage and distribution. However, this scale is increasingly typical for genomics projects and necessitates a rethink of how analytical tools are packaged and moved to the data. For PCAWG, we embraced the use of highly portable Docker images for encapsulating and sharing complex alignment and variant calling workflows across highly variable environments. While successful, this endeavor revealed a limitation in Docker containers, namely the lack of a standardized way to describe and execute the tools encapsulated inside the container. As a result, we created the Dockstore ( https://dockstore.org), a project that brings together Docker images with standardized, machine-readable ways of describing and running the tools contained within. This service greatly improves the sharing and reuse of genomics tools and promotes interoperability with similar projects through emerging web service standards developed by the Global Alliance for Genomics and Health (GA4GH). PMID:28344774

  1. The Dockstore: enabling modular, community-focused sharing of Docker-based genomics tools and workflows.

    PubMed

    O'Connor, Brian D; Yuen, Denis; Chung, Vincent; Duncan, Andrew G; Liu, Xiang Kun; Patricia, Janice; Paten, Benedict; Stein, Lincoln; Ferretti, Vincent

    2017-01-01

    As genomic datasets continue to grow, the feasibility of downloading data to a local organization and running analysis on a traditional compute environment is becoming increasingly problematic. Current large-scale projects, such as the ICGC PanCancer Analysis of Whole Genomes (PCAWG), the Data Platform for the U.S. Precision Medicine Initiative, and the NIH Big Data to Knowledge Center for Translational Genomics, are using cloud-based infrastructure to both host and perform analysis across large data sets. In PCAWG, over 5,800 whole human genomes were aligned and variant called across 14 cloud and HPC environments; the processed data was then made available on the cloud for further analysis and sharing. If run locally, an operation at this scale would have monopolized a typical academic data centre for many months, and would have presented major challenges for data storage and distribution. However, this scale is increasingly typical for genomics projects and necessitates a rethink of how analytical tools are packaged and moved to the data. For PCAWG, we embraced the use of highly portable Docker images for encapsulating and sharing complex alignment and variant calling workflows across highly variable environments. While successful, this endeavor revealed a limitation in Docker containers, namely the lack of a standardized way to describe and execute the tools encapsulated inside the container. As a result, we created the Dockstore ( https://dockstore.org), a project that brings together Docker images with standardized, machine-readable ways of describing and running the tools contained within. This service greatly improves the sharing and reuse of genomics tools and promotes interoperability with similar projects through emerging web service standards developed by the Global Alliance for Genomics and Health (GA4GH).

  2. Exploiting the cannibalistic traits of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Collins, O.

    1993-01-01

    In Reed-Solomon codes and all other maximum distance separable codes, there is an intrinsic relationship between the size of the symbols in a codeword and the length of the codeword. Increasing the number of symbols in a codeword to improve the efficiency of the coding system thus requires using a larger set of symbols. However, long Reed-Solomon codes are difficult to implement, and many communications or storage systems cannot easily accommodate an increased symbol size; e.g., M-ary frequency shift keying (FSK) and photon-counting pulse-position modulation demand a fixed symbol size. A technique for sharing redundancy among many different Reed-Solomon codewords to achieve the efficiency attainable in long Reed-Solomon codes without increasing the symbol size is described. Techniques both for calculating the performance of these new codes and for determining their encoder and decoder complexities are presented. These complexities are usually found to be substantially lower than those of conventional Reed-Solomon codes of similar performance.
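
    To make the coupling between symbol size and codeword length concrete, recall the standard constraint for an $(n,k)$ Reed-Solomon code over $\mathrm{GF}(2^m)$ (a textbook fact, not a result of this paper):

        $$ n \le 2^m - 1, \qquad d_{\min} = n - k + 1 $$

    With $m = 8$ (byte-sized symbols) a codeword can be at most 255 symbols long; stretching to, say, 511 symbols would require $m = 9$, which a fixed-alphabet modulation such as M-ary FSK cannot accommodate. Sharing redundancy across several short codewords, as the paper proposes, is a way around this limit.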

  3. The Modern Research Data Portal: A Design Pattern for Networked, Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chard, Kyle; Dart, Eli; Foster, Ian

    Here we describe best practices for providing convenient, high-speed, secure access to large data via research data portals. We capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance Science DMZs and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.
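
    The Python-API step described above can be sketched roughly as follows using the Globus SDK. The client ID, endpoint UUIDs and paths are placeholders, and call signatures may differ between globus-sdk versions, so treat this as an outline in the spirit of the paper's examples rather than the portal's actual code.

        # Rough outline of a Globus-style transfer, loosely following the
        # pattern described above. Placeholder IDs and paths throughout;
        # check the globus-sdk documentation for current call signatures.
        import globus_sdk

        CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"        # placeholder
        SRC_ENDPOINT = "source-endpoint-uuid"          # placeholder
        DST_ENDPOINT = "destination-endpoint-uuid"     # placeholder

        # Interactive native-app login flow (user pastes an auth code).
        auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
        auth_client.oauth2_start_flow()
        print("Log in at:", auth_client.oauth2_get_authorize_url())
        tokens = auth_client.oauth2_exchange_code_for_tokens(input("Auth code: "))
        transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

        tc = globus_sdk.TransferClient(
            authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
        )

        # Describe and submit an asynchronous transfer task.
        task = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="portal download")
        task.add_item("/projects/dataset1/file.h5", "/~/file.h5")
        result = tc.submit_transfer(task)
        print("Submitted transfer task:", result["task_id"])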

  4. The Modern Research Data Portal: a design pattern for networked, data-intensive science

    DOE PAGES

    Chard, Kyle; Dart, Eli; Foster, Ian; ...

    2018-01-15

    We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. Here, we capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.

  5. The Modern Research Data Portal: a design pattern for networked, data-intensive science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chard, Kyle; Dart, Eli; Foster, Ian

    We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. Here, we capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.

  6. Performance Evaluation of Peer-to-Peer Progressive Download in Broadband Access Networks

    NASA Astrophysics Data System (ADS)

    Shibuya, Megumi; Ogishi, Tomohiko; Yamamoto, Shu

    P2P (Peer-to-Peer) file sharing architectures have scalable and cost-effective features. Hence, the application of P2P architectures to media streaming is attractive and is expected to be an alternative to current video streaming based on IP multicast or content delivery systems, because those systems require expensive network infrastructures and large-scale centralized cache storage systems. In this paper, we investigate P2P progressive download enabling Internet video streaming services. We demonstrated the capability of P2P progressive download both in a laboratory test network and in the Internet. Through the experiments, we clarified the contribution of the FTTH links to P2P progressive download in heterogeneous access networks consisting of FTTH and ADSL links. We analyzed the causes of some download performance degradation that occurred in the experiment and discussed effective methods for providing a video streaming service using P2P progressive download in current heterogeneous networks.

  7. Performance Analysis and Parametric Study of a Natural Convection Solar Air Heater With In-built Oil Storage

    NASA Astrophysics Data System (ADS)

    Dhote, Yogesh; Thombre, Shashikant

    2016-10-01

    This paper presents the thermal performance of the proposed double-flow natural convection solar air heater with in-built liquid (oil) sensible heat storage. Unused engine oil was used as the thermal energy storage medium due to its good heat retention capacity even at high temperatures, without evaporation. The performance evaluation was carried out for a day in March for the climatic conditions of Nagpur (India). A self-reliant computational model was developed in C++. The program computes the performance parameters for any day of the year and can be applied to major cities in India. The effect of changes in storage oil quantity and inclination (tilt angle) on the overall efficiency of the solar air heater was studied. The performance was tested initially at storage oil quantities of 25, 50, 75 and 100 l for a plate spacing of 0.04 m and an inclination of 36°. It was found that the solar air heater gives the best performance at a storage oil quantity of 50 l. The performance of the proposed solar air heater was further tested for various combinations of storage oil quantity (50, 75 and 100 l) and inclination (0°, 15°, 30°, 45°, 60°, 75°, 90°). It was found that the proposed solar air heater with in-built oil storage shows its best performance for the combination of 50 l storage oil quantity and 60° inclination. Finally, the results of the parametric study, carried out for a fixed storage oil quantity of 25 l, a plate spacing of 0.03 m and an inclination of 36°, are presented in the form of graphs to show the behaviour of the various heat transfer and fluid flow parameters of the solar air heater.

  8. CERNBox + EOS: end-user storage for science

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Gonzalez Labrador, H.; Lamanna, M.; Mościcki, JT; Peters, AJ

    2015-12-01

    CERNBox is a cloud synchronisation service for end-users: it allows syncing and sharing files on all major mobile and desktop platforms (Linux, Windows, MacOSX, Android, iOS) aiming to provide offline availability to any data stored in the CERN EOS infrastructure. The successful beta phase of the service confirmed the high demand in the community for an easily accessible cloud storage solution such as CERNBox. Integration of the CERNBox service with the EOS storage back-end is the next step towards providing “sync and share” capabilities for scientific and engineering use-cases. In this report we will present lessons learnt in offering the CERNBox service, key technical aspects of CERNBox/EOS integration and new, emerging usage possibilities. The latter includes the ongoing integration of “sync and share” capabilities with the LHC data analysis tools and transfer services.

  9. A web portal for hydrodynamical, cosmological simulations

    NASA Astrophysics Data System (ADS)

    Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.

    2017-07-01

    This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing highly compound and elaborate queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on the chosen object; these services operate directly on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available to users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.

  10. Performance Modeling of Network-Attached Storage Device Based Hierarchical Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Pentakalos, Odysseas I.

    1995-01-01

    Network attached storage devices improve I/O performance by separating control and data paths and eliminating host intervention during the data transfer phase. Devices are attached to both a high speed network for data transfer and to a slower network for control messages. Hierarchical mass storage systems use disks to cache the most recently used files and a combination of robotic and manually mounted tapes to store the bulk of the files in the file system. This paper shows how queuing network models can be used to assess the performance of hierarchical mass storage systems that use network attached storage devices as opposed to host attached storage devices. Simulation was used to validate the model. The analytic model presented here can be used, among other things, to evaluate the protocols involved in I/O over network attached devices.
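
    The flavour of such a queueing-network analysis can be illustrated with exact mean value analysis (MVA) for a closed, single-class network; the stations and service demands below are made-up placeholders, not values from the paper.

        # Minimal exact Mean Value Analysis (MVA) for a closed, single-class
        # queueing network -- the style of model used to evaluate mass storage
        # systems. Service demands here are illustrative placeholders only.
        def mva(service_demands, n_customers):
            """Return per-station residence times and system throughput."""
            queue = [0.0] * len(service_demands)          # mean queue lengths
            for n in range(1, n_customers + 1):
                # Residence time at each queueing station.
                resid = [d * (1.0 + q) for d, q in zip(service_demands, queue)]
                throughput = n / sum(resid)
                queue = [throughput * r for r in resid]
            return resid, throughput

        # Example: control network, data network, disk cache, tape robot (seconds/request).
        demands = {"control_net": 0.002, "data_net": 0.010, "disk": 0.030, "tape": 0.120}
        resid, x = mva(list(demands.values()), n_customers=8)
        for name, r in zip(demands, resid):
            print(f"{name:12s} residence time: {r:.4f} s")
        print(f"system throughput: {x:.2f} requests/s")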

  11. Negotiating designs of multi-purpose reservoir systems in international basins

    NASA Astrophysics Data System (ADS)

    Geressu, Robel; Harou, Julien

    2016-04-01

    Given increasing agricultural and energy demands, coordinated management of multi-reservoir systems could help increase production without further stressing available water resources. However, regional or international disputes about water-use rights pose a challenge to efficient expansion and management of many large reservoir systems. Even when projects are likely to benefit all stakeholders, agreeing on the design, operation, financing, and benefit sharing can be challenging. This is due to the difficulty of considering multiple stakeholder interests in the design of projects and of understanding the benefit trade-offs that designs imply. Incommensurate performance metrics, incomplete knowledge of system requirements, a lack of objectivity in managing conflict and the difficulty of communicating complex issues exacerbate the problem. This work proposes a multi-step hybrid multi-objective optimization and multi-criteria ranking approach for supporting negotiation in water resource systems. The approach uses many-objective optimization to generate alternative efficient designs and reveal the trade-offs between conflicting objectives. This enables informed elicitation of criteria weights for further multi-criteria ranking of alternatives. An ideal design would be ranked as best by all stakeholders. Resource-sharing mechanisms such as power trade and/or cost sharing may help competing stakeholders arrive at designs acceptable to all. Many-objective optimization helps suggest efficient designs (reservoir site, storage size and operating rule) and coordination levels considering the perspectives of multiple stakeholders simultaneously. We apply the proposed approach to a proof-of-concept study of the expansion of the Blue Nile transboundary reservoir system.
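
    A core building block of the many-objective step is identifying the non-dominated (Pareto-efficient) designs. A minimal sketch, with made-up design alternatives and all objectives treated as "smaller is better":

        # Minimal Pareto-dominance filter: keep designs not dominated by any other.
        # Alternatives and objective values below are illustrative placeholders.
        def dominates(a, b):
            """True if design a is at least as good as b in all objectives and better in one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(designs):
            return {
                name: objs
                for name, objs in designs.items()
                if not any(dominates(other, objs)
                           for o_name, other in designs.items() if o_name != name)
            }

        # (cost, -hydropower, downstream deficit) -- hydropower negated so all are minimized.
        designs = {
            "small_dam": (1.0, -2.0, 0.8),
            "large_dam": (3.0, -5.0, 0.5),
            "two_dams":  (2.5, -4.5, 0.4),
            "bad_plan":  (3.5, -1.0, 1.6),   # dominated by small_dam, so it drops out
        }
        print(pareto_front(designs))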

  12. A Cloud Robotics Based Service for Managing RPAS in Emergency, Rescue and Hazardous Scenarios

    NASA Astrophysics Data System (ADS)

    Silvagni, Mario; Chiaberge, Marcello; Sanguedolce, Claudio; Dara, Gianluca

    2016-04-01

    Cloud robotics and cloud services are revolutionizing not only the ICT world but also the robotics industry, giving robots more computing capabilities, storage and connection bandwidth while opening new scenarios that blend the physical and the digital world. In this vision, new IT architectures are required to manage robots, retrieve data from them and create services to interact with users. Among all robots, this work is mainly focused on flying robots, better known as drones, UAVs (Unmanned Aerial Vehicles) or RPAS (Remotely Piloted Aircraft Systems). The cloud robotics approach shifts the concept of a single local "intelligence" for every UAV, as a unique device that carries out all computation and storage onboard, to a more powerful "centralized brain" located in the cloud. This breakthrough opens new scenarios where UAVs are agents, relying on remote servers for most of their computational load and data storage, creating a network of devices where they can share knowledge and information. Many applications are emerging in which UAVs are interesting and suitable devices for environmental monitoring. Many services can be built by fetching data from UAVs, such as telemetry, video streams, pictures or sensor data. These services, part of the IT architecture, can be accessed via the web by other devices or shared with other UAVs. As test cases of the proposed architecture, two examples are reported. The first is a search-and-rescue or emergency-management scenario in which UAVs are required to monitor an intervention. In case of emergency or aggression, the user requests the emergency service from the IT architecture, providing GPS coordinates and an identification number. The IT architecture uses a UAV (chosen among the available ones according to distance, service status, etc.) to reach him/her for monitoring and support operations. In the meantime, an officer uses the service to see the current position of the UAV, its telemetry and the video stream from its camera. Data are stored for further use and documentation and can be shared with all the involved personnel or services. The second case refers to an imaging survey. An investigation area is selected using a map or a set of coordinates by a user who can be in the field or in a management facility. The cloud system processes these data and automatically computes a flight plan that considers the survey data requirements (e.g., picture ground resolution and overlap) as well as several environmental constraints (e.g., no-fly zones, possible hazardous areas, known obstacles). Once the flight plan is loaded onto the selected UAV, the mission starts. During the mission, if suitable data network coverage is available, the UAV transmits the acquired images (typically low-quality images to limit bandwidth) and shooting poses so that a preliminary check can be performed during the mission to minimize survey failures; if not, all data are uploaded asynchronously after the mission. The cloud servers perform all the tasks related to image processing (mosaics, ortho-photos, geo-referencing, 3D models) and data management.

  13. Advancing Collaboration through Hydrologic Data and Model Sharing

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.

    2015-12-01

    HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative's Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called "BagIt". HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud based computation operating on resources for the execution of hydrologic models and analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis is supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare's content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.
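
    HydroShare's resource packaging builds on the BagIt convention mentioned above. As a rough illustration of that convention (not HydroShare's own code), the python bagit package can wrap an existing data directory into a bag with checksummed manifests; the directory name and metadata below are placeholders.

        # Illustrative only: package a local data directory using the BagIt
        # convention that HydroShare resources build on. Requires the
        # third-party "bagit" package; directory and metadata are placeholders.
        import bagit

        bag = bagit.make_bag(
            "streamflow_resource",    # existing directory containing the data files
            {"Source-Organization": "Example Hydrology Lab",
             "External-Description": "Daily streamflow observations (placeholder)"},
            checksums=["md5", "sha256"],
        )
        print("Bag valid:", bag.is_valid())
        print("Payload files:", list(bag.payload_files())[:5])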

  14. Improvements in magnetic bearing performance for flywheel energy storage

    NASA Technical Reports Server (NTRS)

    Plant, David P.; Anand, Davinder K.; Kirk, James A.; Calomeris, Anthony J.; Romero, Robert L.

    1988-01-01

    The paper considers the development of a 500-Watt-hour magnetically suspended flywheel stack energy storage system. The work includes hardware testing results from a stack flywheel energy storage system, improvements in the area of noncontacting displacement transducers, and performance enhancements of magnetic bearings. Experimental results show that a stack flywheel energy storage system is feasible technology.

  15. An Effective Cache Algorithm for Heterogeneous Storage Systems

    PubMed Central

    Li, Yong; Feng, Dan

    2013-01-01

    Modern storage environments are commonly composed of heterogeneous storage devices. However, traditional cache algorithms exhibit performance degradation in heterogeneous storage systems because they were not designed to work with diverse performance characteristics. In this paper, we present a new cache algorithm called HCM for heterogeneous storage systems. The HCM algorithm partitions the cache among the disks and adopts an effective scheme to balance the work across the disks. Furthermore, it applies benefit-cost analysis to choose the best allocation of cache blocks to improve performance. Conducting simulations with a variety of traces and a wide range of cache sizes, our experiments show that HCM significantly outperforms the existing state-of-the-art storage-aware cache algorithms. PMID:24453890
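
    The partitioning idea can be sketched as follows: give each disk its own LRU partition and size the partitions in proportion to a simple benefit estimate (here, each disk's miss penalty). The sizing rule is a placeholder stand-in, not the published HCM benefit-cost analysis.

        # Sketch of a per-disk partitioned LRU cache for a heterogeneous storage
        # system. The proportional-to-miss-penalty sizing below is a simplified
        # stand-in for HCM's benefit-cost allocation, not the published algorithm.
        from collections import OrderedDict

        class PartitionedCache:
            def __init__(self, total_blocks, miss_penalty_ms):
                total_penalty = sum(miss_penalty_ms.values())
                self.capacity = {d: max(1, int(total_blocks * p / total_penalty))
                                 for d, p in miss_penalty_ms.items()}
                self.partitions = {d: OrderedDict() for d in miss_penalty_ms}

            def access(self, disk, block):
                """Return True on hit, False on miss (block is then cached)."""
                part = self.partitions[disk]
                if block in part:
                    part.move_to_end(block)            # refresh LRU position
                    return True
                if len(part) >= self.capacity[disk]:
                    part.popitem(last=False)           # evict least recently used
                part[block] = True
                return False

        cache = PartitionedCache(total_blocks=100, miss_penalty_ms={"ssd": 0.1, "hdd": 8.0})
        print(cache.capacity)                          # most of the cache goes to the slow disk
        print(cache.access("hdd", 42), cache.access("hdd", 42))   # miss, then hit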

  16. Secure count query on encrypted genomic data.

    PubMed

    Hasan, Mohammad Zahidul; Mahdi, Md Safiur Rahman; Sadat, Md Nazmus; Mohammed, Noman

    2018-05-01

    Human genomic information can yield more effective healthcare by guiding medical decisions. Therefore, genomics research is gaining popularity, as it can identify potential correlations between a disease and a certain gene, which improves the safety and efficacy of drug treatment and can also support more effective prevention strategies [1]. To reduce the sampling error and to increase the statistical accuracy of this type of research project, data from different sources need to be brought together, since a single organization does not necessarily possess the required amount of data. In this case, data sharing among multiple organizations must satisfy strict policies (for instance, HIPAA and PIPEDA) that have been enforced to regulate privacy-sensitive data sharing. Storage and computation on the shared data can be outsourced to a third-party cloud service provider equipped with enormous storage and computation resources. However, outsourcing data to a third party is associated with a potential risk of privacy violation for the participants whose genomic sequence or clinical profile is used in these studies. In this article, we propose a method for secure sharing and computation on genomic data in a semi-honest cloud server. In particular, there are two main contributions. First, the proposed method can handle biomedical data containing both genotype and phenotype. Second, our proposed index tree scheme reduces the computational overhead significantly for executing a secure count query operation. In our proposed method, the confidentiality of shared data is ensured through encryption, while making the entire computation process efficient and scalable for cutting-edge biomedical applications. We evaluated our proposed method in terms of efficiency on a database of Single-Nucleotide Polymorphism (SNP) sequences, and experimental results demonstrate that the execution time for a query of 50 SNPs in a database of 50,000 records is approximately 5 s, where each record contains 500 SNPs. It requires 69.7 s to execute the query on the same database when phenotypes are also included. Copyright © 2018 Elsevier Inc. All rights reserved.
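
    Setting the cryptography aside, the count query itself is easy to state. The sketch below counts records whose genotypes (and, optionally, phenotype) match a queried pattern over plaintext data, purely to fix ideas about the operation the paper's encrypted index tree accelerates; the record layout is invented for illustration.

        # Plaintext illustration of a count query over SNP records: how many
        # records carry the queried genotype at every queried position? The
        # encrypted-index machinery of the paper is deliberately omitted.
        records = [
            {"rs123": "AA", "rs456": "AG", "rs789": "GG", "diabetic": True},
            {"rs123": "AG", "rs456": "AG", "rs789": "GG", "diabetic": False},
            {"rs123": "AA", "rs456": "AA", "rs789": "GG", "diabetic": True},
        ]

        def count_query(records, conditions):
            """conditions: dict of attribute -> required value."""
            return sum(all(rec.get(k) == v for k, v in conditions.items()) for rec in records)

        print(count_query(records, {"rs123": "AA", "rs789": "GG"}))      # genotype-only query -> 2
        print(count_query(records, {"rs123": "AA", "diabetic": True}))   # genotype + phenotype -> 2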

  17. Working Memory in Children: A Time-Constrained Functioning Similar to Adults

    ERIC Educational Resources Information Center

    Portrat, Sophie; Camos, Valerie; Barrouillet, Pierre

    2009-01-01

    Within the time-based resource-sharing (TBRS) model, we tested a new conception of the relationships between processing and storage in which the core mechanisms of working memory (WM) are time constrained. However, our previous studies were restricted to adults. The current study aimed at demonstrating that these mechanisms are present and…

  18. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  19. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  20. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  1. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... are shared among two or more batch process units that are subject to this subpart may be monitored at... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  2. 40 CFR 60.482-1 - Standards: General.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...

  3. Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"

    ERIC Educational Resources Information Center

    Romiszowski, Alexander J.

    2012-01-01

    "Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…

  4. Migrating Educational Data and Services to Cloud Computing: Exploring Benefits and Challenges

    ERIC Educational Resources Information Center

    Lahiri, Minakshi; Moseley, James L.

    2013-01-01

    "Cloud computing" is currently the "buzzword" in the Information Technology field. Cloud computing facilitates convenient access to information and software resources as well as easy storage and sharing of files and data, without the end users being aware of the details of the computing technology behind the process. This…

  5. Cryptography for Big Data Security

    DTIC Science & Technology

    2015-07-13

    Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin; Nabil …; contact: arkady@ll.mit.edu. Chapter 1, "Cryptography for Big Data Security"; Section 1.1, Introduction.

  6. Precategorical Acoustic Storage and the Perception of Speech

    ERIC Educational Resources Information Center

    Frankish, Clive

    2008-01-01

    Theoretical accounts of both speech perception and of short term memory must consider the extent to which perceptual representations of speech sounds might survive in relatively unprocessed form. This paper describes a novel version of the serial recall task that can be used to explore this area of shared interest. In immediate recall of digit…

  7. Gigwa-Genotype investigator for genome-wide analyses.

    PubMed

    Sempéré, Guilhem; Philippe, Florian; Dereeper, Alexis; Ruiz, Manuel; Sarah, Gautier; Larmande, Pierre

    2016-06-06

    Exploring the structure of genomes and analyzing their evolution is essential to understanding the ecological adaptation of organisms. However, with the large amounts of data being produced by next-generation sequencing, computational challenges arise in terms of storage, search, sharing, analysis and visualization. This is particularly true with regards to studies of genomic variation, which are currently lacking scalable and user-friendly data exploration solutions. Here we present Gigwa, a web-based tool that provides an easy and intuitive way to explore large amounts of genotyping data by filtering it not only on the basis of variant features, including functional annotations, but also on genotype patterns. The data storage relies on MongoDB, which offers good scalability properties. Gigwa can handle multiple databases and may be deployed in either single- or multi-user mode. In addition, it provides a wide range of popular export formats. The Gigwa application is suitable for managing large amounts of genomic variation data. Its user-friendly web interface makes such processing widely accessible. It can either be simply deployed on a workstation or be used to provide a shared data portal for a given community of researchers.
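
    Gigwa's storage relies on MongoDB; the sketch below shows the kind of variant-feature filter such a backend supports, using pymongo with an invented collection schema. The collection and field names are placeholders, not Gigwa's actual data model.

        # Illustrative MongoDB query in the spirit of Gigwa's variant filtering.
        # Collection and field names are invented for the example; Gigwa's real
        # schema differs.
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        variants = client["genotyping_demo"]["variants"]

        # Keep SNPs on chromosome 1 with a missense annotation and minor allele
        # frequency below 5%.
        query = {
            "type": "SNP",
            "chrom": "1",
            "annotations.effect": "missense_variant",
            "maf": {"$lt": 0.05},
        }
        print("matching variants:", variants.count_documents(query))
        for v in variants.find(query).limit(5):
            print(v["chrom"], v["pos"], v["ref"], ">", v["alt"])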

  8. Artificial Neural Network with Hardware Training and Hardware Refresh

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor)

    2003-01-01

    A neural network circuit is provided having a plurality of circuits capable of charge storage. Also provided is a plurality of circuits each coupled to at least one of the plurality of charge storage circuits and constructed to generate an output in accordance with a neuron transfer function. Each of a plurality of circuits is coupled to one of the plurality of neuron transfer function circuits and constructed to generate a derivative of the output. A weight update circuit updates the charge storage circuits based upon output from the plurality of transfer function circuits and output from the plurality of derivative circuits. In preferred embodiments, separate training and validation networks share the same set of charge storage circuits and may operate concurrently. The validation network has separate transfer function circuits, each coupled to the charge storage circuits so as to replicate the training network's coupling of the plurality of charge storage circuits to the plurality of transfer function circuits. The plurality of transfer function circuits may each be constructed with a transconductance amplifier providing differential currents that are combined to provide an output in accordance with a transfer function. The derivative circuits may have a circuit constructed to generate biased differential currents that are combined so as to provide the derivative of the transfer function.
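
    A software analogue of the described training behaviour is the familiar delta rule, in which the weight update uses both the neuron transfer function output and its derivative, the two signals the patent's circuits produce in hardware. A minimal numpy sketch of that software analogue (not the patented analog circuitry):

        # Software analogue of the described training loop: a single sigmoid
        # neuron updated with the delta rule, which needs both the transfer
        # function output and its derivative.
        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.1, size=3)                        # bias + two inputs
        x = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
        t = np.array([0.0, 0.0, 0.0, 1.0])                       # learn logical AND

        for _ in range(10000):
            y = sigmoid(x @ w)                                   # transfer-function output
            dy = y * (1.0 - y)                                   # its derivative
            w += 0.5 * x.T @ ((t - y) * dy)                      # "weight update circuit"
        print(np.round(sigmoid(x @ w), 2))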

  9. Ethical sharing of health data in online platforms - which values should be considered?

    PubMed

    Riso, Brígida; Tupasela, Aaro; Vears, Danya F; Felzmann, Heike; Cockbain, Julian; Loi, Michele; Kongsholm, Nana C H; Zullo, Silvia; Rakic, Vojin

    2017-08-21

    Intensified and extensive data production and data storage are characteristics of contemporary western societies. Health data sharing is increasing with the growth of Information and Communication Technology (ICT) platforms devoted to the collection of personal health and genomic data. However, the sensitive and personal nature of health data poses ethical challenges when data is disclosed and shared even if for scientific research purposes. With this in mind, the Science and Values Working Group of the COST Action CHIP ME 'Citizen's Health through public-private Initiatives: Public health, Market and Ethical perspectives' (IS 1303) identified six core values they considered to be essential for the ethical sharing of health data using ICT platforms. We believe that using this ethical framework will promote respectful scientific practices in order to maintain individuals' trust in research. We use these values to analyse five ICT platforms and explore how emerging data sharing platforms are reconfiguring the data sharing experience from a range of perspectives. We discuss which types of values, rights and responsibilities they entail and enshrine within their philosophy or outlook on what it means to share personal health information. Through this discussion we address issues of the design and the development process of personal health data and patient-oriented infrastructures, as well as new forms of technologically-mediated empowerment.

  10. XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital.

    PubMed

    Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Choi, Young Hwan; Cho, Yong Kyun

    2013-12-01

    The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE.

  11. XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital

    PubMed Central

    Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Cho, Yong Kyun

    2013-01-01

    Objectives The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Methods Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. Results The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Conclusions Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE. PMID:24523994

  12. Computer predictions of ground storage effects on performance of Galileo and ISPM generators

    NASA Technical Reports Server (NTRS)

    Chmielewski, A.

    1983-01-01

    Radioisotope Thermoelectric Generators (RTG) that will supply electrical power to the Galileo and International Solar Polar Mission (ISPM) spacecraft are exposed to several degradation mechanisms during the prolonged ground storage before launch. To assess the effect of storage on the RTG flight performance, a computer code has been developed which simulates all known degradation mechanisms that occur in an RTG during storage and flight. The modeling of these mechanisms and their impact on the RTG performance are discussed.

  13. Comparative analysis for various redox flow batteries chemistries using a cost performance model

    NASA Astrophysics Data System (ADS)

    Crawford, Alasdair; Viswanathan, Vilayanur; Stephenson, David; Wang, Wei; Thomsen, Edwin; Reed, David; Li, Bin; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2015-10-01

    The total energy storage system cost is determined by means of a robust performance-based cost model for multiple flow battery chemistries. System aspects such as shunt current losses, pumping losses and various flow patterns through electrodes are accounted for. The system-cost-minimizing objective function determines the stack design by optimizing the state of charge operating range, along with current density and current-normalized flow. The model cost estimates are validated using 2-kW stack performance data for the same size electrodes and operating conditions. Using our validated tool, it has been demonstrated that an optimized all-vanadium system has an estimated system cost of <350 kWh⁻¹ for a 4-h application. With an anticipated decrease in component costs facilitated by economies of scale from larger production volumes, coupled with performance improvements enabled by technology development, the system cost is expected to decrease to 160 kWh⁻¹ for a 4-h application, and to 100 kWh⁻¹ for a 10-h application. This tool has been shared with the redox flow battery community to enable cost estimation using their stack data and to guide future directions.
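
    The structure of a cost-minimizing objective of this kind can be sketched as follows. The cost function here is an invented toy surrogate (stack cost falling with current density but penalized by efficiency loss, electrolyte cost growing as the usable SOC window narrows), not the validated model described above.

        # Toy stand-in for a performance-based cost model: pick the current
        # density and usable state-of-charge window that minimize cost per kWh.
        # Coefficients and functional forms are invented for illustration only.
        from scipy.optimize import minimize

        def system_cost_per_kwh(params):
            j, delta_soc = params                    # A/cm^2, usable SOC fraction
            efficiency = max(1e-3, 1.0 - 0.8 * j)    # losses grow with current density
            stack_cost = 120.0 / (j * efficiency)    # per-kWh share: less area at high j ...
            electrolyte_cost = 90.0 / delta_soc      # ... more electrolyte for a narrow window
            return stack_cost + electrolyte_cost

        result = minimize(
            system_cost_per_kwh,
            x0=[0.1, 0.6],
            bounds=[(0.02, 0.5), (0.2, 0.9)],
            method="L-BFGS-B",
        )
        print("optimal current density %.3f A/cm^2, SOC window %.2f, cost %.0f per kWh"
              % (result.x[0], result.x[1], result.fun))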

  14. HydroShare: A Platform for Collaborative Data and Model Sharing in Hydrology

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.

    2017-12-01

    HydroShare is an online collaboration system for the sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources", which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: share your data and models with colleagues; manage who has access to the content that you share; share, access, visualize and manipulate a broad set of hydrologic data types and models; use the web services application programming interface (API) for automated and client access; publish data and models and obtain a citable digital object identifier (DOI); aggregate your resources into collections; discover and access data and models published by others; and use web apps to visualize, analyze and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare, highlighting its use as a virtual environment supporting education and research. HydroShare has components that support: (1) resource storage, (2) resource exploration, and (3) web apps for actions on resources. The HydroShare data discovery, sharing and publishing functions, as well as the HydroShare web apps, provide the capability to analyze data and execute models completely in the cloud (on servers remote from the user), overcoming desktop platform limitations. The HydroShare GIS app provides a basic capability to visualize spatial data. The HydroShare JupyterHub Notebook app provides flexible and documentable execution of Python code snippets for analysis and modeling in a way that results can be shared among HydroShare users and groups to support research collaboration and education. We will discuss how these developments can be used to support different types of educational efforts in hydrology, where being completely web based is of value in an educational setting, as students can all have access to the same functionality regardless of their computer.
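
    As a rough illustration of the web-services access mentioned above, the sketch below lists a few public resources via HydroShare's REST interface. The /hsapi/ path and the response field names are assumptions based on the documented interface and may differ from the current API; treat this as an outline only.

        # Hedged sketch: list public HydroShare resources over the REST API.
        # Endpoint path and JSON field names are assumptions; consult the
        # current HydroShare API documentation before relying on them.
        import requests

        HS_API = "https://www.hydroshare.org/hsapi"    # assumed base path

        resp = requests.get(f"{HS_API}/resource/", params={"count": 5}, timeout=30)
        resp.raise_for_status()
        for res in resp.json().get("results", []):
            print(res.get("resource_id"), "-", res.get("resource_title"))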

  15. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J [Rochester, MN; Dozsa, Gabor [Ardsley, NY; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.
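
    A rough software analogue of the claimed approach, with the reduction split into work units that any available worker picks up from a shared-memory buffer, can be written with Python's multiprocessing facilities. This is an illustrative sketch only, not the patented method.

        # Illustrative shared-memory "allreduce"-style sum: the input vector lives
        # in a shared memory segment, the reduction is split into work units, and
        # whichever worker process is free picks up the next unit.
        import numpy as np
        from multiprocessing import Pool, shared_memory

        N, UNITS = 1_000_000, 16

        def reduce_unit(args):
            shm_name, start, stop = args
            shm = shared_memory.SharedMemory(name=shm_name)   # attach, don't copy
            try:
                data = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
                return float(data[start:stop].sum())
            finally:
                shm.close()

        if __name__ == "__main__":
            shm = shared_memory.SharedMemory(create=True, size=N * 8)
            data = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
            data[:] = 1.0
            bounds = np.linspace(0, N, UNITS + 1, dtype=int)
            work_units = [(shm.name, int(a), int(b)) for a, b in zip(bounds[:-1], bounds[1:])]
            with Pool(processes=4) as pool:
                partials = pool.map(reduce_unit, work_units)  # free cores grab the next unit
            print("allreduce result:", sum(partials))         # 1000000.0
            shm.close()
            shm.unlink()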

  16. Performing an allreduce operation using shared memory

    DOEpatents

    Archer, Charles J; Dozsa, Gabor; Ratterman, Joseph D; Smith, Brian E

    2014-06-10

    Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit.

  17. Performance analysis of phase-change material storage unit for both heating and cooling of buildings

    NASA Astrophysics Data System (ADS)

    Waqas, Adeel; Ali, Majid; Ud Din, Zia

    2017-04-01

    Utilisation of solar energy and of cool night-time ambient temperatures are passive ways of heating and cooling buildings. The intermittent and time-dependent nature of these sources makes thermal energy storage vital for efficient and continuous operation of these heating and cooling techniques. Latent heat thermal energy storage using phase-change materials (PCMs) is preferred over other storage techniques due to its high energy storage density and isothermal storage process. The current study aimed to evaluate the performance of an air-based PCM storage unit utilising solar energy and cool ambient night temperatures for comfort heating and cooling of a building in dry-cold and dry-hot climates. The performance of the studied PCM storage unit was maximised when the melting point of the PCM was ∼29°C in summer and 21°C during the winter season. The appropriate melting point was ∼27.5°C for all-year-round performance. At melting points lower than 27.5°C, the decline in the cooling capacity of the storage unit was more pronounced than the improvement in the heating capacity. It was also concluded that the melting point of the PCM that provided maximum cooling during the summer season could be used for winter heating as well, but not vice versa.

  18. Data Grid Management Systems

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne

    2004-01-01

    The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.

  19. Functional customization: Value creation by individual storage elements in the car interior.

    PubMed

    Wagner, A-S; Kilincsoy, Ü; Reitmeir, M; Vink, P

    2016-07-27

    Mobility demands change with the differing life stages of car owners. Car sharing and retail markets seldom offer users a possibility for customization, in contrast to the freedom of choice enjoyed by the initial owner of a car. The value creation of functional customization is investigated here. Prior to a test with a concept design, different use-case scenarios of car drivers were identified regarding the preferred storage location of their personal belongings in different situations. A study with 70 subjects was conducted in order to evaluate the value added by functional customization. Storage habits of users were investigated in general and in relation to a concept design offering the possibility of flexible storage. Smartphones, supplies, beverages and wallets were the most relevant belongings in all driving situations (commuting, leisure, vacation and special occasions), complemented by sports equipment. Smartphones and other valuables are stored within reach and sight of the user. The emotional responses, recorded before and after the test and subdivided into attraction, hope and joy, indicated positive feedback. The ease of use and the design proved to be crucial product characteristics of individually adaptable storage solutions. Positive emotions are contributing factors in a user's purchasing decision.

  20. Variation in moisture duration as a driver of coexistence by the storage effect in desert annual plants.

    PubMed

    Holt, Galen; Chesson, Peter

    2014-03-01

    Temporal environmental variation is a leading hypothesis for the coexistence of desert annual plants. Environmental variation is hypothesized to cause species-specific patterns of variation in germination, which then generates the storage effect coexistence mechanism. However, it has never been shown how sufficient species differences in germination patterns for multispecies coexistence can arise from a shared fluctuating environment. Here we show that nonlinear germination responses to a single fluctuating physical environmental factor can lead to sufficient differences between species in germination pattern for the storage effect to yield coexistence of multiple species. We derive these nonlinear germination responses from experimental data on the effects of varying soil moisture duration. Although these nonlinearities lead to strong species asymmetries in germination patterns, the relative nonlinearity coexistence mechanism is minor compared with the storage effect. However, these asymmetries mean that the storage effect can be negative for some species, which then only persist in the face of interspecific competition through average fitness advantages. This work shows how a low dimensional physical environment can nevertheless stabilize multispecies coexistence when the species have different nonlinear responses to common conditions, as supported by our experimental data. Copyright © 2013 Elsevier Inc. All rights reserved.
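
    A standard annual-plant seed-bank formulation often used in this literature (shown here only to fix notation; not reproduced from the paper) makes the role of fluctuating, species-specific germination explicit:

        $$ N_{j,t+1} = N_{j,t}\left[(1-g_{j,t})\,s_j + \frac{g_{j,t}\,\lambda_j}{1 + \alpha \sum_k g_{k,t} N_{k,t}}\right] $$

    where $N_{j,t}$ is the seed bank of species $j$, $g_{j,t}$ its environment-dependent germination fraction, $s_j$ dormant-seed survival and $\lambda_j$ the per-germinant yield. Buffered population growth, the key ingredient of the storage effect, arises because the ungerminated fraction $(1-g_{j,t})$ escapes competition in years when germination and competition are both high.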

  1. Handling Metadata in a Neurophysiology Laboratory

    PubMed Central

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G.; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework. PMID:27486397
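
    As a rough illustration of the odML approach referenced above (hierarchical sections of key-value properties describing an experiment), a minimal sketch using the Python odml package is shown below. The section and property names are invented, and constructor signatures may vary across odml versions.

        # Minimal, hedged odML example: one section with two properties.
        # Names are invented; check the odml package documentation for the
        # exact constructor arguments in your installed version.
        import odml

        doc = odml.Document(author="Example Lab")
        sec = odml.Section(name="Recording", type="electrophysiology", parent=doc)
        odml.Property(name="SamplingRate", values=30000, unit="Hz", parent=sec)
        odml.Property(name="Subject", values="monkey_A", parent=sec)

        for section in doc.sections:
            print(section.name)
            for prop in section.properties:
                print(" ", prop.name, "=", prop.values, prop.unit or "")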

  2. Handling Metadata in a Neurophysiology Laboratory.

    PubMed

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework.

  3. Solving data-at-rest for the storage and retrieval of files in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Knobler, Ron; Scheffel, Peter; Williams, Jonathan; Gaj, Kris; Kaps, Jens-Peter

    2013-05-01

    Based on current trends for both military and commercial applications, the use of mobile devices (e.g. smartphones and tablets) is greatly increasing. Several military applications consist of secure peer to peer file sharing without a centralized authority. For these military applications, if one or more of these mobile devices are lost or compromised, sensitive files can be compromised by adversaries, since COTS devices and operating systems are used. Complete system files cannot be stored on a device, since after compromising a device, an adversary can attack the data at rest, and eventually obtain the original file. Also after a device is compromised, the existing peer to peer system devices must still be able to access all system files. McQ has teamed with the Cryptographic Engineering Research Group at George Mason University to develop a custom distributed file sharing system to provide a complete solution to the data at rest problem for resource constrained embedded systems and mobile devices. This innovative approach scales very well to a large number of network devices, without a single point of failure. We have implemented the approach on representative mobile devices as well as developed an extensive system simulator to benchmark expected system performance based on detailed modeling of the network/radio characteristics, CONOPS, and secure distributed file system functionality. The simulator is highly customizable for the purpose of determining expected system performance for other network topologies and CONOPS.
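
    One simple way to obtain the "no single device holds a recoverable file" property sketched above is an n-of-n XOR split, in which every share is needed for reconstruction. The toy sketch below illustrates that principle only; it is not the McQ/GMU scheme, which additionally needs resilience to lost devices (e.g., threshold secret sharing or erasure coding).

        # Toy (n,n) XOR secret split: each device stores one random-looking share,
        # and the original bytes are recoverable only with all shares combined.
        import os
        from functools import reduce

        def split(data: bytes, n: int):
            shares = [os.urandom(len(data)) for _ in range(n - 1)]
            last = bytes(b ^ reduce(lambda x, y: x ^ y, others)
                         for b, *others in zip(data, *shares))
            return shares + [last]

        def combine(shares):
            return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*shares))

        secret = b"mission waypoint list"
        shares = split(secret, n=4)
        assert combine(shares) == secret
        assert combine(shares[:-1]) != secret      # any missing share makes it unrecoverable
        print("recovered:", combine(shares))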

  4. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented for a state-of-the-art multicore CPU-based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demand for compute time, memory, storage, and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculations and migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and the mechanism for feeding them to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for a multicore CPU-based parallel system had been developed. Recently, we have worked on improving the parallel performance of this application by re-designing the parallelization approach. The new algorithm is capable of efficiently migrating both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements for storage, I/O, and inter-node communication. The resultant application is tested using 3D Overthrust data on PARAM Yuva II, a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. Parallel performance of the algorithm is studied using different numerical experiments, and the scalability results show a striking improvement over its previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data and a 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
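
    The quoted efficiency figures follow the usual definition of parallel efficiency (speedup divided by node count); a one-line check against the numbers reported in the abstract:

```python
def efficiency(speedup, nodes):
    """Parallel efficiency = speedup / number of nodes."""
    return speedup / nodes

# Figures reported in the abstract (64 nodes):
print(f"prestack:  {efficiency(49.05, 64):.2%}")   # ~76.64%
print(f"poststack: {efficiency(32.00, 64):.2%}")   # 50.00%
```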

  5. Can ionophobic nanopores enhance the energy storage capacity of electric-double-layer capacitors containing nonaqueous electrolytes?

    NASA Astrophysics Data System (ADS)

    Lian, Cheng; Liu, Honglai; Henderson, Douglas; Wu, Jianzhong

    2016-10-01

    The ionophobicity effect of nanoporous electrodes on the capacitance and the energy storage capacity of nonaqueous-electrolyte supercapacitors is studied by means of the classical density functional theory (DFT). It has been hypothesized that ionophobic nanopores may create obstacles in charging, but they store energy much more efficiently than ionophilic pores. In this study, we find that, for both ionic liquids and organic electrolytes, an ionophobic pore exhibits a charging behavior different from that of an ionophilic pore, and that the capacitance-voltage curve changes from a bell shape to a two-hump camel shape when the pore ionophobicity increases. For electric-double-layer capacitors containing organic electrolytes, an increase in the ionophobicity of the nanopores leads to a higher capacity for energy storage. Without taking into account the effects of background screening, the DFT predicts that an ionophobic pore containing an ionic liquid does not enhance the supercapacitor performance within the practical voltage ranges. However, by using an effective dielectric constant to account for ion polarizability, the DFT predicts that, like an organic electrolyte, an ionophobic pore with an ionic liquid is also able to increase the energy stored when the electrode voltage is beyond a certain value. We find that the critical voltage for an enhanced capacitance in an ionic liquid is larger than that in an organic electrolyte. Our theoretical predictions provide further understanding of how chemical modification of porous electrodes affects the performance of supercapacitors. The authors are saddened by the passing of George Stell but are pleased to contribute this article in his memory. Some years ago, DH gave a talk at a Gordon Conference that contained an approximation that George had demonstrated previously to be in error in one of his publications. Rather than making this point loudly in the discussion, George politely, quietly, and privately pointed this out later. In 2002, DH shared a room with George at a conference in China. This is remembered fondly.
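
    For orientation, the energy stored while charging an electric-double-layer capacitor to a voltage V can be obtained by integrating the differential capacitance curve, E(V) = ∫₀ᵛ u·C_d(u) du. The sketch below uses an entirely hypothetical bell-shaped C_d(V); real curves would come from the DFT calculations described in the abstract.

```python
import numpy as np

V = np.linspace(0.0, 3.0, 301)          # electrode voltage (V)
C = 120 - 15 * (V - 1.5) ** 2           # toy bell-shaped differential capacitance, F/g

# Trapezoid-rule integration of u * C_d(u) from 0 to V_max.
f = C * V
energy_J_per_g = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(V))
print(f"stored energy ~ {energy_J_per_g:.0f} J/g (toy numbers)")
```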

  6. Application of electrochemical energy storage in solar thermal electric generation systems

    NASA Technical Reports Server (NTRS)

    Das, R.; Krauthamer, S.; Frank, H.

    1982-01-01

    This paper assesses the status, cost, and performance of existing electrochemical energy storage systems, and projects the cost, performance, and availability of advanced storage systems for application in terrestrial solar thermal electric generation. A 10 MWe solar plant with five hours of storage is considered and the cost of delivered energy is computed for sixteen different storage systems. The results indicate that the five most attractive electrochemical storage systems use the following battery types: zinc-bromine (Exxon), iron-chromium redox (NASA/Lewis Research Center, LeRC), sodium-sulfur (Ford), sodium-sulfur (Dow), and zinc-chlorine (Energy Development Associates, EDA).

  7. Simulation of mass storage systems operating in a large data processing facility

    NASA Technical Reports Server (NTRS)

    Holmes, R.

    1972-01-01

    A mass storage simulation program was written to aid system designers in the design of a data processing facility. It acts as a tool for measuring the overall effect on the facility of on-line mass storage systems, and it provides the means of measuring and comparing the performance of competing mass storage systems. The performance of the simulation program is demonstrated.

  8. Metamodeling-based approach for risk assessment and cost estimation: Application to geological carbon sequestration planning

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Jeong, Hoonyoung; González-Nicolás, Ana; Templeton, Thomas C.

    2018-04-01

    Carbon capture and storage (CCS) is being evaluated globally as a geoengineering measure for significantly reducing greenhouse gas emissions. However, the long-term liability associated with potential leakage from these geologic repositories is perceived as a main barrier to entry for site operators. Risk quantification and impact assessment help CCS operators to screen candidate sites for suitability for CO2 storage. Leakage risks are highly site dependent, and a quantitative understanding and categorization of these risks can only be made possible through broad participation and deliberation of stakeholders, with the use of site-specific, process-based models as the decision basis. Online decision making, however, requires that scenarios be run in real time. In this work, a Python-based Leakage Assessment and Cost Estimation (PyLACE) web application was developed for quantifying financial risks associated with potential leakage from geologic carbon sequestration sites. PyLACE aims to assist a collaborative, analytic-deliberative decision-making process by automating metamodel creation, knowledge sharing, and online collaboration. In PyLACE, metamodeling, which is the process of developing faster-to-run surrogates of process-level models, is enabled using a special stochastic response surface method and Gaussian process regression. Both methods allow consideration of model parameter uncertainties and the use of that information to generate confidence intervals on model outputs. Training of the metamodels is delegated to a high-performance computing cluster and is orchestrated by a set of asynchronous job scheduling tools for job submission and result retrieval. As a case study, the workflow and main features of PyLACE are demonstrated using a multilayer carbon storage model.
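
    PyLACE's internals are not given here; the sketch below only illustrates the general surrogate-modeling idea with scikit-learn's Gaussian process regression, where the predictive standard deviation supplies confidence intervals. The training data and parameter names are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: a few expensive process-model runs mapping an
# uncertain parameter (e.g., a normalized leakage-pathway permeability) to a
# leakage-cost estimate in arbitrary units.
X = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])
y = np.array([2.0, 3.5, 4.1, 6.8, 9.0])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

X_new = np.linspace(0, 1, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)   # std -> confidence intervals
print(np.c_[X_new, mean, 1.96 * std])
```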

  9. Application of acid whey and set milk to marinate beef with reference to quality parameters and product safety.

    PubMed

    Wójciak, Karolina M; Krajmas, Paweł; Solska, Elżbieta; Dolatowski, Zbigniew J

    2015-01-01

    The aim of the study was to evaluate the potential of acid whey and set milk as a marinade in the traditional production of fermented eye round. The study involved assaying pH value, water activity (aw), oxidation-reduction potential, TBARS value, and colour parameters in the CIE system (L*, a*, b*), as well as the number of lactic acid bacteria and certain pathogenic bacteria after the ripening process and after 60 days of cold storage. Sensory analysis and analysis of the fatty acid profile were performed after completion of the ripening process. Analysis of pH value in the products revealed that application of acid whey to marinate beef resulted in increased acidity of the ripening eye round (5.14). The highest value of the colour parameter a* after the ripening process and during storage was observed in sample AW (12.76 and 10.07, respectively), while the lowest was observed in sample SM (10.06 and 7.88, respectively). The content of polyunsaturated fatty acids (PUFA) was higher in eye round marinated in acid whey by approx. 4% in comparison to the other samples. Application of acid whey to marinate beef resulted in an increased share of red colour in the general colour tone as well as increased oxidative stability of the product during storage. It also increased the content of polyunsaturated fatty acids (PUFA) in the product. All model products had a high content of lactic acid bacteria, and there were no pathogenic bacteria such as L. monocytogenes, Y. enterocolitica, S. aureus, or Clostridium sp.

  10. Impact of Nisin-Activated Packaging on Microbiota of Beef Burgers during Storage.

    PubMed

    Ferrocino, Ilario; Greppi, Anna; La Storia, Antonietta; Rantsiou, Kalliopi; Ercolini, Danilo; Cocolin, Luca

    2016-01-15

    Beef burgers were stored at 4°C in a vacuum in nisin-activated antimicrobial packaging. Microbial ecology analyses were performed on samples collected between days 0 and 21 of storage to discover the population diversity. Two batches were analyzed using RNA-based denaturing gradient gel electrophoresis (DGGE) and pyrosequencing. The active packaging retarded the growth of the total viable bacteria and lactic acid bacteria. Culture-independent analysis by pyrosequencing of RNA extracted directly from meat showed that Photobacterium phosphoreum, Lactococcus piscium, Lactobacillus sakei, and Leuconostoc carnosum were the major operational taxonomic units (OTUs) shared between control and treated samples. Beta diversity analysis of the 16S rRNA sequence data and RNA-DGGE showed a clear separation between the two batches based on the microbiota. Control samples from batch B showed a significantly higher abundance of some taxa sensitive to nisin, such as Kocuria rhizophila, Staphylococcus xylosus, Leuconostoc carnosum, and Carnobacterium divergens, compared to control samples from batch A. However, only in batch B was it possible to find a significant difference between control and treated samples during storage due to the active packaging. Predicted metagenomes confirmed differences between the two batches and indicated that the use of nisin-based antimicrobial packaging can lead to a reduction in the abundance of specific metabolic pathways related to spoilage. The present study aimed to assess the viable bacterial communities in beef burgers stored in nisin-based antimicrobial packaging, and it highlights the efficacy of this strategy to prolong beef burger shelf life. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  11. Efficiently sphere-decodable physical layer transmission schemes for wireless storage networks

    NASA Astrophysics Data System (ADS)

    Lu, Hsiao-Feng Francis; Barreal, Amaro; Karpuk, David; Hollanti, Camilla

    2016-12-01

    Three transmission schemes over a new type of multiple-access channel (MAC) model with inter-source communication links are proposed and investigated in this paper. This new channel model is well motivated by, e.g., wireless distributed storage networks, where communication to repair a lost node takes place from helper nodes to a repairing node over a wireless channel. Since in many wireless networks nodes can come and go in an arbitrary manner, there must be an inherent capability of inter-node communication between every pair of nodes. Assuming that communication is possible between every pair of helper nodes, the newly proposed schemes are based on various smart time-sharing and relaying strategies. In other words, certain helper nodes will be regarded as relays, thereby converting the conventional uncooperative multiple-access channel to a multiple-access relay channel (MARC). The diversity-multiplexing gain tradeoff (DMT) of the system together with efficient sphere-decodability and low structural complexity in terms of the number of antennas required at each end is used as the main design objectives. While the optimal DMT for the new channel model is fully open, it is shown that the proposed schemes outperform the DMT of the simple time-sharing protocol and, in some cases, even the optimal uncooperative MAC DMT. While using a wireless distributed storage network as a motivating example throughout the paper, the MAC transmission techniques proposed here are completely general and as such applicable to any MAC communication with inter-source communication links.

  12. Hardware support for collecting performance counters directly to memory

    DOEpatents

    Gara, Alan; Salapura, Valentina; Wisniewski, Robert W.

    2012-09-25

    Hardware support for collecting performance counters directly to memory, in one aspect, may include a plurality of performance counters operable to collect one or more counts of one or more selected activities. A first storage element may be operable to store an address of a memory location. A second storage element may be operable to store a value indicating whether the hardware should begin copying. A state machine may be operable to detect the value in the second storage element and trigger hardware copying of data in selected one or more of the plurality of performance counters to the memory location whose address is stored in the first storage element.
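
    A toy software model of the mechanism described in the claim, with hypothetical register and memory sizes: an address register and a start-flag register are written by software, and a state machine copies the counters to memory when the flag is set.

```python
class CounterCopyEngine:
    """Toy model of the described hardware: a state machine watches a start-flag
    register and, when it is set, copies the selected performance counters to the
    memory location whose address is held in the address register."""

    def __init__(self, memory_size=64):
        self.counters = [0, 0, 0, 0]   # performance counters (selected activities)
        self.addr_reg = 0              # first storage element: target memory address
        self.start_reg = 0             # second storage element: copy trigger
        self.memory = [0] * memory_size

    def tick(self):
        if self.start_reg:             # state machine detects the trigger value
            for i, value in enumerate(self.counters):
                self.memory[self.addr_reg + i] = value
            self.start_reg = 0         # copy complete, clear the trigger

engine = CounterCopyEngine()
engine.counters = [42, 7, 13, 99]      # counts of selected activities
engine.addr_reg, engine.start_reg = 16, 1
engine.tick()
print(engine.memory[16:20])            # [42, 7, 13, 99]
```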

  13. Bioinformatics and Microarray Data Analysis on the Cloud.

    PubMed

    Calabrese, Barbara; Cannataro, Mario

    2016-01-01

    High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that requires large data storage and computing power. Cloud computing offers massively scalable computing and storage, data sharing, and on-demand anytime and anywhere access to resources and applications, and thus it may represent the key technology for facing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services, both in academia and in industry. Despite this, cloud computing presents several issues regarding the security and privacy of data, which are particularly important when analyzing patient data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis solutions, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patient data.

  14. Network Coding Opportunities for Wireless Grids Formed by Mobile Devices

    NASA Astrophysics Data System (ADS)

    Nielsen, Karsten Fyhn; Madsen, Tatiana K.; Fitzek, Frank H. P.

    Wireless grids have potential for sharing communication, computational, and storage resources, making these networks more powerful, more robust, and less cost-intensive. However, to enjoy the benefits of cooperative resource sharing, a number of issues should be addressed and the cost of the wireless link should be taken into account. We focus on the question of how nodes can efficiently communicate and distribute data in a wireless grid. We show the potential of a network coding approach in which nodes have the possibility to combine packets, thus increasing the amount of information per transmission. Our implementation demonstrates the feasibility of network coding for wireless grids formed by mobile devices.
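
    The classic packet-combining example (illustrative only, not the authors' implementation): a node holding two packets broadcasts their XOR, and each receiver recovers the packet it is missing from the one it already has, halving the number of transmissions.

```python
def xor_packets(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Two nodes each hold one packet and want the other's; a node holding both
# broadcasts a single coded packet instead of forwarding each one separately.
pkt_a, pkt_b = b"packet-A", b"packet-B"
coded = xor_packets(pkt_a, pkt_b)          # one transmission serves both receivers

# Each receiver recovers the missing packet from what it already holds.
assert xor_packets(coded, pkt_a) == pkt_b
assert xor_packets(coded, pkt_b) == pkt_a
```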

  15. Secure public cloud platform for medical images sharing.

    PubMed

    Pan, Wei; Coatrieux, Gouenou; Bouslimi, Dalel; Prigent, Nicolas

    2015-01-01

    Cloud computing promises medical imaging services offering large storage and computing capabilities at limited cost. In this data outsourcing framework, one of the greatest issues to deal with is data security. To address it, we propose to secure a public cloud platform devoted to medical image sharing by defining and deploying a security policy that controls various security mechanisms. This policy is based on a risk assessment we conducted to identify security objectives, with a special interest in digital content protection. These objectives are addressed by means of different security mechanisms such as access and usage control policies, partial encryption, and watermarking.
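
    A minimal sketch of partial encryption only (the paper's actual scheme, watermarking, and key management are not described here): a hypothetical "sensitive" region of an image byte stream is encrypted with the cryptography package's Fernet recipe while the remainder is stored in the clear; the split point and file contents are placeholders.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # would be managed under the platform's access policy
f = Fernet(key)

image = bytes(range(256)) * 64                 # stand-in for a medical image byte stream
sensitive, rest = image[:4096], image[4096:]   # hypothetical sensitive region boundary

# Partial encryption: only the sensitive region is encrypted before cloud storage.
stored = {"enc": f.encrypt(sensitive), "plain": rest}

recovered = f.decrypt(stored["enc"]) + stored["plain"]
assert recovered == image
```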

  16. Sociocultural Theory, the L2 Writing Process, and Google Drive: Strange Bedfellows?

    ERIC Educational Resources Information Center

    Slavkov, Nikolay

    2015-01-01

    Familiar and widely used elements of second language pedagogy can be leveraged in interesting new ways through the use of digital technology. The focus is on a set of affordances offered by Google Drive, a popular online storage and document-sharing technology. On the assumption that dynamic collaboration with peers, teacher feedback, and…

  17. The Impact of Storage on Processing: How Is Information Maintained in Working Memory?

    ERIC Educational Resources Information Center

    Vergauwe, Evie; Camos, Valérie; Barrouillet, Pierre

    2014-01-01

    Working memory is typically defined as a system devoted to the simultaneous maintenance and processing of information. However, the interplay between these 2 functions is still a matter of debate in the literature, with views ranging from complete independence to complete dependence. The time-based resource-sharing model assumes that a central…

  18. Beyond Traditional Literacy Instruction: Toward an Account-Based Literacy Training Curriculum in Libraries

    ERIC Educational Resources Information Center

    Cirella, David

    2012-01-01

    A diverse group, account-based services include a wide variety of sites commonly used by patrons, including online shopping sites, social networks, photo- and video-sharing sites, banking and financial sites, government services, and cloud-based storage. Whether or not a piece of information is obtainable online must be considered when creating…

  19. 1988-2000 Long-Range Plan for Technology of the Texas State Board of Education.

    ERIC Educational Resources Information Center

    Texas State Board of Education, Austin.

    This plan plots the course for meeting educational needs in Texas through such technologies as computer-based systems, devices for storage and retrieval of massive amounts of information, telecommunications for audio, video, and information sharing, and other electronic media devised by the year 2000 that can help meet the instructional and…

  20. MARC and the Library Service Center: Automation at Bargain Rates.

    ERIC Educational Resources Information Center

    Pearson, Karl M.

    Despite recent research and development in the field of library automation, libraries have been unable to reap the benefits promised by technology due to the high cost of building and maintaining their own computer-based systems. Time-sharing and disc mass storage devices will bring automation costs, if spread over a number of users, within the…

  1. Social Influences on User Behavior in Group Information Repositories

    ERIC Educational Resources Information Center

    Rader, Emilee Jeanne

    2009-01-01

    Group information repositories are systems for organizing and sharing files kept in a central location that all group members can access. These systems are often assumed to be tools for storage and control of files and their metadata, not tools for communication. The purpose of this research is to better understand user behavior in group…

  2. Incorporating Functional Digital Literacy Skills as Part of the Curriculum for High School Students with Intellectual Disability

    ERIC Educational Resources Information Center

    Cihak, David F.; Wright, Rachel; Smith, Cate C.; McMahon, Don; Kraiss, Kelly

    2015-01-01

    The purpose of this study was to examine the effects of teaching functional digital literacy skills to three high school students with intellectual disability. Functional digital literacy skills included sending and receiving email messages, organizing social bookmarking to save, share, and access career websites, and accessing cloud storage to…

  3. SHOEBOX: A Personal File Handling System for Textual Data. Information System Language Studies, Number 23.

    ERIC Educational Resources Information Center

    Glantz, Richard S.

    Until recently, the emphasis in information storage and retrieval systems has been towards batch-processing of large files. In contrast, SHOEBOX is designed for the unformatted, personal file collection of the computer-naive individual. Operating through display terminals in a time-sharing, interactive environment on the IBM 360, the user can…

  4. Recommended Best Practices for the Characterization of Storage Properties of Hydrogen Storage Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-03-01

    This is a reference guide to common methodologies and protocols for measuring critical performance properties of advanced hydrogen storage materials. It helps users to communicate clearly the relevant performance properties of new materials as they are discovered and tested.

  5. A Method of Signal Scrambling to Secure Data Storage for Healthcare Applications.

    PubMed

    Bao, Shu-Di; Chen, Meng; Yang, Guang-Zhong

    2017-11-01

    A body sensor network that consists of wearable and/or implantable biosensors has been an important front-end for collecting personal health records. It is expected that the full integration of outside-hospital personal health information and hospital electronic health records will further promote preventative health services as well as global health. However, the integration and sharing of health information is bound to bring with it security and privacy issues. With extensive development of healthcare applications, security and privacy issues are becoming increasingly important. This paper addresses the potential security risks of healthcare data in Internet-based applications and proposes a method of signal scrambling as an add-on security mechanism in the application layer for a variety of healthcare information, where a piece of tiny data is used to scramble healthcare records. The former is kept locally and the latter, along with security protection, is sent for cloud storage. The tiny data can be derived from a random number generator or even a piece of healthcare data, which makes the method more flexible. The computational complexity and security performance in terms of theoretical and experimental analysis has been investigated to demonstrate the efficiency and effectiveness of the proposed method. The proposed method is applicable to all kinds of data that require extra security protection within complex networks.
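
    An illustrative sketch of the concept only (the paper's scrambling algorithm is not reproduced here): a small piece of "tiny data", here a random seed kept locally and never uploaded, drives a permutation that scrambles the record before it is sent to cloud storage.

```python
import numpy as np

signal = np.sin(np.linspace(0, 10, 500))        # stand-in for a biosensor record

tiny_data = 20240517                            # kept locally, never uploaded
perm = np.random.default_rng(tiny_data).permutation(signal.size)

scrambled = signal[perm]                        # this is what goes to cloud storage
descrambled = scrambled[np.argsort(perm)]       # reversed only with the local tiny data
assert np.allclose(descrambled, signal)
```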

  6. CO2 Storage related Groundwater Impacts and Protection

    NASA Astrophysics Data System (ADS)

    Fischer, Sebastian; Knopf, Stefan; May, Franz; Rebscher, Dorothee

    2016-03-01

    Injection of CO2 into the deep subsurface will affect physical and chemical conditions in the storage environment. Hence, geological CO2 storage can have potential impacts on groundwater resources. Shallow freshwater can only be affected if leakage pathways facilitate the ascent of CO2 or saline formation water. Leakage associated with CO2 storage cannot be excluded, but potential environmental impacts could be reduced by selecting suitable storage locations. In the framework of risk assessment, testing of models and scenarios against operational data has to be performed repeatedly in order to predict the long-term fate of CO2. Monitoring of a storage site should reveal any deviations from expected storage performance, so that corrective measures can be taken. Comprehensive R & D activities and experience from several storage projects will enhance the state of knowledge on geological CO2 storage, thus enabling safe storage operations at well-characterised and carefully selected storage sites while meeting the requirements of groundwater protection.

  7. Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, David R.; Crawford, Alasdair J.; Fuller, Jason

    This Protocol provides a set of “best practices” for characterizing energy storage systems (ESSs) and measuring and reporting their performance. It serves as a basis for assessing how an ESS will perform with respect to key performance attributes relevant to different applications. It is intended to provide a valid and accurate basis for the comparison of different ESSs. By achieving the stated purpose, the Protocol will enable more informed decision-making in the selection of ESSs for various stationary applications. The Protocol identifies general information and technical specifications relevant in describing an ESS and also defines a set of test, measurement, and evaluation criteria with which to express the performance of ESSs that are intended for energy-intensive and/or power-intensive stationary applications. An ESS includes a storage device, battery management system, and any power conversion systems installed with the storage device. The Protocol is agnostic with respect to the storage technology and the size and rating of the ESS. The Protocol does not apply to single-use storage devices and storage devices that are not coupled with power conversion systems, nor does it address safety, security, or operations and maintenance of ESSs, or provide any pass/fail criteria.
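
    One performance attribute commonly reported for stationary ESSs is round-trip efficiency, the energy delivered during discharge divided by the energy absorbed during charge; the measurements below are hypothetical.

```python
def round_trip_efficiency(energy_in_kwh, energy_out_kwh):
    """Energy delivered on discharge divided by energy absorbed on charge."""
    return energy_out_kwh / energy_in_kwh

# Hypothetical test-cycle measurements for an ESS.
print(f"RTE = {round_trip_efficiency(118.0, 100.3):.1%}")   # ~85.0%
```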

  8. The dynamics of shared leadership: building trust and enhancing performance.

    PubMed

    Drescher, Marcus A; Korsgaard, M Audrey; Welpe, Isabell M; Picot, Arnold; Wigand, Rolf T

    2014-09-01

    In this study, we examined how the dynamics of shared leadership are related to group performance. We propose that, over time, the expansion of shared leadership within groups is related to growth in group trust. In turn, growth in group trust is related to performance improvement. Longitudinal data from 142 groups engaged in a strategic simulation game over a 4-month period provide support for positive changes in trust mediating the relationship between positive changes in shared leadership and positive changes in performance. Our findings contribute to the literature on shared leadership and group dynamics by demonstrating how the growth in shared leadership contributes to the emergence of trust and a positive performance trend over time. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  9. The effect of coworker knowledge sharing on performance and its boundary conditions: an interactional perspective.

    PubMed

    Kim, Seckyoung Loretta; Yun, Seokhwa

    2015-03-01

    Considering the importance of coworkers and knowledge sharing in current business environment, this study intends to advance understanding by investigating the effect of coworker knowledge sharing on focal employees' task performance. Furthermore, by taking an interactional perspective, this study examines the boundary conditions of coworker knowledge sharing on task performance. Data from 149 samples indicate that there is a positive relationship between coworker knowledge sharing and task performance, and this relationship is strengthened when general self-efficacy or abusive supervision is low rather than high. Our findings suggest that the recipients' characteristics and leaders' behaviors could be important contingent factors that limit the effect of coworker knowledge sharing on task performance. Implications for theory and practice are discussed. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  10. The SBOL Stack: A Platform for Storing, Publishing, and Sharing Synthetic Biology Designs.

    PubMed

    Madsen, Curtis; McLaughlin, James Alastair; Mısırlı, Göksel; Pocock, Matthew; Flanagan, Keith; Hallinan, Jennifer; Wipat, Anil

    2016-06-17

    Recently, synthetic biologists have developed the Synthetic Biology Open Language (SBOL), a data exchange standard for descriptions of genetic parts, devices, modules, and systems. The goals of this standard are to allow scientists to exchange designs of biological parts and systems, to facilitate the storage of genetic designs in repositories, and to facilitate the description of genetic designs in publications. In order to achieve these goals, the development of an infrastructure to store, retrieve, and exchange SBOL data is necessary. To address this problem, we have developed the SBOL Stack, a Resource Description Framework (RDF) database specifically designed for the storage, integration, and publication of SBOL data. This database allows users to define a library of synthetic parts and designs as a service, to share SBOL data with collaborators, and to store designs of biological systems locally. The database also allows external data sources to be integrated by mapping them to the SBOL data model. The SBOL Stack includes two Web interfaces: the SBOL Stack API and SynBioHub. While the former is designed for developers, the latter allows users to upload new SBOL biological designs, download SBOL documents, search by keyword, and visualize SBOL data. Since the SBOL Stack is based on semantic Web technology, the inherent distributed querying functionality of RDF databases can be used to allow different SBOL stack databases to be queried simultaneously, and therefore, data can be shared between different institutes, centers, or other users.
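
    Because the SBOL Stack is an RDF store, designs can in principle be retrieved with SPARQL. The sketch below uses the SPARQLWrapper package against a hypothetical endpoint URL; the SBOL namespace and class name are assumptions and may differ from the deployed schema.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint; namespace and class below are assumptions.
endpoint = SPARQLWrapper("https://example.org/sbolstack/sparql")
endpoint.setQuery("""
    PREFIX sbol: <http://sbols.org/v2#>
    SELECT ?design WHERE { ?design a sbol:ComponentDefinition } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["design"]["value"])
```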

  11. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  12. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  13. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  14. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  15. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  16. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  17. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  18. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  19. 40 CFR 60.115a - Monitoring of operations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  20. 40 CFR 60.113 - Monitoring of operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... petroleum liquid stored, the period of storage, and the maximum true vapor pressure of that liquid during the respective storage period. (b) Available data on the typical Reid vapor pressure and the maximum... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Storage Vessels...

  1. Mass storage at NSA

    NASA Technical Reports Server (NTRS)

    Shields, Michael F.

    1993-01-01

    The need to manage large amounts of data on robotically controlled devices has been critical to the mission of this Agency for many years. In many respects this Agency has helped pioneer, with its industry counterparts, the development of a number of products long before these systems became commercially available. Numerous attempts have been made to field both robotically controlled tape and optical disk technology and systems to satisfy our tertiary storage needs. Custom developed products were architected, designed, and developed without vendor partners over the past two decades to field workable systems to handle our ever-increasing storage requirements. Many of the attendees of this symposium are familiar with some of the older products, such as: the Braegen Automated Tape Libraries (ATL's), the IBM 3850, the Ampex TeraStore, just to name a few. In addition, we embarked on an in-house development of a shared disk input/output support processor to manage our ever-increasing tape storage needs. For all intents and purposes, this system was a file server by current definitions, which used CDC Cyber computers as the control processors. It served us well and was just recently removed from production usage.

  2. Eighth Goddard Conference on Mass Storage Systems and Technologies in Cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    2000-01-01

    This document contains copies of those technical papers received in time for publication prior to the Eighth Goddard Conference on Mass Storage Systems and Technologies, which is being held in cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center, March 27-30, 2000. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, the future of current technology, new technology with a special emphasis on holographic storage, performance, standards, site reports, and vendor solutions. Tutorials will be available on the stability of optical media, disk subsystem performance evaluation, I/O and storage tuning, and functionality and performance evaluation of file systems for storage area networks.

  3. Lessons Learned in Deploying the World s Largest Scale Lustre File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillow, David A; Fuller, Douglas; Wang, Feiyi

    2010-01-01

    The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed that of our previously deployed systems by a factor of 6x - 240 GB/sec, and 17x - 10 Petabytes, respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, alongside our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.

  4. Electrochemical energy storage systems for solar thermal applications

    NASA Technical Reports Server (NTRS)

    Krauthamer, S.; Frank, H.

    1980-01-01

    Existing and advanced electrochemical storage and inversion/conversion systems that may be used with terrestrial solar-thermal power systems are evaluated. The status, cost and performance of existing storage systems are assessed, and the cost, performance, and availability of advanced systems are projected. A prime consideration is the cost of delivered energy from plants utilizing electrochemical storage. Results indicate that the five most attractive electrochemical storage systems are the: iron-chromium redox (NASA LeRC), zinc-bromine (Exxon), sodium-sulfur (Ford), sodium-sulfur (Dow), and zinc-chlorine (EDA).

  5. Goddard Conference on Mass Storage Systems and Technologies, volume 2

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor)

    1993-01-01

    Papers and viewgraphs from the conference are presented. Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low end helical scan tape drives. Additional discussion topics addressed the evolution of the identifiable unit for processing (file, granule, data set, or some similar object) as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

  6. Simulation and evaluation of latent heat thermal energy storage

    NASA Technical Reports Server (NTRS)

    Sigmon, T. W.

    1980-01-01

    The relative value of thermal energy storage (TES) for heat pump storage (heating and cooling) was derived as a function of storage temperature, mode of storage (hot-side or cold-side), geographic location, and utility time-of-use rate structure. Computer models used to simulate the performance of a number of TES/heat pump configurations are described. The models are based on existing performance data of heat pump components, available building thermal load computational procedures, and generalized TES subsystem designs. Life cycle costs computed for each site, configuration, and rate structure are discussed.

  7. Motivation and Design of the Sirocco Storage System Version 1.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew Leon; Ward, H. Lee; Danielson, Geoffrey Charles

    Sirocco is a massively parallel, high performance storage system for the exascale era. It emphasizes client-to-client coordination, low server-side coupling, and free data movement to improve resilience and performance. Its architecture is inspired by peer-to-peer and victim-cache architectures. By leveraging these ideas, Sirocco natively supports several media types, including RAM, flash, disk, and archival storage, with automatic migration between levels. Sirocco also includes storage interfaces and support that are more advanced than typical block storage. Sirocco enables clients to efficiently use key-value storage or block-based storage with the same interface. It also provides several levels of transactional data updates within a single storage command, including full ACID-compliant updates. This transaction support extends to updating several objects within a single transaction. Further support is provided for concurrency control, enabling greater performance for workloads while providing safe concurrent modification. By pioneering these and other technologies and techniques in the storage system, Sirocco is poised to fulfill a need for a massively scalable, write-optimized storage system for exascale systems. This is version 1.0 of a document reflecting the current and planned state of Sirocco. Further versions of this document will be accessible at http://www.cs.sandia.gov/Scalable_IO/sirocco.
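
    A toy illustration of the idea of grouping several key-value updates into an all-or-nothing transaction (this is not Sirocco's interface; all names are invented for the sketch):

```python
class ToyKVStore:
    """Illustrative key-value store with atomic, all-or-nothing transactions."""

    def __init__(self):
        self.data = {}
        self._staged = None

    def begin(self):
        self._staged = {}

    def put(self, key, value):
        (self._staged if self._staged is not None else self.data)[key] = value

    def commit(self):
        self.data.update(self._staged)    # several objects become visible atomically
        self._staged = None

    def abort(self):
        self._staged = None               # staged updates are discarded

store = ToyKVStore()
store.begin()
store.put("object/1", b"payload-1")
store.put("object/2", b"payload-2")
store.commit()
print(sorted(store.data))                 # ['object/1', 'object/2']
```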

  8. Effect of glucose and cellulase addition on wet-storage of excessively wilted maize stover and biogas production.

    PubMed

    Guo, Jianbin; Cui, Xian; Sun, Hui; Zhao, Qian; Wen, Xiaoyu; Pang, Changle; Dong, Renjie

    2018-07-01

    In north China, large amounts of excessively wilted maize stover are produced annually. Maize stover wet storage strategies and subsequent biogas production was examined in this study. Firstly, wet storage performances of harvested maize stover, air-dried for different time durations, were evaluated. Results showed that optimal storage performance was obtained when the initial water soluble carbohydrate (WSC) content after air-drying was higher than 8.0%. Therefore, cellulase and glucose were added to the excessively wilted maize stover to achieve the targeted pre-storage WSC levels. Good storage performances were observed in treatments with addition of 76.4 g/kg DM glucose and 12.5 g/kg DM of cellulase; the specific methane yield increased by 23.7% and 19.2%, respectively. However, use of glucose as additive or co-storing with high WSC substrates can serve as economically feasible options to adapt wet storage of excessively wilted maize stover. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. REQUIREMENTS AND GUIDELINES FOR NSLS EXPERIMENTAL BEAM LINE VACUUM SYSTEMS-REVISION B.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FOERSTER,C.

    Typical beam lines are comprised of an assembly of vacuum valves and shutters referred to as a "front end", optical elements to monochromatize, focus and split the photon beam, and an experimental area where a target sample is placed into the photon beam and data from the interaction is detected and recorded. Windows are used to separate sections of beam lines that are not compatible with storage ring ultra high vacuum. Some experimental beam lines share a common vacuum with storage rings. Sections of beam lines are only allowed to vent up to atmospheric pressure using pure nitrogen gas after a vacuum barrier is established to protect ring vacuum. The front end may only be bled up when there is no current in the machine. This is especially true on the VUV storage ring where for most experiments, windows are not used. For the shorter wavelength, more energetic photons of the x-ray ring, beryllium windows are used at various beam line locations so that the monochromator, mirror box or sample chamber may be used in a helium atmosphere or rough vacuum. The window separates ring vacuum from the environment of the downstream beam line components. The stored beam lifetime in the storage rings and the maintenance of desirable reflection properties of optical surfaces depend upon hydrocarbon-free, ultra-high vacuum systems. Storage ring vacuum systems will operate at pressures of approximately 1 × 10⁻¹⁰ Torr without beam and approximately 1 × 10⁻⁹ Torr with beam. Systems are free of hydrocarbons in the sense that no pumps, valves, etc. containing organics are used. Components are all-metal, chemically cleaned and bakeable. To the extent that beam lines share a common vacuum with the storage ring, the same criteria will hold for beam line components. The design philosophy for NSLS beam lines is to use all-metal, hydrocarbon-free front end components and recommend that experimenters use this approach for common vacuum hardware downstream of front ends. O-ring-sealed valves, if used, are not permitted upstream of the monochromator exit aperture. It will be the responsibility of users to demonstrate that their experiment will not degrade the pressure or quality of the storage ring vacuum. As a matter of operating policy, all beam lines will be monitored for prescribed pressure and the contribution of high mass gases to this pressure each time a beam line has been opened to ring vacuum.

  10. Managing the water-energy-food nexus: Opportunities in Central Asia

    NASA Astrophysics Data System (ADS)

    Jalilov, Shokhrukh-Mirzo; Amer, Saud A.; Ward, Frank A.

    2018-02-01

    This article examines the impacts of infrastructure development and climate variability on economic outcomes for the Amu Darya Basin in Central Asia. It aims to identify the most economically productive mix of expanded reservoir storage that enables economic benefit sharing, in which the economic welfare of all riparians is improved. Policies examined include four combinations of storage infrastructure for each of two climate futures. An empirical optimization model is developed and applied to identify opportunities for improving the welfare of Tajikistan, Uzbekistan, Afghanistan, and Turkmenistan. The analysis 1) characterizes politically constrained and economically optimized water-use patterns for these combinations of expanded reservoir storage capacity, 2) describes Pareto-improving packages of expanded storage capacity that could raise economic welfare for all four riparians, and 3) accounts for impacts under each of two climate scenarios. Results indicate that a combination of targeted water storage infrastructure and efficient water allocation could produce outcomes for which the discounted net present value of benefits is favorable for each riparian. Results identify a framework to provide economic motivation for all riparians to cooperate through development of water storage infrastructure. Our findings illustrate the principle that development of water infrastructure can expand the negotiation space by which all communities can gain economic benefits in the face of limited water supply. Still, despite our optimistic findings, patient and deliberate negotiation will be required to transform potential improvements into actual gains.
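
    A minimal sketch of the kind of allocation optimization described, not the authors' model: maximize total economic benefit across four riparians subject to available supply and minimum deliveries, solved as a linear program. All coefficients are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical marginal benefits (USD per Mm^3) for four riparians.
benefit = np.array([55.0, 48.0, 62.0, 40.0])

supply = 1000.0                              # available water (Mm^3), hypothetical
minimum = [150.0, 150.0, 100.0, 100.0]       # minimum deliveries (Mm^3), hypothetical

res = linprog(
    c=-benefit,                              # linprog minimizes, so negate benefits
    A_ub=[[1, 1, 1, 1]], b_ub=[supply],      # total allocation <= supply
    bounds=list(zip(minimum, [supply] * 4)), # each riparian receives its minimum
    method="highs",
)
print(res.x, -res.fun)                       # allocations and total benefit
```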

  11. Methyllithium-Doped Naphthyl-Containing Conjugated Microporous Polymer with Enhanced Hydrogen Storage Performance.

    PubMed

    Xu, Dan; Sun, Lei; Li, Gang; Shang, Jin; Yang, Rui-Xia; Deng, Wei-Qiao

    2016-06-01

    Hydrogen storage is a primary challenge for using hydrogen as a fuel. Even with ideal hydrogen storage kinetics, the weak binding strength of hydrogen to sorbents is the key barrier to obtaining decent hydrogen storage performance. Here, we report the rational synthesis, guided by theoretical simulations, of a methyllithium-doped naphthyl-containing conjugated microporous polymer with exceptional binding strength of hydrogen to the polymer. The experimental results showed that the isosteric heat can reach up to 8.4 kJ mol⁻¹ and that the methyllithium-doped naphthyl-containing conjugated microporous polymer exhibits a 150% enhancement in hydrogen storage performance compared with its naphthyl-containing conjugated microporous polymer counterpart. These results indicate that this strategy provides a direction for the design and synthesis of new materials that meet the US Department of Energy (DOE) hydrogen storage target. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
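
    For context, isosteric heats of adsorption such as the one quoted are typically estimated from isotherms at two temperatures via the Clausius-Clapeyron relation; the pressures below are hypothetical and not taken from the paper.

```python
import math

R = 8.314  # J mol^-1 K^-1

def isosteric_heat(p1, t1, p2, t2):
    """Clausius-Clapeyron: Qst = R * ln(p2/p1) / (1/t1 - 1/t2),
    where p1 and p2 give the same H2 uptake at temperatures t1 and t2."""
    return R * math.log(p2 / p1) / (1.0 / t1 - 1.0 / t2)

# Hypothetical equal-uptake pressures (bar) at 77 K and 87 K.
print(f"Qst ~ {isosteric_heat(0.10, 77.0, 0.35, 87.0) / 1000:.1f} kJ/mol")
```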

  12. Twin-tailed fail-over for fileservers maintaining full performance in the presence of a failure

    DOEpatents

    Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.

    2008-02-12

    A method for maintaining full performance of a file system in the presence of a failure is provided. The file system having N storage devices, where N is an integer greater than zero and N primary file servers where each file server is operatively connected to a corresponding storage device for accessing files therein. The file system further having a secondary file server operatively connected to at least one of the N storage devices. The method including: switching the connection of one of the N storage devices to the secondary file server upon a failure of one of the N primary file servers; and switching the connections of one or more of the remaining storage devices to a primary file server other than the failed file server as necessary so as to prevent a loss in performance and to provide each storage device with an operating file server.
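
    A toy sketch of the re-mapping idea in the claim (not the patented implementation): on a server failure, the failed server's storage device and any devices that must shift as a consequence are handed down the chain, and the secondary file server absorbs the device left over at the end, so every device keeps a dedicated file server.

```python
def fail_over(assignment, failed, secondary):
    """Re-map devices to servers after 'failed' goes down, preserving a
    one-device-per-server layout so performance is not lost."""
    servers = list(assignment)                 # e.g. ['fs0', 'fs1', 'fs2', 'fs3']
    devices = list(assignment.values())
    start = servers.index(failed)
    new = dict(assignment)
    del new[failed]
    # Shift each subsequent server onto its predecessor's device; the secondary
    # server picks up the device freed at the end of the chain.
    for i in range(start, len(servers) - 1):
        new[servers[i + 1]] = devices[i]
    new[secondary] = devices[-1]
    return new

primary = {"fs0": "disk0", "fs1": "disk1", "fs2": "disk2", "fs3": "disk3"}
print(fail_over(primary, failed="fs1", secondary="spare"))
# {'fs0': 'disk0', 'fs2': 'disk1', 'fs3': 'disk2', 'spare': 'disk3'}
```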

  13. Modeling and Performance Simulation of the Mass Storage Network Environment

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Sang, Janche

    2000-01-01

    This paper describes the application of modeling and simulation in evaluating and predicting the performance of the mass storage network environment. Network traffic is generated to mimic the realistic pattern of file transfer, electronic mail, and web browsing. The behavior and performance of the mass storage network and a typical client-server Local Area Network (LAN) are investigated by modeling and simulation. Performance characteristics in throughput and delay demonstrate the important role of modeling and simulation in network engineering and capacity planning.

  14. Storing and sharing water in sand rivers: a water balance modelling approach

    NASA Astrophysics Data System (ADS)

    Love, D.; van der Zaag, P.; Uhlenbrook, S.

    2009-04-01

    Sand rivers and sand dams offer an alternative to conventional surface water reservoirs for storage. The alluvial aquifers that make up the beds of sand rivers can store water with minimal evaporation (the extinction depth is 0.9 m) and natural filtration. The alluvial aquifers of the Mzingwane Catchment are the most extensive of any tributaries in the Limpopo Basin. The lower Mzingwane aquifer, which is currently underutilised, is recharged by managed releases from Zhovhe Dam (capacity 133 Mm3). The volume of water released annually is only twice the size of the evaporation losses from the dam; the latter represent nearly one third of the dam's storage capacity. The Lower Mzingwane valley currently supports commercial agro-businesses (1,750 ha of irrigation) and four smallholder irrigation schemes (400 ha, with provision for a further 1,200 ha). In order to support planning for optimising water use and storage over evaporation and to provide for more equitable water allocation, the spreadsheet-based balance model WAFLEX was used. It is a simple and user-friendly model, ideal for use by institutions such as the water management authorities in Zimbabwe, which are challenged by capacity shortfalls and inadequate data. In this study, WAFLEX, which is normally used for surface water balance accounting, is adapted to incorporate alluvial aquifers into the water balance, including recharge, baseflow and groundwater flows. Results of the WAFLEX modelling suggest that there is surplus water in the lower Mzingwane system, and thus there should not be any water conflicts. Through more frequent timing of releases from the dam and by maintaining the alluvial aquifers permanently saturated, evaporation losses in the system will be reduced and the water resources can be better shared to provide more irrigation water for smallholder farmers in the highly resource-poor communal lands along the river. Sand dams are needed to augment the aquifer storage system and improve access to water. An alternative to the current scenario was modelled in WAFLEX: making fuller use of the alluvial aquifers upstream and downstream of Zhovhe Dam. These alluvial aquifers have an estimated average water storage capacity of 0.37 Mm3 km
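
    As an illustration of the kind of per-time-step accounting a spreadsheet balance model performs for an alluvial aquifer reach (this is a generic sketch, not WAFLEX itself; the values are hypothetical):

```python
def aquifer_balance_step(storage, capacity, inflow, recharge, evaporation, abstraction):
    """One time step of a simple water balance: storage is updated by gains and
    losses and capped at capacity; any excess spills downstream. All terms in Mm^3."""
    storage = storage + inflow + recharge - evaporation - abstraction
    spill = max(0.0, storage - capacity)
    storage = min(max(storage, 0.0), capacity)
    return storage, spill

# Hypothetical monthly values for one reach of alluvial aquifer.
state, spill = aquifer_balance_step(
    storage=0.20, capacity=0.37, inflow=0.15, recharge=0.05,
    evaporation=0.01, abstraction=0.06,
)
print(state, spill)   # 0.33, 0.0
```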

  15. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  16. Design and evaluation of a hybrid storage system in HEP environment

    NASA Astrophysics Data System (ADS)

    Xu, Qi; Cheng, Yaodong; Chen, Gang

    2017-10-01

    Nowadays, High Energy Physics experiments produce a large amount of data. These data are stored in mass storage systems, which need to balance cost, performance and manageability. In this paper, a hybrid storage system including SSDs (Solid-State Drives) and HDDs (Hard Disk Drives) is designed to accelerate data analysis while maintaining a low cost. The performance of accessing files is a decisive factor for the HEP computing system. A new deployment model of the hybrid storage system in High Energy Physics is proposed and shown to have higher I/O performance. The detailed evaluation methods and the evaluations of the SSD/HDD ratio and the size of the logic block are also given. In all evaluations, sequential read, sequential write, random read and random write are tested to obtain comprehensive results. The results show the hybrid storage system has good performance in areas such as accessing big files in HEP.
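
    A toy placement policy for such a hybrid system (the paper's actual placement and migration logic is not given here; the thresholds are hypothetical): small or frequently read files are directed to the SSD tier to accelerate analysis, while large cold files remain on HDDs.

```python
def place_file(size_mb, reads_per_day, ssd_free_gb):
    """Toy tiering policy with hypothetical thresholds: hot or small files go to
    SSD; large, rarely read files stay on HDD."""
    hot = reads_per_day >= 10 or size_mb <= 64
    if hot and ssd_free_gb * 1024 >= size_mb:
        return "SSD"
    return "HDD"

print(place_file(size_mb=32, reads_per_day=2, ssd_free_gb=200))     # SSD
print(place_file(size_mb=8000, reads_per_day=1, ssd_free_gb=200))   # HDD
```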

  17. Symbiosis of executive and selective attention in working memory

    PubMed Central

    Vandierendonck, André

    2014-01-01

    The notion of working memory (WM) was introduced to account for the usage of short-term memory resources by other cognitive tasks such as reasoning, mental arithmetic, language comprehension, and many others. This collaboration between memory and other cognitive tasks can only be achieved by a dedicated WM system that controls task coordination. To that end, WM models include executive control. Nevertheless, other attention control systems may be involved in coordination of memory and cognitive tasks calling on memory resources. The present paper briefly reviews the evidence concerning the role of selective attention in WM activities. A model is proposed in which selective attention control is directly linked to the executive control part of the WM system. The model assumes that apart from storage of declarative information, the system also includes an executive WM module that represents the current task set. Control processes are automatically triggered when particular conditions in these modules are met. As each task set represents the parameter settings and the actions needed to achieve the task goal, it will depend on the specific settings and actions whether selective attention control will have to be shared among the active tasks. Only when such sharing is required, task performance will be affected by the capacity limits of the control system involved. PMID:25152723

  18. Symbiosis of executive and selective attention in working memory.

    PubMed

    Vandierendonck, André

    2014-01-01

    The notion of working memory (WM) was introduced to account for the usage of short-term memory resources by other cognitive tasks such as reasoning, mental arithmetic, language comprehension, and many others. This collaboration between memory and other cognitive tasks can only be achieved by a dedicated WM system that controls task coordination. To that end, WM models include executive control. Nevertheless, other attention control systems may be involved in coordination of memory and cognitive tasks calling on memory resources. The present paper briefly reviews the evidence concerning the role of selective attention in WM activities. A model is proposed in which selective attention control is directly linked to the executive control part of the WM system. The model assumes that apart from storage of declarative information, the system also includes an executive WM module that represents the current task set. Control processes are automatically triggered when particular conditions in these modules are met. As each task set represents the parameter settings and the actions needed to achieve the task goal, it will depend on the specific settings and actions whether selective attention control will have to be shared among the active tasks. Only when such sharing is required, task performance will be affected by the capacity limits of the control system involved.

  19. Distributed geospatial model sharing based on open interoperability standards

    USGS Publications Warehouse

    Feng, Min; Liu, Shuguang; Euliss, Ned H.; Fang, Yin

    2009-01-01

    Numerous geospatial computational models have been developed based on sound principles and published in journals or presented in conferences. However, modelers have made few advances in the development of computable modules that facilitate sharing during model development or utilization. Constraints hampering the development of model sharing technology include limitations on computing, storage, and connectivity; traditional stand-alone and closed network systems cannot fully support sharing and integrating geospatial models. To address this need, we have identified methods for sharing geospatial computational models using Service Oriented Architecture (SOA) techniques and open geospatial standards. The service-oriented model sharing service is accessible using any tools or systems compliant with open geospatial standards, making it possible to utilize vast scientific resources available from around the world to solve highly sophisticated application problems. The methods also allow model services to be empowered by diverse computational devices and technologies, such as portable devices and GRID computing infrastructures. Based on the generic and abstract operations and data structures required by the Web Processing Service (WPS) standard, we developed an interactive interface for model sharing to help reduce interoperability problems for model use. Geospatial computational models are shared on model services, where the computational processes provided by models can be accessed through tools and systems compliant with WPS. We developed a platform to help modelers publish individual models in a simplified and efficient way. Finally, we illustrate our technique using wetland hydrological models we developed for the prairie pothole region of North America.
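
    The record does not reproduce the service interface itself; the sketch below only illustrates the general pattern of invoking a process behind a WPS-style interface with an HTTP request, using the requests library and an entirely hypothetical endpoint, process identifier and inputs.

        # Hypothetical example of invoking a model published behind a WPS-style interface.
        # The endpoint URL, process identifier, and inputs are placeholders, not the USGS service.

        import requests

        endpoint = "https://models.example.org/wps"      # hypothetical WPS endpoint

        params = {
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "wetland_hydrology",           # hypothetical process name
            "datainputs": "wetland_id=POT-001;start=2000-01-01;end=2005-12-31",
        }

        response = requests.get(endpoint, params=params, timeout=120)
        response.raise_for_status()
        print(response.text[:500])                       # WPS returns an XML ExecuteResponse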

  20. ENT COBRA (Consortium for Brachytherapy Data Analysis): interdisciplinary standardized data collection system for head and neck patients treated with interventional radiotherapy (brachytherapy).

    PubMed

    Tagliaferri, Luca; Kovács, György; Autorino, Rosa; Budrukkar, Ashwini; Guinot, Jose Luis; Hildebrand, Guido; Johansson, Bengt; Monge, Rafael Martìnez; Meyer, Jens E; Niehoff, Peter; Rovirosa, Angeles; Takàcsi-Nagy, Zoltàn; Dinapoli, Nicola; Lanzotti, Vito; Damiani, Andrea; Soror, Tamer; Valentini, Vincenzo

    2016-08-01

    The aim of the COBRA (Consortium for Brachytherapy Data Analysis) project is to create a multicenter group (consortium) and a web-based system for standardized data collection. The GEC-ESTRO (Groupe Européen de Curiethérapie - European Society for Radiotherapy & Oncology) Head and Neck (H&N) Working Group participated in the project and in the implementation of the consortium agreement, the ontology (data-set) and the necessary COBRA software services, as well as the peer reviewing of the general anatomic site-specific COBRA protocol. The ontology was defined by a multicenter task group. Eleven centers from 6 countries signed an agreement and the consortium approved the ontology. We identified 3 tiers for the data set: Registry (epidemiology analysis), Procedures (prediction models and DSS), and Research (radiomics). The COBRA Storage System (C-SS) is not time-consuming because, thanks to the use of "brokers", data can be extracted directly from each center's storage system through a connection with a "structured query language database" (SQL-DB), Microsoft Access®, FileMaker Pro®, or Microsoft Excel®. The system is also structured to perform automatic archiving directly from the treatment planning system or afterloading machine. The architecture is based on the concept of "on-purpose data projection". The C-SS architecture is privacy protecting because it will never make visible data that could identify an individual patient. The C-SS can also benefit from so-called "distributed learning" approaches, in which data never leave the collecting institution, while learning algorithms and proposed predictive models are commonly shared. Setting up a consortium is a feasible and practicable way to create an international, multi-system data sharing system. COBRA C-SS seems to be well accepted by all involved parties, primarily because it does not influence each center's own data storing technologies, procedures, and habits. Furthermore, the method preserves the privacy of all patients.

  1. KiMoSys: a web-based repository of experimental data for KInetic MOdels of biological SYStems

    PubMed Central

    2014-01-01

    Background The kinetic modeling of biological systems is mainly composed of three steps that proceed iteratively: model building, simulation and analysis. In the first step, it is usually required to set initial metabolite concentrations and to assign kinetic rate laws, along with estimating parameter values from kinetic data through optimization when these are not known. Although the rapid development of high-throughput methods has generated much omics data, experimentalists present only a summary of the obtained results for publication; the experimental data files are not usually submitted to any public repository, or are simply not available at all. In order to automate the steps of building kinetic models as much as possible, there is a growing requirement in the systems biology community for easily exchanging data in combination with models, which represents the main motivation for the development of KiMoSys. Description KiMoSys is a user-friendly platform that includes a public data repository of published experimental data, containing concentration data of metabolites and enzymes and flux data. It was designed to ensure data management, storage and sharing for a wider systems biology community. This community repository offers a web-based interface and upload facility to turn available data into publicly accessible, centralized and structured-format data files. Moreover, it compiles and integrates available kinetic models associated with the data. KiMoSys also integrates some tools to facilitate the kinetic model construction process for large-scale metabolic networks, especially when systems biologists perform computational research. Conclusions KiMoSys is a web-based system that integrates a public data and associated model(s) repository with computational tools, providing the systems biology community with a novel application facilitating data storage and sharing, thus supporting construction of ODE-based kinetic models and collaborative research projects. The web application, implemented using the Ruby on Rails framework, is freely available for web access at http://kimosys.org, along with its full documentation. PMID:25115331

  2. Techno-economic performance evaluation of direct steam generation solar tower plants with thermal energy storage systems based on high-temperature concrete and encapsulated phase change materials

    NASA Astrophysics Data System (ADS)

    Guédez, R.; Arnaudo, M.; Topel, M.; Zanino, R.; Hassar, Z.; Laumert, B.

    2016-05-01

    Nowadays, direct steam generation concentrated solar tower plants suffer from the absence of a cost-effective thermal energy storage integration. In this study, the prefeasibility of a combined sensible and latent thermal energy storage configuration has been assessed from thermodynamic and economic standpoints as a potential storage option. The main advantage of such a concept with respect to only-sensible or only-latent choices is the possibility to minimize the thermal losses during system charge and discharge processes by reducing the temperature and pressure drops occurring along the heat transfer process. Thermodynamic models, heat transfer models, plant integration and control strategies for both a pressurized tank filled with sphere-encapsulated salts and high temperature concrete storage blocks were developed within the KTH in-house tool DYESOPT for power plant performance modeling. Once the new storage model had been implemented, cross-validated and integrated into an existing DYESOPT power plant layout, a sensitivity analysis with regard to storage, solar field and power block sizes was performed to determine the potential impact of integrating the proposed concept. Even for a storage cost figure of 50 USD/kWh, it was found that the integration of the proposed storage configuration can enhance the performance of the power plants by augmenting their availability and reducing their levelized cost of electricity. As expected, it was also found that the benefits are greater for smaller power block sizes. Specifically, for a power block of 80 MWe a reduction in levelized electricity costs of 8% was estimated together with an increase in capacity factor of 30%, whereas for a power block of 126 MWe the benefits found were a 1.5% cost reduction and a 16% availability increase.

  3. Low-cost high performance distributed data storage for multi-channel observations

    NASA Astrophysics Data System (ADS)

    Liu, Ying-bo; Wang, Feng; Deng, Hui; Ji, Kai-fan; Dai, Wei; Wei, Shou-lin; Liang, Bo; Zhang, Xiao-li

    2015-10-01

    The New Vacuum Solar Telescope (NVST) is a 1-m solar telescope that aims to observe the fine structures in both the photosphere and the chromosphere of the Sun. The observational data acquired simultaneously from one channel for the chromosphere and two channels for the photosphere bring great challenges to the data storage of NVST. The multi-channel instruments of NVST, including scientific cameras and multi-band spectrometers, generate at least 3 terabytes of data per day and require high access performance while storing massive short-exposure images. It is worth studying and implementing a storage system for NVST that balances data availability, access performance and the cost of development. In this paper, we build a distributed data storage system (DDSS) for NVST and then deeply evaluate the availability of real-time data storage on a distributed computing environment. The experimental results show that two factors, i.e., the number of concurrent reads/writes and the file size, are critically important for improving the performance of data access in a distributed environment. Referring to these two factors, three strategies for storing FITS files are presented and implemented to ensure the access performance of the DDSS under simultaneous multi-host write and read. Real applications of the DDSS prove that the system is capable of meeting the requirements of NVST real-time high performance observational data storage. Our study on the DDSS is the first attempt for modern astronomical telescope systems to store real-time observational data on a low-cost distributed system. The research results and corresponding techniques of the DDSS provide a new option for designing real-time massive astronomical data storage systems and will be a reference for future astronomical data storage.
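
    The three FITS storage strategies are not spelled out in the abstract; as one hedged illustration of the kind of strategy suggested by the two factors named (concurrency and file size), the sketch below packs many small short-exposure frames into a single large container file before writing, a common way to reduce per-file overhead on distributed storage. The file names, sizes and container format are invented.

        # Illustrative packing of many small image frames into one container file.
        # File names, frame sizes, and the container format are hypothetical.

        import io, tarfile

        def pack_frames(frames, container_path):
            """frames: iterable of (name, bytes). Writes one large tar container."""
            with tarfile.open(container_path, "w") as tar:
                for name, payload in frames:
                    info = tarfile.TarInfo(name=name)
                    info.size = len(payload)
                    tar.addfile(info, io.BytesIO(payload))

        frames = ((f"frame_{i:05d}.fits", b"\0" * 2_880) for i in range(1000))
        pack_frames(frames, "batch_0001.tar")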

  4. Addressing Information Proliferation: Applications of Information Extraction and Text Mining

    ERIC Educational Resources Information Center

    Li, Jingjing

    2013-01-01

    The advent of the Internet and the ever-increasing capacity of storage media have made it easy to store, deliver, and share enormous volumes of data, leading to a proliferation of information on the Web, in online libraries, on news wires, and almost everywhere in our daily lives. Since our ability to process and absorb this information remains…

  5. The Center of Attention

    NASA Technical Reports Server (NTRS)

    2000-01-01

    New Hampshire-based Creare, Inc. used a NASA SBIR contract with Dryden to develop "middleware" known commercially as DataTurbine. DataTurbine acts as "glueware" allowing communication between dissimilar computer platforms and analysis, storage and acquisition of shared data. DataTurbine relies on Ring Buffered Network Bus technology, which is a software server providing a buffered network path between suppliers and consumers of information.

  6. A Practical Introduction to the XML, Extensible Markup Language, by Way of Some Useful Examples

    ERIC Educational Resources Information Center

    Snyder, Robin

    2004-01-01

    XML, Extensible Markup Language, is important as a way to represent and encapsulate the structure of underlying data in a portable way that supports data exchange regardless of the physical storage of the data. This paper (and session) introduces some useful and practical aspects of XML technology for sharing information in an educational setting…

  7. Sptrace

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    Sptrace is a general-purpose space utilization tracing system that is conceptually similar to the commercial Purify product used to detect leaks and other memory usage errors. It is designed to monitor space utilization in any sort of heap, i.e., a region of data storage on some device (nominally memory; possibly shared and possibly persistent) with a flat address space. This software can trace usage of shared and/or non-volatile storage in addition to private RAM (random access memory). Sptrace is implemented as a set of C function calls that are invoked from within the software that is being examined. The function calls fall into two broad classes: (1) functions that are embedded within the heap management software [e.g., JPL's SDR (Simple Data Recorder) and PSM (Personal Space Management) systems] to enable heap usage analysis by populating a virtual time-sequenced log of usage activity, and (2) reporting functions that are embedded within the application program whose behavior is suspect. For ease of use, these functions may be wrapped privately inside public functions offered by the heap management software. Sptrace can be used for VxWorks or RTEMS realtime systems as easily as for Linux or OS/X systems.

  8. MPD: a pathogen genome and metagenome database

    PubMed Central

    Zhang, Tingting; Miao, Jiaojiao; Han, Na; Qiang, Yujun; Zhang, Wen

    2018-01-01

    Abstract Advances in high-throughput sequencing have led to unprecedented growth in the amount of available genome sequencing data, especially for bacterial genomes, which has been accompanied by a challenge for the storage and management of such huge datasets. To facilitate bacterial research and related studies, we have developed the Mypathogen database (MPD), which provides access to users for searching, downloading, storing and sharing bacterial genomics data. The MPD represents the first pathogenic database for microbial genomes and metagenomes, and currently covers pathogenic microbial genomes (6604 genera, 11 071 species, 41 906 strains) and metagenomic data from host, air, water and other sources (28 816 samples). The MPD also functions as a management system for statistical and storage data that can be used by different organizations, thereby facilitating data sharing among different organizations and research groups. A user-friendly local client tool is provided to maintain the steady transmission of big sequencing data. The MPD is a useful tool for analysis and management in genomic research, especially for clinical Centers for Disease Control and epidemiological studies, and is expected to contribute to advancing knowledge on pathogenic bacteria genomes and metagenomes. Database URL: http://data.mypathogen.org PMID:29917040

  9. Charging and Discharging Processes of Thermal Energy Storage System Using Phase change materials

    NASA Astrophysics Data System (ADS)

    Kanimozhi, B., Dr.; Harish, Kasilanka; Sai Tarun, Bellamkonda; Saty Sainath Reddy, Pogaku; Sai Sujeeth, Padakandla

    2017-05-01

    The objective of the study is to investigate the thermal characteristics of the charging and discharging processes of a fabricated thermal energy storage system using phase change materials. Experiments were performed with phase change materials, for which a storage tank was designed and developed to enhance the heat transfer rate from the solar tank to the PCM storage tank. The enhancement of heat transfer is achieved by using a number of copper tubes in the fabricated storage tank. This storage tank can hold or conserve heat energy for a much longer time than the conventional water storage system. Performance evaluations of the experimental results during the charging and discharging processes of paraffin wax are discussed, in which heat absorption and heat rejection have been calculated at various flow rates.
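
    The abstract reports heat absorption and rejection calculated at various flow rates; as a hedged illustration, the sketch below shows the standard sensible-plus-latent energy estimate for charging a paraffin PCM store, with all material properties taken as generic textbook values rather than measurements from the experiment.

        # Generic sensible + latent heat estimate for charging a paraffin PCM store.
        # Property values are typical textbook figures, not measurements from the study.
        # Assumes the initial temperature is below the melting point.

        def pcm_stored_energy(mass_kg, t_initial, t_final,
                              t_melt=58.0, cp_solid=2.1e3, cp_liquid=2.2e3, latent=200e3):
            """Energy (J) absorbed heating the PCM from t_initial to t_final across melting."""
            if t_final <= t_melt:
                return mass_kg * cp_solid * (t_final - t_initial)
            sensible_solid = mass_kg * cp_solid * (t_melt - t_initial)
            latent_heat = mass_kg * latent
            sensible_liquid = mass_kg * cp_liquid * (t_final - t_melt)
            return sensible_solid + latent_heat + sensible_liquid

        print(f"{pcm_stored_energy(50.0, 30.0, 70.0) / 1e6:.1f} MJ")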

  10. 40 CFR 60.254 - Standards for coal processing and conveying equipment, coal storage systems, transfer and loading...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40, Protection of Environment, § 60.254: Standards for coal processing and conveying equipment, coal storage systems, transfer and loading systems, and open storage piles. Standards of Performance for New Stationary Sources (continued); Standards of Performance for Coal Preparation...

  11. Efficient architecture for spike sorting in reconfigurable hardware.

    PubMed

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-11-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and the fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroids are merged into one single updating process to avoid a large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design that attains a high classification correct rate and high speed computation.
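
    As a hedged, software-only illustration of the two algorithms named in the abstract, the sketch below shows the generalized Hebbian (Sanger) update used to extract principal components and one fuzzy C-means membership update; it is not the fixed-point FPGA architecture described in the paper, and all dimensions and the learning rate are arbitrary.

        # Software sketch of GHA (Sanger's rule) and one FCM membership update.
        # Dimensions and learning rate are arbitrary; the paper implements these in FPGA hardware.

        import numpy as np

        def gha_step(W, x, lr=1e-3):
            """One generalized Hebbian update: W is (k, d), x is (d,)."""
            y = W @ x
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            return W

        def fcm_memberships(X, centers, m=2.0):
            """Fuzzy C-means membership matrix U (c, n) for data X (n, d)."""
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            return inv / inv.sum(axis=0, keepdims=True)

        rng = np.random.default_rng(0)
        W = rng.standard_normal((3, 16)) * 0.01
        for x in rng.standard_normal((500, 16)):
            W = gha_step(W, x)
        U = fcm_memberships(rng.standard_normal((200, 3)), rng.standard_normal((4, 3)))
        print(W.shape, U.shape)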

  12. Parallel peak pruning for scalable SMP contour tree computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, Hamish A.; Weber, Gunther H.; Sewell, Christopher M.

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speedup in OpenMP and up to 50x speedup in NVIDIA Thrust.

  13. Striped tertiary storage arrays

    NASA Technical Reports Server (NTRS)

    Drapeau, Ann L.

    1993-01-01

    Data striping is a technique for increasing the throughput and reducing the response time of large accesses to a storage system. In striped magnetic or optical disk arrays, a single file is striped or interleaved across several disks; in a striped tape system, files are interleaved across tape cartridges. Because a striped file can be accessed by several disk drives or tape recorders in parallel, the sustained bandwidth to the file is greater than in non-striped systems, where accesses to the file are restricted to a single device. It is argued that applying striping to tertiary storage systems will provide needed performance and reliability benefits. The performance benefits of striping for applications using large tertiary storage systems are discussed. Commonly available tape drives and libraries are introduced, and their performance limitations are discussed, especially focusing on the long latency of tape accesses. This section also describes an event-driven tertiary storage array simulator that is being used to understand the best ways of configuring these storage arrays. The reliability problems of magnetic tape devices are discussed, and plans for modeling the overall reliability of striped tertiary storage arrays to identify the amount of error correction required are described. Finally, work being done by other members of the Sequoia group to address latency of accesses, optimizing tertiary storage arrays that perform mostly writes, and compression is discussed.
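
    As a hedged illustration of the interleaving described above, the sketch below maps a file's logical blocks onto a set of devices in round-robin stripe units; the device count and stripe unit size are arbitrary examples, not parameters from the simulator.

        # Round-robin mapping of logical blocks onto striped devices.
        # Device count and stripe unit are illustrative, not values from the report.

        def locate_block(logical_block, n_devices=4, stripe_unit_blocks=16):
            """Return (device index, block offset on that device) for a striped layout."""
            stripe_index = logical_block // stripe_unit_blocks
            within = logical_block % stripe_unit_blocks
            device = stripe_index % n_devices
            device_block = (stripe_index // n_devices) * stripe_unit_blocks + within
            return device, device_block

        for blk in (0, 15, 16, 63, 64):
            print(blk, locate_block(blk))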

  14. Autonomous Docking Based on Infrared System for Electric Vehicle Charging in Urban Areas

    PubMed Central

    Pérez, Joshué; Nashashibi, Fawzi; Lefaudeux, Benjamin; Resende, Paulo; Pollard, Evangeline

    2013-01-01

    Electric vehicles are progressively being introduced in urban areas because of their ability to reduce air pollution, fuel consumption and noise nuisance. Nowadays, some big cities are launching the first electric car-sharing projects to clear traffic jams and enhance urban mobility, as an alternative to the classic public transportation systems. However, there are still some problems to be solved related to energy storage, electric charging and autonomy. In this paper, we present an autonomous docking system for electric vehicle recharging based on an on-board infrared camera that detects infrared beacons installed in the infrastructure. A visual servoing system coupled with an automatic controller allows the vehicle to dock accurately to the recharging booth in a street parking area. The results show good behavior of the implemented system, which is currently deployed as a real prototype system in the city of Paris. PMID:23429581

  15. Autonomous docking based on infrared system for electric vehicle charging in urban areas.

    PubMed

    Pérez, Joshué; Nashashibi, Fawzi; Lefaudeux, Benjamin; Resende, Paulo; Pollard, Evangeline

    2013-02-21

    Electric vehicles are progressively being introduced in urban areas because of their ability to reduce air pollution, fuel consumption and noise nuisance. Nowadays, some big cities are launching the first electric car-sharing projects to clear traffic jams and enhance urban mobility, as an alternative to the classic public transportation systems. However, there are still some problems to be solved related to energy storage, electric charging and autonomy. In this paper, we present an autonomous docking system for electric vehicle recharging based on an on-board infrared camera that detects infrared beacons installed in the infrastructure. A visual servoing system coupled with an automatic controller allows the vehicle to dock accurately to the recharging booth in a street parking area. The results show good behavior of the implemented system, which is currently deployed as a real prototype system in the city of Paris.

  16. The Meeting Point: Where Language Production and Working Memory Share Resources.

    PubMed

    Ishkhanyan, Byurakn; Boye, Kasper; Mogensen, Jesper

    2018-06-07

    The interaction between working memory and language processing is widely discussed in cognitive research. However, those studies often explore the relationship between language comprehension and working memory (WM). The role of WM is rarely considered in language production, despite some evidence suggesting a relationship between the two cognitive systems. This study attempts to fill that gap by using a complex span task during language production. We make our predictions based on the reorganization of elementary functions neurocognitive model, a usage based theory about grammatical status, and language production models. In accordance with these theories, we expect an overlap between language production and WM at one or more levels of language planning. Our results show that WM is involved at the phonological encoding level of language production and that adding WM load facilitates language production, which leads us to suggest that an extra task-specific storage is being created while the task is performed.

  17. A Review of Water Reclamation Research in China Urban Landscape Design and Planning Practice

    NASA Astrophysics Data System (ADS)

    Gan, Wei; Zeng, Tianran

    2018-04-01

    With the continuously growing demand for a better living environment, more and more attention and effort have been paid to the improvement of the urban landscape. However, the expansion of green areas and water features comes at the cost of high consumption of water resources, which has become a prominent problem in cities that suffer from water shortage. At the same time, with water shortage and water environment deterioration problems shared globally, water conservation has become an inevitable choice for achieving sustainable social development. The urban landscape is not simply a consumer of water resources; it also has water-saving potential and can perform the function of water storage. Thus, recycling the limited water resources becomes a challenge for every landscape designer. This paper overviews existing reclaimed water recycling research in Chinese landscape design, and raises recommendations for future research and development.

  18. Design and deployment of an elastic network test-bed in IHEP data center based on SDN

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Qi, Fazhi; Chen, Gang

    2017-10-01

    High energy physics experiments produce huge amounts of raw data, and because network resources are shared, there is no guarantee of available bandwidth for each experiment, which may cause link congestion problems. On the other hand, with the development of cloud computing technologies, IHEP has established a cloud platform based on OpenStack which can ensure the flexibility of computing and storage resources, and more and more computing applications have been deployed on virtual machines established by OpenStack. However, under the traditional network architecture, network capacity cannot be acquired elastically, which becomes a bottleneck restricting the flexible application of cloud computing. In order to solve the above problems, we propose an elastic cloud data center network architecture based on SDN, and we also design a high performance controller cluster based on OpenDaylight. In the end, we present our current test results.

  19. Evaluation of the Huawei UDS cloud storage system for CERN specific data

    NASA Astrophysics Data System (ADS)

    Zotes Resines, M.; Heikkila, S. S.; Duellmann, D.; Adde, G.; Toebbicke, R.; Hughes, J.; Wang, L.

    2014-06-01

    Cloud storage is an emerging architecture aiming to provide increased scalability and access performance compared to more traditional solutions. CERN is evaluating this promise using Huawei UDS and OpenStack SWIFT storage deployments, focusing on the needs of high-energy physics. Both deployed setups implement S3, one of the protocols that are emerging as a standard in the cloud storage market. A set of client machines is used to generate I/O load patterns to evaluate the storage system performance. The presented read and write test results indicate scalability from both metadata and data perspectives. Further, the Huawei UDS cloud storage is shown to be able to recover from a major failure involving the loss of 16 disks. Finally, both cloud storage systems are demonstrated to function as back-end storage systems for a filesystem, which is used to deliver high energy physics software.
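
    The abstract does not show how the client load was generated; the sketch below gives a minimal S3-style write/read timing loop of the sort one might run against an S3-compatible endpoint, assuming the boto3 client library and using a hypothetical endpoint URL, bucket name and credentials.

        # Minimal S3-compatible write/read timing loop (hypothetical endpoint and credentials).

        import time
        import boto3

        s3 = boto3.client(
            "s3",
            endpoint_url="https://s3.example.org",       # hypothetical S3-compatible endpoint
            aws_access_key_id="ACCESS_KEY",
            aws_secret_access_key="SECRET_KEY",
        )

        payload = b"\0" * (8 * 1024 * 1024)               # 8 MiB object
        bucket, n_objects = "benchmark-bucket", 32

        start = time.time()
        for i in range(n_objects):
            s3.put_object(Bucket=bucket, Key=f"obj-{i:04d}", Body=payload)
        write_s = time.time() - start

        start = time.time()
        for i in range(n_objects):
            s3.get_object(Bucket=bucket, Key=f"obj-{i:04d}")["Body"].read()
        read_s = time.time() - start

        mib = n_objects * len(payload) / 2**20
        print(f"write {mib / write_s:.1f} MiB/s, read {mib / read_s:.1f} MiB/s")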

  20. Slow Dynamics Model of Compressed Air Energy Storage and Battery Storage Technologies for Automatic Generation Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat; Das, Trishna

    Increasing variable generation penetration and the consequent increase in short-term variability make energy storage technologies look attractive, especially in the ancillary market for providing frequency regulation services. This paper presents slow dynamics models for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess the system frequency response and quantify the benefits from storage technologies in providing regulation service. The paper also represents the slow dynamics model of the power system integrated with storage technologies in a complete state space form. The storage technologies have been integrated into the single-area IEEE 24-bus system, and a comparative study of various solution strategies, including transmission enhancement and combustion turbines, has been performed in terms of generation cycling and frequency response performance metrics.
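
    The report's full state-space model is not reproduced in the abstract; as a hedged stand-in, the sketch below shows the kind of first-order lag that is often used as a slow-dynamics representation of a storage unit following a regulation command, with the time constant and power limit chosen arbitrarily.

        # First-order lag response of a storage unit to an AGC regulation command.
        # Time constant and power limits are arbitrary illustration values.

        def simulate_storage(commands, time_constant_s=5.0, p_max_mw=20.0, dt_s=1.0):
            p = 0.0
            trace = []
            for p_cmd in commands:
                p_cmd = max(-p_max_mw, min(p_max_mw, p_cmd))
                p += dt_s / time_constant_s * (p_cmd - p)   # dP/dt = (P_cmd - P) / T
                trace.append(p)
            return trace

        step = [0.0] * 5 + [10.0] * 25
        for t, p in enumerate(simulate_storage(step)):
            if t % 5 == 0:
                print(f"t={t:2d}s  P={p:5.2f} MW")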

  1. Beam dynamics and expected performance of Sweden's new storage-ring light source: MAX IV

    NASA Astrophysics Data System (ADS)

    Leemann, S. C.; Andersson, Å.; Eriksson, M.; Lindgren, L.-J.; Wallén, E.; Bengtsson, J.; Streun, A.

    2009-12-01

    MAX IV will be Sweden’s next-generation high-performance synchrotron radiation source. The project has recently been granted funding and construction is scheduled to begin in 2010. User operation for a broad and international user community should commence in 2015. The facility is comprised of two storage rings optimized for different wavelength ranges, a linac-based short-pulse facility and a free-electron laser for the production of coherent radiation. The main radiation source of MAX IV will be a 528 m ultralow emittance storage ring operated at 3 GeV for the generation of high-brightness hard x rays. This storage ring was designed to meet the requirements of state-of-the-art insertion devices which will be installed in nineteen 5 m long dispersion-free straight sections. The storage ring is based on a novel multibend achromat design delivering an unprecedented horizontal bare lattice emittance of 0.33 nm rad and a vertical emittance below the 8 pm rad diffraction limit for 1 Å radiation. In this paper we present the beam dynamics considerations behind this storage-ring design and detail its expected unique performance.

  2. Integrating hydrologic modeling web services with online data sharing to prepare, store, and execute models in hydrology

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.

    2017-12-01

    Web based apps, web services and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, transparency and reproducibility of modeling workflows and results. However, challenges still exist in real application of these capabilities and the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality to support reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, model and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat or modify the modeling work. This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing system to enable model development and execution. The entire system comprised of the HydroShare app, HydroShare and HydroDS web services is open source and contributes to capability for web based modeling research.
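
    The exact HydroDS API is not described in the abstract, so the sketch below only illustrates the general pattern of driving a hydrologic processing web service and saving the returned file, using the requests library; the base URL, endpoint path, parameters, token and response field are all hypothetical.

        # Hypothetical example of calling a hydrologic-processing web service and
        # saving the returned file; URLs, parameters and the token are placeholders.

        import requests

        API = "https://hydrods.example.org/api"          # hypothetical service base URL
        headers = {"Authorization": "Token MY_TOKEN"}    # hypothetical credentials

        job = requests.post(
            f"{API}/snowmelt",
            json={"watershed": "logan_river", "start": "2016-10-01", "end": "2017-06-30"},
            headers=headers,
            timeout=60,
        )
        job.raise_for_status()
        result_url = job.json()["output_file"]           # assumed response field

        with requests.get(result_url, headers=headers, stream=True, timeout=60) as r:
            r.raise_for_status()
            with open("snowmelt_output.nc", "wb") as f:
                for chunk in r.iter_content(chunk_size=1 << 20):
                    f.write(chunk)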

  3. Microbial Diagnostic Array Workstation (MDAW): a web server for diagnostic array data storage, sharing and analysis

    PubMed Central

    Scaria, Joy; Sreedharan, Aswathy; Chang, Yung-Fu

    2008-01-01

    Background Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to traditional transcriptome arrays, due to the high-throughput nature of the arrays, the data analysis requirements still form a bottleneck for the widespread use of these diagnostic arrays. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Methods Microbial Diagnostic Array Workstation (MDAW) is a database-driven application designed in MS Access with a front end designed in ASP.NET. Conclusion MDAW is a new resource that is customised for the data analysis requirements of microbial diagnostic arrays. PMID:18811969

  4. Microbial Diagnostic Array Workstation (MDAW): a web server for diagnostic array data storage, sharing and analysis.

    PubMed

    Scaria, Joy; Sreedharan, Aswathy; Chang, Yung-Fu

    2008-09-23

    Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to traditional transcriptome arrays, due to the high-throughput nature of the arrays, the data analysis requirements still form a bottleneck for the widespread use of these diagnostic arrays. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Microbial Diagnostic Array Workstation (MDAW) is a database-driven application designed in MS Access with a front end designed in ASP.NET. MDAW is a new resource that is customised for the data analysis requirements of microbial diagnostic arrays.

  5. Image Engine: an object-oriented multimedia database for storing, retrieving and sharing medical images and text.

    PubMed Central

    Lowe, H. J.

    1993-01-01

    This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer to peer file sharing protocols. Image Engine supports both free text and controlled vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596

  6. Menu-driven cloud computing and resource sharing for R and Bioconductor.

    PubMed

    Bolouri, Hamid; Dulepet, Rajiv; Angerman, Michael

    2011-08-15

    We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. hbolouri@fhcrc.org.

  7. Building and managing high performance, scalable, commodity mass storage systems

    NASA Technical Reports Server (NTRS)

    Lekashman, John

    1998-01-01

    The NAS Systems Division has recently embarked on a significant new way of handling the mass storage problem. One of the basic goals of this new development is to build systems of very large capacity and high performance that still have the advantages of commodity products. The central design philosophy is to build storage systems the way the Internet was built: competitive, survivable, expandable, and wide open. The thrust of this paper is to describe the motivation for this effort, what we mean by commodity mass storage, what the implications are for a facility that takes this approach, and where we think it will lead.

  8. An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balman, Mehmet; Kosar, Tevfik

    Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long-term storage. In order to support increasingly data-intensive science, next generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher level meta-schedulers to use data placement as a service where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
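
    The scheduling algorithm itself is not given in the abstract; below is a hedged sketch of the core feasibility test an advance-reservation scheduler needs, checking whether a requested bandwidth fits under link capacity at every instant of the requested window given the reservations already accepted. Capacities and units are illustrative.

        # Feasibility check for an advance bandwidth reservation on a single link.
        # Capacities, units, and the reservation list are illustrative.

        def fits(existing, start, end, bandwidth, capacity):
            """existing: list of (start, end, bandwidth). True if the new request fits."""
            points = sorted({start, end, *(t for s, e, _ in existing for t in (s, e))})
            for t0, t1 in zip(points, points[1:]):
                if t1 <= start or t0 >= end:
                    continue                                   # outside the requested window
                used = sum(bw for s, e, bw in existing if s < t1 and e > t0)
                if used + bandwidth > capacity:
                    return False
            return True

        reservations = [(0, 50, 4.0), (30, 80, 3.0)]           # (start, end, Gb/s)
        print(fits(reservations, 40, 60, 2.0, capacity=10.0))  # True
        print(fits(reservations, 40, 60, 4.0, capacity=10.0))  # False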

  9. Performance evaluation of termite-mound clay, concrete and steel silos for the storage of maize grains in the humid tropics

    USDA-ARS?s Scientific Manuscript database

    Inadequate storage facilities have contributed to severe maize postharvest losses in many developing countries. This study determined the potential of termite mound clay (TMC), a readily-available material in Nigeria, as a construction material for storage silos. The performance of the TMC silo was ...

  10. High Performance Analytics with the R3-Cache

    NASA Astrophysics Data System (ADS)

    Eavis, Todd; Sayeed, Ruhan

    Contemporary data warehouses now represent some of the world’s largest databases. As these systems grow in size and complexity, however, it becomes increasingly difficult for brute force query processing approaches to meet the performance demands of end users. Certainly, improved indexing and more selective view materialization are helpful in this regard. Nevertheless, with warehouses moving into the multi-terabyte range, it is clear that the minimization of external memory accesses must be a primary performance objective. In this paper, we describe the R3-cache, a natively multi-dimensional caching framework designed specifically to support sophisticated warehouse/OLAP environments. The R3-cache is based upon an in-memory version of the R-tree that has been extended to support buffer pages rather than disk blocks. A key strength of the R3-cache is that it is able to utilize multi-dimensional fragments of previous query results so as to significantly minimize the frequency and scale of disk accesses. Moreover, the new caching model directly accommodates the standard relational storage model and provides mechanisms for pro-active updates that exploit the existence of query “hot spots”. The current prototype has been evaluated as a component of the Sidera DBMS, a “shared nothing” parallel OLAP server designed for multi-terabyte analytics. Experimental results demonstrate significant performance improvements relative to simpler alternatives.
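
    The R-tree machinery of the cache is beyond an abstract-length example, but the central idea of reusing multidimensional fragments of earlier query results can be illustrated, in a hedged way, as below: a cached fragment answers the part of a new query box that it covers, and only the uncovered remainder needs to go to disk. All box coordinates are made up.

        # Illustrative reuse of a cached multidimensional fragment for a new range query.
        # Boxes are (low, high) pairs per dimension; all coordinates are made up.

        def intersect(box_a, box_b):
            """Return the overlapping box, or None if the boxes are disjoint."""
            lo = [max(a[0], b[0]) for a, b in zip(box_a, box_b)]
            hi = [min(a[1], b[1]) for a, b in zip(box_a, box_b)]
            if any(l >= h for l, h in zip(lo, hi)):
                return None
            return list(zip(lo, hi))

        query = [(0, 10), (5, 15)]           # query box: (low, high) per dimension
        cached_fragment = [(0, 6), (0, 20)]  # box answered by an earlier query

        hit = intersect(query, cached_fragment)
        print("portion served from cache:", hit)   # [(0, 6), (5, 15)]; the rest goes to disk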

  11. HydroShare for iUTAH: Collaborative Publication, Interoperability, and Reuse of Hydrologic Data and Models for a Large, Interdisciplinary Water Research Project

    NASA Astrophysics Data System (ADS)

    Horsburgh, J. S.; Jones, A. S.

    2016-12-01

    Data and models used within the hydrologic science community are diverse. New research data and model repositories have succeeded in making data and models more accessible, but have been, in most cases, limited to particular types or classes of data or models, and also lack the collaborative and iterative functionality needed to enable shared data collection and modeling workflows. File sharing systems currently used within many scientific communities for private sharing of preliminary and intermediate data and modeling products do not support collaborative data capture, description, visualization, and annotation. More recently, hydrologic datasets and models have been cast as “social objects” that can be published, collaborated around, annotated, discovered, and accessed. Yet it can be difficult using existing software tools to achieve the kind of collaborative workflows and data/model reuse that many envision. HydroShare is a new, web-based system for sharing hydrologic data and models with specific functionality aimed at making collaboration easier and achieving new levels of interactive functionality and interoperability. Within HydroShare, we have developed new functionality for creating datasets, describing them with metadata, and sharing them with collaborators. HydroShare is enabled by a generic data model and content packaging scheme that supports describing and sharing diverse hydrologic datasets and models. Interoperability among the diverse types of data and models used by hydrologic scientists is achieved through the use of consistent storage, management, sharing, publication, and annotation within HydroShare. In this presentation, we highlight and demonstrate how the flexibility of HydroShare's data model and packaging scheme, HydroShare's access control and sharing functionality, and versioning and publication capabilities have enabled the sharing and publication of research datasets for a large, interdisciplinary water research project called iUTAH (innovative Urban Transitions and Arid-region Hydro-sustainability). We discuss the experiences of iUTAH researchers now using HydroShare to collaboratively create, curate, and publish datasets and models in a way that encourages collaboration, promotes reuse, and meets funding agency requirements.

  12. Test report : Raytheon / KTech RK30 Energy Storage System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, David Martin; Schenkman, Benjamin L.; Borneo, Daniel R.

    2013-10-01

    The Department of Energy Office of Electricity (DOE/OE), Sandia National Laboratories (SNL) and the Base Camp Integration Lab (BCIL) partnered together to incorporate an energy storage system into a microgrid-configured Forward Operating Base to reduce fossil fuel consumption and ultimately save lives. Energy storage vendors will be sending their systems to the SNL Energy Storage Test Pad (ESTP) for functional testing and then to the BCIL for performance evaluation. The technologies that will be tested are electro-chemical energy storage systems comprising lead acid, lithium-ion or zinc-bromide chemistries. Raytheon/KTech has developed an energy storage system that utilizes zinc-bromide flow batteries to save fuel on a military microgrid. This report contains the testing results and some limited analysis of the performance of the Raytheon/KTech Zinc-Bromide Energy Storage System.

  13. The effect of storage conditions on microbial community composition and biomethane potential in a biogas starter culture.

    PubMed

    Hagen, Live Heldal; Vivekanand, Vivekanand; Pope, Phillip B; Eijsink, Vincent G H; Horn, Svein J

    2015-07-01

    A new biogas process is initiated by adding a microbial community, typically in the form of a sample collected from a functional biogas plant. This inoculum has considerable impact on the initial performance of a biogas reactor, affecting parameters such as stability, biogas production yields and the overall efficiency of the anaerobic digestion process. In this study, we have analyzed changes in the microbial composition and performance of an inoculum during storage using barcoded pyrosequencing of bacterial and archaeal 16S ribosomal RNA (rRNA) genes, and determination of the biomethane potential, respectively. The inoculum was stored at room temperature, 4 and -20 °C for up to 11 months, and cellulose was used as a standard substrate to test the biomethane potential. Storage for up to 1 month resulted in similar final methane yields, but the rate of methane production was reduced by storage at -20 °C. Longer storage times resulted in reduced methane yields and slower production kinetics for all storage conditions, with room temperature and frozen samples consistently giving the best and worst performance, respectively. Both storage time and temperature affected the microbial community composition and methanogenic activity. In particular, fluctuations in the relative abundance of Bacteroidetes were observed. Interestingly, a shift from hydrogenotrophic methanogens to methanogens with the capacity to perform acetoclastic methanogenesis was observed upon prolonged storage. In conclusion, this study suggests that biogas inocula may be stored for up to 1 month with low loss of methanogenic activity, and identifies bacterial and archaeal species that are affected by storage.

  14. Battery and Thermal Energy Storage | Energy Systems Integration Facility |

    Science.gov Websites

    NREL evaluates the performance of grid-integrated battery and thermal energy storage technologies and is creating better materials for batteries and thermal storage devices.

  15. Mass storage: The key to success in high performance computing

    NASA Technical Reports Server (NTRS)

    Lee, Richard R.

    1993-01-01

    There are numerous High Performance Computing & Communications initiatives in the world today. All are determined to help solve some 'Grand Challenge' type of problem, but each appears to be dominated by the pursuit of higher and higher levels of CPU performance and interconnection bandwidth as the approach to success, without regard to the impact of Mass Storage. My colleagues and I at Data Storage Technologies believe that all will ultimately have their performance against their goals measured by their ability to efficiently store and retrieve the 'deluge of data' created by end-users who will be using these systems to solve Scientific Grand Challenge problems, and that the issue of Mass Storage will then become the determinant of success or failure in achieving each project's goals. In today's world of High Performance Computing and Communications (HPCC), the critical path to success in solving problems can only be traveled by designing and implementing Mass Storage Systems capable of storing and manipulating the truly 'massive' amounts of data associated with solving these challenges. Within my presentation I will explore this critical issue and hypothesize solutions to this problem.

  16. Energy Storage Thermal Performance | Transportation Research | NREL

    Science.gov Websites

    As the nation's recognized leader in battery thermal management research and development (R&D), NREL assesses the thermal behavior, capacity, lifespan, and overall performance of energy storage systems.

  17. A comparison of sample preparation methods for extracting volatile organic compounds (VOCs) from equine faeces using HS-SPME.

    PubMed

    Hough, Rachael; Archer, Debra; Probert, Christopher

    2018-01-01

    Disturbance to the hindgut microbiota can be detrimental to equine health. Metabolomics provides a robust approach to studying the functional aspect of hindgut microorganisms. Sample preparation is an important step towards achieving optimal results in the later stages of analysis. The preparation of samples is unique depending on the technique employed and the sample matrix to be analysed. Gas chromatography mass spectrometry (GCMS) is one of the most widely used platforms for the study of metabolomics, and until now an optimised method has not been developed for equine faeces. The aim was to compare sample preparation methods for extracting volatile organic compounds (VOCs) from equine faeces. Volatile organic compounds were determined by headspace solid phase microextraction gas chromatography mass spectrometry (HS-SPME-GCMS). Factors investigated were the mass of equine faeces, type of SPME fibre coating, vial volume and storage conditions. The resulting method differed from those developed for other species. Aliquots of 1000 or 2000 mg in 10 ml or 20 ml SPME headspace vials were optimal. From those tested, the extraction of VOCs should ideally be performed using a divinylbenzene-carboxen-polydimethysiloxane (DVB-CAR-PDMS) SPME fibre. Storage of faeces for up to 12 months at -80 °C shared a greater percentage of VOCs with a fresh sample than the equivalent stored at -20 °C. An optimised method for extracting VOCs from equine faeces using HS-SPME-GCMS has been developed and will act as a standard to enable comparisons between studies. This work has also highlighted storage conditions as an important factor to consider in experimental design for faecal metabolomics studies.

  18. PANDA: A distributed multiprocessor operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubb, P.

    1989-01-01

    PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.

  19. Integration of DICOM and openEHR standards

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Yao, Zhihong; Liu, Lei

    2011-03-01

    The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval and exchange of health data in electronic health records. Considering that the integration of DICOM and openEHR is beneficial to information sharing, we developed a method, on the basis of the XML-based DICOM format, of creating a DICOM Imaging Archetype in openEHR to enable the integration of DICOM and openEHR. Each DICOM file contains abundant imaging information. However, because reading a DICOM file involves looking up the DICOM Data Dictionary, the readability of a DICOM file has been limited. openEHR has innovatively adopted a two-level modeling method, dividing clinical information into a lower level, the information model, and an upper level, archetypes and templates. However, one critical challenge posed to the development of openEHR is the information sharing problem, especially in imaging information sharing. For example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of a DICOM file and the semantic interoperability of an openEHR file, we developed a method of mapping a DICOM file to an openEHR file by adopting the form of archetype defined in openEHR. Because an archetype has a tree structure, after mapping a DICOM file to an openEHR file, the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange without losing imaging information between the two standards.
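
    The paper's actual archetype definition is not reproduced in the abstract; as a rough illustration of mapping DICOM attributes into a structured, human-readable XML form, the sketch below reads a few standard DICOM tags with pydicom and emits a minimal XML tree whose element names are invented for this example, not taken from openEHR.

        # Rough sketch: read standard DICOM attributes with pydicom and emit a small XML
        # structure. Element names are invented here, not the openEHR archetype from the paper.

        import xml.etree.ElementTree as ET
        import pydicom

        ds = pydicom.dcmread("image.dcm")                 # any DICOM image file

        root = ET.Element("imaging_study")
        ET.SubElement(root, "patient_name").text = str(ds.get("PatientName", ""))
        ET.SubElement(root, "modality").text = str(ds.get("Modality", ""))
        ET.SubElement(root, "study_date").text = str(ds.get("StudyDate", ""))
        image = ET.SubElement(root, "image")
        ET.SubElement(image, "rows").text = str(ds.get("Rows", ""))
        ET.SubElement(image, "columns").text = str(ds.get("Columns", ""))

        ET.ElementTree(root).write("imaging_study.xml", encoding="utf-8", xml_declaration=True)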

  20. Long-Term Outcomes of Laser Prostatectomy for Storage Symptoms: Comparison of Serial 5-Year Followup Data between High Performance System Photoselective Vaporization and Holmium Laser Enucleation of the Prostate.

    PubMed

    Cho, Min Chul; Song, Won Hoon; Park, Juhyun; Cho, Sung Yong; Jeong, Hyeon; Oh, Seung-June; Paick, Jae-Seung; Son, Hwancheol

    2018-06-01

    We compared long-term storage symptom outcomes between photoselective laser vaporization of the prostate with a 120 W high performance system and holmium laser enucleation of the prostate. We also determined factors influencing postoperative improvement of storage symptoms in the long term. Included in our study were 266 men, including 165 treated with prostate photoselective laser vaporization using a 120 W high performance system and 101 treated with holmium laser enucleation of the prostate, on whom 60-month followup data were available. Outcomes were assessed serially 6, 12, 24, 36, 48 and 60 months postoperatively using the International Prostate Symptom Score, uroflowmetry and the serum prostate specific antigen level. Postoperative improvement in storage symptoms was defined as a 50% or greater reduction in the subtotal storage symptom score at each followup visit after surgery compared to baseline. Improvements in frequency, urgency, nocturia, subtotal storage symptom scores and the quality of life index were maintained up to 60 months after photoselective laser vaporization or holmium laser enucleation of the prostate. There was no difference in the degree of improvement in storage symptoms or the percent of patients with postoperative improvement in storage symptoms between the 2 groups throughout the long-term followup. However, the holmium laser group showed greater improvement in voiding symptoms and quality of life than the laser vaporization group. On logistic regression analysis a higher baseline subtotal storage symptom score and a higher BOOI (Bladder Outlet Obstruction Index) were the factors influencing the improvement in storage symptoms 5 years after prostate photoselective laser vaporization or holmium laser enucleation. Our serial followup data suggest that storage symptom improvement was maintained throughout the long-term postoperative period for prostate photoselective laser vaporization with a 120 W high performance system and holmium laser enucleation without any difference between the 2 surgeries. Also, more severe storage symptoms at baseline and a more severe BOOI predicted improved storage symptoms in the long term after each surgery. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  1. GeoSearch: a new virtual globe application for the submission, storage, and sharing of point-based ecological data

    NASA Astrophysics Data System (ADS)

    Cardille, J. A.; Gonzales, R.; Parrott, L.; Bai, J.

    2009-12-01

    How should researchers store and share data? For most of history, scientists with results and data to share have been mostly limited to books and journal articles. In recent decades, the advent of personal computers and shared data formats has made it feasible, though often cumbersome, to transfer data between individuals or among small groups. Meanwhile, the use of automatic samplers, simulation models, and other data-production techniques has increased greatly. The result is that there is more and more data to store, and a greater expectation that they will be available at the click of a button. In 10 or 20 years, will we still send emails to each other to learn about what data exist? The development and widespread familiarity with virtual globes like Google Earth and NASA WorldWind has created the potential, in just the last few years, to revolutionize the way we share data, search for and search through data, and understand the relationship between individual projects in research networks, where sharing and dissemination of knowledge is encouraged. For the last two years, we have been building the GeoSearch application, a cutting-edge online resource for the storage, sharing, search, and retrieval of data produced by research networks. Linking NASA’s WorldWind globe platform, the data browsing toolkit prefuse, and SQL databases, GeoSearch’s version 1.0 enables flexible searches and novel geovisualizations of large amounts of related scientific data. These data may be submitted to the database by individual researchers and processed by GeoSearch’s data parser. Ultimately, data from research groups gathered in a research network would be shared among users via the platform. Access is not limited to the scientists themselves; administrators can determine which data can be presented publicly and which require group membership. Under the auspices of the Canada’s Sustainable Forestry Management Network of Excellence, we have created a moderate-sized database of ecological measurements in forests; we expect to extend the approach to a Quebec lake research network encompassing decades of lake measurements. In this session, we will describe and present four related components of the new system: GeoSearch’s globe-based searching and display of scientific data; prefuse-based visualization of social connections among members of a scientific research network; geolocation of research projects using Google Spreadsheets, KML, and Google Earth/Maps; and collaborative construction of a geolocated database of research articles. Each component is designed to have applications for scientists themselves as well as the general public. Although each implementation is in its infancy, we believe they could be useful to other researcher networks.

  2. Performance evaluation of molten salt thermal storage systems

    NASA Astrophysics Data System (ADS)

    Kolb, G. J.; Nikolai, U.

    1987-09-01

    The molten salt thermal storage system located at the Central Receiver Test Facility (CRTF) was recently subjected to thermal performance tests. The system is composed of a hot storage tank containing molten nitrate salt at a temperature of 1050 F and a cold tank containing 550 F salt with associated valves and controls. It is rated at 7 MWht and was designed and installed by Martin Marietta Corporation in 1982. The results of these tests were used to accomplish four objectives: (1) to compare the current thermal performance of the system with the performance of the system soon after it was installed, (2) to validate a dynamic computer model of the system, (3) to obtain an estimate of an annual system efficiency for a hypothetical commercial-scale 1200 MWht system, and (4) to compare the performance of the CRTF system with thermal storage systems developed by the European solar community.

  3. Dynamic Enforcement of Knowledge-based Security Policies

    DTIC Science & Technology

    2011-04-05

    foster and maintain relationships by sharing information with friends and fans. These services store users’ personal information and use it to customize...Facebook selects ads based on age, gender, and even sexual preference [2]. Unfortunately, once personal information is collected, users have limited...could use a storage server (e.g., running on their home network) that handles personal † University of Maryland, Department of Computer Science

  4. The Role of Standards in Cloud-Computing Interoperability

    DTIC Science & Technology

    2012-10-01

    services are not shared outside the organization. CloudStack, Eucalyptus, HP, Microsoft, OpenStack , Ubuntu, and VMWare provide tools for building...center requirements • Developing usage models for cloud ven- dors • Independent IT consortium OpenStack http://www.openstack.org • Open-source...software for running private clouds • Currently consists of three core software projects: OpenStack Compute (Nova), OpenStack Object Storage (Swift

  5. Integrating Archaeological Modeling in DoD Cultural Resource Compliance

    DTIC Science & Technology

    2012-10-01

    that Eglin-area populations shared behaviors and values with other regions. Evidence for maize horticulture appears for the first time during the...ceramic containers, the introduction of maize agriculture, the practice of pipe smoking, the existence of more substantial sedentary villages with storage...excavations undertaken during survey to locate buried artifacts and features. Phase 2 is the phase after initial survey when STPs that yielded artifacts

  6. Post Your Digital Photos Online: Save Hard-Drive Space and Share Your Snapshots

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    Digital photographs can take up a lot of hard-drive space. In light of this fact, many people are choosing to store their photos online. There are several ways to store pictures on the Web, the most popular being online photo storage services. These services have many benefits. They offer a safe place for photos in the event that one's computer…

  7. Enabling Web-Based Analysis of CUAHSI HIS Hydrologic Data Using R and Web Processing Services

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Kadlec, J.; Bayles, M.; Seul, M.; Hooper, R. P.; Cummings, B.

    2015-12-01

    The CUAHSI Hydrologic Information System (CUAHSI HIS) provides open access to a large number of hydrological time series observation and modeled data from many parts of the world. Several software tools have been designed to simplify searching and access to the CUAHSI HIS datasets. These software tools include: Desktop client software (HydroDesktop, HydroExcel), developer libraries (WaterML R Package, OWSLib, ulmo), and the new interactive search website, http://data.cuahsi.org. An issue with using the time series data from CUAHSI HIS for further analysis by hydrologists (for example for verification of hydrological and snowpack models) is the large heterogeneity of the time series data. The time series may be regular or irregular, contain missing data, have different time support, and be recorded in different units. R is a widely used computational environment for statistical analysis of time series and spatio-temporal data that can be used to assess fitness and perform scientific analyses on observation data. R includes the ability to record a data analysis in the form of a reusable script. The R script together with the input time series dataset can be shared with other users, making the analysis more reproducible. The major goal of this study is to examine the use of R as a Web Processing Service for transforming time series data from the CUAHSI HIS and sharing the results on the Internet within HydroShare. HydroShare is an online data repository and social network for sharing large hydrological data sets such as time series, raster datasets, and multi-dimensional data. It can be used as a permanent cloud storage space for saving the time series analysis results. We examine the issues associated with running R scripts online: including code validation, saving of outputs, reporting progress, and provenance management. An explicit goal is that the script which is run locally should produce exactly the same results as the script run on the Internet. Our design can be used as a model for other studies that need to run R scripts on the web.
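
    A minimal sketch of the kind of harmonization such heterogeneous series require, written here with pandas rather than the study's R scripts: resample an irregular series onto a regular time support, fill only short gaps, and convert units. The variable names and the toy data are illustrative.

    ```python
    # Generic harmonization sketch (pandas, not the study's R script): put an
    # irregular observation series on a regular time support, fill short gaps,
    # and convert units so different series become comparable.
    import pandas as pd

    raw = pd.DataFrame({
        "time": pd.to_datetime(["2015-06-01 00:03", "2015-06-01 01:17", "2015-06-01 03:05"]),
        "discharge_cfs": [12.1, 11.8, 13.0],     # cubic feet per second
    }).set_index("time")

    regular = (raw["discharge_cfs"]
               .resample("1h").mean()            # regular hourly support
               .interpolate(limit=2))            # fill short gaps only
    discharge_cms = regular * 0.0283168          # convert ft^3/s to m^3/s
    print(discharge_cms)
    ```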

  8. The HydroShare Collaborative Repository for the Hydrology Community

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.

    2017-12-01

    HydroShare is an online, collaboration system for sharing of hydrologic data, analytical tools, and models. It supports the sharing of, and collaboration around, "resources" which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: Share your data and models with colleagues; Manage who has access to the content that you share; Share, access, visualize and manipulate a broad set of hydrologic data types and models; Use the web services application programming interface (API) to program automated and client access; Publish data and models and obtain a citable digital object identifier (DOI); Aggregate your resources into collections; Discover and access data and models published by others; Use web apps to visualize, analyze and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare highlighting our approach to making this system easy to use and serving the needs of the hydrology community represented by the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc. (CUAHSI). Metadata for uploaded files is harvested automatically or captured using easy to use web user interfaces. Users are encouraged to add or create resources in HydroShare early in the data life cycle. To encourage this we allow users to share and collaborate on HydroShare resources privately among individual users or groups, entering metadata while doing the work. HydroShare also provides enhanced functionality for users through web apps that provide tools and computational capability for actions on resources. HydroShare's architecture broadly is comprised of: (1) resource storage, (2) resource exploration website, and (3) web apps for actions on resources. System components are loosely coupled and interact through APIs, which enhances robustness, as components can be upgraded and advanced relatively independently. The full power of this paradigm is the extensibility it supports. Web apps are hosted on separate servers, which may be 3rd party servers. They are registered in HydroShare using a web app resource that configures the connectivity for them to be discovered and launched directly from resource types they are associated with.

  9. Public preferences for electronic health data storage, access, and sharing - evidence from a pan-European survey.

    PubMed

    Patil, Sunil; Lu, Hui; Saunders, Catherine L; Potoglou, Dimitris; Robinson, Neil

    2016-11-01

    To assess the public's preferences regarding potential privacy threats from devices or services storing health-related personal data. A pan-European survey based on a stated-preference experiment for assessing preferences for electronic health data storage, access, and sharing. We obtained 20 882 survey responses (94 606 preferences) from 27 EU member countries. Respondents recognized the benefits of storing electronic health information, with 75.5%, 63.9%, and 58.9% agreeing that storage was important for improving treatment quality, preventing epidemics, and reducing delays, respectively. Concerns about different levels of access by third parties were expressed by 48.9% to 60.6% of respondents. On average, compared to devices or systems that only store basic health status information, respondents preferred devices that also store identification data (coefficient/relative preference 95% CI = 0.04 [0.00-0.08], P = 0.034) and information on lifelong health conditions (coefficient = 0.13 [0.08 to 0.18], P < 0.001), but there was no evidence of this for devices with information on sensitive health conditions such as mental and sexual health and addictions (coefficient = -0.03 [-0.09 to 0.02], P = 0.24). Respondents were averse to their immediate family (coefficient = -0.05 [-0.05 to -0.01], P = 0.011) and home care nurses (coefficient = -0.06 [-0.11 to -0.02], P = 0.004) viewing this data, and strongly averse to health insurance companies (coefficient = -0.43 [-0.52 to -0.34], P < 0.001), private sector pharmaceutical companies (coefficient = -0.82 [-0.99 to -0.64], P < 0.001), and academic researchers (coefficient = -0.53 [-0.66 to -0.40], P < 0.001) viewing the data. Storing more detailed electronic health data was generally preferred, but respondents were averse to wider access to and sharing of this information. When developing frameworks for the use of electronic health data, policy makers should consider approaches that both highlight the benefits to the individual and minimize the perception of privacy risks. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  10. The Jade File System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rao, Herman Chung-Hwa

    1991-01-01

    File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its design. The prototype consists of interfaces to the Unix File System, the Sun Network File System, and the File Transfer Protocol.
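
    The private, logical name space can be pictured with a small sketch: logical prefixes map either to an underlying file service or to another logical name space, and resolution picks the longest matching mount point. This is our illustration of the idea, not Jade's implementation.

    ```python
    # Illustrative sketch of a Jade-style private, logical name space (names and
    # structure are ours, not the Jade implementation).
    class NameSpace:
        def __init__(self):
            self.mounts = {}                    # logical prefix -> backend or NameSpace

        def mount(self, prefix, target):
            self.mounts[prefix] = target

        def resolve(self, path):
            prefix = max((p for p in self.mounts if path.startswith(p)),
                         key=len, default=None)
            if prefix is None:
                raise FileNotFoundError(path)
            target, rest = self.mounts[prefix], path[len(prefix):]
            if isinstance(target, NameSpace):   # one logical space mounted in another
                return target.resolve(rest)
            return target, rest                 # (backend descriptor, relative path)

    home = NameSpace()
    home.mount("/nfs", ("sun-nfs", "server-a:/export"))
    home.mount("/ftp", ("ftp", "archive.example.org"))
    shared = NameSpace()
    shared.mount("/papers", ("unix-fs", "/var/shared/papers"))
    home.mount("/group", shared)
    print(home.resolve("/group/papers/jade.ps"))
    ```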

  11. Clearing your Desk! Software and Data Services for Collaborative Web Based GIS Analysis

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Gichamo, T.; Yildirim, A. A.; Liu, Y.

    2015-12-01

    Can your desktop computer crunch the large GIS datasets that are becoming increasingly common across the geosciences? Do you have access to or the know-how to take advantage of advanced high performance computing (HPC) capability? Web based cyberinfrastructure takes work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This talk will describe the HydroShare collaborative environment and web based services being developed to support the sharing and processing of hydrologic data and models. HydroShare supports the upload, storage, and sharing of a broad class of hydrologic data including time series, geographic features and raster datasets, multidimensional space-time data, and other structured collections of data. Web service tools and a Python client library provide researchers with access to HPC resources without requiring them to become HPC experts. This reduces the time and effort spent in finding and organizing the data required to prepare the inputs for hydrologic models and facilitates the management of online data and execution of models on HPC systems. This presentation will illustrate the use of web based data and computation services from both the browser and desktop client software. These web-based services implement the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation, generation of hydrology-based terrain information, and preparation of hydrologic model inputs. They allow users to develop scripts on their desktop computer that call analytical functions that are executed completely in the cloud, on HPC resources using input datasets stored in the cloud, without installing specialized software, learning how to use HPC, or transferring large datasets back to the user's desktop. These cases serve as examples for how this approach can be extended to other models to enhance the use of web and data services in the geosciences.
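
    The desktop-side pattern described above can be sketched as a short client script that submits a job to a remote processing service and polls for the result, so the heavy terrain analysis runs in the cloud. The endpoint, parameters, and job protocol below are hypothetical placeholders, not the actual TauDEM or HydroShare web service API.

    ```python
    # Hedged sketch of a cloud-processing client: submit a watershed-delineation
    # job and poll until it finishes. All URLs and field names are placeholders.
    import time
    import requests

    SERVICE = "https://example.org/taudem-api"      # placeholder URL

    def delineate_watershed(dem_resource_id, outlet_xy):
        job = requests.post(f"{SERVICE}/delineate",
                            json={"dem": dem_resource_id,
                                  "outlet": outlet_xy}).json()
        while True:
            status = requests.get(f"{SERVICE}/jobs/{job['id']}").json()
            if status["state"] in ("finished", "failed"):
                return status
            time.sleep(10)                          # poll until the remote job completes

    # result = delineate_watershed("logan-river-dem", (-111.78, 41.74))  # placeholder inputs
    ```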

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. L. Poe, Jr.; P.F. Wise

    The U.S. Department of Energy (DOE) is preparing a proposal to construct, operate and monitor, and eventually close a repository at Yucca Mountain in Nye County, Nevada, for the geologic disposal of spent nuclear fuel (SNF) and high-level radioactive waste (HLW). As part of this effort, DOE has prepared a viability assessment and an assessment of potential consequences that may exist if the repository is not constructed. The assessment of potential consequences if the repository is not constructed assumes that all SNF and HLW would be left at the generator sites. These include 72 commercial generator sites (three commercial facility pairs--Salem and Hope Creek, Fitzpatrick and Nine Mile Point, and Dresden and Morris--would share common storage due to their close proximity to each other) and five DOE sites across the country. DOE analyzed the environmental consequences of the effects of the continued storage of these materials at these sites in a report titled Continued Storage Analysis Report (CSAR; Reference 1). The CSAR analysis includes a discussion of the degradation of these materials when exposed to the environment. This document describes the environmental parameters that influence the degradation analyzed in the CSAR. These include temperature, relative humidity, precipitation chemistry (pH and chemical composition), annual precipitation rates, annual number of rain-days, and annual freeze/thaw cycles. The document also tabulates weather conditions for each storage site, evaluates the degradation of concrete storage modules and vaults in different regions of the country, and provides a thermal analysis of commercial SNF in storage.

  13. Strabo: An App and Database for Structural Geology and Tectonics Data

    NASA Astrophysics Data System (ADS)

    Newman, J.; Williams, R. T.; Tikoff, B.; Walker, J. D.; Good, J.; Michels, Z. D.; Ash, J.

    2016-12-01

    Strabo is a data system designed to facilitate digital storage and sharing of structural geology and tectonics data. The data system allows researchers to store and share field and laboratory data as well as construct new multi-disciplinary data sets. Strabo is built on graph database technology, as opposed to a relational database, which provides the flexibility to define relationships between objects of any type. This framework allows observations to be linked in a complex and hierarchical manner that is not possible in traditional database topologies. Thus, the advantage of the Strabo data structure is the ability of graph databases to link objects in both numerous and complex ways, in a manner that more accurately reflects the realities of collecting and organizing geological data sets. The data system is accessible via a mobile interface (iOS and Android devices) that allows these data to be stored, visualized, and shared during primary collection in the field or the laboratory. The Strabo Data System is underlain by the concept of a "Spot," which we define as any observation that characterizes a specific area. This can be anything from a strike and dip measurement of bedding to cross-cutting relationships between faults in complex dissected terrains. Each of these Spots can then contain other Spots and/or measurements (e.g., lithology, slickenlines, displacement magnitude). Hence, the Spot concept is applicable to all relationships and observation sets. Strabo is therefore capable of quantifying and digitally storing large spatial variations and complex geometries of naturally deformed rocks within hierarchically related maps and images. These approaches provide an observational fidelity comparable to a traditional field book, but with the added benefits of digital data storage, processing, and ease of sharing. This approach allows Strabo to integrate seamlessly into the workflow of most geologists. Future efforts will focus on extending Strabo to other sub-disciplines as well as developing a desktop system for the enhanced collection and organization of microstructural data.
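
    The Spot concept can be illustrated on a small in-memory graph, where networkx stands in for Strabo's actual graph database and the node and edge properties are invented for the example.

    ```python
    # Illustrative sketch of "Spots" as graph nodes linked by arbitrary, typed
    # relationships (contains, cross-cuts, ...). Schema and values are made up.
    import networkx as nx

    g = nx.MultiDiGraph()
    g.add_node("spot_outcrop_7", kind="spot", scale="outcrop")
    g.add_node("spot_bedding_7a", kind="spot", type="planar_orientation",
               strike=215, dip=38)
    g.add_node("spot_fault_7b", kind="spot", type="fault",
               slickenline_trend=140, displacement_m=0.4)

    # Spots contain, measure, or cross-cut other Spots; edges can carry any label.
    g.add_edge("spot_outcrop_7", "spot_bedding_7a", relation="contains")
    g.add_edge("spot_outcrop_7", "spot_fault_7b", relation="contains")
    g.add_edge("spot_fault_7b", "spot_bedding_7a", relation="cross_cuts")

    for u, v, data in g.edges(data=True):
        print(f"{u} --{data['relation']}--> {v}")
    ```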

  14. Archival storage solutions for PACS

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy

    1997-05-01

    Although there are many inhibitors to the widespread diffusion of PACS systems, one of them has been the lack of robust, cost-effective digital archive storage solutions. Moreover, an automated Nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on storage system throughput will be analyzed. The concept of automated migration of images from high-performance, high-cost storage devices to high-capacity, low-cost storage devices will be introduced as a viable way to minimize overall storage costs for an archive. The concept of access density will also be introduced and applied to the selection of the most cost-effective archive solution.

  15. Fabrication of Porous Carbon-based Nanostructure for Energy Storage and Transfer Applications

    DTIC Science & Technology

    2014-06-09

    in the voltage range of 3.0 to 0.005 V (versus Li/Li+). Cyclic voltammetry (CV) was performed on a computer controlled MacPile II unit (Biological...performed at current density of 37mAg–1, voltage: 3.0-0.005V vs. Li/Li+. Cyclic voltammetry was performed at a scan rate of 58 µs/V. Red plots...pseudocapacitve storage behaviour of the electrode.19 The Li storage mechanism of both electrodes can also be studied carefully by slow scanning cyclic

  16. The role of processing difficulty in the predictive utility of working memory span.

    PubMed

    Bunting, Michael

    2006-12-01

    Storage-plus-processing working memory span tasks (e.g., operation span [OSPAN]) are strong predictors of higher order cognition, including general fluid intelligence. This is due, in part, to the difficulty of the processing component. When the processing component prevents only articulatory rehearsal, but not executive attentional control, the predictive utility is attenuated. Participants in one experiment (N = 59) completed Raven's Advanced Progressive Matrices (RAPM) and multiple versions of OSPAN and probed recall (PR). A distractor task (high or low difficulty) was added to PR, and OSPAN's processing component was manipulated for difficulty. OSPAN and PR correlated with RAPM when the processing component took executive attentional control. These results are suggestive of resource sharing between processing and storage.

  17. Human β-glucuronidase: structure, function, and application in enzyme replacement therapy.

    PubMed

    Naz, Huma; Islam, Asimul; Waheed, Abdul; Sly, William S; Ahmad, Faizan; Hassan, Imtaiyaz

    2013-10-01

    Lysosomal storage diseases occur due to incomplete metabolic degradation of macromolecules by various hydrolytic enzymes in the lysosome. Despite structural differences, most of the lysosomal enzymes share many common features including a lysosomal targeting motif and phosphotransferase recognition sites. β-Glucuronidase (GUSB) is an important lysosomal enzyme involved in the degradation of glucuronate-containing glycosaminoglycan. The deficiency of GUSB causes mucopolysaccharidosis type VII (MPSVII), leading to lysosomal storage in the brain. GUSB is a well-studied protein for its expression, sequence, structure, and function. The purpose of this review is to summarize our current understanding of sequence, structure, function, and evolution of GUSB and its lysosomal enzyme targeting. Enzyme replacement therapy reported for this protein is also discussed.

  18. Space Geodesy and Geochemistry Applied to the Monitoring, Verification of Carbon Capture and Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swart, Peter

    2013-11-30

    This award was a training grant awarded by the U.S. Department of Energy (DOE). The purpose of this award was solely to provide training for two PhD graduate students for three years in the general area of carbon capture and storage (CCS). The training consisted of course work and conducting research in the area of CCS. Attendance at conferences was also encouraged as an activity and positive experience for students to learn the process of sharing research findings with the scientific community, and the peer review process. At the time of this report, both students have approximately two years remaining of their studies, so have not fully completed their scientific research projects.

  19. A Grid Connected Photovoltaic Inverter with Battery-Supercapacitor Hybrid Energy Storage

    PubMed Central

    Guerrero-Martínez, Miguel Ángel; Barrero-González, Fermín

    2017-01-01

    The power generation from renewable power sources is variable in nature, and may contain unacceptable fluctuations, which can be alleviated by using energy storage systems. However, the cost of batteries and their limited lifetime are serious disadvantages. To solve these problems, an improvement consisting in the collaborative association of batteries and supercapacitors has been studied. Nevertheless, these studies don’t address in detail the case of residential and large-scale photovoltaic systems. In this paper, a selected combined topology and a new control scheme are proposed to control the power sharing between batteries and supercapacitors. Also, a method for sizing the energy storage system together with the hybrid distribution based on the photovoltaic power curves is introduced. This innovative contribution not only reduces the stress levels on the battery, and hence increases its life span, but also provides constant power injection to the grid during a defined time interval. The proposed scheme is validated through detailed simulation and experimental tests. PMID:28800102
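
    One common way to realize the battery-supercapacitor power sharing described above is a frequency-based split, where a low-pass filter assigns the slow component of the photovoltaic fluctuation to the battery and the fast residual to the supercapacitor. The sketch below shows that generic technique; it is not necessarily the exact control scheme proposed in the paper, and the time constant and toy profile are illustrative assumptions.

    ```python
    # Generic frequency-based power split: a first-order low-pass filter sends the
    # slow share of the PV fluctuation to the battery and the fast residual to the
    # supercapacitor, reducing stress on the battery.
    import math

    def split_power(p_pv, dt=1.0, tau=60.0):
        """p_pv: PV power samples [kW]; dt: sample period [s]; tau: filter time constant [s]."""
        alpha = dt / (tau + dt)
        p_batt, p_sc, slow = [], [], p_pv[0]
        for p in p_pv:
            slow += alpha * (p - slow)        # low-pass filtered (battery) share
            p_batt.append(slow)
            p_sc.append(p - slow)             # high-frequency (supercapacitor) share
        return p_batt, p_sc

    # A noisy, partly cloudy PV profile as a toy input.
    pv = [5 + 2 * math.sin(t / 30) + (1.5 if 100 < t < 110 else 0) for t in range(300)]
    batt, sc = split_power(pv)
    print(f"battery peak {max(batt):.2f} kW, supercap peak {max(sc, key=abs):.2f} kW")
    ```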

  20. A Grid Connected Photovoltaic Inverter with Battery-Supercapacitor Hybrid Energy Storage.

    PubMed

    Miñambres-Marcos, Víctor Manuel; Guerrero-Martínez, Miguel Ángel; Barrero-González, Fermín; Milanés-Montero, María Isabel

    2017-08-11

    The power generation from renewable power sources is variable in nature, and may contain unacceptable fluctuations, which can be alleviated by using energy storage systems. However, the cost of batteries and their limited lifetime are serious disadvantages. To solve these problems, an improvement consisting in the collaborative association of batteries and supercapacitors has been studied. Nevertheless, these studies don't address in detail the case of residential and large-scale photovoltaic systems. In this paper, a selected combined topology and a new control scheme are proposed to control the power sharing between batteries and supercapacitors. Also, a method for sizing the energy storage system together with the hybrid distribution based on the photovoltaic power curves is introduced. This innovative contribution not only reduces the stress levels on the battery, and hence increases its life span, but also provides constant power injection to the grid during a defined time interval. The proposed scheme is validated through detailed simulation and experimental tests.

  1. Experimental investigation on the thermal performance of heat storage walls coupled with active solar systems

    NASA Astrophysics Data System (ADS)

    Zhao, Chunyu; You, Shijun; Zhu, Chunying; Yu, Wei

    2016-12-01

    This paper presents an experimental investigation of the performance of a system combining a low-temperature water wall radiant heating system and phase change energy storage technology with an active solar system. This system uses a thermal storage wall that is designed with multilayer thermal storage plates. The heat storage material is expanded graphite that absorbs a mixture of capric acid and lauric acid. An experiment is performed to study the actual effect. The following are studied under winter conditions: (1) the temperature of the radiation wall surface, (2) the melting status of the thermal storage material in the internal plate, (3) the density of the heat flux, and (4) the temperature distribution of the indoor space. The results reveal that the room temperature is controlled between 16 and 20 °C, and the thermal storage wall meets the heating and temperature requirements. The following are also studied under summer conditions: (1) the internal relationship between the indoor temperature distribution and the heat transfer within the regenerative plates during the day and (2) the relationship between the outlet air temperature and inlet air temperature in the thermal storage wall in cooling mode at night. The results indicate that the indoor temperature is approximately 27 °C, which satisfies the summer air-conditioning requirements.

  2. Adult age differences in the storage of information in working memory.

    PubMed

    Foos, P W; Wright, L

    1992-01-01

    The performance of 97 young and 91 old persons was compared to determine if a deficiency in working memory resources for processing, storage, or allocation could be detected. Persons simultaneously performed a storage and one of two processing tasks while instructed to allocate resources to processing, storage, or both tasks. The storage task involved remembering the names of one, three, or five persons. Processing tasks involved solving addition problems presented on flashcards or answering common knowledge questions. Results showed increased age differences on the storage task as demands for resources increased but no differences on processing tasks. Individuals seemed unable to allocate resources as instructed. A comparison of young-old and old-old groups showed the same results as those obtained comparing young and old groups and supported the hypothesis of a deficiency of storage, but not processing, resources in working memory for old, especially old-old, adults.

  3. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro

    The increasingly growing data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
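
    The blob abstraction under discussion can be reduced to a flat namespace with put/get/delete and no directories, permissions, or byte-range locking. The sketch below is illustrative only and is not the storage system evaluated in the paper.

    ```python
    # Minimal blob (object) store sketch: flat key namespace, whole-object
    # put/get/delete, and an etag-style checksum on write.
    import hashlib

    class BlobStore:
        def __init__(self):
            self._objects = {}                       # key -> bytes (flat namespace)

        def put(self, key: str, data: bytes) -> str:
            self._objects[key] = data
            return hashlib.sha256(data).hexdigest()  # checksum returned to the client

        def get(self, key: str) -> bytes:
            return self._objects[key]

        def delete(self, key: str) -> None:
            self._objects.pop(key, None)

    store = BlobStore()
    etag = store.put("sim/run42/field_t000.raw", b"\x00" * 1024)
    print(len(store.get("sim/run42/field_t000.raw")), etag[:12])
    ```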

  4. ClearedLeavesDB: an online database of cleared plant leaf images

    PubMed Central

    2014-01-01

    Background Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description The Cleared Leaf Image Database (ClearedLeavesDB), is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985

  5. ClearedLeavesDB: an online database of cleared plant leaf images.

    PubMed

    Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S

    2014-03-28

    Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB), is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.

  6. A Spatially-Registered, Massively Parallelised Data Structure for Interacting with Large, Integrated Geodatasets

    NASA Astrophysics Data System (ADS)

    Irving, D. H.; Rasheed, M.; O'Doherty, N.

    2010-12-01

    The efficient storage, retrieval and interactive use of subsurface data present great challenges in geodata management. Data volumes are typically massive, complex and poorly indexed with inadequate metadata. Derived geomodels and interpretations are often tightly bound in application-centric and proprietary formats; open standards for long-term stewardship are poorly developed. Consequently, current data storage is a combination of: complex Logical Data Models (LDMs) based on file storage formats; 2D GIS tree-based indexing of spatial data; and translations of serialised memory-based storage techniques into disk-based storage. Whilst adequate for working at the mesoscale over short timeframes, these approaches all possess technical and operational shortcomings: data model complexity; anisotropy of access; scalability to large and complex datasets; and weak implementation and integration of metadata. High-performance hardware such as parallelised storage and Relational Database Management Systems (RDBMSs) have long been exploited in many solutions, but the underlying data structure must provide commensurate efficiencies to allow multi-user, multi-application and near-realtime data interaction. We present an open Spatially-Registered Data Structure (SRDS) built on a Massively Parallel Processing (MPP) database architecture implemented by an ANSI SQL 2008-compliant RDBMS. We propose an LDM comprising a 3D Earth model that is decomposed such that each increasing Level of Detail (LoD) is achieved by recursively halving the bin size until it is less than the error in each spatial dimension for that data point. The value of an attribute at that point is stored as a property of that point and at that LoD. It is key to the numerical efficiency of the SRDS that it is underpinned by a power-of-two relationship, thus precluding the need for computationally intensive floating point arithmetic. Our approach employed a tightly clustered MPP array with small clusters of storage, processors and memory communicating over a high-speed network interconnect. This is a shared-nothing architecture where resources are managed within each cluster, unlike most other RDBMSs. Data are accessed on this architecture by their primary index values, which use the hashing algorithm for point-to-point access. The hashing algorithm's main role is the efficient distribution of data across the clusters based on the primary index. In this study we used 3D seismic volumes, 2D seismic profiles and borehole logs to demonstrate application in both (x,y,TWT) and (x,y,z)-space. In the SRDS the primary index is a composite column index of (x,y) to avoid invoking time-consuming full table scans as is the case in tree-based systems. This means that data access is isotropic. A query for data in a specified spatial range permits retrieval recursively by point-to-point queries within each nested LoD, yielding true linear performance up to the Petabyte scale, with hardware scaling presenting the primary limiting factor. Our architecture and LDM promote: realtime interaction with massive data volumes; streaming of result sets and server-rendered 2D/3D imagery; rigorous workflow control and auditing; and in-database algorithms run directly against data as an HPC cloud service.
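
    Two of the numerical ideas described above can be sketched directly, as we read them: a level of detail obtained by halving a power-of-two bin until it drops below the positional error in every dimension, and a hash of the composite (x,y) primary index used to spread points across parallel storage clusters. The parameter names and the starting bin size are illustrative assumptions, not values from the SRDS.

    ```python
    # Sketch of LoD selection by power-of-two halving and hash-based placement
    # across clusters (illustrative parameters, not the SRDS implementation).
    import hashlib

    def level_of_detail(err_x, err_y, err_z, top_bin=65536.0):
        """Return the LoD at which the bin size drops below every spatial error."""
        lod, bin_size = 0, top_bin
        while bin_size >= min(err_x, err_y, err_z):
            bin_size /= 2.0          # recursive halving; always a power of two
            lod += 1
        return lod, bin_size

    def home_cluster(x, y, n_clusters=32):
        """Hash the composite (x, y) primary index to a storage cluster."""
        key = f"{x:.3f},{y:.3f}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % n_clusters

    print(level_of_detail(0.5, 0.5, 0.1))     # e.g. a borehole log point
    print(home_cluster(431250.0, 6215125.0))
    ```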

  7. Remaining gaps for "safe" CO2 storage: the INGV CO2GAPS vision of "learning by doing" monitoring geogas leakage, reservoirs contamination/mixing and induced/triggered seismicity

    NASA Astrophysics Data System (ADS)

    Quattrocchi, F.; Vinciguerra, S.; Chiarabba, C.; Boschi, E.; Anselmi, M.; Burrato, P.; Buttinelli, M.; Cantucci, B.; Cinti, D.; Galli, G.; Improta, L.; Nazzari, M.; Pischiutta, M.; Pizzino, L.; Procesi, M.; Rovelli, A.; Sciarra, A.; Voltattorni, N.

    2012-12-01

    The CO2GAPS project proposed by INGV is intended to build up a European proposal for a new kind of research strategy in the field of geogas storage. The aim of the project would be to fill key gaps concerning the main risks associated with CO2 storage and their implications for the entire Carbon Capture and Storage (CCS) process, which are: i) geogas leakage both in soils and shallow aquifers, up to indoor seepage; ii) reservoir contamination/mixing by hydrocarbons and heavy metals; iii) induced or triggered seismicity and microseismicity, especially for seismogenic blind faults. In order to address such risks and make public acceptance of CCS easier, a new kind of research approach should be adopted, based on: i) better multi-disciplinary and "site specific" risk assessment; ii) the development of more reliable multi-disciplinary monitoring protocols. In this view, robust pre-injection baselines (seismicity and degassing) as well as identification and discrimination criteria for potential anomalies are mandatory. Dynamic modelling of CO2 injection presently does not consider reservoir geomechanical properties during large-scale reactive mass-transport simulations. Complex simulations of the simultaneous physico-chemical processes involving CO2-rich plumes that move, react and help to crack the reservoir rocks are not yet fully performed. These activities should not be accomplished only by the oil-gas/electric companies, since the experienced know-how should be shared among the CCS industrial operators and research institutions, with the governments' support and oversight, also flanked by a transparent and "peer reviewed" scientific popularization process. In this context, a preliminary and reliable 3D modelling of the entire "storage complex" as defined by the European Directive 31/2009 is strictly necessary, taking into account the above-mentioned geological, geochemical and geophysical risks. New scientific results could also highlight the opportunities recently shown by strategic research on synergies in the use of underground space (e.g., CH4 storage, CO2 storage and deep geothermics) for energy supply purposes. The CO2GAPS approach would merge geomechanical and geochemical data with seismic velocity and anisotropy properties of the crust, induced seismicity data, gravimetry, EM techniques, and "early alarm" procedures for leakage/crack detection in shallow geo-spheres (e.g., abandoned wells, naturally seismic and degassing zones). Moreover, a full merging of those data is necessary for a reliable 3D Earth modelling and the subsequent reactive transport simulations. The CO2GAPS vision would apply and verify these issues by working on several selected European sites, also taking into account complex systems such as "inland" active faulted blocks close to potential off-shore CO2 storage sites, ECBM fault-prone areas, "inland" injection test sites and faulted natural CO2 analogues. These activities focus on the study of the long-term fate of stored CO2, leakage mechanisms through the cap-rock and/or abandoned wells, well cement reactivity, as well as the effects of impurities in the CO2 streams, their removal costs, the use of tracers and the role of biota.

  8. Optical Data Storage Capabilities of Bacteriorhodopsin

    NASA Technical Reports Server (NTRS)

    Gary, Charles

    1998-01-01

    We present several measurements of the data storage capability of bacteriorhodopsin films to help establish the baseline performance of this material as a medium for holographic data storage. In particular, we examine the decrease in diffraction efficiency with the density of holograms stored at one location in the film, and we also analyze the recording schedule needed to produce a set of equal intensity holograms at a single location in the film. Using this information along with the assumptions about the performance of the optical system, we can estimate potential data storage densities in bacteriorhodopsin.

  9. The cost of getting CCS wrong: Uncertainty, infrastructure design, and stranded CO 2

    DOE PAGES

    Middleton, Richard Stephen; Yaw, Sean Patrick

    2018-01-11

    Carbon capture and storage (CCS) infrastructure will require industry—such as fossil-fuel power, ethanol production, and oil and gas extraction—to make massive investment in infrastructure. The cost of getting these investments wrong will be substantial and will impact the success of CCS technology. Multiple factors can and will impact the success of commercial-scale CCS, including significant uncertainties regarding capture, transport, and injection-storage decisions. Uncertainties throughout the CCS supply chain include policy, technology, engineering performance, economics, and market forces. In particular, large uncertainties exist for the injection and storage of CO2. Even taking into account upfront investment in site characterization, the final performance of the storage phase is largely unknown until commercial-scale injection has started. We explore and quantify the impact of getting CCS infrastructure decisions wrong based on uncertain injection rates and uncertain CO2 storage capacities using a case study managing CO2 emissions from the Canadian oil sands industry in Alberta. We use SimCCS, a widely used CCS infrastructure design framework, to develop multiple CCS infrastructure scenarios. Each scenario consists of a CCS infrastructure network that connects CO2 sources (oil sands extraction and processing) with CO2 storage reservoirs (acid gas storage reservoirs) using a dedicated CO2 pipeline network. Each scenario is analyzed under a range of uncertain storage estimates, and infrastructure performance is assessed and quantified in terms of the cost to build additional infrastructure to store all CO2. We also include the role of stranded CO2, CO2 that a source was expecting to capture but cannot due to substandard performance in the transport and storage infrastructure. Results show that the costs of getting the original infrastructure design wrong are significant and that comprehensive planning will be required to ensure that CCS becomes a successful climate mitigation technology. Here, we show that the concept of stranded CO2 can transform a seemingly high-performing infrastructure design into the worst-case scenario.

  10. The cost of getting CCS wrong: Uncertainty, infrastructure design, and stranded CO 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, Richard Stephen; Yaw, Sean Patrick

    Carbon capture and storage (CCS) infrastructure will require industry—such as fossil-fuel power, ethanol production, and oil and gas extraction—to make massive investment in infrastructure. The cost of getting these investments wrong will be substantial and will impact the success of CCS technology. Multiple factors can and will impact the success of commercial-scale CCS, including significant uncertainties regarding capture, transport, and injection-storage decisions. Uncertainties throughout the CCS supply chain include policy, technology, engineering performance, economics, and market forces. In particular, large uncertainties exist for the injection and storage of CO2. Even taking into account upfront investment in site characterization, the final performance of the storage phase is largely unknown until commercial-scale injection has started. We explore and quantify the impact of getting CCS infrastructure decisions wrong based on uncertain injection rates and uncertain CO2 storage capacities using a case study managing CO2 emissions from the Canadian oil sands industry in Alberta. We use SimCCS, a widely used CCS infrastructure design framework, to develop multiple CCS infrastructure scenarios. Each scenario consists of a CCS infrastructure network that connects CO2 sources (oil sands extraction and processing) with CO2 storage reservoirs (acid gas storage reservoirs) using a dedicated CO2 pipeline network. Each scenario is analyzed under a range of uncertain storage estimates, and infrastructure performance is assessed and quantified in terms of the cost to build additional infrastructure to store all CO2. We also include the role of stranded CO2, CO2 that a source was expecting to capture but cannot due to substandard performance in the transport and storage infrastructure. Results show that the costs of getting the original infrastructure design wrong are significant and that comprehensive planning will be required to ensure that CCS becomes a successful climate mitigation technology. Here, we show that the concept of stranded CO2 can transform a seemingly high-performing infrastructure design into the worst-case scenario.

  11. Files synchronization from a large number of insertions and deletions

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Kumari, Savera

    2017-11-01

    Synchronization between different versions of files is becoming a major issue that most applications face. To make applications more efficient, an economical algorithm is developed from the previously used “File Loading Algorithm”. I extend this algorithm in three ways: first, it deals with non-binary files; second, a backup is generated for uploaded files; and lastly, each file is synchronized across insertions and deletions. The user can reconstruct a file from the former file while minimizing the error, and interactive communication is provided without disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created and stored in a backup database and efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with a suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to local versions. Similarly, synchronization tools are necessary in mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
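
    A generic way to reconstruct file X from file Y, in the spirit of the protocol mentioned above, is delta encoding: the sender transmits only the edit operations and their payloads, and the receiver replays them against its local copy. The sketch below uses Python's difflib and is not the algorithm from the paper.

    ```python
    # Delta-based reconstruction sketch: host A sends the opcodes needed to turn
    # its old copy Y into the new copy X, and host B replays them locally.
    import difflib

    def make_delta(old_lines, new_lines):
        sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
        delta = []
        for tag, i1, i2, j1, j2 in sm.get_opcodes():
            payload = new_lines[j1:j2] if tag in ("replace", "insert") else None
            delta.append((tag, i1, i2, payload))   # only payloads travel over the wire
        return delta

    def apply_delta(old_lines, delta):
        out = []
        for tag, i1, i2, payload in delta:
            if tag == "equal":
                out.extend(old_lines[i1:i2])       # reuse local data, nothing sent
            elif tag in ("replace", "insert"):
                out.extend(payload)
            # "delete": emit nothing
        return out

    y = ["line 1\n", "line 2\n", "line 3\n"]
    x = ["line 1\n", "line 2 edited\n", "line 3\n", "line 4\n"]
    assert apply_delta(y, make_delta(y, x)) == x
    ```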

  12. Predicting long-term performance of engineered geologic carbon dioxide storage systems to inform decisions amidst uncertainty

    NASA Astrophysics Data System (ADS)

    Pawar, R.

    2016-12-01

    Risk assessment and risk management of engineered geologic CO2 storage systems is an area of active investigation. The potential geologic CO2 storage systems currently under consideration are inherently heterogeneous and have limited to no characterization data. Effective risk management decisions to ensure safe, long-term CO2 storage require assessing and quantifying risks while taking into account the uncertainties in a storage site's characteristics. The key decisions are typically related to the definition of the area of review, an effective monitoring strategy and monitoring duration, the potential for leakage and associated impacts, etc. A quantitative methodology for predicting a sequestration site's long-term performance is critical for making the key decisions necessary for successful deployment of commercial-scale geologic storage projects, where projects will require quantitative assessments of potential long-term liabilities. An integrated assessment modeling (IAM) paradigm, which treats a geologic CO2 storage site as a system made up of various linked subsystems, can be used to predict long-term performance. The subsystems include the storage reservoir, seals, potential leakage pathways (such as wellbores and natural fractures/faults), and receptors (such as shallow groundwater aquifers). CO2 movement within each of the subsystems and the resulting interactions are captured through reduced order models (ROMs). The ROMs capture the complex physical/chemical interactions resulting from CO2 movement while being computationally extremely efficient. The computational efficiency allows for performing the Monte Carlo simulations necessary for quantitative probabilistic risk assessment. We have used the IAM to predict long-term performance of geologic CO2 sequestration systems and to answer questions related to the probability of leakage of CO2 through wellbores, the impact of CO2/brine leakage into shallow aquifers, etc. Answers to such questions are critical in making key risk management decisions. A systematic uncertainty quantification approach can be used to understand how uncertain parameters associated with different subsystems (e.g., reservoir permeability, wellbore cement permeability, wellbore density, etc.) impact the overall site performance predictions.
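
    The following is a minimal sketch of the Monte Carlo style of probabilistic assessment the abstract describes: uncertain subsystem parameters are sampled and pushed through a fast surrogate to estimate leakage risk. The parameter ranges and the leaked_fraction surrogate are hypothetical stand-ins, not the paper's reduced order models.

```python
# Toy Monte Carlo uncertainty propagation through a stand-in surrogate model.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Sample uncertain subsystem parameters (ranges are illustrative assumptions).
reservoir_perm = rng.lognormal(mean=np.log(1e-13), sigma=0.5, size=n_samples)  # m^2
cement_perm = rng.lognormal(mean=np.log(1e-17), sigma=1.0, size=n_samples)     # m^2
well_density = rng.uniform(1, 10, size=n_samples)  # legacy wells per km^2

def leaked_fraction(res_k, cem_k, wells):
    """Toy surrogate: leakage grows with cement permeability and well density,
    and weakly with reservoir permeability. Not a physical ROM."""
    return 1e12 * cem_k * wells * (res_k / 1e-13) ** 0.2

leak = leaked_fraction(reservoir_perm, cement_perm, well_density)
print(f"P(leakage fraction > 1e-4) = {np.mean(leak > 1e-4):.3f}")
print(f"95th percentile leakage fraction = {np.percentile(leak, 95):.2e}")
```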

  13. An adaptive cryptographic accelerator for network storage security on dynamically reconfigurable platform

    NASA Astrophysics Data System (ADS)

    Tang, Li; Liu, Jing-Ning; Feng, Dan; Tong, Wei

    2008-12-01

    Existing security solutions in network storage environments perform poorly because cryptographic operations (encryption and decryption) implemented in software can dramatically reduce system performance. In this paper we propose a cryptographic hardware accelerator on a dynamically reconfigurable platform for the security of high-performance network storage systems. We employ a dynamically reconfigurable platform based on an FPGA to implement a PowerPC-based embedded system, which executes cryptographic algorithms. To reduce the reconfiguration latency, we apply prefetch scheduling. Moreover, the processing elements can be dynamically configured to support different cryptographic algorithms according to the requests received by the accelerator. In the experiments, we implemented the AES (Rijndael) and 3DES cryptographic algorithms in the reconfigurable accelerator. Our proposed reconfigurable cryptographic accelerator can dramatically increase performance compared with traditional software-based network storage systems.
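
    For context, the sketch below times a purely software AES path of the kind such an accelerator is meant to offload; the measured throughput illustrates the CPU cost that motivates hardware acceleration. It assumes the widely used Python "cryptography" package and is unrelated to the accelerator's own implementation.

```python
# Time a software AES-CTR pass over a buffer of "storage" data.
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
block = os.urandom(4 * 1024 * 1024)  # 4 MiB of random data standing in for I/O

start = time.perf_counter()
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(block) + encryptor.finalize()
elapsed = time.perf_counter() - start

print(f"software AES-CTR throughput: {len(block) / elapsed / 1e6:.1f} MB/s")
```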

  14. Assessment of Medication Use among University Students in Ethiopia

    PubMed Central

    2017-01-01

    Background. The extent, nature, and determinants of medication use of individuals can be known from drug utilization studies. Objectives. This study intended to determine medication consumption, sharing, storage, and disposal practices of university students in Northwest Ethiopia. Methods. A descriptive cross-sectional study was conducted on 404 university students selected through stratified random sampling technique. Data were collected using self-administered questionnaire and analyzed with SPSS version 20 statistical software. Pearson's Chi-square test of independence was conducted with P < 0.05 taken as statistically significant. Results. At 95.3% response rate, the prevalences of medication consumption and sharing were 35.3% (N = 136) and 38.2% (N = 147), respectively. One hundred (26%) respondents admitted that they often keep leftover medications for future use while the rest (N = 285, 74%) discard them primarily into toilets (N = 126, 44.2%). Evidence of association existed between medication taking and year of study (P = 0.048), medication sharing and sex (P = 0.003), and medication sharing and year of study (P = 0.015). Conclusion. There is a high prevalence of medication consumption, medication sharing, and inappropriate disposal practices which are influenced by sex and educational status of the university students. Thus medication use related educational interventions need to be given to students in general. PMID:28393101

  15. Assessment of Medication Use among University Students in Ethiopia.

    PubMed

    Asmelashe Gelayee, Dessalegn; Binega, Gashaw

    2017-01-01

    Background. The extent, nature, and determinants of medication use of individuals can be known from drug utilization studies. Objectives. This study intended to determine medication consumption, sharing, storage, and disposal practices of university students in Northwest Ethiopia. Methods. A descriptive cross-sectional study was conducted on 404 university students selected through stratified random sampling technique. Data were collected using self-administered questionnaire and analyzed with SPSS version 20 statistical software. Pearson's Chi-square test of independence was conducted with P < 0.05 taken as statistically significant. Results. At 95.3% response rate, the prevalences of medication consumption and sharing were 35.3% (N = 136) and 38.2% (N = 147), respectively. One hundred (26%) respondents admitted that they often keep leftover medications for future use while the rest (N = 285, 74%) discard them primarily into toilets (N = 126, 44.2%). Evidence of association existed between medication taking and year of study (P = 0.048), medication sharing and sex (P = 0.003), and medication sharing and year of study (P = 0.015). Conclusion. There is a high prevalence of medication consumption, medication sharing, and inappropriate disposal practices which are influenced by sex and educational status of the university students. Thus medication use related educational interventions need to be given to students in general.

  16. easyDAS: Automatic creation of DAS servers

    PubMed Central

    2011-01-01

    Background The Distributed Annotation System (DAS) has proven to be a successful way to publish and share biological data. Although there are more than 750 active registered servers from around 50 organizations, setting up a DAS server comprises a fair amount of work, making it difficult for many research groups to share their biological annotations. Given the clear advantage that generalized sharing of relevant biological data offers the research community, it would be desirable to facilitate the sharing process. Results Here we present easyDAS, a web-based system enabling anyone to publish biological annotations with just a few clicks. The system, available at http://www.ebi.ac.uk/panda-srv/easydas, is capable of reading different standard data file formats, processing the data, and creating a new publicly available DAS source in a completely automated way. The created sources are hosted on the EBI systems and can take advantage of its high storage capacity and network connection, freeing the data provider from any network management work. easyDAS is an open source project under the GNU LGPL license. Conclusions easyDAS is an automated DAS source creation system which can help many researchers share their biological data, potentially increasing the amount of relevant biological data available to the scientific community. PMID:21244646

  17. IHE profiles applied to regional PACS.

    PubMed

    Fernandez-Bayó, Josep

    2011-05-01

    PACS has been widely adopted as an image storage solution that perfectly fits the radiology department workflow and that can easily be extended to other hospital departments. Integrations with other hospital systems, such as the Radiology Information System, the Hospital Information System, and the Electronic Patient Record, have been achieved but remain challenging aims. PACS also creates the perfect environment for teleradiology and teleworking setups. One step further is the regional PACS concept, where different hospitals or healthcare enterprises share images in an integrated Electronic Patient Record. Among the different solutions available to share images between hospitals, the IHE (Integrating the Healthcare Enterprise) organization presents the Cross-Enterprise Document Sharing (XDS) profile, which allows sharing images from different hospitals even if they have different PACS vendors. Adopting XDS has multiple advantages: images do not need to be duplicated in a central archive to be shared among the different healthcare enterprises; they only need to be indexed and published in a central document registry. In the XDS profile, IHE defines the mechanisms to publish and index the images in the central document registry. It also defines the mechanisms that each hospital will use to retrieve those images regardless of the hospital PACS in which they are stored.

  18. Super NiCd Open-Circuit Storage and Low Earth Orbit (LEO) Life Test Evaluation

    NASA Technical Reports Server (NTRS)

    Baer, Jean Marie; Hwang, Warren C.; Ang, Valerie J.; Hayden, Jeff; Rao, Gopalakrishna; Day, John H. (Technical Monitor)

    2002-01-01

    This presentation discusses Air Force tests performed on super NiCd cells to measure their performance under conditions simulating Low Earth Orbit (LEO) conditions. Super NiCd cells offer potential advantages over existing NiCd cell designs including advanced cell design with improved separator material and electrode making processes, but handling and storage requires active charging. These tests conclude that the super NiCd cells support generic Air Force qualifications for conventional LEO missions (up to five years duration) and that handling and storage may not actually require active charging as previously assumed. Topics covered include: Test Plan, Initial Characterization Tests, Open-Circuit Storage Tests, and post storage capacities.

  19. Fuel Aging in Storage and Transportation (FAST): Accelerated Characterization and Performance Assessment of the Used Nuclear Fuel Storage System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDeavitt, Sean

    2016-08-02

    This Integrated Research Project (IRP) was established to characterize key limiting phenomena related to the performance of used nuclear fuel (UNF) storage systems. This was an applied engineering project with a specific application in view (i.e., UNF dry storage). The completed tasks made use of a mixture of basic science and engineering methods. The overall objective was to create, or enable the creation of, predictive tools in the form of observation methods, phenomenological models, and databases that will enable the design, installation, and licensing of dry UNF storage systems capable of containing UNF for extended periods of time.

  20. Shared performance monitor in a multiprocessor system

    DOEpatents

    Chiu, George; Gara, Alan G; Salapura, Valentina

    2014-12-02

    A performance monitoring unit (PMU) and method for monitoring performance of events occurring in a multiprocessor system. The multiprocessor system comprises a plurality of processor devices, each processor device generating signals representing occurrences of events in that device, and a single shared counter resource for performance monitoring. The performance monitoring unit is shared by all processor cores in the multiprocessor system. The PMU is further programmed to monitor event signals issued from non-processor devices.
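
    A rough software analogy of the idea, assuming nothing about the patented hardware: a single shared counter bank that serializes updates from multiple cores and from non-processor devices. Class, method, and event names are purely illustrative.

```python
# Conceptual sketch of one shared counter resource tagged by event source.
from collections import defaultdict
import threading

class SharedPerformanceMonitor:
    """One counter bank shared by all event sources in the system."""
    def __init__(self):
        self._counts = defaultdict(int)
        self._lock = threading.Lock()  # the shared resource must serialize updates

    def record(self, source_id, event):
        with self._lock:
            self._counts[(source_id, event)] += 1

    def read(self, source_id, event):
        with self._lock:
            return self._counts[(source_id, event)]

pmu = SharedPerformanceMonitor()
pmu.record("core0", "L2_miss")
pmu.record("dma0", "descriptor_done")   # non-processor devices can report too
print(pmu.read("core0", "L2_miss"))
```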

  1. Sirocco Storage Server v. pre-alpha 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew L.; Danielson, Geoffrey; Ward, H. Lee

    Sirocco is a parallel storage system under development, designed for write-intensive workloads on large-scale HPC platforms. It implements a key-value object store on top of a set of loosely federated storage servers that cooperate to ensure data integrity and performance. It includes support for a range of different types of storage transactions. This software release constitutes a conformant storage server, along with the client-side libraries to access the storage over a network.
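
    A hypothetical client-side view of a key-value object store with simple integrity checking, in the spirit of the description above. The API names are illustrative assumptions and are not Sirocco's actual interface.

```python
# Minimal key-value object store sketch with per-object integrity digests.
import hashlib

class ObjectStoreClient:
    def __init__(self):
        self._objects = {}  # stand-in for a set of federated storage servers

    def put(self, key: str, value: bytes) -> str:
        digest = hashlib.sha256(value).hexdigest()
        self._objects[key] = (value, digest)
        return digest

    def get(self, key: str) -> bytes:
        value, digest = self._objects[key]
        if hashlib.sha256(value).hexdigest() != digest:
            raise IOError(f"integrity check failed for {key}")
        return value

client = ObjectStoreClient()
client.put("checkpoint/0001", b"simulation state ...")
print(len(client.get("checkpoint/0001")))
```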

  2. Outperforming whom? A multilevel study of performance-prove goal orientation, performance, and the moderating role of shared team identification.

    PubMed

    Dietz, Bart; van Knippenberg, Daan; Hirst, Giles; Restubog, Simon Lloyd D

    2015-11-01

    Performance-prove goal orientation affects performance because it drives people to try to outperform others. A proper understanding of the performance-motivating potential of performance-prove goal orientation requires, however, that we consider the question of whom people desire to outperform. In a multilevel analysis of this issue, we propose that the shared team identification of a team plays an important moderating role here, directing the performance-motivating influence of performance-prove goal orientation to either the team level or the individual level of performance. A multilevel study of salespeople nested in teams supports this proposition, showing that performance-prove goal orientation motivates team performance more with higher shared team identification, whereas performance-prove goal orientation motivates individual performance more with lower shared team identification. Establishing the robustness of these findings, a second study replicates them with individual and team performance in an educational context.

  3. [Carbon capture and storage (CCS) and its potential role to mitigate carbon emission in China].

    PubMed

    Chen, Wen-Ying; Wu, Zong-Xin; Wang, Wei-Zhong

    2007-06-01

    Carbon capture and storage (CCS) has been widely recognized as one of the options to mitigate carbon emissions and eventually stabilize the carbon dioxide concentration in the atmosphere. The three parts of CCS, namely carbon capture, transport, and storage, are assessed in this paper, covering comparisons of techno-economic parameters for different carbon capture technologies, comparisons of storage mechanisms, capacity, and cost for various storage formations, etc. In addition, the role of CCS in mitigating global carbon emissions is introduced. Finally, the China MARKAL model is updated to include various CCS technologies, especially indirect coal liquefaction and poly-generation technologies with CCS, in order to consider carbon emission reduction as well as energy security issues. The model is used to generate different scenarios to study the potential role of CCS in mitigating carbon emissions by 2050 in China. It is concluded that application of CCS can decrease the marginal abatement cost, with the decrease reaching 45% for an emission reduction rate of 50%, and that it can lessen the dependence on nuclear power development under stringent carbon constraints. Moreover, coal resources can be cleanly used for a longer time with CCS; e.g., for scenario C70, the coal share in primary energy consumption by 2050 will increase from 10% without CCS to 30% with CCS. Therefore, China should pay attention to CCS R&D activities and to developing demonstration projects.

  4. Menu-driven cloud computing and resource sharing for R and Bioconductor

    PubMed Central

    Bolouri, Hamid; Angerman, Michael

    2011-01-01

    Summary: We report CRdata.org, a cloud-based, free, open-source web server for running analyses and sharing data and R scripts with others. In addition to using the free, public service, CRdata users can launch their own private Amazon Elastic Computing Cloud (EC2) nodes and store private data and scripts on Amazon's Simple Storage Service (S3) with user-controlled access rights. All CRdata services are provided via point-and-click menus. Availability and Implementation: CRdata is open-source and free under the permissive MIT License (opensource.org/licenses/mit-license.php). The source code is in Ruby (ruby-lang.org/en/) and available at: github.com/seerdata/crdata. Contact: hbolouri@fhcrc.org PMID:21685055

  5. OpenElectrophy: An Electrophysiological Data- and Analysis-Sharing Framework

    PubMed Central

    Garcia, Samuel; Fourcaud-Trocmé, Nicolas

    2008-01-01

    Progress in experimental tools and design is allowing the acquisition of increasingly large datasets. Storage, manipulation and efficient analyses of such large amounts of data is now a primary issue. We present OpenElectrophy, an electrophysiological data- and analysis-sharing framework developed to fill this niche. It stores all experiment data and meta-data in a single central MySQL database, and provides a graphic user interface to visualize and explore the data, and a library of functions for user analysis scripting in Python. It implements multiple spike-sorting methods, and oscillation detection based on the ridge extraction methods due to Roux et al. (2007). OpenElectrophy is open source and is freely available for download at http://neuralensemble.org/trac/OpenElectrophy. PMID:19521545

  6. Team Knowledge Sharing Intervention Effects on Team Shared Mental Models and Student Performance in an Undergraduate Science Course

    ERIC Educational Resources Information Center

    Sikorski, Eric G.; Johnson, Tristan E.; Ruscher, Paul H.

    2012-01-01

    The purpose of this study was to examine the effects of a shared mental model (SMM) based intervention on student team mental model similarity and ultimately team performance in an undergraduate meteorology course. The team knowledge sharing (TKS) intervention was designed to promote team reflection, communication, and improvement planning.…

  7. The Relationship between Shared Mental Models and Task Performance in an Online Team- Based Learning Environment

    ERIC Educational Resources Information Center

    Johnson, Tristan E.; Lee, Youngmin

    2008-01-01

    In an effort to better understand learning teams, this study examines the effects of shared mental models on team and individual performance. The results indicate that each team's shared mental model changed significantly over the time that subjects participated in team-based learning activities. The results also showed that the shared mental…

  8. Storage Characteristics of Lithium Ion Cells

    NASA Technical Reports Server (NTRS)

    Ratnakumar, B. V.; Smart, M. C.; Blosiu, J. O.; Surampudi, S.

    2000-01-01

    Lithium ion cells are being developed under the NASA/Air Force Consortium for upcoming aerospace missions. First among these missions are the Mars 2001 Lander and the Mars 2003 Lander and Rover missions. Apart from the usual needs of high specific energy, energy density, and long cycle life, a critical performance characteristic for the Mars missions is low temperature performance. The batteries need to perform well at -20 C, with at least 70% of the rated capacity realizable at moderate discharge rates (C/5). Several modifications have been made to the lithium ion chemistry, mainly with respect to the electrolyte, both at JPL and elsewhere, to achieve this. Another key requirement for the battery is its storageability during pre-cruise and cruise periods. For the Mars programs, the cruise period is relatively short, about 12 months, compared to the Outer Planets missions (3-8 years). Yet the initial results of our storage studies reveal that the cells do sustain noticeable permanent degradation under certain storage conditions, typically 10% over a two-month duration at ambient temperatures, attributed to impedance buildup. The buildup of the cell impedance, or the decay in the cell capacity, is affected by various storage parameters, i.e., storage temperature, storage duration, storage mode (open circuit, on bus, or cycling at low rates), and state of charge. Our preliminary studies indicate that low storage temperatures and states of charge are preferable. In some cases, we have observed permanent capacity losses of approx. 10% over eight-week storage at 40 C, compared to approx. 0-2% at 0 C. Also, we are attempting to determine the impact of cell chemistry and design upon the storageability of Li ion cells.

  9. Thermodynamic Performance and Cost Optimization of a Novel Hybrid Thermal-Compressed Air Energy Storage System Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houssainy, Sammy; Janbozorgi, Mohammad; Kavehpour, Pirouz

    Compressed Air Energy Storage (CAES) can potentially allow renewable energy sources to meet electricity demands as reliably as coal-fired power plants. However, conventional CAES systems rely on the combustion of natural gas, require large storage volumes, and operate at high pressures, which pose inherent problems such as high costs, strict geological siting requirements, and the production of greenhouse gas emissions. A novel and patented hybrid thermal-compressed air energy storage (HT-CAES) design is presented which allows a portion of the available energy, from the grid or renewable sources, to operate a compressor and the remainder to be converted and stored in the form of heat, through joule heating in a sensible thermal storage medium. The HT-CAES design includes a turbocharger unit that provides a supplementary mass flow rate alongside the air storage. The hybrid design and the addition of a turbocharger have the beneficial effect of mitigating the shortcomings of conventional CAES systems and their derivatives by eliminating combustion emissions and reducing storage volumes, operating pressures, and costs. Storage efficiency and cost are the two key factors which, upon integration with renewable energies, would allow the sources to operate as independent forms of sustainable energy. The potential of the HT-CAES design is illustrated through a thermodynamic optimization study, which identifies key variables that have a major impact on the performance and economics of the storage system. The optimization analysis quantifies the required distribution of energy between thermal and compressed air energy storage for maximum efficiency and for minimum cost. This study provides a roundtrip energy and exergy efficiency map of the storage system and illustrates a trade-off that exists between its capital cost and performance.

  10. Community Energy Storage Thermal Analysis and Management: Cooperative Research and Development Final Report, CRADA Number CRD-11-445

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A.

    The goal of this project is to create thermal solutions and models for community energy storage devices using both purpose-designed batteries and EV or PHEV batteries. Modeling will be employed to identify major factors of a device's lifetime and performance. Simultaneously, several devices will be characterized to determine their electrical and thermal performance under controlled conditions. After the factors are identified, a variety of thermal design approaches will be evaluated to improve the performance of energy storage devices. Upon completion of this project, recommendations for community energy storage device enclosures, thermal management systems, and/or battery sourcing will be made. NREL's interest is in both new and aged batteries.

  11. 30 CFR 57.6800 - Storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Storage facilities. 57.6800 Section 57.6800...-Surface and Underground § 57.6800 Storage facilities. When repair work which could produce a spark or flame is to be performed on a storage facility— (a) The explosive material shall be moved to another...

  12. 30 CFR 57.6800 - Storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Storage facilities. 57.6800 Section 57.6800...-Surface and Underground § 57.6800 Storage facilities. When repair work which could produce a spark or flame is to be performed on a storage facility— (a) The explosive material shall be moved to another...

  13. 30 CFR 57.6800 - Storage facilities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Storage facilities. 57.6800 Section 57.6800...-Surface and Underground § 57.6800 Storage facilities. When repair work which could produce a spark or flame is to be performed on a storage facility— (a) The explosive material shall be moved to another...

  14. 30 CFR 57.6800 - Storage facilities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Storage facilities. 57.6800 Section 57.6800...-Surface and Underground § 57.6800 Storage facilities. When repair work which could produce a spark or flame is to be performed on a storage facility— (a) The explosive material shall be moved to another...

  15. 30 CFR 57.6800 - Storage facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Storage facilities. 57.6800 Section 57.6800...-Surface and Underground § 57.6800 Storage facilities. When repair work which could produce a spark or flame is to be performed on a storage facility— (a) The explosive material shall be moved to another...

  16. Digital Rocks Portal: a sustainable platform for imaged dataset sharing, translation and automated analysis

    NASA Astrophysics Data System (ADS)

    Prodanovic, M.; Esteva, M.; Hanlon, M.; Nanda, G.; Agarwal, P.

    2015-12-01

    Recent advances in imaging have provided a wealth of 3D datasets that reveal pore space microstructure (nm to cm length scale) and allow investigation of nonlinear flow and mechanical phenomena from first principles using numerical approaches. This framework has popularly been called "digital rock physics". Researchers, however, have trouble storing and sharing the datasets, both due to their size and the lack of standardized image types and associated metadata for volumetric datasets. This impedes scientific cross-validation of the numerical approaches that characterize large-scale porous media properties, as well as development of the multiscale approaches required for correct upscaling. A single research group typically specializes in an imaging modality and/or related modeling on a single length scale, and the lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open, and easy-to-use repository called the Digital Rocks Portal that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geosciences or engineering researchers not necessarily trained in computer science or data analysis. Once widely accepted, the repository will jumpstart productivity and enable scientific inquiry and engineering decisions founded on a data-driven basis. This is the first repository of its kind. We show initial results on incorporating essential software tools and pipelines that make it easier for researchers to store and reuse data, and for educators to quickly visualize and illustrate concepts to a wide audience. For data sustainability and continuous access, the portal is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative.

  17. DNA degrades during storage in formalin-fixed and paraffin-embedded tissue blocks.

    PubMed

    Guyard, Alice; Boyez, Alice; Pujals, Anaïs; Robe, Cyrielle; Tran Van Nhieu, Jeanne; Allory, Yves; Moroch, Julien; Georges, Odette; Fournet, Jean-Christophe; Zafrani, Elie-Serge; Leroy, Karen

    2017-10-01

    Formalin-fixed paraffin-embedded (FFPE) tissue blocks are widely used to identify clinically actionable molecular alterations or perform retrospective molecular studies. Our goal was to quantify degradation of DNA occurring during mid to long-term storage of samples in usual conditions. We selected 46 FFPE samples of surgically resected carcinomas of lung, colon, and urothelial tract, of which DNA had been previously extracted. We performed a second DNA extraction on the same blocks under identical conditions after a median period of storage of 5.5 years. Quantitation of DNA by fluorimetry showed a 53% decrease in DNA quantity after storage. Quantitative PCR (qPCR) targeting KRAS exon 2 showed delayed amplification of DNA extracted after storage in all samples but one. The qPCR/fluorimetry quantification ratio decreased from 56 to 15% after storage (p < 0.001). Overall, remaining proportion of DNA analyzable by qPCR represented only 11% of the amount obtained at first extraction. Maximal length of amplifiable DNA fragments assessed with a multiplex PCR was reduced in DNA extracted from stored tissue, indicating that DNA fragmentation had increased in the paraffin blocks during storage. Next-generation sequencing was performed on 12 samples and showed a mean 3.3-fold decrease in library yield and a mean 4.5-fold increase in the number of single-nucleotide variants detected after storage. In conclusion, we observed significant degradation of DNA extracted from the same FFPE block after 4 to 6 years of storage. Better preservation strategies should be considered for storage of FFPE biopsy specimens.

  18. Terrestrial Energy Storage SPS Systems

    NASA Technical Reports Server (NTRS)

    Brandhorst, Henry W., Jr.

    1998-01-01

    Terrestrial energy storage systems for the SSP system were evaluated that could maintain the 1.2 GW power level during periods of brief outages from the solar powered satellite (SPS). Short-term outages of ten minutes and long-term outages up to four hours have been identified as "typical" cases where the ground-based energy storage system would be required to supply power to the grid. These brief interruptions in transmission could result from performing maintenance on the solar power satellite or from safety considerations necessitating the power beam be turned off. For example, one situation would be to allow for the safe passage of airplanes through the space occupied by the beam. Under these conditions, the energy storage system needs to be capable of storing 200 MW-hrs and 4.8 GW-hrs, respectively. The types of energy storage systems to be considered include compressed air energy storage, inertial energy storage, electrochemical energy storage, superconducting magnetic energy storage, and pumped hydro energy storage. For each of these technologies, the state-of-the-art in terms of energy and power densities were identified as well as the potential for scaling to the size systems required by the SSP system. Other issues addressed included the performance, life expectancy, cost, and necessary infrastructure and site locations for the various storage technologies.

  19. Understanding I/O workload characteristics of a Peta-scale storage system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF flagship petascale simulation platform, Titan, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for the peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
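
    A small sketch of the modeling result mentioned above: fitting a Pareto distribution to request inter-arrival times and drawing a synthetic workload from the fit. The data here is synthetic and the parameters are illustrative, not the Spider measurements.

```python
# Fit a Pareto distribution to (synthetic) inter-arrival times and resample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic inter-arrival times (seconds) standing in for observed traces.
inter_arrival = (rng.pareto(a=1.8, size=50_000) + 1) * 0.002

# Fit a Pareto distribution with the location fixed at zero.
shape, loc, scale = stats.pareto.fit(inter_arrival, floc=0)
print(f"fitted shape b = {shape:.2f}, scale = {scale:.4f}")

# Draw a synthetic workload from the fitted model.
synthetic = stats.pareto.rvs(shape, loc=loc, scale=scale,
                             size=10_000, random_state=0)
print(f"mean inter-arrival: observed {inter_arrival.mean():.4f}s, "
      f"synthetic {synthetic.mean():.4f}s")
```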

  20. High-performance metadata indexing and search in petascale data storage systems

    NASA Astrophysics Data System (ADS)

    Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.

    2008-07-01

    Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
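
    As a toy illustration of the kind of query a file metadata search system serves, the sketch below indexes file metadata into SQLite and answers an administrator-style question. It only illustrates the use case; Spyglass's actual design partitions the namespace and exploits storage-system properties rather than using a relational database.

```python
# Index file metadata and run a storage-administrator style query.
import os, sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE metadata
              (path TEXT PRIMARY KEY, size INTEGER, mtime REAL, owner TEXT)""")
db.execute("CREATE INDEX idx_size_mtime ON metadata(size, mtime)")

def index_tree(root):
    """Walk a directory tree and record path, size, mtime, and owner."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            db.execute("INSERT OR REPLACE INTO metadata VALUES (?,?,?,?)",
                       (full, st.st_size, st.st_mtime, str(st.st_uid)))
    db.commit()

index_tree(".")
# Example query: the largest files, oldest first, a typical reclamation question.
rows = db.execute("""SELECT path, size FROM metadata
                     ORDER BY size DESC, mtime ASC LIMIT 10""").fetchall()
print(rows)
```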

  1. Goddard Conference on Mass Storage Systems and Technologies, Volume 1

    NASA Technical Reports Server (NTRS)

    Kobler, Ben (Editor); Hariharan, P. C. (Editor)

    1993-01-01

    Copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in Sep. 1992 are included. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems (data ingestion rates now approach the order of terabytes per day). Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low end helical scan tape drives. Additional topics addressed the evolution of the identifiable unit for processing purposes as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

  2. Determination of Duty Cycle for Energy Storage Systems in a Renewables (Solar) Firming Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoenwald, David A.; Ellison, James

    2016-04-01

    This report supplements the document, “Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems,” issued in a revised version in April 2016, which will include the renewables (solar) firming application for an energy storage system (ESS). This report provides the background and documentation associated with the determination of a duty cycle for an ESS operated in a renewables (solar) firming application for the purpose of measuring and expressing ESS performance in accordance with the ESS performance protocol.

  3. Solar Total Energy Project (STEP) Performance Analysis of High Temperature Energy Storage Subsystem

    NASA Technical Reports Server (NTRS)

    Moore, D. M.

    1984-01-01

    The 1982 milestones and lessons learned; performance in 1983; a typical day's operation; collector field performance and thermal losses; and formal testing are highlighted. An initial test that involves characterizing the high temperature storage (HTS) subsystem is emphasized. The primary element is an 11,000-gallon storage tank that provides energy to the steam generator during transient solar conditions or extends operating time. Overnight thermal losses were analyzed. The length of time the system is operated at various levels of cogeneration using stored energy is reviewed.

  4. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  5. Performance characteristics of lithium primary cells after controlled storage. [on-orbit for energy power supply

    NASA Technical Reports Server (NTRS)

    Deligiannis, F.; Shen, D. H.; Halpert, G.; Ang, V.; Donley, S.

    1991-01-01

    A program was initiated to investigate the effects of storage on the performance of lithium primary cells. Two types of liquid cathode cells were chosen to investigate these effects. The cell types included Li-SOCl2/BCX cells, Li-SO2 cells from two different manufacturers, and a small sample size of 8-year-old Li-SO2 cells. The following measurements are performed at each test interval: open circuit voltage, resistance and weight, microcalorimetry, ac impedance, capacity, and voltage delay. The authors examine the performance characteristics of these cells after one year of controlled storage at two temperatures (10 and 30 C). The Li-SO2 cells experienced little to no voltage and capacity degradation after one year storage. The Li-SOCl2/BCX cells exhibited significant voltage and capacity degradation after 30 C storage. Predischarging shortly prior to use appears to be an effective method of reducing the initial voltage drop. Studies are in progress to correlate ac impedance and microcalorimetry measurements with capacity losses and voltage delay.

  6. Augmentation of Rocket Propulsion: Physical Limits

    NASA Technical Reports Server (NTRS)

    Taylor, Charles R.

    1996-01-01

    Rocket propulsion is not ideal when the propellant is not ejected at a unique velocity in an inertial frame. An ideal velocity distribution requires that the exhaust velocity vary linearly with the velocity of the vehicle in an inertial frame. It also requires that the velocity distribution variance, as a thermodynamic quantity, be minimized. A rocket vehicle with an inert propellant is not optimal, because it does not take advantage of the propellant mass for energy storage. Nor is it logical to provide another energy storage device in order to realize variable exhaust velocity, because it would have to be partly unfilled at the beginning of the mission. Performance is enhanced by pushing on the surroundings because it increases the reaction mass and decreases the reaction jet velocity. This decreases the fraction of the energy taken away by the propellant and increases the share taken by the payload. For an optimal model with the propellant used as fuel, the augmentation realized by pushing on air is greatest for vehicles with a low initial/final mass ratio. For a typical vehicle in the Earth's atmosphere, the augmentation is seen mainly at altitudes below about 80 km. When drag is taken into account, there is a well-defined optimum size for the air intake. Pushing on air has the potential to increase the performance of rockets which pass through the atmosphere. This is apart from benefits derived from "air breathing", or using the oxygen in the atmosphere to reduce the mass of an on-board oxidizer. Because of the potential of these measures, it is vital to model these effects more carefully and explore technology that may realize their advantages.

  7. Use of Optical Storage Devices as Shared Resources in Local Area Networks

    DTIC Science & Technology

    1989-09-01

    [Excerpt consists of table-of-contents entries: Service Calls for MS-DOS CD-ROM Extensions; MS-DOS Primitive Groups; RAM Usage for Various LAN...; Service Call Translation to DOS Primitives; MS-DOS Device Drivers.] ...directed to I/O devices will be referred to as primitive instruction groups. These primitive instruction groups include keyboard, video, disk, serial

  8. Support increased adoption of green infrastructure into ...

    EPA Pesticide Factsheets

    This project will provide technical assistance to support implementation of GI in U.S. communities and information on best practices for GI approaches that protect ground water supplies. Case studies that can be more broadly applied to other communities will be conducted. The project will provide program and regional offices with guidance on GI planning, implementation, and maintenance for stormwater management and capture/aquifer storage. To share information on SSWR research projects

  9. Atom Interferometry on Atom Chips - A Novel Approach Towards Precision Inertial Navigation System - PINS

    DTIC Science & Technology

    2010-06-01

    [Excerpt includes references: "Demonstration of an area-enclosing guided-atom interferometer for rotation sensing," Phys. Rev. Lett. 99, 173201 (2007); "Heralded Single-Magnon Quantum..."] ...excitations are quantized spin waves (magnons), such that transitions between its energy levels (magnon number states) correspond to highly directional... polarization storage in the form of a single collective-spin excitation (magnon) that is shared between two spatially overlapped atomic ensembles

  10. DISN Forecast to Industry

    DTIC Science & Technology

    2008-08-08

    Ms. Cindy E. Moran, Director for Network Services, 8 August 2008: DISN (Defense Information System Network) Forecast to Industry. [Excerpt consists largely of Report Documentation Page form boilerplate.] Integrated DISN Services by 2016: A Solid Goal. Network Aware Applications, Common Storage & Retrieval, Shared Long

  11. Privacy protection in HealthGrid: distributing encryption management over the VO.

    PubMed

    Torres, Erik; de Alfonso, Carlos; Blanquer, Ignacio; Hernández, Vicente

    2006-01-01

    Grid technologies have proven to be very successful in tackling challenging problems in which data access and processing is a bottleneck. Notwithstanding the benefits that Grid technologies could have in health applications, privacy leakages in current DataGrid technologies, due to the sharing of data in VOs and the use of remote resources, compromise their widespread adoption. Privacy control for Grid technology has become a key requirement for the adoption of Grids in the healthcare sector. Encrypted storage of confidential data effectively reduces the risk of disclosure. A self-enforcing scheme for encrypted data storage can be achieved by combining Grid security systems with distributed key management and classical cryptography techniques. Virtual Organizations, as the main unit of user management in Grid, can provide a way to organize key sharing, access control lists, and secure encryption management. This paper provides programming models and discusses the value, costs, and behavior of such a system implemented on top of one of the latest Grid middlewares. This work is partially funded by the Spanish Ministry of Science and Technology in the frame of the project Investigación y Desarrollo de Servicios GRID: Aplicación a Modelos Cliente-Servidor, Colaborativos y de Alta Productividad, with reference TIC2003-01318.
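
    A minimal sketch of the self-enforcing, client-side encryption idea: data is encrypted before it leaves the client, so the storage element only ever sees ciphertext. Key distribution across the VO is out of scope here; the snippet assumes the Python "cryptography" package and is not the system described in the paper.

```python
# Encrypt a record on the client before it is written to shared storage.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in a VO this would come from the shared key service
cipher = Fernet(key)

record = b"patient-id:12345; study:MRI-head; ..."
ciphertext = cipher.encrypt(record)  # what actually gets written to grid storage

# Only holders of the key (authorized VO members) can recover the record.
assert cipher.decrypt(ciphertext) == record
```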

  12. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities

    PubMed Central

    2010-01-01

    Background Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. Results We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. Conclusions The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities. PMID:20482787

  13. Web Mapping Architectures Based on Open Specifications and Free and Open Source Software in the Water Domain

    NASA Astrophysics Data System (ADS)

    Arias Muñoz, C.; Brovelli, M. A.; Kilsedar, C. E.; Moreno-Sanchez, R.; Oxoli, D.

    2017-09-01

    The availability of water-related data and information across different geographical and jurisdictional scales is of critical importance for the conservation and management of water resources in the 21st century. Today information assets are often found fragmented across multiple agencies that use incompatible data formats and procedures for data collection, storage, maintenance, analysis, and distribution. The growing adoption of Web mapping systems in the water domain is reducing the gap between data availability and its practical use and accessibility. Nevertheless, more attention must be given to the design and development of these systems to achieve high levels of interoperability and usability while fulfilling different end user informational needs. This paper first presents a brief overview of technologies used in the water domain, and then presents three examples of Web mapping architectures based on free and open source software (FOSS) and the use of open specifications (OS) that address different users' needs for data sharing, visualization, manipulation, scenario simulations, and map production. The purpose of the paper is to illustrate how the latest developments in OS for geospatial and water-related data collection, storage, and sharing, combined with the use of mature FOSS projects facilitate the creation of sophisticated interoperable Web-based information systems in the water domain.

  14. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities.

    PubMed

    Tolopko, Andrew N; Sullivan, John P; Erickson, Sean D; Wrobel, David; Chiang, Su L; Rudnicki, Katrina; Rudnicki, Stewart; Nale, Jennifer; Selfors, Laura M; Greenhouse, Dara; Muhlich, Jeremy L; Shamu, Caroline E

    2010-05-18

    Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.

  15. Metallic phase change material thermal storage for Dish Stirling

    DOE PAGES

    Andraka, C. E.; Kruizenga, A. M.; Hernandez-Sanchez, B. A.; ...

    2015-06-05

    Dish-Stirling systems provide high-efficiency solar-only electrical generation and currently hold the world record at 31.25%. This high efficiency results in a system with a high possibility of meeting the DOE SunShot goal of $0.06/kWh. However, current dish-Stirling systems do not incorporate thermal storage. For the next generation of non-intermittent and cost-competitive solar power plants, we propose adding a thermal energy storage system that combines latent (phase-change) energy transport and latent energy storage in order to match the isothermal input requirements of Stirling engines while also maximizing the exergetic efficiency of the entire system. This paper reports current findings in the areas of selection, synthesis, and evaluation of a suitable high performance metallic phase change material (PCM) as well as potential interactions with containment alloy materials. The metallic PCMs, while more expensive than salts, have been identified as having substantial performance advantages, primarily due to high thermal conductivity, leading to high exergetic efficiency. Systems modeling has indicated, based on high dish-Stirling system performance, an allowable cost of the PCM storage system that is substantially higher than SunShot goals for storage cost on tower systems. Several PCMs are identified with suitable melting temperature, cost, and performance.

  16. The storage system of PCM based on random access file system

    NASA Astrophysics Data System (ADS)

    Han, Wenbing; Chen, Xiaogang; Zhou, Mi; Li, Shunfen; Li, Gezi; Song, Zhitang

    2016-10-01

    Emerging memory technologies such as phase change memory (PCM) tend to offer fast, random access to persistent storage with better scalability. Establishing PCM in the storage hierarchy to narrow the performance gap is a hot topic of academic and industrial research. However, existing file systems do not perform well with emerging PCM storage, because they access the storage medium via a slow, block-based interface. In this paper, we propose a novel file system, RAFS, built on an embedded platform, to bring out the performance of PCM. We attach PCM chips to the memory bus and build RAFS on the physical address space. In the proposed file system, we simplify the traditional system architecture to eliminate block-related operations and layers. Furthermore, we adopt memory mapping and bypass the page cache to reduce copy overhead between the process address space and the storage device. XIP mechanisms are also supported in RAFS. To the best of our knowledge, we are among the first to implement a file system on real PCM chips. We have analyzed and evaluated its performance with the IOZONE benchmark tools. Our experimental results show that RAFS on PCM outperforms Ext4fs on SDRAM for small record lengths. Based on DRAM, RAFS is significantly faster than Ext4fs, by 18% to 250%.
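
    The sketch below illustrates the memory-mapped, page-cache-bypassing access pattern that a PCM-oriented file system can exploit: once a file is mapped, reads and writes are plain loads and stores rather than block I/O. The PCM-backed mount path is hypothetical, and this is ordinary POSIX mmap rather than RAFS itself.

```python
# Map a (hypothetically PCM-backed) file and access it with loads/stores.
import mmap, os

path = "/mnt/pcm/sensor.log"                 # hypothetical PCM-backed mount
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, 4096)                       # reserve one page

with mmap.mmap(fd, 4096) as buf:
    buf[0:5] = b"hello"                      # store directly into the mapping
    print(bytes(buf[0:5]))                   # load it back without read() syscalls
os.close(fd)
```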

  17. General consumer communication tools for improved image management and communication in medicine.

    PubMed

    Rosset, Chantal; Rosset, Antoine; Ratib, Osman

    2005-12-01

    We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environment. These new technologies developed for communication and storage of data can improve the user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication systems (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, a MP3 portable player with a hard disk storage, to easily and quickly move large number of DICOM images. (2) We adopted iChat, a videoconference and instant-messaging software, to transmit DICOM images in real time to a distant computer for conferencing teleradiology. (3) Finally, we developed a direct secure interface to use the iDisk service, a file-sharing service based on the WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated in a new open-source image navigation and display software called OsiriX allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features such as teleradiology, secure and convenient image and data communication, and the use of external data storage services open the gate to a much broader extension of our imaging infrastructure to the outside world.

  18. Lysosomal abnormalities in hereditary spastic paraplegia types SPG15 and SPG11

    PubMed Central

    Renvoisé, Benoît; Chang, Jaerak; Singh, Rajat; Yonekawa, Sayuri; FitzGibbon, Edmond J; Mankodi, Ami; Vanderver, Adeline; Schindler, Alice B; Toro, Camilo; Gahl, William A; Mahuran, Don J; Blackstone, Craig; Pierson, Tyler Mark

    2014-01-01

    Objective Hereditary spastic paraplegias (HSPs) are among the most genetically diverse inherited neurological disorders, with over 70 disease loci identified (SPG1-71) to date. SPG15 and SPG11 are clinically similar, autosomal recessive disorders characterized by progressive spastic paraplegia along with thin corpus callosum, white matter abnormalities, cognitive impairment, and ophthalmologic abnormalities. Furthermore, both have been linked to early-onset parkinsonism. Methods We describe two new cases of SPG15 and investigate cellular changes in SPG15 and SPG11 patient-derived fibroblasts, seeking to identify shared pathogenic themes. Cells were evaluated for any abnormalities in cell division, DNA repair, endoplasmic reticulum, endosomes, and lysosomes. Results Fibroblasts prepared from patients with SPG15 have selective enlargement of LAMP1-positive structures, and they consistently exhibited abnormal lysosomal storage by electron microscopy. A similar enlargement of LAMP1-positive structures was also observed in cells from multiple SPG11 patients, though prominent abnormal lysosomal storage was not evident. The stabilities of the SPG15 protein spastizin/ZFYVE26 and the SPG11 protein spatacsin were interdependent. Interpretation Emerging studies implicating these two proteins in interactions with the late endosomal/lysosomal adaptor protein complex AP-5 are consistent with shared abnormalities in lysosomes, supporting a converging mechanism for these two disorders. Recent work with Zfyve26−/− mice revealed a similar phenotype to human SPG15, and cells in these mice had endolysosomal abnormalities. SPG15 and SPG11 are particularly notable among HSPs because they can also present with juvenile parkinsonism, and this lysosomal trafficking or storage defect may be relevant for other forms of parkinsonism associated with lysosomal dysfunction. PMID:24999486

  19. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE PAGES

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large-scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps toward realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes and, compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.
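
    A toy sketch of placing named data objects into a multi-tier storage hierarchy by access temperature, echoing the object abstraction described above. Tier names, capacities, and the greedy policy are illustrative assumptions, not the paper's placement strategy.

```python
# Greedy placement of data objects across a multi-tier storage hierarchy.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    capacity_gb: float
    used_gb: float = 0.0

@dataclass
class DataObject:
    name: str
    size_gb: float
    hot: bool  # frequently accessed objects prefer the faster tier

def place(objects, tiers):
    """Hot objects fill from the fastest tier down; cold objects fill from
    the slowest tier up. Returns a mapping of object name to tier name."""
    placement = {}
    for obj in sorted(objects, key=lambda o: not o.hot):  # hot objects first
        search = tiers if obj.hot else list(reversed(tiers))
        for tier in search:
            if tier.used_gb + obj.size_gb <= tier.capacity_gb:
                tier.used_gb += obj.size_gb
                placement[obj.name] = tier.name
                break
    return placement

tiers = [Tier("burst-buffer", 10), Tier("parallel-fs", 1000), Tier("campaign", 10000)]
objects = [DataObject("checkpoint-42", 8, hot=True),
           DataObject("particle-dump", 500, hot=False)]
print(place(objects, tiers))
```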

  20. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wadhwa, Bharti; Byna, Suren; Butt, Ali R.

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large-scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps toward realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes and, compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.
