Shared Storage Usage Policy | High-Performance Computing | NREL
To use NREL's high-performance computing (HPC) systems, you must abide by the Shared Storage Usage Policy. /projects: NREL HPC allocations include storage space in the /projects filesystem. However, /projects is a shared resource and project…
Cake: Enabling High-level SLOs on Shared Storage Systems
2012-11-07
Andrew Wang, Shivaram Venkataraman, Sara Alspaugh, Randy H. Katz, Ion Stoica. Electrical Engineering and Computer Sciences, University of California, Berkeley.
NASA Technical Reports Server (NTRS)
Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.
1996-01-01
The Global File System (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility such that the previous disadvantages of shared-disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies, whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across the processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of in the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
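The device-maintained lock idea can be miniaturized in Python: the "device" below serializes read-modify-write operations on its own blocks, so concurrent clients never interleave an update. This is purely illustrative; GFS prototyped the mechanism with lock support on the storage devices themselves, not host-side mutexes.

```python
import threading

class SharedDevice:
    """Toy model of GFS-style device-side locking: the storage device,
    not a central file server, serializes read-modify-write on its blocks."""
    def __init__(self, nblocks):
        self._locks = [threading.Lock() for _ in range(nblocks)]
        self._blocks = [0] * nblocks

    def read_modify_write(self, block, fn):
        with self._locks[block]:          # lock maintained by the device
            self._blocks[block] = fn(self._blocks[block])
            return self._blocks[block]

dev = SharedDevice(8)
threads = [threading.Thread(target=lambda: dev.read_modify_write(3, lambda v: v + 1))
           for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(dev._blocks[3])  # 100: all client updates were applied atomically
```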
Parallel checksumming of data chunks of a shared data object using a log-structured file system
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-09-06
Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
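A minimal Python sketch of the write/read paths the abstract describes: the client computes a checksum per chunk, stores it beside the chunk, and verifies it on read. The store and method names are invented for illustration and are not the PLFS or burst-buffer API.

```python
import hashlib

class ChecksummedStore:
    """Toy object store that keeps a checksum beside each data chunk."""
    def __init__(self):
        self._chunks = {}  # (object_id, offset) -> (data, checksum)

    def put_chunk(self, object_id, offset, data: bytes):
        checksum = hashlib.sha256(data).hexdigest()  # client-side checksum
        self._chunks[(object_id, offset)] = (data, checksum)

    def get_chunk(self, object_id, offset) -> bytes:
        data, stored = self._chunks[(object_id, offset)]
        if hashlib.sha256(data).hexdigest() != stored:  # verify on read
            raise IOError("checksum mismatch: chunk is corrupt")
        return data

store = ChecksummedStore()
store.put_chunk("ckpt.0001", 0, b"simulation state bytes")
assert store.get_chunk("ckpt.0001", 0) == b"simulation state bytes"
```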
Parallel compression of data chunks of a shared data object using a log-structured file system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Grider, Gary
2016-10-25
Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File System techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
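A sketch of the idea in Python: each chunk is compressed independently (so writers can work in parallel) and decompressed independently on read. The chunk size and pool usage are illustrative, not the patented mechanism.

```python
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    # Each writer compresses its own chunk before shipping it to storage.
    return zlib.compress(chunk, level=6)

if __name__ == "__main__":  # guard required for process pools on some OSes
    data = bytes(range(256)) * 4096              # ~1 MB of sample data
    size = 64 * 1024
    chunks = [data[i:i + size] for i in range(0, len(data), size)]

    with ProcessPoolExecutor() as pool:          # compress chunks in parallel
        compressed = list(pool.map(compress_chunk, chunks))

    # On read, each chunk is decompressed independently by the client.
    restored = b"".join(zlib.decompress(c) for c in compressed)
    assert restored == data
```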
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.
A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
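The "abstract storage interface" pattern is easy to picture in code: metadata servers program against a small put/get interface, and any shared key-value backend can be plugged in behind it. The class and key names below are invented for illustration.

```python
from abc import ABC, abstractmethod

class KeyValueMetadataStore(ABC):
    """Abstract storage interface a metadata server might program against."""
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(KeyValueMetadataStore):
    """Stand-in backend; a real deployment would wrap a shared low-latency
    key-value store accessed by several metadata servers at once."""
    def __init__(self):
        self._kv = {}
    def put(self, key, value):
        self._kv[key] = value
    def get(self, key):
        return self._kv[key]

store = InMemoryStore()
store.put("/projects/alpha/file1:inode", b'{"size": 4096, "mode": "0644"}')
print(store.get("/projects/alpha/file1:inode"))
```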
Fujiwara, M.; Waseda, A.; Nojima, R.; Moriai, S.; Ogata, W.; Sasaki, M.
2016-01-01
Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers, and channels to link them. In such a system, a secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored on each server. To reconstruct the data, the owner must gather multiple pieces. Shamir's (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k of them must be gathered for reconstruction, furnishes information-theoretic security; that is, even if attackers could collect fewer shares than the threshold k, they could not get any information about the data, even with unlimited computing power. Behind this scenario, however, it is assumed that data transmission and authentication are perfectly secure, which is not trivial in practice. Here we propose a totally information-theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km). PMID:27363566
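For readers unfamiliar with the primitive, here is a minimal Python sketch of Shamir's (k, n)-threshold scheme over a prime field. It shows only the splitting and reconstruction arithmetic; it does not reflect the paper's password authentication or QKD-protected transmission, and the prime and secret are illustrative.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```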
The Design and Application of Data Storage System in Miyun Satellite Ground Station
NASA Astrophysics Data System (ADS)
Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu
2015-04-01
China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon by a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on the Stornext shared file system in the Chang'E-3 mission, and the system's performance fully meets the data storage requirements of the station. Stornext is a high-performance shared file system that allows multiple servers running different operating systems to access the file system at the same time, and it supports access to data over a variety of topologies, such as SAN and LAN. Stornext focuses on data protection and big data management; Quantum has announced that it has sold more than 70,000 Stornext licenses worldwide and that its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun ground station are the reception of Chang'E-3 downlink data and the management of local data storage. The station completes exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control of the data receiving equipment. The station applied the Stornext-based SAN storage network system to receive and manage these data reliably. The computer system at the Miyun ground station is composed of operational servers, application workstations, and storage equipment, so the storage system needs a shared file system that supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s, so the network throughput of the file system must be at least 240 MB/s; the maximum size of a single data file is up to 810 GB. As integrated, the shared system can provide a simultaneous write speed of 1020 MB/s. When the master storage server fails, the backup storage server takes over; client reads and writes are not affected, and the switch-over time is less than 5 s. The design and the integrated storage system thus meet users' requirements. Nevertheless, an all-fibre SAN is expensive, and SCSI hard disk transfer rates may still be the bottleneck of the entire storage system. Stornext can provide users with efficient sharing, management, and automatic archiving of large numbers of files together with hardware solutions, and it occupies a leading position in big data management, but it also has drawbacks: first, the software is expensive and licensed per site, so the purchase cost becomes very high when the network scale is large; second, its configuration parameters place high demands on the skills of technical staff, and problems are difficult to diagnose.
SSeCloud: Using secret sharing scheme to secure keys
NASA Astrophysics Data System (ADS)
Hu, Liang; Huang, Yang; Yang, Disheng; Zhang, Yuzhen; Liu, Hengchang
2017-08-01
With the use of cloud storage services, one of the concerns is how to protect sensitive data securely and privately. While users enjoy the convenience of data storage provided by semi-trusted cloud storage providers, they are confronted with all kinds of risks at the same time. In this paper, we present SSeCloud, a secure cloud storage system that improves security and usability by applying a secret sharing scheme to secure keys. The system encrypts uploaded files on the client side and splits the encryption keys into three shares, which are respectively stored by the user, the cloud storage provider, and an alternative trusted third party. Any two of the parties can reconstruct the keys. Evaluation results of the prototype system show that SSeCloud provides high security without too much performance penalty.
System and method for programmable bank selection for banked memory subsystems
Blumrich, Matthias A.; Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Hoenicke, Dirk; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan
2010-09-07
A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device in the computing environment to access memory storage distributed across the one or more memory storage structures.
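As an illustration of the select-signal idea, here is a toy Python sketch that forms a bank index from programmable physical-address bit positions. The bit positions and bank count are invented for the example, not taken from the patent.

```python
def make_bank_selector(bit_positions):
    """Build a programmable selector that forms a bank index from the
    values of chosen physical-address bits."""
    def select(addr: int) -> int:
        bank = 0
        for out_bit, in_bit in enumerate(bit_positions):
            bank |= ((addr >> in_bit) & 1) << out_bit
        return bank
    return select

# Interleave across 4 banks using address bits 6 and 7 (cache-line granularity).
select = make_bank_selector([6, 7])
for addr in (0x0000, 0x0040, 0x0080, 0x00C0, 0x0100):
    print(hex(addr), "-> bank", select(addr))   # banks 0, 1, 2, 3, 0
```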
How much electrical energy storage do we need? A synthesis for the U.S., Europe, and Germany
Cebulla, Felix; Haas, Jannik; Eichman, Josh; ...
2018-02-03
Electrical energy storage (EES) is a promising flexibility source for prospective low-carbon energy systems. In the last couple of years, many studies for EES capacity planning have been produced. However, these resulted in a very broad range of power and energy capacity requirements for storage, making it difficult for policymakers to identify clear storage planning recommendations. Therefore, we studied 17 recent storage expansion studies pertinent to the U.S., Europe, and Germany. We then systemized the storage requirement per variable renewable energy (VRE) share and generation technology. Our synthesis reveals that with increasing VRE shares, the EES power capacity increases linearly and the energy capacity exponentially. Further, by analyzing the outliers, the EES energy requirements can be at least halved. It becomes clear that grids dominated by photovoltaic energy call for more EES, while large shares of wind rely more on transmission capacity. Taking the energy mix into account clarifies, to a large degree, the apparent conflict between the storage requirements of the existing studies. Finally, there might exist a negative bias towards storage because transmission costs are frequently optimistic (by neglecting execution delays and social opposition) and storage can cope with uncertainties, but these issues are rarely acknowledged in the planning process.
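To make the linear-power/exponential-energy finding concrete, here is a toy Python model. The coefficients are invented purely for demonstration and are not the values fitted from the 17 studies.

```python
import math

def storage_requirement(vre_share, a=0.6, b=0.02, c=6.0):
    """Toy model of the synthesized trend: power capacity grows linearly
    with the VRE share, energy capacity exponentially. a, b, c are
    illustrative constants, NOT fitted values from the study."""
    power_gw = a * vre_share * 100
    energy_gwh = b * math.exp(c * vre_share) * 100
    return power_gw, energy_gwh

for share in (0.4, 0.6, 0.8, 1.0):
    p, e = storage_requirement(share)
    print(f"VRE share {share:.0%}: ~{p:.0f} GW power, ~{e:.0f} GWh energy")
```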
Fair-share scheduling algorithm for a tertiary storage system
NASA Astrophysics Data System (ADS)
Jakl, Pavel; Lauret, Jérôme; Šumbera, Michal
2010-04-01
Any experiment facing petabyte-scale problems needs a highly scalable mass storage system (MSS) to keep a permanent copy of its valuable data. But beyond the permanent storage aspects, the sheer amount of data makes complete data-set availability on live storage (centralized or aggregated space such as the one provided by Scalla/Xrootd) cost prohibitive, implying that a dynamic population from the MSS to faster storage is needed. One of the most challenging aspects of dealing with an MSS is the robotic tape component. If a robotic system is used as the primary storage solution, the intrinsically long access times (latencies) can dramatically affect overall performance. To speed the retrieval of such data, one could organize the requests according to criteria that aim to deliver maximal data throughput. However, such approaches are often orthogonal to fair resource allocation, and a trade-off between quality of service, responsiveness, and throughput is necessary for achieving an optimal and practical implementation of a truly fair-share oriented file restore policy. Starting from an explanation of the key criteria of such a policy, we present evaluations and comparisons of three different MSS file restoration algorithms which meet fair-share requirements, and discuss their respective merits. We quantify their impact on a typical file restoration cycle for the RHIC/STAR experimental setup, within a development, analysis, and production environment relying on a shared MSS service [1].
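As a flavor of what such a policy trades off, here is a toy Python scheduler that always serves the user who has received the least restored data so far. It is illustrative only, and it omits the tape-mount batching that a real MSS policy must weigh against fairness.

```python
import collections

class FairShareRestorer:
    """Toy fair-share restore policy: serve the next request of the user
    who has received the least data so far."""
    def __init__(self):
        self.queues = collections.defaultdict(collections.deque)
        self.served = collections.defaultdict(int)  # bytes restored per user

    def submit(self, user, filename, size):
        self.queues[user].append((filename, size))

    def next_request(self):
        active = [u for u, q in self.queues.items() if q]
        if not active:
            return None
        user = min(active, key=lambda u: self.served[u])
        filename, size = self.queues[user].popleft()
        self.served[user] += size
        return user, filename

sched = FairShareRestorer()
sched.submit("alice", "run1.daq", 5)
sched.submit("alice", "run2.daq", 5)
sched.submit("bob", "hist.root", 3)
while (req := sched.next_request()):
    print(req)   # alice and bob alternate instead of alice draining first
```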
Architecture and method for a burst buffer using flash technology
Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung
2016-03-15
A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
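The strided shared-file pattern can be pictured with a short single-process Python simulation; the sizes and file name are invented, and a real system would issue these writes from concurrent MPI processes before the burst buffer migrates the data sequentially to disk.

```python
import os

NPROCS, RECORD, ROUNDS = 4, 16, 3  # illustrative sizes

def write_strided(path):
    """Each of NPROCS writers places its records at strided offsets of a
    shared file: record r of rank k lands at (r * NPROCS + k) * RECORD."""
    with open(path, "wb") as f:
        for rank in range(NPROCS):
            for r in range(ROUNDS):
                f.seek((r * NPROCS + rank) * RECORD)
                f.write(bytes([rank]) * RECORD)

write_strided("ckpt.shared")
print(os.path.getsize("ckpt.shared"))  # ROUNDS * NPROCS * RECORD = 192 bytes
```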
Secure key storage and distribution
Agrawal, Punit
2015-06-02
This disclosure describes a distributed, fault-tolerant security system that enables the secure storage and distribution of private keys. In one implementation, the security system includes a plurality of computing resources that independently store private keys provided by publishers and encrypted using a single security system public key. To protect against malicious activity, the security system private key necessary to decrypt the publication private keys is not stored at any of the computing resources. Rather, portions, or shares, of the security system private key are stored at each of the computing resources within the security system, and multiple security systems must communicate and share partial decryptions in order to decrypt the stored private key.
Implementing Journaling in a Linux Shared Disk File System
NASA Technical Reports Server (NTRS)
Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew;
2000-01-01
In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher-performance computer system implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 four-disk enclosures were conducted; these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.
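To show the journaling idea in miniature, here is a Python sketch of a write-ahead metadata journal: the update is made durable in the log before it is applied, so a recovering node can replay it. The record format and operations are invented for the example; GFS's on-disk journal is, of course, far more involved.

```python
import json, os

class TinyJournal:
    """Minimal write-ahead journaling: log the metadata update, flush it,
    then apply it; replay the log on recovery."""
    def __init__(self, path):
        self.path = path

    def log(self, record: dict):
        with open(self.path, "a") as j:
            j.write(json.dumps(record) + "\n")
            j.flush()
            os.fsync(j.fileno())  # journal is durable before data is applied

    def replay(self):
        if not os.path.exists(self.path):
            return []
        with open(self.path) as j:
            return [json.loads(line) for line in j if line.strip()]

journal = TinyJournal("fs.journal")
journal.log({"op": "alloc", "inode": 42, "blocks": [7, 8]})
print(journal.replay())  # a recovering client re-applies these entries
```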
The performance of disk arrays in shared-memory database machines
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Hong, Wei
1993-01-01
In this paper, we examine how disk arrays and shared-memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
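Data temperature is commonly defined as I/O accesses per second per gigabyte of stored data; under that assumed definition (and with drive figures invented for illustration), a back-of-the-envelope comparison looks like this:

```python
def data_temperature(io_per_sec, capacity_gb):
    """Data temperature = sustainable accesses/s per GB of data stored.
    Definition assumed from the abstract; all figures below are invented."""
    return io_per_sec / capacity_gb

# Eight small 2 GB drives in an array vs. two 2 GB drives mirrored
# (mirroring halves usable capacity but both spindles can serve reads).
array = data_temperature(io_per_sec=8 * 80, capacity_gb=8 * 2)   # 40.0
mirror = data_temperature(io_per_sec=2 * 60, capacity_gb=2)      # 60.0
print(f"array: {array} IO/s/GB, mirrored pair: {mirror} IO/s/GB")
```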
An object-based storage model for distributed remote sensing images
NASA Astrophysics Data System (ADS)
Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng
2006-10-01
It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct-attached storage, network-attached storage, and storage area networks. Object-based storage, a new generation of network storage technology that emerged recently, separates the data path, the control path, and the management path, which solves the metadata bottleneck of traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligent storage devices, and secure data access. We use object-based storage in the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In this storage model, remote sensing images are organized as remote sensing objects stored on the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give some test results comparing the write performance of the traditional network storage model and the object-based storage model.
ERIC Educational Resources Information Center
Walker, Ben
2008-01-01
In August 2007, an $11.2 million proposal for a shared statewide high-density storage facility was submitted to the Board of Governors, the governing body of the State University System in Florida. The project was subsequently approved at a slightly lower level and funding was delayed until 2010/2011. The experiences of coordinating data…
dCache, Sync-and-Share for Big Data
NASA Astrophysics Data System (ADS)
Millar, AP; Fuhrmann, P.; Mkrtchyan, T.; Behrmann, G.; Bernardt, C.; Buchholz, Q.; Guelzow, V.; Litvintsev, D.; Schwank, K.; Rossi, A.; van der Reest, P.
2015-12-01
The availability of cheap, easy-to-use sync-and-share cloud services has split the scientific storage world into the traditional big data management systems and the very attractive sync-and-share services. With the former, the location of data is well understood, while the latter are mostly operated in the Cloud, resulting in a rather complex legal situation. Besides legal issues, these two worlds have little overlap in user authentication and access protocols. While traditional storage technologies, popular in HEP, are based on X.509, cloud services and sync-and-share software technologies are generally based on username/password authentication or on mechanisms like SAML or OpenID Connect. Similarly, the data access models offered by the two are somewhat different, with sync-and-share services often using proprietary protocols. As both approaches are very attractive, dCache.org developed a hybrid system providing the best of both worlds. To avoid reinventing the wheel, dCache.org decided to embed another Open Source project: OwnCloud. This offers the required modern access capabilities but does not support the managed-data functionality needed for large-capacity data storage. With this hybrid system, scientists can share files and synchronize their data with laptops or mobile devices as easily as with any other cloud storage service. On top of this, the same data can be accessed via established mechanisms, like GridFTP to serve the Globus Transfer Service or the WLCG FTS3 tool, or the data can be made available to worker nodes or HPC applications via a mounted filesystem. As dCache provides a flexible authentication module, the same user can access their storage via different authentication mechanisms, e.g., X.509 and SAML. Additionally, users can specify the desired quality of service or trigger media transitions as necessary, thus tuning data access latency to the planned access profile. Such features are a natural consequence of using dCache. We describe the design of the hybrid dCache/OwnCloud system, report on several months of operational experience running it at DESY, and elucidate the future road-map.
From Physics to industry: EOS outside HEP
NASA Astrophysics Data System (ADS)
Espinal, X.; Lamanna, M.
2017-10-01
In the competitive market for large-scale storage solutions, EOS, the current main disk storage system at CERN, has been showing its excellence in the multi-petabyte, high-concurrency regime. It has also shown disruptive potential in powering services that provide sync-and-share capabilities and in supporting innovative analysis environments alongside the storage of LHC data. EOS has also generated interest as a generic storage solution, ranging from university systems to very large installations for non-HEP applications.
QoS support for end users of I/O-intensive applications using shared storage systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Marion Kei; Zhang, Xuechen; Jiang, Song
2011-01-19
I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While the performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems, because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change over time, a fixed I/O performance goal, such as a bound on throughput or latency, can be expensive to achieve and may not lead to meaningful performance guarantees such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the user's performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.
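The paper derives the translation with a machine-learning model; the Python sketch below substitutes a deliberately simple proportional estimator just to show the shape of the conversion from an execution-time goal to an instantaneous throughput bound. All names and numbers are invented.

```python
def required_throughput(bytes_remaining, deadline_s, elapsed_s):
    """Translate 'finish the program within deadline_s seconds' into an
    instantaneous I/O throughput bound for the current service window."""
    time_left = max(deadline_s - elapsed_s, 1e-6)
    return bytes_remaining / time_left  # bytes/s the storage must deliver

# A program promised to finish in 600 s still has 12 GB of I/O after 200 s:
bound = required_throughput(12e9, deadline_s=600, elapsed_s=200)
print(f"throughput bound for this window: {bound / 1e6:.0f} MB/s")  # 30 MB/s
```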
Lead/acid batteries in systems to improve power quality
NASA Astrophysics Data System (ADS)
Taylor, P.; Butler, P.; Nerbun, W.
Increasing dependence on computer technology is driving needs for extremely high-quality power to prevent loss of information, material, and workers' time that represent billions of dollars annually. This cost has motivated commercial and Federal research and development of energy storage systems that detect and respond to power-quality failures in milliseconds. Electrochemical batteries are among the storage media under investigation for these systems. Battery energy storage systems that employ either flooded lead/acid or valve-regulated lead/acid battery technologies are becoming commercially available to capture a share of this emerging market. Cooperative research and development between the US Department of Energy and private industry have led to installations of lead/acid-based battery energy storage systems to improve power quality at utility and industrial sites and commercial development of fully integrated, modular battery energy storage system products for power quality. One such system by AC Battery Corporation, called the PQ2000, is installed at a test site at Pacific Gas and Electric Company (San Ramon, CA, USA) and at a customer site at Oglethorpe Power Corporation (Tucker, GA, USA). The PQ2000 employs off-the-shelf power electronics in an integrated methodology to control the factors that affect the performance and service life of production-model, low-maintenance, flooded lead/acid batteries. This system, and other members of this first generation of lead/acid-based energy storage systems, will need to compete vigorously for a share of an expanding, yet very aggressive, power quality market.
A Combination Therapy of JO-I and Chemotherapy in Ovarian Cancer Models
2013-10-01
…which consists of a 3PAR storage backend and is sharing data via a highly available NetApp storage gateway and 2 high-throughput commodity storage… The environment is configured as a self-service enterprise cloud and currently hosts more than 700 virtual machines. The network infrastructure consists of… technology infrastructure and information system applications designed to integrate, automate, and standardize operations. These systems fuse state of…
Characterizing output bottlenecks in a supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Bing; Chase, Jeffrey; Dillow, David A
2012-01-01
Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
Anderson, Beth M.; Stevens, Michael C.; Glahn, David C.; Assaf, Michal; Pearlson, Godfrey D.
2013-01-01
We present a modular, high performance, open-source database system that incorporates popular neuroimaging database features with novel peer-to-peer sharing, and a simple installation. An increasing number of imaging centers have created a massive amount of neuroimaging data since fMRI became popular more than 20 years ago, with much of that data unshared. The Neuroinformatics Database (NiDB) provides a stable platform to store and manipulate neuroimaging data and addresses several of the impediments to data sharing presented by the INCF Task Force on Neuroimaging Datasharing, including 1) motivation to share data, 2) technical issues, and 3) standards development. NiDB solves these problems by 1) minimizing PHI use, providing a cost effective simple locally stored platform, 2) storing and associating all data (including genome) with a subject and creating a peer-to-peer sharing model, and 3) defining a sample, normalized definition of a data storage structure that is used in NiDB. NiDB not only simplifies the local storage and analysis of neuroimaging data, but also enables simple sharing of raw data and analysis methods, which may encourage further sharing. PMID:23912507
Optimizing End-to-End Big Data Transfers over Terabits Network Infrastructure
Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R.; ...
2016-04-05
While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink; for instance, the data storage infrastructure at both the source and sink and its interplay with the wide-area network are increasingly the bottleneck to achieving high performance. In this study, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework for terabit networks, called LADS. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to benefit from hardware-level zero-copy and operating system bypass capabilities when available. It can further improve data transfer performance under congestion on the end systems by buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving input/output bandwidth and data transfer rates across high speed networks. We also investigate the performance degradation of LADS due to I/O contention on the parallel file system (PFS) when multiple LADS tools share the PFS. We design and evaluate a meta-scheduler to coordinate multiple I/O streams sharing the PFS, to minimize I/O contention on the PFS. With our evaluations, we observe that LADS with meta-scheduling can further improve performance by up to 14 percent relative to LADS without meta-scheduling.
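The congestion-avoidance idea can be caricatured in a few lines of Python: route each next chunk to the storage target with the shortest outstanding queue. This is only a sketch of the principle; LADS itself works from the PFS object layout and CCI rather than a global queue view.

```python
import random

class TargetPicker:
    """Toy congestion-aware selection: send the next chunk to the storage
    target with the shortest outstanding queue."""
    def __init__(self, n_targets):
        self.queue_depth = [0] * n_targets

    def pick(self):
        t = min(range(len(self.queue_depth)), key=self.queue_depth.__getitem__)
        self.queue_depth[t] += 1   # chunk enqueued on that target
        return t

    def complete(self, t):
        self.queue_depth[t] -= 1   # target drained one chunk

picker = TargetPicker(n_targets=4)
for _ in range(6):
    t = picker.pick()
    print("chunk ->", t)
    if random.random() < 0.5:      # some targets drain faster than others
        picker.complete(t)
```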
40 CFR 60.434 - Monitoring of operations and recordkeeping.
Code of Federal Regulations, 2012 CFR
2012-07-01
... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...
40 CFR 60.434 - Monitoring of operations and recordkeeping.
Code of Federal Regulations, 2014 CFR
2014-07-01
... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...
40 CFR 60.434 - Monitoring of operations and recordkeeping.
Code of Federal Regulations, 2013 CFR
2013-07-01
... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...
Dryden Flight Research Center Chemical Pharmacy Program
NASA Technical Reports Server (NTRS)
Davis, Bette
1997-01-01
The Dryden Flight Research Center (DFRC) Chemical Pharmacy "Crib" is a chemical sharing system which loans chemicals to users, rather than issuing them or having each individual organization or group purchasing the chemicals. This cooperative system of sharing chemicals eliminates multiple ownership of the same chemicals and also eliminates stockpiles. Chemical management duties are eliminated for each of the participating organizations. The chemical storage issues, hazards and responsibilities are eliminated. The system also ensures safe storage of chemicals and proper disposal practices. The purpose of this program is to reduce the total releases and transfers of toxic chemicals. The initial cost of the program to DFRC was $585,000. A savings of $69,000 per year has been estimated for the Center. This savings includes the reduced costs in purchasing, disposal and chemical inventory/storage responsibilities. DFRC has chemicals stored in 47 buildings and at 289 locations. When the program is fully implemented throughout the Center, there will be three chemical locations at this facility. The benefits of this program are the elimination of chemical management duties; elimination of the hazard associated with chemical storage; elimination of stockpiles; assurance of safe storage; assurance of proper disposal practices; assurance of a safer workplace; and more accurate emissions reports.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perelmutov, T.; Bakken, J.; Petravick, D.
Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing the clogging of storage systems with "orphan" files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.
Measuring household consumption and waste in unmetered, intermittent piped water systems
NASA Astrophysics Data System (ADS)
Kumpel, Emily; Woelfle-Erskine, Cleo; Ray, Isha; Nelson, Kara L.
2017-01-01
Measurements of household water consumption are extremely difficult in intermittent water supply (IWS) regimes in low- and middle-income countries, where water is delivered for short durations, taps are shared, metering is limited, and household storage infrastructure varies widely. Nonetheless, consumption estimates are necessary for utilities to improve water delivery. We estimated household water use in Hubli-Dharwad, India, with a mixed-methods approach combining (limited) metered data, storage container inventories, and structured observations. We developed a typology of household water access according to infrastructure conditions based on the presence of an overhead storage tank and a shared tap. For households with overhead tanks, container measurements and metered data produced statistically similar consumption volumes; for households without overhead tanks, stored volumes underestimated consumption because of significant water use directly from the tap during delivery periods. Households that shared taps consumed much less water than those that did not. We used our water use calculations to estimate waste at the household level and in the distribution system. Very few households used 135 L/person/d, the Government of India design standard for urban systems. Most wasted little water even when unmetered; however, unaccounted-for water in the neighborhood distribution systems was around 50%. Thus, conservation efforts should target loss reduction in the network rather than at households.
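A container-inventory estimate reduces to simple arithmetic; the Python sketch below shows the kind of calculation involved, with all container sizes, delivery frequencies, and household figures invented. Note the abstract's caveat that stored volumes undercount use for households that draw directly from the tap during delivery.

```python
def daily_consumption_lpcd(containers, refills_per_delivery,
                           deliveries_per_week, household_size):
    """Estimate use from a storage-container inventory, in litres per
    person per day. Illustrative only, not the paper's instrument."""
    stored = sum(vol * n for vol, n in containers)   # litres per delivery
    weekly = stored * refills_per_delivery * deliveries_per_week
    return weekly / 7 / household_size

# Household of 5 with two 200 L drums and four 15 L pots,
# each filled once during each of 2 deliveries per week:
print(daily_consumption_lpcd([(200, 2), (15, 4)], 1, 2, 5))  # ~26.3 L/p/d
```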
Analysis of Energy Storage System with Distributed Hydrogen Production and Gas Turbine
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Bartela, Łukasz; Dubiel-Jurgaś, Klaudia
2017-12-01
This paper presents the concept of an energy storage system based on power-to-gas-to-power (P2G2P) technology. The system consists of a gas turbine co-firing hydrogen, which is supplied from distributed electrolysis installations powered by wind farms located a short distance from the potential construction site of the gas turbine. In the paper, a location for this type of investment was selected. As part of the analyses, the area of wind farms covered by the storage system and the share of electricity production subjected to storage were varied. The dependence of the hydrogen production potential and the gas turbine operating time on these quantities was analyzed. Additionally, preliminary economic analyses of the proposed energy storage system were carried out.
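The electricity-to-hydrogen step reduces to a one-line energy balance. The Python sketch below uses typical textbook values for electrolyzer efficiency and the lower heating value of hydrogen; both are assumptions for illustration, not figures from the paper.

```python
def hydrogen_output_kg(surplus_mwh, electrolyzer_eff=0.7, lhv_kwh_per_kg=33.3):
    """Mass of hydrogen produced from surplus wind electricity.
    Efficiency and LHV are assumed textbook values, not the paper's."""
    return surplus_mwh * 1000 * electrolyzer_eff / lhv_kwh_per_kg

# 50 MWh of curtailed wind energy in one day:
kg = hydrogen_output_kg(50)
print(f"{kg:.0f} kg H2 (~{kg * 33.3 / 1000:.1f} MWh chemical energy)")
```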
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)
1998-01-01
This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.
Data Storage and sharing for the long tail of science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B.; Pouchard, L.; Smith, P. M.
Research data infrastructure such as storage must now accommodate new requirements resulting from trends in research data management that require researchers to store their data for the long term and make it available to other researchers. We propose Data Depot, a system and service at Purdue University that provides capabilities for shared space within a group, shared applications, flexible access patterns, and ease of transfer. We evaluate Depot as a solution for storing and sharing multi-terabytes of data produced in the long tail of science, with a use case in soundscape ecology studies from the Human-Environment Modeling and Analysis Laboratory. We observe that with the capabilities enabled by Data Depot, researchers can easily deploy fine-grained data access control, manage data transfer and sharing, and integrate their workflows into a High Performance Computing environment.
NASA Astrophysics Data System (ADS)
Feng, Junshu; Zhang, Fuqiang
2018-02-01
To realize low-emission, low-carbon energy production and consumption, large-scale development and utilization of renewable energy has been put into practice in China. It has been recognized that a power system with high future renewable energy shares can operate more reliably with the participation of energy storage. Considering the significant role storage will play in the future power system, this paper focuses on the application of energy storage under high renewable energy penetration. First, two application modes are given: a demand-side application mode and a centralized renewable-energy-farm application mode. Afterwards, a high renewable energy penetration scenario for the northwest region of China is designed, and its production simulation with the application of energy storage in 2050 is calculated and analysed. Finally, a development path and outlook for energy storage are given.
40 CFR 60.433 - Performance test and compliance provisions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... facilities routinely share the same raw ink storage/handling system with existing facilities, then temporary measurement procedures for segregating the raw inks, related coatings, VOC solvent, and water used at the... the purpose of measuring bulk storage tank quantities of each color of raw ink and each related...
40 CFR 60.433 - Performance test and compliance provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... facilities routinely share the same raw ink storage/handling system with existing facilities, then temporary measurement procedures for segregating the raw inks, related coatings, VOC solvent, and water used at the... the purpose of measuring bulk storage tank quantities of each color of raw ink and each related...
40 CFR 60.433 - Performance test and compliance provisions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... facilities routinely share the same raw ink storage/handling system with existing facilities, then temporary measurement procedures for segregating the raw inks, related coatings, VOC solvent, and water used at the... the purpose of measuring bulk storage tank quantities of each color of raw ink and each related...
ERIC Educational Resources Information Center
Husby, Ole
1990-01-01
The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…
Dynamic Collaboration Infrastructure for Hydrologic Science
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.
2016-12-01
Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data-intensive modeling and analysis. It supports the sharing of, and collaboration around, "resources", which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and the analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare are increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services needed to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may need to process a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges, a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure for the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure". In this presentation we discuss the results of this proof-of-concept prototype, which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure to address big problems in hydrology.
CERN data services for LHC computing
NASA Astrophysics Data System (ADS)
Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.
2017-10-01
Dependability, resilience, adaptability, and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent, complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services, and S3 functionality; and AFS for legacy distributed-file-system services. In this paper we summarise the experience of supporting the LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment, with pluggable protocols, tuneable QoS, sharing capabilities, and fine-grained ACL management, while continuing to guarantee dependable and robust services.
NASA Astrophysics Data System (ADS)
You, Xiaozhen; Yao, Zhihong
2005-04-01
As a standard for the communication and storage of medical digital images, DICOM has been playing a very important role in the integration of hospital information. In DICOM, tags are expressed as numbers, and only standard data elements can be shared by looking them up in the Data Dictionary, while private tags cannot. As such, a DICOM file's readability and extensibility are limited. In addition, reading DICOM files requires special software. In our research, we introduced XML into DICOM, defining an XML-based DICOM transfer format, XML-DCM, and a DICOM storage format, X-DCM, as well as developing a program package to realize format interchange among DICOM, XML-DCM, and X-DCM. XML-DCM is based on the DICOM structure but replaces numeric tags with accessible XML character-string tags. The merits are as follows: a) every character-string tag of XML-DCM has an explicit meaning, so users can understand standard data elements and private data elements easily without looking them up in the Data Dictionary, which greatly improves the readability and data sharing of DICOM files; b) according to their requirements, users can define new character-string tags with explicit meanings in their own systems to extend the set of data elements; c) users can read the medical image and associated information conveniently through a web browser such as Internet Explorer, ultimately enlarging the scope of data sharing. The application of the storage format X-DCM will reduce data redundancy and save storage space. Results from practical application show that XML-DCM does favor the integration and sharing of medical image data among different systems and devices.
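A minimal sketch of the numeric-tag-to-readable-tag idea, using the open-source pydicom library and Python's ElementTree. The XML element and attribute names are invented for illustration and are not the actual XML-DCM schema.

```python
import xml.etree.ElementTree as ET
import pydicom  # pip install pydicom

def dataset_to_xml(ds: pydicom.Dataset) -> ET.Element:
    """Replace numeric DICOM tags with readable string tags, keeping the
    (group, element) pair as attributes so private tags survive."""
    root = ET.Element("dicom")
    for elem in ds:
        if elem.VR == "SQ":                       # skip nested sequences here
            continue
        name = elem.keyword or "PrivateElement"   # keyword is '' for private tags
        node = ET.SubElement(root, name,
                             group=f"{elem.tag.group:04X}",
                             element=f"{elem.tag.element:04X}", vr=elem.VR)
        node.text = str(elem.value)
    return root

ds = pydicom.Dataset()
ds.PatientName = "DOE^JANE"
ds.Modality = "MR"
print(ET.tostring(dataset_to_xml(ds), encoding="unicode"))
```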
NASA Astrophysics Data System (ADS)
Kies, Alexander; Nag, Kabitri; von Bremen, Lueder; Lorenz, Elke; Heinemann, Detlev
2015-04-01
The penetration of renewable energies in the European power system has increased in the last decades (a 23.5% share of renewables in the gross electricity consumption of the EU-28 in 2012) and is expected to increase further, up to very high shares close to 100%. Planning and organizing this European energy transition towards sustainable power sources will be one of the major challenges of the 21st century. It is very likely that in a fully renewable European power system, wind and photovoltaics (pv) will contribute the largest shares of the generation mix, followed by hydro power. However, because of the weather-dependent nature of their resources, feed-in from wind and pv is fluctuating and non-controllable. To match generation and consumption, several solutions and combinations thereof have been proposed, such as very high backup capacities of conventional power generation (e.g. fossil or nuclear), storage, or the extension of the transmission grid. Apart from those options, hydro power can be used to counterbalance fluctuating wind and pv generation to some extent. In this work we investigate the effects of hydro power from Norway and Sweden on residual storage needs in Europe depending on the underlying grid scenario. Highly temporally and spatially resolved weather data, with a spatial resolution of 7 x 7 km and a temporal resolution of 1 hour, were used to model the feed-in from wind and pv for the 34 investigated European countries for the years 2003-2012. Inflow into hydro storages and generation by run-of-river power plants were computed from ERA-Interim reanalysis runoff data at a spatial resolution of 0.75° x 0.75° and a daily temporal resolution. Power flows in a simplified transmission grid connecting the 34 European countries were modelled by minimizing dissipation using a DC-flow approximation. Previous work has shown that hydro power, namely in Norway and Sweden, can reduce storage needs in a renewable European power system to a large extent: a 15% share of hydro power in Europe can reduce storage needs by up to 50% with respect to stored energy. This requires, however, large transmission capacities between the major hydro power producers in Scandinavia and the largest consumers of electrical energy in Western Europe. We show how Scandinavian hydro power can reduce storage needs as a function of the transmission grid for two fully renewable scenarios: the first has its wind and pv generation capacities distributed according to an empirically derived approach; the second has a spatial distribution of wind and pv generation capacities across Europe optimized to minimize storage needs. We show that in both cases hydro power, together with a well-developed transmission grid, has the potential to contribute a large share to the solution of the generation-consumption mismatch problem. The work is part of the RESTORE 2050 project (BMBF), which investigates the requirements for cross-country grid extensions, the usage of storage technologies and capacities, and the development of new balancing technologies.
Tuning HDF5 subfiling performance on parallel file systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey
Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single shared file approach, which instigates lock contention problems on parallel file systems, and having one file per process, which generates a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune parallel I/O performance of this feature on parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show performance benefits of 1.2X to 6X with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of the subfiling feature.
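The grouping idea behind subfiling is easy to picture in code. Below is a conceptual sketch using mpi4py and h5py (assuming an MPI-enabled h5py build); it is not the HDF5 subfiling feature evaluated in the paper, only an illustration of the compromise it describes.

```python
# Ranks are split into groups and each group shares one subfile. A group
# size of 1 gives file-per-process; one group with all ranks gives the
# single-shared-file extreme.
from mpi4py import MPI
import h5py
import numpy as np

RANKS_PER_SUBFILE = 4                        # tuning knob: controls subfile count
comm = MPI.COMM_WORLD
color = comm.rank // RANKS_PER_SUBFILE       # which subfile this rank uses
subcomm = comm.Split(color=color, key=comm.rank)

data = np.full(1024, comm.rank, dtype="f8")  # this rank's block of output

with h5py.File(f"out.subfile.{color}.h5", "w", driver="mpio", comm=subcomm) as f:
    dset = f.create_dataset("x", (subcomm.size * data.size,), dtype="f8")
    lo = subcomm.rank * data.size            # contiguous slice owned by this rank
    dset[lo:lo + data.size] = data
```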
Design Considerations for a Web-based Database System of ELISpot Assay in Immunological Research
Ma, Jingming; Mosmann, Tim; Wu, Hulin
2005-01-01
The enzyme-linked immunospot (ELISpot) assay has been a primary tool in immunological research (such as studies of the HIV-specific T cell response). Due to the huge amount of data involved in ELISpot assay testing, a database system is needed for efficient data entry, easy retrieval, secure storage, and convenient data processing. In addition, the NIH has recently issued a policy to promote the sharing of research data (see http://grants.nih.gov/grants/policy/data_sharing). A Web-based database system will definitely benefit data sharing among broad research communities. Here are some considerations for a database system of the ELISpot assay (DBSEA). PMID:16779326
40 CFR 60.434 - Monitoring of operations and recordkeeping.
Code of Federal Regulations, 2011 CFR
2011-07-01
... recordkeeping. 60.434 Section 60.434 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...
40 CFR 60.434 - Monitoring of operations and recordkeeping.
Code of Federal Regulations, 2010 CFR
2010-07-01
... recordkeeping. 60.434 Section 60.434 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... affected facility using waterborne ink systems or solvent-borne ink systems with solvent recovery systems...) If affected facilities share the same raw ink storage/handling system with existing facilities...
Design and Implementation of Telemedicine based on Java Media Framework
NASA Astrophysics Data System (ADS)
Xiong, Fengguang; Jia, Zhiyan
After analyzing the importance of telemedicine and the problems it faces, this paper proposes a telemedicine system based on the Java Media Framework (JMF) and describes the design and implementation of the capture, compression, storage, transmission, reception, and playback of medical audio and video. The telemedicine system can solve existing problems: medical information is not shared, platform dependence is high, software is incompatible, and so on. Experimental data prove that the system has low hardware cost, is easy to transmit and store, and is portable and powerful.
Virtual memory support for distributed computing environments using a shared data object model
NASA Astrophysics Data System (ADS)
Huang, F.; Bacon, J.; Mapp, G.
1995-12-01
Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments, in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed, and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting system performance. Together, these features constitute a novel approach to supporting flexible coherence under application control.
Establishment of key grid-connected performance index system for integrated PV-ES system
NASA Astrophysics Data System (ADS)
Li, Q.; Yuan, X. D.; Qi, Q.; Liu, H. M.
2016-08-01
In order to further promote the integrated, optimized operation of distributed new energy, energy storage, and active load, this paper studies an integrated photovoltaic-energy storage (PV-ES) system connected to the distribution network and analyzes typical structures and configuration selection for the integrated PV-ES generation system. By combining practical grid-connected characteristics requirements with the technology standards and specifications for photovoltaic generation systems, and taking full account of the energy storage system, this paper proposes several new grid-connected performance indexes, such as parallel current-sharing characteristic, parallel response consistency, adjusting characteristic, virtual moment-of-inertia characteristic, and on-grid/off-grid switching characteristic. A comprehensive and feasible grid-connected performance index system is then established to support grid-connected performance testing of the integrated PV-ES system.
Lee, Sun-Ho; Lee, Im-Yeong
2014-01-01
Data outsourcing services have emerged with the increasing use of digital information. They can be used to store data from various devices via networks that are easy to access. Unlike existing removable storage systems, storage outsourcing is available to many users because it has no storage limit and does not require a local storage medium. However, the reliability of storage outsourcing has become an important topic because many users employ it to store large volumes of data. To protect against unethical administrators and attackers, a variety of cryptography systems are used, such as searchable encryption and proxy reencryption. However, existing searchable encryption technology is inconvenient for use in storage outsourcing environments where users upload their data to be shared with others as necessary. In addition, some existing schemes are vulnerable to collusion attacks and have computing cost inefficiencies. In this paper, we analyze existing proxy re-encryption with keyword search. PMID:24693240
Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.
2007-01-01
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812
Tech Transfer Webinar: Amoeba Cysts as Natural Containers for the Transport and Storage of Pathogens
DOE Office of Scientific and Technical Information (OSTI.GOV)
El-Etr, Sahar
2014-10-08
Sahar El-Etr, Biomedical Scientist at the Lawrence Livermore National Laboratory, shares a unique method for transporting clinical samples from the field to a laboratory. The use of amoeba as “natural” containers for pathogens was utilized to develop the first living system for the transport and storage of pathogens. The amoeba system works at ambient temperature for extended periods of time—capabilities currently not available for biological sample transport.
High-Pressure Oxygen Generation for Outpost EVA Study
NASA Technical Reports Server (NTRS)
Jeng, Frank F.; Conger, Bruce; Ewert, Michael K.; Anderson, Molly S.
2009-01-01
The amount of oxygen consumption for crew extravehicular activity (EVA) in future lunar exploration missions will be significant. Eight technologies to provide high pressure EVA O2 were investigated. They are: high pressure O2 storage, liquid oxygen (LOX) storage followed by vaporization, scavenging LOX from Lander followed by vaporization, LOX delivery followed by sorption compression, water electrolysis followed by compression, stand-alone high pressure water electrolyzer, Environmental Control and Life Support System (ECLSS) and Power Elements sharing a high pressure water electrolyzer, and ECLSS and In-Situ Resource Utilization (ISRU) Elements sharing a high pressure electrolyzer. A trade analysis was conducted comparing launch mass and equivalent system mass (ESM) of the eight technologies in open and closed ECLSS architectures. Technologies considered appropriate for the two architectures were selected and suggested for development.
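Equivalent system mass, as used in trade studies like the one above, typically folds launch mass, volume, power, cooling, and crew time into a single mass figure. The formula below is a standard NASA ESM formulation given here for context; the study's exact equivalency factors are not stated in the abstract.

```latex
\mathrm{ESM} = M + V \cdot V_{\mathrm{eq}} + P \cdot P_{\mathrm{eq}}
             + C \cdot C_{\mathrm{eq}} + CT \cdot D \cdot CT_{\mathrm{eq}}
```

where M is hardware mass, V volume, P power, C cooling load, CT crew time over mission duration D, and each (·)_eq factor converts the resource to its mass equivalent for the mission location.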
Vehicle-to-Grid Automatic Load Sharing with Driver Preference in Micro-Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yubo; Nazaripouya, Hamidreza; Chu, Chi-Cheng
Integration of Electric Vehicles (EVs) with the power grid not only brings new challenges for load management, but also opportunities for distributed storage and generation. This paper comprehensively models and analyzes distributed Vehicle-to-Grid (V2G) for automatic load sharing with driver preference. In a micro-grid with limited communications, V2G EVs need to decide load sharing based on their own power and voltage profiles. A droop-based controller taking driver preference into account is proposed in this paper to address the distributed control of EVs. Simulations are designed for three fundamental V2G automatic load sharing scenarios that include all system dynamics of such applications. Simulation results demonstrate that active power sharing is achieved proportionally among V2G EVs with consideration of driver preference. In addition, the results verify the system stability and reactive power sharing analysis in the system modelling, which sheds light on large-scale V2G automatic load sharing in more complicated cases.
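As a rough illustration of droop-based sharing, the sketch below assumes that driver preference simply scales the droop gain; that assumption is ours, and the paper's actual controller may differ.

```python
def v2g_power(v_bus, v_nom=1.0, k_droop=10.0, preference=1.0, p_max=6.6):
    """kW injected (+) or absorbed (-) by one EV at bus voltage v_bus (p.u.).

    preference in [0, 1]: 0 = driver opts out of V2G, 1 = full participation.
    """
    p = preference * k_droop * (v_nom - v_bus) * p_max
    return max(-p_max, min(p_max, p))  # clamp to the charger rating

# Three EVs on the same sagging bus share load in proportion to preference.
for pref in (1.0, 0.5, 0.1):
    print(f"preference {pref}: {v2g_power(0.97, preference=pref):.2f} kW")
```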
Decibel: The Relational Dataset Branching System
Maddox, Michael; Goehring, David; Elmore, Aaron J.; Madden, Samuel; Parameswaran, Aditya; Deshpande, Amol
2017-01-01
As scientific endeavors and data analysis become increasingly collaborative, there is a need for data management systems that natively support the versioning or branching of datasets to enable concurrent analysis, cleaning, integration, manipulation, or curation of data across teams of individuals. Common practice for sharing and collaborating on datasets involves creating or storing multiple copies of the dataset, one for each stage of analysis, with no provenance information tracking the relationships between these datasets. This results not only in wasted storage, but also makes it challenging to track and integrate modifications made by different users to the same dataset. In this paper, we introduce the Relational Dataset Branching System, Decibel, a new relational storage system with built-in version control designed to address these shortcomings. We present our initial design for Decibel and provide a thorough evaluation of three versioned storage engine designs that focus on efficient query processing with minimal storage overhead. We also develop an exhaustive benchmark to enable the rigorous testing of these and future versioned storage engine designs. PMID:28149668
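A toy sketch of the branching model follows (this is not Decibel's storage engine, just the abstraction it implements): commits store row deltas over a parent, branches are named pointers to commits, and a read materializes a branch by replaying its delta chain.

```python
class VersionedDataset:
    def __init__(self):
        self.commits = {0: (None, {})}     # commit id -> (parent id, {key: row})
        self.branches = {"master": 0}
        self._next = 1

    def commit(self, branch, delta):
        parent = self.branches[branch]
        cid, self._next = self._next, self._next + 1
        self.commits[cid] = (parent, dict(delta))
        self.branches[branch] = cid
        return cid

    def branch(self, new, frm="master"):
        self.branches[new] = self.branches[frm]

    def read(self, branch):
        chain, cid = [], self.branches[branch]
        while cid is not None:             # walk back to the root commit
            parent, delta = self.commits[cid]
            chain.append(delta)
            cid = parent
        rows = {}
        for delta in reversed(chain):      # replay oldest-first
            rows.update(delta)
        return rows

ds = VersionedDataset()
ds.commit("master", {1: "raw"})
ds.branch("cleaning")
ds.commit("cleaning", {1: "cleaned"})      # leaves master untouched
assert ds.read("master")[1] == "raw" and ds.read("cleaning")[1] == "cleaned"
```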
A secure and efficient audit mechanism for dynamic shared data in cloud storage.
Kwon, Ohmin; Koo, Dongyoung; Shin, Yongjoo; Yoon, Hyunsoo
2014-01-01
With popularization of cloud services, multiple users easily share and update their data through cloud storage. For data integrity and consistency in the cloud storage, the audit mechanisms were proposed. However, existing approaches have some security vulnerabilities and require a lot of computational overheads. This paper proposes a secure and efficient audit mechanism for dynamic shared data in cloud storage. The proposed scheme prevents a malicious cloud service provider from deceiving an auditor. Moreover, it devises a new index table management method and reduces the auditing cost by employing less complex operations. We prove the resistance against some attacks and show less computation cost and shorter time for auditing when compared with conventional approaches. The results present that the proposed scheme is secure and efficient for cloud storage services managing dynamic shared data. PMID:24959630
Cricket: A Mapped, Persistent Object Store
NASA Technical Reports Server (NTRS)
Shekita, Eugene; Zwilling, Michael
1996-01-01
This paper describes Cricket, a new database storage system that is intended to be used as a platform for design environments and persistent programming languages. Cricket uses the memory management primitives of the Mach operating system to provide the abstraction of a shared, transactional single-level store that can be directly accessed by user applications. In this paper, we present the design and motivation for Cricket. We also present some initial performance results which show that, for its intended applications, Cricket can provide better performance than a general-purpose database storage system.
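The single-level-store abstraction can be approximated on a conventional OS with file-backed memory mapping; the sketch below uses POSIX mmap rather than the Mach memory-management primitives Cricket actually builds on, and is only an illustration of the idea.

```python
# A persistent region accessed through ordinary memory operations.
import mmap
import os
import struct

SIZE = 4096
fd = os.open("store.db", os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, SIZE)                         # ensure the region exists
buf = mmap.mmap(fd, SIZE)                      # map the file into memory

counter, = struct.unpack_from("<Q", buf, 0)    # read a persistent object...
struct.pack_into("<Q", buf, 0, counter + 1)    # ...and update it in place
buf.flush()                                    # force changes to stable storage
print("times this store has been opened:", counter + 1)

buf.close()
os.close(fd)
```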
1977-04-01
... the task of data organization, management, and storage has been given to a select group of specialists. These specialists (the Data Base Administrators, report writers, etc.) ... distributed DBMS involves first identifying a set of two or more tasks blocking each other from a collection of shared records. Once the set of ...
Digital radiography: spatial and contrast resolution
NASA Astrophysics Data System (ADS)
Bjorkholm, Paul; Annis, M.; Frederick, E.; Stein, J.; Swift, R.
1981-07-01
The addition of digital image collection and storage to standard and newly developed x-ray imaging techniques has allowed spectacular improvements in some diagnostic procedures. There is no reason to expect that the developments in this area are yet complete. But no matter what further developments occur in this field, all the techniques will share a common element, digital image storage and processing. This common element alone determines some of the important imaging characteristics. These will be discussed using one system, the Medical MICRODOSE System as an example.
Parallel file system with metadata distributed across partitioned key-value store
Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron
2017-09-19
Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
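The partitioning idea can be sketched without MPI: hash each metadata key to a home partition, as below. In the patent the partitions live on compute nodes and communicate via a message passing interface; the key names here are illustrative only.

```python
import hashlib

class PartitionedKV:
    def __init__(self, n_partitions: int):
        self.parts = [dict() for _ in range(n_partitions)]

    def _home(self, key: str) -> int:
        digest = hashlib.md5(key.encode()).digest()
        return int.from_bytes(digest[:4], "big") % len(self.parts)

    def put(self, key, value):
        self.parts[self._home(key)][key] = value

    def get(self, key):
        return self.parts[self._home(key)].get(key)

# Metadata for sub-file chunks of one shared file, spread over 4 partitions.
kv = PartitionedKV(4)
kv.put("shared.out/rank0/offset0", {"length": 1048576, "subfile": 0})
print(kv.get("shared.out/rank0/offset0"))
```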
FORCEnet Net Centric Architecture - A Standards View
2006-06-01
[Diagram residue: a layered service framework comprising user-facing services and shared services over networking/communications, storage, computing platform, data interchange/integration, data management, and application layers, within a service platform.]
Williamson, Rebecca; Meacham, Lillian; Cherven, Brooke; Hassen-Schilling, Leann; Edwards, Paula; Palgon, Michael; Espinoza, Sofia; Mertens, Ann
2014-09-01
Cancer SurvivorLink™, www.cancersurvivorlink.org, is a patient-controlled communication tool where survivors can electronically store and share documents with healthcare providers. Functionally, SurvivorLink serves as an electronic personal health record: a record of health-related information managed and controlled by the survivor. Recruitment methods to increase registration and the characteristics of registrants who completed each step of using SurvivorLink are described. Pediatric cancer survivors were recruited via mailings, survivor clinic, and community events. Recruitment method and Aflac Survivor Clinic attendance were determined for each registrant. Registration date, registrant type (parent vs. survivor), zip code, creation of a personal health record in SurvivorLink, storage of documents, and document sharing were measured. Logistic regression was used to determine the characteristics that predicted creation of a health record and storage of documents. To date, 275 survivors/parents have completed registration: 63 were recruited via mailing, 99 from clinic, 56 from community events, and 57 via other methods. Overall, 66.9% of registrants created a personal health record and 45.7% of those stored a health document. There were no significant predictors for creating a personal health record. Attending a survivor clinic was the strongest predictor of document storage (p < 0.01). Of those with a document stored, 21.4% shared with a provider. Having attended survivor clinic is the biggest predictor of registering and using SurvivorLink. Many survivors must advocate for their survivorship care. SurvivorLink provides educational material and supports the dissemination of survivor-specific follow-up recommendations to facilitate shared clinical care decision making.
NASA Astrophysics Data System (ADS)
Suftin, I.; Read, J. S.; Walker, J.
2013-12-01
Scientists prefer not having to be tied down to a specific machine or operating system in order to analyze local and remote data sets or publish work. Increasingly, analysis has been migrating to decentralized web services and data sets, using web clients to provide the analysis interface. While simplifying workflow access, analysis, and publishing of data, the move does bring with it its own unique set of issues. Web clients used for analysis typically offer workflows geared towards a single user, with steps and results that are often difficult to recreate and share with others. Furthermore, workflow results often cannot easily be used as input for further analysis. Older browsers complicate things further by having no way to maintain larger chunks of information, often offloading the job of storage to the back-end server or trying to squeeze it into a cookie. It has been difficult to provide a concept of "session storage" or "workflow sharing" without complex back-end orchestration depending on either a centralized file system or a database. With the advent of HTML5, browsers gained the ability to store more information through the Web Storage API (a browser cookie holds a maximum of 4 kilobytes). Web Storage gives us the ability to store megabytes of arbitrary data in-browser, either with an expiration date or just for a session. This allows scientists to create, update, persist, and share their workflow without depending on the back-end to store session information, providing the flexibility for new web-based workflows to emerge. In the DSASWeb portal (http://cida.usgs.gov/DSASweb/), using these techniques, the representation of every step in the analyst's workflow is stored as plain-text serialized JSON, which can be generated as a text file and provided to the analyst. This file may then be shared with others and loaded back into the application, restoring the application to the state it was in when the session file was generated. A user may then view results produced during that session or go back and alter input parameters, creating new results and producing new, unique sessions which they can again share. This technique not only gives users the independence to manage their sessions as they like, but also allows much greater freedom for the application provider to scale out without having to worry about carrying over user information or maintaining it in a central location.
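The round trip is simple to sketch: workflow steps serialize to plain-text JSON that can go into the browser's Web Storage or be handed to the analyst as a session file. The step names below are illustrative, not the DSASweb schema.

```python
import json

session = {
    "steps": [
        {"tool": "shoreline_import", "params": {"source": "lidar_2012"}},
        {"tool": "transect_cast", "params": {"spacing_m": 50}},
    ],
}

text = json.dumps(session, indent=2)   # plain-text session file for download
                                       # (in-browser: localStorage.setItem)

restored = json.loads(text)            # re-upload restores application state
assert restored == session
```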
A class Hierarchical, object-oriented approach to virtual memory management
NASA Technical Reports Server (NTRS)
Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.
1989-01-01
The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.
Grid data access on widely distributed worker nodes using scalla and SRM
NASA Astrophysics Data System (ADS)
Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.
2008-07-01
Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely advantageous compared to expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storage (lifetime of files, file pinning), storage policies, or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing the 350 TB Storage Elements, and our experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and approach on how to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare the solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans, and status of development will be explained in the area of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.
NASA Astrophysics Data System (ADS)
Poat, M. D.; Lauret, J.; Betts, W.
2015-12-01
The STAR online computing infrastructure has become an intensive, dynamic system used for first-hand data collection and analysis, resulting in a dense collection of data output. As we have transitioned to our current state, inefficient, limited storage systems have become an impediment to fast feedback to online shift crews. A centrally accessible, scalable, and redundant distributed storage system has become a necessity in this environment. OpenStack Swift Object Storage and Ceph Object Storage are two promising technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests, emulating real-world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX-compliant file system mounted similarly to an NFS share, was of particular interest as it aligned with our requirements and was retained as our solution. I/O performance tests were run against the Ceph POSIX file system and have presented surprising results indicating true potential for fast I/O and reliability. STAR's online compute farm has historically been used for job submission and first-hand data analysis; reusing it to maintain a storage cluster alongside job submission will be an efficient use of the current infrastructure.
Teleradiology mobile internet system with a new information security solution
NASA Astrophysics Data System (ADS)
Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kusumoto, Masahiko; Kaneko, Masahiro; Moriyama, Noriyuki
2014-03-01
We have developed an external storage system that uses a secret sharing scheme and tokenization for regional medical cooperation, PHR services, and information preservation. The use of mobile devices such as smartphones and tablets for PHR services will accelerate, exposing confidential medical information to the risk of damage and interception. In this study we measured the transfer rates for sending data to, and receiving data from, the external storage system, which is connected to PACS via the Internet. The external storage system consists of data centers in Okinawa, Osaka, Sapporo, and Tokyo that apply the secret sharing scheme. PACS continuously transmitted 382 CT images, with a total size of about 200 MB, to the external data centers; the total transmission time was about 250 seconds. Because the data are preserved using a secret sharing scheme, security is strong, but the transfer time is excessive. In our method, therefore, the DICOM data are anonymized by masking the header information. The anonymized DICOM data are preserved in the database in the hospital, while the header information, which contains personal information, is divided into two or more shares by the secret sharing scheme and preserved at two or more external data centers. The token that links the anonymized DICOM data to the externally preserved header information is kept strictly in the token server. The header information containing the patient's personal information amounts to only about 2% of the entire DICOM data, and its total transmission time was about 5 seconds. Other common solutions that protect computer communication networks from attacks are classified as cryptographic techniques or authentication techniques. An individual-number IC card is connected with the electronic certification authority of the web medical image conference system, and such a card is given only to persons authorized to operate the system.
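For intuition, a minimal (n-of-n) XOR secret-sharing sketch is shown below. The deployed system may use a threshold (k-of-n) scheme such as Shamir's, but the principle is the same: fewer than the required number of shares reveals nothing about the header.

```python
import os

def xor_all(chunks):
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def split(secret: bytes, n: int):
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(xor_all(shares + [secret]))   # last share completes the XOR
    return shares

def combine(shares):
    return xor_all(shares)

header = b"PatientName=DOE^JOHN;PatientID=12345"
shares = split(header, 4)          # e.g. one share per data center
assert combine(shares) == header   # all four shares reconstruct the header
```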
Huang, Shuo; Liu, Jing
2010-05-01
The application of clinical digital medical imaging has raised many tough issues to tackle, such as data storage, management, and information sharing. Here we investigated a mobile phone based medical image management system capable of providing personal medical imaging information storage and management as well as comprehensive health information analysis. The technologies underlying the system are discussed, spanning wireless transmission, the technical capabilities of phones in mobile health care, and the management of a mobile medical database. Taking the transmission of medical infrared images between phone and computer as an example, the working principle of the present system was demonstrated.
Cryptonite: A Secure and Performant Data Repository on Public Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor
2012-06-29
Cloud storage has become immensely popular for maintaining synchronized copies of files and for sharing documents with collaborators. However, there is heightened concern about the security and privacy of Cloud-hosted data due to the shared infrastructure model and an implicit trust in the service providers. Emerging needs of secure data storage and sharing for domains like Smart Power Grids, which deal with sensitive consumer data, require the persistence and availability of Cloud storage but with client-controlled security and encryption, low key management overhead, and minimal performance costs. Cryptonite is a secure Cloud storage repository that addresses these requirements using a StrongBox model for shared key management. We describe the Cryptonite service and desktop client, discuss performance optimizations, and provide an empirical analysis of the improvements. Our experiments show that Cryptonite clients achieve a 40% improvement in file upload bandwidth over plaintext storage using the Azure Storage Client API despite the added security benefits, while our file download performance is 5 times faster than the baseline for files greater than 100MB.
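The client-controlled encryption model can be sketched with any standard symmetric scheme; the snippet below uses the third-party cryptography package's Fernet construction and illustrates only the general idea, not Cryptonite's StrongBox key-management protocol.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # stays with the client, never the provider
f = Fernet(key)

plaintext = b"smart meter readings, 2012-06"
blob = f.encrypt(plaintext)     # this ciphertext is all the Cloud ever sees

# ... upload `blob` with any storage API; later, after downloading it back:
assert f.decrypt(blob) == plaintext
```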
NASA Technical Reports Server (NTRS)
Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne
2004-01-01
The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.
Cooperative storage of shared files in a parallel computing system with dynamic block size
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-11-10
Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
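A tiny worked example of the block-size rule quoted above follows; the neighbor-exchange pattern shown is an illustrative choice, not the patent's protocol.

```python
amounts = [900, 1100, 1000, 1000]      # bytes generated by each of 4 processes
block = sum(amounts) // len(amounts)   # dynamically determined size: 1000

carry = 0                              # running surplus passed to the right
for rank, amt in enumerate(amounts[:-1]):
    carry += amt - block
    if carry > 0:
        print(f"rank {rank} sends {carry} bytes to rank {rank + 1}")
    elif carry < 0:
        print(f"rank {rank} receives {-carry} bytes from rank {rank + 1}")
print("every rank now writes one block of", block, "bytes")
```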
Virtualization - A Key Cost Saver in NASA Multi-Mission Ground System Architecture
NASA Technical Reports Server (NTRS)
Swenson, Paul; Kreisler, Stephen; Sager, Jennifer A.; Smith, Dan
2014-01-01
With science team budgets being slashed and a lack of adequate facilities for science payload teams to operate their instruments, there is a strong need for innovative new ground systems that are able to provide necessary levels of processing power, system availability, and redundancy while maintaining a small footprint in terms of physical space, power utilization, and cooling. The ground system architecture being presented is based on heritage from several other projects currently in development or operations at Goddard, but was designed and built specifically to meet the needs of the Science and Planetary Operations Control Center (SPOCC) as a low-cost payload command, control, planning, and analysis operations center. However, this SPOCC architecture was designed to be generic enough to be re-used in part or in whole by other labs and missions (which has already happened in several cases since its inception). The SPOCC architecture leverages a highly available VMware-based virtualization cluster with shared SAS Direct-Attached Storage (DAS) to provide an extremely high-performing, low-power, small-footprint compute environment whose Virtual Machine resources are shared among the various tenant missions in the SPOCC. The storage is also expandable, allowing future missions to chain up to 7 additional 2U chassis of storage at an extremely competitive cost if they require additional archive or virtual machine storage space. The software architecture provides a fully redundant GMSEC-based message bus architecture, based on the ActiveMQ middleware, to track all health and safety status within the SPOCC ground system. All virtual machines utilize the GMSEC system agents to report system host health over the GMSEC bus, and spacecraft payload health is monitored using the Hammers Integrated Test and Operations System (ITOS) Galaxy Telemetry and Command (TC) system, which performs near-real-time limit checking and data processing on the downlinked data stream and injects messages into the GMSEC bus that are monitored to automatically page the on-call operator or Systems Administrator (SA) when an off-nominal condition is detected. This architecture, like the LTSP thin clients, is shared across all tenant missions. Other required IT security controls are implemented at the ground system level, including physical access controls, logical system-level authentication and authorization management, auditing and reporting, network management, and a NIST 800-53 FISMA-Moderate IT Security Plan, Risk Assessment, and Contingency Plan, helping multiple missions share the cost of compliance with agency-mandated directives. The SPOCC architecture provides science payload control centers and backup mission operations centers with a cost-effective, standardized approach to virtualizing and monitoring resources that traditionally occupied multiple racks full of physical machines. The increased agility in deploying new virtual systems and thin-client workstations can provide significant savings in personnel costs for maintaining the ground system. The cost savings in procurement, power, rack footprint, and cooling, as well as the shared multi-mission design, greatly reduce upfront cost for missions moving into the facility. Overall, the authors hope that this architecture will become a model for how future NASA operations centers are constructed!
ERIC Educational Resources Information Center
Glantz, Richard S.
Until recently, the emphasis in information storage and retrieval systems has been towards batch-processing of large files. In contrast, SHOEBOX is designed for the unformatted, personal file collection of the computer-naive individual. Operating through display terminals in a time-sharing, interactive environment on the IBM 360, the user can…
ERIC Educational Resources Information Center
Diffin, Jennifer; Chirombo, Fanuel; Nangle, Dennis; de Jong, Mark
2010-01-01
This article explains how the document management team (circulation and interlibrary loan) at the University of Maryland University College implemented Microsoft's SharePoint product to create a central hub for online collaboration, communication, and storage. Enhancing the team's efficiency, organization, and cooperation was the primary goal.…
Huang, Mingbo; Hu, Ding; Yu, Donglan; Zheng, Zhensheng; Wang, Kuijian
2011-12-01
Enhanced external counterpulsation (EECP) information consists of both text and hemodynamic waveform data. At present, EECP text information is successfully managed through the Web browser, while the management and sharing of hemodynamic waveform data over the Internet has not yet been solved. In order to manage EECP information completely, and based on an in-depth analysis of the EECP hemodynamic waveform file in the digital imaging and communications in medicine (DICOM) format and its disadvantages for Internet sharing, we proposed using the extensible markup language (XML), currently the popular data exchange standard on the Internet, as the storage specification for sharing EECP waveform data. We then designed a Web-based sharing system for EECP hemodynamic waveform data on the ASP.NET 2.0 platform. We also describe the four main function modules and their implementation methods: the DICOM-to-XML conversion module, the EECP waveform data management module, the EECP waveform retrieval and display module, and the security mechanism of the system.
C-MOS array design techniques: SUMC multiprocessor system study
NASA Technical Reports Server (NTRS)
Clapp, W. A.; Helbig, W. A.; Merriam, A. S.
1972-01-01
The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.
Sharing from Scratch: How To Network CD-ROM.
ERIC Educational Resources Information Center
Doering, David
1998-01-01
Examines common CD-ROM networking architectures: via existing operating systems (OS), thin server towers, and dedicated servers. Discusses digital video disc (DVD) and non-CD/DVD optical storage solutions and presents case studies of networks that work. (PEN)
Emerging Security Mechanisms for Medical Cyber Physical Systems.
Kocabas, Ovunc; Soyata, Tolga; Aktas, Mehmet K
2016-01-01
The following decade will witness a surge in remote health-monitoring systems that are based on body-worn monitoring devices. These Medical Cyber Physical Systems (MCPS) will be capable of transmitting the acquired data to a private or public cloud for storage and processing. Machine learning algorithms running in the cloud and processing this data can provide decision support to healthcare professionals. There is no doubt that the security and privacy of the medical data is one of the most important concerns in designing an MCPS. In this paper, we depict the general architecture of an MCPS consisting of four layers: data acquisition, data aggregation, cloud processing, and action. Due to the differences in hardware and communication capabilities of each layer, different encryption schemes must be used to guarantee data privacy within that layer. We survey conventional and emerging encryption schemes based on their ability to provide secure storage, data sharing, and secure computation. Our detailed experimental evaluation of each scheme shows that while the emerging encryption schemes enable exciting new features such as secure sharing and secure computation, they introduce several orders-of-magnitude computational and storage overhead. We conclude our paper by outlining future research directions to improve the usability of the emerging encryption schemes in an MCPS.
Software Defined Cyberinfrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, Ian; Blaiszik, Ben; Chard, Kyle
Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
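A rule engine in this style can be sketched in a few lines; the trigger names and actions below are illustrative assumptions, not the paper's IFTA notation.

```python
import fnmatch

rules = [
    # (trigger event, filename pattern, action to run)
    ("created",  "*.h5",  lambda p: print(f"index {p} into the catalog")),
    ("created",  "*.tif", lambda p: print(f"extract metadata from {p}")),
    ("modified", "*",     lambda p: print(f"re-replicate {p} to the archive")),
]

def on_event(event: str, path: str):
    """Called by the storage system whenever a file is created or modified."""
    for trigger, pattern, action in rules:
        if event == trigger and fnmatch.fnmatch(path, pattern):
            action(path)

on_event("created", "scan_0423.h5")   # -> index scan_0423.h5 into the catalog
```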
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob
Hydropower plant (HPP) generation comprises a considerable portion of bulk electricity generation and is delivered with a low-carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include wind and solar. Increasing penetration levels of wind and solar lead to a lower inertia on the electric grid, which poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments of renewable energy resources on electric grids. If integrated with scalable, multi-time-step energy storage so that the total output can be controlled, multiple run-of-the-river (ROR) HPPs can be deployed. Although the size of a single energy storage system is much smaller than that of a typical reservoir, the combined ratings of the storage systems and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. This paper proposes cohesively managing multiple sets of energy storage systems distributed in different locations. This paper also describes the challenges associated with ROR HPP system architecture and operation.
NASA Astrophysics Data System (ADS)
Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.
2009-12-01
Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state of health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provide protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem architectures using PxFS and QFS were found to be incompatible with our software architecture, so sharing of data between systems is accomplished via traditional NFS. Linux was found to be limited in terms of deployment flexibility and consistency between versions. Despite the experimentation with various technologies, our current virtualized architecture is stable to the point of an average daily real time data return rate of 92.34% over the entire lifetime of the project to date.
A thermal storage capacity market for non dispatchable renewable energies
NASA Astrophysics Data System (ADS)
Bennouna, El Ghali; Mouaky, Ammar; Arrad, Mouad; Ghennioui, Abdellatif; Mimet, Abdelaziz
2017-06-01
Due to the increasingly high capacity of wind power and solar PV in Germany and some other European countries, and the high share of variable renewable energy resources in comparison to fossil and nuclear capacity, a power reserve market structured by auction systems was created to facilitate the exchange of balancing power capacities between systems and even grid operators. Morocco has a large potential for both wind and solar energy and is engaged in a program to deploy 2,000 MW of wind capacity by 2020 and 3,000 MW of solar capacity by 2030. Although the competitiveness of wind energy is very strong, it is clear that the wind program could be even more ambitious than it is, especially when compared to the large exploitable potential. On the other hand, heavy investments in concentrated solar power plants equipped with thermal energy storage began a few years ago, including the launch of the first part of the Nour Ouarzazate complex, with the goal of delivering stable, dispatchable, and affordable electricity, especially during evening peak hours. This paper aims to demonstrate the potential of shared thermal storage capacity between dispatchable and non-dispatchable renewable energies, particularly CSP and wind power, thus highlighting the importance of a storage capacity market operating in parallel with the power reserve market and how such a market could enhance the development and market penetration of both wind and CSP.
Advancing Collaboration through Hydrologic Data and Model Sharing
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.
2015-12-01
HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative's Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called "BagIt". HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud based computation operating on resources for the execution of hydrologic models and analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis is supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare's content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.
Digital Photograph Security: What Plastic Surgeons Need to Know.
Thomas, Virginia A; Rugeley, Patricia B; Lau, Frank H
2015-11-01
Sharing and storing digital patient photographs occur daily in plastic surgery. Two major risks associated with the practice, data theft and Health Insurance Portability and Accountability Act (HIPAA) violations, have been dramatically amplified by high-speed data connections and digital camera ubiquity. The authors review what plastic surgeons need to know to mitigate those risks and provide recommendations for implementing an ideal, HIPAA-compliant solution for plastic surgeons' digital photography needs: smartphones and cloud storage. Through informal discussions with plastic surgeons, the authors identified the most common photograph sharing and storage methods. For each method, a literature search was performed to identify the risks of data theft and HIPAA violations. HIPAA violation risks were confirmed by the second author (P.B.R.), a compliance liaison and privacy officer. A comprehensive review of HIPAA-compliant cloud storage services was performed. When possible, informal interviews with cloud storage services representatives were conducted. The most common sharing and storage methods are not HIPAA compliant, and several are prone to data theft. The authors' review of cloud storage services identified six HIPAA-compliant vendors that have strong to excellent security protocols and policies. These options are reasonably priced. Digital photography and technological advances offer major benefits to plastic surgeons but are not without risks. A proper understanding of data security and HIPAA regulations needs to be applied to these technologies to safely capture their benefits. Cloud storage services offer efficient photograph sharing and storage with layers of security to ensure HIPAA compliance and mitigate data theft risk.
Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)
NASA Astrophysics Data System (ADS)
Wilson, A.; Lindholm, D.; Baltzer, T.
2005-12-01
The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP-enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term on the order of hours to perhaps a week or two. In order to save interesting data in longer term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence generally requires executing a sequence of low-level, operating-system-specific commands with significant user involvement. The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying. In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back end storage media and interchangeable software components. The TDR interface will provide high-level abstractions for long-term storage; controlled, fast, and reliable access; and data movement capabilities via a variety of technologies such as OPeNDAP and GridFTP. The modular structure will allow substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as the Replica Location Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR. References: IDD: http://www.unidata.ucar.edu/content/software/idd/index.html THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html LEAD: http://lead.ou.edu/ RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf
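Since TDS datasets are exposed through OPeNDAP-enabled URLs, remote subsetting can be scripted directly. A minimal sketch follows, assuming a hypothetical TDS endpoint and variable name; the netCDF4 library handles the OPeNDAP protocol transparently.

```python
# Minimal sketch: reading a remote dataset through an OPeNDAP-enabled URL,
# as served by a THREDDS Data Server. The URL and variable name are
# hypothetical placeholders, not a real TDS endpoint.
from netCDF4 import Dataset

url = "http://example.edu/thredds/dodsC/grib/model/run.nc"  # hypothetical
ds = Dataset(url)  # opens remotely; no full download required
temps = ds.variables["temperature"][0, :10, :10]  # server-side subset
print(temps.shape)
ds.close()
```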
Intelligent Energy Management System for PV-Battery-based Microgrids in Future DC Homes
NASA Astrophysics Data System (ADS)
Chauhan, R. K.; Rajpurohit, B. S.; Gonzalez-Longatt, F. M.; Singh, S. N.
2016-06-01
This paper presents a novel intelligent energy management system (IEMS) for a DC microgrid connected to the public utility (PU), photovoltaic (PV) and multi-battery bank (BB). The control objectives of the proposed IEMS are: (i) to ensure load sharing (according to the source capacity) among sources, (ii) to reduce the power loss in the system (high efficiency), and (iii) to enhance the system reliability and power quality. The proposed IEMS is novel because it follows the ideal characteristics of the battery (with some assumptions) for power sharing and selects the closest source to minimize power losses. The IEMS allows continuous and accurate monitoring with intelligent control of distribution system operations such as the battery bank energy storage (BBES) system, the PV system and customer utilization of electric power. The proposed IEMS gives better operational performance in terms of load sharing, loss minimization, and reliability enhancement of the DC microgrid.
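To make the first control objective concrete, here is an illustrative sketch (not the paper's IEMS) of capacity-proportional load sharing among sources; the source names and capacity values are hypothetical.

```python
# Illustrative sketch: sharing a DC load among sources in proportion to each
# source's available capacity, a common dispatch rule. Not the paper's
# actual IEMS algorithm.
def share_load(load_kw, capacities_kw):
    """Return per-source setpoints proportional to capacity."""
    total = sum(capacities_kw.values())
    if load_kw > total:
        raise ValueError("load exceeds total available capacity")
    return {src: load_kw * cap / total for src, cap in capacities_kw.items()}

# Example: PV, battery bank, and public utility share a 12 kW load.
print(share_load(12.0, {"pv": 6.0, "battery": 4.0, "utility": 10.0}))
```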
Social Influences on User Behavior in Group Information Repositories
ERIC Educational Resources Information Center
Rader, Emilee Jeanne
2009-01-01
Group information repositories are systems for organizing and sharing files kept in a central location that all group members can access. These systems are often assumed to be tools for storage and control of files and their metadata, not tools for communication. The purpose of this research is to better understand user behavior in group…
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1993-01-01
The need to manage large amounts of data on robotically controlled devices has been critical to the mission of this Agency for many years. In many respects this Agency has helped pioneer, with their industry counterparts, the development of a number of products long before these systems became commercially available. Numerous attempts have been made to field both robotically controlled tape and optical disk technology and systems to satisfy our tertiary storage needs. Custom developed products were architected, designed, and developed without vendor partners over the past two decades to field workable systems to handle our ever-increasing storage requirements. Many of the attendees of this symposium are familiar with some of the older products, such as the Braegen Automated Tape Libraries (ATLs), the IBM 3850, and the Ampex TeraStore, just to name a few. In addition, we embarked on an in-house development of a shared disk input/output support processor to manage our ever-increasing tape storage needs. For all intents and purposes, this system was a file server by current definitions, which used CDC Cyber computers as the control processors. It served us well and was just recently removed from production usage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jy-An John; Feng, Zhili; Zhang, Wei
An apparatus and system are described for storing high-pressure fluids such as hydrogen. An inner tank and a pre-stressed concrete pressure vessel share the structural and/or pressure load on the inner tank. The system and apparatus provide a high-performance, low-cost container while mitigating hydrogen embrittlement of the metal tank. The system is useful for distributing hydrogen to a power grid or to a vehicle refueling station.
Efficient Access to Massive Amounts of Tape-Resident Data
NASA Astrophysics Data System (ADS)
Yu, David; Lauret, Jérôme
2017-10-01
Randomly restoring files from tapes degrades read performance, primarily due to frequent tape mounts. The high latency of time-consuming tape mounts and dismounts is a major issue when accessing massive amounts of data from tape storage. BNL’s mass storage system currently holds more than 80 PB of data on tapes, managed by HPSS. To restore files from HPSS, we make use of scheduler software called ERADAT. This scheduler system was originally based on code from Oak Ridge National Lab, developed in the early 2000s. After some major modifications and enhancements, ERADAT now provides advanced HPSS resource management, priority queuing, resource sharing, web-browser visibility of real-time staging activities, and advanced real-time statistics and graphs. ERADAT is also integrated with ACSLS and HPSS for near real-time mount statistics and resource control in HPSS. ERADAT is also the interface between HPSS and other applications such as the locally developed Data Carousel, providing fair resource-sharing policies and related capabilities. ERADAT has demonstrated great performance at BNL.
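The core idea behind such a scheduler, reducing mounts by batching requests per tape and reading each tape in order, can be sketched briefly. This illustrates the general technique only, not ERADAT's actual algorithm; the request fields are hypothetical.

```python
# Illustrative sketch: batch restore requests by tape and order them by
# position on tape, so each tape is mounted once and read mostly
# sequentially. Not ERADAT's actual scheduling logic.
from collections import defaultdict

def schedule_restores(requests):
    """requests: iterable of (tape_id, position, path). Returns an ordered plan."""
    by_tape = defaultdict(list)
    for tape_id, position, path in requests:
        by_tape[tape_id].append((position, path))
    plan = []
    for tape_id in sorted(by_tape, key=lambda t: -len(by_tape[t])):  # busiest tape first
        plan.append((tape_id, [p for _, p in sorted(by_tape[tape_id])]))
    return plan

plan = schedule_restores([("T1", 50, "/a"), ("T2", 10, "/b"), ("T1", 5, "/c")])
print(plan)  # [('T1', ['/c', '/a']), ('T2', ['/b'])]
```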
A Distributed Multi-Agent System for Collaborative Information Management and Learning
NASA Technical Reports Server (NTRS)
Chen, James R.; Wolfe, Shawn R.; Wragg, Stephen D.; Koga, Dennis (Technical Monitor)
2000-01-01
In this paper, we present DIAMS, a system of distributed, collaborative agents to help users access, manage, share and exchange information. A DIAMS personal agent helps its owner find information most relevant to current needs. It provides tools and utilities for users to manage their information repositories with dynamic organization and virtual views. Flexible hierarchical display is integrated with indexed query search to support effective information access. Automatic indexing methods are employed to support user queries and communication between agents. Contents of a repository are kept in object-oriented storage to facilitate information sharing. Collaboration between users is aided by easy sharing utilities as well as automated information exchange. Matchmaker agents are designed to establish connections between users with similar interests and expertise. DIAMS agents provide needed services for users to share and learn information from one another on the World Wide Web.
1986-08-19
Thus in g(X, Y), A and X share one element, and B and Y share another. Assigning a value to A (via its storage element) also assigns that value to X... functionality as well as generate it. References: [Ada] "ADA as a Hardware Description Language: An Initial Report," M.R. Barbacci, S. Grout, G. ...; 1985; pp. 303-320. [Expert] "An Expert-System Paradigm for Design," Forrest D. Brewer, Daniel D. Gajski; 23rd Design Automation Conference, 1986; pp. ...
Prototyping an online wetland ecosystem services model using open model sharing standards
Feng, M.; Liu, S.; Euliss, N.H.; Young, Caitlin; Mushet, D.M.
2011-01-01
Great interest currently exists for developing ecosystem models to forecast how ecosystem services may change under alternative land use and climate futures. Ecosystem services are diverse and include supporting services or functions (e.g., primary production, nutrient cycling), provisioning services (e.g., wildlife, groundwater), regulating services (e.g., water purification, floodwater retention), and even cultural services (e.g., ecotourism, cultural heritage). Hence, the knowledge base necessary to quantify ecosystem services is broad and derived from many diverse scientific disciplines. Building the required interdisciplinary models is especially challenging as modelers from different locations and times may develop the disciplinary models needed for ecosystem simulations, and these models must be identified and made accessible to the interdisciplinary simulation. Additional difficulties include inconsistent data structures, formats, and metadata required by geospatial models as well as limitations on computing, storage, and connectivity. Traditional standalone and closed network systems cannot fully support sharing and integrating interdisciplinary geospatial models from disparate sources. To address this need, we developed an approach to openly share and access geospatial computational models using distributed Geographic Information System (GIS) techniques and open geospatial standards. We included a means to share computational models compliant with the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard to ensure modelers have an efficient and simplified means to publish new models. To demonstrate our approach, we developed five disciplinary models that can be integrated and shared to simulate a few of the ecosystem services (e.g., water storage, waterfowl breeding) that are provided by wetlands in the Prairie Pothole Region (PPR) of North America.
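As a sketch of how a model published through WPS might be discovered and invoked by a client, consider the following example using the OWSLib client library; the endpoint URL, process identifier, and input name are hypothetical placeholders rather than the authors' actual services.

```python
# Minimal sketch: discovering and invoking a model published as an OGC WPS
# process via the OWSLib client. Endpoint, process id, and inputs are
# hypothetical placeholders.
from owslib.wps import WebProcessingService

wps = WebProcessingService("http://example.org/wps")  # hypothetical endpoint
for proc in wps.processes:
    print(proc.identifier, "-", proc.title)

# Execute a (hypothetical) wetland water-storage process with one literal input.
execution = wps.execute("wetland_water_storage", inputs=[("wetland_id", "P123")])
print(execution.status)
```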
Peregrine System Configuration | High-Performance Computing | NREL
Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; ... directories are mounted on all nodes, along with a file system dedicated to shared projects. ... processors with 64 GB of memory. All nodes are connected to the high-speed InfiniBand network. ...
NASA Technical Reports Server (NTRS)
Burleigh, Scott C.
2011-01-01
Sptrace is a general-purpose space utilization tracing system that is conceptually similar to the commercial Purify product used to detect leaks and other memory usage errors. It is designed to monitor space utilization in any sort of heap, i.e., a region of data storage on some device (nominally memory; possibly shared and possibly persistent) with a flat address space. This software can trace usage of shared and/or non-volatile storage in addition to private RAM (random access memory). Sptrace is implemented as a set of C function calls that are invoked from within the software that is being examined. The function calls fall into two broad classes: (1) functions that are embedded within the heap management software [e.g., JPL's SDR (Simple Data Recorder) and PSM (Personal Space Management) systems] to enable heap usage analysis by populating a virtual time-sequenced log of usage activity, and (2) reporting functions that are embedded within the application program whose behavior is suspect. For ease of use, these functions may be wrapped privately inside public functions offered by the heap management software. Sptrace can be used for VxWorks or RTEMS real-time systems as easily as for Linux or OS X systems.
XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital.
Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Choi, Young Hwan; Cho, Yong Kyun
2013-12-01
The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE.
Intelligent Integrated System Health Management
NASA Technical Reports Server (NTRS)
Figueroa, Fernando
2012-01-01
Intelligent Integrated System Health Management (ISHM) is the management of data, information, and knowledge (DIaK) with the purposeful objective of determining the health of a system (Management: storage, distribution, sharing, maintenance, processing, reasoning, and presentation). Presentation discusses: (1) ISHM Capability Development. (1a) ISHM Knowledge Model. (1b) Standards for ISHM Implementation. (1c) ISHM Domain Models (ISHM-DM's). (1d) Intelligent Sensors and Components. (2) ISHM in Systems Design, Engineering, and Integration. (3) Intelligent Control for ISHM-Enabled Systems
Implementation of NASTRAN on the IBM/370 CMS operating system
NASA Technical Reports Server (NTRS)
Britten, S. S.; Schumacker, B.
1980-01-01
The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.
easyDAS: Automatic creation of DAS servers
2011-01-01
Background: The Distributed Annotation System (DAS) has proven to be a successful way to publish and share biological data. Although there are more than 750 active registered servers from around 50 organizations, setting up a DAS server involves a fair amount of work, making it difficult for many research groups to share their biological annotations. Given the clear advantage that generalized sharing of relevant biological data offers the research community, it would be desirable to facilitate the sharing process. Results: Here we present easyDAS, a web-based system enabling anyone to publish biological annotations with just some clicks. The system, available at http://www.ebi.ac.uk/panda-srv/easydas, is capable of reading different standard data file formats, processing the data and creating a new publicly available DAS source in a completely automated way. The created sources are hosted on the EBI systems and can take advantage of its high storage capacity and network connection, freeing the data provider from any network management work. easyDAS is an open source project under the GNU LGPL license. Conclusions: easyDAS is an automated DAS source creation system which can help many researchers in sharing their biological data, potentially increasing the amount of relevant biological data available to the scientific community. PMID:21244646
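Once published, a DAS source can be queried with plain HTTP. The sketch below fetches features for a genomic segment; the server URL and source name are hypothetical, and the element names follow the DAS XML (DASGFF) conventions as I understand them rather than being taken from this abstract.

```python
# Minimal sketch: fetching annotations from a DAS source over HTTP. The DAS
# protocol answers simple GET requests with XML; the server URL and source
# name here are hypothetical placeholders.
import requests
import xml.etree.ElementTree as ET

url = "http://example.org/das/my_annotations/features"  # hypothetical source
resp = requests.get(url, params={"segment": "1:100000,200000"}, timeout=30)
resp.raise_for_status()

# Print each annotated feature's id and type from the DASGFF response
# (element names assumed from the DAS specification).
root = ET.fromstring(resp.content)
for feature in root.iter("FEATURE"):
    print(feature.get("id"), feature.findtext("TYPE"))
```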
Shared prefetching to reduce execution skew in multi-threaded systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eichenberger, Alexandre E; Gunnels, John A
Mechanisms are provided for optimizing code to perform prefetching of data into a shared memory of a computing device that is shared by a plurality of threads that execute on the computing device. A memory stream of a portion of code that is shared by the plurality of threads is identified. A set of prefetch instructions is distributed across the plurality of threads. Prefetch instructions are inserted into the instruction sequences of the plurality of threads such that each instruction sequence has a separate sub-portion of the set of prefetch instructions, thereby generating optimized code. Executable code is generated based on the optimized code and stored in a storage device. The executable code, when executed, performs the prefetches associated with the distributed set of prefetch instructions in a shared manner across the plurality of threads.
Dual-Wavelength Sensitized Photopolymer for Holographic Data Storage
NASA Astrophysics Data System (ADS)
Tao, Shiquan; Zhao, Yuxia; Wan, Yuhong; Zhai, Qianli; Liu, Pengfei; Wang, Dayong; Wu, Feipeng
2010-08-01
Novel photopolymers for holographic storage were investigated by combining acrylate monomers and/or vinyl monomers as recording media and liquid epoxy resins plus an amine hardener as binder. In order to improve the holographic performance of the material at the blue-green wavelength band, two novel dyes were used as sensitizers. The methods of evaluating the holographic performance of the material, including the shrinkage and noise characteristics, are described in detail. Preliminary experiments show that samples with an optimized composition have good holographic performance, and it is possible to record dual-wavelength holograms simultaneously in this photopolymer by sharing the same optical system; thus the storage density and data rate can be doubled.
Progress in preliminary studies at Ottana Solar Facility
NASA Astrophysics Data System (ADS)
Demontis, V.; Camerada, M.; Cau, G.; Cocco, D.; Damiano, A.; Melis, T.; Musio, M.
2016-05-01
The fast increasing share of distributed generation from non-programmable renewable energy sources, such as the strong penetration of photovoltaic technology in the distribution networks, has generated several problems for the management and security of the whole power grid. In order to meet the challenge of a significant share of solar energy in the electricity mix, several actions aimed at increasing the grid flexibility and its hosting capacity, as well as at improving the generation programmability, need to be investigated. This paper focuses on the ongoing preliminary studies at the Ottana Solar Facility, a new experimental power plant located in Sardinia (Italy) currently under construction, which will offer the possibility to progress in the study of solar plant integration in the power grid. The facility integrates a concentrating solar power (CSP) plant, including a thermal energy storage system and an organic Rankine cycle (ORC) unit, with a concentrating photovoltaic (CPV) plant and an electrical energy storage system. The main goals of the facility are to assess small-scale concentrating solar power technology in real operating conditions and to study the integration of the two technologies and the storage systems to produce programmable and controllable power profiles. A model of the CSP plant yield was developed to assess different operational strategies that significantly influence the plant's yearly yield and its global economic effectiveness. In particular, precise assumptions for the ORC module start-up behavior, based on discussions with the manufacturers and technical datasheets, will be described. Finally, the results of the analysis of the "solar driven", "weather forecasts" and "combined storage state of charge (SOC)/weather forecasts" operational strategies will be presented.
Engineering model system study for a regenerative fuel cell: Study report
NASA Technical Reports Server (NTRS)
Chang, B. J.; Schubert, F. H.; Kovach, A. J.; Wynveen, R. A.
1984-01-01
Key design issues of the regenerative fuel cell system concept were studied and a design definition of an alkaline electrolyte based engineering model system for low Earth orbit missions was completed. Definition of key design issues for a regenerative fuel cell system includes gaseous reactant storage, shared heat exchangers and high pressure pumps. A power flow diagram for the 75 kW initial space station and the impact of different regenerative fuel cell modular sizes on the total 5-year to-orbit weight and volume are determined. System characteristics, an isometric drawing, component sizes and mass and energy balances are determined for the 10 kW engineering model system. An open loop regenerative fuel cell concept is considered for integration of the energy storage system with the life support system of the space station. Technical problems and their solutions, pacing technologies and required developments and demonstrations for the regenerative fuel cell system are defined.
Thermal storage for industrial process and reject heat
NASA Technical Reports Server (NTRS)
Duscha, R. A.; Masica, W. J.
1978-01-01
Industrial production uses about 40 percent of the total energy consumed in the United States. The major share of this is derived from fossil fuel. Potential savings of scarce fuel are possible through the use of thermal energy storage (TES) of reject or process heat for subsequent use. Three especially significant industries where high-temperature TES appears attractive are discussed: paper and pulp, iron and steel, and cement. The potential annual fuel savings, with large-scale implementation of near-term TES systems in these three industries, are nearly 9,000,000 bbl of oil.
An International Review of the Development and Implementation of Shared Print Storage
ERIC Educational Resources Information Center
Genoni, Paul
2013-01-01
This article undertakes a review of the literature related to shared print storage and national repositories from 1980-2013. There is a separate overview of the relevant Australian literature. The coverage includes both relevant journal literature and major reports. In the process the article traces the developments in the theory and practice of…
NASA Technical Reports Server (NTRS)
Moseley, E. C.
1974-01-01
The Medical Information Computer System (MEDICS) is a time-shared, disk-oriented minicomputer system capable of meeting storage and retrieval needs for the space- or non-space-related applications of at least 16 simultaneous users. At the various commercially available low-cost terminals, the simple command and control mechanism and the generalized communication activity of the system permit multiple form inputs, real-time updating, and instantaneous retrieval capability with a full range of options.
NASA Astrophysics Data System (ADS)
Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.
2017-12-01
Web-based apps, web services and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, transparency and reproducibility of modeling workflows and results. However, challenges still exist in the real application of these capabilities and in the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality to support reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, model and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat or modify the modeling work. This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing system to enable model development and execution. The entire system, comprising the HydroShare app, HydroShare and HydroDS web services, is open source and contributes to the capability for web-based modeling research.
Strabo: An App and Database for Structural Geology and Tectonics Data
NASA Astrophysics Data System (ADS)
Newman, J.; Williams, R. T.; Tikoff, B.; Walker, J. D.; Good, J.; Michels, Z. D.; Ash, J.
2016-12-01
Strabo is a data system designed to facilitate digital storage and sharing of structural geology and tectonics data. The data system allows researchers to store and share field and laboratory data as well as construct new multi-disciplinary data sets. Strabo is built on graph database technology, as opposed to a relational database, which provides the flexibility to define relationships between objects of any type. This framework allows observations to be linked in a complex and hierarchical manner that is not possible in traditional database topologies. Thus, the advantage of the Strabo data structure is the ability of graph databases to link objects in both numerous and complex ways, in a manner that more accurately reflects the realities of collecting and organizing geological data sets. The data system is accessible via a mobile interface (iOS and Android devices) that allows these data to be stored, visualized, and shared during primary collection in the field or the laboratory. The Strabo Data System is underlain by the concept of a "Spot," which we define as any observation that characterizes a specific area. This can be anything from a strike and dip measurement of bedding to cross-cutting relationships between faults in complex dissected terrains. Each of these Spots can then contain other Spots and/or measurements (e.g., lithology, slickenlines, displacement magnitude). Hence, the Spot concept is applicable to all relationships and observation sets. Strabo is therefore capable of quantifying and digitally storing large spatial variations and complex geometries of naturally deformed rocks within hierarchically related maps and images. These approaches provide an observational fidelity comparable to a traditional field book, but with the added benefits of digital data storage, processing, and ease of sharing. This approach allows Strabo to integrate seamlessly into the workflow of most geologists. Future efforts will focus on extending Strabo to other sub-disciplines as well as developing a desktop system for the enhanced collection and organization of microstructural data.
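The Spot containment idea maps naturally onto a graph. The following sketch, offered as an illustration rather than Strabo's actual schema, represents Spots as nodes whose edges carry "contains" and "cross-cuts" relations; all node names and attributes are hypothetical.

```python
# Illustrative sketch: "Spots" as nodes in a directed graph, where a Spot can
# contain other Spots and carry arbitrary measurements. Not Strabo's actual
# data model; names and attributes are hypothetical.
import networkx as nx

g = nx.DiGraph()
g.add_node("outcrop_1", kind="spot", lithology="granite")
g.add_node("bedding_A", kind="spot", strike=245, dip=32)
g.add_node("fault_B", kind="spot", slickenline_trend=120)
g.add_edge("outcrop_1", "bedding_A", relation="contains")
g.add_edge("outcrop_1", "fault_B", relation="contains")
g.add_edge("fault_B", "bedding_A", relation="cross-cuts")  # cross-cutting relationship

for child in g.successors("outcrop_1"):
    print(child, g.nodes[child])
```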
The SBOL Stack: A Platform for Storing, Publishing, and Sharing Synthetic Biology Designs.
Madsen, Curtis; McLaughlin, James Alastair; Mısırlı, Göksel; Pocock, Matthew; Flanagan, Keith; Hallinan, Jennifer; Wipat, Anil
2016-06-17
Recently, synthetic biologists have developed the Synthetic Biology Open Language (SBOL), a data exchange standard for descriptions of genetic parts, devices, modules, and systems. The goals of this standard are to allow scientists to exchange designs of biological parts and systems, to facilitate the storage of genetic designs in repositories, and to facilitate the description of genetic designs in publications. In order to achieve these goals, the development of an infrastructure to store, retrieve, and exchange SBOL data is necessary. To address this problem, we have developed the SBOL Stack, a Resource Description Framework (RDF) database specifically designed for the storage, integration, and publication of SBOL data. This database allows users to define a library of synthetic parts and designs as a service, to share SBOL data with collaborators, and to store designs of biological systems locally. The database also allows external data sources to be integrated by mapping them to the SBOL data model. The SBOL Stack includes two Web interfaces: the SBOL Stack API and SynBioHub. While the former is designed for developers, the latter allows users to upload new SBOL biological designs, download SBOL documents, search by keyword, and visualize SBOL data. Since the SBOL Stack is based on semantic Web technology, the inherent distributed querying functionality of RDF databases can be used to allow different SBOL Stack databases to be queried simultaneously, and therefore, data can be shared between different institutes, centers, or other users.
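Because the data live in an RDF store, designs can be retrieved with SPARQL. A minimal sketch follows; the endpoint URL is a hypothetical placeholder, and the query assumes the published SBOL 2 namespace, not details drawn from this abstract.

```python
# Minimal sketch: querying an RDF triple store for SBOL component definitions
# with SPARQL. Endpoint URL is hypothetical; the sbol: namespace is the
# published SBOL 2 one.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sbol/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX sbol: <http://sbols.org/v2#>
    SELECT ?part ?name WHERE {
        ?part a sbol:ComponentDefinition ;
              sbol:displayId ?name .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["part"]["value"], row["name"]["value"])
```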
A Grid Connected Photovoltaic Inverter with Battery-Supercapacitor Hybrid Energy Storage.
Miñambres-Marcos, Víctor Manuel; Guerrero-Martínez, Miguel Ángel; Barrero-González, Fermín; Milanés-Montero, María Isabel
2017-08-11
The power generation from renewable power sources is variable in nature, and may contain unacceptable fluctuations, which can be alleviated by using energy storage systems. However, the cost of batteries and their limited lifetime are serious disadvantages. To solve these problems, an improvement consisting in the collaborative association of batteries and supercapacitors has been studied. Nevertheless, these studies don't address in detail the case of residential and large-scale photovoltaic systems. In this paper, a selected combined topology and a new control scheme are proposed to control the power sharing between batteries and supercapacitors. Also, a method for sizing the energy storage system together with the hybrid distribution based on the photovoltaic power curves is introduced. This innovative contribution not only reduces the stress levels on the battery, and hence increases its life span, but also provides constant power injection to the grid during a defined time interval. The proposed scheme is validated through detailed simulation and experimental tests.
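A common way to realize this kind of battery-supercapacitor power sharing, offered here as an illustrative sketch rather than the paper's actual control scheme, is to low-pass filter the power request so the battery supplies the slow component while the supercapacitor absorbs fast fluctuations; all parameter values are hypothetical.

```python
# Illustrative sketch: split a power request between battery and
# supercapacitor with a first-order low-pass filter, so the battery serves
# the slow component and the supercapacitor the fast fluctuations,
# reducing battery stress. Not the paper's actual controller.
def split_power(p_request, p_batt_prev, dt=0.1, tau=5.0):
    """Return (battery_power, supercap_power) for one control step."""
    alpha = dt / (tau + dt)                                   # filter coefficient
    p_batt = p_batt_prev + alpha * (p_request - p_batt_prev)  # slow component
    return p_batt, p_request - p_batt                         # supercap takes the rest

p_batt = 0.0
for p_req in [0.0, 2.0, 2.5, 1.0, 3.0]:  # fluctuating PV deficit (kW), hypothetical
    p_batt, p_sc = split_power(p_req, p_batt)
    print(f"battery={p_batt:.2f} kW, supercap={p_sc:.2f} kW")
```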
PANDA: A distributed multiprocessor operating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chubb, P.
1989-01-01
PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.
THE WIDE-AREA ENERGY STORAGE AND MANAGEMENT SYSTEM PHASE II Final Report - Flywheel Field Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Makarov, Yuri V.; Weimar, Mark R.
2010-08-31
This research was conducted by Pacific Northwest National Laboratory (PNNL), operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute, for Bonneville Power Administration (BPA), the California Institute for Energy and Environment (CIEE) and the California Energy Commission (CEC). A wide-area energy management system (WAEMS) is a centralized control system that operates energy storage devices (ESDs) located in different places to provide energy and ancillary services that can be shared among balancing authorities (BAs). The goal of this research is to conduct flywheel field tests and investigate the technical characteristics and economics of combined hydro-flywheel regulation services that can be shared between Bonneville Power Administration (BPA) and California Independent System Operator (CAISO) controlled areas. This report is the second interim technical report for Phase II of the WAEMS project. This report presents: 1) the methodology of sharing regulation service between balancing authorities, 2) the algorithm to allocate the regulation signal between the flywheel and hydro power plant to minimize the wear-and-tear of the hydro power plants, 3) field results of the hydro-flywheel regulation service (conducted by Beacon Power), and 4) the performance metrics and economic analysis of the combined hydro-flywheel regulation service.
The Scalable Checkpoint/Restart Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.
The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1,094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
Distributed geospatial model sharing based on open interoperability standards
Feng, Min; Liu, Shuguang; Euliss, Ned H.; Fang, Yin
2009-01-01
Numerous geospatial computational models have been developed based on sound principles and published in journals or presented at conferences. However, modelers have made few advances in the development of computable modules that facilitate sharing during model development or utilization. Constraints hampering development of model sharing technology include limitations on computing, storage, and connectivity; traditional stand-alone and closed network systems cannot fully support sharing and integrating geospatial models. To address this need, we have identified methods for sharing geospatial computational models using Service Oriented Architecture (SOA) techniques and open geospatial standards. The service-oriented model sharing service is accessible using any tools or systems compliant with open geospatial standards, making it possible to utilize vast scientific resources available from around the world to solve highly sophisticated application problems. The methods also allow model services to be empowered by diverse computational devices and technologies, such as portable devices and GRID computing infrastructures. Based on the generic and abstract operations and data structures required by the Web Processing Service (WPS) standard, we developed an interactive interface for model sharing to help reduce interoperability problems for model use. Geospatial computational models are shared as model services, where the computational processes provided by models can be accessed through tools and systems compliant with WPS. We developed a platform to help modelers publish individual models in a simplified and efficient way. Finally, we illustrate our technique using wetland hydrological models we developed for the prairie pothole region of North America.
Developing a Business Intelligence Process for a Training Module in SharePoint 2010
NASA Technical Reports Server (NTRS)
Schmidtchen, Bryce; Solano, Wanda M.; Albasini, Colby
2015-01-01
Prior to this project, training information for the employees of the National Center for Critical Information Processing and Storage (NCCIPS) was stored in an array of unrelated spreadsheets and SharePoint lists that had to be manually updated. By developing a content management system through a web application platform named SharePoint, this training system is now highly automated and provides a much less labor-intensive method of storing training data and scheduling training courses. This system was developed by using SharePoint Designer and laying out the data structure for the interaction between different lists of data about the employees. The automation of data population inside the lists was accomplished by implementing SharePoint workflows, which essentially lay out the logic for how data is connected and calculated between certain lists. The resulting training system is constructed from a combination of five lists of data, with a single list acting as the user-friendly interface. This interface is populated with the courses required for each employee and includes past and future information about course requirements. The employees of NCCIPS now have the ability to view, log, and schedule their training information and courses with much more ease. This system will remove a significant amount of manual input and serve as a powerful informational resource for the employees of NCCIPS in the future.
Parallelization of KENO-Va Monte Carlo code
NASA Astrophysics Data System (ADS)
Ramón, Javier; Peña, Jorge
1995-07-01
KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
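The reproducibility trick, advancing or spawning per-history random seeds so results do not depend on how histories are distributed across processors, can be illustrated with a short sketch (not KENO-Va's actual generator):

```python
# Illustrative sketch: give each particle history its own reproducible
# random stream, so results are independent of the parallel decomposition.
# Not KENO-Va's actual random number scheme.
import numpy as np

def history_streams(master_seed, n_histories):
    """Spawn one independent, reproducible RNG per particle history."""
    seeds = np.random.SeedSequence(master_seed).spawn(n_histories)
    return [np.random.default_rng(s) for s in seeds]

# The same master seed yields the same per-history numbers regardless of
# which processor tracks which history.
for rng in history_streams(master_seed=42, n_histories=3):
    print(rng.random())
```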
BRISK--research-oriented storage kit for biology-related data.
Tan, Alan; Tripp, Ben; Daley, Denise
2011-09-01
In genetic science, large-scale international research collaborations represent a growing trend. These collaborations have demanding and challenging database, storage, retrieval and communication needs. These studies typically involve demographic and clinical data, in addition to the results from numerous genomic ("omics") studies such as gene expression, eQTL, genome-wide association and methylation studies. These complex data structures present numerous challenges, hence the need for data integration platforms that can handle them. Inefficient methods of data transfer and access control still plague research collaboration. As science becomes more and more collaborative in nature, the need for a system that adequately manages data sharing becomes paramount. The Biology-Related Information Storage Kit (BRISK) is a package of several web-based data management tools that provide a cohesive data integration and management platform. It was specifically designed to provide the architecture necessary to promote collaboration and expedite data sharing between scientists. The software, documentation, Java source code and demo are available at http://genapha.icapture.ubc.ca/brisk/index.jsp. BRISK was developed in Java, and tested on an Apache Tomcat 6 server with a MySQL database. Contact: denise.daley@hli.ubc.ca.
REQUIREMENTS AND GUIDELINES FOR NSLS EXPERIMENTAL BEAM LINE VACUUM SYSTEMS-REVISION B.
DOE Office of Scientific and Technical Information (OSTI.GOV)
FOERSTER,C.
Typical beam lines are comprised of an assembly of vacuum valves and shutters referred to as a "front end", optical elements to monochromatize, focus and split the photon beam, and an experimental area where a target sample is placed into the photon beam and data from the interaction is detected and recorded. Windows are used to separate sections of beam lines that are not compatible with storage ring ultra-high vacuum. Some experimental beam lines share a common vacuum with storage rings. Sections of beam lines are only allowed to vent up to atmospheric pressure using pure nitrogen gas after a vacuum barrier is established to protect ring vacuum. The front end may only be bled up when there is no current in the machine. This is especially true on the VUV storage ring where, for most experiments, windows are not used. For the shorter wavelength, more energetic photons of the x-ray ring, beryllium windows are used at various beam line locations so that the monochromator, mirror box or sample chamber may be used in a helium atmosphere or rough vacuum. The window separates ring vacuum from the environment of the downstream beam line components. The stored beam lifetime in the storage rings and the maintenance of desirable reflection properties of optical surfaces depend upon hydrocarbon-free, ultra-high vacuum systems. Storage ring vacuum systems will operate at pressures of ~1 x 10^-10 Torr without beam and ~1 x 10^-9 Torr with beam. Systems are free of hydrocarbons in the sense that no pumps, valves, etc. containing organics are used. Components are all-metal, chemically cleaned and bakeable. To the extent that beam lines share a common vacuum with the storage ring, the same criteria will hold for beam line components. The design philosophy for NSLS beam lines is to use all-metal, hydrocarbon-free front end components, and it is recommended that experimenters use this approach for common vacuum hardware downstream of front ends. O-ring-sealed valves, if used, are not permitted upstream of the monochromator exit aperture. It will be the responsibility of users to demonstrate that their experiment will not degrade the pressure or quality of the storage ring vacuum. As a matter of operating policy, all beam lines will be monitored for prescribed pressure and the contribution of high mass gases to this pressure each time a beam line has been opened to ring vacuum.
The Impact of Storage on Processing: How Is Information Maintained in Working Memory?
ERIC Educational Resources Information Center
Vergauwe, Evie; Camos, Valérie; Barrouillet, Pierre
2014-01-01
Working memory is typically defined as a system devoted to the simultaneous maintenance and processing of information. However, the interplay between these 2 functions is still a matter of debate in the literature, with views ranging from complete independence to complete dependence. The time-based resource-sharing model assumes that a central…
1988-2000 Long-Range Plan for Technology of the Texas State Board of Education.
ERIC Educational Resources Information Center
Texas State Board of Education, Austin.
This plan plots the course for meeting educational needs in Texas through such technologies as computer-based systems, devices for storage and retrieval of massive amounts of information, telecommunications for audio, video, and information sharing, and other electronic media devised by the year 2000 that can help meet the instructional and…
MARC and the Library Service Center: Automation at Bargain Rates.
ERIC Educational Resources Information Center
Pearson, Karl M.
Despite recent research and development in the field of library automation, libraries have been unable to reap the benefits promised by technology due to the high cost of building and maintaining their own computer-based systems. Time-sharing and disc mass storage devices will bring automation costs, if spread over a number of users, within the…
LADS: Optimizing Data Transfers using Layout-Aware Data Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Atchley, Scott; Vallee, Geoffroy R
While future terabit networks hold the promise of significantly improving big-data motion among geographically distributed data centers, significant challenges must be overcome even on today's 100 gigabit networks to realize end-to-end performance. Multiple bottlenecks exist along the end-to-end path from source to sink. Data storage infrastructure at both the source and sink, and its interplay with the wide-area network, are increasingly the bottleneck to achieving high performance. In this paper, we identify the issues that lead to congestion on the path of an end-to-end data transfer in the terabit network environment, and we present a new bulk data movement framework called LADS for terabit networks. LADS exploits the underlying storage layout at each endpoint to maximize throughput without negatively impacting the performance of shared storage resources for other users. LADS also uses the Common Communication Interface (CCI) in lieu of the sockets interface to use zero-copy, OS-bypass hardware when available. It can further improve data transfer performance under congestion on the end systems by buffering at the source using flash storage. With our evaluations, we show that LADS can avoid congested storage elements within the shared storage resource, improving I/O bandwidth and data transfer rates across the high speed networks.
Flexible operation of thermal plants with integrated energy storage technologies
NASA Astrophysics Data System (ADS)
Koytsoumpa, Efthymia Ioanna; Bergins, Christian; Kakaras, Emmanouil
2017-08-01
The energy system in the EU, today as well as towards 2030 to 2050, requires significant amounts of thermal power plant capacity in combination with the continuously increasing share of Renewable Energy Sources (RES) to assure grid stability, secure electricity supply and provide heat. The operation of the conventional fleet should be harmonised with the fluctuating renewable energy sources and their intermittent electricity production. Flexible thermal plants should be able to reach their lowest minimum load capabilities while keeping the efficiency drop moderate, as well as to increase their ramp-up and ramp-down rates. A novel approach is presented for integrating energy storage as an evolutionary measure to overcome many of the challenges which arise from increasing RES and balancing with thermal power. Energy storage technologies such as Power-to-Fuel, Liquid Air Energy Storage and batteries are investigated in conjunction with flexible power plants.
The Contribution of Reservoirs to Global Land Surface Water Storage Variations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Tian; Nijssen, Bart; Gao, Huilin
Man-made reservoirs play a key role in the terrestrial water system. They alter water fluxes at the land surface and impact surface water storage through water management regulations for diverse purposes such as irrigation, municipal water supply, hydropower generation, and flood control. Although most developed countries have established sophisticated observing systems for many variables in the land surface water cycle, long-term and consistent records of reservoir storage are much more limited and not always shared. Furthermore, most land surface hydrological models do not represent the effects of water management activities. Here, the contribution of reservoirs to seasonal water storage variations is investigated using a large-scale water management model to simulate the effects of reservoir management at basin and continental scales. The model was run from 1948 to 2010 at a spatial resolution of 0.25° latitude-longitude. A total of 166 of the largest reservoirs in the world, with a total capacity of about 3,900 km3 (nearly 60% of the globally integrated reservoir capacity), were simulated. The global reservoir storage time series reflects the massive expansion of global reservoir capacity; over 30,000 reservoirs have been constructed during the past half century, with a mean absolute interannual storage variation of 89 km3. The results indicate that the average reservoir-induced seasonal storage variation is nearly 700 km3, or about 10% of the global reservoir storage. For some river basins, such as the Yellow River, seasonal reservoir storage variations can be as large as 72% of combined snow water equivalent and soil moisture storage.
Research and Development on the Storage Ring Vacuum System for the APS Upgrade Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stillwell, B.; Brajuskovic, B.; Carter, J.
A number of research and development activities are underway at Argonne National Laboratory to build confidence in the designs for the storage ring vacuum system required for the Advanced Photon Source Upgrade project (APS-U) [1]. The predominant technical risks are: excessive residual gas pressures during operation; insufficient beam position monitor stability; excessive beam impedance; excessive heating by induced electrical surface currents; and insufficient operational reliability. Present efforts to mitigate these risks include: building and evaluating mockup assemblies; performing mechanical testing of chamber weld joints; developing computational tools; investigating design alternatives; and performing electrical bench measurements. The status of these activities and some of what has been learned to date will be shared.
MEDLARS and the Library Community
Adams, Scott
1964-01-01
The intention of the National Library of Medicine is to share with other libraries the products and the capabilities developed by the MEDLARS system. MEDLARS will provide bibliographic services of use to other libraries from the central system. The decentralization of the central system to permit libraries with access to computers to establish local machine retrieval systems is also indicated. The implications of such decentralization for the American medical library network and its effect on library evolution are suggested, as are the implications for international development of mechanized storage and retrieval systems. PMID:14119289
Human Milk Handling and Storage Practices Among Peer Milk-Sharing Mothers.
Reyes-Foster, Beatriz M; Carter, Shannon K; Hinojosa, Melanie Sberna
2017-02-01
Peer milk sharing, the noncommercial sharing of human milk from one parent or caretaker directly to another for the purposes of feeding a child, appears to be an increasing infant-feeding practice. Although the U.S. Food and Drug Administration has issued a warning against the practice, little is known about how people who share human milk handle and store milk and whether these practices are consistent with clinical safety protocols. Research aim: This study aimed to learn about the milk-handling practices of expressed human milk by milk-sharing donors and recipient caretakers. In this article, we explore the degree to which donors and recipients adhere to the Academy of Breastfeeding Medicine clinical recommendations for safe handling and storage. Online surveys were collected from 321 parents engaged in peer milk sharing. Univariate descriptive statistics were used to describe the safe handling and storage procedures for milk donors and recipients. A two-sample t-test was used to compare safety items common to each group. Multivariate ordinary least squares regression analysis was used to examine sociodemographic correlates of milk safety practices within the sample group. Findings indicate that respondents engaged in peer milk sharing report predominantly positive safety practices. Multivariate analysis did not reveal any relationship between safety practices and sociodemographic characteristics. The number of safe practices did not differ between donors and recipients. Parents and caretakers who participate in peer human milk sharing report engaging in practices that should reduce risk of bacterial contamination of expressed peer shared milk. More research on this particular population is recommended.
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include the development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations, such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and a reformation of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.
IHE profiles applied to regional PACS.
Fernandez-Bayó, Josep
2011-05-01
PACS has been widely adopted as an image storage solution that perfectly fits the radiology department workflow and that can be easily extended to other hospital departments. Integration with other hospital systems, like the Radiology Information System, the Hospital Information System and the Electronic Patient Record, is largely achieved but remains a challenging aim. PACS also creates the perfect environment for teleradiology and teleworking setups. One step further is the regional PACS concept, where different hospitals or healthcare enterprises share images in an integrated Electronic Patient Record. Among the different solutions available to share images between hospitals, the IHE (Integrating the Healthcare Enterprise) organization presents the Cross-Enterprise Document Sharing (XDS) profile, which allows sharing images from different hospitals even if they have different PACS vendors. Adopting XDS has multiple advantages: images do not need to be duplicated in a central archive to be shared among the different healthcare enterprises; they only need to be indexed and published in a central document registry. In the XDS profile, IHE defines the mechanisms to publish and index the images in the central document registry. It also defines the mechanisms that each hospital will use to retrieve those images regardless of the hospital PACS in which they are stored. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
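A non-normative sketch of the registry/repository split described above; the field names are illustrative and do not follow the actual XDS metadata schema.

```python
# Minimal sketch of the XDS idea: images stay in each hospital's PACS (the
# "repository"); only index entries go to a shared central registry.
registry = []  # central document registry (index only, no pixel data)

def publish(patient_id, study_uid, repository_url):
    registry.append({"patient": patient_id,
                     "study": study_uid,
                     "where": repository_url})   # a pointer, not a copy

def find_studies(patient_id):
    return [e for e in registry if e["patient"] == patient_id]

publish("PAT-001", "1.2.840.99.1", "https://pacs.hospital-a.example/wado")
publish("PAT-001", "1.2.840.99.2", "https://pacs.hospital-b.example/wado")
for entry in find_studies("PAT-001"):
    print(entry["study"], "->", entry["where"])  # retrieve from the owning PACS
```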
Data publication and sharing using the SciDrive service
NASA Astrophysics Data System (ADS)
Mishin, Dmitry; Medvedev, D.; Szalay, A. S.; Plante, R. L.
2014-01-01
Despite the progress in scientific data storage in recent years, the problem of a public data storage and sharing system for relatively small scientific datasets remains. These are the collections forming the “long tail” of the power-law distribution of dataset sizes. The aggregated size of the long-tail data is comparable to the size of all data collections from large archives, and the value of the data is significant. The SciDrive project's main goal is to provide the scientific community with a place to reliably and freely store such data and to make it accessible to the broad scientific community. The primary target audience of the project is the astronomy community, and it will be extended to other fields. We aim to create a simple way of publishing a dataset, which can then be shared with other people. The data owner controls the permissions to modify and access the data and can assign a group of users or open the access to everyone. The data contained in the dataset will be automatically recognized by a background process. Known data formats will be extracted according to the user's settings. Currently, tabular data can be automatically extracted to the user's MyDB table, where the user can make SQL queries to the dataset and merge it with other public CasJobs resources. Other data formats can be processed using a set of plugins that upload the data or metadata to user-defined side services. The current implementation targets some of the data formats commonly used by the astronomy communities, including FITS, ASCII and Excel tables, TIFF images, and YT simulation data archives. Along with generic metadata, format-specific metadata is also processed. For example, basic information about celestial objects is extracted from FITS files and TIFF images, if present. A 100 TB implementation has just been put into production at Johns Hopkins University. The system features a public data storage REST service supporting the VOSpace 2.0 and Dropbox protocols, an HTML5 web portal, a command-line client and a standalone Java client to synchronize a local folder with the remote storage. We use the VAO SSO (Single Sign On) service from NCSA for user authentication, which provides free registration for everyone.
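The background-recognition step can be pictured as a plugin dispatch keyed on file format, as in the hedged sketch below; the plugin table and handler functions are hypothetical, not SciDrive's actual interfaces.

```python
# Hypothetical sketch: a file dropped into storage is routed to a
# format-specific extractor, and unknown formats are stored untouched.
import os

def handle_fits(path):   return {"type": "FITS", "table": "MyDB.fits_meta"}
def handle_table(path):  return {"type": "table", "table": "MyDB.user_table"}

PLUGINS = {".fits": handle_fits, ".csv": handle_table, ".xlsx": handle_table}

def process_upload(path):
    ext = os.path.splitext(path)[1].lower()
    plugin = PLUGINS.get(ext)
    return plugin(path) if plugin else {"type": "opaque", "table": None}

print(process_upload("ngc4321.fits"))   # routed to the FITS extractor
print(process_upload("notes.txt"))      # unknown format: stored as-is
```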
CARGO: effective format-free compressed storage of genomic information
Roguski, Łukasz; Ribeca, Paolo
2016-01-01
The recent super-exponential growth in the amount of sequencing data generated worldwide has put techniques for compressed storage into focus. Most available solutions, however, are strictly tied to specific bioinformatics formats, sometimes inheriting from them suboptimal design choices; this hinders flexible and effective data sharing. Here, we present CARGO (Compressed ARchiving for GenOmics), a high-level framework to automatically generate software systems optimized for the compressed storage of arbitrary types of large genomic data collections. Straightforward applications of our approach to FASTQ and SAM archives require a few lines of code, produce solutions that match and sometimes outperform specialized format-tailored compressors and scale well to multi-TB datasets. All CARGO software components can be freely downloaded for academic and non-commercial use from http://bio-cargo.sourceforge.net. PMID:27131376
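As a generic illustration of chunked compressed archiving (plain zlib here, not CARGO's format-aware machinery), independent per-chunk compression is what preserves random access into a large archive:

```python
# Generic illustration: group sequence records into chunks and compress each
# chunk independently, so a single record can be recovered without
# decompressing the whole archive. Not CARGO's actual format.
import zlib

def compress_chunks(records, chunk_size=4):
    chunks = []
    for i in range(0, len(records), chunk_size):
        blob = "\n".join(records[i:i + chunk_size]).encode()
        chunks.append(zlib.compress(blob, 9))
    return chunks

reads = [f"@read{i}\nACGTACGTACGT\n+\nIIIIIIIIIIII" for i in range(8)]
chunks = compress_chunks(reads)
print(len(chunks), "chunks;", sum(map(len, chunks)), "bytes compressed")
print(zlib.decompress(chunks[0]).decode().splitlines()[0])  # first record back
```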
NASA Astrophysics Data System (ADS)
Arias Muñoz, C.; Brovelli, M. A.; Kilsedar, C. E.; Moreno-Sanchez, R.; Oxoli, D.
2017-09-01
The availability of water-related data and information across different geographical and jurisdictional scales is of critical importance for the conservation and management of water resources in the 21st century. Today information assets are often found fragmented across multiple agencies that use incompatible data formats and procedures for data collection, storage, maintenance, analysis, and distribution. The growing adoption of Web mapping systems in the water domain is reducing the gap between data availability and its practical use and accessibility. Nevertheless, more attention must be given to the design and development of these systems to achieve high levels of interoperability and usability while fulfilling different end user informational needs. This paper first presents a brief overview of technologies used in the water domain, and then presents three examples of Web mapping architectures based on free and open source software (FOSS) and the use of open specifications (OS) that address different users' needs for data sharing, visualization, manipulation, scenario simulations, and map production. The purpose of the paper is to illustrate how the latest developments in OS for geospatial and water-related data collection, storage, and sharing, combined with the use of mature FOSS projects facilitate the creation of sophisticated interoperable Web-based information systems in the water domain.
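As a concrete example of the open specifications these architectures rely on, the snippet below builds a standard OGC WMS 1.3.0 GetMap request; the server URL and layer name are placeholders, not systems from the paper.

```python
# Construct a WMS 1.3.0 GetMap URL; with EPSG:4326 in WMS 1.3.0 the BBOX axis
# order is lat,lon. Server and layer are hypothetical.
from urllib.parse import urlencode

params = {"SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
          "LAYERS": "hydrology:streamflow", "CRS": "EPSG:4326",
          "BBOX": "45.0,8.0,46.0,10.0", "WIDTH": 800, "HEIGHT": 400,
          "FORMAT": "image/png"}
print("https://example.org/geoserver/wms?" + urlencode(params))
```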
Yang, Ping-Heng; Yuan, Dao-Xian; Ren, You-Rong; Xie, Shi-You; He, Qiu-Fang; Hu, Xiao-Feng
2012-09-01
In order to investigate nitrate storage and transport in the karst aquifer system, the hydrochemical dynamics of the Qingmuguan underground river system were monitored online, yielding high-resolution data during storm events and monthly data in normal weather. Principal component analysis was employed to analyze the karst water geochemistry. Results showed that nitrate in Jiangjia spring did not share the same source with soluble iron, manganese and aluminum, and exhibited different geochemical behaviors. Nitrate was derived from the land surface and infiltrated together with soil water, and was mainly stored in the fissures, pores and solution cracks of the karst unsaturated zone, whereas soluble iron, manganese and aluminum were derived from soil erosion and directly recharged the underground river through sinkholes and shafts. Nitrate transport in the karst aquifer system can be ideally divided into three phases: input storage, fast output and re-input storage. Under similar external conditions, the karstification intensity of the vadose zone was the key factor determining the dynamics of nitrate concentrations in the groundwater during storm events. Nitrate stored in the karst vadose zone is easily released, which would impair the aquatic ecosystem and pose serious threats to local health. Thus, strengthening the management of the ecological system, changing land-use patterns and applying fertilizer scientifically could effectively contribute to controlling the mass nutrient input from the surface.
Final Test and Evaluation Results from the Solar Two Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
BRADSHAW, ROBERT W.; DAWSON, DANIEL B.; DE LA ROSA, WILFREDO
Solar Two was a collaborative, cost-shared project between 11 U.S. industry and utility partners and the U.S. Department of Energy to validate molten-salt power tower technology. The Solar Two plant, located east of Barstow, CA, comprised 1926 heliostats, a receiver, a thermal storage system, a steam generation system, and a steam-turbine power block. Molten nitrate salt was used as the heat transfer fluid and storage medium. The steam generator powered a 10-MWe (megawatt electric), conventional Rankine cycle turbine. Solar Two operated from June 1996 to April 1999. The major objective of the test and evaluation phase of the project was to validate the technical characteristics of a molten-salt power tower. This report describes the significant results from the test and evaluation activities, the operating experience of each major system, and overall plant performance. Tests were conducted to measure the power output (MW) of each major system; the efficiencies of the heliostat, receiver, thermal storage, and electric power generation systems; and the daily energy collected, daily thermal-to-electric conversion, and daily parasitic energy consumption. Also included are detailed test and evaluation reports.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob
Electricity generated by Hydropower Plants (HPPs) contributes a considerable portion of bulk electricity generation and delivers it with a low carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include solar and wind energy. The increasing penetration of wind and solar leads to lowered inertia in the grid and hence poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments in power grids. Multiple run-of-river (ROR) HPPs can be integrated with scalable, multi-time-step energy storage so that the total output can be controlled. Although the size of a single energy storage unit is far smaller than that of a typical reservoir, cohesively managing multiple sets of energy storage distributed across different locations is proposed; the combined rating of the storage units and multiple ROR HPPs approximately equals the rating of a large, conventional HPP. The challenges associated with the system architecture and operation are described. Energy storage technologies such as supercapacitors, flywheels, and batteries can function as a dispatchable synthetic reservoir of scalable size. Supercapacitors, flywheels, and batteries are chosen to provide fast, medium, and slow responses, respectively, to support grid requirements. Various dynamic and transient power grid conditions are simulated, and the performance of ROR HPPs integrated with energy storage is presented. The end goal of this research is to investigate the inertial equivalence of a large, conventional HPP with a unique set of multiple ROR HPPs and optimally rated energy storage systems.
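A minimal sketch of the fast/medium/slow split described above, assuming simple moving-average filters stand in for real control design; the demand series and window sizes are made up for illustration.

```python
# Divide a net balancing request by time scale: the fastest fluctuations go to
# supercapacitors, the mid band to flywheels, the slow residual to batteries.
def moving_avg(x, w):
    return [sum(x[max(0, i - w + 1):i + 1]) / len(x[max(0, i - w + 1):i + 1])
            for i in range(len(x))]

demand = [5, 9, 4, 8, 12, 6, 7, 11, 5, 9]          # MW, net balancing request
slow = moving_avg(demand, 5)                        # batteries follow this
mid_plus_slow = moving_avg(demand, 2)
fast = [d - m for d, m in zip(demand, mid_plus_slow)]    # supercapacitors
medium = [m - s for m, s in zip(mid_plus_slow, slow)]    # flywheels

# The three streams sum back to the original demand at every step:
print(all(abs(f + m + s - d) < 1e-9
          for f, m, s, d in zip(fast, medium, slow, demand)))
```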
Nuclear Hybrid Energy Systems Initial Integrated Case Study Development and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Thomas J.; Greenwood, Michael Scott
The US Department of Energy Office of Nuclear Energy established the Nuclear Hybrid Energy System (NHES) project to develop a systematic, rigorous, technically accurate set of methods to model, analyze, and optimize the integration of dispatchable nuclear, fossil, and electric storage with an industrial customer. Ideally, the optimized integration of these systems will provide economic and operational benefits to the overall system compared to independent operation, and it will enhance the stability and responsiveness of the grid as intermittent, nondispatchable, renewable resources provide a greater share of grid power.
Storing and sharing water in sand rivers: a water balance modelling approach
NASA Astrophysics Data System (ADS)
Love, D.; van der Zaag, P.; Uhlenbrook, S.
2009-04-01
Sand rivers and sand dams offer an alternative to conventional surface water reservoirs for storage. The alluvial aquifers that make up the beds of sand rivers can store water with minimal evaporation (the extinction depth is 0.9 m) and natural filtration. The alluvial aquifers of the Mzingwane Catchment are the most extensive of any tributaries in the Limpopo Basin. The lower Mzingwane aquifer, which is currently underutilised, is recharged by managed releases from Zhovhe Dam (capacity 133 Mm3). The volume of water released annually is only twice the size of the evaporation losses from the dam; the latter represent nearly one third of the dam's storage capacity. The lower Mzingwane valley currently supports commercial agro-businesses (1,750 ha of irrigation) and four smallholder irrigation schemes (400 ha, with provision for a further 1,200 ha). In order to support planning for optimising water use and storage over evaporation and to provide for more equitable water allocation, the spreadsheet-based balance model WAFLEX was used. It is a simple and user-friendly model, ideal for use by institutions such as the water management authorities in Zimbabwe, which are challenged by capacity shortfalls and inadequate data. In this study WAFLEX, which is normally used to account for the surface water balance, is adapted to incorporate alluvial aquifers into the water balance, including recharge, baseflow and groundwater flows. Results of the WAFLEX modelling suggest that there is surplus water in the lower Mzingwane system, and thus there should not be any water conflicts. Through more frequent timing of releases from the dam and by maintaining the alluvial aquifers permanently saturated, evaporation losses in the system will decrease and the water resources can be better shared to provide more irrigation water for smallholder farmers in the highly resource-poor communal lands along the river. Sand dams are needed to augment the aquifer storage system and improve access to water. An alternative to the current scenario was modelled in WAFLEX: making fuller use of the alluvial aquifers upstream and downstream of Zhovhe Dam. These alluvial aquifers have an estimated average water storage capacity of 0.37 Mm3 km
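A minimal monthly water-balance loop in the spirit of the WAFLEX accounting described above (storage plus inflow, minus evaporation and abstraction, with spill at capacity); all numbers are illustrative, not Mzingwane data.

```python
# Toy monthly water balance for a single reservoir/aquifer store, Mm3/month.
def simulate(capacity, inflows, evaps, demands, s0=0.0):
    """Return (storage trace, total spill)."""
    s, spill_total, trace = s0, 0.0, []
    for q, e, d in zip(inflows, evaps, demands):
        s = max(s + q - e, 0.0)        # evaporation loss first
        s -= min(d, s)                 # abstraction limited by storage
        if s > capacity:               # excess spills downstream
            spill_total += s - capacity
            s = capacity
        trace.append(s)
    return trace, spill_total

trace, spill = simulate(capacity=133.0,
                        inflows=[60, 40, 5, 0, 0, 30],
                        evaps=[8, 8, 7, 6, 6, 7],
                        demands=[10, 10, 12, 12, 12, 10])
print([round(s, 1) for s in trace], "spill =", round(spill, 1), "Mm3")
```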
Facilitating a culture of responsible and effective sharing of cancer genome data.
Siu, Lillian L; Lawler, Mark; Haussler, David; Knoppers, Bartha Maria; Lewin, Jeremy; Vis, Daniel J; Liao, Rachel G; Andre, Fabrice; Banks, Ian; Barrett, J Carl; Caldas, Carlos; Camargo, Anamaria Aranha; Fitzgerald, Rebecca C; Mao, Mao; Mattison, John E; Pao, William; Sellers, William R; Sullivan, Patrick; Teh, Bin Tean; Ward, Robyn L; ZenKlusen, Jean Claude; Sawyers, Charles L; Voest, Emile E
2016-05-05
Rapid and affordable tumor molecular profiling has led to an explosion of clinical and genomic data poised to enhance the diagnosis, prognostication and treatment of cancer. A critical point has now been reached at which the analysis and storage of annotated clinical and genomic information in unconnected silos will stall the advancement of precision cancer care. Information systems must be harmonized to overcome the multiple technical and logistical barriers to data sharing. Against this backdrop, the Global Alliance for Genomics and Health (GA4GH) was established in 2013 to create a common framework that enables responsible, voluntary and secure sharing of clinical and genomic data. This Perspective from the GA4GH Clinical Working Group Cancer Task Team highlights the data-aggregation challenges faced by the field, suggests potential collaborative solutions and describes how GA4GH can catalyze a harmonized data-sharing culture.
Tagliaferri, Luca; Gobitti, Carlo; Colloca, Giuseppe Ferdinando; Boldrini, Luca; Farina, Eleonora; Furlan, Carlo; Paiar, Fabiola; Vianello, Federica; Basso, Michela; Cerizza, Lorenzo; Monari, Fabio; Simontacchi, Gabriele; Gambacorta, Maria Antonietta; Lenkowicz, Jacopo; Dinapoli, Nicola; Lanzotti, Vito; Mazzarotto, Renzo; Russi, Elvio; Mangoni, Monica
2018-07-01
The big data approach offers a powerful alternative to evidence-based medicine. This approach could guide cancer management thanks to the application of machine learning to large-scale data. The aim of the Thyroid CoBRA (Consortium for Brachytherapy Data Analysis) project is to develop a standardized web data collection system focused on thyroid cancer. The Metabolic Radiotherapy Working Group of the Italian Association of Radiation Oncology (AIRO) endorsed the implementation of a consortium directed at thyroid cancer management and data collection. The agreement conditions, the ontology of the collected data and the related software services were defined by a multicentre ad hoc working group (WG). Six Italian cancer centres initially started the project and defined and signed the Thyroid CoBRA consortium agreement. Three data set tiers were identified: Registry, Procedures and Research. The CoBRA Storage System (C-SS) proved not to be time-consuming and to respect privacy, as data can be extracted directly from each centre's storage platform through a secured connection that ensures reliable encryption of sensitive data. Automatic data archiving can be performed directly from the hospital image storage system or the radiotherapy treatment planning systems. The C-SS architecture will allow "cloud storage" or "distributed learning" approaches for predictive model definition and the further development of clinical decision support tools. The development of the Thyroid CoBRA data storage system (C-SS) through a multicentre consortium approach proved to be a feasible tool for setting up a complex, privacy-preserving data-sharing system oriented to the management of thyroid cancer and, in the near future, of every cancer type. Copyright © 2018 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Virtualization and cloud computing in dentistry.
Chow, Frank; Muftu, Ali; Shorter, Richard
2014-01-01
The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing, and it has expanded into the areas of server and storage virtualization. One commonly used dental storage system is cloud storage: patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.
CAD-CAM database management at Bendix Kansas City
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, D.R.
1985-05-01
The Bendix Kansas City Division of Allied Corporation began integrating mechanical CAD-CAM capabilities into its operations in June 1980. The primary capabilities include a wireframe modeling application, a solid modeling application, and the Bendix Integrated Computer Aided Manufacturing (BICAM) System application, a set of software programs and procedures which provides user-friendly access to graphic applications and data, and user-friendly sharing of data between applications and users. BICAM also provides for enforcement of corporate/enterprise policies. Three access categories, private, local, and global, are realized through the implementation of data-management metaphors: the desk, reading rack, file cabinet, and library are for the storage, retrieval, and sharing of drawings and models. Access is provided through menu selections; searching for designs is done by a paging method or a search-by-attribute-value method. The sharing of designs between all users of Part Data is key. The BICAM System supports 375 unique users per quarter and manages over 7500 drawings and models. The BICAM System demonstrates the need for generalized models, a high-level system framework, prototyping, information-modeling methods, and an understanding of the entire enterprise. Future BICAM System implementations are planned to take advantage of this knowledge.
Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo
2015-09-01
There are various medical image sharing and electronic whiteboard systems available for diagnosis and discussion purposes. However, most of these systems ask clients to install special software tools or web plug-ins to support whiteboard discussion, special medical image formats, and customized decoding algorithms for the transmission of HRIs (high-resolution images). This limits the accessibility of the software running on different devices and operating systems. In this paper, we propose a solution based on pure web pages for lossless sharing and e-whiteboard discussion of medical HRIs, and we have set up a medical HRI sharing and e-whiteboard system with a four-layered design: (1) HRI access layer: we improved a tile-pyramid model, named the unbalanced ratio pyramid structure (URPS), to rapidly share lossless HRIs and to adapt to the reading habits of users; (2) format conversion layer: we designed a format conversion engine (FCE) on the server side to convert and cache, in real time, the DICOM tiles that clients request with window-level parameters, ensuring browser compatibility and keeping server-client responses efficient; (3) business logic layer: we built an XML behavior-relationship storage structure to store and share users' behavior and to support real-time co-browsing and discussion between clients; (4) web-user-interface layer: AJAX technology and the Raphael toolkit were used to combine HTML and JavaScript to build a client RIA (rich Internet application), giving clients desktop-like interaction on any pure web page. This system can be used to quickly browse lossless HRIs and supports smooth discussion and co-browsing on any web browser in a diversified network environment. The proposed methods provide a way to share HRIs safely and may be used in the fields of regional health, telemedicine and remote education at a low cost. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
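For context, the bookkeeping of a conventional power-of-two tile pyramid is sketched below; the paper's URPS variant rebalances these ratios, and this sketch is not the authors' algorithm.

```python
# Standard tile-pyramid layout: halve resolution per level until the whole
# image fits in one tile; each level is cut into fixed-size tiles.
import math

def pyramid_levels(width, height, tile=256):
    levels = []
    w, h = width, height
    while True:
        cols, rows = math.ceil(w / tile), math.ceil(h / tile)
        levels.append((w, h, cols * rows))
        if cols == 1 and rows == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return levels

for w, h, ntiles in pyramid_levels(40000, 30000):   # a typical pathology HRI
    print(f"{w:>6} x {h:<6} -> {ntiles} tiles")
```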
Extreme I/O on HPC for HEP using the Burst Buffer at NERSC
NASA Astrophysics Data System (ADS)
Bhimji, Wahid; Bard, Debbie; Burleigh, Kaylan; Daley, Chris; Farrell, Steve; Fasel, Markus; Friesen, Brian; Gerhardt, Lisa; Liu, Jialin; Nugent, Peter; Paul, Dave; Porter, Jeff; Tsulaia, Vakho
2017-10-01
In recent years there has been increasing use of HPC facilities for HEP experiments. This has initially focussed on less I/O intensive workloads such as generator-level or detector simulation. We now demonstrate the efficient running of I/O-heavy analysis workloads on HPC facilities at NERSC, for the ATLAS and ALICE LHC collaborations as well as astronomical image analysis for DESI and BOSS. To do this we exploit a new 900 TB NVRAM-based storage system recently installed at NERSC, termed a Burst Buffer. This is a novel approach to HPC storage that builds on-demand filesystems on all-SSD hardware that is placed on the high-speed network of the new Cori supercomputer. We describe the hardware and software involved in this system, and give an overview of its capabilities, before focusing in detail on how the ATLAS, ALICE and astronomical workflows were adapted to work on this system. We describe these modifications and the resulting performance results, including comparisons to other filesystems. We demonstrate that we can meet the challenging I/O requirements of HEP experiments and scale to many thousands of cores accessing a single shared storage system.
An annotation system for 3D fluid flow visualization
NASA Technical Reports Server (NTRS)
Loughlin, Maria M.; Hughes, John F.
1995-01-01
Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.
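A minimal sketch of the embedding idea, assuming a hypothetical Annotation class anchored at a 3D position in data space, with a box filter standing in for the database/Magic Lens filters:

```python
# Annotations carry a 3D anchor in the flow field's coordinates, so they can
# be filtered spatially like any other dataset. Class layout is illustrative.
from dataclasses import dataclass

@dataclass
class Annotation:
    position: tuple          # (x, y, z) in data-space coordinates
    author: str
    text: str

notes = [Annotation((0.1, 0.5, 0.9), "mml", "vortex core begins here"),
         Annotation((0.7, 0.2, 0.4), "jfh", "recirculation zone")]

def in_box(a, lo, hi):       # a simple spatial filter over annotations
    return all(l <= p <= h for p, l, h in zip(a.position, lo, hi))

print([a.text for a in notes if in_box(a, (0, 0, 0), (0.5, 1, 1))])
```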
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A; Hicks, Susan Elaine
The purpose of this report is to clarify the challenges associated with storage for secure enclaves. The major focus areas for the report are: review of relevant parallel filesystem technologies to identify assets and gaps; review of filesystem isolation/protection mechanisms, to include native filesystem capabilities and auxiliary/layered techniques; definition of storage architectures that can be used for customizable compute enclaves (i.e., clarification of use-cases that must be supported for shared storage scenarios); and investigation of vendor products related to secure storage. This study provides technical details on the storage and filesystems used for HPC, with particular attention on elements that contribute to creating secure storage. We outline the pieces for a shared storage architecture that balances protection and performance by leveraging the isolation capabilities available in filesystems and virtualization technologies to maintain the integrity of the data.
Key Points: There are a few existing and in-progress protection features in Lustre related to secure storage, which are discussed in Chapter 3.1. These include authentication capabilities like GSSAPI/Kerberos and the in-progress work for GSSAPI/Host-keys. The GPFS filesystem provides native support for encryption, which is not directly available in Lustre. Additionally, GPFS includes authentication/authorization mechanisms for inter-cluster sharing of filesystems (Chapter 3.2). The limitations of key importance for secure storage/filesystems are: (i) restricting sub-tree mounts for parallel filesystems (which is not directly supported in Lustre or GPFS), and (ii) segregation of hosts on the storage network, with practical complications for dynamic additions to the storage network, e.g., LNET. A challenge for VM-based use cases will be to provide efficient I/O forwarding of the parallel filesystem from the host to the guest (VM). There are promising options, like para-virtualized filesystems, to help with this issue; these are particular instances of the more general challenge of efficient host/guest I/O that is the focus of interfaces like virtio. A collection of bridging technologies has been identified in Chapter 4, which can help overcome the limitations and challenges of supporting efficient storage for secure enclaves. The synthesis of native filesystem security mechanisms and bridging technologies led to the isolation-centric storage architecture proposed in Chapter 5, which leverages isolation mechanisms from different layers to facilitate secure storage for an enclave.
Recommendations: The following highlights recommendations from the investigations done thus far.
- The Lustre filesystem offers excellent performance but does not support some security-related features, e.g., encryption, that are included in GPFS. If encryption is of paramount importance, then GPFS may be a more suitable choice.
- There are several possible Lustre-related enhancements that may provide functionality of use for secure enclaves. However, since these features are not currently integrated, the use of Lustre as a secure storage system may require more direct involvement (support). (*The network that connects the storage subsystem and users, e.g., Lustre's LNET.)
- The use of OpenStack with GPFS will be more streamlined than with Lustre, as there are available drivers for GPFS.
- The Manila project offers Filesystem as a Service for OpenStack and is worth further investigation. Manila has some support for GPFS.
- The proposed Lustre enhancement of Dynamic-LNET should be further investigated to provide more dynamic changes to the storage network, which could be used to isolate hosts and their tenants.
- Linux namespaces offer a good solution for creating efficient restrictions to shared HPC filesystems. However, we still need to conduct a thorough round of storage/filesystem benchmarks.
- Vendor products should be more closely reviewed, possibly including evaluation of the performance/protection of select products. (Note: we are investigating the option of evaluating equipment from Seagate/Xyratex.)
Outline: The remainder of this report is structured as follows:
- Section 1 describes the growing importance of secure storage architectures and highlights some challenges for HPC.
- Section 2 provides background information on HPC storage architectures, relevant supporting technologies for secure storage, and details on OpenStack components related to storage. (The background material on HPC storage architectures in this chapter can be skipped if the reader is already familiar with Lustre and GPFS.)
- Section 3 reviews protection mechanisms in two HPC filesystems, with details about available isolation, authentication/authorization, and performance capabilities.
- Section 4 describes technologies that can be used to bridge gaps in HPC storage and filesystems to facilitate...
Zhai, Haibo; Rubin, Edward S
2016-04-05
Advanced cooling systems can be deployed to enhance the resilience of thermoelectric power generation systems. This study developed and applied a new power plant modeling option for a hybrid cooling system at coal- or natural-gas-fired power plants with and without amine-based carbon capture and storage (CCS) systems. The results of the plant-level analyses show that the performance and cost of hybrid cooling systems are affected by a range of environmental, technical, and economic parameters. In general, when hot periods last the entire summer, the wet unit of a hybrid cooling system needs to share about 30% of the total plant cooling load in order to minimize the overall system cost. CCS deployment can lead to a significant increase in the water use of hybrid cooling systems, depending on the level of CO2 capture. Compared to wet cooling systems, widespread applications of hybrid cooling systems can substantially reduce water use in the electric power sector with only a moderate increase in the plant-level cost of electricity generation.
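To make the stated trade-off concrete, here is a toy cost model whose coefficients are invented so that the minimum lands near the ~30% wet share reported above; it is not the paper's engineering model.

```python
# Toy trade-off: wet cooling costs (water + capex) grow with the wet share,
# while the dry-cooling efficiency penalty falls off as the wet share rises.
def total_cost(wet_share):
    wet_cost = 140.0 * wet_share                    # water purchase + wet capex
    dry_penalty = 100.0 * (1.0 - wet_share) ** 2    # hot-day output loss
    return wet_cost + dry_penalty

best_cost, best_share = min((total_cost(s / 100.0), s / 100.0)
                            for s in range(101))
print(f"minimum cost index {best_cost:.1f} at wet share {best_share:.0%}")
```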
Science and Applications Space Platform (SASP) End-to-End Data System Study
NASA Technical Reports Server (NTRS)
Crawford, P. R.; Kasulka, L. H.
1981-01-01
The capability of present technology and the Tracking and Data Relay Satellite System (TDRSS) to accommodate Science and Applications Space Platforms (SASP) payload user's requirements, maximum service to the user through optimization of the SASP Onboard Command and Data Management System, and the ability and availability of new technology to accommodate the evolution of SASP payloads were assessed. Key technology items identified to accommodate payloads on a SASP were onboard storage devices, multiplexers, and onboard data processors. The primary driver is the limited access to TDRSS for single access channels due to sharing with all the low Earth orbit spacecraft plus shuttle. Advantages of onboard data processing include long term storage of processed data until TRDSS is accessible, thus reducing the loss of data, eliminating large data processing tasks at the ground stations, and providing a more timely access to the data.
NASA Astrophysics Data System (ADS)
Wang, Jian
2017-01-01
In order to change the traditional PE teaching mode and realize the interconnection, interworking and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The database design uses Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, with NAS technology for data storage and streaming technology for video service. The analysis of the system design and implementation shows that a dynamic PE teaching information resource sharing platform based on Web Services can realize loosely coupled collaboration as well as dynamic and active integration, and has good integration, openness and encapsulation. The distance PE teaching platform based on Web Services and the design scheme of the PE teaching information resource database can effectively realize the interconnection, interworking and sharing of PE teaching resources and adapt to the demands of informatization in PE teaching.
The HydroShare Collaborative Repository for the Hydrology Community
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.
2017-12-01
HydroShare is an online collaboration system for sharing hydrologic data, analytical tools, and models. It supports the sharing of, and collaboration around, "resources" which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: Share your data and models with colleagues; Manage who has access to the content that you share; Share, access, visualize and manipulate a broad set of hydrologic data types and models; Use the web services application programming interface (API) to program automated and client access; Publish data and models and obtain a citable digital object identifier (DOI); Aggregate your resources into collections; Discover and access data and models published by others; Use web apps to visualize, analyze and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare, highlighting our approach to making this system easy to use and to serving the needs of the hydrology community represented by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). Metadata for uploaded files is harvested automatically or captured using easy-to-use web user interfaces. Users are encouraged to add or create resources in HydroShare early in the data life cycle. To encourage this, we allow users to share and collaborate on HydroShare resources privately among individual users or groups, entering metadata while doing the work. HydroShare also provides enhanced functionality for users through web apps that provide tools and computational capability for actions on resources. HydroShare's architecture broadly comprises: (1) resource storage, (2) a resource exploration website, and (3) web apps for actions on resources. System components are loosely coupled and interact through APIs, which enhances robustness, as components can be upgraded and advanced relatively independently. The full power of this paradigm is the extensibility it supports. Web apps are hosted on separate servers, which may be third-party servers. They are registered in HydroShare using a web app resource that configures the connectivity for them to be discovered and launched directly from the resource types they are associated with.
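The abstract mentions a web-services API; a hedged sketch of programmatic access is shown below. The endpoint path and query parameter follow the public hsapi convention but should be verified against current HydroShare documentation.

```python
# Hedged sketch: list public HydroShare resources matching a search term.
import requests

BASE = "https://www.hydroshare.org/hsapi"

def list_public_resources(search_term, n=5):
    r = requests.get(f"{BASE}/resource/",
                     params={"full_text_search": search_term}, timeout=30)
    r.raise_for_status()
    return r.json().get("results", [])[:n]

for res in list_public_resources("snow"):
    print(res.get("resource_id"), "-", res.get("resource_title"))
```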
Wide-area-distributed storage system for a multimedia database
NASA Astrophysics Data System (ADS)
Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro
1998-12-01
We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device which includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in the nodes are connected to a computer using fiber optic cables and communicate using fiber-channel technology. Any computer at a node can utilize the multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that the devices and fiber optic cables are shared by the computers. In this report, we first describe our proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.
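The single-failure recovery that motivates a wide-area RAID can be shown with plain XOR parity; the sketch below is generic and does not reflect the prototype's actual RAID level or striping.

```python
# XOR parity: each data block lives on a different node; losing any single
# node, the missing block is rebuilt from the survivors plus the parity block.
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

nodes = [b"blockA..", b"blockB..", b"blockC.."]     # data on three remote nodes
parity = xor_blocks(nodes)                          # stored on a fourth node

lost = 1                                            # node B's site fails
survivors = [blk for i, blk in enumerate(nodes) if i != lost] + [parity]
print(xor_blocks(survivors))                        # b'blockB..' reconstructed
```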
e!DAL - a framework to store, share and publish research data
2014-01-01
Background: The life-science community faces a major challenge in handling “big data”, highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the “big data life cycle”. The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. Results: e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed “out-of-the-box” as an on-site repository. Conclusions: e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK’s role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de. PMID:24958009
An Object-Relational Ifc Storage Model Based on Oracle Database
NASA Astrophysics Data System (ADS)
Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan
2016-06-01
As building models become increasingly complicated, collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. To adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, we establish the mapping rules between the data types in the IFC specification and the Oracle database. Secondly, we design the IFC database according to the relationships among IFC entities. Thirdly, we parse the IFC file and extract the IFC data. And lastly, we store the IFC data into the corresponding tables in the IFC database. In experiments, three different building models were selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics proves that IFC data are lossless during data exchange.
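A simplified sketch of the first step (mapping IFC data types to Oracle column types and emitting a table definition); the type map and entity attributes are illustrative, not the paper's full rule set or the IFC schema verbatim.

```python
# Map simplified IFC data types to Oracle column types and emit DDL for one
# entity. Real IFC entities have deep inheritance; this is a flat sketch.
TYPE_MAP = {"IfcLabel": "VARCHAR2(255)", "IfcReal": "NUMBER",
            "IfcBoolean": "CHAR(1)", "IfcIdentifier": "VARCHAR2(64)"}

def ddl_for_entity(name, attrs):
    cols = ",\n  ".join(f"{a} {TYPE_MAP[t]}" for a, t in attrs)
    return f"CREATE TABLE {name} (\n  id NUMBER PRIMARY KEY,\n  {cols}\n);"

print(ddl_for_entity("IfcWall",
                     [("GlobalId", "IfcIdentifier"),
                      ("Name", "IfcLabel"),
                      ("NominalHeight", "IfcReal")]))
```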
Resource Management and Risk Mitigation in Online Storage Grids
ERIC Educational Resources Information Center
Du, Ye
2010-01-01
This dissertation examines the economic value of online storage resources that could be traded and shared as potential commodities and the consequential investments and deployment of such resources. The value proposition of emergent business models such as Akamai and Amazon S3 in online storage grids is capacity provision and content delivery at…
2011-01-01
Background: Renewed interest in plant × environment interactions has risen in the post-genomic era. In this context, high-throughput phenotyping platforms have been developed to create reproducible environmental scenarios in which the phenotypic responses of multiple genotypes can be analysed in a reproducible way. These platforms benefit hugely from the development of suitable databases for storage, sharing and analysis of the large amount of data collected. In the model plant Arabidopsis thaliana, most databases available to the scientific community contain data related to genetic and molecular biology and are characterised by an inadequacy in the description of plant developmental stages and experimental metadata such as environmental conditions. Our goal was to develop a comprehensive information system for sharing the data collected in PHENOPSIS, an automated platform for Arabidopsis thaliana phenotyping, with the scientific community. Description: PHENOPSIS DB is a publicly available (URL: http://bioweb.supagro.inra.fr/phenopsis/) information system developed for storage, browsing and sharing of online data generated by the PHENOPSIS platform, offline data collected by experimenters, and experimental metadata. It provides modules coupled to a Web interface for (i) the visualisation of environmental data of an experiment, (ii) the visualisation and statistical analysis of phenotypic data, and (iii) the analysis of Arabidopsis thaliana plant images. Conclusions: Firstly, data stored in the PHENOPSIS DB are of interest to the Arabidopsis thaliana community, particularly in allowing phenotypic meta-analyses directly linked to environmental conditions, on which publications are still scarce. Secondly, data or image analysis modules can be downloaded from the Web interface for direct usage or as the basis for modifications according to new requirements. Finally, the structure of PHENOPSIS DB provides a useful template for the development of other similar databases related to genotype × environment interactions. PMID:21554668
NASA Astrophysics Data System (ADS)
Karami, Mojtaba; Rangzan, Kazem; Saberi, Azim
2013-10-01
With the emergence of air-borne and space-borne hyperspectral sensors, spectroscopic measurements are gaining more importance in remote sensing. Therefore, the number of available spectral reference data is constantly increasing. This rapid increase is often accompanied by poor data management, which ultimately leads to the isolation of data on disk storage. Spectral data without a precise description of the target, methods, environment, and sampling geometry cannot be used by other researchers. Moreover, existing spectral data (even when accompanied by good documentation) become virtually invisible or unreachable for researchers. Providing documentation and a data-sharing framework for spectral data, in which researchers are able to search for or share spectral data and documentation, would clearly improve the data's lifetime. Relational Database Management Systems (RDBMS) are the main candidates for spectral data management, and their efficiency has been proven by many studies and applications to date. In this study, a new approach to spectral data administration is presented based on the spatial identity of spectral samples. This method benefits from the scalability and performance of an RDBMS for the storage of spectral data, but uses GIS servers to provide users with interactive maps as an interface to the system. The spectral files, photographs and descriptive data are considered as belongings of a geospatial object. A spectral processing unit is responsible for evaluating metadata quality and performing routine spectral processing tasks for newly added data. As a result, using web browser software, users are able to visually examine the availability of data and/or search for data based on the descriptive attributes associated with it. The proposed system is scalable and, besides giving users a good sense of what data are available in the database, it facilitates the participation of spectral reference data in producing geoinformation.
"Job-Sharing" Storage of Hydrogen in Ru/Li₂O Nanocomposites.
Fu, Lijun; Tang, Kun; Oh, Hyunchul; Manickam, Kandavel; Bräuniger, Thomas; Chandran, C Vinod; Menzel, Alexander; Hirscher, Michael; Samuelis, Dominik; Maier, Joachim
2015-06-10
A "job-sharing" hydrogen storage mechanism is proposed and experimentally investigated in Ru/Li2O nanocomposites in which H(+) is accommodated on the Li2O side, while H(-) or e(-) is stored on the side of Ru. Thermal desorption-mass spectroscopy results show that after loading with D2, Ru/Li2O exhibits an extra desorption peak, which is in contrast to Ru nanoparticles or ball-milled Li2O alone, indicating a synergistic hydrogen storage effect due to the presence of both phases. By varying the ratio of the two phases, it is shown that the effect increases monotonically with the area of the heterojunctions, indicating interface related hydrogen storage. X-ray diffraction, Fourier transform infrared spectroscopy, and nuclear magnetic resonance results show that a weak LiO···D bond is formed after loading in Ru/Li2O nanocomposites with D2. The storage-pressure curve seems to favor H(+)/H(-) over H(+)/e(-) mechanism.
Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture
NASA Technical Reports Server (NTRS)
Jones, W. H.
1983-01-01
The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.
Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A
2017-02-11
The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.
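The paper's theoretical wall-clock models are not reproduced here, but the sketch below captures the qualitative argument: with a shared NFS the aggregate transfer serializes on the network pipe, while co-located storage scales with compute; all parameters are illustrative.

```python
# Back-of-envelope comparison: NFS-backed cluster vs co-located storage.
def wallclock_nfs(n_jobs, cores, t_compute_s, data_gb, net_gbps=10):
    transfer = n_jobs * data_gb * 8 / net_gbps      # serialized on one pipe
    compute = (n_jobs / cores) * t_compute_s
    return max(transfer, compute)                    # whichever saturates

def wallclock_colocated(n_jobs, cores, t_compute_s, local_overhead=1.05):
    return (n_jobs / cores) * t_compute_s * local_overhead

for t in (10, 100, 1000):   # "short" jobs are where the NFS pipe saturates
    print(t, round(wallclock_nfs(5000, 200, t, 0.5)),
          round(wallclock_colocated(5000, 200, t)))
```

With these numbers, short jobs are transfer-bound under NFS while long jobs favor the traditional cluster slightly, matching the relevant/non-relevant transition the abstract describes.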
A microprocessor controlled pressure scanning system
NASA Technical Reports Server (NTRS)
Anderson, R. C.
1976-01-01
A microprocessor-based controller and data logger for pressure scanning systems is described. The microcomputer positions and manages data from as many as four 48-port electro-mechanical pressure scanners. The maximum scanning rate is 80 pressure measurements per second (20 ports per second on each of four scanners). The system features on-line calibration, position-directed data storage, and once-per-scan display in engineering units of data from a selected port. The system is designed to be interfaced to a facility computer through a shared memory. System hardware and software are described. Factors affecting measurement error in this type of system are also discussed.
40 CFR 63.1360 - Applicability.
Code of Federal Regulations, 2014 CFR
2014-07-01
... process unit. If the greatest input to and/or output from a shared storage vessel is the same for two or... not have an intervening storage vessel. If two or more PAI process units have the same input to or... process unit that sends the most material to or receives the most material from the storage vessel. If two...
2010-06-01
Demonstration of an area-enclosing guided-atom interferometer for rotation sensing, Phys. Rev. Lett. 99, 173201 (2007). 4. Heralded Single-Magnon Quantum... excitations are quantized spin waves (magnons), such that transitions between its energy levels (magnon number states) correspond to highly directional... polarization storage in the form of a single collective-spin excitation (magnon) that is shared between two spatially overlapped atomic ensembles
2008-08-08
Ms. Cindy E. Moran, Director for Network Services, 8 August 2008: DISN (Defense Information System Network) Forecast to Industry. ... Integrated DISN Services by 2016: A Solid Goal. Network Aware Applications, Common Storage & Retrieval, Shared Long...
Privacy protection in HealthGrid: distributing encryption management over the VO.
Torres, Erik; de Alfonso, Carlos; Blanquer, Ignacio; Hernández, Vicente
2006-01-01
Grid technologies have proven to be very successful in tackling challenging problems in which data access and processing is a bottleneck. Notwithstanding the benefits that Grid technologies could have in health applications, privacy leakages in current DataGrid technologies, due to the sharing of data in VOs and the use of remote resources, compromise their widespread adoption. Privacy control for Grid technology has become a key requirement for the adoption of Grids in the healthcare sector. Encrypted storage of confidential data effectively reduces the risk of disclosure. A self-enforcing scheme for encrypted data storage can be achieved by combining Grid security systems with distributed key management and classical cryptography techniques. Virtual Organizations, as the main unit of user management in Grid, can provide a way to organize key sharing, access control lists and secure encryption management. This paper provides programming models and discusses the value, costs and behavior of such a system implemented on top of one of the latest Grid middlewares. This work is partially funded by the Spanish Ministry of Science and Technology under the project Investigación y Desarrollo de Servicios GRID: Aplicación a Modelos Cliente-Servidor, Colaborativos y de Alta Productividad, with reference TIC2003-01318.
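A minimal sketch of the general idea (encrypt before storing on the Grid, and distribute the key over the VO so no single party can decrypt alone): it assumes the third-party Python cryptography package, and the n-of-n XOR split is a deliberate simplification standing in for the paper's distributed key management.

```python
import os
from cryptography.fernet import Fernet

def split_key(key: bytes, n: int) -> list[bytes]:
    """n-of-n XOR secret sharing: every share is needed to rebuild the key."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def join_key(shares: list[bytes]) -> bytes:
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = Fernet.generate_key()                      # symmetric key for the record
token = Fernet(key).encrypt(b"patient record")   # ciphertext stored on the Grid
shares = split_key(key, 3)                       # one share per VO key server
assert Fernet(join_key(shares)).decrypt(token) == b"patient record"
```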
Tolopko, Andrew N; Sullivan, John P; Erickson, Sean D; Wrobel, David; Chiang, Su L; Rudnicki, Katrina; Rudnicki, Stewart; Nale, Jennifer; Selfors, Laura M; Greenhouse, Dara; Muhlich, Jeremy L; Shamu, Caroline E
2010-05-18
Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.
The future application of GML database in GIS
NASA Astrophysics Data System (ADS)
Deng, Yuejin; Cheng, Yushu; Jing, Lianwen
2006-10-01
In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, applications face the problem of how to organize and access large volumes of GML data effectively, and research on GML databases focuses on this problem. The effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) mainly deals with the storage and management of GML data. Two types of XML database are commonly distinguished: native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, and management systems, and then move on to GML databases. Finally, the future prospects of GML databases in GIS applications are presented.
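Since GML is an application of XML, ordinary XML tooling is the natural starting point for GML storage. The sketch below parses a small feature and flattens it into a row that either a native XML or an XML-enabled (relational) database could hold; the tiny document is made up for illustration and is not a complete GML 3.1.1 instance.

```python
import xml.etree.ElementTree as ET

GML = "{http://www.opengis.net/gml}"
doc = """<City xmlns:gml="http://www.opengis.net/gml">
  <name>Beijing</name>
  <gml:Point srsName="EPSG:4326"><gml:pos>39.9 116.4</gml:pos></gml:Point>
</City>"""

root = ET.fromstring(doc)
lat, lon = map(float, root.find(f"{GML}Point/{GML}pos").text.split())
row = {"name": root.findtext("name"), "lat": lat, "lon": lon,
       "srs": root.find(f"{GML}Point").get("srsName")}
print(row)  # {'name': 'Beijing', 'lat': 39.9, 'lon': 116.4, 'srs': 'EPSG:4326'}
```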
An Automated Medical Information Management System (OpScan-MIMS) in a Clinical Setting
Margolis, S.; Baker, T.G.; Ritchey, M.G.; Alterescu, S.; Friedman, C.
1981-01-01
This paper describes an automated medical information management system within a clinic setting. The system includes an optically scanned data entry system (OpScan), a generalized, interactive retrieval and storage software system (the Medical Information Management System, MIMS) and the use of time-sharing. The system has the advantages of minimal hardware purchase and maintenance, rapid data entry and retrieval, user-created programs, and no need for user knowledge of computer languages or technology, and it is cost-effective. The OpScan-MIMS system has been operational for approximately 16 months in a sexually transmitted disease clinic. The system's applications to medical audit, quality assurance, clinic management and clinical training are demonstrated.
Informatics methods to enable sharing of quantitative imaging research data.
Levy, Mia A; Freymann, John B; Kirby, Justin S; Fedorov, Andriy; Fennessy, Fiona M; Eschrich, Steven A; Berglund, Anders E; Fenstermacher, David A; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L; Brown, Bartley J; Braun, Terry A; Dekker, Andre; Roelofs, Erik; Mountz, James M; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L
2012-11-01
The National Cancer Institute Quantitative Imaging Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable sharing of data and to promote reuse of quantitative imaging data in the community. We performed a survey of the tools currently in use by the QIN member sites for representation and storage of their QIN research data, including images, image metadata and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image metadata across the network. As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers.
Telemedicine optoelectronic biomedical data processing system
NASA Astrophysics Data System (ADS)
Prosolovska, Vita V.
2010-08-01
The telemedicine optoelectronic biomedical data processing system is created to share medical information for the control of health rights and for a timely, rapid response to crises. The system includes the following main blocks: a bioprocessor, an analog-to-digital converter for biomedical images, an optoelectronic module for image processing, an optoelectronic module for parallel recording and storage of biomedical images, and a matrix screen for displaying biomedical images. The temporal characteristics of the blocks are determined by the triggering of the optoelectronic couples in the analog-to-digital converters and by the imaging time of the matrix screen. The element base for the hardware implementation of the developed matrix screen is integrated optoelectronic couples produced by selective epitaxy.
Integration of end-user Cloud storage for CMS analysis
Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...
2017-05-19
End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.
The Petascale Data Storage Institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, Garth; Long, Darrell; Honeyman, Peter
2013-07-01
Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz.
Sharing lattice QCD data over a widely distributed file system
NASA Astrophysics Data System (ADS)
Amagasa, T.; Aoki, S.; Aoki, Y.; Aoyama, T.; Doi, T.; Fukumura, K.; Ishii, N.; Ishikawa, K.-I.; Jitsumoto, H.; Kamano, H.; Konno, Y.; Matsufuru, H.; Mikami, Y.; Miura, K.; Sato, M.; Takeda, S.; Tatebe, O.; Togawa, H.; Ukawa, A.; Ukita, N.; Watanabe, Y.; Yamazaki, T.; Yoshie, T.
2015-12-01
JLDG is a data grid for the lattice QCD (LQCD) community in Japan. Several large research groups in Japan have been working on lattice QCD simulations using supercomputers distributed over distant sites. The JLDG provides such collaborations with an efficient method of data management and sharing. File servers installed at 9 sites are connected to the NII SINET VPN and are bound into a single file system with Gfarm. The file system looks the same from any site, so that users can run analyses on a supercomputer at one site using data generated and stored in the JLDG at a different site. We present a brief description of the hardware and software of the JLDG, including a recently developed subsystem for cooperating with the HPCI shared storage, and report the performance and statistics of the JLDG. As of April 2015, 15 research groups (61 users) store their daily research data, amounting to 4.7 PB including replicas and 68 million files in total. The number of publications from work that used the JLDG is 98. The large number of publications and the recent rapid increase in disk usage convince us that the JLDG has grown into a useful infrastructure for the LQCD community in Japan.
Simulating cloud environment for HIS backup using secret sharing.
Kuroda, Tomohiro; Kimura, Eizen; Matsumura, Yasushi; Yamashita, Yoshinori; Hiramatsu, Haruhiko; Kume, Naoto
2013-01-01
In the face of a disaster hospitals are expected to be able to continue providing efficient and high-quality care to patients. It is therefore crucial for hospitals to develop business continuity plans (BCPs) that identify their vulnerabilities, and prepare procedures to overcome them. A key aspect of most hospitals' BCPs is creating the backup of the hospital information system (HIS) data at multiple remote sites. However, the need to keep the data confidential dramatically increases the costs of making such backups. Secret sharing is a method to split an original secret message so that individual pieces are meaningless, but putting sufficient number of pieces together reveals the original message. It allows creation of pseudo-redundant arrays of independent disks for privacy-sensitive data over the Internet. We developed a secret sharing environment for StarBED, a large-scale network experiment environment, and evaluated its potential and performance during disaster recovery. Simulation results showed that the entire main HIS database of Kyoto University Hospital could be retrieved within three days even if one of the distributed storage systems crashed during a disaster.
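The idea at the core of the study, (k, n) threshold secret sharing, guarantees that any k of the n storage sites can rebuild the backup, so losing one site in a disaster is survivable while fewer than k sites learn nothing. Below is a minimal Shamir-style sketch over a prime field; the field size and toy secret are illustrative, not the parameters used for the hospital database.

```python
import random

P = 2**127 - 1  # a Mersenne prime, comfortably larger than the toy secret

def make_shares(secret: int, k: int, n: int):
    """Evaluate a random degree-(k-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = int.from_bytes(b"HIS db chunk", "big")
shares = make_shares(secret, k=3, n=5)       # 5 sites, any 3 suffice
assert reconstruct(random.sample(shares, 3)) == secret
```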
Alvarez, Robert; Weilenmann, Martin; Novak, Philippe
2008-07-15
Regenerating exhaust after-treatment systems are increasingly employed in passenger cars in order to comply with regulatory emission standards. These systems include pollutant storage units that occasionally have to be regenerated. The regeneration strategy applied, the resulting emission levels, and their share of the emission level during normal operation are key issues in determining realistic overall emission factors for these cars. To investigate these topics, test series were carried out with four cars featuring different types of such after-treatment systems. The emission performance was monitored in legislative and real-world cycles as well as at constant speeds. The extra emissions determined during regeneration stages are presented together with the methodology applied to calculate their impact on overall emissions. It can be concluded that exhaust after-treatment systems with storage units cause substantial extra emissions during regeneration mode and can appreciably affect the emission factors of cars equipped with such systems, depending on the frequency of regenerations. Considering that the fleet share of vehicles equipped with such after-treatment systems will increase due to the evolution of statutory pollutant emission levels, extra emissions originating from regenerations of pollutant storage units need to be taken into account in fleet emission inventories. Accurately quantifying these extra emissions is achieved either by conducting sufficient repetitions of emission measurements with an individual car or by considerably increasing the size of the sample of cars with comparable after-treatment systems.
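In its simplest form, the impact calculation the abstract refers to amortizes the extra regeneration emissions over the distance driven between regeneration events. The symbols below are illustrative, not the paper's notation:

```latex
% EF_normal : emission factor in normal operation [g/km]
% m_extra   : extra mass emitted per regeneration event [g]
% D_reg     : average distance between regenerations [km]
\[
  EF_{\mathrm{overall}} = EF_{\mathrm{normal}} + \frac{m_{\mathrm{extra}}}{D_{\mathrm{reg}}}
\]
```

For example, a car emitting 5 mg/km of a pollutant in normal operation, with 2 g extra per regeneration every 500 km, averages 5 + 2000/500 = 9 mg/km overall, which is why regeneration frequency dominates the correction.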
NASA Astrophysics Data System (ADS)
Horsburgh, J. S.; Jones, A. S.
2016-12-01
Data and models used within the hydrologic science community are diverse. New research data and model repositories have succeeded in making data and models more accessible, but have been, in most cases, limited to particular types or classes of data or models, and they lack the collaborative and iterative functionality needed to enable shared data collection and modeling workflows. File sharing systems currently used within many scientific communities for private sharing of preliminary and intermediate data and modeling products do not support collaborative data capture, description, visualization, and annotation. More recently, hydrologic datasets and models have been cast as "social objects" that can be published, collaborated around, annotated, discovered, and accessed. Yet the kind of collaborative workflows and data/model reuse that many envision can be difficult to achieve with existing software tools. HydroShare is a new, web-based system for sharing hydrologic data and models with specific functionality aimed at making collaboration easier and achieving new levels of interactive functionality and interoperability. Within HydroShare, we have developed new functionality for creating datasets, describing them with metadata, and sharing them with collaborators. HydroShare is enabled by a generic data model and content packaging scheme that supports describing and sharing diverse hydrologic datasets and models. Interoperability among the diverse types of data and models used by hydrologic scientists is achieved through the use of consistent storage, management, sharing, publication, and annotation within HydroShare. In this presentation, we highlight and demonstrate how the flexibility of HydroShare's data model and packaging scheme, HydroShare's access control and sharing functionality, and its versioning and publication capabilities have enabled the sharing and publication of research datasets for a large, interdisciplinary water research project called iUTAH (innovative Urban Transitions and Aridregion Hydro-sustainability). We discuss the experiences of iUTAH researchers now using HydroShare to collaboratively create, curate, and publish datasets and models in a way that encourages collaboration, promotes reuse, and meets funding agency requirements.
Integration of Variable Speed Pumped Hydro Storage in Automatic Generation Control Systems
NASA Astrophysics Data System (ADS)
Fulgêncio, N.; Moreira, C.; Silva, B.
2017-04-01
Pumped storage power (PSP) plants are expected to be an important player in modern electrical power systems when dealing with increasing shares of new renewable energies (NRE) such as solar or wind power. The massive penetration of NRE and the consequent replacement of conventional synchronous units will significantly affect the controllability of the system. In order to evaluate the capability of variable speed PSP plants to participate in frequency restoration reserve (FRR) provision, taking into account their expected improvement in ramp response capability, a comparison with conventional hydro units is presented. To address this issue, a three-area test network was considered, together with the corresponding automatic generation control (AGC) systems, which are responsible for re-dispatching the generation units to re-establish the power interchange between areas as well as the nominal system frequency. The main issue under analysis in this paper is the benefit of the fast response of variable speed PSP with respect to its capability of providing fast power balancing in a control area.
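A toy AGC loop shows why the ramp limit matters for FRR: with the same integral controller, a fast-ramping unit restores frequency long before a slow one. Everything below is a single-area illustration with per-unit constants chosen for readability, not the paper's three-area network.

```python
def simulate(ramp_pu_s, t_end=120.0, dt=0.1):
    M, D, B, Ki = 10.0, 1.0, 20.0, 0.3   # inertia, damping, bias, AGC gain
    df, p, agc = 0.0, 0.0, 0.0           # freq deviation, unit output, AGC signal
    dP_load = 0.5                        # step load increase [p.u.]
    for _ in range(int(t_end / dt)):
        ace = B * df                     # single area: ACE = B * delta-f
        agc -= Ki * ace * dt             # integral secondary control
        step = max(-ramp_pu_s * dt, min(ramp_pu_s * dt, agc - p))
        p += step                        # unit tracks AGC within its ramp limit
        df += (p - dP_load - D * df) / M * dt
    return df

print("slow hydro unit, residual df:", round(simulate(0.005), 4))
print("fast PSP unit,   residual df:", round(simulate(0.05), 4))
```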
The DAQ system for the AEḡIS experiment
NASA Astrophysics Data System (ADS)
Prelz, F.; Aghion, S.; Amsler, C.; Ariga, T.; Bonomi, G.; Brusa, R. S.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Hinterberger, A.; Holmestad, H.; Kellerbauer, A.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lebrun, P.; Malbrunot, C.; Mariazzi, S.; Matveev, V.; Mazzotta, Z.; Müller, S. R.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pacifico, N.; Pagano, D.; Penasa, L.; Petracek, V.; Prevedelli, M.; Ravelli, L.; Rienaecker, B.; Robert, J.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Simon, M.; Smestad, L.; Sorrentino, F.; Testera, G.; Tietje, I. C.; Widmann, E.; Yzombard, P.; Zimmer, C.; Zmeskal, J.; Zurlo, N.
2017-10-01
In the sociology of small- to mid-sized (O(100) collaborators) experiments, the issue of data collection and storage is sometimes felt to be a residual problem for which well-established solutions are known. Still, the DAQ system can be one of the few forces that drive the integration of otherwise loosely coupled detector systems. As such, it may be hard to build with off-the-shelf components only. LabVIEW and ROOT are the (only) two software systems that were assumed to be familiar enough to all collaborators of the AEḡIS (AD6) experiment at CERN: working from the GXML representation of LabVIEW data types, a semantically equivalent representation as ROOT TTrees was developed for permanent storage and analysis. All data in the experiment are cast into this common format and can be produced and consumed on both systems and transferred over TCP and/or multicast over UDP for immediate sharing over the experiment LAN. We describe the setup that has been able to cater to all run data logging and long-term monitoring needs of the AEḡIS experiment so far.
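The GXML-to-TTree mapping rests on flattening nested LabVIEW clusters into named, typed branches. The sketch below shows that flattening on a stand-in XML layout; it does not reproduce LabVIEW's actual GXML schema or ROOT's I/O API.

```python
import xml.etree.ElementTree as ET

doc = """<Cluster name="shot">
  <DBL name="trap_voltage">12.5</DBL>
  <I32 name="shot_id">42</I32>
  <Cluster name="laser"><DBL name="power_mw">3.1</DBL></Cluster>
</Cluster>"""

def flatten(node, prefix=""):
    """Map nested clusters to 'branch.name -> value' pairs, TTree-style."""
    path = f"{prefix}.{node.get('name')}" if prefix else node.get("name")
    if node.tag == "Cluster":
        out = {}
        for child in node:
            out.update(flatten(child, path))
        return out
    cast = float if node.tag == "DBL" else int
    return {path: cast(node.text)}

print(flatten(ET.fromstring(doc)))
# {'shot.trap_voltage': 12.5, 'shot.shot_id': 42, 'shot.laser.power_mw': 3.1}
```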
NASA Astrophysics Data System (ADS)
Mascetti, L.; Cano, E.; Chan, B.; Espinal, X.; Fiorot, A.; González Labrador, H.; Iven, J.; Lamanna, M.; Lo Presti, G.; Mościcki, JT; Peters, AJ; Ponce, S.; Rousseau, H.; van der Ster, D.
2015-12-01
CERN IT DSS operates the main storage resources for data taking and physics analysis, mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses the two CERN Tier-0 centres (Meyrin and Wigner) with a 50:50 ratio. IT DSS also provides sizeable on-demand resources for IT services, most notably OpenStack and NFS-based clients: this is provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We describe our operational experience and recent changes to these systems, with special emphasis on the present usage for LHC data taking and the convergence to commodity hardware (nodes with 200 TB each, with optional SSDs) shared across all services. We also describe our experience in coupling commodity and home-grown solutions (e.g. CERNBox integration in EOS, Ceph disk pools for AFS, CASTOR and NFS) and finally the future evolution of these systems for WLCG and beyond.
NASA Astrophysics Data System (ADS)
Weber, Juliane; Zachow, Christopher; Witthaut, Dirk
2018-03-01
Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
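For intuition, even a first-order two-state chain reproduces the share of high-wind periods and the mean resting time of low-wind runs; the additive chain in the paper generalizes this so the full empirical autocorrelation function is matched. A minimal baseline sketch:

```python
import random

def simulate(p_stay_high, p_stay_low, n):
    """First-order two-state chain: 1 = high-wind period, 0 = low."""
    s, out = 0, []
    for _ in range(n):
        stay = p_stay_high if s == 1 else p_stay_low
        if random.random() > stay:
            s = 1 - s
        out.append(s)
    return out

series = simulate(0.95, 0.90, 100_000)
runs, run = [], 0                      # lengths of consecutive low-wind runs
for s in series:
    if s == 0:
        run += 1
    elif run:
        runs.append(run)
        run = 0
print(f"high-wind share: {sum(series) / len(series):.2f}")               # ~0.67
print(f"mean low-wind resting time: {sum(runs) / len(runs):.1f} steps")  # ~10
```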
Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E.; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders
2018-01-01
Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from interdisciplinary collaboration. With the publication of Exdir, we invite the scientific community to join the development to create an open specification that will serve as many needs as possible and as a foundation for open access to and exchange of data. PMID:29706879
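Because Exdir is a specification rather than a library, a conforming tree can be produced with nothing beyond directories, YAML, and NumPy files. Below is a minimal sketch using the numpy and pyyaml packages; the file names follow the published layout, but treat the details as illustrative rather than normative.

```python
import pathlib
import numpy as np
import yaml

root = pathlib.Path("session1.exdir")            # the "file" is a directory
grp = root / "lfp"                               # a group is a subdirectory
ds = grp / "channel_0"                           # and so is a dataset
ds.mkdir(parents=True, exist_ok=True)

# each object declares its type in an exdir.yaml
(root / "exdir.yaml").write_text(yaml.dump({"exdir": {"type": "file"}}))
(grp / "exdir.yaml").write_text(yaml.dump({"exdir": {"type": "group"}}))
(ds / "exdir.yaml").write_text(yaml.dump({"exdir": {"type": "dataset"}}))

np.save(ds / "data.npy", np.zeros(1000))         # the dataset payload
(ds / "attributes.yaml").write_text(yaml.dump({"unit": "uV", "rate_hz": 30000}))
```

Every piece is human-readable or a de facto standard, diffs cleanly under version control, and the raw .npy array stays readable from external applications, which is the trade-off the authors argue for against a single binary container.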
A high reliability battery management system
NASA Technical Reports Server (NTRS)
Moody, M. H.
1986-01-01
Over a period of some 5 years, Canadian Astronautics Limited (CAL) has developed a system to autonomously manage, and thus prolong the life of, secondary storage batteries. During the development, the system was aimed at the space vehicle application using nickel-cadmium batteries, but it is expected to be able to enhance the life and performance of any rechargeable electrochemical couple. The system handles the cells of a battery individually and thus avoids the problems of overdrive and underdrive that inevitably occur in a battery of cells managed by an averaging system. This individual handling also allows cells to be totally bypassed in the event of failure, thus avoiding the losses associated with low capacity and partial short circuits, and the catastrophe of an open circuit. The system has the optional capability of managing redundant batteries simultaneously, adding the advantage of on-line reconditioning of one battery while the other maintains the energy storage capability of the overall system. As developed, the system contains a dedicated, redundant microprocessor, but this computing capability could be time-shared or remote, operating through a data link. As adjuncts to the basic management system, CAL has developed high-efficiency polyphase power regulators for charge and discharge power conditioning.
Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.
2016-01-01
The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi
A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. To support such capabilities, significant progress has been made in various components including the deployment of 100 Gbps networks with future 1 Tbps bandwidth, increases in end-host capabilities with multiple cores and buses, capacity improvements in large disk arrays, and deployment of parallel file systems such as Lustre and GPFS. High-performance source-to-sink data flows must be composed of these component systems, which requires significant optimizations of the storage-to-host data and execution paths to match the edge and long-haul network connections. In particular, end systems are currently supported by 10-40 Gbps Network Interface Cards (NIC) and 8-32 Gbps storage Host Channel Adapters (HCAs), which carry the individual flows that collectively must reach network speeds of 100 Gbps and higher. Indeed, such data flows must be synthesized using multicore, multibus hosts connected to high-performance storage systems on one side and to the network on the other side. Current experimental results show that the constituent flows must be optimally composed and preserved from storage systems, across the hosts and the networks with minimal interference. Furthermore, such a capability must be made available transparently to the science users without placing undue demands on them to account for the details of underlying systems and networks. And, this task is expected to become even more complex in the future due to the increasing sophistication of hosts, storage systems, and networks that constitute the high-performance flows. The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to achieve source-to-sink high-performance flows, and (2) develop tools that provide these capabilities through simple interfaces to users and applications. In terms of the former, we propose to develop (1) optimization methods that align and transition multiple storage flows to multiple network flows on multicore, multibus hosts; and (2) edge and long-haul network path realization and maintenance using advanced provisioning methods including OSCARS and OpenFlow. We also propose synthesis methods that combine these individual technologies to compose high-performance flows using a collection of constituent storage-network flows, and realize them across the storage and local network connections as well as long-haul connections. We propose to develop automated user tools that profile the hosts, storage systems, and network connections; compose the source-to-sink complex flows; and set up and maintain the needed network connections. These solutions will be tested using (1) 100 Gbps connection(s) between Oak Ridge National Laboratory (ORNL) and Argonne National Laboratory (ANL) with storage systems supported by Lustre and GPFS file systems with an asymmetric connection to University of Memphis (UM); (2) the ORNL testbed with multicore and multibus hosts, switches with OpenFlow capabilities, and network emulators; and (3) 100 Gbps connections from ESnet and their OpenFlow testbed, and other experimental connections. This proposal brings together the expertise and facilities of the two national laboratories, ORNL and ANL, and UM. It also represents a collaboration between DOE and Department of Defense (DOD) projects at ORNL by sharing technical expertise and personnel costs, and leveraging the existing DOD Extreme Scale Systems Center (ESSC) facilities at ORNL.
EDGE3: A web-based solution for management and analysis of Agilent two color microarray experiments
Vollrath, Aaron L; Smith, Adam A; Craven, Mark; Bradfield, Christopher A
2009-01-01
Background The ability to generate transcriptional data on the scale of entire genomes has been a boon both in the improvement of biological understanding and in the amount of data generated. The latter, the amount of data generated, has implications when it comes to effective storage, analysis and sharing of these data. A number of software tools have been developed to store, analyze, and share microarray data. However, a majority of these tools do not offer all of these features nor do they specifically target the commonly used two color Agilent DNA microarray platform. Thus, the motivating factor for the development of EDGE3 was to incorporate the storage, analysis and sharing of microarray data in a manner that would provide a means for research groups to collaborate on Agilent-based microarray experiments without a large investment in software-related expenditures or extensive training of end-users. Results EDGE3 has been developed with two major functions in mind. The first function is to provide a workflow process for the generation of microarray data by a research laboratory or a microarray facility. The second is to store, analyze, and share microarray data in a manner that doesn't require complicated software. To satisfy the first function, EDGE3 has been developed as a means to establish a well defined experimental workflow and information system for microarray generation. To satisfy the second function, the software application utilized as the user interface of EDGE3 is a web browser. Within the web browser, a user is able to access the entire functionality, including, but not limited to, the ability to perform a number of bioinformatics based analyses, collaborate between research groups through a user-based security model, and access the raw data files and quality control files generated by the software used to extract the signals from an array image. Conclusion Here, we present EDGE3, an open-source, web-based application that allows for the storage, analysis, and controlled sharing of transcription-based microarray data generated on the Agilent DNA platform. In addition, EDGE3 provides a means for managing RNA samples and arrays during the hybridization process. EDGE3 is freely available for download at http://edge.oncology.wisc.edu/. PMID:19732451
The Jade File System. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rao, Herman Chung-Hwa
1991-01-01
File systems have long been the most important and most widely used form of shared permanent storage. File systems in traditional time-sharing systems, such as Unix, support a coherent sharing model for multiple users. Distributed file systems implement this sharing model in local area networks. However, most distributed file systems fail to scale from local area networks to an internet. Four characteristics of scalability were recognized: size, wide area, autonomy, and heterogeneity. Owing to size and wide area, techniques such as broadcasting, central control, and central resources, which are widely adopted by local area network file systems, are not adequate for an internet file system. An internet file system must also support the notion of autonomy because an internet is made up of a collection of independent organizations. Finally, heterogeneity is the nature of an internet file system, not only because of its size, but also because of the autonomy of the organizations in an internet. The Jade File System, which provides a uniform way to name and access files in the internet environment, is presented. Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Because of autonomy, Jade is designed under the restriction that the underlying file systems may not be modified. In order to avoid the complexity of maintaining an internet-wide, global name space, Jade permits each user to define a private name space. In Jade's design, we pay careful attention to avoiding unnecessary network messages between clients and file servers in order to achieve acceptable performance. Jade's name space supports two novel features: (1) it allows multiple file systems to be mounted under one directory; and (2) it permits one logical name space to mount other logical name spaces. A prototype of Jade was implemented to examine and validate its design. The prototype consists of interfaces to the Unix File System, the Sun Network File System, and the File Transfer Protocol.
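Jade's central mechanism, resolving a logical path through a per-user mount table onto an underlying file system, can be sketched in a few lines. The protocol names and mount points below are illustrative, not Jade's actual interfaces:

```python
MOUNTS = {                                  # one user's private name space
    "/home":     ("ufs", "workstation:/export/home"),
    "/projects": ("nfs", "server:/vol/projects"),
    "/archive":  ("ftp", "ftp.example.org:/pub"),
}

def resolve(path):
    """Longest-prefix match, as a mount-table lookup would do."""
    best = max((m for m in MOUNTS if path.startswith(m)), key=len, default=None)
    if best is None:
        raise FileNotFoundError(path)
    proto, target = MOUNTS[best]
    return proto, target + path[len(best):]

print(resolve("/projects/qcd/run1.dat"))
# ('nfs', 'server:/vol/projects/qcd/run1.dat')
```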
Information Systems to Support Surveillance for Malaria Elimination
Ohrt, Colin; Roberts, Kathryn W.; Sturrock, Hugh J. W.; Wegbreit, Jennifer; Lee, Bruce Y.; Gosling, Roly D.
2015-01-01
Robust and responsive surveillance systems are critical for malaria elimination. The ideal information system that supports malaria elimination includes: rapid and complete case reporting, incorporation of related data, such as census or health survey information, central data storage and management, automated and expert data analysis, and customized outputs and feedback that lead to timely and targeted responses. Spatial information enhances such a system, ensuring cases are tracked and mapped over time. Data sharing and coordination across borders are vital and new technologies can improve data speed, accuracy, and quality. Parts of this ideal information system exist and are in use, but have yet to be linked together coherently. Malaria elimination programs should support the implementation and refinement of information systems to support surveillance and response and ensure political and financial commitment to maintain the systems and the human resources needed to run them. National malaria programs should strive to improve the access and utility of these information systems and establish cross-border data sharing mechanisms through the use of standard indicators for malaria surveillance. Ultimately, investment in the information technologies that support a timely and targeted surveillance and response system is essential for malaria elimination. PMID:26013378
NASA Technical Reports Server (NTRS)
Jones, Terry; Mark, Richard; Martin, Jeanne; May, John; Pierce, Elsie; Stanberry, Linda
1996-01-01
This paper describes an implementation of the proposed MPI-IO (Message Passing Interface - Input/Output) standard for parallel I/O. Our system uses third-party transfer to move data over an external network between the processors where it is used and the I/O devices where it resides. Data travels directly from source to destination, without the need for shuffling it among processors or funneling it through a central node. Our distributed server model lets multiple compute nodes share the burden of coordinating data transfers. The system is built on the High Performance Storage System (HPSS), and a prototype version runs on a Meiko CS-2 parallel computer.
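For contrast, the user-facing side of MPI-IO looks like the sketch below, written with the mpi4py bindings: each rank writes its slice of one shared file at a disjoint offset with a collective call. This is a generic illustration of the standard interface, not the HPSS-backed distributed-server implementation described here.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

chunk = np.full(1024, rank, dtype=np.int32)       # this rank's data
fh = MPI.File.Open(comm, "shared.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(rank * chunk.nbytes, chunk)       # collective, disjoint regions
fh.Close()
```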
Globus Identity, Access, and Data Management: Platform Services for Collaborative Science
NASA Astrophysics Data System (ADS)
Ananthakrishnan, R.; Foster, I.; Wagner, R.
2016-12-01
Globus is software-as-a-service for research data management, developed at, and operated by, the University of Chicago. Globus, accessible at www.globus.org, provides high speed, secure file transfer; file sharing directly from existing storage systems; and data publication to institutional repositories. 40,000 registered users have used Globus to transfer tens of billions of files totaling hundreds of petabytes between more than 10,000 storage systems within campuses and national laboratories in the US and internationally. Web, command line, and REST interfaces support both interactive use and integration into applications and infrastructures. An important component of the Globus system is its foundational identity and access management (IAM) platform service, Globus Auth. Both Globus research data management and other applications use Globus Auth for brokering authentication and authorization interactions between end-users, identity providers, resource servers (services), and a range of clients, including web, mobile, and desktop applications, and other services. Compliant with important standards such as OAuth, OpenID, and SAML, Globus Auth provides mechanisms required for an extensible, integrated ecosystem of services and clients for the research and education community. It underpins projects such as the US National Science Foundation's XSEDE system, NCAR's Research Data Archive, and the DOE Systems Biology Knowledge Base. Current work is extending Globus services to be compliant with FEDRAMP standards for security assessment, authorization, and monitoring for cloud services. We will present Globus IAM solutions and give examples of Globus use in various projects for federated access to resources. We will also describe how Globus Auth and Globus research data management capabilities enable rapid development and low-cost operations of secure data sharing platforms that leverage Globus services and integrate them with local policy and security.
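A transfer submission through the Globus Python SDK looks roughly like this sketch; the access token and endpoint UUIDs are placeholders, and the snippet assumes a Globus Auth login flow has already produced a transfer-scoped token.

```python
import globus_sdk

TOKEN = "..."                                   # transfer-scoped access token
SRC = "11111111-2222-3333-4444-555555555555"    # placeholder endpoint UUIDs
DST = "66666666-7777-8888-9999-aaaaaaaaaaaa"

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN))
tdata = globus_sdk.TransferData(tc, SRC, DST, label="demo transfer")
tdata.add_item("/data/run42/", "/ingest/run42/", recursive=True)
task = tc.submit_transfer(tdata)
print("task id:", task["task_id"])
```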
MPD: a pathogen genome and metagenome database
Zhang, Tingting; Miao, Jiaojiao; Han, Na; Qiang, Yujun; Zhang, Wen
2018-01-01
Advances in high-throughput sequencing have led to unprecedented growth in the amount of available genome sequencing data, especially for bacterial genomes, and this growth has been accompanied by challenges in the storage and management of such huge datasets. To facilitate bacterial research and related studies, we have developed the Mypathogen database (MPD), which allows users to search, download, store and share bacterial genomics data. The MPD is the first pathogen database covering both microbial genomes and metagenomes; it currently holds pathogenic microbial genomes (6604 genera, 11 071 species, 41 906 strains) and metagenomic data from host, air, water and other sources (28 816 samples). The MPD also functions as a management system for statistical and storage data that can be used by different organizations, thereby facilitating data sharing among different organizations and research groups. A user-friendly local client tool is provided to maintain the steady transmission of big sequencing data. The MPD is a useful tool for analysis and management in genomic research, especially for clinical Centers for Disease Control and epidemiological studies, and is expected to contribute to advancing knowledge on pathogenic bacterial genomes and metagenomes. Database URL: http://data.mypathogen.org PMID:29917040
Efficiently sphere-decodable physical layer transmission schemes for wireless storage networks
NASA Astrophysics Data System (ADS)
Lu, Hsiao-Feng Francis; Barreal, Amaro; Karpuk, David; Hollanti, Camilla
2016-12-01
Three transmission schemes over a new type of multiple-access channel (MAC) model with inter-source communication links are proposed and investigated in this paper. This new channel model is well motivated by, e.g., wireless distributed storage networks, where communication to repair a lost node takes place from helper nodes to a repairing node over a wireless channel. Since in many wireless networks nodes can come and go in an arbitrary manner, there must be an inherent capability of inter-node communication between every pair of nodes. Assuming that communication is possible between every pair of helper nodes, the newly proposed schemes are based on various smart time-sharing and relaying strategies. In other words, certain helper nodes will be regarded as relays, thereby converting the conventional uncooperative multiple-access channel into a multiple-access relay channel (MARC). The diversity-multiplexing gain tradeoff (DMT) of the system, together with efficient sphere-decodability and low structural complexity in terms of the number of antennas required at each end, is used as the main design objective. While the optimal DMT for the new channel model remains fully open, it is shown that the proposed schemes outperform the DMT of the simple time-sharing protocol and, in some cases, even the optimal uncooperative MAC DMT. While using a wireless distributed storage network as a motivating example throughout the paper, the MAC transmission techniques proposed here are completely general and as such applicable to any MAC communication with inter-source communication links.
Two Years Experience With A Broadband Cable Network In An 1100-Bed Hospital
NASA Astrophysics Data System (ADS)
Cahill, Patrick T.; McCarthy, Robert H.; James, R.; Knowles, R.
1985-09-01
Early in 1983, a three-cable broadband network was installed in The New York Hospital-Cornell Medical Center using well-established cable-TV technology. This network was configured in a vertical tree topology. Currently, it extends over thirteen floors vertically and over two city blocks horizontally. It has now survived several major renovations on the various floors of the hospital. This survivability is a result of the siting of the main tree and of the isolation gained for the branches through the strategic placement of amplifiers. This communications system was designed in a modular fashion for later expansion, so that seven types of functions could be supported on the network without the addition of a new functional level disrupting the functions already existing on the system. Thus far, two functions (real-time image consultation and computer sharing) have been implemented, and two other functions (analog image storage and data base management) are in the prototype stage. Perhaps the most significant feature of our experience thus far has been the ease and utility of analog transmission and storage of images. This experience has led us to postpone and even de-emphasize digital transmission and storage in our future plans.
NASA Astrophysics Data System (ADS)
Ram Prabhakar, J.; Ragavan, K.
2013-07-01
This article proposes a new power-management-based current control strategy for an integrated wind-solar-hydro system equipped with a battery storage mechanism. In this control technique, the load current is estimated indirectly through an energy balance model, DC-link voltage control and droop control. The system features a simpler energy management strategy and requires only a few power electronic converters, thereby minimizing the cost of the system. The generation-demand (G-D) management diagram is formulated based on the stochastic weather conditions and demand, which helps moderate the gap between the two. The features of the management strategy deploying the energy balance model include (1) regulating the DC-link voltage within specified tolerances, (2) isolated operation without relying on an external electric power transmission network, (3) indirect current control of the hydro-turbine-driven induction generator and (4) seamless transition between grid-connected and off-grid operation modes. Furthermore, structuring the hybrid system with an appropriate selection of control variables enables power sharing among the energy conversion systems and the battery storage mechanism. By addressing these intricacies, it is viable to regulate the frequency and voltage of the remote network at the load end. The performance of the proposed composite scheme is demonstrated through time-domain simulation in the MATLAB/Simulink environment.
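The abstract does not spell out its droop law; a conventional frequency/voltage droop characteristic, of the kind such schemes typically build on, takes the form

```latex
f = f_0 - k_p \,(P - P_0), \qquad V = V_0 - k_q \,(Q - Q_0),
```

where $f_0$ and $V_0$ are the nominal frequency and voltage, $P_0$ and $Q_0$ the active and reactive power references, and $k_p$, $k_q$ the droop gains.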
Fuel-Cell Hybrid Mine Loader (LHD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
James L Dippo; Tim Erikson; Kris Hess
2009-07-10
The fuel cell hybrid mine loader project, sponsored by a government-industry consortium, was implemented to determine the viability of proton exchange membrane (PEM) fuel cells in underground mining applications. The Department of Energy (DOE) sponsored this project with cost-share support from industry. The project had three main goals: (1) to develop a mine loader powered by a fuel cell, (2) to develop associated metal-hydride storage and refueling systems, and (3) to demonstrate the fuel cell hybrid loader in an underground mine in Nevada. The investigation of a zero-emissions fuel cell power plant, the safe storage of hydrogen, worker health advantages (over the negative health effects associated with exposure to diesel emissions), and lower operating costs are all key objectives for this project.
NASA Astrophysics Data System (ADS)
Alhamwi, Alaa; Kleinhans, David; Weitemeyer, Stefan; Vogt, Thomas
2014-12-01
Renewable energy sources are gaining importance in the Middle East and North Africa (MENA) region. The purpose of this study is to quantify the optimal mix of renewable power generation in the MENA region, taking Morocco as a case study. Based on hourly meteorological data and load data, a 100% solar-plus-wind-only scenario for Morocco is investigated. For the optimal mix analyses, a mismatch energy modelling approach is adopted with the objective of minimising the required storage capacities. For a hypothetical Moroccan energy supply system that is entirely based on renewable energy sources, our results show that the minimum storage capacity is achieved at a share of 63% solar and 37% wind power generation.
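A minimal sketch of the mismatch-based storage sizing such studies rely on, assuming hourly generation and load series and idealized lossless storage; the function names and the sweep over solar shares are illustrative, not the paper's code.

```python
import numpy as np

def required_storage(gen, load):
    """Smallest storage capacity that balances the system: the
    peak-to-trough range of the cumulative generation-load mismatch."""
    running = np.cumsum(gen - load)      # energy surplus/deficit over time
    return running.max() - running.min()

def best_solar_share(solar, wind, load, shares=np.linspace(0, 1, 101)):
    # scale each source so its mean generation matches mean load
    solar = solar * load.mean() / solar.mean()
    wind = wind * load.mean() / wind.mean()
    caps = [required_storage(a * solar + (1 - a) * wind, load) for a in shares]
    i = int(np.argmin(caps))
    return shares[i], caps[i]            # optimal solar share, minimum capacity
```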
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2014 CFR
2014-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2012 CFR
2012-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2010 CFR
2010-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2013 CFR
2013-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
40 CFR 60.482-1a - Standards: General.
Code of Federal Regulations, 2011 CFR
2011-07-01
... time during the specified monitoring period (e.g., month, quarter, year), provided the monitoring is... Monthly Quarterly Semiannually. (2) Pumps and valves that are shared among two or more batch process units... be separated by at least 120 calendar days. (g) If the storage vessel is shared with multiple process...
The Shared Bibliographic Input Network (SBIN): A Summary of the Experiment.
ERIC Educational Resources Information Center
Cotter, Gladys A.
As part of its mission to provide centralized services for the acquisition, storage, retrieval, and dissemination of scientific and technical information (STI) to support Department of Defense (DoD) research, development, and engineering studies programs, the Defense Technical Information Center (DTIC) sponsors the Shared Bibliographic Input…
HydroShare: A Platform for Collaborative Data and Model Sharing in Hydrology
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.
2017-12-01
HydroShare is an online collaboration system for sharing hydrologic data, analytical tools, and models. It supports the sharing of, and collaboration around, "resources", which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: share your data and models with colleagues; manage who has access to the content that you share; share, access, visualize and manipulate a broad set of hydrologic data types and models; use the web services application programming interface (API) for automated and client access; publish data and models and obtain a citable digital object identifier (DOI); aggregate your resources into collections; discover and access data and models published by others; and use web apps to visualize, analyze and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare, highlighting its use as a virtual environment supporting education and research. HydroShare has components that support: (1) resource storage, (2) resource exploration, and (3) web apps for actions on resources. The HydroShare data discovery, sharing and publishing functions, as well as HydroShare web apps, provide the capability to analyze data and execute models completely in the cloud (on servers remote from the user), overcoming desktop platform limitations. The HydroShare GIS app provides a basic capability to visualize spatial data. The HydroShare JupyterHub Notebook app provides flexible and documentable execution of Python code snippets for analysis and modeling, in a way that lets results be shared among HydroShare users and groups to support research collaboration and education. We will discuss how these developments can be used to support different types of educational efforts in hydrology, where being completely web-based is of value in an educational setting as students can all have access to the same functionality regardless of their computers.
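As an illustration of the web services API mentioned above, the snippet below queries HydroShare's public REST endpoint with plain requests. The /hsapi/resource/ path and the JSON field names are assumptions about the public API's shape; check the live API documentation before relying on them.

```python
import requests

# discover public resources by subject keyword (endpoint path assumed)
resp = requests.get("https://www.hydroshare.org/hsapi/resource/",
                    params={"subject": "streamflow"}, timeout=30)
resp.raise_for_status()
for res in resp.json().get("results", []):
    print(res["resource_id"], "-", res["resource_title"])
```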
Space-time dependence between energy sources and climate related energy production
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Borga, Marco; Creutin, Jean-Dominique; Ramos, Maria-Helena; Tøfte, Lena; Warland, Geir
2014-05-01
The European Renewable Energy Directive adopted in 2009 focuses on achieving a 20% share of renewable energy in the EU overall energy mix by 2020. A major part of renewable energy production is related to climate, called "climate related energy" (CRE) production. CRE production systems (wind, solar, and hydropower) are characterized by a large degree of intermittency and variability on both short and long time scales due to the natural variability of climate variables. The main strategies to handle the variability of CRE production include energy storage, transport, diversity and information (smart grids). The first three strategies aim to smooth out the intermittency and variability of CRE production in time and space, whereas the last aims to provide a more optimal interaction between energy production and demand, i.e. to smooth out the residual load (the difference between demand and production). In order to increase the CRE share in the electricity system, it is essential to understand the space-time co-variability between the weather variables and CRE production under both current and future climates. This study presents a review of the literature that seeks to tackle these problems. It reveals that the majority of studies deal with either a single CRE source or with the combination of two CREs, mostly wind and solar. This may be due to the fact that the countries most advanced in terms of wind equipment also have very little hydropower potential (Denmark, Ireland or the UK, for instance). Hydropower is characterized by both a large storage capacity and flexibility in electricity production, and therefore has a large potential for both balancing and storing energy from wind and solar power. Several studies look at how to better connect regions with a large share of hydropower (e.g., Scandinavia and the Alps) to regions with high shares of wind and solar power (e.g., the green battery North Sea net). Considering time scales, various studies consider wind and solar power production and their co-fluctuation at small time scales. The multi-scale nature of the variability is less studied; in particular, potential adverse or favorable co-fluctuations at intermediate time scales, involving water scarcity or abundance, are less present in the literature. Our review points out that it could be especially interesting to promote research on how the pronounced large-scale fluctuations in inflow to hydropower (intra-annual run-off) and smaller-scale fluctuations in wind and solar power interact in an energy system. There is a need to better represent the profound difference between wind, solar and hydro energy sources. On the one hand, they are all directly linked to the 2-D horizontal dynamics of meteorology. On the other hand, the branching structure of hydrological systems transforms this variability and governs the complex combination of natural inflows and reservoir storage. Finally, we note that CRE production is, in addition to weather, also influenced by the energy system and market, i.e., the energy transport and demand across scales as well as changes in market regulation. The CRE production system thus lies at the nexus between climate, energy systems and market regulations. The work presented is part of the FP7 project COMPLEX (Knowledge based climate mitigation systems for a low carbon economy; http://www.complex.ac.uk)
Automatic Identification of Application I/O Signatures from Noisy Server-Side Traces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yang; Gunasekaran, Raghul; Ma, Xiaosong
2014-01-01
Competing workloads on a shared storage system cause I/O resource contention and application performance vagaries. This problem is already evident in today's HPC storage systems and is likely to become acute at exascale. We need more interaction between application I/O requirements and system software tools to help alleviate the I/O bottleneck, moving towards I/O-aware job scheduling. However, this requires rich techniques to capture application I/O characteristics, which remain evasive in production systems. Traditionally, I/O characteristics have been obtained using client-side tracing tools, with drawbacks such as non-trivial instrumentation/development costs, large trace traffic, and inconsistent adoption. We present a novel approach, I/O Signature Identifier (IOSI), to characterize the I/O behavior of data-intensive applications. IOSI extracts signatures from noisy, zero-overhead server-side I/O throughput logs that are already collected on today's supercomputers, without interfering with the compiling/execution of applications. We evaluated IOSI using the Spider storage system at Oak Ridge National Laboratory, the S3D turbulence application (running on 18,000 Titan nodes), and benchmark-based pseudo-applications. Through our experiments we confirmed that IOSI effectively extracts an application's I/O signature despite significant server-side noise. Compared to client-side tracing tools, IOSI is transparent, interface-agnostic, and incurs no overhead. Compared to alternative data alignment techniques (e.g., dynamic time warping), it offers higher signature accuracy and shorter processing time.
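IOSI itself works on server-side throughput logs; the alternative it is compared against, dynamic time warping, aligns two throughput series with the classic quadratic-time recurrence sketched below (a reference implementation, not the paper's code).

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic-time-warping distance between two 1-D series,
    O(len(x) * len(y)) time and space."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # local mismatch
            D[i, j] = cost + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    return D[n, m]
```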
Cui, Wenbin; Zheng, Peiyong; Yang, Jiahong; Zhao, Rong; Gao, Jiechun; Yu, Guangjun
2015-02-01
Biobanks are important resources and central tools for translational medicine, which brings scientific research outcomes to clinical practice. The key purpose of biobanking in translational medicine and other medical research is to provide biological samples that are integrated with clinical information. In 2008, the Shanghai Municipal Government launched the "Shanghai Tissue Bank" in an effort to promote research in translational medicine. Now a sharing service platform has been constructed to integrate clinical practice and biological information that can be used in diverse medical and pharmaceutical research studies. The platform collects two kinds of data: sample data and clinical data. The sample data are obtained from the hospital biobank management system, and mainly include the donors' age, gender, marital status, sample source, sample type, collection time, deposit time, and storage method. The clinical data are collected from the "Hospital-Link" system (a medical information sharing system that connects 23 tertiary hospitals in Shanghai). The main contents include donors' corresponding medication information, test reports, inspection reports, and hospital information. As of the end of September 2014, the project has a collection of 16,020 donors and 148,282 samples, which were obtained from 12 medical institutions, and automatically acquired donors' corresponding clinical data from the "Hospital-Link" system for 6830 occurrences. This project will contribute to scientific research at medical institutions in Shanghai, and will also support the development of the biopharmaceutical industry. In this article, we will describe the significance, the construction phases, the application prospects, and benefits of the sample repository and information sharing service platform.
NASA Technical Reports Server (NTRS)
Coles, W. A.
1975-01-01
The CAD/CAM interactive computer graphics system was described; uses to which it has been put were shown, and current developments of the system were outlined. The system supports batch, time sharing, and fully interactive graphic processing. Engineers using the system may switch between these methods of data processing and problem solving to make the best use of the available resources. It is concluded that the introduction of on-line computing in the form of teletypes, storage tubes, and fully interactive graphics has resulted in large increases in productivity and reduced timescales in the geometric computing, numerical lofting and part programming areas, together with a greater utilization of the system in the technical departments.
Negotiating designs of multi-purpose reservoir systems in international basins
NASA Astrophysics Data System (ADS)
Geressu, Robel; Harou, Julien
2016-04-01
Given increasing agricultural and energy demands, coordinated management of multi-reservoir systems could help increase production without further stressing available water resources. However, regional or international disputes about water-use rights pose a challenge to the efficient expansion and management of many large reservoir systems. Even when projects are likely to benefit all stakeholders, agreeing on the design, operation, financing, and benefit sharing can be challenging. This is due to the difficulty of considering multiple stakeholder interests in the design of projects and understanding the benefit trade-offs that designs imply. Incommensurate performance metrics, incomplete knowledge of system requirements, lack of objectivity in managing conflict, and difficulty communicating complex issues exacerbate the problem. This work proposes a multi-step hybrid multi-objective optimization and multi-criteria ranking approach for supporting negotiation in water resource systems. The approach uses many-objective optimization to generate alternative efficient designs and reveal the trade-offs between conflicting objectives. This enables informed elicitation of criteria weights for further multi-criteria ranking of alternatives. An ideal design would be ranked as best by all stakeholders. Resource-sharing mechanisms such as power trade and/or cost sharing may help competing stakeholders arrive at designs acceptable to all. Many-objective optimization helps suggest efficient designs (reservoir site, storage size and operating rule) and coordination levels considering the perspectives of multiple stakeholders simultaneously. We apply the proposed approach to a proof-of-concept study of the expansion of the Blue Nile transboundary reservoir system.
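A toy sketch of the two stages described above: keeping only Pareto-efficient designs (all objectives minimized), then ranking the survivors under stakeholder-elicited weights. The weighted-sum ranking is an illustrative simplification, not the authors' method.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated rows in an (n_designs, n_objectives) array."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some design is no worse everywhere, better somewhere
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

def rank_designs(points, weights):
    """Order designs by weighted objective sum; lower score ranks first."""
    scores = np.asarray(points) @ np.asarray(weights)
    return list(np.argsort(scores))
```

Each stakeholder supplies a weight vector; a design that ranks near the top for every stakeholder is a natural candidate around which to negotiate.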
Integrated System Health Management: Foundational Concepts, Approach, and Implementation
NASA Technical Reports Server (NTRS)
Figueroa, Fernando
2009-01-01
Implementation of integrated system health management (ISHM) capability is fundamentally linked to the management of data, information, and knowledge (DIaK) with the purposeful objective of determining the health of a system. It is akin to having a team of experts who are all individually and collectively observing and analyzing a complex system, and communicating effectively with each other in order to arrive at an accurate and reliable assessment of its health. This paper presents concepts, procedures, and a specific approach as a foundation for implementing a credible ISHM capability. The capability stresses integration of DIaK from all elements of a subsystem. The intent is also to make possible implementation of an on-board ISHM capability, in contrast to a remote capability. The information presented is the result of many years of research, development, and maturation of technologies, and of prototype implementations in operational systems (rocket engine test facilities). The paper addresses the following topics: the ISHM model of a system; detection of anomaly indicators; determination and confirmation of anomalies; diagnosis of causes and determination of effects; the consistency-checking cycle; sharing of health information; sharing of display information; storage and retrieval of health information; and an example implementation.
Tagliaferri, Luca; Kovács, György; Autorino, Rosa; Budrukkar, Ashwini; Guinot, Jose Luis; Hildebrand, Guido; Johansson, Bengt; Monge, Rafael Martìnez; Meyer, Jens E; Niehoff, Peter; Rovirosa, Angeles; Takàcsi-Nagy, Zoltàn; Dinapoli, Nicola; Lanzotti, Vito; Damiani, Andrea; Soror, Tamer; Valentini, Vincenzo
2016-08-01
The aim of the COBRA (Consortium for Brachytherapy Data Analysis) project is to create a multicenter group (consortium) and a web-based system for standardized data collection. The GEC-ESTRO (Groupe Européen de Curiethérapie - European Society for Radiotherapy & Oncology) Head and Neck (H&N) Working Group participated in the project and in the implementation of the consortium agreement, the ontology (data set) and the necessary COBRA software services, as well as the peer review of the general anatomic site-specific COBRA protocol. The ontology was defined by a multicenter task group. Eleven centers from 6 countries signed an agreement and the consortium approved the ontology. We identified 3 tiers for the data set: Registry (epidemiology analysis), Procedures (prediction models and DSS), and Research (radiomics). The COBRA Storage System (C-SS) is not time-consuming as, thanks to the use of "brokers", data can be extracted directly from each center's storage systems through a connection with a "structured query language database" (SQL-DB), Microsoft Access(®), FileMaker Pro(®), or Microsoft Excel(®). The system is also structured to perform automatic archiving directly from the treatment planning system or afterloading machine. The architecture is based on the concept of "on-purpose data projection". The C-SS architecture is privacy-protecting because it will never make visible data that could identify an individual patient. The C-SS can also benefit from so-called "distributed learning" approaches, in which data never leave the collecting institution, while learning algorithms and proposed predictive models are commonly shared. Setting up a consortium is a feasible and practical way to create an international, multi-system data sharing system. The COBRA C-SS seems to be well accepted by all involved parties, primarily because it does not influence each center's own data storing technologies, procedures, and habits. Furthermore, the method preserves the privacy of all patients.
A web platform for integrated surface water - groundwater modeling and data management
NASA Astrophysics Data System (ADS)
Fatkhutdinov, Aybulat; Stefan, Catalin; Junghanns, Ralf
2016-04-01
Model-based decision support systems are considered to be reliable and time-efficient tools for resources management in various hydrology-related fields. However, searching for and acquiring the required data, preparing the data sets for simulations, and post-processing, visualizing and publishing the simulation results often require significantly more work and time than performing the modeling itself. The purpose of the developed software is to combine data storage facilities, data processing instruments and modeling tools in a single platform, which can potentially reduce the time required for performing simulations and hence for decision making. The system is developed within the INOWAS (Innovative Web Based Decision Support System for Water Sustainability under a Changing Climate) project. The platform integrates spatially distributed catchment-scale rainfall-runoff, infiltration and groundwater flow models with data storage, processing and visualization tools. The concept is implemented in the form of a web-GIS application and is built from free and open-source components, including the PostgreSQL database management system, the Python programming language for modeling purposes, MapServer for visualizing and publishing the data, OpenLayers for building the user interface, and others. The configuration of the system allows data input, storage, pre- and post-processing and visualization to be performed in a single uninterrupted workflow. In addition, realization of the decision support system in the form of a web service provides an opportunity to easily retrieve and share data sets as well as simulation results over the internet, which gives significant advantages for collaborative work on projects and can significantly increase the usability of the decision support system.
Adapting federated cyberinfrastructure for shared data collection facilities in structural biology
Stokes-Rees, Ian; Levesque, Ian; Murphy, Frank V.; Yang, Wei; Deacon, Ashley; Sliz, Piotr
2012-01-01
Early stage experimental data in structural biology is generally unmaintained and inaccessible to the public. It is increasingly believed that this data, which forms the basis for each macromolecular structure discovered by this field, must be archived and, in due course, published. Furthermore, the widespread use of shared scientific facilities such as synchrotron beamlines complicates the issue of data storage, access and movement, as does the increase of remote users. This work describes a prototype system that adapts existing federated cyberinfrastructure technology and techniques to significantly improve the operational environment for users and administrators of synchrotron data collection facilities used in structural biology. This is achieved through software from the Virtual Data Toolkit and Globus, bringing together federated users and facilities from the Stanford Synchrotron Radiation Lightsource, the Advanced Photon Source, the Open Science Grid, the SBGrid Consortium and Harvard Medical School. The performance and experience with the prototype provide a model for data management at shared scientific facilities. PMID:22514186
Sacramento Municipal Utility District PV and Smart Grid Pilot at Anatolia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawson, Mark; Sanchez, Eddie Paul
2013-12-30
Under DE-FOA-0000085 High Penetration Solar Deployment, the U.S. Department of Energy funded agreements with SMUD and Navigant Consulting, SunPower, GridPoint, the National Renewable Energy Laboratory, and the California Energy Commission for this pilot demonstration project. Funding was $5,962,409.00. Cost share of $500,000 was also provided by the California Energy Commission. The project has strategic implications for SMUD, other utilities and the PV and energy-storage industries in business and resource planning, technology deployment and asset management. These implications include: - At this point, no dominant business models have emerged and the industry is open for new ideas. - Demonstrated two business models for using distributed PV and energy storage, and brainstormed several dozen more, each with different pros and cons for SMUD, its customers and the industry. - Energy storage can be used to manage high penetrations of PV and mitigate potential issues such as reverse power flow, voltage control violations, power quality issues, increased wear and tear on utility equipment, and system-wide power supply issues. - Smart meters are another tool utilities can use to manage high penetrations of PV. The necessary equipment and protocols exist, and the next step is to determine how to integrate the functionality with utility programs and what level of utility control is required. - Time-of-use rates for the residential customers who hosted energy storage systems did not cause a significant change in energy usage patterns. However, the rates we used were not optimized for PV and energy storage. Opportunities exist for utilities to develop new structures.
Consolidated Storage Facilities: Camel's Nose or Shared Burden? - 13112
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, James M.
2013-07-01
The Blue Ribbon Commission (BRC) made a strong argument why the reformulated nuclear waste program should make prompt efforts to develop one or more consolidated storage facilities (CSFs), and recommended the amendment of NWPA Section 145(b) (linking 'monitored retrievable storage' to repository development) as an essential means to that end. However, other than recommending that the siting of CSFs should be 'consent-based' and that spent nuclear fuel (SNF) at stranded sites should be first in line for removal, the Commission made few recommendations regarding how CSF development should proceed. Working with three other key Senators, Jeff Bingaman attempted in the 112th Congress to craft legislation (S. 3469) to put the BRC recommendations into legislative language. The key reason why the Nuclear Waste Administration Act of 2012 did not proceed was the inability of the four senators to agree on whether and how to amend NWPA Section 145(b). A brief review of efforts to site consolidated storage since the Nuclear Waste Policy Amendments Act of 1987 suggests a strong and consistent motivation to shift the burden to someone (anyone) else. This paper argues that modification of NWPA Section 145(b) should be accompanied by guidelines for regional development and operation of CSFs. After reviewing the BRC recommendations regarding CSFs, and the 'camel's nose' prospects if implementation is not accompanied by further guidelines, the paper outlines a proposal for implementation of CSFs on a regional basis, including priorities for removal from reactor sites and subsequently from CSFs to repositories. Rather than allowing repository siting to be prejudiced by the location of a single remote CSF, the regional approach limits transport for off-site acceptance and storage, increases the efficiency of removal operations, provides a useful basis for compensation to states and communities that accept CSFs, and gives states with shared circumstances a shared stake in storage and disposal in an integrated national program. (authors)
Running and testing GRID services with Puppet at GRIF- IRFU
NASA Astrophysics Data System (ADS)
Ferry, S.; Schaer, F.; Meyer, JP
2015-12-01
GRIF is a distributed Tier 2 centre, made of 6 different centres in the Paris region and serving many VOs. The sub-sites are connected by a 10 Gbps private network and share tools for central management. One of the sub-sites, GRIF-IRFU, hosted and maintained at the CEA-Saclay centre, moved a year ago to configuration management using Puppet. Thanks to the versatility of Puppet/Foreman automation, the GRIF-IRFU site maintains the usual grid services, among them: a CREAM-CE with TORQUE+Maui (running a batch system with more than 5000 job slots), a DPM storage of more than 2 PB, and Nagios monitoring essentially based on check_mk, as well as centralized services for the French NGI, such as accounting and the Argus central suspension system. We report on the current functionality of Puppet and present the latest tests and evolutions, including monitoring with Graphite, an HTCondor multicore batch system accessed through an ARC-CE, and a Ceph storage file system.
Data Mining as a Service (DMaaS)
NASA Astrophysics Data System (ADS)
Tejedor, E.; Piparo, D.; Mascetti, L.; Moscicki, J.; Lamanna, M.; Mato, P.
2016-10-01
Data Mining as a Service (DMaaS) is a software and computing infrastructure that allows interactive mining of scientific data in the cloud. It allows users to run advanced data analyses by leveraging the widely adopted Jupyter notebook interface. Furthermore, the system makes it easier to share results and scientific code, access scientific software, produce tutorials and demonstrations as well as preserve the analyses of scientists. This paper describes how a first pilot of the DMaaS service is being deployed at CERN, starting from the notebook interface that has been fully integrated with the ROOT analysis framework, in order to provide all the tools for scientists to run their analyses. Additionally, we characterise the service backend, which combines a set of IT services such as user authentication, virtual computing infrastructure, mass storage, file synchronisation, development portals or batch systems. The added value acquired by the combination of the aforementioned categories of services is discussed, focusing on the opportunities offered by the CERNBox synchronisation service and its massive storage backend, EOS.
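The kind of analysis cell such a notebook service would execute, assuming the PyROOT bindings that ship with ROOT (the histogram and file names are arbitrary):

```python
import ROOT  # PyROOT bindings distributed with the ROOT framework

# fill a histogram with Gaussian random numbers and save the plot
h = ROOT.TH1F("h", "Gaussian demo;x;entries", 50, -4.0, 4.0)
rng = ROOT.TRandom3(0)
for _ in range(10000):
    h.Fill(rng.Gaus(0.0, 1.0))

c = ROOT.TCanvas("c")
h.Draw()
c.SaveAs("demo.png")  # in a notebook, plots can also render inline
```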
SEEK: a systems biology data and model management platform.
Wolstencroft, Katherine; Owen, Stuart; Krebs, Olga; Nguyen, Quyen; Stanford, Natalie J; Golebiewski, Martin; Weidemann, Andreas; Bittkowski, Meik; An, Lihua; Shockley, David; Snoep, Jacky L; Mueller, Wolfgang; Goble, Carole
2015-07-11
Systems biology research typically involves the integration and analysis of heterogeneous data types in order to model and predict biological processes. Researchers therefore require tools and resources to facilitate the sharing and integration of data, and for linking of data to systems biology models. There are a large number of public repositories for storing biological data of a particular type, for example transcriptomics or proteomics, and there are several model repositories. However, this silo-type storage of data and models is not conducive to systems biology investigations. Interdependencies between multiple omics datasets and between datasets and models are essential. Researchers require an environment that will allow the management and sharing of heterogeneous data and models in the context of the experiments which created them. The SEEK is a suite of tools to support the management, sharing and exploration of data and models in systems biology. The SEEK platform provides an access-controlled, web-based environment for scientists to share and exchange data and models for day-to-day collaboration and for public dissemination. A plug-in architecture allows the linking of experiments, their protocols, data, models and results in a configurable system that is available 'off the shelf'. Tools to run model simulations, plot experimental data and assist with data annotation and standardisation combine to produce a collection of resources that support analysis as well as sharing. Underlying semantic web resources additionally extract and serve SEEK metadata in RDF (Resource Description Format). SEEK RDF enables rich semantic queries, both within SEEK and between related resources in the web of Linked Open Data. The SEEK platform has been adopted by many systems biology consortia across Europe. It is a data management environment that has a low barrier of uptake and provides rich resources for collaboration. This paper provides an update on the functions and features of the SEEK software, and describes the use of the SEEK in the SysMO consortium (Systems biology for Micro-organisms), and the VLN (virtual Liver Network), two large systems biology initiatives with different research aims and different scientific communities.
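A hedged sketch of the "rich semantic queries" over SEEK RDF, using the SPARQLWrapper package; the endpoint URL is hypothetical and the JERM ontology class URI is an assumption about how a SEEK installation types its models.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://seek.example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    SELECT ?model ?title WHERE {
        ?model a <http://jermontology.org/ontology/JERMOntology#Model> ;
               <http://purl.org/dc/terms/title> ?title .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["model"]["value"], "-", row["title"]["value"])
```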
Integrating TRENCADIS components in gLite to share DICOM medical images and structured reports.
Blanquer, Ignacio; Hernández, Vicente; Salavert, José; Segrelles, Damià
2010-01-01
The problem of sharing medical information among different centres has been tackled by many projects. Several of them target the specific problem of sharing DICOM images and structured reports (DICOM-SR), such as the TRENCADIS project. In this paper we propose sharing and organizing DICOM data and DICOM-SR metadata by benefiting from the existing deployed gLite-compliant Grid infrastructures such as EGEE or the Spanish NGI. These infrastructures contribute a large amount of storage resources for creating knowledge databases and also provide metadata storage resources (such as AMGA) to semantically organize reports in a tree structure. First, in this paper, we present the extension of the TRENCADIS architecture to use gLite components (LFC, AMGA, SE) for the sake of increasing interoperability. Using the metadata from DICOM-SR, and maintaining its tree structure, enables federating different but compatible diagnostic structures and simplifies the definition of complex queries. This article describes how to do this in AMGA and shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources.
The HydroServer Platform for Sharing Hydrologic Data
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.
2010-12-01
The CUAHSI Hydrologic Information System (HIS) is an internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture comprises servers for publishing and sharing data, a centralized catalog to support cross-server data discovery and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed-point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. The CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open-source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large-scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts in advancing hydrologic research. Details of the CUAHSI HIS can be found at http://his.cuahsi.org, and the HydroServer codeplex site at http://hydroserver.codeplex.com.
KiMoSys: a web-based repository of experimental data for KInetic MOdels of biological SYStems
2014-01-01
Background The kinetic modeling of biological systems is mainly composed of three steps that proceed iteratively: model building, simulation and analysis. In the first step, it is usually required to set initial metabolite concentrations and to assign kinetic rate laws, along with estimating parameter values using kinetic data through optimization when these are not known. Although the rapid development of high-throughput methods has generated much omics data, experimentalists present only a summary of the obtained results for publication; the experimental data files are usually not submitted to any public repository, or are simply not available at all. In order to automate the steps of building kinetic models as much as possible, there is a growing requirement in the systems biology community for easily exchanging data in combination with models, which represents the main motivation of KiMoSys development. Description KiMoSys is a user-friendly platform that includes a public data repository of published experimental data, containing concentration data of metabolites and enzymes and flux data. It was designed to ensure data management, storage and sharing for a wider systems biology community. This community repository offers a web-based interface and upload facility to turn available data into publicly accessible, centralized and structured-format data files. Moreover, it compiles and integrates available kinetic models associated with the data. KiMoSys also integrates some tools to facilitate the kinetic model construction process of large-scale metabolic networks, especially when systems biologists perform computational research. Conclusions KiMoSys is a web-based system that integrates a public data and associated model(s) repository with computational tools, providing the systems biology community with a novel application facilitating data storage and sharing, thus supporting construction of ODE-based kinetic models and collaborative research projects. The web application, implemented using the Ruby on Rails framework, is freely available for web access at http://kimosys.org, along with its full documentation. PMID:25115331
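To make "ODE-based kinetic model" concrete, here is a minimal two-step Michaelis-Menten pathway (S -> I -> P) integrated with SciPy; the parameter values are illustrative stand-ins for the kind of kinetic data a repository record would supply.

```python
import numpy as np
from scipy.integrate import solve_ivp

VMAX1, KM1 = 1.0, 0.5   # illustrative kinetic parameters
VMAX2, KM2 = 0.8, 0.3

def rhs(t, y):
    s, i, p = y
    v1 = VMAX1 * s / (KM1 + s)   # rate of S -> I
    v2 = VMAX2 * i / (KM2 + i)   # rate of I -> P
    return [-v1, v1 - v2, v2]

sol = solve_ivp(rhs, (0.0, 20.0), [2.0, 0.0, 0.0])
print(sol.y[:, -1])   # final concentrations of S, I and P
```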
NASA Astrophysics Data System (ADS)
Arias, Carolina; Brovelli, Maria Antonia; Moreno, Rafael
2015-04-01
We are in an age when water resources are increasingly scarce and the impacts of human activities on them are ubiquitous. These problems don't respect administrative or political boundaries, and they must be addressed by integrating information from multiple sources at multiple spatial and temporal scales. Communication, coordination and data sharing are critical for addressing the water conservation and management issues of the 21st century. However, different countries, provinces, local authorities and agencies dealing with water resources have diverse organizational, socio-cultural, economic, environmental and information technology (IT) contexts that raise challenges to the creation of information systems capable of integrating and distributing information across their areas of responsibility in an efficient and timely manner. Tight and disparate financial resources, and dissimilar IT infrastructures (data, hardware, software and personnel expertise) further complicate the creation of these systems. There is a pressing need for distributed interoperable water information systems that are user friendly, easily accessible and capable of managing and sharing large volumes of spatial and non-spatial data. In a distributed system, data and processes are created and maintained in different locations, each with competitive advantages to carry out specific activities. Open Data (data that can be freely distributed) is available in the water domain, and it should be further promoted across countries and organizations. Compliance with Open Specifications for data collection, storage and distribution is the first step toward the creation of systems that are capable of interacting and exchanging data in a seamless (interoperable) way. The features of Free and Open Source Software (FOSS) offer low access costs that facilitate the scalability and long-term viability of information systems. The World Wide Web (the Web) will be the platform of choice to deploy and access these systems. Geospatial capabilities for mapping, visualization, and spatial analysis will be important components of this new generation of Web-based interoperable information systems in the water domain. The purpose of this presentation is to increase the awareness of scientists, IT personnel and agency managers about the advantages offered by the combined use of Open Data, Open Specifications for geospatial and water-related data collection, storage and sharing, as well as mature FOSS projects for the creation of interoperable Web-based information systems in the water domain. A case study is used to illustrate how these principles and technologies can be integrated to create a system with the previously mentioned characteristics for managing and responding to flood events.
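A small example of consuming such an Open Specification service in Python with the OWSLib package; the server URL and layer name are hypothetical, but the calls follow the standard OGC WMS interface.

```python
from owslib.wms import WebMapService

# hypothetical OGC-compliant server publishing flood data
wms = WebMapService("https://maps.example.org/wms", version="1.3.0")
print(list(wms.contents))                      # layers the server advertises

img = wms.getmap(layers=["flood_extent"],
                 srs="EPSG:4326",
                 bbox=(-1.0, 44.0, 1.0, 46.0),
                 size=(800, 600),
                 format="image/png")
with open("flood.png", "wb") as f:
    f.write(img.read())
```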
A Framework for Managing Inter-Site Storage Area Networks using Grid Technologies
NASA Technical Reports Server (NTRS)
Kobler, Ben; McCall, Fritz; Smorul, Mike
2006-01-01
The NASA Goddard Space Flight Center and the University of Maryland Institute for Advanced Computer Studies are studying mechanisms for installing and managing Storage Area Networks (SANs) that span multiple independent collaborating institutions using Storage Area Network Routers (SAN Routers). We present a framework for managing inter-site distributed SANs that uses Grid Technologies to balance the competing needs to control local resources, share information, delegate administrative access, and manage the complex trust relationships between the participating sites.
Integrating Grid Services into the Cray XT4 Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
NERSC; Cholia, Shreyas; Lin, Hwa-Chun Wendy
2009-05-01
The 38,640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting and monitoring services through generic grid interfaces that mask the underlying system-specific details for the end user.
NASA Astrophysics Data System (ADS)
Knox, S.; Meier, P.; Mohammed, K.; Korteling, B.; Matrosov, E. S.; Hurford, A.; Huskova, I.; Harou, J. J.; Rosenberg, D. E.; Thilmant, A.; Medellin-Azuara, J.; Wicks, J.
2015-12-01
Capacity expansion on resource networks is essential to adapting to economic and population growth and pressures such as climate change. Engineered infrastructure systems such as water, energy, or transport networks require sophisticated and bespoke models to refine management and investment strategies. Successful modeling of such complex systems relies on good data management and advanced methods to visualize and share data. Engineered infrastructure systems are often represented as networks of nodes and links with operating rules describing their interactions. Infrastructure system management and planning can be abstracted to simulating or optimizing new operations and extensions of the network. By separating the data storage of abstract networks from manipulation and modeling we have created a system where infrastructure modeling across various domains is facilitated. We introduce Hydra Platform, a Free Open Source Software designed for analysts and modelers to store, manage and share network topology and data. Hydra Platform is a Python library with a web service layer for remote applications, called Apps, to connect. Apps serve various functions including network or results visualization, data export (e.g. into a proprietary format) or model execution. This Client-Server architecture allows users to manipulate and share centrally stored data. XML templates allow a standardised description of the data structure required for storing network data such that it is compatible with specific models. Hydra Platform represents networks in an abstract way and is therefore not bound to a single modeling domain. It is the Apps that create domain-specific functionality. Using Apps, researchers from different domains can incorporate different models within the same network, enabling cross-disciplinary modeling while minimizing errors and streamlining data sharing. Separating the Python library from the web layer allows developers to natively expand the software or build web-based apps in other languages for remote functionality. Partner CH2M is developing a commercial user interface for Hydra Platform; however, custom interfaces and visualization tools can be built. Hydra Platform is available on GitHub while Apps will be shared on a central repository.
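Purely as an illustration of this client-server pattern (not Hydra Platform's actual API: the endpoint, authentication and payload shapes below are all hypothetical), a remote App might interact with the server like this:

```python
import requests

BASE = "https://hydra.example.org"   # hypothetical server location
session = requests.Session()

# authenticate, then fetch a stored network and inspect its nodes
session.post(f"{BASE}/login", json={"username": "demo", "password": "demo"})
network = session.post(f"{BASE}/json",
                       json={"get_network": {"network_id": 42}}).json()
print([node["name"] for node in network.get("nodes", [])])
```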
Key Technologies of Phone Storage Forensics Based on ARM Architecture
NASA Astrophysics Data System (ADS)
Zhang, Jianghan; Che, Shengbing
2018-03-01
Smartphones mainly run one of three mobile operating systems: Android, iOS and Windows Phone. Android smartphones have the largest market share, and their processor chips are almost all based on the ARM architecture. The memory address mapping mechanism of the ARM architecture differs from that of the x86 architecture. To perform forensics on an Android smartphone, we need to understand three key technologies: memory data acquisition, the translation mechanism from virtual addresses to physical addresses, and locating the system's key data. This article presents a viable solution that does not rely on the operating system API as a complete answer to these three issues.
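A simplified sketch of the second of these technologies, virtual-to-physical translation, for the ARMv7 short-descriptor page-table format (1 MB sections and 4 KB small pages only; supersections, large pages and permission bits are ignored). The read_u32 accessor over the acquired memory image and the function name are assumptions, not the paper's code.

```python
def virt_to_phys(va, ttbr, read_u32):
    """Walk a two-level ARMv7 short-descriptor page table in a memory dump.
    read_u32(addr) must return the little-endian 32-bit word at a physical
    address; ttbr is the level-1 table base register value."""
    l1_desc = read_u32((ttbr & ~0x3FFF) | (((va >> 20) & 0xFFF) << 2))
    kind = l1_desc & 0b11
    if kind == 0b10:                              # 1 MB section mapping
        return (l1_desc & 0xFFF00000) | (va & 0x000FFFFF)
    if kind == 0b01:                              # pointer to a level-2 table
        l2_desc = read_u32((l1_desc & 0xFFFFFC00) | (((va >> 12) & 0xFF) << 2))
        if l2_desc & 0b10:                        # 4 KB small page
            return (l2_desc & 0xFFFFF000) | (va & 0x00000FFF)
    raise ValueError("translation fault at 0x%08x" % va)
```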
Pang, Shaoning; Ban, Tao; Kadobayashi, Youki; Kasabov, Nikola K
2012-04-01
To adapt linear discriminant analysis (LDA) to real-world applications, there is a pressing need to equip it with an incremental learning ability to integrate knowledge presented by one-pass data streams, a functionality to join multiple LDA models to make knowledge sharing between independent learning agents more efficient, and a forgetting functionality to avoid reconstruction of the overall discriminant eigenspace caused by irregular changes. To this end, we introduce two adaptive LDA learning methods: LDA merging and LDA splitting. These provide the benefits of online learning with one-pass data streams, class separability identical to that of the batch learning method, high efficiency for knowledge sharing due to the condensed knowledge representation of the eigenspace model, and more favorable time and storage costs than traditional approaches under common application conditions. These properties are validated by experiments on a benchmark face image data set. A case study on the application of the proposed method to multi-agent cooperative learning and system alternation of a face recognition system further clarifies the adaptability of the proposed methods to complex dynamic learning tasks.
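The paper's merging operates on condensed eigenspace models; underlying any such merge is the exact combination of per-class sufficient statistics, sketched below with NumPy (a building block for illustration, not the authors' full algorithm).

```python
import numpy as np

def merge_class_stats(n1, mean1, scatter1, n2, mean2, scatter2):
    """Exactly merge (count, mean, within-class scatter) of one class
    computed independently on two data batches."""
    n = n1 + n2
    mean = (n1 * mean1 + n2 * mean2) / n
    d = (mean1 - mean2).reshape(-1, 1)
    # cross term corrects for the shift between the two batch means
    scatter = scatter1 + scatter2 + (n1 * n2 / n) * (d @ d.T)
    return n, mean, scatter
```

Because the cross term accounts for the offset between the batch means, the merged scatter equals what a batch computation over the union of the two data sets would produce.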
Above the cloud computing orbital services distributed data model
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2014-05-01
Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above-the-cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and access of this data. The model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with the unique constraints of the orbital environment and tags data with business-model (contractual) obligations.
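A hypothetical record type illustrating the business-model tags such a catalog entry might carry; every field name here is an assumption made for illustration, not the paper's schema.

```python
from dataclasses import dataclass, field

@dataclass
class OrbitalDataItem:
    item_id: str                    # identifier unique across the constellation
    owner: str                      # consumer craft that procured the data
    holder: str                     # provider craft currently storing it
    retention_until: float          # epoch seconds; discard permitted afterwards
    resale_allowed: bool = False    # may the holder resell or redistribute?
    checksum: str = ""              # integrity check applied on retrieval
    replicas: list = field(default_factory=list)  # other craft holding copies
```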
A database management capability for Ada
NASA Technical Reports Server (NTRS)
Chan, Arvola; Danberg, SY; Fox, Stephen; Landers, Terry; Nori, Anil; Smith, John M.
1986-01-01
The data requirements of mission critical defense systems have been increasing dramatically. Command and control, intelligence, logistics, and even weapons systems are being required to integrate, process, and share ever increasing volumes of information. To meet this need, systems are now being specified that incorporate data base management subsystems for handling storage and retrieval of information. It is expected that a large number of the next generation of mission critical systems will contain embedded data base management systems. Since the use of Ada has been mandated for most of these systems, it is important to address the issues of providing data base management capabilities that can be closely coupled with Ada. A comprehensive distributed data base management project has been investigated. The key deliverables of this project are three closely related prototype systems implemented in Ada. These three systems are discussed.
Cloud-based crowd sensing: a framework for location-based crowd analyzer and advisor
NASA Astrophysics Data System (ADS)
Aishwarya, K. C.; Nambi, A.; Hudson, S.; Nadesh, R. K.
2017-11-01
Cloud computing is an emerging field of computer science that integrates and exploits large, powerful computing systems and storage for personal as well as enterprise requirements. Mobile Cloud Computing extends this concept to mobile hand-held devices. Crowdsensing, or more precisely Mobile Crowdsensing, is the process by which an available group of mobile hand-held devices shares different resources, such as data, memory and bandwidth, to perform a single task for a collective purpose. In this paper, we propose a framework that uses crowdsensing to build a crowd analyzer and advisor that tells the user whether or not to go to a place. This is ongoing research on a new concept toward which the direction of cloud computing has shifted, and it is viable for further expansion in the near future.
GIFT-Cloud: A data sharing and collaboration platform for medical imaging research.
Doel, Tom; Shakir, Dzhoshkun I; Pratt, Rosalind; Aertsen, Michael; Moggridge, James; Bellon, Erwin; David, Anna L; Deprest, Jan; Vercauteren, Tom; Ourselin, Sébastien
2017-02-01
Clinical imaging data are essential for developing research software for computer-aided diagnosis, treatment planning and image-guided surgery, yet existing systems are poorly suited for data sharing between healthcare and academia: research systems rarely provide an integrated approach for data exchange with clinicians; hospital systems are focused towards clinical patient care with limited access for external researchers; and safe haven environments are not well suited to algorithm development. We have established GIFT-Cloud, a data and medical image sharing platform, to meet the needs of GIFT-Surg, an international research collaboration that is developing novel imaging methods for fetal surgery. GIFT-Cloud also has general applicability to other areas of imaging research. GIFT-Cloud builds upon well-established cross-platform technologies. The Server provides secure anonymised data storage, direct web-based data access and a REST API for integrating external software. The Uploader provides automated on-site anonymisation, encryption and data upload. Gateways provide a seamless process for uploading medical data from clinical systems to the research server. GIFT-Cloud has been implemented in a multi-centre study for fetal medicine research. We present a case study of placental segmentation for pre-operative surgical planning, showing how GIFT-Cloud underpins the research and integrates with the clinical workflow. GIFT-Cloud simplifies the transfer of imaging data from clinical to research institutions, facilitating the development and validation of medical research software and the sharing of results back to the clinical partners. GIFT-Cloud supports collaboration between multiple healthcare and research institutions while satisfying the demands of patient confidentiality, data security and data ownership. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
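GIFT-Cloud's REST API routes are not given in the abstract, so the following Python sketch only illustrates the general pattern of pushing an anonymised file to such a server over HTTPS with token authentication; the server URL, route, field names, and response shape are all assumptions, not the project's actual API.

```python
import requests

# Hypothetical endpoint; the abstract names a REST API but not its routes.
SERVER = "https://gift-cloud.example.org/api"

def upload_anonymised_scan(path: str, project: str, token: str) -> str:
    """Upload one anonymised DICOM file to the research server."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{SERVER}/projects/{project}/scans",
            headers={"Authorization": f"Bearer {token}"},
            files={"file": (path, f, "application/dicom")},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["scan_id"]  # assumed response shape
```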
NASA Astrophysics Data System (ADS)
Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel
2015-12-01
We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.
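As one concrete illustration of how a user might hand work to such a shared HTCondor pool, the sketch below submits an eight-core job through the htcondor Python bindings (recent versions). The executable, resource request, and file names are placeholders; the Milan site's actual submission setup is not described in the abstract.

```python
import htcondor  # HTCondor Python bindings (assumed installed)

# Describe an 8-core job; keys follow standard condor_submit syntax.
job = htcondor.Submit({
    "executable": "/usr/bin/python3",
    "arguments": "analysis.py",
    "request_cpus": "8",
    "output": "job.$(ClusterId).out",
    "error": "job.$(ClusterId).err",
    "log": "job.log",
})

schedd = htcondor.Schedd()           # talk to the pool's scheduler
result = schedd.submit(job, count=1)
print("submitted cluster", result.cluster())
```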
A hybrid reconfigurable solar and wind energy system
NASA Astrophysics Data System (ADS)
Gadkari, Sagar A.
We study the feasibility of a novel hybrid solar-wind system that shares most of its infrastructure and components. During clear sunny days the system generates electricity from the sun using a parabolic concentrator. The concentrator is formed by individual mirror elements and focuses the light onto high-intensity vertical multi-junction (VMJ) cells. During periods of high wind speeds and at night, the same concentrator setup is reconfigured to channel wind into a turbine and harness wind energy. In this study we report on the feasibility of this type of solar/wind hybrid energy system. The key mechanisms (the optics, the cooling of the VMJ cells, and the air flow through the system) were investigated using simulation tools. The results from these simulations are presented, along with a simple economic analysis giving the levelized cost of energy for such a system. An iterative method of design refinement based on the simulation results was used to work towards a prototype design. The levelized cost achieved in the economic analysis shows the system to be a good alternative for a grid-isolated site, and it could be used as a standalone system in regions of lower demand. The approach to solar-wind hybridization reported herein paves the way for a new generation of hybrid systems that share common infrastructure in addition to the storage and distribution of energy.
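For reference, the levelized cost of energy reported by such an analysis is conventionally defined as discounted lifetime cost divided by discounted lifetime energy production:

```latex
\mathrm{LCOE} \;=\; \frac{\displaystyle\sum_{t=0}^{N} \frac{I_t + O_t}{(1+r)^{t}}}
                         {\displaystyle\sum_{t=0}^{N} \frac{E_t}{(1+r)^{t}}}
```

where $I_t$ and $O_t$ are the investment and operating costs in year $t$, $E_t$ is the energy delivered in year $t$, $r$ is the discount rate, and $N$ is the system lifetime. The abstract does not state which cost inputs were assumed.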
Battery Energy Storage Systems to Mitigate the Variability of Photovoltaic Power Generation
NASA Astrophysics Data System (ADS)
Gurganus, Heath Alan
Methods of generating renewable energy such as through solar photovoltaic (PV) cells and wind turbines offer great promise in terms of a reduced carbon footprint and overall impact on the environment. However, these methods also share the attribute of being highly stochastic, meaning they are variable in such a way that is difficult to forecast with sufficient accuracy. While solar power currently constitutes a small amount of generating potential in most regions, the cost of photovoltaics continues to decline and a trend has emerged to build larger PV plants than was once feasible. This has brought the matter of increased variability to the forefront of research in the industry. Energy storage has been proposed as a means of mitigating this increased variability (and thus reducing the need to utilize traditional spinning reserves) as well as offering auxiliary grid services such as peak-shifting and frequency control. This thesis addresses the feasibility of using electrochemical storage methods (i.e., batteries) to decrease the ramp rates of PV power plants. By building a simulation of a grid-connected PV array and a typical Battery Energy Storage System (BESS) in the NetLogo simulation environment, I have created a parameterized tool that can be tailored to describe almost any potential PV setup. This thesis describes the design and function of this model, and makes a case for the accuracy of its measurements by comparing its simulated output to that of well-documented real-world sites. Finally, a set of recommendations for the design and operational parameters of such a system are put forth based on the results of several experiments performed using this model.
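The core ramp-rate mitigation idea can be sketched independently of the NetLogo model: pass the raw PV series through a slew-rate limiter and let the battery absorb or supply the difference, subject to its state of charge. The Python below is a minimal idealized version (single fixed time step, no efficiency losses, no power rating) and is not the thesis's simulator.

```python
def ramp_limit(pv, max_ramp, soc, cap, dt=1.0):
    """Idealized ramp-rate control: the battery absorbs or supplies the
    difference between raw PV output and a rate-limited grid setpoint.
    Units: power in kW, energy in kWh, dt in hours. Losses ignored."""
    out, prev = [], pv[0]
    for p in pv:
        setpoint = max(prev - max_ramp, min(prev + max_ramp, p))
        battery_power = p - setpoint          # >0 charging, <0 discharging
        headroom = (cap - soc) / dt           # max charging power left
        floor = -soc / dt                     # max discharging power left
        battery_power = max(floor, min(headroom, battery_power))
        setpoint = p - battery_power          # revise if battery saturates
        soc += battery_power * dt
        out.append(setpoint)
        prev = setpoint
    return out, soc
```

When the battery saturates, the setpoint briefly violates the ramp limit rather than the energy balance; a real controller would trade these off more carefully.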
What CFOs should know before venturing into the cloud.
Rajendran, Janakan
2013-05-01
There are three major trends in the use of cloud-based services for healthcare IT: Cloud computing involves the hosting of health IT applications in a service provider cloud. Cloud storage is a data storage service that can involve, for example, long-term storage and archival of information such as clinical data, medical images, and scanned documents. Data center colocation involves rental of secure space in the cloud from a vendor, an approach that allows a hospital to share power capacity and proven security protocols, reducing costs.
NASA Astrophysics Data System (ADS)
Satoh, Hitoshi; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kaneko, Masahiro; Kakinuma, Ryutaru; Moriyama, Noriyuki
2011-03-01
We have developed a teleradiology network system with a new information security solution that provides a web-based medical image conference system. In a teleradiology network, the security of the information network is a very important subject. We study the secret sharing scheme as a method to safely store and transmit the confidential medical information used within the teleradiology network, which is otherwise exposed to the risk of damage and interception. Secret sharing is a method of dividing the confidential medical information into two or more tallies; the information cannot be decoded from any single tally. Our method also has a RAID-like function: if a single tally fails, redundant data has already been copied to another tally. The tallies are preserved at separate data centers connected through the Internet, which is safe because no individual tally reveals the medical information. Therefore, even if one of the data centers is struck and its information is damaged, the confidential medical information can be decoded by using the tallies preserved at the data centers that escape damage. We can safely share the screen of a workstation displaying medical images from a data center on two or more web conference terminals at the same time. Moreover, a real-time biometric face authentication system is connected to the data center; it analyzes the features of a face image captured by a camera within 20 seconds and protects the safety of the medical information. We propose a new information transmission method and a new information storage method based on this security solution.
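The two-tally case they describe can be illustrated with a one-time-pad split, in which either tally alone is statistically random; the RAID-like behavior comes from storing duplicate tallies at independent data centers. This Python sketch is a simplified stand-in, not the paper's actual scheme.

```python
import os

def split(secret: bytes):
    """Split data into two tallies; either tally alone is random noise."""
    pad = os.urandom(len(secret))
    return pad, bytes(a ^ b for a, b in zip(secret, pad))

def combine(t1: bytes, t2: bytes) -> bytes:
    """XOR the two tallies back together to recover the original."""
    return bytes(a ^ b for a, b in zip(t1, t2))

record = b"patient 42: CT series ..."
tally_a, tally_b = split(record)
# RAID-like redundancy: store each tally at two independent data centers,
# so the loss of any single site still leaves a recoverable copy.
data_centers = {"DC1": tally_a, "DC2": tally_b, "DC3": tally_a, "DC4": tally_b}
assert combine(data_centers["DC3"], data_centers["DC2"]) == record
```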
Robust adaptive control for a hybrid solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Snyder, Steven
2011-12-01
Solid oxide fuel cells (SOFCs) are electrochemical energy conversion devices. They offer a number of advantages over most other fuel cells due to their high operating temperature (800-1000°C), such as internal reforming, heat as a byproduct, and faster reaction kinetics without precious metal catalysts. Mitigating fuel starvation and improving load-following capabilities of SOFC systems are conflicting control objectives. However, this conflict can be resolved by hybridizing the system with an energy storage device, such as an ultra-capacitor. In this thesis, a steady-state property of the SOFC is combined with an input-shaping method in order to address the issue of fuel starvation. Simultaneously, an overall adaptive control strategy is employed to manage the energy sharing between the elements as well as to maintain the state of charge of the energy storage device. The adaptive control method is robust to errors in the fuel cell's fuel supply system and guarantees that the fuel cell current and ultra-capacitor state of charge approach their target values and remain uniformly ultimately bounded about these target values. Parameter saturation is employed to guarantee boundedness of the parameters. The controller is validated through hardware-in-the-loop experiments as well as computer simulations.
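Purely as an illustration of the input-shaping idea (not the controller developed in the thesis), one step of a slew- and magnitude-limited current command might look like the following, with the ultra-capacitor assumed to cover the transient shortfall; all names and limits are hypothetical.

```python
def shape_current(i_cmd, i_prev, max_step, i_fuel_limit):
    """One step of a hypothetical input shaper: slew-limit the stack
    current command and cap it at the level the fuel supply can sustain,
    so a sudden load increase cannot starve the cells. The ultracapacitor
    (not modeled here) supplies the difference during the transient."""
    step = max(-max_step, min(max_step, i_cmd - i_prev))
    return min(i_prev + step, i_fuel_limit)
```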
Cloud Computing for radiologists.
Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit
2012-07-01
Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure its success in the future.
Saxton, Ronald E; Yeasmin, Farzana; Alam, Mahbub-Ul; Al-Masud, Abdullah; Dutta, Notan Chandra; Yeasmin, Dalia; Luby, Stephen P; Unicomb, Leanne; Winch, Peter J
2017-09-01
Provision of toilets is necessary but not sufficient to impact health as poor maintenance may impair toilet function and discourage their consistent use. Water in urban slums is both scarce and a prerequisite for toilet maintenance behaviours. We describe the development of behaviour change communications and selection of low-cost water storage hardware to facilitate adequate flushing among users of shared toilets. We conducted nine focus group discussions and six ranking exercises with adult users of shared toilets (50 females, 35 males), then designed and implemented three pilot interventions to facilitate regular flushing and improve hygienic conditions of shared toilets. We conducted follow-up assessments 1 and 2 months post-pilot including nine in-depth interviews and three focus group discussions with adult residents (23 females, 15 males) and three landlords in the pilot communities. Periodic water scarcity was common in the study communities. Residents felt embarrassed to carry water for flushing. Reserving water adjacent to the shared toilet enabled slum residents to flush regularly. Signs depicting rules for toilet use empowered residents and landlords to communicate these expectations for flushing to transient tenants. Residents in the pilot reported improvements in cleanliness and reduced odour inside toilet cubicles. Our pilot demonstrates the potential efficacy of low-cost water storage and behaviour change communications to improve maintenance of and user satisfaction with shared toilets in urban slum settings. © 2017 John Wiley & Sons Ltd.
Global Assessment of Exploitable Surface Reservoir Storage under Climate Change
NASA Astrophysics Data System (ADS)
Liu, L.; Parkinson, S.; Gidden, M.; Byers, E.; Satoh, Y.; Riahi, K.
2016-12-01
Surface water reservoirs provide reliable water supply, hydropower generation, flood control, and recreation services. Reliable reservoirs can be robust measures for water security and can help smooth out the challenging seasonal variability of river flows. Yet reservoirs also cause flow fragmentation in rivers and can lead to flooding of upstream areas, thereby displacing existing land uses and ecosystems. The anticipated population growth, land use, and climate change in many regions globally suggest a critical need to assess the potential for appropriate reservoir capacity that can balance rising demands with long-term water security. In this research, we assessed exploitable reservoir potential under climate change and human development constraints by deriving storage-yield relationships for 235 river basins globally. The storage-yield relationships map the amount of storage capacity required to meet a given water demand based on a 30-year inflow sequence. Runoff data are simulated with an ensemble of Global Hydrological Models (GHMs) for each of five bias-corrected general circulation models (GCMs) under four climate change pathways. These data are used to define future 30-year inflows in each river basin for the period between 2010 and 2080. The calculated capacity is then combined with geographical information on environmental and human development exclusion zones to further limit the storage capacity expansion potential in each basin. We investigated the reliability of reservoir potentials across different climate change scenarios and Shared Socioeconomic Pathways (SSPs) to identify river basins where reservoir expansion will be particularly challenging. Preliminary results suggest large disparities in reservoir potential across basins: some basins have already approached their exploitable reserves, while others display abundant potential. Exclusion zones significantly reduce the amount of actual exploitable storage and firm yield worldwide: 30% of reservoir potential would be unavailable because of land occupied by environmental and human development exclusions. Results from this study will help decision makers understand the reliability of infrastructure systems particularly sensitive to future water availability.
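A standard way to compute one point on such a storage-yield relationship is the sequent-peak calculation: accumulate the deficits of inflow against the target yield and take the largest. The sketch below is a minimal version of that textbook method, not the study's GHM/GCM-driven workflow, and the example inflow series is invented.

```python
def required_storage(inflows, demand):
    """Sequent-peak estimate of the reservoir capacity needed to firm a
    constant demand against an inflow sequence (same volume units).
    Returns the peak cumulative deficit; 0 means no storage is needed."""
    deficit, peak = 0.0, 0.0
    for q in inflows:
        deficit = max(0.0, deficit + demand - q)
        peak = max(peak, deficit)
    return peak

# Example: 30 years of monthly inflows with a dry season, firm demand of
# 80 units/month; the result is the capacity that bridges the dry months.
inflows = [120, 150, 90, 60, 40, 30, 20, 30, 70, 110, 140, 160] * 30
print(required_storage(inflows, 80))
```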
Unified Ultrasonic/Eddy-Current Data Acquisition
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1993-01-01
Imaging station for detecting cracks and flaws in solid materials developed combining both ultrasonic C-scan and eddy-current imaging. Incorporation of both techniques into one system eliminates duplication of computers and of mechanical scanners; unifies acquisition, processing, and storage of data; reduces setup time for repetitious ultrasonic and eddy-current scans; and increases efficiency of system. Same mechanical scanner used to maneuver either ultrasonic or eddy-current probe over specimen and acquire point-by-point data. For ultrasonic scanning, probe linked to ultrasonic pulser/receiver circuit card, while, for eddy-current imaging, probe linked to impedance-analyzer circuit card. Both ultrasonic and eddy-current imaging subsystems share same desktop-computer controller, containing dedicated plug-in circuit boards for each.
Integrated Systems Health Management for Intelligent Systems
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Melcher, Kevin
2011-01-01
The implementation of an integrated system health management (ISHM) capability is fundamentally linked to the management of data, information, and knowledge (DIaK) with the purposeful objective of determining the health of a system. Management implies storage, distribution, sharing, maintenance, processing, reasoning, and presentation. ISHM is akin to having a team of experts who are all individually and collectively observing and analyzing a complex system, and communicating effectively with each other in order to arrive at an accurate and reliable assessment of its health. In this chapter, concepts, procedures, and approaches are presented as a foundation for implementing an ISHM capability relevant to intelligent systems. The capability stresses integration of DIaK from all elements of a system, emphasizing an advance toward an on-board, autonomous capability. Both ground-based and on-board ISHM capabilities are addressed. The information presented is the result of many years of research, development, and maturation of technologies, and of prototype implementations in operational systems.
Management of information in distributed biomedical collaboratories.
Keator, David B
2009-01-01
Organizing and annotating biomedical data in structured ways has gained much interest and focus in the last 30 years. Driven by decreases in digital storage costs and advances in genetics sequencing, imaging, electronic data collection, and microarray technologies, data is being collected at an alarming rate. The specialization of fields in biology and medicine demonstrates the need for somewhat different structures for storage and retrieval of data. For biologists, the need for structured information and integration across a number of domains drives development. For clinical researchers and hospitals, the need for a structured medical record accessible to, ideally, any medical practitioner who might require it during the course of research or patient treatment, patient confidentiality, and security are the driving developmental factors. Scientific data management systems generally consist of a few core services: a backend database system, a front-end graphical user interface, and an export/import mechanism or data interchange format to both get data into and out of the database and share data with collaborators. The chapter introduces some existing databases, distributed file systems, and interchange languages used within the biomedical research and clinical communities for scientific data management and exchange.
Real-Time Data Streaming and Storing Structure for the LHD's Fusion Plasma Experiments
NASA Astrophysics Data System (ADS)
Nakanishi, Hideya; Ohsuna, Masaki; Kojima, Mamoru; Imazu, Setsuo; Nonomura, Miki; Emoto, Masahiko; Yoshida, Masanobu; Iwata, Chie; Ida, Katsumi
2016-02-01
The LHD data acquisition and archiving system, i.e., LABCOM system, has been fully equipped with high-speed real-time acquisition, streaming, and storage capabilities. To deal with more than 100 MB/s continuously generated data at each data acquisition (DAQ) node, DAQ tasks have been implemented as multitasking and multithreaded ones in which the shared memory plays the most important role for inter-process fast and massive data handling. By introducing a 10-second time chunk named “subshot,” endless data streams can be stored into a consecutive series of fixed length data blocks so that they will soon become readable by other processes even while the write process is continuing. Real-time device and environmental monitoring are also implemented in the same way with further sparse resampling. The central data storage has been separated into two layers to be capable of receiving multiple 100 MB/s inflows in parallel. For the frontend layer, high-speed SSD arrays are used as the GlusterFS distributed filesystem which can provide max. 2 GB/s throughput. Those design optimizations would be informative for implementing the next-generation data archiving system in big physics, such as ITER.
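The subshot mechanism reduces to a simple pattern: roll the output file on every 10-second time-chunk boundary so that completed chunks become readable while acquisition continues. A minimal single-threaded Python sketch follows (the real DAQ is multitasking and built on shared memory, which this does not attempt to reproduce).

```python
import os

def stream_to_subshots(source, out_dir, chunk_seconds=10):
    """Write an endless byte stream as a series of fixed-duration files
    ("subshots") so readers can open completed chunks while acquisition
    continues. `source` yields (timestamp_seconds, bytes) pairs."""
    os.makedirs(out_dir, exist_ok=True)
    epoch, f = None, None
    for ts, data in source:
        idx = int(ts // chunk_seconds)
        if idx != epoch:                    # time-chunk boundary reached
            if f:
                f.close()                   # completed subshot is now readable
            f = open(os.path.join(out_dir, f"subshot_{idx:010d}.dat"), "wb")
            epoch = idx
        f.write(data)
```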
General consumer communication tools for improved image management and communication in medicine.
Rosset, Chantal; Rosset, Antoine; Ratib, Osman
2005-12-01
We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies developed for communication and storage of data can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, an MP3 portable player with hard disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, a videoconference and instant-messaging software, to transmit DICOM images in real time to a distant computer for teleradiology conferencing. (3) Finally, we developed a direct secure interface to the iDisk service, a file-sharing service based on the WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated in a new open-source image navigation and display software called OsiriX, allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features, such as teleradiology, secure and convenient image and data communication, and the use of external data storage services, open the gate to a much broader extension of our imaging infrastructure to the outside world.
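Because WebDAV is ordinary HTTP, the iDisk-style transfer described here needs nothing more than an authenticated PUT. The sketch below uses Python's requests library with an invented share URL and credentials; it is not code from OsiriX.

```python
import requests

def webdav_put(local_path, url, user, password):
    """Upload a DICOM file to a WebDAV share with a single HTTP PUT;
    WebDAV is plain HTTP, so no special client library is required."""
    with open(local_path, "rb") as f:
        r = requests.put(url, data=f, auth=(user, password), timeout=60)
    r.raise_for_status()

# Hypothetical share and path, for illustration only:
# webdav_put("study001.dcm",
#            "https://idisk.example.com/Shared/study001.dcm",
#            "radiologist", "secret")
```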
Ground System for Solar Dynamics Observatory (SDO) Mission
NASA Technical Reports Server (NTRS)
Tann, Hun K.; Silva, Christopher J.; Pages, Raymond J.
2005-01-01
NASA's Goddard Space Flight Center (GSFC) has recently completed its Critical Design Review (CDR) of a new dual Ka- and S-band ground system for the Solar Dynamics Observatory (SDO) Mission. SDO, the flagship mission under the new Living with a Star Program Office, is one of GSFC's most recent large-scale in-house missions. The observatory is scheduled for launch in August 2008 from the Kennedy Space Center aboard an Atlas-5 expendable launch vehicle. Unique to this mission is an extremely challenging science data capture requirement: the mission is required to capture 99.99% of available science over 95% of all observation opportunities. Due to the continuous, high-volume (150 Mbps) science data rate, no on-board storage of science data will be implemented on this mission. With the observatory placed in a geosynchronous orbit at 36,000 kilometers within view of dedicated ground stations, the ground system will in effect implement a "real-time" science data pipeline with appropriate data accounting, data storage, data distribution, data recovery, and automated system failure detection and correction to keep the science data flowing continuously to three separate Science Operations Centers (SOCs). Data storage rates of approx. 45 Tera-bytes per month are expected. The Mission Operations Center (MOC) will be based at GSFC and is designed to be highly automated. The three SOCs will share in observatory operations, each operating their own instrument. Remote operation from the MOC of a multi-antenna ground station in White Sands, New Mexico is part of the design baseline.
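A quick back-of-the-envelope check (assuming a continuous stream, decimal units, and a 30-day month) shows the quoted storage rate is consistent with the 150 Mbps downlink once downtime and overheads are allowed for:

```python
# Rough check of the quoted ~45 TB/month archive growth from a continuous
# 150 Mbps science stream (decimal units, 30-day month, no overhead removed):
rate_bytes_per_s = 150e6 / 8            # 18.75 MB/s
month_seconds = 30 * 24 * 3600          # 2,592,000 s
tb_per_month = rate_bytes_per_s * month_seconds / 1e12
print(round(tb_per_month, 1))           # ~48.6 TB, close to the quoted 45 TB
```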
Towards building a team of intelligent robots
NASA Technical Reports Server (NTRS)
Varanasi, Murali R.; Mehrotra, R.
1987-01-01
Topics addressed include: collision-free motion planning of multiple robot arms; two-dimensional object recognition; and pictorial databases (storage and sharing of the representations of three-dimensional objects).
Patil, Sunil; Lu, Hui; Saunders, Catherine L; Potoglou, Dimitris; Robinson, Neil
2016-11-01
To assess the public's preferences regarding potential privacy threats from devices or services storing health-related personal data. A pan-European survey based on a stated-preference experiment for assessing preferences for electronic health data storage, access, and sharing. We obtained 20 882 survey responses (94 606 preferences) from 27 EU member countries. Respondents recognized the benefits of storing electronic health information, with 75.5%, 63.9%, and 58.9% agreeing that storage was important for improving treatment quality, preventing epidemics, and reducing delays, respectively. Concerns about different levels of access by third parties were expressed by 48.9% to 60.6% of respondents. On average, compared to devices or systems that only store basic health status information, respondents preferred devices that also store identification data (coefficient/relative preference 95% CI = 0.04 [0.00-0.08], P = 0.034) and information on lifelong health conditions (coefficient = 0.13 [0.08 to 0.18], P < 0.001), but there was no evidence of this for devices with information on sensitive health conditions such as mental and sexual health and addictions (coefficient = -0.03 [-0.09 to 0.02], P = 0.24). Respondents were averse to their immediate family (coefficient = -0.05 [-0.05 to -0.01], P = 0.011) and home care nurses (coefficient = -0.06 [-0.11 to -0.02], P = 0.004) viewing this data, and strongly averse to health insurance companies (coefficient = -0.43 [-0.52 to -0.34], P < 0.001), private sector pharmaceutical companies (coefficient = -0.82 [-0.99 to -0.64], P < 0.001), and academic researchers (coefficient = -0.53 [-0.66 to -0.40], P < 0.001) viewing the data. Storing more detailed electronic health data was generally preferred, but respondents were averse to wider access to and sharing of this information. When developing frameworks for the use of electronic health data, policy makers should consider approaches that both highlight the benefits to the individual and minimize the perception of privacy risks. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
A Cloud-based Infrastructure and Architecture for Environmental System Research
NASA Astrophysics Data System (ADS)
Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.
2016-12-01
The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provide a great opportunity to enable data and computing infrastructure sharing between closely related research activities. By taking advantage of these approaches, along with the world-class computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community. It provides unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data, and even data submission workflows in a straightforward fashion. The infrastructure minimizes disruption of current project-based data submission workflows for better acceptance by existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. It also eliminates the scalability problems of current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data management services and systems, originated and developed by ORNL data centers.
Symbiosis of executive and selective attention in working memory
Vandierendonck, André
2014-01-01
The notion of working memory (WM) was introduced to account for the usage of short-term memory resources by other cognitive tasks such as reasoning, mental arithmetic, language comprehension, and many others. This collaboration between memory and other cognitive tasks can only be achieved by a dedicated WM system that controls task coordination. To that end, WM models include executive control. Nevertheless, other attention control systems may be involved in coordination of memory and cognitive tasks calling on memory resources. The present paper briefly reviews the evidence concerning the role of selective attention in WM activities. A model is proposed in which selective attention control is directly linked to the executive control part of the WM system. The model assumes that apart from storage of declarative information, the system also includes an executive WM module that represents the current task set. Control processes are automatically triggered when particular conditions in these modules are met. As each task set represents the parameter settings and the actions needed to achieve the task goal, it will depend on the specific settings and actions whether selective attention control will have to be shared among the active tasks. Only when such sharing is required, task performance will be affected by the capacity limits of the control system involved. PMID:25152723
Development of climate data storage and processing model
NASA Astrophysics Data System (ADS)
Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.
2016-11-01
We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes at local and regional scales. The model is based on a "shared-nothing" distributed computing architecture and assumes a computing network in which each node is independent and self-sufficient. Each node holds dedicated software for the processing and visualization of geospatial data, providing programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data are represented by collections of netCDF files stored in a directory hierarchy within a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database accompanies the storage and processing model. It contains descriptions of the space-time features of the datasets available for processing and their locations, as well as descriptions and run options of the software components for data analysis and visualization. Together, the model and the metadata database will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.
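As a minimal illustration of the metadata-database idea (not the authors' implementation), the sketch below walks a directory hierarchy of netCDF files and records each file's variables and time extent in SQLite, so later searches need not open the data files. The file layout and schema are assumptions.

```python
import glob, os, sqlite3
from netCDF4 import Dataset  # assumes the netCDF4 package is installed

def index_collection(root, db_path):
    """Walk a directory hierarchy of netCDF files and record each file's
    variables and time span in a small metadata database, so queries can
    locate data without opening every file."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS granule
                   (path TEXT, variable TEXT, n_times INTEGER)""")
    for path in glob.glob(os.path.join(root, "**", "*.nc"), recursive=True):
        with Dataset(path) as ds:
            n = len(ds.dimensions["time"]) if "time" in ds.dimensions else 0
            for var in ds.variables:
                con.execute("INSERT INTO granule VALUES (?,?,?)",
                            (path, var, n))
    con.commit()
    con.close()
```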
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Olcoz, Asier; Sorbet, Fco. Javier
2017-06-01
CSP technologies are essential for allowing large shares of renewables into the grid, due to their unique ability to cope with the large variability of the energy resource by means of technically and economically feasible thermal energy storage (TES) systems. However, technological breakthroughs towards cost reductions and increased efficiencies are still needed, and research on advanced power cycles, like the Decoupled Solar Combined Cycle (DSCC), is regarded as a key objective. The DSCC concept is, basically, a combined Brayton-Rankine cycle in which the bottoming cycle is decoupled from the operation of the topping cycle by means of an intermediate storage system. According to this concept, one or several solar towers, each driving a solar air receiver and a gas turbine (Brayton cycle), feed a single storage system and bottoming cycle with their exhaust gases. This general concept benefits from large design flexibility. On the one hand, different schemes are possible with respect to the number and configuration of solar towers, the storage media and configuration, the bottoming cycles, and so on. On the other hand, within a specific scheme a large number of design parameters can be optimized, including the solar field size, the operating temperatures and pressures of the receiver, the power of the Brayton and Rankine cycles, and the storage capacity. Heretofore, DSCC plants have been analyzed by means of simple steady-state models with pre-established operating parameters in the power cycles. In this work, a detailed transient simulation model for DSCC plants has been developed and is used to analyze different DSCC plant schemes. For each of the analyzed plant schemes, a sensitivity analysis and selection of the main design parameters is carried out. Results show that an increase in annual solar-to-electric efficiency of 30% (from 12.91% to 16.78%) can be achieved by using two bottoming Rankine cycles at two different temperatures, enabling low-temperature heat recovery from the receiver and gas turbine exhaust gases.
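The quoted 30% improvement is the relative gain between the two annual solar-to-electric efficiencies:

```latex
\frac{\eta_{\text{new}} - \eta_{\text{ref}}}{\eta_{\text{ref}}}
  \;=\; \frac{16.78\% - 12.91\%}{12.91\%} \;\approx\; 0.30
```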
NASA Astrophysics Data System (ADS)
Harden, Jennifer W.; Loiesel, Julie; Ryals, Rebecca; Lawrence, Corey; Blankinship, Joseph; Phillips, Claire; Bond-Lamberty, Ben; Todd-Brown, Katherine; Vargas, Rodrigo; Hugelius, Gustaf; Nave, Luke; Malhotra, Avni; Silver, Whendee; Sanderman, Jon
2017-04-01
A number of diverse approaches and sciences can contribute to a robust understanding of the I. state, II. vulnerabilities, and III. opportunities for soil carbon in the context of its potential contributions to the atmospheric C budget. Soil state refers to the current C stock of a given site, region, or ecosystem/land-use type. Soil vulnerabilities refers to the forms and bioreactivity of C stocks, which determine how soil C might respond to climate, disturbance, and land-use perturbations. Opportunities refer to the potential for soils in their current state to increase their capacity for and rate of C storage under future conditions, thereby impacting atmospheric C budgets. In order to capture the state, vulnerability, and opportunities for soil C, a robust C accounting scheme must meet at least three science needs: (1) a user-friendly and dynamic database with transparent, shared coding, in which data layers for solid, liquid, and gaseous phases share relational metadata and allow for changes over time; (2) a framework to characterize the capacity and reactivity of different soil types based on climate, historic, and landscape factors; (3) a framework to characterize land-use practices and their impact on physical state, capacity/reactivity, and potential for C change. In order to translate this science into practicable implementations for land policies, two societal needs must also be met: (1) metrics by which landowners and policy experts can recognize conditions of vulnerability or opportunity, and (2) communication schemes for accessing salient outcomes of the science. Importantly, there is an opportunity for contributions of data, model code, and conceptual frameworks in which scientists, educators, and decision-makers become citizens of a shared, scrutinized database that contributes to a dynamic, improved understanding of our soil system.
Extended outlook: description, utilization, and daily applications of cloud technology in radiology.
Gerard, Perry; Kapadia, Neil; Chang, Patricia T; Acharya, Jay; Seiler, Michael; Lefkovitz, Zvi
2013-12-01
The purpose of this article is to discuss the concept of cloud technology, its role in medical applications and radiology, the role of the radiologist in using and accessing these vast resources of information, and privacy concerns and HIPAA compliance strategies. Cloud computing is the delivery of shared resources, software, and information to computers and other devices as a metered service. This technology has a promising role in the sharing of patient medical information and appears to be particularly suited for application in radiology, given the field's inherent need for storage and access to large amounts of data. The radiology cloud has significant strengths, such as providing centralized storage and access, reducing unnecessary repeat radiologic studies, and potentially allowing radiologic second opinions more easily. There are significant cost advantages to cloud computing because of a decreased need for infrastructure and equipment by the institution. Private clouds may be used to ensure secure storage of data and compliance with HIPAA. In choosing a cloud service, there are important aspects, such as disaster recovery plans, uptime, and security audits, that must be considered. Given that the field of radiology has become almost exclusively digital in recent years, the future of secure storage and easy access to imaging studies lies within cloud computing technology.
NASA Astrophysics Data System (ADS)
Farroha, Bassam S.; Farroha, Deborah L.
2011-06-01
The new corporate approach to efficient processing and storage is migrating from in-house service-center services to the newly coined approach of Cloud Computing. This approach advocates thin clients and the provision of services by the service provider over time-shared resources. The concept is not new; however, the implementation approach presents a strategic shift in the way organizations provision and manage their IT resources. The requirements on some of the data sets targeted to run on the cloud vary depending on the data type, originator, user, and confidentiality level. Additionally, the systems that fuse such data would have to classify the resulting product and clear the computing resources before allowing a new application to execute. This indicates that we could end up with a multi-level security system that must follow specific rules and send its output only to appropriately protected networks and systems, in order to avoid data spills or contaminated resources. The paper discusses these requirements and their potential impact on the cloud architecture. Additionally, the paper discusses an unexpected advantage of the cloud framework: providing a sophisticated environment for information sharing and data mining.
Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T
2015-01-01
To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
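SensorDB's backend is not specified at the level of schemas in the abstract, but the access pattern it must support is easy to state: append-only time-stamped samples plus an indexed time-range query fast enough for interactive plots. The following generic SQLite sketch illustrates that pattern only; it is not SensorDB's cloud storage layer.

```python
import sqlite3, time

# Append-only samples keyed by (sensor, timestamp), with an index so that
# range queries return quickly enough for interactive visualization.
con = sqlite3.connect("sensors.db")
con.execute("""CREATE TABLE IF NOT EXISTS sample
               (sensor TEXT, ts REAL, value REAL)""")
con.execute("CREATE INDEX IF NOT EXISTS idx ON sample(sensor, ts)")

def append(sensor, value, ts=None):
    con.execute("INSERT INTO sample VALUES (?,?,?)",
                (sensor, ts or time.time(), value))

def window(sensor, t0, t1):
    """Return all (ts, value) pairs for one sensor in a time range."""
    return con.execute("""SELECT ts, value FROM sample
                          WHERE sensor=? AND ts BETWEEN ? AND ?
                          ORDER BY ts""", (sensor, t0, t1)).fetchall()
```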
Concierge: Personal Database Software for Managing Digital Research Resources
Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro
2007-01-01
This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support, from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800
Huang, Xuezhen; Zhang, Xi; Jiang, Hongrui
2014-02-15
To study the fundamental energy storage mechanism of photovoltaically self-charging cells (PSCs) without involving light-responsive semiconductor materials such as Si powder and ZnO nanowires, we fabricate a two-electrode PSC with the dual functions of photocurrent output and energy storage by introducing a PVDF film dielectric on the counterelectrode of a dye-sensitized solar cell. A layer of ultrathin Au film used as a quasi-electrode establishes a shared interface for the I⁻/I₃⁻ redox reaction and for the contact between the electrolyte and the dielectric for the energy storage, and prohibits recombination during the discharging period because of its discontinuity. PSCs with a 10-nm-thick PVDF provide a steady photocurrent output and achieve a light-to-electricity conversion efficiency (η) of 3.38%, and simultaneously offer energy storage with a charge density of 1.67 C g⁻¹. Using this quasi-electrode design, optimized energy storage structures may be used in PSCs for high energy storage density.
Autonomous Docking Based on Infrared System for Electric Vehicle Charging in Urban Areas
Pérez, Joshué; Nashashibi, Fawzi; Lefaudeux, Benjamin; Resende, Paulo; Pollard, Evangeline
2013-01-01
Electric vehicles are progressively introduced in urban areas, because of their ability to reduce air pollution, fuel consumption and noise nuisance. Nowadays, some big cities are launching the first electric car-sharing projects to clear traffic jams and enhance urban mobility, as an alternative to the classic public transportation systems. However, there are still some problems to be solved related to energy storage, electric charging and autonomy. In this paper, we present an autonomous docking system for electric vehicles recharging based on an embarked infrared camera performing infrared beacons detection installed in the infrastructure. A visual servoing system coupled with an automatic controller allows the vehicle to dock accurately to the recharging booth in a street parking area. The results show good behavior of the implemented system, which is currently deployed as a real prototype system in the city of Paris. PMID:23429581
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindsay, Haile; Garcia-Santos, Norma; Saverot, Pierre
2012-07-01
The U.S. Nuclear Regulatory Commission (NRC) was established in 1974 with the mission to license and regulate the civilian use of nuclear materials for commercial, industrial, academic, and medical uses in order to protect public health and safety, and the environment, and promote the common defense and security. Currently, approximately half (∼49%) of the workforce at the NRC has been with the Agency for less than six years. As part of the Agency's mission, the NRC has partial responsibility for the oversight of the transportation and storage of radioactive materials. The NRC has experienced a significant level of expertise leaving the Agency due to staff attrition. Factors that contribute to this attrition include retirement of the experienced nuclear workforce and mobility of staff within or outside the Agency. Several knowledge management (KM) initiatives have been implemented within the Agency, one of them being the formation of a Division of Spent Fuel Storage and Transportation (SFST) KM team. The team, which was formed in the fall of 2008, facilitates capturing, transferring, and documenting regulatory knowledge for staff to effectively perform their safety oversight of transportation and storage of radioactive materials, regulated under Title 10 of the Code of Federal Regulations (10 CFR) Part 71 and Part 72. In terms of KM, the SFST goal is to share critical information among the staff to reduce the impact of staff mobility and attrition. KM strategies in place to achieve this goal are: (1) development of communities of practice (CoP) (SFST Qualification Journal and the Packaging and Storing Radioactive Material) in the on-line NRC Knowledge Center (NKC); (2) implementation of an SFST seminar program where the seminars are recorded and placed in the Agency's repository, the Agency-wide Documents Access and Management System (ADAMS); (3) meetings of technical discipline groups to share knowledge within specialty areas; (4) development of written guidance to capture administrative and technical knowledge (e.g., office instructions (OIs), generic communications (bulletins, generic letters, regulatory issue summaries), standard review plans (SRPs), interim staff guidance (ISGs)); (5) use of mentoring strategies for experienced staff to train new staff members; (6) use of Microsoft SharePoint portals in capturing, transferring, and documenting knowledge for staff across the Division, from Division management and administrative assistants to the project managers, inspectors, and technical reviewers; and (7) development and implementation of a Division KM Plan. A discussion and description of the successes and challenges of implementing these KM strategies at the NRC/SFST is provided. (authors)
Gupta, Vijayalaxmi; Holets-Bondar, Lesya; Roby, Katherine F; Enders, George; Tash, Joseph S
2015-01-01
Collection and processing of tissues to preserve space flight effects from animals after return to Earth is challenging. Specimens must be harvested with minimal time after landing to minimize postflight readaptation alterations in protein expression/translation, posttranslational modifications, and expression, as well as changes in gene expression and tissue histological degradation after euthanasia. We report the development of a widely applicable strategy for determining the window of optimal species-specific and tissue-specific posteuthanasia harvest that can be utilized to integrate into multi-investigator Biospecimen Sharing Programs. We also determined methods for ISS-compatible long-term tissue storage (10 months at -80°C) that yield recovery of high quality mRNA and protein for western analysis after sample return. Our focus was reproductive tissues. The time following euthanasia where tissues could be collected and histological integrity was maintained varied with tissue and species ranging between 1 and 3 hours. RNA quality was preserved in key reproductive tissues fixed in RNAlater up to 40 min after euthanasia. Postfixation processing was also standardized for safe shipment back to our laboratory. Our strategy can be adapted for other tissues under NASA's Biospecimen Sharing Program or similar multi-investigator tissue sharing opportunities.
Storage battery market: profiles and trade opportunities
NASA Astrophysics Data System (ADS)
Stonfer, D.
1985-04-01
The export market for domestically produced storage batteries is a modest one, typically averaging 6 to 7% of domestic industry shipments. Exports in 1984 totalled about $167 million. Canada and Mexico were the largest export markets for US storage batteries in 1984, accounting for slightly more than half of the total. The United Kingdom, Saudi Arabia, and the Netherlands round out the top five export markets. Combined, these five markets accounted for two-thirds of all US exports of storage batteries in 1984. On a regional basis, the North American (Canada), Central American, and European markets accounted for three-quarters of total storage battery exports. Lead-acid batteries accounted for 42% of total battery exports. Battery parts followed lead-acid batteries with a 29% share. Nicad batteries accounted for 16% of the total while other batteries accounted for 13%.
Vergauwe, Evie; Barrouillet, Pierre; Camos, Valérie
2009-07-01
Examinations of interference between visual and spatial materials in working memory have suggested domain- and process-based fractionations of visuo-spatial working memory. The present study examined the role of central time-based resource sharing in visuo-spatial working memory and assessed its role in obtained interference patterns. Visual and spatial storage were combined with both visual and spatial on-line processing components in computer-paced working memory span tasks (Experiment 1) and in a selective interference paradigm (Experiment 2). The cognitive load of the processing components was manipulated to investigate its impact on concurrent maintenance for both within-domain and between-domain combinations of processing and storage components. In contrast to both domain- and process-based fractionations of visuo-spatial working memory, the results revealed that recall performance was determined by the cognitive load induced by the processing of items, rather than by the domain to which those items pertained. These findings are interpreted as evidence for a time-based resource-sharing mechanism in visuo-spatial working memory.
A Collaborative Data Scientist Framework for both Primary and Secondary Education
NASA Astrophysics Data System (ADS)
Branch, B. D.
2011-12-01
The earth science data education pipeline may depend on K-20 outcomes. A challenge for earth science and space informatics education, and for generational knowledge transfer more broadly, is that the needed pedagogical infrastructure is often non-existent or cost-prohibitive. Addressing this requires a technological infrastructure, a validated assessment system, and collaboration among stakeholders of primary and secondary education. Moreover, the K-20 paradigms keep science and technology preparation standards separate, whereas fundamental informatics requires an integrated pedagogical approach. In simple terms, a collaborative earth science training program spanning a subset of disciplines may be a pragmatic means of formal data scientist training that remains sustainable as technology evolves and data-sharing policy becomes a norm of data literacy. As the Global Earth Observation System of Systems (GEOSS) has a 10-year work plan, educational stakeholders may find funding avenues if government comes to see earth science data training as a valuable job skill and societal need. The proposed framework holds that ontological literacy, database management, storage management, and data-sharing capability are its fundamental informatics concepts, with societal engagement built in. Here, all STEM disciplines could adopt an integrated approach and mature such learning metrics within their matriculation and assessment systems. The NSF's EarthCube and Europe's WISE may represent the best cases for implementing such a framework.
NASA Astrophysics Data System (ADS)
Chen, Chia-Chin; Maier, Joachim
2018-05-01
In the version of this Perspective originally published, in the sentence "It is worthy of note that the final LiF-free situation characterized by MnO taking up the holes and the (F- containing) MnO surface taking up the lithium ions is also a subcase of the job-sharing concept23.", the word `holes' should have been `electrons'. This has now been corrected.
NASA Astrophysics Data System (ADS)
Xiong, Ting; He, Zhiwen
2017-06-01
Cloud computing was first proposed by Google in the United States as an Internet-centred approach providing a standard and open network sharing service. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of actual teaching needs. Cloud computing, which uses Internet technology to provide shared resources, has therefore arrived like timely rain and become an important means of sharing digital education applications in current higher education. Against the background of the cloud computing environment, this paper analyses the existing problems in the sharing of digital educational resources in the independent colleges of Jiangxi Province. Drawing on the sharing characteristics of cloud computing, namely mass storage, efficient operation and low cost, the author explores the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the shared model is put into practical application.
Solving data-at-rest for the storage and retrieval of files in ad hoc networks
NASA Astrophysics Data System (ADS)
Knobler, Ron; Scheffel, Peter; Williams, Jonathan; Gaj, Kris; Kaps, Jens-Peter
2013-05-01
Based on current trends for both military and commercial applications, the use of mobile devices (e.g. smartphones and tablets) is greatly increasing. Several military applications consist of secure peer-to-peer file sharing without a centralized authority. For these applications, if one or more of these mobile devices are lost or compromised, sensitive files can be exposed to adversaries, since COTS devices and operating systems are used. Complete files cannot be stored on a single device, since after compromising that device an adversary can attack the data at rest and eventually obtain the original file. Also, after a device is compromised, the remaining peer-to-peer system devices must still be able to access all system files. McQ has teamed with the Cryptographic Engineering Research Group at George Mason University to develop a custom distributed file sharing system that provides a complete solution to the data-at-rest problem for resource-constrained embedded systems and mobile devices. This innovative approach scales very well to a large number of network devices, without a single point of failure. We have implemented the approach on representative mobile devices and developed an extensive system simulator to benchmark expected system performance based on detailed modeling of the network/radio characteristics, CONOPS, and secure distributed file system functionality. The simulator is highly customizable for the purpose of determining expected system performance for other network topologies and CONOPS.
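The abstract leaves the cryptographic details to the paper, but the core idea of eliminating any single point of compromise can be illustrated with a simple n-of-n XOR secret split, in which no device alone holds enough to reconstruct a file. The Python sketch below is illustrative only and is not McQ's implementation; a fielded system would more likely use a k-of-n threshold scheme (e.g., Shamir's) so that losing a device does not destroy the file.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_into_shares(data: bytes, n: int) -> list[bytes]:
    """Split data into n shares; any n-1 of them reveal nothing."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, data))  # last share closes the XOR
    return shares

def reassemble(shares: list[bytes]) -> bytes:
    """XOR of all shares recovers the original data."""
    return reduce(xor_bytes, shares)

secret = b"mission sensor log"
parts = split_into_shares(secret, 4)   # each part stored on a different peer
assert reassemble(parts) == secret
```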
NASA Astrophysics Data System (ADS)
Cardille, J. A.; Gonzales, R.; Parrott, L.; Bai, J.
2009-12-01
How should researchers store and share data? For most of history, scientists with results and data to share have been largely limited to books and journal articles. In recent decades, the advent of personal computers and shared data formats has made it feasible, though often cumbersome, to transfer data between individuals or among small groups. Meanwhile, the use of automatic samplers, simulation models, and other data-production techniques has increased greatly. The result is that there is more and more data to store, and a greater expectation that they will be available at the click of a button. In 10 or 20 years, will we still send emails to each other to learn about what data exist? The development of and widespread familiarity with virtual globes like Google Earth and NASA WorldWind has created the potential, in just the last few years, to revolutionize the way we share data, search for and search through data, and understand the relationship between individual projects in research networks, where sharing and dissemination of knowledge is encouraged. For the last two years, we have been building the GeoSearch application, a cutting-edge online resource for the storage, sharing, search, and retrieval of data produced by research networks. Linking NASA's WorldWind globe platform, the data browsing toolkit prefuse, and SQL databases, GeoSearch's version 1.0 enables flexible searches and novel geovisualizations of large amounts of related scientific data. These data may be submitted to the database by individual researchers and processed by GeoSearch's data parser. Ultimately, data from research groups gathered in a research network would be shared among users via the platform. Access is not limited to the scientists themselves; administrators can determine which data can be presented publicly and which require group membership. Under the auspices of Canada's Sustainable Forest Management Network of Excellence, we have created a moderate-sized database of ecological measurements in forests; we expect to extend the approach to a Quebec lake research network encompassing decades of lake measurements. In this session, we will describe and present four related components of the new system: GeoSearch's globe-based searching and display of scientific data; prefuse-based visualization of social connections among members of a scientific research network; geolocation of research projects using Google Spreadsheets, KML, and Google Earth/Maps; and collaborative construction of a geolocated database of research articles. Each component is designed to have applications for scientists themselves as well as the general public. Although each implementation is in its infancy, we believe they could be useful to other research networks.
Irrigation infrastructure and water appropriation rules for food security
NASA Astrophysics Data System (ADS)
Gohar, Abdelaziz A.; Amer, Saud A.; Ward, Frank A.
2015-01-01
In the developing world's irrigated areas, water management and planning is often motivated by the need for lasting food security. Two important policy measures to address this need are improving the flexibility of water appropriation rules and developing irrigation storage infrastructure. Little research to date has investigated the performance of these two policy measures in a single analysis while maintaining a basin-wide water balance. This paper examines impacts of storage capacity and water appropriation rules on total economic welfare in irrigated agriculture, while maintaining a water balance. The application is to a river basin in northern Afghanistan. A constrained optimization framework is developed to examine economic consequences on food security and farm income resulting from each policy measure. Results show that significant improvements in both policy aims can be achieved through expanding existing storage capacity to capture up to 150 percent of long-term average annual water supplies when added capacity is combined with either a proportional sharing of water shortages or unrestricted water trading. An important contribution of the paper is to show how the benefits of storage and a changed water appropriation system operate under a variable climate. Results show that the hardship of droughts can be substantially lessened, with the largest rewards taking place in the most difficult periods. Findings provide a comprehensive framework for addressing future water scarcity, rural livelihoods, and food security in the developing world's irrigated regions.
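As a toy illustration of the kind of constrained optimization described (not the paper's actual model of the Afghan basin), a minimal linear program can trade seasonal water deliveries off against reservoir carryover; every number and name below is invented.

```python
# Maximize the value of water deliveries across two seasons, subject to
# seasonal inflows and a reservoir-capacity limit on wet-to-dry carryover.
from scipy.optimize import linprog

inflow_wet, inflow_dry = 100.0, 30.0   # assumed seasonal inflows (Mm^3)
capacity = 40.0                        # assumed reservoir capacity (Mm^3)
value_wet, value_dry = 1.0, 2.5        # assumed marginal value of water

# Decision variables: x = [w_wet, w_dry, s] (deliveries, carryover)
c = [-value_wet, -value_dry, 0.0]      # linprog minimizes, so negate benefits
A_ub = [[1, 0, 1],                     # w_wet + s <= inflow_wet
        [0, 1, -1],                    # w_dry     <= inflow_dry + s
        [0, 0, 1]]                     # s         <= capacity
b_ub = [inflow_wet, inflow_dry, capacity]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
w_wet, w_dry, s = res.x                # optimum carries the full 40 Mm^3 over
print(f"wet={w_wet:.0f}, dry={w_dry:.0f}, carryover={s:.0f}")
```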
Methods to assess geological CO2 storage capacity: Status and best practice
Heidug, Wolf; Brennan, Sean T.; Holloway, Sam; Warwick, Peter D.; McCoy, Sean; Yoshimura, Tsukasa
2013-01-01
To understand the emission reduction potential of carbon capture and storage (CCS), decision makers need to understand the amount of CO2 that can be safely stored in the subsurface and the geographical distribution of storage resources. Estimates of storage resources need to be made using reliable and consistent methods. Previous estimates of CO2 storage potential for a range of countries and regions have been based on a variety of methodologies, resulting in a correspondingly wide range of estimates. Consequently, there has been uncertainty about which of the methodologies were most appropriate in given settings, and whether the estimates produced by these methods were useful to policy makers trying to determine the appropriate role of CCS. In 2011, the IEA convened two workshops which brought together experts from six national survey organisations to review CO2 storage assessment methodologies and make recommendations on how to harmonise CO2 storage estimates worldwide. This report presents the findings of these workshops and an internationally shared guideline for quantifying CO2 storage resources.
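For orientation, a widely used volumetric estimate of saline-formation storage resources (the general form used in the US-DOE methodology, one of those compared in such harmonisation efforts) is

$$M_{\mathrm{CO_2}} = A \, h_g \, \phi_{tot} \, \rho_{\mathrm{CO_2}} \, E,$$

where $A$ is the assessed area, $h_g$ the gross formation thickness, $\phi_{tot}$ the total porosity, $\rho_{\mathrm{CO_2}}$ the CO2 density at reservoir conditions, and $E$ a dimensionless storage efficiency factor; much of the spread between earlier estimates traces back to how $E$ is chosen.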
The UARS and open data system concept and analysis study. Executive summary
NASA Technical Reports Server (NTRS)
Mittal, M.; Nebb, J.; Woodward, H.
1983-01-01
Alternative concepts for a common design for the UARS and OPEN Central Data Handling Facility (CDHF) are offered. The designs are consistent with requirements shared by UARS and OPEN and the data storage and data processing demands of these missions. Because more detailed information is available for UARS, the design approach was to size the system and to select components for a UARS CDHF, but in a manner that does not optimize the CDHF at the expense of OPEN. Costs for alternative implementations of the UARS designs are presented showing that the system design does not restrict the implementation to a single manufacturer. Processing demands on the alternative UARS CDHF implementations are discussed. With this information at hand together with estimates for OPEN processing demands, it is shown that any shortfall in system capability for OPEN support can be remedied by either component upgrades or array processing attachments rather than a system redesign.
Application of real-time database to LAMOST control system
NASA Astrophysics Data System (ADS)
Xu, Lingzhe; Xu, Xinqi
2004-09-01
The QNX-based real-time database is one of the main features of the control system of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). It serves as a storage and platform for data flow, recording and updating in a timely manner the status of moving components in the telescope structure as well as the environmental parameters around it, and it joins harmoniously in the administration of the Telescope Control System (TCS). The paper presents the methodology and technical tips used in designing the EMPRESS database GUI software package, such as dynamic creation of control widgets, dynamic query, and shared memory. The seamless connection between EMPRESS and QNX's graphical development tool, the Photon Application Builder (PhAB), has been realized, achieving a Windows look and feel under a Unix-like operating system. In particular, the real-time behaviour of the database is analysed, showing that it satisfies the needs of the control system.
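The shared-memory pattern mentioned, in which one process publishes telescope status that others read without a round trip through the database engine, can be sketched as follows. This Python sketch only illustrates the pattern (the actual system uses QNX shared memory), and the segment name and fields are invented.

```python
import struct
from multiprocessing import shared_memory

STATUS_FMT = "dd"                     # e.g. azimuth, elevation in radians
SIZE = struct.calcsize(STATUS_FMT)

# Publisher: create the segment and write the latest status into it.
shm = shared_memory.SharedMemory(create=True, size=SIZE, name="tcs_status")
struct.pack_into(STATUS_FMT, shm.buf, 0, 1.234, 0.567)

# Reader (normally a separate process): attach by name and read.
view = shared_memory.SharedMemory(name="tcs_status")
az, el = struct.unpack_from(STATUS_FMT, view.buf, 0)
print(f"az={az:.3f} rad, el={el:.3f} rad")

view.close()
shm.close()
shm.unlink()                          # release the segment when finished
```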
NASA Astrophysics Data System (ADS)
Sindrilaru, Elvin A.; Peters, Andreas J.; Adde, Geoffray M.; Duellmann, Dirk
2017-10-01
CERN has been developing and operating EOS as a disk storage solution successfully for over six years. The CERN deployment provides 135 PB and stores 1.2 billion replicas distributed over two computer centres. The deployment includes four LHC instances, a shared instance for smaller experiments and, since last year, an instance for individual user data as well. The user instance represents the backbone of the CERNBOX file-sharing service. New use cases like synchronisation and sharing, the planned migration to reduce AFS usage at CERN, and continuous growth have brought EOS to new challenges. Recent developments include the integration and evaluation of various technologies for the transition from a single active in-memory namespace to a scale-out implementation distributed over many meta-data servers. The new architecture aims to separate the data from the application logic and user interface code, thus providing flexibility and scalability to the namespace component. Another important goal is to provide EOS as a CERN-wide mounted filesystem with strong authentication, making it a single storage repository accessible via various services and front-ends (the /eos initiative). This required new developments in the security infrastructure of the EOS FUSE implementation. Furthermore, there was a series of improvements targeting the end-user experience, such as tighter consistency and latency optimisations. In collaboration with Seagate as an Openlab partner, EOS has a complete integration of an OpenKinetic object drive cluster as a high-throughput, high-availability, low-cost storage solution. This contribution discusses these three main development projects and presents new performance metrics.
Charging system and method for multicell storage batteries
Cox, Jay A.
1978-01-01
A battery-charging system includes a first charging circuit connected in series with a plurality of battery cells for controlled current charging. A second charging circuit applies a controlled voltage across each individual cell for equalization of the cells to the fully charged condition. This controlled voltage is determined at a level above the fully charged open-circuit voltage but at a sufficiently low level to prevent corrosion of cell components by electrochemical reaction. In this second circuit for cell equalization, a transformer primary receives closely regulated, square-wave voltage which is coupled to a plurality of equal secondary coil windings. Each secondary winding is connected in parallel to each cell of a series-connected pair of cells through half-wave rectifiers and a shared, intermediate conductor.
NASA Astrophysics Data System (ADS)
Ramirez Camargo, Luis; Dorner, Wolfgang
2016-04-01
The yearly cumulated technical energy generation potential of grid-connected roof-top photovoltaic power plants is significantly larger than the demand of domestic buildings in sparsely populated municipalities in central Europe. However, an energy balance based on cumulated annual values does not give the right picture of the actual potential for photovoltaics, since these plants run on a highly variable energy source, solar radiation. The mismatch between the periods of generation and demand imposes hard limits on the deployment of the theoretical energy generation potential of roof-top photovoltaics. The actual penetration of roof-top photovoltaics is restricted by the energy quality requirements of the grid and/or the storage capacity available for electricity production beyond covering the buildings' own demand. In this study we evaluate to what extent small-scale storage systems can help increase grid-connected roof-top photovoltaic penetration in domestic buildings at the municipal scale. To accomplish this, we first calculate the total technical roof-top photovoltaic energy generation potential of a municipality at high spatiotemporal resolution using a procedure that relies on geographic information systems. Subsequently, we constrain the set of potential photovoltaic plants to those necessary to cover the total yearly demand of the municipality, assuming that the plants with the highest yearly yield are the ones installed. For this subset of photovoltaic plants we consider five scenarios: 1) no storage; 2) one 7 kWh battery installed in every building with a roof-top photovoltaic plant; 3) one 10 kWh battery installed in every building with a roof-top photovoltaic plant; 4) one 7 kWh battery installed in every domestic building in the municipality; 5) one 10 kWh battery installed in every domestic building in the municipality. Afterwards we evaluate the energy balance of the municipality using a series of indicators: a) total photovoltaic installed capacity; b) total storage installed capacity; c) output variability; d) total unfulfilled demand; e) total excess energy; f) total properly supplied energy; g) the loss of power supply probability; h) the number of hours when supply is higher than the highest demand in a year; i) the number of hours when supply is 1.5 times higher than the highest demand in a year; and j) the additional storage energy capacity and power required to store all excess energy generated by the photovoltaic installations. Comparing these indicators quantifies the contribution that household-sized small-scale storage systems would make to the energy balance of the studied municipality. Increased installed energy storage capacity allows a higher roof-top photovoltaic share and improves the energy utilization, variability and reliability indicators. The proposed methodology also serves to determine the amount of storage capacity with the highest positive impact on the local energy balance.
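As a sketch of how indicators d, e and g fall out of an hourly storage simulation, the loop below walks a year of generation and demand and tallies unmet demand, excess energy, and an energy-based loss of power supply probability; the profiles and battery size are invented stand-ins, not the study's GIS-derived data.

```python
import numpy as np

hours = 24 * 365
pv = 5.0 * np.maximum(0.0, np.sin(np.linspace(0, 2 * np.pi * 365, hours)))  # toy PV profile, kW
demand = np.full(hours, 2.0)                                                # toy flat demand, kW
battery_kwh = 7.0                                                           # e.g. scenario 2: 7 kWh battery

soc, unmet, excess = 0.0, 0.0, 0.0
for g, d in zip(pv, demand):
    balance = g - d                       # hourly surplus (+) or deficit (-)
    if balance >= 0:
        charge = min(balance, battery_kwh - soc)
        soc += charge
        excess += balance - charge        # indicator e: total excess energy
    else:
        discharge = min(-balance, soc)
        soc -= discharge
        unmet += -balance - discharge     # indicator d: total unfulfilled demand

lpsp = unmet / demand.sum()               # indicator g: loss of power supply probability
print(f"unmet={unmet:.0f} kWh  excess={excess:.0f} kWh  LPSP={lpsp:.1%}")
```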
Thermal energy storage for CSP (Concentrating Solar Power)
NASA Astrophysics Data System (ADS)
Py, Xavier; Sadiki, Najim; Olives, Régis; Goetz, Vincent; Falcoz, Quentin
2017-07-01
The major advantage of concentrating solar power (CSP) over photovoltaics is the possibility to store thermal energy at large scale, allowing dispatchability. Only CSP plants that include thermal storage can be operated 24 h/day using the solar resource exclusively. Nevertheless, due to the limited availability of mined nitrate salts, the currently mature two-tank molten-salt technology cannot be scaled to achieve the international share of power production expected for 2050. Alternative storage materials are therefore under study, such as natural rocks and recycled ceramics made from industrial wastes. The present paper is a review of those alternative approaches.
Energy storage at the threshold: Smart mobility and the grid of the future
NASA Astrophysics Data System (ADS)
Crabtree, George
2018-01-01
Energy storage is poised to drive transformations in transportation and the electricity grid that personalize access to mobility and energy services, not unlike the transformation of smart phones that personalized access to people and information. Storage will work with other emerging technologies such as electric vehicles, ride-sharing, self-driving and connected cars in transportation and with renewable generation, distributed energy resources and smart energy management on the grid to create mobility and electricity as services matched to customer needs replacing the conventional one-size-fits-all approach. This survey outlines the prospects, challenges and impacts of the coming mobility and electricity transformations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cambria, Erik; Chattopadhyay, Anupam; Linn, Eike
2017-05-27
Not unlike the concern over diminishing fossil fuel, information technology is bringing its own share of future worries. Here, we chose to look closely into one concern in this paper, namely the limited amount of data storage. By a simple extrapolative analysis, it is shown that we are on the way to exhaust our storage capacity in less than two centuries with current technology and no recycling. This can be taken as a note of caution to expand research initiatives in several directions: firstly, bringing forth innovative data analysis techniques to represent, learn, and aggregate useful knowledge while filtering out noise from data; secondly, tapping into the interplay between storage and computing to minimize storage allocation; thirdly, exploring ingenious solutions to expand storage capacity. Throughout this paper, we delve deeper into the state-of-the-art research and also put forth novel propositions in all of the abovementioned directions, including space- and time-efficient data representation, intelligent data aggregation, in-memory computing, extra-terrestrial storage, and data curation. The main aim of this paper is to raise awareness of the storage limitation we are about to face if current technology is adopted and the storage utilization growth rate persists. In the manuscript, we propose some storage solutions and a better utilization of storage capacity through a global DIKW hierarchy.
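The extrapolation reduces to compound growth against a fixed ceiling; a back-of-the-envelope version with openly invented numbers shows how quickly even a generous physical limit is reached.

```python
data_zb = 16.0        # assumed data in existence today, zettabytes
limit_zb = 1e9        # assumed ceiling on manufacturable storage, zettabytes
growth = 1.25         # assumed 25% annual growth in stored data

years = 0
while data_zb < limit_zb:
    data_zb *= growth
    years += 1
print(f"ceiling reached after ~{years} years")   # ~81 years with these inputs
```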
Information Requirements for Integrating Spatially Discrete, Feature-Based Earth Observations
NASA Astrophysics Data System (ADS)
Horsburgh, J. S.; Aufdenkampe, A. K.; Lehnert, K. A.; Mayorga, E.; Hsu, L.; Song, L.; Zaslavsky, I.; Valentine, D. L.
2014-12-01
Several cyberinfrastructures have emerged for sharing observational data collected at densely sampled and/or highly instrumented field sites. These include the CUAHSI Hydrologic Information System (HIS), the Critical Zone Observatory Integrated Data Management System (CZOData), the Integrated Earth Data Applications (IEDA) and EarthChem system, and the Integrated Ocean Observing System (IOOS). These systems rely on standard data encodings and, in some cases, standard semantics for classes of geoscience data. Their focus is on sharing data on the Internet via web services in domain specific encodings or markup languages. While they have made progress in making data available, it still takes investigators significant effort to discover and access datasets from multiple repositories because of inconsistencies in the way domain systems describe, encode, and share data. Yet, there are many scenarios that require efficient integration of these data types across different domains. For example, understanding a soil profile's geochemical response to extreme weather events requires integration of hydrologic and atmospheric time series with geochemical data from soil samples collected over various depth intervals from soil cores or pits at different positions on a landscape. Integrated access to and analysis of data for such studies are hindered because common characteristics of data, including time, location, provenance, methods, and units are described differently within different systems. Integration requires syntactic and semantic translations that can be manual, error-prone, and lossy. We report information requirements identified as part of our work to define an information model for a broad class of earth science data - i.e., spatially-discrete, feature-based earth observations resulting from in-situ sensors and environmental samples. We sought to answer the question: "What information must accompany observational data for them to be archivable and discoverable within a publication system as well as interpretable once retrieved from such a system for analysis and (re)use?" We also describe development of multiple functional schemas (i.e., physical implementations for data storage, transfer, and archival) for the information model that capture the requirements reported here.
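The requirements named (time, location, provenance, methods, units) amount to a record type along the following lines; the field names are an illustrative sketch, not the published information model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    value: float          # the observed result
    units: str            # e.g. "mg/L"; required for reuse across systems
    variable: str         # what was measured
    timestamp: datetime   # when it was observed
    latitude: float       # where: the sampling feature's location
    longitude: float
    method: str           # how it was measured or analyzed
    provenance: str       # who produced it / processing history

obs = Observation(0.42, "mg/L", "nitrate", datetime(2014, 7, 1, 12, 0),
                  41.33, -111.57, "ion chromatography", "field crew A")
```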
Software for Sharing and Management of Information
NASA Technical Reports Server (NTRS)
Chen, James R.; Wolfe, Shawn R.; Wragg, Stephen D.
2003-01-01
DIAMS is a set of computer programs that implements a system of collaborative agents that serve multiple, geographically distributed users communicating via the Internet. DIAMS provides a user interface as a Java applet that runs on each user's computer and that works within the context of the user's Internet-browser software. DIAMS helps all its users to manage, gain access to, share, and exchange information in databases that they maintain on their computers. One of the DIAMS agents is a personal agent that helps its owner find information most relevant to current needs. It provides software tools and utilities for users to manage their information repositories with dynamic organization and virtual views. Capabilities for generating flexible hierarchical displays are integrated with capabilities for indexed-query searching to support effective access to information. Automatic indexing methods are employed to support users' queries and communication between agents. The catalog of a repository is kept in object-oriented storage to facilitate sharing of information. Collaboration between users is aided by matchmaker agents and by automated exchange of information. The matchmaker agents are designed to establish connections between users who have similar interests and expertise.
General consumer communication tools for improved image management and communication in medicine
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Rosset, Antoine; McCoy, J. Michael
2005-04-01
We elected to explore emerging consumer technologies that can be adopted to improve and facilitate image and data communication in medical and clinical environments. The wide adoption of new communication paradigms such as instant messaging, chatting and direct emailing can be integrated into specific applications. The increasing capacity of portable and handheld devices such as iPod music players offers an attractive alternative for data storage that exceeds the capabilities of traditional offline storage media such as CD or even DVD. We adapted the medical image display and manipulation software OSIRIX to integrate different innovative technologies facilitating communication and data transfer between remote users. We integrated email and instant messaging features into the program, allowing users to instantaneously email an image or a set of images displayed on the screen. Using Apple's iChat instant messaging application, a user can share the content of his screen with a remote correspondent and communicate in real time using voice and video. To provide a convenient mechanism for the exchange of large data sets, the program can store the data in DICOM format on CD or DVD, but it was also extended to use the large storage capacity of iPod hard disks as well as Apple's online storage service ".Mac", to which users can subscribe to benefit from scalable, secure storage accessible from anywhere on the internet. The adoption of these innovative technologies is likely to change the architecture of traditional picture archiving and communication systems and provide more flexible and efficient means of communication.
XML-BSPM: an XML format for storing Body Surface Potential Map recordings.
Bond, Raymond R; Finlay, Dewar D; Nugent, Chris D; Moore, George
2010-05-14
The Body Surface Potential Map (BSPM) is an electrocardiographic method for recording and displaying the electrical activity of the heart from a spatial perspective. The BSPM has been deemed more accurate for assessing certain cardiac pathologies when compared to the 12-lead ECG. Nevertheless, the 12-lead ECG remains the most popular ECG acquisition method for non-invasively assessing the electrical activity of the heart. Although data from the 12-lead ECG can be stored and shared using open formats such as SCP-ECG, no open formats currently exist for storing and sharing the BSPM. As a result, an innovative format for storing BSPM datasets has been developed within this study. The XML vocabulary was chosen for implementation, as opposed to binary, for the purpose of human readability. There are currently no standards to dictate the number of electrodes and electrode positions for recording a BSPM; in fact, there are at least 11 different BSPM electrode configurations in use today. Therefore, in order to support these BSPM variants, the XML-BSPM format was made versatile. Hence, the format supports the storage of custom torso diagrams using SVG graphics. This diagram can then be used in a 2D coordinate system for retaining electrode positions. The XML-BSPM format has been successfully used to store the Kornreich-117 BSPM dataset and the Lux-192 BSPM dataset. The resulting file sizes were in the region of 277 kilobytes for each BSPM recording and can be deemed suitable, for example, for use with any telemonitoring application. Moreover, there is potential for file sizes to be further reduced using basic compression algorithms, e.g., the deflate algorithm. Finally, these BSPM files have been parsed and visualised within a convenient time period using a web-based BSPM viewer. This format, if widely adopted, could promote BSPM interoperability, knowledge sharing and data mining. This work could also be used to provide conceptual solutions and inspire existing formats such as DICOM, SCP-ECG and aECG to support the storage of BSPMs. In summary, this research provides initial groundwork for creating a complete BSPM management system.
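A hypothetical fragment in the spirit of XML-BSPM, assembling the ingredients the abstract describes (an SVG torso diagram, electrode positions in its 2D coordinate system, and per-electrode signals); the element names here are invented for illustration and are not the published schema.

```python
import xml.etree.ElementTree as ET

root = ET.Element("bspm", version="1.0")
ET.SubElement(root, "torsoDiagram", format="svg", href="torso.svg")
electrodes = ET.SubElement(root, "electrodes", configuration="Lux-192")
ET.SubElement(electrodes, "electrode", id="1", x="120", y="85")  # 2D position on the diagram
signal = ET.SubElement(root, "signal", electrode="1",
                       samplingRate="500", units="uV")
signal.text = "12 15 14 9 -3 -11"     # a few space-separated samples

print(ET.tostring(root, encoding="unicode"))
```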
Design of the transfer line from booster to storage ring at 3 GeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayar, C., E-mail: cafer.bayar@cern.ch; Ciftci, A. K., E-mail: abbas.kenan.ciftci@cern.ch
The Synchrotron Booster Ring accelerates the e-beam up to 3 GeV, and particles are transported from booster to storage ring by a transfer line. In this study, two options are considered: the first is a long booster that shares the same tunnel with the storage ring, and the second is a compact booster. As a result, two transfer lines are designed, one for each booster option. The optical design is constrained by the e-beam Twiss parameters entering and leaving the transfer line. Twiss parameters at the extraction point of the booster are used for the entrance of the transfer line and are matched at the exit of the transfer line to the injection point of the storage ring.
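The matching described rests on the standard transport of Twiss parameters through the line's transfer matrix $M=(m_{ij})$:

$$\begin{aligned}
\beta_2 &= m_{11}^2\,\beta_1 - 2\,m_{11}m_{12}\,\alpha_1 + m_{12}^2\,\gamma_1,\\
\alpha_2 &= -m_{11}m_{21}\,\beta_1 + \left(m_{11}m_{22} + m_{12}m_{21}\right)\alpha_1 - m_{12}m_{22}\,\gamma_1,\\
\gamma_2 &= m_{21}^2\,\beta_1 - 2\,m_{21}m_{22}\,\alpha_1 + m_{22}^2\,\gamma_1,
\end{aligned}$$

with $\gamma = (1+\alpha^2)/\beta$. The magnet strengths along the transfer line are tuned so that $(\beta_2, \alpha_2)$ at the exit equal the storage ring's values at the injection point.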
Cost analysis of concepts for a demand oriented biogas supply for flexible power generation.
Hahn, Henning; Ganagin, Waldemar; Hartmann, Kilian; Wachendorf, Michael
2014-10-01
With the share of intermittent renewable energies within the electricity system rising, balancing services from dispatchable power plants are of increasing importance. Given the need to keep fuel costs for flexible power generation to a minimum, the study aims to identify favourable biogas plant configurations for supplying biogas on demand. A cost analysis of five configurations based on biogas storage and flexible biogas production concepts has been carried out. Results show that the additional flexibility costs for a biogas supply of 8 h per day range between €2 and €11 per MWh, and for a 72 h period without biogas demand from €9 to €19 per MWh. While biogas storage concepts were identified as favourable short-term supply configurations, flexible biogas production concepts profit from reduced storage requirements at plants with large biogas production capacities or for periods of several hours without biogas demand.
Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun
2012-01-01
Next-generation sequencing (NGS) is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis, built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network-attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.
Advanced control for ground source heat pump systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Patrick; Gehl, Anthony C.; Liu, Xiaobing
Ground source heat pumps (GSHP), also known as geothermal heat pumps (GHP), are proven advanced HVAC systems that utilize clean and renewable geothermal energy, as well as the massive thermal storage capacity of the ground, to provide space conditioning and water heating for both residential and commercial buildings. GSHPs have higher energy efficiencies than conventional HVAC systems. It is estimated that if GSHPs achieved a 10% market share in the US, 0.6 quad Btu of primary energy consumption could be saved and 36 million tons of carbon emissions avoided each year (Liu et al. 2017). However, the current market share of GSHPs is less than 1%. The foremost barrier preventing wider adoption of GSHPs is their high installation cost. To enable wider adoption of GSHPs, the cost-effectiveness of GSHP applications must be improved.
Bio and health informatics meets cloud : BioVLab as an example.
Chae, Heejoon; Jung, Inuk; Lee, Hyungro; Marru, Suresh; Lee, Seong-Whan; Kim, Sun
2013-01-01
The exponential increase of genomic data brought by the advent of next- and third-generation sequencing (NGS) technologies and the dramatic drop in sequencing cost have turned biological and medical sciences into data-driven sciences. This revolutionary paradigm shift comes with challenges in terms of data transfer, storage, computation, and analysis of big bio/medical data. Cloud computing is a service model sharing a pool of configurable resources, which is a suitable workbench to address these challenges. From the medical or biological perspective, providing computing power and storage is the most attractive feature of cloud computing in handling the ever-increasing biological data. As data increase in size, many research organizations start to experience a lack of computing power, which becomes a major hurdle in achieving research goals. In this paper, we review the features of publicly available bio and health cloud systems in terms of graphical user interface, external data integration, security and extensibility of features. We then discuss issues and limitations of current cloud systems and conclude with a suggestion of a biological cloud environment concept, which can be defined as a total workbench environment assembling computational tools and databases for analyzing bio/medical big data in particular application domains.
Complementing hydropower with PV and wind: optimal energy mix in a fully renewable Switzerland
NASA Astrophysics Data System (ADS)
Dujardin, Jérôme; Kahl, Annelen; Kruyt, Bert; Lehning, Michael
2017-04-01
Like several other countries, Switzerland plans to phase out its nuclear power production and will replace most or all of it with renewables. Switzerland benefits from a large hydropower potential and has already exploited almost all of it. Currently about 60% of Swiss electricity consumption is covered by hydropower, which will eventually leave a gap of about 40% to the other renewables, mainly photovoltaics (PV) and wind. With its high flexibility, storage hydropower will play a major role in the future energy mix, providing valuable power and energy balance. Our work focuses on the interplay between PV, wind and storage hydropower, to analyze the dynamics of this complex system and to identify the best PV-wind mixing ratio. Given the current electricity consumption and the currently installed pumping capacity of the storage hydropower plants, it appears that the Swiss hydropower system can completely alleviate the intermittency of PV and wind. Some seasonal mismatch between production and demand will remain, but we show that oversizing the production from PV and wind or enlarging the reservoir capacity can keep it to an acceptable level or even eliminate it. We found that PV, wind and hydropower perform best together when the share of PV in the solar-wind mix is between 20 and 60%. These findings are quantitatively specific to Switzerland but qualitatively transferable to similar mountainous environments with abundant hydropower resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, C.W.; Giraud, K.M.
Newcomer countries expected to develop new nuclear power programs by 2030 are being encouraged by the International Atomic Energy Agency to explore the use of shared facilities for spent fuel storage and geologic disposal. Multinational underground nuclear parks (M-UNPs) are an option for sharing such facilities. Newcomer countries with suitable bedrock conditions could volunteer to host M-UNPs. M-UNPs would include back-end fuel cycle facilities, in open or closed fuel cycle configurations, with sufficient capacity to enable M-UNP host countries to provide for-fee waste management services to partner countries, and to manage waste from the M-UNP power reactors. M-UNP potential advantages include: the option for decades of spent fuel storage; fuel-cycle policy flexibility; increased proliferation resistance; high margin of physical security against attack; and high margin of containment capability in the event of beyond-design-basis accidents, thereby reducing the risk of Fukushima-like radiological contamination of surface lands. A hypothetical M-UNP in crystalline rock with facilities for small modular reactors, spent fuel storage, reprocessing, and geologic disposal is described using a room-and-pillar reference-design cavern. Underground construction cost is judged tractable through use of modern excavation technology and careful site selection.
Minimum information required for a DMET experiment reporting.
Kumuthini, Judit; Mbiyavanga, Mamana; Chimusa, Emile R; Pathak, Jyotishman; Somervuo, Panu; Van Schaik, Ron Hn; Dolzan, Vita; Mizzi, Clint; Kalideen, Kusha; Ramesar, Raj S; Macek, Milan; Patrinos, George P; Squassina, Alessio
2016-09-01
This work aims to provide pharmacogenomics reporting guidelines and the information and tools required for reporting to public omic databases. For effective DMET data interpretation, sharing, interoperability, reproducibility and reporting, we propose the Minimum Information required for a DMET Experiment (MIDE) reporting guideline. MIDE provides reporting guidelines and describes the information required for reporting, data storage and data sharing in the form of XML. The MIDE guidelines will benefit the scientific community with pharmacogenomics experiments, including reporting pharmacogenomics data from other technology platforms, with tools that will ease and automate the generation of such reports using the standardized MIDE XML schema, facilitating the sharing, dissemination and reanalysis of datasets through accessible and transparent pharmacogenomics data reporting.
Optimizing carbon storage and biodiversity protection in tropical agricultural landscapes.
Gilroy, James J; Woodcock, Paul; Edwards, Felicity A; Wheeler, Charlotte; Medina Uribe, Claudia A; Haugaasen, Torbjørn; Edwards, David P
2014-07-01
With the rapidly expanding ecological footprint of agriculture, the design of farmed landscapes will play an increasingly important role for both carbon storage and biodiversity protection. Carbon and biodiversity can be enhanced by integrating natural habitats into agricultural lands, but a key question is whether benefits are maximized by including many small features throughout the landscape ('land-sharing' agriculture) or a few large contiguous blocks alongside intensive farmland ('land-sparing' agriculture). In this study, we are the first to integrate carbon storage alongside multi-taxa biodiversity assessments to compare land-sparing and land-sharing frameworks. We do so by sampling carbon stocks and biodiversity (birds and dung beetles) in landscapes containing agriculture and forest within the Colombian Chocó-Andes, a zone of high global conservation priority. We show that woodland fragments embedded within a matrix of cattle pasture hold less carbon per unit area than contiguous primary or advanced secondary forests (>15 years). Farmland sites also support less diverse bird and dung beetle communities than contiguous forests, even when farmland retains high levels of woodland habitat cover. Landscape simulations based on these data suggest that land-sparing strategies would be more beneficial for both carbon storage and biodiversity than land-sharing strategies across a range of production levels. Biodiversity benefits of land-sparing are predicted to be similar whether spared lands protect primary or advanced secondary forests, owing to the close similarity of bird and dung beetle communities between the two forest classes. Land-sparing schemes that encourage the protection and regeneration of natural forest blocks thus provide a synergy between carbon and biodiversity conservation, and represent a promising strategy for reducing the negative impacts of agriculture on tropical ecosystems. However, further studies examining a wider range of ecosystem services will be necessary to fully understand the links between land-allocation strategies and long-term ecosystem service provision.
Parallel-Vector Algorithm For Rapid Structural Anlysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
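For reference, the skyline scheme being improved upon stores each column of a symmetric matrix from its first nonzero row down to the diagonal in a single flat array with column pointers; per the abstract, the variable-band scheme reorganizes this layout so inner loops run over longer, more regular vectors. The following toy Python sketch of skyline storage is illustrative only, not the algorithm of the paper.

```python
import numpy as np

def to_skyline(a: np.ndarray):
    """Pack the upper triangle of symmetric `a` in skyline (profile) form."""
    n = a.shape[0]
    values, col_ptr = [], [0]
    for j in range(n):
        first = next(i for i in range(j + 1) if a[i, j] != 0 or i == j)
        values.extend(a[first:j + 1, j])   # column j, skyline row to diagonal
        col_ptr.append(len(values))
    return np.array(values), col_ptr

a = np.array([[4., 1., 0.],
              [1., 5., 2.],
              [0., 2., 6.]])
vals, ptr = to_skyline(a)   # vals = [4, 1, 5, 2, 6]; ptr = [0, 1, 3, 5]
```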
A Workflow-based Intelligent Network Data Movement Advisor with End-to-end Performance Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Michelle M.; Wu, Chase Q.
2013-11-07
Next-generation eScience applications often generate large amounts of simulation, experimental, or observational data that must be shared and managed by collaborative organizations. Advanced networking technologies and services have been rapidly developed and deployed to facilitate such massive data transfer. However, these technologies and services have not been fully utilized, mainly because their use typically requires significant domain knowledge and in many cases application users are not even aware of their existence. By leveraging the functionalities of an existing Network-Aware Data Movement Advisor (NADMA) utility, we propose a new Workflow-based Intelligent Network Data Movement Advisor (WINDMA) with end-to-end performance optimization for this DOE-funded project. This WINDMA system integrates three major components: resource discovery, data movement, and status monitoring, and supports the sharing of common data movement workflows through account and database management. The system provides a web interface and interacts with existing data/space management and discovery services such as Storage Resource Management, transport methods such as GridFTP and GlobusOnline, and network resource provisioning brokers such as ION and OSCARS. We demonstrate the efficacy of the proposed transport-support workflow system in several use cases based on its implementation and deployment in DOE wide-area networks.
NASA Technical Reports Server (NTRS)
Easley, W. C.; Tanguy, J. S.
1986-01-01
An upgrade of the transport systems research vehicle (TSRV) experimental flight system retained the original monochrome display system. The original host computer was replaced with a Norden 11/70, and a new digital autonomous terminal access communication (DATAC) data bus was installed for data transfer between the display system and host, so a new data interface method was required. The new display data interface uses four split phase bipolar (SPBP) serial busses. The DATAC bus uses a shared interface RAM (SIR) for intermediate storage of its data transfer. A display interface unit (DIU) was designed and configured to read from and write to the SIR to properly convert the data from parallel to SPBP serial and vice versa. It is found that separation of data for use by each SPBP bus and synchronization of data transfer throughout the entire experimental flight system are major problems requiring solution in the DIU design. The techniques used to accomplish these new data interface requirements are described.
Optimisation of the Management of Higher Activity Waste in the UK - 13537
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Ciara; Buckley, Matthew
2013-07-01
The Upstream Optioneering project was created in the Nuclear Decommissioning Authority (UK) to support the development and implementation of significant opportunities to optimise activities across all phases of the Higher Activity Waste management life cycle (i.e. retrieval, characterisation, conditioning, packaging, storage, transport and disposal). The objective of the Upstream Optioneering project is to work in conjunction with other functions within the NDA and the waste producers to identify and deliver solutions to optimise the management of higher activity waste. Historically, optimisation may have occurred on individual aspects of the waste life cycle (considered here to include retrieval, conditioning, treatment, packaging, interim storage, and transport to the final end state, which may be geological disposal). By considering the waste life cycle as a whole, critical analysis of assumed constraints may lead to cost savings for the UK tax payer. For example, it may be possible to challenge the requirements for packaging wastes for disposal to deliver an optimised waste life cycle. It is likely that the challenges faced in the UK are shared in other countries. It is therefore likely that the opportunities identified may also apply elsewhere, with the potential for sharing information to enable value to be shared.
Incorporating Brokers within Collaboration Environments
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; de Torcy, A.
2013-12-01
A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assemble the required input files, automate the execution of the workflow, automatically track the provenance of the workflow, and share the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
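Stripped of the iRODS specifics, the first mechanism (a per-resource driver behind a uniform name space) is essentially a protocol registry; the Python sketch below is a generic illustration of that pattern, not the iRODS API, and all names in it are invented.

```python
from typing import Callable

DRIVERS: dict[str, Callable[[str], bytes]] = {}

def register(scheme: str):
    """Decorator that installs a driver for one access protocol."""
    def wrap(fn: Callable[[str], bytes]) -> Callable[[str], bytes]:
        DRIVERS[scheme] = fn
        return fn
    return wrap

@register("posix")
def fetch_posix(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()

@register("thredds")
def fetch_thredds(path: str) -> bytes:
    # Would call the remote subsetting service; stubbed for illustration.
    return b"...NetCDF subset..."

def get(uri: str) -> bytes:
    """Uniform access: the scheme picks the broker, the caller never sees it."""
    scheme, _, path = uri.partition("://")
    return DRIVERS[scheme](path)
```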
Vaccarino, Anthony L; Dharsee, Moyez; Strother, Stephen; Aldridge, Don; Arnott, Stephen R; Behan, Brendan; Dafnas, Costas; Dong, Fan; Edgecombe, Kenneth; El-Badrawi, Rachad; El-Emam, Khaled; Gee, Tom; Evans, Susan G; Javadi, Mojib; Jeanson, Francis; Lefaivre, Shannon; Lutz, Kristen; MacPhee, F Chris; Mikkelsen, Jordan; Mikkelsen, Tom; Mirotchnick, Nicholas; Schmah, Tanya; Studzinski, Christa M; Stuss, Donald T; Theriault, Elizabeth; Evans, Kenneth R
2018-01-01
Historically, research databases have existed in isolation with no practical avenue for sharing or pooling medical data into high dimensional datasets that can be efficiently compared across databases. To address this challenge, the Ontario Brain Institute's "Brain-CODE" is a large-scale neuroinformatics platform designed to support the collection, storage, federation, sharing and analysis of different data types across several brain disorders, as a means to understand common underlying causes of brain dysfunction and develop novel approaches to treatment. By providing researchers access to aggregated datasets that they otherwise could not obtain independently, Brain-CODE incentivizes data sharing and collaboration and facilitates analyses both within and across disorders and across a wide array of data types, including clinical, neuroimaging and molecular. The Brain-CODE system architecture provides the technical capabilities to support (1) consolidated data management to securely capture, monitor and curate data, (2) privacy and security best-practices, and (3) interoperable and extensible systems that support harmonization, integration, and query across diverse data modalities and linkages to external data sources. Brain-CODE currently supports collaborative research networks focused on various brain conditions, including neurodevelopmental disorders, cerebral palsy, neurodegenerative diseases, epilepsy and mood disorders. These programs are generating large volumes of data that are integrated within Brain-CODE to support scientific inquiry and analytics across multiple brain disorders and modalities. By providing access to very large datasets on patients with different brain disorders and enabling linkages to provincial, national and international databases, Brain-CODE will help to generate new hypotheses about the biological bases of brain disorders, and ultimately promote new discoveries to improve patient care.
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments
NASA Astrophysics Data System (ADS)
Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.
2015-09-01
AstroCloud is a cyberinfrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source software platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across the mainland of China. Users can store and analyse data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.
Hernando, M Elena; Pascual, Mario; Salvador, Carlos H; García-Sáez, Gema; Rodríguez-Herrero, Agustín; Martínez-Sarriegui, Iñaki; Gómez, Enrique J
2008-09-01
The growing availability of continuous data from medical devices in diabetes management makes it crucial to define novel information technology architectures for efficient data storage, data transmission, and data visualization. The new paradigm of care demands the sharing of information in interoperable systems as the only way to support patient care in a continuum of care scenario. The technological platforms should support all the services required by the actors involved in the care process, located in different scenarios and managing diverse information for different purposes. This article presents basic criteria for defining flexible and adaptive architectures that are capable of interoperating with external systems, and integrating medical devices and decision support tools to extract all the relevant knowledge to support diabetes care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huertas-Hernando, Daniel; Farahmand, Hossein; Holttinen, Hannele
2016-06-20
Hydro power is one of the most flexible sources of electricity production. Power systems with considerable amounts of flexible hydro power potentially offer easier integration of variable generation, e.g., wind and solar. However, there exist operational constraints to ensure mid-/long-term security of supply while keeping river flows and reservoir levels within permitted limits. In order to properly assess the effective available hydro power flexibility and its value for storage, a detailed assessment of hydro power is essential. Due to the inherent uncertainty of the weather-dependent hydrological cycle, regulation constraints on the hydro system, and uncertainty of internal load as well as variable generation (wind and solar), this assessment is complex. Hence, it requires proper modeling of all the underlying interactions between hydro power and the power system, with a large share of other variable renewables. A summary of existing experience of wind integration in hydro-dominated power systems clearly points to strict simulation methodologies. Recommendations include requirements for techno-economic models to correctly assess strategies for hydro power and pumped storage dispatch. These models are based not only on seasonal water inflow variations but also on variable generation, and all these are in time horizons from very short term up to multiple years, depending on the studied system. Another important recommendation is to include a geographically detailed description of hydro power systems, rivers' flows, and reservoirs as well as grid topology and congestion.
NREL Testing Erigo's and EaglePicher's Microgrid Energy Storage System
NREL researchers are testing a microgrid energy storage system that contains three independently controllable energy storage technologies.
Patterns of Storage, Use, and Disposal of Opioids Among Cancer Outpatients
de la Cruz, Maxine; Rodriguez, Eden Mae; Thames, Jessica; Wu, Jimin; Chisholm, Gary; Liu, Diane; Frisbee-Hume, Susan; Yennurajalingam, Sriram; Hui, David; Cantu, Hilda; Marin, Alejandra; Gayle, Vicki; Shinn, Nancy; Xu, Angela; Williams, Janet; Bruera, Eduardo
2014-01-01
Purpose. Improper storage, use, and disposal of prescribed opioids can lead to diversion or accidental poisoning. Our objective was to determine the patterns of storage, utilization, and disposal of opioids among cancer outpatients. Patients and Methods. We surveyed 300 adult cancer outpatients receiving opioids in our supportive care center and collected information regarding opioid use, storage, and disposal, along with scores on the CAGE (cut down, annoyed, guilty, eye-opener) alcoholism screening questionnaire. Unsafe use was defined as sharing or losing opioids; unsafe storage was defined as storing opioids in plain sight. Results. The median age was 57 years. CAGE was positive in 58 of 300 patients (19%), and 26 (9%) had a history of illicit drug use. Fifty-six (19%) stored opioids in plain sight, 208 (69%) kept opioids hidden but unlocked, and only 28 (9%) locked their opioids. CAGE-positive patients (p = .007) and those with a history of illicit drug use (p = .0002) or smoking (p = .03) were more likely to lock their opioids. Seventy-eight (26%) reported unsafe use by sharing (9%) or losing (17%) their opioids. Patients who were never married or single (odds ratio: 2.92; 95% confidence interval: 1.48–5.77; p = .006), were CAGE positive (40% vs. 21%; p = .003), or had a history of illicit drug use (42% vs. 23%; p = .031) were more likely to use opioids unsafely. Overall, 223 of 300 patients (74%) were unaware of proper opioid disposal methods, and 138 (46%) had unused opioids at home. Conclusion. A large proportion of cancer patients improperly and unsafely use, store, and dispose of opioids, highlighting the need for establishment of easily accessed patient education and drug take-back programs. PMID:24868100
Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila
2015-11-01
Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage requirements to participate in sophisticated analyses based on federated research networks.
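To make the federated idea concrete, the following is a minimal sketch of a distributed iterative multivariate model, assuming a simple logistic regression fitted by gradient ascent; all names are invented for illustration and this is not SCANNER's actual API. Each site computes aggregate gradients on its own patient-level data, and only those aggregates cross institutional boundaries.

import numpy as np

def local_gradient(X, y, beta):
    """Gradient of the logistic log-likelihood on one site's local data."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (y - p)

def federated_fit(sites, n_features, lr=0.1, iters=200):
    beta = np.zeros(n_features)
    for _ in range(iters):
        # Sites exchange only aggregate gradients, never patient rows.
        grad = sum(local_gradient(X, y, beta) for X, y in sites)
        beta += lr * grad / sum(len(y) for _, y in sites)
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_beta = np.array([1.0, -2.0, 0.5])
    sites = []
    for _ in range(3):  # three participating institutions
        X = rng.normal(size=(500, 3))
        y = (rng.random(500) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
        sites.append((X, y))
    print(federated_fit(sites, 3))  # approaches true_beta without pooling data

The same pattern extends to other iterative estimators: any model whose update step depends on the data only through site-level sums can be estimated this way.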
Winter electricity supply and seasonal storage deficit in the Swiss Alps
NASA Astrophysics Data System (ADS)
Manso, Pedro; Monay, Blaise; Dujardin, Jérôme; Schaefli, Bettina; Schleiss, Anton
2017-04-01
Swiss electricity production relies on hydropower for about 60% of supply, with most of the remainder coming from nuclear power plants. The ongoing energy transition foresees an increase in renewable electricity production of solar photovoltaic, wind and geothermal origin to replace part of the nuclear production; hydropower, in its several forms, will continue to provide the backbone and the guarantee of the instantaneous and permanent stability of the electric system. A key element of any future electricity mix with higher shares of intermittent energy sources like wind and solar is fast energy storage and energy deployment. Hydropower schemes with pumping capabilities are eligible for storage at different time scales, whereas high-head storage hydropower schemes already play a cornerstone role in today's grid operation. These hydropower storage schemes have also been performing what can be labelled "seasonal energy storage" to different extents, storing abundant flows in the wet season (summer) to produce electricity in the dry (winter) alpine season. Some of the existing reservoirs are, however, undersized with regard to the available water inflows and either spill over or operate as "run-of-the-river", which is economically suboptimal. Their role in seasonal energy transfer could grow through increases in storage capacity (by dam heightening or by new storage dams in the same catchment). Inversely, other reservoirs that already store most of the wet-season inflow might not fill up in the future if inflows decrease due to climate change; these reservoirs might then have extra storage capacity available to store energy from sources like solar and wind, if water pumping capacity is added or increased. The present work presents a comprehensive methodology for identifying the seasonal storage deficit per catchment under today's and future hydrological conditions with climate change, applied to several landmark case studies in Switzerland. In some cases, additional storage would allow mitigating negative impacts of climate change. In one of the tested cases the decrease in inflows is such that the reservoir will not fill up in the future; this reservoir will become a priority location for pumping capacity increases, for short-term or seasonal storage of excess solar/wind energy. Considering that the present average rate of glacier mass loss at the country scale is equivalent to one Grande Dixence reservoir per year (the largest Swiss reservoir, approx. 380 hm3), increasing artificial water storage might become mandatory to maintain the same level of security of electricity supply in the future.
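A toy calculation illustrates the kind of seasonal storage deficit the paper quantifies: compare the wet-season inflow volume of a catchment against the reservoir's live storage and treat the unstorable remainder as the deficit. The monthly inflows and the storage figure below are invented for demonstration, not taken from the study.

# Seasonal storage deficit: the share of wet-season inflow a reservoir
# cannot carry over into winter because its live storage is too small.
# Volumes in hm3; all numbers are illustrative.
monthly_inflow = [10, 12, 30, 80, 120, 150, 140, 100, 60, 30, 15, 10]
wet_months = range(3, 9)          # April-September (0-indexed months 3..8)
live_storage = 380                # a Grande-Dixence-sized reservoir

wet_inflow = sum(monthly_inflow[m] for m in wet_months)
deficit = max(0, wet_inflow - live_storage)
print(f"wet-season inflow: {wet_inflow} hm3")
print(f"seasonal storage deficit: {deficit} hm3 "
      f"({100 * deficit / wet_inflow:.0f}% of wet-season inflow)")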
The SERI solar energy storage program
NASA Technical Reports Server (NTRS)
Copeland, R. J.; Wright, J. D.; Wyman, C. E.
1980-01-01
In support of the Thermal and Chemical Energy Storage Program of the DOE Division of Energy Storage Systems, the SERI solar energy storage program provides research on advanced technologies, systems analyses, and assessments of thermal energy storage for solar applications. Currently, research is in progress on direct contact latent heat storage and on thermochemical energy storage and transport. Systems analyses are being performed of thermal energy storage for solar thermal applications, and surveys and assessments are being prepared of thermal energy storage in solar applications. A ranking methodology for comparing thermal storage systems (performance and cost) is presented. Research in latent heat storage and thermochemical storage and transport is reported.
Eighteen Years of Safe Storage and Counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moren, Richard J.; Morton, M.
2016-03-28
The purpose of this paper is to share the status and condition of the six reactor buildings at the Hanford Site in Washington State that are in this SAFSTOR condition (for between 4 and 18 years as of summer of 2016).
Silvestre, Julio; Reddy, Akhila; de la Cruz, Maxine; Wu, Jimin; Liu, Diane; Bruera, Eduardo; Todd, Knox H
2017-12-01
Approximately 75% of prescription opioid abusers obtain the drug from an acquaintance, which may be a consequence of improper opioid storage, use, disposal, and lack of patient education. We aimed to determine the opioid storage, use, and disposal patterns in patients presenting to the emergency department (ED) of a comprehensive cancer center. We surveyed 113 patients receiving opioids for at least 2 months upon presenting to the ED and collected information regarding opioid use, storage, and disposal. Unsafe storage was defined as storing opioids in plain sight, and unsafe use was defined as sharing or losing opioids. The median age was 53 years, 55% were female, 64% were white, and 86% had advanced cancer. Of those surveyed, 36% stored opioids in plain sight, 53% kept them hidden but unlocked, and only 15% locked their opioids. However, 73% agreed that they would use a lockbox if given one. Patients who reported that others had asked them for their pain medications (p = 0.004) and those who would use a lockbox if given one (p = 0.019) were more likely to keep them locked. Some 13 patients (12%) used opioids unsafely by either sharing (5%) or losing (8%) them. Patients who reported being prescribed more pain pills than required (p = 0.032) were more likely to practice unsafe use. Most (78%) were unaware of proper opioid disposal methods, 6% believed they were prescribed more medication than required, and 67% had unused opioids at home. Only 13% previously received education about safe disposal of opioids. Overall, 77% (87) of patients reported unsafe storage, unsafe use, or possessed unused opioids at home. Many cancer patients presenting to the ED improperly and unsafely store, use, or dispose of opioids, thus highlighting a need to investigate the impact of patient education on such practices.
Name It! Store It! Protect It!: A Systems Approach to Managing Data in Research Core Facilities.
DeVries, Matthew; Fenchel, Matthew; Fogarty, R E; Kim, Byong-Do; Timmons, Daniel; White, A Nicole
2017-12-01
As the capabilities of technology increase, so do the production of data and the need for data management. The need for data storage at many academic institutions is increasing exponentially. Technology is expanding rapidly, and institutions are recognizing that data management able to support future data sharing is a critical component of institutional services. Establishing a process to manage the surge in data storage is complex and often hindered by the lack of a plan. Simple file naming, or nomenclature, is also becoming ever more important for conveying an established understanding of a file's contents, especially as research projects and personnel turn over. Consistent indexing of files also helps to identify past work. Finally, protecting the contents of data is becoming increasingly challenging. As the genomic field expands and medicine becomes more personalized, methods to protect the contents of data in both short- and long-term storage need to be established so as not to risk revealing identifiable information; this is often something we do not consider in a nonclinical research environment. Establishing basic guidelines is critical for institutions, as individual research laboratories are unable to handle the scope of data storage required for their own research. In addition to the immediate need for guidelines on data storage, file naming, and information protection, specialized support for data management in research cores and laboratories is becoming a critical component of institutional services. Here, we outline some case studies and methods that you may be able to adopt at your own institution.
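As a small illustration of the "Name It!" idea, here is a sketch of a structured file-naming helper that encodes project, instrument, sample and date into a sortable, self-describing name. The schema and function names are assumptions for demonstration, not the authors' published standard.

from datetime import date
import re

def core_filename(project: str, instrument: str, sample: str,
                  run_date: date, ext: str) -> str:
    def clean(tok: str) -> str:
        # Restrict tokens to alphanumerics and dashes so names stay portable.
        return re.sub(r"[^A-Za-z0-9-]+", "-", tok.strip())
    return "_".join([clean(project), clean(instrument), clean(sample),
                     run_date.strftime("%Y%m%d")]) + f".{ext}"

print(core_filename("ProjX", "confocal 2", "liver s01",
                    date(2017, 12, 1), "tif"))
# -> ProjX_confocal-2_liver-s01_20171201.tif

Because the date is zero-padded and the separators are fixed, a plain alphabetical directory listing doubles as a chronological index of past work.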
Proceedings of the DOE chemical energy storage and hydrogen energy systems contracts review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Sessions were held on electrolysis-based hydrogen storage systems, hydrogen production, hydrogen storage systems, hydrogen storage materials, end-use applications and system studies, chemical heat pump/chemical energy storage systems, systems studies and assessment, thermochemical hydrogen production cycles, advanced production concepts, and containment materials. (LHK)
Spacecraft cryogenic gas storage systems
NASA Technical Reports Server (NTRS)
Rysavy, G.
1971-01-01
Cryogenic gas storage systems were developed for the liquid storage of oxygen, hydrogen, nitrogen, and helium. Cryogenic storage is attractive because of the high liquid density and low storage pressure of cryogens. This situation results in smaller container sizes, reduced container-strength levels, and lower tankage weights. The Gemini and Apollo spacecraft used cryogenic gas storage systems as standard spacecraft equipment. In addition to the Gemini and Apollo cryogenic gas storage systems, other systems were developed and tested in the course of advancing the state of the art. All of the cryogenic storage systems used, developed, and tested to date for manned-spacecraft applications are described.
Applications integration in a hybrid cloud computing environment: modelling and platform
NASA Astrophysics Data System (ADS)
Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang
2013-08-01
With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds as well as their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds with intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed, to improve the feasibility of ISs under hybrid cloud computing environments.
Unlocking the potential of smart grid technologies with behavioral science
Sintov, Nicole D.; Schultz, P. Wesley
2015-01-01
Smart grid systems aim to provide a more stable and adaptable electricity infrastructure, and to maximize energy efficiency. Grid-linked technologies vary widely in form and function, but generally share common potentials: to reduce energy consumption via efficiency and/or curtailment, to shift use to off-peak times of day, and to enable distributed storage and generation options. Although end users are central players in these systems, they are sometimes not central considerations in technology or program design, and in some cases, their motivations for participating in such systems are not fully appreciated. Behavioral science can be instrumental in engaging end-users and maximizing the impact of smart grid technologies. In this paper, we present emerging technologies made possible by a smart grid infrastructure, and for each we highlight ways in which behavioral science can be applied to enhance their impact on energy savings. PMID:25914666
Lessons Learned in Deploying the World s Largest Scale Lustre File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillow, David A; Fuller, Douglas; Wang, Feiyi
2010-01-01
The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest-scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed those of our previously deployed systems by factors of 6x (240 GB/sec) and 17x (10 petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service, alongside our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.
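The quoted scaling factors implicitly describe the previous generation of OLCF storage; a quick back-of-envelope check recovers those figures (treating the "nearly 4x" client factor as exactly 4x for simplicity):

# Implied size of the previously deployed systems, from Spider's specs
# and the quoted scaling factors. The 4x client factor is approximate.
spider = {"bandwidth_GBps": 240, "capacity_PB": 10, "clients": 26000}
factor = {"bandwidth_GBps": 6, "capacity_PB": 17, "clients": 4}
for key, value in spider.items():
    print(f"previous {key}: ~{value / factor[key]:.1f}")
# -> roughly 40 GB/s, 0.6 PB, and 6500 clients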
Stand-alone digital data storage control system including user control interface
NASA Technical Reports Server (NTRS)
Wright, Kenneth D. (Inventor); Gray, David L. (Inventor)
1994-01-01
A storage control system includes an apparatus and method for user control of a storage interface to operate a storage medium to store data obtained by a real-time data acquisition system. Digital data received in serial format from the data acquisition system is first converted to a parallel format and then provided to the storage interface. The operation of the storage interface is controlled in accordance with instructions based on user control input from a user. Also, a user status output is displayed in accordance with storage data obtained from the storage interface. By allowing the user to control and monitor the operation of the storage interface, a stand-alone, user-controllable data storage system is provided for storing the digital data obtained by a real-time data acquisition system.
Overvoorde, P J; Chao, W S; Grimes, H D
1997-06-20
Photoaffinity labeling of a soybean cotyledon membrane fraction identified a sucrose-binding protein (SBP). Subsequent studies have shown that the SBP is a unique plasma membrane protein that mediates the linear uptake of sucrose in the presence of up to 30 mM external sucrose when ectopically expressed in yeast. Analysis of the SBP-deduced amino acid sequence indicates it lacks sequence similarity with other known transport proteins. Data presented here, however, indicate that the SBP shares significant sequence and structural homology with the vicilin-like seed storage proteins that organize into homotrimers. These similarities include a repeated sequence that forms the basis of the reiterated domain structure characteristic of the vicilin-like protein family. In addition, analytical ultracentrifugation and nonreducing SDS-polyacrylamide gel electrophoresis demonstrate that the SBP appears to be organized into oligomeric complexes with a Mr indicative of the existence of SBP homotrimers and homodimers. The structural similarity shared by the SBP and vicilin-like proteins provides a novel framework to explore the mechanistic basis of SBP-mediated sucrose uptake. Expression of the maize Glb protein (a vicilin-like protein closely related to the SBP) in yeast demonstrates that a closely related vicilin-like protein is unable to mediate sucrose uptake. Thus, despite sequence and structural similarities shared by the SBP and the vicilin-like protein family, the SBP is functionally divergent from other members of this group.
NASA Technical Reports Server (NTRS)
Wilhite, Larry D.; Lee, S. C.; Lollar, Louis F.
1989-01-01
The design and implementation of the real-time data acquisition and processing system employed in the AMPERES project is described, including effective data structures for efficient storage and flexible manipulation of the data by the knowledge-based system (KBS), the interprocess communication mechanism required between the data acquisition system and the KBS, and the appropriate data acquisition protocols for collecting data from the sensors. Sensor data are categorized as critical or noncritical data on the basis of the inherent frequencies of the signals and the diagnostic requirements reflected in their values. The critical data set contains 30 analog values and 42 digital values and is collected every 10 ms. The noncritical data set contains 240 analog values and is collected every second. The collected critical and noncritical data are stored in separate circular buffers. Buffers are created in shared memory to enable other processes, i.e., the fault monitoring and diagnosis process and the user interface process, to freely access the data sets.
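To illustrate the buffer layout just described, here is a minimal single-process Python sketch of the two circular buffers: critical samples every 10 ms, noncritical every second, each overwriting the oldest entry when full. The capacities and the in-process list (standing in for the actual shared memory between the acquisition, KBS and user-interface processes) are assumptions for demonstration.

class CircularBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0        # index of the next slot to overwrite
        self.count = 0

    def push(self, record):
        self.buf[self.head] = record
        self.head = (self.head + 1) % len(self.buf)
        self.count = min(self.count + 1, len(self.buf))

    def latest(self, n):
        n = min(n, self.count)
        return [self.buf[(self.head - 1 - i) % len(self.buf)]
                for i in range(n)]

critical = CircularBuffer(capacity=1000)      # 10 s of 10 ms samples
noncritical = CircularBuffer(capacity=60)     # 1 min of 1 s samples

for t_ms in range(0, 2000, 10):               # simulate 2 s of acquisition
    critical.push({"t_ms": t_ms, "analog": [0.0] * 30, "digital": [0] * 42})
    if t_ms % 1000 == 0:
        noncritical.push({"t_ms": t_ms, "analog": [0.0] * 240})

print(len(critical.latest(5)), noncritical.count)

The ring structure is what lets consumer processes read recent history freely while the producer keeps writing at a fixed cadence without ever blocking on allocation.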
Environmental impacts of high penetration renewable energy scenarios for Europe
NASA Astrophysics Data System (ADS)
Berrill, Peter; Arvesen, Anders; Scholz, Yvonne; Gils, Hans Christian; Hertwich, Edgar G.
2016-01-01
The prospect of irreversible environmental alterations and an increasingly volatile climate pressurises societies to reduce greenhouse gas emissions, thereby mitigating climate change impacts. As global electricity demand continues to grow, particularly if considering a future with increased electrification of heat and transport sectors, the imperative to decarbonise our electricity supply becomes more urgent. This letter implements outputs of a detailed power system optimisation model into a prospective life cycle analysis framework in order to present a life cycle analysis of 44 electricity scenarios for Europe in 2050, including analyses of systems based largely on low-carbon fossil energy options (natural gas, and coal with carbon capture and storage (CCS)) as well as systems with high shares of variable renewable energy (VRE) (wind and solar). VRE curtailments and impacts caused by extra energy storage and transmission capabilities necessary in systems based on VRE are taken into account. The results show that systems based largely on VRE perform much better regarding climate change and other impact categories than the investigated systems based on fossil fuels. The climate change impacts from Europe for the year 2050 in a scenario using primarily natural gas are 1400 Tg CO2-eq while in a scenario using mostly coal with CCS the impacts are 480 Tg CO2-eq. Systems based on renewables with an even mix of wind and solar capacity generate impacts of 120-140 Tg CO2-eq. Impacts arising as a result of wind and solar variability do not significantly compromise the climate benefits of utilising these energy resources. VRE systems require more infrastructure leading to much larger mineral resource depletion impacts than fossil fuel systems, and greater land occupation impacts than systems based on natural gas. Emissions and resource requirements from wind power are smaller than from solar power.
Thermal Storage Applications Workshop. Volume 2: Contributed Papers
NASA Technical Reports Server (NTRS)
1979-01-01
The solar thermal and the thermal and thermochemical energy storage programs are described as well as the technology requirements for both external (electrical) and internal (thermal, chemical) modes for energy storage in solar power plants. Specific technical issues addressed include thermal storage criteria for solar power plants interfacing with utility systems; optimal dispatch of storage for solar plants in a conventional electric grid; thermal storage/temperature tradeoffs for solar total energy systems; the value of energy storage for direct-replacement solar thermal power plants; systems analysis of storage in specific solar thermal power applications; the value of seasonal storage of solar energy; criteria for selection of the thermal storage system for a 10 MWe solar power plant; and the need for specific requirements by storage system development teams.
NASA Astrophysics Data System (ADS)
Zeyringer, Marianne; Price, James; Fais, Birgit; Li, Pei-Hao; Sharp, Ed
2018-05-01
The design of cost-effective power systems with high shares of variable renewable energy (VRE) technologies requires a modelling approach that simultaneously represents the whole energy system combined with the spatiotemporal and inter-annual variability of VRE. Here, we soft-link a long-term energy system model, which explores new energy system configurations from years to decades, with a high spatial and temporal resolution power system model that captures VRE variability from hours to years. Applying this methodology to Great Britain for 2050, we find that VRE-focused power system design is highly sensitive to the inter-annual variability of weather and that planning based on a single year can lead to operational inadequacy and failure to meet long-term decarbonization objectives. However, some insights do emerge that are relatively stable to weather-year. Reinforcement of the transmission system consistently leads to a decrease in system costs while electricity storage and flexible generation, needed to integrate VRE into the system, are generally deployed close to demand centres.
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10
There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from the plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.
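A toy software analogue of this scheme, under invented field names and a simple parity encoding: winner selection proceeds by priority without waiting on the error check (the low-delay property the patent claims), while a separate detector flags soft errors in the stored requestor information afterwards.

def parity(bits):
    return sum(bits) % 2

def select_winner(requestors):
    # Low-delay path: pick the highest-priority requestor immediately,
    # without first verifying its stored information.
    return max(requestors, key=lambda r: r["priority"], default=None)

def has_soft_error(requestor):
    # Separate error-detection logic over the storage element's contents.
    return parity(requestor["bits"]) != requestor["parity"]

requestors = [
    {"id": "A", "priority": 2, "bits": [1, 0, 1], "parity": 0},
    {"id": "B", "priority": 5, "bits": [1, 1, 1], "parity": 0},  # bit flip
]
winner = select_winner(requestors)
print(winner["id"], "soft error!" if has_soft_error(winner) else "ok")
# -> B soft error!  (selection was not delayed by the check)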
Thermal energy storage devices, systems, and thermal energy storage device monitoring methods
Tugurlan, Maria; Tuffner, Francis K; Chassin, David P.
2016-09-13
Thermal energy storage devices, systems, and thermal energy storage device monitoring methods are described. According to one aspect, a thermal energy storage device includes a reservoir configured to hold a thermal energy storage medium, a temperature control system configured to adjust a temperature of the thermal energy storage medium, and a state observation system configured to provide information regarding an energy state of the thermal energy storage device at a plurality of different moments in time.
Open systems storage platforms
NASA Technical Reports Server (NTRS)
Collins, Kirby
1992-01-01
The building blocks for an open storage system include a system platform, a selection of storage devices and interfaces, system software, and storage applications. CONVEX storage systems are based on the DS Series Data Server systems. These systems are a variant of the C3200 supercomputer with expanded I/O capabilities. These systems support a variety of medium and high speed interfaces to networks and peripherals. System software is provided in the form of ConvexOS, a POSIX compliant derivative of 4.3BSD UNIX. Storage applications include products such as UNITREE and EMASS. With the DS Series of storage systems, Convex has developed a set of products which provide open system solutions for storage management applications. The systems are highly modular, assembled from off-the-shelf components with industry standard interfaces. The C Series system architecture provides a stable base, with the performance and reliability of a general purpose platform. This combination of a proven system architecture with a variety of choices in peripherals and application software allows wide flexibility in configurations, and delivers the benefits of open systems to the mass storage world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Gunasekaran, Raghul; Ma, Xiaosong
2016-01-01
Inter-application I/O contention and performance interference have been recognized as severe problems. In this work, we demonstrate, through measurement from Titan (the world's No. 3 supercomputer), that high I/O variance co-exists with the fact that individual storage units remain under-utilized for the majority of the time. This motivates us to propose AID, a system that performs automatic application I/O characterization and I/O-aware job scheduling. AID analyzes existing I/O traffic and batch job history logs, without any prior knowledge of applications or user/developer involvement. It identifies the small set of I/O-intensive candidates among all applications running on a supercomputer and subsequently mines their I/O patterns, using more detailed per-I/O-node traffic logs. Based on such auto-extracted information, AID provides online I/O-aware scheduling recommendations to steer I/O-intensive applications away from heavy ongoing I/O activities. We evaluate AID on Titan, using both real applications (with extracted I/O patterns validated by contacting users) and our own pseudo-applications. Our results confirm that AID is able to (1) identify I/O-intensive applications and their detailed I/O characteristics, and (2) significantly reduce these applications' I/O performance degradation/variance by jointly evaluating outstanding applications' I/O patterns and the real-time system I/O load.
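A highly condensed sketch of the AID idea, under an invented log format and thresholds (not AID's actual data structures): aggregate observed I/O traffic per application, label the heaviest as I/O-intensive, and advise delaying such a job while current system-wide I/O is high.

from collections import defaultdict

io_log = [  # (app_name, GB written in a 5-minute window) - illustrative
    ("climate_sim", 120.0), ("qmc_solver", 0.4),
    ("climate_sim", 95.0), ("cfd_run", 60.0), ("qmc_solver", 0.2),
]

traffic = defaultdict(float)
for app, gb in io_log:
    traffic[app] += gb            # characterize apps from existing logs

THRESHOLD_GB = 50.0               # assumed cutoff for "I/O-intensive"
io_intensive = {a for a, gb in traffic.items() if gb >= THRESHOLD_GB}

def recommend(app, current_system_io_gbps, busy_gbps=100.0):
    # Steer I/O-intensive apps away from heavy ongoing I/O activity.
    if app in io_intensive and current_system_io_gbps > busy_gbps:
        return "delay: heavy ongoing I/O would collide with this app"
    return "schedule now"

print(sorted(io_intensive))
print(recommend("climate_sim", current_system_io_gbps=180.0))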
Mapping the developmental constraints on working memory span performance.
Bayliss, Donna M; Jarrold, Christopher; Baddeley, Alan D; Gunn, Deborah M; Leigh, Eleanor
2005-07-01
This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related variance in complex span. Results showed that developmental improvements in complex span were driven by 2 age-related but separable factors: 1 associated with general speed of processing and 1 associated with storage ability. In addition, there was an age-related contribution shared between working memory, processing speed, and storage ability that was important for higher level cognition. These results pose a challenge for models of complex span performance that emphasize the importance of processing speed alone.
NASA Astrophysics Data System (ADS)
Tilmant, Amaury; Marques, Guilherme
2016-04-01
Among the environmental impacts caused by dams, the alteration of flow regimes is one of the most critical to river ecosystems given its influence in long river reaches and its continuous pattern. Provided it is technically feasible, the reoperation of hydroelectric reservoir systems can, in principle, mitigate the impacts on degraded freshwater ecosystems by recovering some of the natural flow regime. The typical approach to implement hydropower-to-environment water transfers focuses on the reoperation of the dam located immediately upstream of the environmentally sensitive area, meaning that only one power station will bear the brunt of the benefits forgone for the power sector. By ignoring the contribution of upstream infrastructures to the alteration of the flow regime, the opportunity cost associated with the restoration of a flow regime is not equitably distributed among the power companies in the river basin, therefore slowing the establishment of environmental flow programs. Yet, there is no criterion, nor institutional mechanism, to ensure a fair distribution of the opportunity cost among power stations. This paper addresses this issue by comparing four rules to redistribute the costs faced by the power sector when environmental flows must be implemented in a multireservoir system. The rules are based on the installed capacity of the power plants, the live storage capacity of the reservoirs, the ratio between the incremental flows and the live storage capacity, and the extent of the storage services; that is, the volume of water effectively transferred by each reservoir. The analysis is carried out using the Parana River Basin (Brazil) as a case study.
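A small numeric sketch of the four cost-sharing rules: each allocates the power sector's opportunity cost across reservoirs in proportion to a different attribute. The three plants, their attributes and the total cost below are invented for illustration, not data from the Parana study.

plants = {
    # MW installed, live storage hm3, incremental inflow hm3, transferred hm3
    "R1": {"capacity": 1200, "storage": 500, "inflow": 900, "transferred": 120},
    "R2": {"capacity": 300,  "storage": 800, "inflow": 400, "transferred": 60},
    "R3": {"capacity": 600,  "storage": 200, "inflow": 700, "transferred": 20},
}
total_cost = 10.0  # M$/yr forgone by the power sector for environmental flows

def shares(metric):
    vals = {p: metric(a) for p, a in plants.items()}
    total = sum(vals.values())
    return {p: total_cost * v / total for p, v in vals.items()}

rules = {
    "installed capacity": lambda a: a["capacity"],
    "live storage":       lambda a: a["storage"],
    "inflow/storage":     lambda a: a["inflow"] / a["storage"],
    "storage services":   lambda a: a["transferred"],
}
for name, metric in rules.items():
    print(name, {p: round(c, 2) for p, c in shares(metric).items()})

Running this shows how strongly the allocation depends on the chosen rule: the plant that bears the largest share under one rule can bear the smallest under another, which is exactly the equity question the paper examines.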
A Support Database System for Integrated System Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John
2007-01-01
The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between system elements to provide the logical context for the database. The historical data archive provides a common repository for sensor data that can be shared between developers and applications. The firmware codebase is used by the developer to organize the intelligent element firmware into atomic units which can be assembled into complete firmware for specific elements.
OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Greiner, Annette; Cholia, Shreyas
Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements; these optimizations are critical enablers of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
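The storage idea OpenMSI builds on can be demonstrated with a few lines of h5py: store an MSI cube (x, y, m/z) as a chunked, gzip-compressed HDF5 dataset so that a single spectrum or a single ion image can be read without touching the rest of the file. The shapes, chunk sizes and dataset path below are illustrative, not OpenMSI's actual layout.

import numpy as np
import h5py

nx, ny, nmz = 100, 100, 2000
cube = np.random.poisson(5, size=(nx, ny, nmz)).astype("uint16")

with h5py.File("msi_demo.h5", "w") as f:
    dset = f.create_dataset(
        "msi/data", data=cube,
        chunks=(10, 10, 256),      # small bricks enable fast selective reads
        compression="gzip", compression_opts=4)
    dset.attrs["axes"] = "x,y,mz"  # minimal self-describing metadata

with h5py.File("msi_demo.h5", "r") as f:
    spectrum = f["msi/data"][17, 42, :]    # one pixel's full spectrum
    ion_image = f["msi/data"][:, :, 500]   # one m/z slice across the image
print(spectrum.shape, ion_image.shape)

Because both access patterns cut across the cube in different directions, chunking (and, in OpenMSI's case, data replication in alternate layouts) is what keeps each of them fast.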
2014-01-01
Background The use of biological samples in research raises a number of ethical issues in relation to consent, storage, export, benefit sharing and re-use of samples. Participant perspectives have been explored in North America and Europe, with only a few studies reported in Africa. The amount of research being conducted in Africa is growing exponentially with volumes of biological samples being exported from the African continent. In order to investigate the perspectives of African research participants, we conducted a study at research sites in the Western Cape and Gauteng, South Africa. Methods Data were collected using a semi-structured questionnaire that captured both quantitative and qualitative information at 6 research sites in South Africa. Interviews were conducted in English and Afrikaans. Data were analysed both quantitatively and qualitatively. Results Our study indicates that while the majority of participants were supportive of providing samples for research, serious concerns were voiced about future use, benefit sharing and export of samples. While researchers view the provision of biosamples as a donation, participants believe that they still have ownership rights and are therefore in favour of benefit sharing. Almost half of the participants expressed a desire to be re-contacted for consent for future use of their samples. Interesting opinions were expressed with respect to export of samples. Conclusions Eliciting participant perspectives is an important part of community engagement in research involving biological sample collection, export, storage and future use. A tiered consent process appears to be more acceptable to participants in this study. Eliciting opinions of researchers and research ethics committee (REC) members would contribute multiple perspectives. Further research is required to interrogate the concept of ownership and the consent process in research involving biological samples. PMID:24447822
Knowledge Management Enablers and Process in Hospital Organizations.
Lee, Hyun-Sook
2017-02-01
This research aimed to investigate the effects of knowledge management enablers, such as organizational structure, leadership, learning, information technology systems, trust, and collaboration, on the knowledge management process of creation, storage, sharing, and application. Using data from self-administered questionnaires in four Korean tertiary hospitals, this survey investigated the main organizational factors affecting the knowledge management process in these organizations. A total of 779 questionnaires were analyzed using SPSS 18.0 and AMOS 18.0. The results showed that organizational factors affect the knowledge management process differently in each hospital organization. From a managerial perspective, the implications of these factors for developing organizational strategies that encourage and foster the knowledge management process are discussed.
A Grid Infrastructure for Supporting Space-based Science Operations
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)
2002-01-01
Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.
NASA Technical Reports Server (NTRS)
1976-01-01
The applicability of energy storage devices to any energy system depends on the performance and cost characteristics of the larger basic system. A comparative assessment of energy storage alternatives for application to IUS which addresses the systems aspects of the overall installation is described. Factors considered include: (1) descriptions of the two no-storage IUS baselines utilized as yardsticks for comparison throughout the study; (2) discussions of the assessment criteria and the selection framework employed; (3) a summary of the rationale utilized in selecting water storage as the primary energy storage candidate for near term application to IUS; (4) discussion of the integration aspects of water storage systems; and (5) an assessment of IUS with water storage in alternative climates.
Where the Cloud Meets the Commons
ERIC Educational Resources Information Center
Ipri, Tom
2011-01-01
Changes presented by cloud computing--shared computing services, applications, and storage available to end users via the Internet--have the potential to seriously alter how libraries provide services, not only remotely, but also within the physical library, specifically concerning challenges facing the typical desktop computing experience.…
USDA-ARS?s Scientific Manuscript database
Rangelands are valued for their capacity to provide diverse suites of ecosystem services, from food production to carbon storage to biological diversity. Although rangelands worldwide share common characteristics, differences among biogeographic regions result in differences in the types of opportun...
ERIC Educational Resources Information Center
Hignite, Karla
2009-01-01
Green information technology (IT) is grabbing more mainstream headlines--and for good reason. Computing, data processing, and electronic file storage collectively account for a significant and growing share of energy consumption in the business world and on higher education campuses. With greater scrutiny of all activities that contribute to an…
The lysosomal storage disease continuum with ageing-related neurodegenerative disease.
Lloyd-Evans, Emyr; Haslett, Luke J
2016-12-01
Lysosomal storage diseases and diseases of ageing share many features, both at the physiological level and with respect to the mechanisms that underlie disease pathogenesis. Although the pathophysiology is not exactly the same, it is astounding how many similar pathways are altered in all of these diseases. The aim of this review is to provide a summary of the shared disease mechanisms, outlining the similarities and differences and how genetics, insight into rare diseases and functional research have changed our perspective on the causes underlying common diseases of ageing. The lysosome should no longer be considered as just the stomach of the cell or as a suicide bag; it has an emerging role in cellular signalling, nutrient sensing and recycling. The lysosome is of fundamental importance in the pathophysiology of diseases of ageing, and by comparing against the LSDs we not only identify common pathways but also therapeutic targets, so that ultimately more effective treatments can be developed for all neurodegenerative diseases.
Golberg, Alexander; Linshiz, Gregory; Kravets, Ilia; Stawski, Nina; Hillson, Nathan J; Yarmush, Martin L; Marks, Robert S; Konry, Tania
2014-01-01
We report an all-in-one platform - ScanDrop - for the rapid and specific capture, detection, and identification of bacteria in drinking water. The ScanDrop platform integrates droplet microfluidics, a portable imaging system, and cloud-based control software and data storage. The cloud-based control software and data storage enables robotic image acquisition, remote image processing, and rapid data sharing. These features form a "cloud" network for water quality monitoring. We have demonstrated the capability of ScanDrop to perform water quality monitoring via the detection of an indicator coliform bacterium, Escherichia coli, in drinking water contaminated with feces. Magnetic beads conjugated with antibodies to E. coli antigen were used to selectively capture and isolate specific bacteria from water samples. The bead-captured bacteria were co-encapsulated in pico-liter droplets with fluorescently-labeled anti-E. coli antibodies, and imaged with an automated custom designed fluorescence microscope. The entire water quality diagnostic process required 8 hours from sample collection to online-accessible results compared with 2-4 days for other currently available standard detection methods.
High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.
Simonyan, Vahan; Mazumder, Raja
2014-09-30
The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.
NASA Astrophysics Data System (ADS)
Lu, Meilian; Yang, Dong; Zhou, Xing
2013-03-01
Based on an analysis of the requirements for conversation history storage in the CPM (Converged IP Messaging) system, a multi-view storage model and access methods for conversation history are proposed. The storage model separates logical views from physical storage and divides storage into a system-managed region and a user-managed region. It simultaneously supports a conversation view, system pre-defined views and user-defined views of the storage. The rationality and feasibility of the multi-view presentation, the physical storage model and the access methods were validated through an implemented prototype, which showed that the proposal scales well and helps to optimize the physical data storage structure and improve storage performance.
Storage systems for solar thermal power
NASA Technical Reports Server (NTRS)
Calogeras, J. E.; Gordon, L. H.
1978-01-01
The development status is reviewed of some thermal energy storage technologies specifically oriented towards providing diurnal heat storage for solar central power systems and solar total energy systems. These technologies include sensible heat storage in caverns and latent heat storage using both active and passive heat exchange processes. In addition, selected thermal storage concepts which appear promising to a variety of advanced solar thermal system applications are discussed.
NASA Technical Reports Server (NTRS)
Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.
1992-01-01
Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.
Ontology-based, Tissue MicroArray oriented, image centered tissue bank
Viti, Federica; Merelli, Ivan; Caprera, Andrea; Lazzari, Barbara; Stella, Alessandra; Milanesi, Luciano
2008-01-01
Background Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their utility is limited by the lack of data integration with biomolecular information. Results In this work we propose a web-oriented Tissue MicroArray system that supports researchers in managing bio-samples and, through the use of ontologies, enables tissue sharing aimed at the design of Tissue MicroArray experiments and the evaluation of results. Indeed, our system provides ontological description both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, by working on well-defined terms it is then possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions Using this system, users associate an ontology-based description to each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the ontological description provided by the user with a fully compliant gene ontology definition, enabling statistical studies about the correlation between the analyzed pathology and the most commonly related biological processes. PMID:18460177
Walkowiak-Tomczak, Dorota; Czapski, Janusz; Młynarczyk, Karolina
2016-01-01
Elderberries are a source of dietary supplements and bioactive compounds, such as anthocyanins. These dyes are used in food technology. The aim of the study was to assess the changes in colour parameters, anthocyanin contents and sensory attributes in solutions of elderberry juice concentrates during storage in a model system, and to determine the predictability of sensory attributes of colour in solutions based on regression equations using the response surface methodology. The experiment was carried out according to a 3-level factorial design for three factors. Independent variables included pH, storage time and temperature. Dependent variables were the colour components and colour parameters in the CIE L*a*b* system, pigment contents and sensory attributes. Changes in colour components X, Y, Z and colour parameters L*, a*, b*, C* and h* were most dependent on pH values. Colour lightness L* and tone h* increased with an increase in the experimental factors, while the share of the red colour a* and colour saturation C* decreased. The greatest effect on the anthocyanin concentration was recorded for storage time. Sensory attributes deteriorated during storage. The highest correlation coefficients were found between the value of colour tone h* and anthocyanin contents in relation to the assessment of the naturalness and desirability of colour. A high goodness-of-fit of the model to the data and high values of R2 for the regression equations were obtained for all responses. The response surface method facilitates optimization of experimental factor values in order to obtain a specific attribute of the product, though not in all cases of the experiment. Within the tested range of factors, it is possible to predict changes in the anthocyanin content and the sensory attributes of elderberry juice concentrate solutions as a food dye on the basis of the lack-of-fit test. The highest stability of dyes and colour of elderberry solutions was found in the samples at pH 3.0, which confirms the advisability of using an anthocyanin preparation to shape the colour of high-acidity food products, such as fruit fillings, beverages and desserts.
High Density Digital Data Storage System
NASA Technical Reports Server (NTRS)
Wright, Kenneth D., II; Gray, David L.; Rowland, Wayne D.
1991-01-01
The High Density Digital Data Storage System was designed to provide a cost effective means for storing real-time data from the field-deployable digital acoustic measurement system. However, the high density data storage system is a standalone system that could provide a storage solution for many other real time data acquisition applications. The storage system has inputs for up to 20 channels of 16-bit digital data. The high density tape recorders presently being used in the storage system are capable of storing over 5 gigabytes of data at overall transfer rates of 500 kilobytes per second. However, through the use of data compression techniques the system storage capacity and transfer rate can be doubled. Two tape recorders have been incorporated into the storage system to produce a backup tape of data in real-time. An analog output is provided for each data channel as a means of monitoring the data as it is being recorded.
Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation
Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi
2016-01-01
After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely. PMID:27792784
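For readers unfamiliar with the polynomial machinery such schemes build on, the following is a minimal sketch of plain Shamir (t, n) secret sharing in Python. It illustrates only the shared polynomial-interpolation basis; it does not implement the threshold-changeable or dealer-free constructions described above, and the field size and all names are illustrative.

    # A minimal Shamir (t, n) threshold sharing sketch over a prime field.
    # Illustrates the polynomial-interpolation basis of threshold schemes;
    # it does NOT implement the threshold-changeable constructions above.
    import random

    PRIME = 2**61 - 1  # a Mersenne prime large enough for a toy secret

    def make_shares(secret, t, n):
        """Split `secret` into n shares, any t of which reconstruct it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    shares = make_shares(123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 shares suffice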
NASA Technical Reports Server (NTRS)
Ramirez, Eric; Gutheinz, Sandy; Brison, James; Ho, Anita; Allen, James; Ceritelli, Olga; Tobar, Claudia; Nguyen, Thuykien; Crenshaw, Harrel; Santos, Roxann
2008-01-01
Supplier Management System (SMS) allows for a consistent, agency-wide performance rating system for suppliers used by NASA. This version (2.0) combines separate databases into one central database that allows for the sharing of supplier data. Information extracted from the NBS/Oracle database can be used to generate ratings. Also, supplier ratings can now be generated in the areas of cost, product quality, delivery, and audit data. Supplier data can be charted based on real-time user input. Based on these individual ratings, an overall rating can be generated. Data that normally would be stored in multiple databases, each requiring its own log-in, is now readily available and easily accessible with only one log-in required. Additionally, the database can accommodate the storage and display of quality-related data that can be analyzed and used in the supplier procurement decision-making process. Moreover, the software allows for a Closed-Loop System (supplier feedback), as well as the capability to communicate with other federal agencies.
Data management and data enrichment for systems biology projects.
Wittig, Ulrike; Rey, Maja; Weidemann, Andreas; Müller, Wolfgang
2017-11-10
Collecting, curating, interlinking, and sharing high quality data are central to de.NBI-SysBio, the systems biology data management service center within the de.NBI network (German Network for Bioinformatics Infrastructure). The work of the center is guided by the FAIR principles for scientific data management and stewardship. FAIR stands for the four foundational principles Findability, Accessibility, Interoperability, and Reusability, which were established to enhance the ability of machines to automatically find, access, exchange and use data. Within this overview paper we describe three tools (SABIO-RK, Excemplify, SEEK) that exemplify the contribution of de.NBI-SysBio services to FAIR data, model, and experimental method storage and exchange. The interconnectivity of the tools and the data workflow within systems biology projects are explained. For many years we have been the German partner in the FAIRDOM initiative (http://fair-dom.org) to establish a European data and model management service facility for systems biology. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Combining Distributed and Shared Memory Models: Approach and Evolution of the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nieplocha, Jarek; Harrison, Robert J.; Kumar, Mukul
2002-07-29
Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic might have a negative impact on performance and scalability. Various techniques, such as code restructuring to increase data reuse and introducing blocking in data accesses, can address the problem and yield performance competitive with message passing [Singh], however at the cost of compromising ease of use. Distributed memory models such as message passing or one-sided communication offer performance and scalability but they compromise ease of use. In this context, the message-passing model is sometimes referred to as "assembly programming for scientific computing". The Global Arrays toolkit [GA1, GA2] attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed explicitly by the programmer. This management is achieved by explicit calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be explicitly specified and hence managed. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems, and by recognizing the communication overhead for remote data transfer, it promotes data reuse and locality of reference. This paper describes the characteristics of the Global Arrays programming model, capabilities of the toolkit, and discusses its evolution.
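As a rough illustration of the programming model this abstract describes (a logically shared array with explicit global-to-local transfers), here is a toy Python sketch. The class and method names are hypothetical and deliberately simplified; they are not the real Global Arrays toolkit API.

    # A toy sketch of the Global Arrays programming model: a logically shared
    # array whose data is physically distributed, with explicit get/put calls
    # moving data between the global space and process-local buffers.
    # Class and method names are illustrative, not the real GA toolkit API.
    import numpy as np

    class ToyGlobalArray:
        def __init__(self, shape, nprocs):
            self.chunks = np.array_split(np.zeros(shape), nprocs)  # "distributed" storage

        def get(self, lo, hi):
            """Copy global range [lo, hi) into a local buffer (remote access is explicit)."""
            flat = np.concatenate(self.chunks)
            return flat[lo:hi].copy()

        def put(self, lo, local_buf):
            """Write a local buffer back into the global address space."""
            flat = np.concatenate(self.chunks)
            flat[lo:lo + len(local_buf)] = local_buf
            self.chunks = np.array_split(flat, len(self.chunks))

    ga = ToyGlobalArray(shape=16, nprocs=4)
    local = ga.get(4, 8)        # explicit transfer: global -> local
    local += 1.0                # compute on fast local storage
    ga.put(4, local)            # explicit transfer: local -> global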
Privacy protection for personal health information and shared care records.
Neame, Roderick L B
2014-01-01
The protection of personal information privacy has become one of the most pressing security concerns for record keepers: this will become more onerous with the introduction of the European General Data Protection Regulation (GDPR) in mid-2014. Many institutions, both large and small, have yet to implement the essential infrastructure for data privacy protection and for patient consent and control when accessing and sharing data; even more have failed to instil a privacy and security awareness mindset and culture amongst their staff. Increased regulation, together with better compliance monitoring, has led to the imposition of increasingly significant monetary penalties for failure to protect privacy: these too are set to become more onerous under the GDPR, increasing to a maximum of 2% of annual turnover. There is growing pressure in clinical environments to deliver shared patient care and to support this with integrated information. This demands that more information passes between institutions and care providers without breaching patient privacy or autonomy. This can be achieved with relatively minor enhancements of existing infrastructures and does not require extensive investment in inter-operating electronic records: indeed, such investments to date have been shown not to materially improve data sharing. REQUIREMENTS FOR PRIVACY: There is an ethical duty as well as a legal obligation on the part of care providers (and record keepers) to keep patient information confidential and to share it only with the authorisation of the patient. To achieve this, information storage, retrieval and communication systems must be appropriately configured. There are many components of this, which are discussed in this paper. Patients may consult clinicians anywhere and at any time: therefore, their data must be available for recipient-driven retrieval (i.e. like the World Wide Web) under patient control and kept private: a method for delivering this is outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossno, Patricia J.; Gittinger, Jaxon; Hunt, Warren L.
Slycat™ is a web-based system for performing data analysis and visualization of potentially large quantities of remote, high-dimensional data. Slycat™ specializes in working with ensemble data. An ensemble is a group of related data sets, which typically consists of a set of simulation runs exploring the same problem space. An ensemble can be thought of as a set of samples within a multi-variate domain, where each sample is a vector whose value defines a point in high-dimensional space. To understand and describe the underlying problem being modeled in the simulations, ensemble analysis looks for shared behaviors and common features across the group of runs. Additionally, ensemble analysis tries to quantify differences found in any members that deviate from the rest of the group. The Slycat™ system integrates data management, scalable analysis, and visualization. Results are viewed remotely on a user’s desktop via commodity web clients using a multi-tiered hierarchy of computation and data storage, as shown in Figure 1. Our goal is to operate on data as close to the source as possible, thereby reducing time and storage costs associated with data movement. Consequently, we are working to develop parallel analysis capabilities that operate on High Performance Computing (HPC) platforms, to explore approaches for reducing data size, and to implement strategies for staging computation across the Slycat™ hierarchy. Within Slycat™, data and visual analysis are organized around projects, which are shared by a project team. Project members are explicitly added, each with a designated set of permissions. Although users sign-in to access Slycat™, individual accounts are not maintained. Instead, authentication is used to determine project access. Within projects, Slycat™ models capture analysis results and enable data exploration through various visual representations. Although for scientists each simulation run is a model of real-world phenomena given certain conditions, we use the term model to refer to our modeling of the ensemble data, not the physics. Different model types often provide complementary perspectives on data features when analyzing the same data set. Each model visualizes data at several levels of abstraction, allowing the user to range from viewing the ensemble holistically to accessing numeric parameter values for a single run. Bookmarks provide a mechanism for sharing results, enabling interesting model states to be labeled and saved.
Enhancing water supply through reservoir reoperation
NASA Astrophysics Data System (ADS)
Rajagopal, S.; Sterle, K. M.; Jose, L.; Coors, S.; Pohll, G.; Singletary, L.
2017-12-01
Snowmelt is a significant contributor to water supply in the western U.S., where it is stored in reservoirs for use during peak summer demand. These reservoirs were built to satisfy multiple objectives, but primarily to enhance water supply and/or to mitigate floods. The operating rules for water supply reservoirs are based on historical assumptions of climate stationarity: peak snowmelt is assumed to occur after April 1, so water arriving earlier must be passed through. Using the Truckee River as an example, which originates in the eastern Sierra Nevada, has seven reservoirs and is shared between California and Nevada, we show enhanced water storage by altering reservoir operating rules. These results are based on a coupled hydrology (GSFLOW, Ground-Surface water Flow) and water management (RiverWare) model developed for the river system. All the reservoirs in the system benefit from altering the reservoir rules, but some benefit more than others. Prosser Creek reservoir, for example, historically averaged 76% of capacity, which falls to 46% of capacity in the future as the climate warms and shifts snowmelt to earlier days of the year. This reduction in storage can be mitigated by altering the reservoir operation rules, raising reservoir storage to 64-76% of capacity. There are limits to altering operating rules, as reservoirs operated primarily for flood control are required to maintain lower storage to absorb a flood pulse; yet our modeling shows that there are water supply benefits to adopting more flexible rules of operation. In the future, due to the changing climate, we anticipate that reservoirs in the western U.S., which typically capture spring-summer snowmelt, will have to be managed more actively as the water stored in the snowpack becomes more variable. This study presents a framework for understanding, modeling and quantifying the consequences of such a shift in hydrology and water management.
Birds of a Feather - Developments towards shared, regional geological disposal in the EU?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Codee, H.D.K.; Verhoef, E.V.; McCombie, Ch.
2008-07-01
Geological disposal is an essential component of the long-term management of spent fuel, high level and other long-lived radioactive waste. In the EU, all 25 member states generate radioactive waste. Of course, there are large differences in type and quantity between the member states, but all of them need a long-term solution. Even a country with only lightning rods with radium will need a long-term solution for the disposal. The 1600 year half-life of radium does not fit in a solution with a span of control of just a few hundred years. Implementation of a suitable deep repository may, however, be difficult or impossible for countries with small volumes of waste, because of the high costs involved. Will economy of scale force these birds of a feather to wait to flock together and share a repository? Implementing a small repository and operating it for very long times is very costly. There are past and current examples of countries being prepared to accept radioactive waste from others if a better environmental solution is thus achieved and if the arrangements are fair for all parties involved. The need for supranational surveillance also points to shared solutions. Although the European Parliament and the Commission have both supported the concept of shared regional repositories in Europe, (national) political and societal constraints have hampered the realization of such facilities up to now. The first step in this staged process was the EC funded project, SAPIERR I. The project (2003 to 2005) studied the feasibility of shared regional storage facilities and geological repositories, for use by European countries. It showed that, if shared regional repositories are to be implemented even some decades ahead, efforts must already be increased now. The next step in the process is to develop a practical implementation strategy and organizational structures to work on shared EU radioactive waste storage and disposal activities. This is addressed in the EC funded project SAPIERR II (2006-2008). The paper gives an update of the SAPIERR II project and describes the progress achieved. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, Garth
Petascale computing infrastructures for scientific discovery make petascale demands on information storage capacity, performance, concurrency, reliability, availability, and manageability. The Petascale Data Storage Institute focuses on the data storage problems found in petascale scientific computing environments, with special attention to community issues such as interoperability, community buy-in, and shared tools. The Petascale Data Storage Institute is a collaboration between researchers at Carnegie Mellon University, National Energy Research Scientific Computing Center, Pacific Northwest National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratory, Los Alamos National Laboratory, University of Michigan, and the University of California at Santa Cruz. Because the Institute focuses on low-level file systems and storage systems, its role in improving SciDAC systems was one of supporting application middleware such as data management and system-level performance tuning. In retrospect, the Petascale Data Storage Institute’s most innovative and impactful contribution is the Parallel Log-structured File System (PLFS). Published at SC09, PLFS is middleware that operates in MPI-IO or embedded in FUSE for non-MPI applications. Its function is to decouple concurrently written files into a per-process log file, whose impact (the contents of the single file that the parallel application was concurrently writing) is determined on later reading, rather than during its writing. PLFS is transparent to the parallel application, offering a POSIX or MPI-IO interface, and it shows an order of magnitude speedup on the Chombo benchmark and two orders of magnitude on the FLASH benchmark. Moreover, LANL production applications see speedups of 5X to 28X, so PLFS has been put into production at LANL. Originally conceived and prototyped in a PDSI collaboration between LANL and CMU, it has grown to engage many other PDSI institutes and international partners like AWE, and has a large team at EMC supporting and enhancing it. PLFS is open sourced with a BSD license on SourceForge. Post-PDSI funding comes from NNSA and industry sources. Moreover, PLFS has spun out half a dozen or more papers, partnered on research with multiple schools and vendors, and has projects to transparently 1) distribute metadata over independent metadata servers, 2) exploit drastically non-POSIX Hadoop storage for HPC POSIX applications, 3) compress checkpoints on the fly, 4) batch delayed writes for write speed, 5) compress read-back indexes and parallelize their redistribution, 6) double-buffer writes in NAND Flash storage to decouple host blocking during checkpoint from disk write time in the storage system, and 7) pack small files into a smaller number of bigger containers. There are two large-scale open source Linux software projects that PDSI significantly incubated, though neither was initiated in PDSI. These are 1) Ceph, a UCSC parallel object storage research project that has continued to be a vehicle for research and has become a released part of Linux, and 2) Parallel NFS (pNFS), a portion of the IETF's NFSv4.1 that brings the core data parallelism found in Lustre, PanFS, PVFS, and Ceph to the industry-standard NFS, with released code in Linux 3.0 and its vendor offerings, with products from NetApp, EMC, BlueArc and RedHat. Both are fundamentally supported and advanced by vendor companies now, but were critically transferred from research demonstration to viable product with funding, in part, from PDSI.
At this point Lustre remains the primary path to scalable IO in exascale systems, but both Ceph and pNFS are viable alternatives with different fundamental advantages. Finally, research community building was a big success for PDSI. Through the HECFSIO workshops and the HECURA project with NSF, PDSI stimulated and helped to steer leveraged funding of over $25M. Through the Petascale (now Parallel) Data Storage Workshop series, www.pdsw.org, colocated with SCxy each year, PDSI created and incubated five offerings of this high-attendance workshop. The workshop has gone on without PDSI support with two more highly successful workshops, rewriting its organizational structure to be community managed. More than 70 peer-reviewed papers have been presented at PDSW workshops.
Small PACS implementation using publicly available software
NASA Astrophysics Data System (ADS)
Passadore, Diego J.; Isoardi, Roberto A.; Gonzalez Nicolini, Federico J.; Ariza, P. P.; Novas, C. V.; Omati, S. A.
1998-07-01
Building cost-effective PACS solutions is a main concern in developing countries. Hardware and software components are generally much more expensive than in developed countries, and tighter financial constraints are the main reason contributing to a slow rate of PACS implementation. The extensive use of the Internet for sharing resources and information has brought a broad number of freely available software packages to an ever-increasing number of users. In the field of medical imaging it is possible to find image format conversion packages, DICOM compliant servers for all kinds of service classes, databases, web servers, image visualization, manipulation and analysis tools, etc. This paper describes a PACS implementation for review and storage built on freely available software. It currently integrates four diagnostic modalities (PET, CT, MR and NM), a Radiotherapy Treatment Planning workstation and several computers in a local area network, for image storage, database management and image review, processing and analysis. It also includes a web-based application that allows remote users to query the archive for studies from any workstation and to view the corresponding images and reports. We conclude that the advantage of using this approach is twofold. It allows a full understanding of all the issues involved in the implementation of a PACS, and it also helps keep costs down while enabling the development of a functional system for storage, distribution and review that can prove helpful for radiologists and referring physicians.
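The paper predates today's tooling and does not name its exact packages, but the same freely-available-software approach can be sketched with pydicom (whose dcmread() reads DICOM headers) and the standard-library sqlite3 module standing in for the archive database; the schema and file names below are illustrative.

    # Sketch of the "freely available software" approach: index DICOM study
    # metadata into a small database for query and review. Uses pydicom's
    # dcmread() and the standard-library sqlite3 module; the paper does not
    # specify these exact packages, so treat this as a modern stand-in.
    import sqlite3
    import pydicom

    def index_study(db_path, dicom_files):
        con = sqlite3.connect(db_path)
        con.execute("""CREATE TABLE IF NOT EXISTS studies
                       (patient_id TEXT, modality TEXT, study_date TEXT, path TEXT)""")
        for path in dicom_files:
            ds = pydicom.dcmread(path, stop_before_pixels=True)  # metadata only
            con.execute("INSERT INTO studies VALUES (?, ?, ?, ?)",
                        (str(ds.PatientID), str(ds.Modality), str(ds.StudyDate), path))
        con.commit()
        return con

    # con = index_study("pacs_index.db", ["ct_001.dcm", "mr_002.dcm"])
    # con.execute("SELECT path FROM studies WHERE modality = 'MR'").fetchall()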
Lawler, Mark; Maughan, Tim
2017-01-01
The collection, storage and use of genomic and clinical data from patients and healthy individuals is a key component of personalised medicine enterprises such as the Precision Medicine Initiative, the Cancer Moonshot and the 100,000 Genomes Project. In order to maximise the value of this data, it is important to embed a culture within the scientific, medical and patient communities that supports the appropriate sharing of genomic and clinical information. However, this aspiration raises a number of ethical, legal and regulatory challenges that need to be addressed. The Global Alliance for Genomics and Health, a worldwide coalition of researchers, healthcare professionals, patients and industry partners, is developing innovative solutions to support the responsible and effective sharing of genomic and clinical data. This article identifies the challenges that a data sharing culture poses and highlights a series of practical solutions that will benefit patients, researchers and society. PMID:28517986
NASA Astrophysics Data System (ADS)
Geressu, Robel; Harou, Julien
2015-04-01
Water use rights are disputed in many transboundary basins. Even when water projects can benefit all, agreeing on cost and benefit sharing can be difficult where stakeholders have conflicting preferences on the designs and use of proposed water infrastructures. This study suggests a combination of many-objective optimization and multi-criteria ranking methods to support negotiations regarding designs of new assets. The method allows competing users to assess development options based on their individual perspectives and to agree on designs by incorporating coordination strategies into multi-reservoir system designs. We demonstrate a hypothetical negotiation on proposed Blue Nile reservoirs. The results form a set of Pareto-optimal designs, i.e., reservoirs, storage capacities and their operating rules, together with power trade, cost sharing and/or financing coordination strategies, which maximize benefits to all countries and show which trade-offs are implied by which designs. The approach fulfils decision-makers' desire to understand (a) the critical design parameters that affect various objectives and (b) how coordination mechanisms would enable them to capture benefits from proposed new dams.
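A minimal sketch of the core filtering step in such many-objective studies, keeping only the non-dominated designs, assuming all objectives are expressed as scores to maximize; the design names and numbers are invented for illustration.

    # Minimal Pareto filter: keep only non-dominated reservoir designs.
    # Each design is scored on objectives to MAXIMIZE (e.g., energy benefit,
    # storage yield, negated cost); names and numbers are purely illustrative.
    def pareto_front(designs):
        front = []
        for name, scores in designs:
            dominated = any(
                all(o >= s for o, s in zip(other, scores)) and other != scores
                for _, other in designs
            )
            if not dominated:
                front.append(name)
        return front

    designs = [
        ("small dam, trade",    (50, 30, -10)),
        ("large dam, no trade", (80, 60, -40)),
        ("large dam, trade",    (90, 60, -35)),  # dominates the previous row
    ]
    print(pareto_front(designs))  # -> ['small dam, trade', 'large dam, trade']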
Storage system software solutions for high-end user needs
NASA Technical Reports Server (NTRS)
Hogan, Carole B.
1992-01-01
Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.
DAS: A Data Management System for Instrument Tests and Operations
NASA Astrophysics Data System (ADS)
Frailis, M.; Sartor, S.; Zacchei, A.; Lodi, M.; Cirami, R.; Pasian, F.; Trifoglio, M.; Bulgarelli, A.; Gianotti, F.; Franceschi, E.; Nicastro, L.; Conforti, V.; Zoli, A.; Smart, R.; Morbidelli, R.; Dadina, M.
2014-05-01
The Data Access System (DAS) is a data management software system, providing a reusable solution for the storage of data acquired both from telescopes and from auxiliary data sources during instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing and quick-look of data acquired from scientific instruments. The DAS provides a data access layer mainly targeted at software applications: quick-look displays, pre-processing pipelines and scientific workflows. It is logically organized in three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, aimed at user-defined data types; an Application Programming Interface (DAS API), automatically adding classes and methods supporting the DDL data types, and providing an object-oriented query language; and a data management component, which maps the metadata of the DDL data types to a relational Data Base Management System (DBMS) and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data-model specific one in C, C++ and Python. The mapping of metadata in the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle and PostgreSQL.
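A toy Python sketch of the storage pattern described above: metadata mapped into a relational DBMS while the bulk data lands on a shared file system. The type definition and functions are illustrative stand-ins, not the actual DAS DDL or API; sqlite3 stands in for the back-end DBMS.

    # Sketch of the DAS idea: metadata for a user-defined data type goes into
    # a relational DBMS while the bulk data lands on a shared file system.
    # The type definition and calls below are illustrative, not the real DAS DDL/API.
    import sqlite3
    from pathlib import Path

    DATA_ROOT = Path("/shared/fs/das")  # stand-in for the shared (network) file system

    def define_type(con, name, attrs):
        cols = ", ".join(f"{a} TEXT" for a in attrs)
        con.execute(f"CREATE TABLE IF NOT EXISTS {name} ({cols}, payload_path TEXT)")

    def store(con, name, metadata, payload):
        path = DATA_ROOT / name / f"{metadata['obs_id']}.bin"
        # On a real system the payload would be written to the shared FS here:
        # path.parent.mkdir(parents=True, exist_ok=True); path.write_bytes(payload)
        cols = ", ".join(metadata)
        marks = ", ".join("?" for _ in metadata)
        con.execute(f"INSERT INTO {name} ({cols}, payload_path) VALUES ({marks}, ?)",
                    (*metadata.values(), str(path)))

    con = sqlite3.connect(":memory:")
    define_type(con, "raw_frame", ["obs_id", "instrument", "acq_time"])
    store(con, "raw_frame",
          {"obs_id": "0042", "instrument": "TNG-cam", "acq_time": "2014-05-01T02:13"},
          payload=b"\x00" * 16)
    print(con.execute("SELECT payload_path FROM raw_frame WHERE obs_id='0042'").fetchall())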
Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.
2016-12-01
Long-running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate the new technologies with their legacy systems. Traditional file systems lack the support needed to accommodate such hybrid storage infrastructure, forcing more complex tool development to encompass all possible storage architectures used by the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) are an example of a project spanning several decades that has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures seamless integration of all the different storage architectures, from standard block-based POSIX-compliant storage disks, to object-based architectures such as the S3-compliant HGST Active Archive System, to Seagate Kinetic disks using the Kinetic protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application responsible for using metadata to organize the data into a tree, for determining where data is stored, and for providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.
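The uncoupling idea can be sketched as a single namespace that delegates placement and retrieval to interchangeable backends. This is a hedged toy in Python, not the actual LVFS implementation; the backends are in-memory stand-ins for POSIX and object storage.

    # Sketch of the LVFS idea as described above: uncouple the file-system
    # roles (namespace, placement, retrieval) so tools see one namespace
    # over heterogeneous backends. Backends here are toy stand-ins.
    from abc import ABC, abstractmethod

    class Backend(ABC):
        @abstractmethod
        def read(self, key): ...
        @abstractmethod
        def write(self, key, data): ...

    class PosixBackend(Backend):
        """Block/POSIX storage stand-in (a plain dict instead of real disks)."""
        def __init__(self): self.files = {}
        def read(self, key): return self.files[key]
        def write(self, key, data): self.files[key] = data

    class ObjectBackend(Backend):
        """S3/Kinetic-style object storage stand-in."""
        def __init__(self): self.objects = {}
        def read(self, key): return self.objects[key]
        def write(self, key, data): self.objects[key] = data

    class VirtualFS:
        """Single namespace; a placement policy decides which backend holds a path."""
        def __init__(self, backends, place):
            self.backends, self.place = backends, place
        def write(self, path, data): self.backends[self.place(path)].write(path, data)
        def read(self, path): return self.backends[self.place(path)].read(path)

    fs = VirtualFS({"posix": PosixBackend(), "object": ObjectBackend()},
                   place=lambda p: "object" if p.endswith(".hdf") else "posix")
    fs.write("/laads/MOD021KM.hdf", b"granule bytes")   # lands on the object store
    fs.write("/laads/run.log", b"log text")             # lands on the POSIX store
    assert fs.read("/laads/MOD021KM.hdf") == b"granule bytes"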
NASA Astrophysics Data System (ADS)
McKenzie, A. W.
Cost and performance of various thermal storage concepts in a liquid metal receiver solar thermal power system application have been evaluated. The objectives of this study are to provide consistently calculated cost and performance data for thermal storage concepts integrated into solar thermal systems. Five alternative storage concepts are evaluated for a 100-MW(e) liquid metal-cooled receiver solar thermal power system for 1, 6, and 15 hours of storage: sodium 2-tank (reference system), molten draw salt 2-tank, sand moving bed, air/rock, and latent heat (phase change) with tube-intensive heat exchange (HX). The results indicate that the all sodium 2-tank thermal storage concept is not cost-effective for storage in excess of 3 or 4 hours; the molten draw salt 2-tank storage concept provides significant cost savings over the reference sodium 2-tank concept; and the air/rock storage concept with pressurized sodium buffer tanks provides the lowest evaluated cost of all storage concepts considered above 6 hours of storage.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Ownership of an underground storage tank or underground storage tank system or facility or property on which an underground storage tank or underground storage tank system is located. 280.220 Section 280.220 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID...
Subcontracted activities related to TES for building heating and cooling
NASA Technical Reports Server (NTRS)
Martin, J.
1980-01-01
The subcontract program elements related to thermal energy storage for building heating and cooling systems are outlined. The following factors are included: subcontracts in the utility load management application area; life and stability testing of packaged low-cost energy storage materials; and development of thermal energy storage systems for residential space cooling. Resistance storage heater component development, demonstration of storage heater systems for residential applications, and simulation and evaluation of latent heat thermal energy storage (heat pump systems) are also discussed. Applications of thermal energy storage for solar energy and for Twin Cities district heating are covered, including an application analysis and a technology assessment of thermal energy storage.
Carbon Dioxide Emissions Effects of Grid-Scale Electricity Storage in a Decarbonizing Power System
Craig, Michael T.; Jaramillo, Paulina; Hodge, Bri-Mathias
2018-01-03
While grid-scale electricity storage (hereafter 'storage') could be crucial for deeply decarbonizing the electric power system, it would increase carbon dioxide (CO2) emissions in current systems across the United States. To better understand how storage transitions from increasing to decreasing system CO2 emissions, we quantify the effect of storage on operational CO2 emissions as a power system decarbonizes under a moderate and a strong CO2 emission reduction target through 2045. Under each target, we compare the effect of storage on CO2 emissions when storage participates in only energy, only reserve, and energy and reserve markets. We conduct our study in the Electricity Reliability Council of Texas (ERCOT) system and use a capacity expansion model to forecast generator fleet changes and a unit commitment and economic dispatch model to quantify system CO2 emissions with and without storage. We find that storage would increase CO2 emissions in the current ERCOT system, but would decrease CO2 emissions in 2025 through 2045 under both decarbonization targets. Storage reduces CO2 emissions primarily by enabling gas-fired generation to displace coal-fired generation, but also by reducing wind and solar curtailment. We further find that the market in which storage participates drives large differences in the magnitude, but not the direction, of the effect of storage on CO2 emissions.
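The sign flip the authors describe can be reproduced with a toy merit-order dispatch: storage charges from the off-peak marginal unit and displaces the peak marginal unit, so its emissions effect depends on which fuels sit at those two margins. All capacities, loads, and emission rates below are illustrative, not ERCOT data.

    # Toy merit-order illustration of the mechanism described above: storage
    # charges from the off-peak marginal unit and discharges at the peak.
    EMIT = {"wind": 0.0, "gas": 0.4, "coal": 1.0}   # t CO2 per MWh (rough magnitudes)

    def marginal_unit(fleet_order, load, capacity):
        """Return the unit serving the last increment of load in merit order."""
        served = 0
        for unit in fleet_order:
            served += capacity[unit]
            if served >= load:
                return unit
        return fleet_order[-1]

    def storage_co2_delta(fleet_order, capacity, off_peak, peak, mwh=100, rte=0.85):
        charge_unit = marginal_unit(fleet_order, off_peak + mwh / rte, capacity)
        discharge_unit = marginal_unit(fleet_order, peak, capacity)
        # storage adds charging energy from one unit and displaces the peak unit
        return (mwh / rte) * EMIT[charge_unit] - mwh * EMIT[discharge_unit]

    cap = {"wind": 200, "coal": 400, "gas": 600}
    today = ["wind", "coal", "gas"]          # coal on the margin off-peak
    print(storage_co2_delta(today, cap, off_peak=450, peak=1100))   # > 0: emissions rise
    future = ["wind", "gas", "coal"]         # decarbonizing: gas dispatches before coal
    print(storage_co2_delta(future, cap, off_peak=450, peak=1150))  # < 0: emissions fall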
Terrestrial Energy Storage SPS Systems
NASA Technical Reports Server (NTRS)
Brandhorst, Henry W., Jr.
1998-01-01
Terrestrial energy storage systems for the SSP system were evaluated that could maintain the 1.2 GW power level during periods of brief outages from the solar powered satellite (SPS). Short-term outages of ten minutes and long-term outages up to four hours have been identified as "typical" cases where the ground-based energy storage system would be required to supply power to the grid. These brief interruptions in transmission could result from performing maintenance on the solar power satellite or from safety considerations necessitating the power beam be turned off. For example, one situation would be to allow for the safe passage of airplanes through the space occupied by the beam. Under these conditions, the energy storage system needs to be capable of storing 200 MW-hrs and 4.8 GW-hrs, respectively. The types of energy storage systems to be considered include compressed air energy storage, inertial energy storage, electrochemical energy storage, superconducting magnetic energy storage, and pumped hydro energy storage. For each of these technologies, the state-of-the-art in terms of energy and power densities were identified as well as the potential for scaling to the size systems required by the SSP system. Other issues addressed included the performance, life expectancy, cost, and necessary infrastructure and site locations for the various storage technologies.
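The storage sizes quoted in this abstract follow directly from energy = power x time, which a few lines verify:

    # Verifying the quoted SSP ground-storage requirements (200 MW-hrs, 4.8 GW-hrs).
    power_mw = 1200                   # 1.2 GW delivered to the grid
    short_mwh = power_mw * 10 / 60    # ten-minute beam interruption
    long_gwh = power_mw * 4 / 1000    # four-hour outage
    print(short_mwh, "MWh")           # 200.0 MWh
    print(long_gwh, "GWh")            # 4.8 GWh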
Brooks, Kriston P; Holladay, Jamelyn D; Simmons, Kevin L; Herling, Darrell R
2014-11-18
An on-board hydride storage system and process are described. The system includes a slurry storage system that includes a slurry reactor and a variable-concentration slurry. In one preferred configuration, the storage system stores a slurry containing a hydride storage material in a carrier fluid at a first concentration of hydride solids. The slurry reactor receives the slurry containing a second concentration of the hydride storage material and releases hydrogen as a fuel to hydrogen-powered devices and vehicles.
Battery management system with distributed wireless sensors
Farmer, Joseph C.; Bandhauer, Todd M.
2016-02-23
A system for monitoring parameters of an energy storage system having a multiplicity of individual energy storage cells. A radio frequency identification and sensor unit is connected to each of the individual energy storage cells. The radio frequency identification and sensor unit operates to sense the parameter of each individual energy storage cell and provides radio frequency transmission of the parameters of each individual energy storage cell. A management system monitors the radio frequency transmissions from the radio frequency identification and sensor units for monitoring the parameters of the energy storage system.
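A minimal sketch of the monitoring loop such a patent describes: per-cell sensor readings arrive over a radio link (mocked here) and the management system flags out-of-range cells. The thresholds and data layout are assumptions for illustration.

    # Sketch of the patent's monitoring idea: each cell has an RFID+sensor
    # unit that transmits readings; a management system aggregates them.
    # The radio link is mocked with a generator; thresholds are illustrative.
    import random
    from dataclasses import dataclass

    @dataclass
    class CellReading:
        cell_id: int
        voltage: float
        temp_c: float

    def rf_transmissions(n_cells):
        """Stand-in for the RF link: one reading per tagged cell."""
        for cid in range(n_cells):
            yield CellReading(cid, random.uniform(3.0, 4.3), random.uniform(20, 65))

    def monitor(readings, v_max=4.2, t_max=60.0):
        """Flag cells whose voltage or temperature exceeds its limit."""
        return [r for r in readings if r.voltage > v_max or r.temp_c > t_max]

    for alarm in monitor(rf_transmissions(16)):
        print(f"cell {alarm.cell_id}: V={alarm.voltage:.2f}, T={alarm.temp_c:.1f}C")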
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)
2002-01-01
This document contains copies of those technical papers received in time for publication prior to the Tenth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Nineteenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center April 15-18, 2002. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the ingest, storage, and management of large volumes of data. The Conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, future of current technology, storage networking with emphasis on IP storage, performance, standards, site reports, and vendor solutions. Tutorials will be available on perpendicular magnetic recording, object based storage, storage virtualization and IP storage.
Energy Storage Laboratory | Energy Systems Integration Facility | NREL
Key infrastructure includes an energy storage system inverter and energy storage system simulators for research on energy storage technologies. The plug-in vehicles/mobile storage hub includes connections for small-scale vehicle integration, with ample house power, REDB access, charging stations, and easy vehicle parking access.
He, Qing; Hao, Yinping; Liu, Hui; Liu, Wenyi
2018-01-01
Super-critical carbon dioxide energy storage (SC-CCES) is a new type of gas energy-storage technology. This paper uses the orthogonal method and variance analysis to determine which factors significantly affect the thermodynamic characteristics of the SC-CCES system, identifying the significant factors and interactions in the energy-storage process, the energy-release process and the system as a whole. The results show that interactions among components have little influence on the energy-storage process, the energy-release process and the overall energy-storage behaviour of the SC-CCES system; the significant factors relate mainly to the characteristics of the individual system components, which provides a reference for optimizing the thermal properties of the energy-storage system. PMID:29634742
Team Leader: Tom Peters--TAP Information Services
ERIC Educational Resources Information Center
Library Journal, 2005
2005-01-01
Tom Peters packs 36 hours of work into the confines of a 24-hour day. Without breaking a sweat, he juggles multiple collaborative projects, which currently include an Illinois academic library shared storage facility; a multistate virtual reference and instruction service for blind and visually impaired individuals (InfoEyes); a virtual meeting…
Policies | High-Performance Computing | NREL
Accountability and Use: learn about the policies governing user accountability, resource use, and use by foreign nationals. Data Security: learn about the data security policy, including data protection. Data Retention: learn about the data retention policy, including project-centric and user-centric data. Shared Storage Usage: learn about the shared storage usage policy.
DEVELOPMENT OF THE U.S. EPA HEALTH EFFECTS RESEARCH LABORATORY FROZEN BLOOD CELL REPOSITORY PROGRAM
In previous efforts, we suggested that proper blood cell freezing and storage is necessary in longitudinal studies with reduced between-test error, for specimen sharing between laboratories and for convenient scheduling of assays. We continue to develop and upgrade programs for o...
Grossman, Robert L.; Heath, Allison; Murphy, Mark; Patterson, Maria; Wells, Walt
2017-01-01
Data commons collocate data, storage, and computing infrastructure with core services and commonly used tools and applications for managing, analyzing, and sharing data to create an interoperable resource for the research community. An architecture for data commons is described, as well as some lessons learned from operating several large-scale data commons. PMID:29033693
NREL Tests Energy Storage System to Fill Renewable Gaps | News | NREL
A -megawatt energy storage system from Renewable Energy Systems (RES) Americas will assist research that aims to optimize the grid for wind and solar plants. The system arrived at NREL's National Wind Technology Center.
The Materials Data Facility: Data Services to Advance Materials Science Research
NASA Astrophysics Data System (ADS)
Blaiszik, B.; Chard, K.; Pruyne, J.; Ananthakrishnan, R.; Tuecke, S.; Foster, I.
2016-08-01
With increasingly strict data management requirements from funding agencies and institutions, expanding focus on the challenges of research replicability, and growing data sizes and heterogeneity, new data needs are emerging in the materials community. The materials data facility (MDF) operates two cloud-hosted services, data publication and data discovery, with features to promote open data sharing, self-service data publication and curation, and encourage data reuse, layered with powerful data discovery tools. The data publication service simplifies the process of copying data to a secure storage location, assigning data a citable persistent identifier, and recording custom (e.g., material, technique, or instrument specific) and automatically-extracted metadata in a registry while the data discovery service will provide advanced search capabilities (e.g., faceting, free text range querying, and full text search) against the registered data and metadata. The MDF services empower individual researchers, research projects, and institutions to (I) publish research datasets, regardless of size, from local storage, institutional data stores, or cloud storage, without involvement of third-party publishers; (II) build, share, and enforce extensible domain-specific custom metadata schemas; (III) interact with published data and metadata via representational state transfer (REST) application program interfaces (APIs) to facilitate automation, analysis, and feedback; and (IV) access a data discovery model that allows researchers to search, interrogate, and eventually build on existing published data. We describe MDF's design, current status, and future plans.
The Design of Distributed Micro Grid Energy Storage System
NASA Astrophysics Data System (ADS)
Liang, Ya-feng; Wang, Yan-ping
2018-03-01
When a distributed micro-grid runs in island mode, the energy storage system is the core component that maintains stable operation. The existing fixed-connection energy storage structure is difficult to adjust during operation and can easily cause volatility in the micro-grid. In this paper, an array-type energy storage structure is proposed, and its structure and working principle are analyzed. Finally, a model of the array-type energy storage structure is established in MATLAB; the simulation results show that the array-type energy storage system has great flexibility, which can maximize the utilization of the energy storage system, guarantee reliable operation of the distributed micro-grid and achieve peak clipping and valley filling.
Telemetry data storage systems technology for the Space Station Freedom era
NASA Technical Reports Server (NTRS)
Dalton, John T.
1989-01-01
This paper examines the requirements and functions of the telemetry-data recording and storage systems, and the data-storage-system technology projected for the Space Station, with particular attention given to the Space Optical Disk Recorder, an on-board storage subsystem based on 160-gigabit erasable optical disk units, each capable of operating at 300 megabits per second. Consideration is also given to storage systems for ground transport recording, which include systems for data capture, buffering, processing, and delivery on the ground. These can be categorized as first-in, first-out storage; fast random-access storage; and slow access with staging. Based on projected mission manifests and data rates, worst-case requirements were developed for these three storage architecture functions. The results of the analysis are presented.
Conceptual design of thermal energy storage systems for near-term electric utility applications
NASA Technical Reports Server (NTRS)
Hall, E. W.
1980-01-01
Promising thermal energy storage systems for midterm applications in conventional electric utilities for peaking power generation are evaluated. Conceptual designs of selected thermal energy storage systems integrated with conventional utilities are considered, including characteristics of alternate systems for peaking power generation, viz., gas turbines and coal-fired cycling plants. A competitive benefit analysis of thermal energy storage systems against alternate systems for peaking power generation and recommendations for development and field test of thermal energy storage with a conventional utility are included. Results indicate that thermal energy storage is only marginally competitive with coal-fired cycling power plants and gas turbines for peaking power generation.
Manufacturing Competitiveness and Supply Chain Analyses for Hydrogen Refueling Stations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayyas, Ahmad T; Garland, Nancy
This slide deck was presented in the monthly FCTO webinar series (May 2017). The goal of this presentation was to share our latest results and remarks on the manufacturing competitiveness analysis of the hydrogen refueling stations (HRS). Manufacturing cost models were developed for major systems in the HRS such as compressors, storage tanks, chillers, heat exchangers, and dispensers. In addition to the cost models, we also discussed important remarks from our analysis for the international trade flows and global supply chain for the hydrogen refueling stations. The last part of the presentation also highlights the effect of economies of scale and high production volumes on lowering the cost of the hydrogen at the pump.
49 CFR 173.311 - Metal hydride storage systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 2 2013-10-01 2013-10-01 false Metal hydride storage systems. 173.311 Section 173... REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Gases; Preparation and Packaging § 173.311 Metal hydride storage systems. The following packing instruction is applicable to transportable UN Metal hydride storage systems...
49 CFR 173.311 - Metal hydride storage systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 2 2011-10-01 2011-10-01 false Metal hydride storage systems. 173.311 Section 173... REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Gases; Preparation and Packaging § 173.311 Metal hydride storage systems. The following packing instruction is applicable to transportable UN Metal hydride storage systems...
49 CFR 173.311 - Metal hydride storage systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 2 2014-10-01 2014-10-01 false Metal hydride storage systems. 173.311 Section 173... REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Gases; Preparation and Packaging § 173.311 Metal hydride storage systems. The following packing instruction is applicable to transportable UN Metal hydride storage systems...
Online mass storage system detailed requirements document
NASA Technical Reports Server (NTRS)
1976-01-01
The requirements for an online high-density magnetic tape data storage system that can be implemented in a multipurpose, multihost environment are set forth. The objective of the mass storage system is to provide a facility for the compact storage of large quantities of data and to make this data accessible to computer systems with minimum operator handling. The results of a market survey and analysis of candidate vendors who presently market high-density tape data storage systems are included.
Chemical hydrogen storage material property guidelines for automotive applications
NASA Astrophysics Data System (ADS)
Semelsberger, Troy A.; Brooks, Kriston P.
2015-04-01
Chemical hydrogen storage is the sought-after hydrogen storage medium for automotive applications because of the expected low pressure operation (<20 atm), moderate temperature operation (<200 °C), system gravimetric capacities (>0.05 kg H2/kg system), and system volumetric capacities (>0.05 kg H2/L system). Currently, the primary shortcomings of chemical hydrogen storage are regeneration efficiency, fuel cost and fuel phase (i.e., solid or slurry phase). Understanding the required material properties to meet the DOE Technical Targets for Onboard Hydrogen Storage Systems is a critical knowledge gap in the hydrogen storage research community. This study presents a set of fluid-phase chemical hydrogen storage material property guidelines for automotive applications meeting the 2017 DOE technical targets. Viable material properties were determined using a boiler-plate automotive system design. The fluid-phase chemical hydrogen storage media considered in this study were neat liquids, solutions, and non-settling homogeneous slurries. Material properties examined include kinetics, heats of reaction, fuel-cell impurities, gravimetric and volumetric hydrogen storage capacities, and regeneration efficiency. The material properties, although not exhaustive, are an essential first step in identifying viable chemical hydrogen storage material properties, and, most important, their implications on system mass, system volume and system performance.
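The gravimetric and volumetric targets quoted above translate directly into mass and volume budgets for a complete system. A minimal worked example, assuming a nominal 5.6 kg of usable hydrogen (a common light-duty figure, not taken from this abstract):

    # Back-of-envelope budgets implied by the >0.05 kg H2/kg and >0.05 kg H2/L targets.
    usable_h2_kg = 5.6        # assumed usable hydrogen for a light-duty vehicle
    grav_capacity = 0.05      # kg H2 per kg of system
    vol_capacity = 0.05       # kg H2 per litre of system

    max_system_mass_kg = usable_h2_kg / grav_capacity   # 112 kg
    max_system_volume_l = usable_h2_kg / vol_capacity   # 112 L
    print(f"mass budget {max_system_mass_kg:.0f} kg, volume budget {max_system_volume_l:.0f} L")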
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine
High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows, to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system even when processing data at a lower security level and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state of the art in virtualization, reconfigurable network enclaving via Software Defined Networking (SDN), and storage architectures and bridging techniques for creating secure enclaves in HPC environments.
Sequential data access with Oracle and Hadoop: a performance comparison
NASA Astrophysics Data System (ADS)
Baranowski, Zbigniew; Canali, Luca; Grancher, Eric
2014-06-01
The Hadoop framework has proven to be an effective and popular approach for dealing with "Big Data" and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's "shared nothing" architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions like MapReduce or HBase for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.
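The quantity at stake in such a comparison is sequential-scan throughput. A minimal, technology-agnostic probe of that number, assuming only a test file path (this is not the authors' benchmark harness):

    # Minimal sequential-read throughput probe (illustrative, not the paper's harness).
    import time

    def scan_throughput_mb_s(path, chunk_mb=8):
        """Read a file front to back in large chunks and report MB/s."""
        chunk = chunk_mb * 1024 * 1024
        total = 0
        start = time.perf_counter()
        with open(path, "rb") as f:
            while True:
                buf = f.read(chunk)
                if not buf:
                    break
                total += len(buf)
        return total / (1024 * 1024) / (time.perf_counter() - start)

    # print(scan_throughput_mb_s("/data/testfile.bin"))  # hypothetical path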
Application of electrochemical energy storage in solar thermal electric generation systems
NASA Technical Reports Server (NTRS)
Das, R.; Krauthamer, S.; Frank, H.
1982-01-01
This paper assesses the status, cost, and performance of existing electrochemical energy storage systems, and projects the cost, performance, and availability of advanced storage systems for application in terrestrial solar thermal electric generation. A 10 MWe solar plant with five hours of storage is considered and the cost of delivered energy is computed for sixteen different storage systems. The results indicate that the five most attractive electrochemical storage systems use the following battery types: zinc-bromine (Exxon), iron-chromium redox (NASA/Lewis Research Center, LeRC), sodium-sulfur (Ford), sodium-sulfur (Dow), and zinc-chlorine (Energy Development Associates, EDA).
Capacity value of energy storage considering control strategies.
Shi, Nian; Luo, Yi
2017-01-01
In power systems, energy storage effectively improves the reliability of the system and smooths out the fluctuations of intermittent energy. However, the installed capacity value of energy storage cannot effectively measure the contribution of energy storage to the generator adequacy of power systems. To achieve a variety of purposes, several control strategies may be utilized in energy storage systems. The purpose of this paper is to study the influence of different energy storage control strategies on the generation adequacy. This paper presents the capacity value of energy storage to quantitatively estimate the contribution of energy storage on the generation adequacy. Four different control strategies are considered in the experimental method to study the capacity value of energy storage. Finally, the analysis of the influence factors on the capacity value under different control strategies is given.
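One concrete way to express "contribution to generation adequacy" is to compare a reliability index, such as loss-of-load hours, with and without the device under a given control strategy. A toy sketch of that comparison for a simple discharge-at-shortfall strategy (all figures invented):

    # Toy adequacy metric: loss-of-load hours with and without storage.
    def loss_of_load_hours(load, capacity, storage_mw=0.0, storage_mwh=0.0):
        """Count hours where demand exceeds generation plus feasible discharge."""
        soc, lole = storage_mwh, 0            # storage starts full
        for demand in load:
            surplus = capacity - demand
            if surplus >= 0:
                # recharge, limited by power rating and room left in the store
                soc = min(storage_mwh, soc + min(surplus, storage_mw))
            else:
                discharge = min(-surplus, storage_mw, soc)
                soc -= discharge
                if -surplus - discharge > 1e-9:
                    lole += 1
        return lole

    hourly_load = [80, 90, 120, 150, 140, 100, 85, 95] * 3   # invented MW profile
    print(loss_of_load_hours(hourly_load, capacity=130))                                  # 6
    print(loss_of_load_hours(hourly_load, capacity=130, storage_mw=20, storage_mwh=40))   # 0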
Behavioral Reference Model for Pervasive Healthcare Systems.
Tahmasbi, Arezoo; Adabi, Sahar; Rezaee, Ali
2016-12-01
The emergence of mobile healthcare systems is an important outcome of applying pervasive computing concepts to medical care. These systems provide the facilities and infrastructure required for automatic and ubiquitous sharing of medical information. Healthcare systems have a dynamic structure and configuration; therefore, having an architecture is essential for their future development. The need for increased response rates, limited storage, accelerated processing, and similar demands, together with the tendency toward a new generation of healthcare system architectures, highlights the need for further focus on cloud-based solutions to data transfer and data processing challenges. Integrity and reliability of healthcare systems are of critical importance, as even the slightest error may put patients' lives in danger; therefore, acquiring a behavioral model for these systems and developing the tools required to model their behaviors are of significant importance. High-level designs may contain flaws, so the system must be fully examined under different scenarios and conditions. This paper presents a software architecture for the development of healthcare systems based on pervasive computing concepts, and then models the behavior of the described system. A set of solutions is then proposed to improve the design's qualitative characteristics, including availability, interoperability and performance.
Rosenberg, Amy F
2016-10-01
UF Health's participation in a mentored quality-improvement impact program for health professionals as part of an ASHP initiative-"Strategies for Ensuring the Safe Use of Insulin Pens in the Hospital"-is described. ASHP invited hospitals to participate in its initiative at a time when UF Health was evaluating the risks and benefits of insulin pen use due to external reports of safety concerns and making a commitment to continue insulin pen use and optimize safeguards. Improvement opportunities in insulin pen best practices and staff education on insulin pen preparation and injection technique were identified and implemented. The storage of insulin pens for patients with contact isolation precautions was identified as a problem in certain patient care areas, and a practical solution was devised. Other process improvements included implementation of barcode medication administration, with scanning of insulin pens designated for specific patients to avoid inadvertent and intentional sharing of pens among multiple patients. Mentored calls with teams at other hospitals conducted as part of the program provided the opportunity to share experiences and solutions to improve insulin pen use. Participating with a knowledgeable mentor and other hospital teams struggling with the same issues and concerns related to safe insulin pen use facilitated problem solving. Discussing challenges and sharing ideas for solutions to safety concerns with other hospitals identified new process enhancements, which have the potential to improve the safety of insulin pen use at UF Health. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Carbon dioxide emissions effects of grid-scale electricity storage in a decarbonizing power system
NASA Astrophysics Data System (ADS)
Craig, Michael T.; Jaramillo, Paulina; Hodge, Bri-Mathias
2018-01-01
While grid-scale electricity storage (hereafter 'storage') could be crucial for deeply decarbonizing the electric power system, it would increase carbon dioxide (CO2) emissions in current systems across the United States. To better understand how storage transitions from increasing to decreasing system CO2 emissions, we quantify the effect of storage on operational CO2 emissions as a power system decarbonizes under a moderate and strong CO2 emission reduction target through 2045. Under each target, we compare the effect of storage on CO2 emissions when storage participates in only energy, only reserve, and energy and reserve markets. We conduct our study in the Electric Reliability Council of Texas (ERCOT) system and use a capacity expansion model to forecast generator fleet changes and a unit commitment and economic dispatch model to quantify system CO2 emissions with and without storage. We find that storage would increase CO2 emissions in the current ERCOT system, but would decrease CO2 emissions in 2025 through 2045 under both decarbonization targets. Storage reduces CO2 emissions primarily by enabling gas-fired generation to displace coal-fired generation, but also by reducing wind and solar curtailment. We further find that the market in which storage participates drives large differences in the magnitude, but not the direction, of the effect of storage on CO2 emissions.
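The displacement mechanism described, storage shifting energy between hours with different marginal units, can be illustrated with a toy merit-order dispatch; in a coal-heavy system, charging off-peak adds coal generation while discharging at peak displaces gas, raising net CO2 (all plant data invented):

    # Toy merit-order dispatch: how storage arbitrage changes system CO2.
    UNITS = [  # (name, capacity MW, emissions tCO2/MWh), listed in merit order
        ("wind", 50, 0.0),
        ("coal", 60, 1.0),
        ("gas",  80, 0.4),
    ]

    def dispatch_co2(load_mw):
        """Fill demand in merit order; return the hour's emissions in tCO2."""
        remaining, co2 = load_mw, 0.0
        for _name, cap, rate in UNITS:
            gen = min(cap, remaining)
            co2 += gen * rate
            remaining -= gen
        return co2

    # Storage charges 10 MWh off-peak and returns 9 MWh at peak (90% round trip).
    offpeak, peak = 70, 160
    print(dispatch_co2(offpeak) + dispatch_co2(peak))            # 100.0 without storage
    print(dispatch_co2(offpeak + 10) + dispatch_co2(peak - 9))   # 106.4 with storage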
NASA Astrophysics Data System (ADS)
Seitz, M.; Hübner, S.; Johnson, M.
2016-05-01
Direct steam generation enables the implementation of a higher steam temperature for parabolic trough concentrated solar power plants. This leads to much better cycle efficiencies and lower electricity generating costs. For a flexible and more economic operation of such a power plant, it is necessary to develop thermal energy storage systems to extend the production time of the power plant. With steam as the heat transfer fluid, it is important to use a storage material that uses latent heat for the storage process. This leads to a minimum of exergy losses during the storage process. In the case of a concentrating solar power plant, superheated steam is needed during the discharging process. This steam cannot be superheated by the latent heat storage system; therefore, a sensible molten salt storage system is used for this task. In contrast to state-of-the-art thermal energy storage within the concentrating solar power area of application, a storage system for a direct steam generation plant consists of a latent and a sensible storage part. Thus far, no partial load behaviors of sensible and latent heat storage systems have been analyzed in detail. In this work, an optimized fin structure was developed in order to minimize the costs of the latent heat storage. A complete system simulation of the power plant process, including the solar field, power block and sensible and latent heat energy storage, calculates the interaction between the solar field, the power block and the thermal energy storage system.
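The split of duties between the two storage parts follows from the energy balance for taking a storage mass m from T_1 through its melting point T_m to T_2; only the middle, isothermal term suits the evaporation duty, while the sensible terms handle superheating. This is the standard textbook relation, not an equation taken from the paper:

    Q = m \left[ c_{p,s}\,(T_m - T_1) + \Delta h_{\mathrm{fus}} + c_{p,l}\,(T_2 - T_m) \right]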
NASA Technical Reports Server (NTRS)
Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
This report contains copies of nearly all of the technical papers and viewgraphs presented at the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Application. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include the following: magnetic disk and tape technologies; optical disk and tape; software storage and file management systems; and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
NASA Technical Reports Server (NTRS)
Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
This report contains copies of nearly all of the technical papers and viewgraphs presented at the National Space Science Data Center (NSSDC) Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.
Comparison of advanced thermal and electrical storage for parabolic dish solar thermal power systems
NASA Astrophysics Data System (ADS)
Fujita, T.; Birur, G. C.; Schredder, J. M.; Bowyer, J. M.; Awaya, H. I.
Parabolic dish solar concentrator cluster concepts are explored, with attention given to thermal storage systems coupled to Stirling and Brayton cycle power conversion devices. Sensible heat storage involving molten salt (NaOH), liquid sodium, and solid cordierite bricks are considered for 1500 F thermal storage systems. Latent heat storage with NaF-MgF2 phase change materials are explored in terms of passive, active, and direct contact designs. Comparisons are made of the effectiveness of thermal storage relative to redox, Na-S, Zn-Cl, and Zn-Br battery storage systems. Molten lead trickling down through a phase change eutectic, the NaF-MgF2, formed the direct contact system. Heat transport in all systems is effected through Inconel pipes. Using a cost goal of 120-150 mills/kWh as the controlling parameter, sensible heat systems with molten salts transport with either Stirling or Brayton engines, or latent heat systems with Stirling engines, and latent heat-Brayton engine with direct contact were favored in the analyses. Battery storage systems, however, offered the most flexibility of applications.
Comparison of advanced thermal and electrical storage for parabolic dish solar thermal power systems
NASA Technical Reports Server (NTRS)
Fujita, T.; Birur, G. C.; Schredder, J. M.; Bowyer, J. M.; Awaya, H. I.
1982-01-01
Parabolic dish solar concentrator cluster concepts are explored, with attention given to thermal storage systems coupled to Stirling and Brayton cycle power conversion devices. Sensible heat storage involving molten salt (NaOH), liquid sodium, and solid cordierite bricks are considered for 1500 F thermal storage systems. Latent heat storage with NaF-MgF2 phase change materials are explored in terms of passive, active, and direct contact designs. Comparisons are made of the effectiveness of thermal storage relative to redox, Na-S, Zn-Cl, and Zn-Br battery storage systems. Molten lead trickling down through a phase change eutectic, the NaF-MgF2, formed the direct contact system. Heat transport in all systems is effected through Inconel pipes. Using a cost goal of 120-150 mills/kWh as the controlling parameter, sensible heat systems with molten salts transport with either Stirling or Brayton engines, or latent heat systems with Stirling engines, and latent heat-Brayton engine with direct contact were favored in the analyses. Battery storage systems, however, offered the most flexibility of applications.
40 CFR 280.230 - Operating an underground storage tank or underground storage tank system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... underground storage tank or underground storage tank system. (a) Operating an UST or UST system prior to...) Operating an UST or UST system after foreclosure. The following provisions apply to a holder who, through..., the purchaser must decide whether to operate or close the UST or UST system in accordance with...
40 CFR 280.230 - Operating an underground storage tank or underground storage tank system.
Code of Federal Regulations, 2011 CFR
2011-07-01
... underground storage tank or underground storage tank system. (a) Operating an UST or UST system prior to...) Operating an UST or UST system after foreclosure. The following provisions apply to a holder who, through..., the purchaser must decide whether to operate or close the UST or UST system in accordance with...
40 CFR 280.230 - Operating an underground storage tank or underground storage tank system.
Code of Federal Regulations, 2014 CFR
2014-07-01
... underground storage tank or underground storage tank system. (a) Operating an UST or UST system prior to...) Operating an UST or UST system after foreclosure. The following provisions apply to a holder who, through..., the purchaser must decide whether to operate or close the UST or UST system in accordance with...
40 CFR 280.230 - Operating an underground storage tank or underground storage tank system.
Code of Federal Regulations, 2012 CFR
2012-07-01
... underground storage tank or underground storage tank system. (a) Operating an UST or UST system prior to...) Operating an UST or UST system after foreclosure. The following provisions apply to a holder who, through..., the purchaser must decide whether to operate or close the UST or UST system in accordance with...
40 CFR 280.230 - Operating an underground storage tank or underground storage tank system.
Code of Federal Regulations, 2013 CFR
2013-07-01
... underground storage tank or underground storage tank system. (a) Operating an UST or UST system prior to...) Operating an UST or UST system after foreclosure. The following provisions apply to a holder who, through..., the purchaser must decide whether to operate or close the UST or UST system in accordance with...
75 FR 27463 - List of Approved Spent Fuel Storage Casks: NUHOMS® HD System Revision 1; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-17
... Fuel Storage Casks: NUHOMS[supreg] HD System Revision 1; Correction AGENCY: Nuclear Regulatory... fuel storage casks to add revision 1 to the NUHOMS HD spent fuel storage cask system. This action is... Federal Register on May 7, 2010 (75 FR 25120), that proposes to amend the regulations that govern storage...
National Storage Laboratory: a collaborative research project
NASA Astrophysics Data System (ADS)
Coyne, Robert A.; Hulen, Harry; Watson, Richard W.
1993-01-01
The grand challenges of science and industry that are driving computing and communications have created corresponding challenges in information storage and retrieval. An industry-led collaborative project has been organized to investigate technology for storage systems that will be the future repositories of national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory through its National Energy Research Supercomputer Center (NERSC) will participate in the project as the operational site and provider of applications. The expected result is the creation of a National Storage Laboratory to serve as a prototype and demonstration facility. It is expected that this prototype will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte-class files at gigabit-per-second data rates. Specifically, the collaboration expects to make significant advances in hardware, software, and systems technology in four areas of need, (1) network-attached high performance storage; (2) multiple, dynamic, distributed storage hierarchies; (3) layered access to storage system services; and (4) storage system management.
Electrochemical energy storage systems for solar thermal applications
NASA Technical Reports Server (NTRS)
Krauthamer, S.; Frank, H.
1980-01-01
Existing and advanced electrochemical storage and inversion/conversion systems that may be used with terrestrial solar-thermal power systems are evaluated. The status, cost and performance of existing storage systems are assessed, and the cost, performance, and availability of advanced systems are projected. A prime consideration is the cost of delivered energy from plants utilizing electrochemical storage. Results indicate that the five most attractive electrochemical storage systems are: iron-chromium redox (NASA LeRC), zinc-bromine (Exxon), sodium-sulfur (Ford), sodium-sulfur (Dow), and zinc-chlorine (EDA).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidelberg, S T; Fitzgerald, K J; Richmond, G H
2006-01-24
There has been substantial development of the Lustre parallel filesystem prior to the configuration described below for this milestone. The initial Lustre filesystems that were deployed were directly connected to the cluster interconnect, i.e. Quadrics Elan3. That is, the object storage servers (OSSes) and metadata servers (MDS) were all directly connected to the cluster's internal high speed interconnect. This configuration serves a single cluster very well, but does not provide sharing of the filesystem among clusters. LLNL funded the development of high-efficiency "portals router" code by CFS (the company that develops Lustre) to enable us to move the Lustre servers to a GigE-connected network configuration, thus making it possible to connect to the servers from several clusters. With portals routing available, here is what changes: (1) another storage-only cluster is deployed to front the Lustre storage devices (these become the Lustre OSSes and MDS), (2) this "Lustre cluster" is attached via GigE connections to a large GigE switch/router cloud, (3) a small number of compute-cluster nodes are designated as "gateway" or "portal router" nodes, and (4) the portals router nodes are GigE-connected to the switch/router cloud. The Lustre configuration is then changed to reflect the new network paths. A typical example of this is a compute cluster and a related visualization cluster: the compute cluster produces the data (writes it to the Lustre filesystem), and the visualization cluster consumes some of the data (reads it from the Lustre filesystem). This process can be expanded by aggregating several collections of Lustre backend storage resources into one or more "centralized" Lustre filesystems, and then arranging to have several "client" clusters mount these centralized filesystems. The "client clusters" can be any combination of compute, visualization, archiving, or other types of cluster. This milestone demonstrates the operation and performance of a scaled-down version of such a large, centralized, shared Lustre filesystem concept.
Smartphone-coupled rhinolaryngoscopy at the point of care
NASA Astrophysics Data System (ADS)
Mink, Jonah; Bolton, Frank J.; Sebag, Cathy M.; Peterson, Curtis W.; Assia, Shai; Levitz, David
2018-02-01
Rhinolaryngoscopy remains difficult to perform in resource-limited settings due to the high cost of purchasing and maintaining equipment as well as the need for specialists to interpret exam findings. While the lack of expertise can be obviated by adopting telemedicine-based approaches, the capture, storage, and sharing of images/video is not a common native functionality of medical devices. Most rhinolaryngoscopy systems consist of an endoscope that interfaces with the patient's naso/oropharynx, and a tower of modules that record video/images. However, these expensive and bulky modules can be replaced by a smartphone that can fulfill the same functions but at a lower cost. To demonstrate this, a commercially available rhinolaryngoscope was coupled to a smartphone using a 3D-printed adapter. Software developed for other clinical applications was repurposed for ENT use, including an application that controls image and video capture, a HIPAA-compliant image/video storage and transfer cloud database, and customized software features developed to improve practitioner competency. Audio recording capabilities to assess speech pathology were also integrated into the smartphone rhinolaryngoscope system. The illumination module coupled onto the endoscope remained unchanged. The spatial resolution of the rhinolaryngoscope system was defined by the fiber diameter of endoscope fiber bundle, rather than the smartphone camera. The mobile rhinolaryngoscope system was used with appropriate patients by a general practitioner in an office setting. The general practitioner then consulted with an ENT specialist via the HIPAA compliant cloud database and workflow modules on difficult cases. These results suggest the smartphone-based rhinolaryngoscope holds promise for use in low-resource settings.
The state of energy storage in electric utility systems and its effect on renewable energy resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, N S
1994-08-01
This report describes the state of the art of electric energy storage technologies and discusses how adding intermittent renewable energy technologies (IRETs) to a utility network affects the benefits from storage dispatch. Load leveling was the mode of storage dispatch examined in the study. However, the report recommended that other modes be examined in the future for kilowatt and kilowatt-hour optimization of storage. The motivation to install storage with IRET generation can arise from two considerations: reliability and enhancement of the value of energy. Because adding storage increases cost, reliability-related storage is attractive only if the accruing benefits exceed the cost of storage installation. The study revealed that the operation of storage should not be guided by the output of the IRET but rather by system marginal costs. Consequently, in planning studies to quantify benefits, storage should be considered as an entity belonging to the system, not as a component of IRETs. The study also indicated that because the infusion of IRET energy tends to reduce system marginal cost, the benefits from load leveling (value of energy) would be reduced. However, if a system has storage, particularly if the storage is underutilized, its dispatch can be reoriented to enhance the benefits of IRET integration.
Rossi, Elena; Rosa, Manuela; Rossi, Lorenzo; Priori, Alberto; Marceglia, Sara
2014-12-01
The web-based systems available for multi-centre clinical trials do not combine clinical data collection (Electronic Health Records, EHRs) with signal processing storage and analysis tools. However, in pathophysiological research, the correlation between clinical data and signals is crucial for uncovering the underlying neurophysiological mechanisms. A specific example is the investigation of the mechanisms of action for Deep Brain Stimulation (DBS) used for Parkinson's Disease (PD); the neurosignals recorded from the DBS target structure and clinical data must be investigated. The aim of this study is the development and testing of a new system dedicated to a multi-centre study of Parkinson's Disease that integrates biosignal analysis tools and data collection in a shared and secure environment. We designed a web-based platform (WebBioBank) for managing the clinical data and biosignals of PD patients treated with DBS in different clinical research centres. Homogeneous data collection was ensured in the different centres (Operative Units, OUs). The anonymity of the data was preserved using unique identifiers associated with patients (ID BACs). The patients' personal details and their equivalent ID BACs were archived inside the corresponding OU and were not uploaded to the web-based platform; data sharing occurred using the ID BACs. The system allowed researchers to upload different signal processing functions (as .dll files) onto the web-based platform and to combine them to define dedicated algorithms. Four clinical research centres used WebBioBank for 1 year. The clinical data from 58 patients treated using DBS were managed, and 186 biosignals were uploaded and classified into 4 categories based on the treatment (pharmacological and/or electrical). The users' satisfaction mean score exceeded the satisfaction threshold. WebBioBank enabled anonymous data sharing for a clinical study conducted at multiple centres and demonstrated the capabilities of the signal processing chain configuration as well as its effectiveness and efficiency for integrating the neurophysiological results with clinical data in multi-centre studies, which will allow the future collection of homogeneous data in large cohorts of patients. Copyright © 2014 Elsevier Inc. All rights reserved.
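The anonymisation scheme described, personal details held inside each OU and only an opaque ID BAC uploaded, amounts to a local pseudonym table. A minimal sketch of the idea (class and field names are hypothetical, not WebBioBank code):

    # Sketch of OU-local pseudonymisation: only the ID BAC ever leaves the unit.
    import secrets

    class LocalRegistry:
        """Mapping from personal details to ID BAC; kept inside the OU, never uploaded."""
        def __init__(self):
            self._by_patient = {}

        def id_bac_for(self, patient_key):
            if patient_key not in self._by_patient:
                self._by_patient[patient_key] = "BAC-" + secrets.token_hex(8)
            return self._by_patient[patient_key]

    registry = LocalRegistry()
    record = {"id_bac": registry.id_bac_for("Rossi, Mario, 1948-03-02"),
              "updrs_score": 32}          # clinical data shared under the pseudonym
    # 'record' is what would be uploaded; the mapping table stays local.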
Performance Evaluation of Peer-to-Peer Progressive Download in Broadband Access Networks
NASA Astrophysics Data System (ADS)
Shibuya, Megumi; Ogishi, Tomohiko; Yamamoto, Shu
P2P (Peer-to-Peer) file sharing architectures have scalable and cost-effective features. Hence, the application of P2P architectures to media streaming is attractive and expected to be an alternative to the current video streaming using IP multicast or content delivery systems because the current systems require expensive network infrastructures and large scale centralized cache storage systems. In this paper, we investigate the P2P progressive download enabling Internet video streaming services. We demonstrated the capability of the P2P progressive download in both laboratory test network as well as in the Internet. Through the experiments, we clarified the contribution of the FTTH links to the P2P progressive download in the heterogeneous access networks consisting of FTTH and ADSL links. We analyzed the cause of some download performance degradation occurred in the experiment and discussed about the effective methods to provide the video streaming service using P2P progressive download in the current heterogeneous networks.
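Progressive download avoids stalling whenever the buffer built up before playback covers the worst-case gap between download and playback rates. A small sketch of that startup-delay condition (standard streaming arithmetic, not taken from the paper):

    # Minimum startup delay so playback never outruns a steady download.
    def startup_delay_s(file_mb, duration_s, download_mbps):
        """Seconds to buffer before playing; 0 if the link outpaces the bitrate."""
        playback_mbps = file_mb * 8 / duration_s        # average media bitrate
        if download_mbps >= playback_mbps:
            return 0.0
        # Require d*(t0 + t) >= p*t for all t <= duration  =>  t0 = D*(p - d)/d
        return duration_s * (playback_mbps - download_mbps) / download_mbps

    # e.g. a 700 MB, 1-hour video over a 1.2 Mbit/s ADSL link:
    print(f"{startup_delay_s(700, 3600, 1.2):.0f} s")   # roughly 18 minutes of buffering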
Booly: a new data integration platform.
Do, Long H; Esteves, Francisco F; Karten, Harvey J; Bier, Ethan
2010-10-13
Data integration is an escalating problem in bioinformatics. We have developed a web tool and warehousing system, Booly, that features a simple yet flexible data model coupled with the ability to perform powerful comparative analysis, including the use of Boolean logic to merge datasets together, and an integrated aliasing system to decipher differing names of the same gene or protein. Furthermore, Booly features a collaborative sharing system and a public repository so that users can retrieve new datasets while contributors can easily disseminate new content. We illustrate the uses of Booly with several examples including: the versatile creation of homebrew datasets, the integration of heterogeneous data to identify genes useful for comparing avian and mammalian brain architecture, and generation of a list of Food and Drug Administration (FDA) approved drugs with possible alternative disease targets. The Booly paradigm for data storage and analysis should facilitate integration between disparate biological and medical fields and result in novel discoveries that can then be validated experimentally. Booly can be accessed at http://booly.ucsd.edu.
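The two core operations named here, Boolean merging of datasets and alias resolution, reduce to set algebra over canonicalised identifiers. A minimal sketch with invented data (not Booly's implementation):

    # Boolean dataset merge with gene-alias resolution (illustrative only).
    ALIASES = {"p53": "TP53", "Trp53": "TP53", "shh": "SHH"}   # invented alias map

    def canon(name):
        return ALIASES.get(name, name)

    def merge(a, b, op="AND"):
        sa, sb = {canon(x) for x in a}, {canon(x) for x in b}
        return sa & sb if op == "AND" else sa | sb

    human_hits = ["TP53", "SHH", "BRCA1"]
    mouse_hits = ["Trp53", "shh", "Pax6"]
    print(merge(human_hits, mouse_hits, "AND"))   # {'TP53', 'SHH'}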
Booly: a new data integration platform
2010-01-01
Background Data integration is an escalating problem in bioinformatics. We have developed a web tool and warehousing system, Booly, that features a simple yet flexible data model coupled with the ability to perform powerful comparative analysis, including the use of Boolean logic to merge datasets together, and an integrated aliasing system to decipher differing names of the same gene or protein. Furthermore, Booly features a collaborative sharing system and a public repository so that users can retrieve new datasets while contributors can easily disseminate new content. Results We illustrate the uses of Booly with several examples including: the versatile creation of homebrew datasets, the integration of heterogeneous data to identify genes useful for comparing avian and mammalian brain architecture, and generation of a list of Food and Drug Administration (FDA) approved drugs with possible alternative disease targets. Conclusions The Booly paradigm for data storage and analysis should facilitate integration between disparate biological and medical fields and result in novel discoveries that can then be validated experimentally. Booly can be accessed at http://booly.ucsd.edu. PMID:20942966
NASA Astrophysics Data System (ADS)
Morabito, A.; Steimes, J.; Bontems, O.; Zohbi, G. Al; Hendrick, P.
2017-04-01
Its maturity makes pumped hydro energy storage (PHES) the most used technology in energy storage. Micro-hydro plants (<100 kW) are emerging globally due to further increases in the share of renewable electricity production such as wind and solar power. This paper presents the design of a micro-PHES developed in Froyennes, Belgium, using a pump as turbine (PaT) coupled with a variable frequency drive (VFD). The methods adopted for the selection of the most suitable pump for pumping and reverse mode are compared and discussed. Controlling and monitoring PaT performance is a compulsory design phase in the feasibility analysis of a PaT coupled with a VFD in a micro-PHES plant. This study aims to answer technical research questions about µ-PHES sites using reversible pumps.
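Sizing such an installation starts from the same hydraulic power relation used for conventional turbines, P = ρgQHη. A quick calculation for an illustrative micro-scale site (head, flow, and efficiency are assumed values, not Froyennes data):

    # Hydraulic power of a pump operating in reverse (turbine) mode.
    RHO, G = 1000.0, 9.81        # water density kg/m^3, gravitational acceleration m/s^2

    def pat_power_kw(flow_m3s, head_m, efficiency):
        return RHO * G * flow_m3s * head_m * efficiency / 1000.0

    # Assumed site: 40 m net head, 0.12 m^3/s flow, 65% PaT efficiency.
    print(f"{pat_power_kw(0.12, 40.0, 0.65):.1f} kW")   # ~30.6 kW, well inside micro-hydro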
CERNBox + EOS: end-user storage for science
NASA Astrophysics Data System (ADS)
Mascetti, L.; Gonzalez Labrador, H.; Lamanna, M.; Mościcki, JT; Peters, AJ
2015-12-01
CERNBox is a cloud synchronisation service for end-users: it allows syncing and sharing files on all major mobile and desktop platforms (Linux, Windows, MacOSX, Android, iOS) aiming to provide offline availability to any data stored in the CERN EOS infrastructure. The successful beta phase of the service confirmed the high demand in the community for an easily accessible cloud storage solution such as CERNBox. Integration of the CERNBox service with the EOS storage back-end is the next step towards providing “sync and share” capabilities for scientific and engineering use-cases. In this report we will present lessons learnt in offering the CERNBox service, key technical aspects of CERNBox/EOS integration and new, emerging usage possibilities. The latter includes the ongoing integration of “sync and share” capabilities with the LHC data analysis tools and transfer services.
Black start research of the wind and storage system based on the dual master-slave control
NASA Astrophysics Data System (ADS)
Leng, Xue; Shen, Li; Hu, Tian; Liu, Li
2018-02-01
Black start is the key to solving the problem of large-scale power failure, and the introduction of new renewable clean energy as a black start power supply is a new research hotspot. Based on the dual master-slave control strategy, the wind and storage system is taken as a reliable black start power source; energy storage and wind power are combined to ensure the stability of the microgrid system and to realize the black start. This work derives the capacity ratio of storage in a small system based on the dual master-slave control strategy and the black start constraint conditions of the combined wind and storage system, identifies the key points of black start for such a system, and provides reference and guidance for subsequent large-scale wind and storage black start projects.
Capacity value of energy storage considering control strategies
Luo, Yi
2017-01-01
In power systems, energy storage effectively improves the reliability of the system and smooths out the fluctuations of intermittent energy. However, the installed capacity value of energy storage cannot effectively measure the contribution of energy storage to the generator adequacy of power systems. To achieve a variety of purposes, several control strategies may be utilized in energy storage systems. The purpose of this paper is to study the influence of different energy storage control strategies on the generation adequacy. This paper presents the capacity value of energy storage to quantitatively estimate the contribution of energy storage on the generation adequacy. Four different control strategies are considered in the experimental method to study the capacity value of energy storage. Finally, the analysis of the influence factors on the capacity value under different control strategies is given. PMID:28558027
21 CFR 864.9900 - Cord blood processing system and storage container.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Cord blood processing system and storage container... Manufacture Human Cells, Tissues, and Cellular and Tissue-Based Products (HCT/Ps) § 864.9900 Cord blood processing system and storage container. (a) Identification. A cord blood processing system and storage...
40 CFR 1066.985 - Fuel storage system leak test procedure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Fuel storage system leak test... Refueling Emission Test Procedures for Motor Vehicles § 1066.985 Fuel storage system leak test procedure. (a... conditions. (3) Leak test equipment must have the ability to pressurize fuel storage systems to at least 4.1...
21 CFR 864.9900 - Cord blood processing system and storage container.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Cord blood processing system and storage container... Manufacture Human Cells, Tissues, and Cellular and Tissue-Based Products (HCT/Ps) § 864.9900 Cord blood processing system and storage container. (a) Identification. A cord blood processing system and storage...
21 CFR 864.9900 - Cord blood processing system and storage container.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Cord blood processing system and storage container... Manufacture Human Cells, Tissues, and Cellular and Tissue-Based Products (HCT/Ps) § 864.9900 Cord blood processing system and storage container. (a) Identification. A cord blood processing system and storage...
21 CFR 864.9900 - Cord blood processing system and storage container.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Cord blood processing system and storage container... Manufacture Human Cells, Tissues, and Cellular and Tissue-Based Products (HCT/Ps) § 864.9900 Cord blood processing system and storage container. (a) Identification. A cord blood processing system and storage...
21 CFR 864.9900 - Cord blood processing system and storage container.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Cord blood processing system and storage container... Manufacture Human Cells, Tissues, and Cellular and Tissue-Based Products (HCT/Ps) § 864.9900 Cord blood processing system and storage container. (a) Identification. A cord blood processing system and storage...
Policy-based Distributed Data Management
NASA Astrophysics Data System (ADS)
Moore, R. W.
2009-12-01
The analysis and understanding of climate variability and change builds upon access to massive collections of observational and simulation data. The analyses involve distributed computing, both at the storage systems (which support data subsetting) and at compute engines (for assimilation of observational data into simulations). The integrated Rule Oriented Data System (iRODS) organizes the distributed data into collections to facilitate enforcement of management policies, support remote data processing, and enable development of reference collections. Currently at RENCI, the iRODS data grid is being used to manage ortho-photos and lidar data for the State of North Carolina, provide a unifying storage environment for engagement centers across the state, support distributed access to visualizations of weather data, and is being explored to manage and disseminate collections of ensembles of meteorological and hydrological model results. In collaboration with the National Climatic Data Center, an iRODS data grid is being established to support data transmission from NCDC to ORNL, and to integrate NCDC archives with ORNL compute services. To manage the massive data transfers, parallel I/O streams are used between High Performance Storage System tape archives and the supercomputers at ORNL. Further, we are exploring the movement and management of large RADAR and in situ datasets to be used for data mining between RENCI and NCDC, and for the distributed creation of decision support and climate analysis tools. The iRODS data grid supports all phases of the scientific data life cycle, from management of data products for a project, to sharing of data between research institutions, to publication of data in a digital library, to preservation of data for use in future research projects. Each phase is characterized by a broader user community, with higher expectations for more detailed descriptions and analysis mechanisms for manipulating the data. The higher usage requirements are enforced by management policies that define the required metadata, the required data formats, and the required analysis tools. The iRODS policy based data management system automates the creation of the community chosen data products, validates integrity and authenticity assessment criteria, and enforces management policies across all accesses of the system.
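Policy-based management of this kind means every ingest or access runs the same checks, such as required metadata, allowed formats, and integrity information, regardless of which client performs it. A generic sketch of that enforcement pattern in plain Python (not actual iRODS rule-language syntax):

    # Generic policy-enforcement pattern (not iRODS rule syntax).
    REQUIRED_METADATA = {"project", "instrument", "checksum"}   # example policy
    ALLOWED_FORMATS = {"netcdf", "hdf5"}

    def enforce_ingest_policy(obj):
        """Reject an ingest that violates the collection's management policy."""
        missing = REQUIRED_METADATA - obj["metadata"].keys()
        if missing:
            raise ValueError(f"missing required metadata: {sorted(missing)}")
        if obj["format"] not in ALLOWED_FORMATS:
            raise ValueError(f"format {obj['format']!r} not allowed")
        return True

    enforce_ingest_policy({"format": "netcdf",
                           "metadata": {"project": "NC-lidar",
                                        "instrument": "ALS60",
                                        "checksum": "sha256:..."}})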
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
NASA Astrophysics Data System (ADS)
Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.
2013-12-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
Survey of Mass Storage Systems
1975-09-01
software that Precision Instruments can provide. System Name: IBM 3850 Mass Storage System. Manufacturer and Location: International Business Machines... "Datamation," pp. 52-58, October 1973. 17. International Business Machines, IBM 3850 Mass Storage System Facts Folder, White Plains, NY, n.d. 18. International Business Machines, Introduction to the IBM 3850 Mass Storage System (MSS), White Plains, NY, n.d. 19. International Business Machines
LVFS: A Big Data File Storage Bridge for the HPC Community
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.
2015-12-01
Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures, and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed for the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and nearly a doubling of storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevent any disruption in workflows, architecture design, or tool usage. We will show how LVFS will convert HDF data, produced by applying machine learning algorithms to Xco2 Level 2 data from the OCO-2 satellite to produce CO2 surface fluxes, into GeoTIFF for visualization.
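A plugin architecture of the kind described pins down a small backend interface that every storage technology implements, so new hardware or formats slot in without touching callers. A minimal sketch of that pattern (names are hypothetical, not LVFS source):

    # Minimal storage-backend plugin interface (illustrative, not LVFS code).
    from abc import ABC, abstractmethod

    class StorageBackend(ABC):
        @abstractmethod
        def read(self, path: str) -> bytes: ...
        @abstractmethod
        def write(self, path: str, data: bytes) -> None: ...

    class LocalBackend(StorageBackend):
        def read(self, path):
            with open(path, "rb") as f:
                return f.read()
        def write(self, path, data):
            with open(path, "wb") as f:
                f.write(data)

    BACKENDS = {"local": LocalBackend}   # "kinetic" or "remote" plugins would register here

    def open_backend(scheme):
        return BACKENDS[scheme]()        # callers never depend on the concrete class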
Chemical hydrogen storage material property guidelines for automotive applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semelsberger, Troy; Brooks, Kriston P.
2015-04-01
Chemical hydrogen storage is the sought after hydrogen storage media for automotive applications because of the expected low pressure operation (<20 atm), moderate temperature operation (<200 °C), system gravimetric capacities (>0.05 kg H2/kg system), and system volumetric capacities (>0.05 kg H2/L system). Currently, the primary shortcomings of chemical hydrogen storage are regeneration efficiency, fuel cost and fuel phase (i.e., solid or slurry phase). Understanding the required material properties to meet the DOE Technical Targets for Onboard Hydrogen Storage Systems is a critical knowledge gap in the hydrogen storage research community. This study presents a set of fluid-phase chemical hydrogen storage material property guidelines for automotive applications meeting the 2017 DOE technical targets. Viable material properties were determined using a boiler-plate automotive system design. The fluid-phase chemical hydrogen storage media considered in this study were neat liquids, solutions, and non-settling homogeneous slurries. Material properties examined include kinetics, heats of reaction, fuel-cell impurities, gravimetric and volumetric hydrogen storage capacities, and regeneration efficiency. The material properties, although not exhaustive, are an essential first step in identifying viable chemical hydrogen storage material properties, and most important, their implications on system mass, system volume and system performance.
PVMirror: A New Concept for Tandem Solar Cells and Hybrid Solar Converters
Yu, Zhengshan J.; Fisher, Kathryn C.; Wheelwright, Brian M.; ...
2015-08-25
As the solar electricity market has matured, energy conversion efficiency and storage have joined installed system cost as significant market drivers. In response, manufacturers of flat-plate silicon photovoltaic (PV) cells have pushed cell efficiencies above 25%, nearing the 29.4% detailed-balance efficiency limit, and both solar thermal and battery storage technologies have been deployed at utility scale. This paper introduces a new tandem solar collector employing a “PVMirror” that has the potential to both increase energy conversion efficiency and provide thermal storage. A PVMirror is a concentrating mirror, spectrum splitter, and light-to-electricity converter all in one: it consists of a curved arrangement of PV cells that absorb part of the solar spectrum and reflect the remainder to their shared focus, at which a second solar converter is placed. A strength of the design is that the solar converter at the focus can be of a radically different technology than the PV cells in the PVMirror; another is that the PVMirror converts a portion of the diffuse light to electricity in addition to the direct light. Here, we consider two case studies: a PV cell located at the focus of the PVMirror to form a four-terminal PV-PV tandem, and a thermal receiver located at the focus to form a PV-CSP (concentrating solar thermal power) tandem; we compare the outdoor energy outputs to those of competing technologies. PVMirrors can outperform (idealized) monolithic PV-PV tandems that are under concentration, and they can also generate nearly as much energy as silicon flat-plate PV while simultaneously providing the full energy storage benefit of CSP.
Test report : Raytheon / KTech RK30 Energy Storage System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, David Martin; Schenkman, Benjamin L.; Borneo, Daniel R.
2013-10-01
The Department of Energy Office of Electricity (DOE/OE), Sandia National Laboratories (SNL), and the Base Camp Integration Lab (BCIL) partnered to incorporate an energy storage system into a microgrid-configured Forward Operating Base to reduce fossil fuel consumption and ultimately save lives. Energy storage vendors will send their systems to the SNL Energy Storage Test Pad (ESTP) for functional testing and then to the BCIL for performance evaluation. The technologies to be tested are electro-chemical energy storage systems comprising lead-acid, lithium-ion, or zinc-bromide batteries. Raytheon/KTech has developed an energy storage system that utilizes zinc-bromide flow batteries to save fuel on a military microgrid. This report contains the testing results and limited analysis of the performance of the Raytheon/KTech Zinc-Bromide Energy Storage System.
The Third NASA Goddard Conference on Mass Storage Systems and Technologies
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)
1993-01-01
This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in the solution of today's infinitely complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and magnetic media storage risk factors, and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.
Renewable Energy Systems for Forward Operating Bases: A Simulations-Based Optimization Approach
2010-08-01
07. C-8 ENERGY STORAGE MODELS Two types of energy storage were compared in these simulations: lead-acid batteries and molten salt storage...of charge: 80% The initial state of charge used for the molten salt storage system is slightly higher than that used for the lead-acid battery ...cost for lead-acid batteries was assumed to be $630/kWh. MOLTEN SALT STORAGE Domestic installed cost for the molten salt storage system was
Secure count query on encrypted genomic data.
Hasan, Mohammad Zahidul; Mahdi, Md Safiur Rahman; Sadat, Md Nazmus; Mohammed, Noman
2018-05-01
Human genomic information can yield more effective healthcare by guiding medical decisions. Therefore, genomics research is gaining popularity, as it can identify potential correlations between a disease and a certain gene, which improves the safety and efficacy of drug treatment and can also support more effective prevention strategies [1]. To reduce the sampling error and to increase the statistical accuracy of this type of research project, data from different sources need to be brought together, since a single organization does not necessarily possess the required amount of data. In this case, data sharing among multiple organizations must satisfy strict policies (for instance, HIPAA and PIPEDA) that have been enforced to regulate privacy-sensitive data sharing. Storage and computation on the shared data can be outsourced to a third-party cloud service provider equipped with enormous storage and computation resources. However, outsourcing data to a third party is associated with a potential risk of privacy violation for the participants whose genomic sequences or clinical profiles are used in these studies. In this article, we propose a method for secure sharing and computation on genomic data in a semi-honest cloud server. In particular, there are two main contributions. First, the proposed method can handle biomedical data containing both genotype and phenotype. Second, our proposed index tree scheme reduces the computational overhead significantly for executing secure count query operations. In our proposed method, the confidentiality of shared data is ensured through encryption, while making the entire computation process efficient and scalable for cutting-edge biomedical applications. We evaluated our proposed method in terms of efficiency on a database of Single-Nucleotide Polymorphism (SNP) sequences; experimental results demonstrate that the execution time for a query of 50 SNPs in a database of 50,000 records, where each record contains 500 SNPs, is approximately 5 s, and that executing the same query on a database that also includes phenotypes requires 69.7 s.
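For intuition about the counting step, here is a hedged sketch of a count query over keyed-tagged SNP records. The paper's actual scheme combines encryption with an index tree; this toy uses only deterministic HMAC tags (a common tagging trick) so the server never sees raw genotypes, and all names are hypothetical.

    import hmac, hashlib

    KEY = b"shared-secret-between-data-owners"   # assumption: pre-shared key

    def tag(snp: str) -> str:
        """Deterministic keyed tag for one SNP token, e.g. 'rs123=AG'."""
        return hmac.new(KEY, snp.encode(), hashlib.sha256).hexdigest()

    def upload(records):
        """Data owners tag each record's SNPs before outsourcing."""
        return [{tag(s) for s in rec} for rec in records]

    def count_query(server_db, query_snps):
        """Server counts records containing *all* tagged query SNPs."""
        q = {tag(s) for s in query_snps}
        return sum(1 for rec in server_db if q <= rec)

    db = upload([{"rs1=AA", "rs2=AG"}, {"rs1=AA", "rs2=GG"}])
    print(count_query(db, ["rs1=AA", "rs2=AG"]))  # -> 1

A linear scan like this is what the paper's index tree is designed to avoid; the tree prunes whole subtrees of records that cannot match, which is where the reported speedup comes from.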
Working Memory in Children: A Time-Constrained Functioning Similar to Adults
ERIC Educational Resources Information Center
Portrat, Sophie; Camos, Valerie; Barrouillet, Pierre
2009-01-01
Within the time-based resource-sharing (TBRS) model, we tested a new conception of the relationships between processing and storage in which the core mechanisms of working memory (WM) are time constrained. However, our previous studies were restricted to adults. The current study aimed at demonstrating that these mechanisms are present and…
40 CFR 60.482-1 - Standards: General.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...
40 CFR 60.482-1 - Standards: General.
Code of Federal Regulations, 2012 CFR
2012-07-01
... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...
40 CFR 60.482-1 - Standards: General.
Code of Federal Regulations, 2013 CFR
2013-07-01
... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...
40 CFR 60.482-1 - Standards: General.
Code of Federal Regulations, 2014 CFR
2014-07-01
... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... are shared among two or more batch process units that are subject to this subpart may be monitored at... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...
40 CFR 60.482-1 - Standards: General.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operations. An owner or operator may monitor at any time during the specified monitoring period (e.g., month... shared among two or more batch process units that are subject to this subpart may be monitored at the... conducted annually, monitoring events must be separated by at least 120 calendar days. (g) If the storage...
Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"
ERIC Educational Resources Information Center
Romiszowski, Alexander J.
2012-01-01
"Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…
Migrating Educational Data and Services to Cloud Computing: Exploring Benefits and Challenges
ERIC Educational Resources Information Center
Lahiri, Minakshi; Moseley, James L.
2013-01-01
"Cloud computing" is currently the "buzzword" in the Information Technology field. Cloud computing facilitates convenient access to information and software resources as well as easy storage and sharing of files and data, without the end users being aware of the details of the computing technology behind the process. This…
Cryptography for Big Data Security
2015-07-13
Cryptography for Big Data Security. Book chapter for Big Data: Storage, Sharing, and Security (3S). Distribution A: Public Release. Ariel Hamlin, Nabil... Email: arkady@ll.mit.edu ... Chapter 1, Cryptography for Big Data Security, 1.1 Introduction: With the amount
Precategorical Acoustic Storage and the Perception of Speech
ERIC Educational Resources Information Center
Frankish, Clive
2008-01-01
Theoretical accounts of both speech perception and of short term memory must consider the extent to which perceptual representations of speech sounds might survive in relatively unprocessed form. This paper describes a novel version of the serial recall task that can be used to explore this area of shared interest. In immediate recall of digit…
Gigwa-Genotype investigator for genome-wide analyses.
Sempéré, Guilhem; Philippe, Florian; Dereeper, Alexis; Ruiz, Manuel; Sarah, Gautier; Larmande, Pierre
2016-06-06
Exploring the structure of genomes and analyzing their evolution is essential to understanding the ecological adaptation of organisms. However, with the large amounts of data being produced by next-generation sequencing, computational challenges arise in terms of storage, search, sharing, analysis and visualization. This is particularly true with regards to studies of genomic variation, which are currently lacking scalable and user-friendly data exploration solutions. Here we present Gigwa, a web-based tool that provides an easy and intuitive way to explore large amounts of genotyping data by filtering it not only on the basis of variant features, including functional annotations, but also on genotype patterns. The data storage relies on MongoDB, which offers good scalability properties. Gigwa can handle multiple databases and may be deployed in either single- or multi-user mode. In addition, it provides a wide range of popular export formats. The Gigwa application is suitable for managing large amounts of genomic variation data. Its user-friendly web interface makes such processing widely accessible. It can either be simply deployed on a workstation or be used to provide a shared data portal for a given community of researchers.
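Gigwa's filtering idea, combining variant-feature and genotype-pattern conditions, can be pictured as a MongoDB query of roughly the following shape; the collection and field names here are invented for illustration and are not Gigwa's actual schema. Requires the pymongo package and a reachable MongoDB server.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # assumption: local server
    variants = client["genotyping"]["variants"]

    query = {
        "annotation.effect": "missense_variant",        # variant-feature filter
        "genotypes.sample_A": {"$in": ["0/1", "1/1"]},  # sample_A carries the alt
        "genotypes.sample_B": "0/0",                    # sample_B is reference
    }
    for v in variants.find(query).limit(10):
        print(v["chrom"], v["pos"], v["ref"], v["alt"])

Pushing both kinds of condition into one indexed query is what lets a document store like MongoDB scale this filtering to large genotyping datasets.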
Wronski, Zbigniew S; Varin, Robert A; Czujko, Tom
2009-07-01
In this study we discuss a process of mechanical activation employed in place of chemical or thermal activation to improve the mobility and reactivity of hydrogen atoms and ions in nanomaterials for energy applications: rechargeable batteries and hydrogen storage for fuel cell systems. Two materials are discussed. Both are used, or are intended for use, in power sources. One is nickel hydroxide, Ni(OH)2, which converts to oxyhydroxide in the positive Ni electrode of rechargeable metal hydride batteries. The other is a complex hydride, Mg(AlH4)2, intended for use in reversible, solid-state hydrogen storage for fuel cells. The feature shared by these dissimilar materials (hydroxide and hydride) is a sheet-like hexagonal crystal structure. The mechanical activation was conducted in high-energy ball mills. We discuss and demonstrate that the mechanical excitation of atoms and ions imparted on these powders stems from the same class of phenomena. These are (i) proliferation of structural defects, in particular stacking faults in the sheet-like structure of hexagonal crystals, and (ii) possible fragmentation of the faulted structure into a mosaic of layered nanocrystals. The hydrogen atoms bonded in such nanocrystals may be inserted and abstracted more easily from the OH- hydroxyl group in Ni(OH)2 and the AlH4- hydride complex in Mg(AlH4)2 during hydrogen charge and discharge reactions. However, the effects of mechanical excitation imparted on these powders are different: while the Ni(OH)2 powder is greatly activated for cycling in batteries, the Mg(AlH4)2 complex hydride phase is greatly destabilized for use in reversible hydrogen storage. Such a "synchronic" view of the structure-property relationship with respect to materials involved in hydrogen energy storage and conversion is supported by experiments employing X-ray diffraction (XRD), differential scanning calorimetry (DSC), and direct imaging of the structure with a high-resolution transmission electron microscope (HREM), as well as by property characterization.
High-performance mass storage system for workstations
NASA Technical Reports Server (NTRS)
Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.
1993-01-01
Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a bottleneck. Therefore, the high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval; the optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the UniTree software package, based on the IEEE mass storage reference model. UniTree keeps track of all files in the system, automatically migrates lesser-used files to archive media, and stages files back when needed by the system. The user can access files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with signal and image processing requirements, long-term data archiving and distribution, and image analysis and enhancement).
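The hierarchical migrate/stage behavior described above can be sketched in a few lines; the tier names and idle threshold are illustrative assumptions, not the UniTree implementation.

    import time

    class HierarchicalStore:
        """Toy two-tier store: hot files on 'disk', idle files on 'archive';
        reads transparently stage archived files back, as in HSM designs."""

        def __init__(self, archive_after_s=3600):
            self.disk = {}      # path -> (data, last_access_time)
            self.archive = {}   # path -> data
            self.archive_after_s = archive_after_s

        def write(self, path, data):
            self.disk[path] = (data, time.time())

        def read(self, path):
            if path not in self.disk and path in self.archive:
                # Stage the file back to the fast tier on demand.
                self.disk[path] = (self.archive.pop(path), time.time())
            data, _ = self.disk[path]
            self.disk[path] = (data, time.time())
            return data

        def migrate(self):
            """Move files idle longer than the threshold to the archive tier."""
            now = time.time()
            for path, (data, seen) in list(self.disk.items()):
                if now - seen > self.archive_after_s:
                    self.archive[path] = data
                    del self.disk[path]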
Han, Yafeng; Shen, Bo; Hu, Huajin; ...
2015-01-12
Ice-storage air-conditioning is a technique that uses ice for thermal energy storage. Replacing existing air conditioning systems with ice storage has the advantage of shifting load from on-peak times to off-peak times that often have excess generation. However, increasing the use of ice storage faces significant challenges in China. One major barrier is the inefficiency of the current electricity tariff structure: there is no effective incentive mechanism that drives ice-storage systems toward optimal load-shifting results. This study presents an analysis that compares the potential load-shifting impacts of ice-storage systems under a new credit-based incentive scheme and under the existing incentive arrangement in Jiangsu, China. The study indicates that by changing how ice-storage systems are incentivized in Jiangsu, load-shifting results can be improved.
The structure of the clouds distributed operating system
NASA Technical Reports Server (NTRS)
Dasgupta, Partha; Leblanc, Richard J., Jr.
1989-01-01
A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system: the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the principle of separating policy and mechanism. Most low-level operating system services are implemented above the kernel, and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. Topics addressed in this paper include distributed programming environments, consistency of persistent data, and fault tolerance.
A digital repository with an extensible data model for biobanking and genomic analysis management.
Izzo, Massimiliano; Mortola, Francesco; Arnulfo, Gabriele; Fato, Marco M; Varesio, Luigi
2014-01-01
Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations and data sharing among institutions increases. A single standardization is not feasible, and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobank management. We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata and may have one or more associated files. We integrated the model in a web-based digital repository with data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients, and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples from over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata for information sharing in specific research projects and purposes. This approach can considerably improve interdisciplinary research collaboration and allows tracking of patients' clinical records, sample management information, and genomic data. The web interface allows operators to easily manage, query, and annotate files without dealing with the technicalities of the data grid.
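An illustrative instance of the process/event model might look like the following; the field names are hypothetical and not the repository's actual schema.

    import json

    # A process groups sequential events; each event carries user-defined
    # metadata and optional file references, as described in the abstract.
    process = {
        "type": "process",
        "study": "tumor-biobank-2014",
        "events": [
            {
                "type": "event",
                "name": "sample-collection",
                "metadata": {"tissue": "neuroblastoma", "patient_id": "P-0042"},
                "files": [],
            },
            {
                "type": "event",
                "name": "microarray-analysis",
                "metadata": {"platform": "Affymetrix", "normalized": True},
                "files": ["grid://replica1/arrays/P-0042.cel"],
            },
        ],
    }
    print(json.dumps(process, indent=2))

Because the metadata objects are schema-free, operators can add new keys per study without migrating the repository, which is the flexibility the paper argues for.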
Artificial Neural Network with Hardware Training and Hardware Refresh
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor)
2003-01-01
A neural network circuit is provided having a plurality of circuits capable of charge storage. Also provided is a plurality of circuits, each coupled to at least one of the charge storage circuits and constructed to generate an output in accordance with a neuron transfer function, and a plurality of circuits, each coupled to one of the neuron transfer function circuits and constructed to generate a derivative of the output. A weight update circuit updates the charge storage circuits based upon output from the plurality of transfer function circuits and output from the plurality of derivative circuits. In preferred embodiments, separate training and validation networks share the same set of charge storage circuits and may operate concurrently. The validation network has separate transfer function circuits, each coupled to the charge storage circuits so as to replicate the training network's coupling of the charge storage circuits to the transfer function circuits. The transfer function circuits may each be constructed with a transconductance amplifier providing differential currents that are combined to produce an output in accordance with a transfer function. The derivative circuits may each be constructed to generate biased differential currents that are combined so as to provide the derivative of the transfer function.
NASA Astrophysics Data System (ADS)
Welakuh, Davis D. M.; Dikandé, Alain M.
2017-11-01
The storage and subsequent retrieval of coherent pulse trains in the quantum memory (i.e. cavity-dark state) of three-level Λ atoms are considered for an optical medium in which adiabatic photon transfer occurs under the condition of quantum impedance matching. The underlying mechanism is based on intracavity Electromagnetically-Induced Transparency, by which the properties of a cavity filled with three-level Λ-type atoms are manipulated by an external control field. Under the impedance matching condition, we derive analytic expressions that suggest a complete transfer of an input field into the cavity-dark state by varying the mixing angle in a specific way, and its subsequent retrieval at a desired time. We illustrate the scheme by demonstrating the complete transfer and retrieval of a Gaussian, a single hyperbolic-secant, and a periodic train of time-entangled hyperbolic-secant input photon pulses in the atom-cavity system. For the time-entangled hyperbolic-secant input field, total controllability of the periodic evolution of the dark-state population is made possible by changing the Rabi frequency of the classical driving field, thus allowing one to alternately store and retrieve high-intensity photons from the optically dense Electromagnetically-Induced Transparent medium. Such multiplexed photon states, which are expected to allow sharing quantum information among many users, are currently in very high demand for applications in long-distance and multiplexed quantum communication.
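For background, the textbook dark-state polariton relations underlying EIT-based light storage (in the Fleischhauer-Lukin form) are shown below; these are standard results, not the paper's cavity-specific expressions.

    % Dark-state polariton: rotating the mixing angle \theta from 0 to \pi/2
    % by switching off the control field \Omega(t) maps the photonic
    % component onto the collective atomic coherence (storage); rotating
    % back retrieves the pulse.
    \Psi(z,t) = \cos\theta(t)\,\mathcal{E}(z,t) \;-\; \sin\theta(t)\,\sqrt{N}\,\sigma_{bc}(z,t),
    \qquad
    \tan\theta(t) = \frac{g\sqrt{N}}{\Omega(t)}

Here \mathcal{E} is the signal field, \sigma_{bc} the ground-state coherence, N the atom number, and g the atom-field coupling; the impedance-matched cavity version used in the paper manipulates the same mixing angle via the external control field.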
File synchronization from a large number of insertions and deletions
NASA Astrophysics Data System (ADS)
Ellappan, Vijayan; Kumari, Savera
2017-11-01
Synchronization between different versions of files is becoming a major issue for most applications. To make these applications more efficient, an economical algorithm is developed from the previously used file loading algorithm. We extend this algorithm in three ways: first, it deals with non-binary files; second, a backup is generated for uploaded files; and third, files are synchronized across insertions and deletions. A user can reconstruct a file from the former file while minimizing error, and the system provides interactive communication without disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created and stored in a backup database and efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to local versions. Similarly, synchronization tools are necessary in mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
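The reconstruct-X-from-Y idea can be sketched with a simple edit-script delta; this illustrates the general mechanism of synchronizing across insertions and deletions, not the paper's specific protocol.

    from difflib import SequenceMatcher

    def make_delta(old: str, new: str):
        """Edit script: 'keep' ops reference ranges of `old`; 'insert' ops
        carry literal new text. Pure deletions need no op at all."""
        ops = []
        for tag, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
            if tag == "equal":
                ops.append(("keep", i1, i2))
            elif tag in ("replace", "insert"):
                ops.append(("insert", new[j1:j2]))
        return ops

    def apply_delta(old: str, ops) -> str:
        """User B rebuilds X from its old copy Y plus the compact delta."""
        out = []
        for op in ops:
            if op[0] == "keep":
                out.append(old[op[1]:op[2]])
            else:
                out.append(op[1])
        return "".join(out)

    Y = "the quick brown fox jumps"
    X = "the slow brown fox quietly jumps"
    assert apply_delta(Y, make_delta(Y, X)) == X

Only the inserted text and a few range indices cross the wire, which is why delta schemes of this shape scale to files with many scattered insertions and deletions.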
The CloudBoard Research Platform: an interactive whiteboard for corporate users
NASA Astrophysics Data System (ADS)
Barrus, John; Schwartz, Edward L.
2013-03-01
Over one million interactive whiteboards (IWBs) are sold annually worldwide, predominantly for classroom use, with few sales for corporate use. Unmet needs for IWB corporate use were investigated, and the CloudBoard Research Platform (CBRP) was developed to investigate and test technology for meeting these needs. The CBRP supports audio conferencing with shared remote drawing activity, casual capture of whiteboard activity for long-term storage and retrieval, use of standard formats such as PDF for easy import of documents via the web and email, and easy export of documents. Company RFID badges and key fobs provide secure access to documents at the board, and automatic logout occurs after a period of inactivity. Users manage their documents with a web browser. Analytics and remote device management are provided for administrators. The IWB hardware consists of off-the-shelf components (a Hitachi UST Projector, SMART Technologies, Inc. IWB hardware, Mac Mini, Polycom speakerphone, etc.) and a custom occupancy sensor. The three back-end servers provide the web interface, document storage, and stroke and audio streaming. Ease of use, security, and robustness sufficient for internal adoption were achieved. Five of the 10 boards installed at various Ricoh sites have been in daily or weekly use for the past year, and total system downtime was less than an hour in 2012. Since CBRP was installed, 65 registered users, 9 of whom use the system regularly, have created over 2600 documents.
Role of Pumped Storage Hydro Resources in Electricity Markets and System Operation: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ela, E.; Kirby, B.; Botterud, A.
2013-05-01
The most common form of utility-scale energy storage system is the pumped storage hydro system. Originally, these storage systems were economically viable simply because they displaced more expensive generating units. However, over time, as those expensive units became more efficient and costs declined, pumped hydro storage units lost their operational edge. As a result, in the current electricity market environment, pumped storage hydro plants are struggling. To offset this phenomenon, certain market modifications should be addressed. This paper introduces some of the challenges faced by pumped storage hydro plants in today's markets and proposes some solutions to those problems.
Ethical sharing of health data in online platforms - which values should be considered?
Riso, Brígida; Tupasela, Aaro; Vears, Danya F; Felzmann, Heike; Cockbain, Julian; Loi, Michele; Kongsholm, Nana C H; Zullo, Silvia; Rakic, Vojin
2017-08-21
Intensified and extensive data production and data storage are characteristics of contemporary western societies. Health data sharing is increasing with the growth of Information and Communication Technology (ICT) platforms devoted to the collection of personal health and genomic data. However, the sensitive and personal nature of health data poses ethical challenges when data are disclosed and shared, even for scientific research purposes. With this in mind, the Science and Values Working Group of the COST Action CHIP ME 'Citizen's Health through public-private Initiatives: Public health, Market and Ethical perspectives' (IS 1303) identified six core values they considered essential for the ethical sharing of health data using ICT platforms. We believe that using this ethical framework will promote respectful scientific practices and maintain individuals' trust in research. We use these values to analyse five ICT platforms and explore how emerging data sharing platforms are reconfiguring the data sharing experience from a range of perspectives. We discuss which types of values, rights and responsibilities they entail and enshrine within their philosophy or outlook on what it means to share personal health information. Through this discussion we address issues in the design and development of personal health data and patient-oriented infrastructures, as well as new forms of technologically-mediated empowerment.
NASA Astrophysics Data System (ADS)
Chen, Xiaotao; Song, Jie; Liang, Lixiao; Si, Yang; Wang, Le; Xue, Xiaodai
2017-10-01
Large-scale energy storage systems (ESS) play an important role in the planning and operation of the smart grid and the energy internet. Compressed air energy storage (CAES) is one of the most promising large-scale energy storage techniques. However, the high cost of compressed air storage and the low capacity remain to be solved. This paper proposes a novel non-supplementary fired compressed air energy storage system (NSF-CAES) based on salt cavern air storage to address the issues of air storage and the efficiency of CAES. Operating mechanisms of the proposed NSF-CAES are analysed based on thermodynamic principles. Key factors that affect the system's storage efficiency are thoroughly explored. The energy storage efficiency of the proposed NSF-CAES system can be improved by reducing the maximum working pressure of the salt cavern and raising the turbine inlet air pressure. Simulation results show that the electric-to-electric conversion efficiency of the proposed NSF-CAES can reach 63.29% with a maximum salt cavern working pressure of 9.5 MPa and a turbine inlet air pressure of 9 MPa, which is higher than current commercial CAES plants.
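The headline efficiency figure is a simple ratio; the sketch below shows the bookkeeping with illustrative energy values chosen to reproduce the reported 63.29%.

    # Electric-to-electric (round-trip) efficiency for a storage plant:
    # electricity recovered at the turbine / electricity used to compress.
    def electric_to_electric_efficiency(e_out_mwh: float, e_in_mwh: float) -> float:
        return e_out_mwh / e_in_mwh

    # Hypothetical cycle: 100 MWh of compression electricity in,
    # 63.29 MWh of turbine electricity out.
    print(f"{electric_to_electric_efficiency(63.29, 100.0):.2%}")  # -> 63.29%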
Development of a system for off-peak electrical energy use by air conditioners and heat pumps
NASA Astrophysics Data System (ADS)
Russell, L. D.
1980-05-01
Investigation and evaluation of several alternatives for load management for the TVA system are described. Specific data on the TVA system load characteristics were studied to determine the typical peak and off-peak periods for the system. The alternative systems investigated for load management included gaseous energy storage, phase-change-material energy storage, zeolite energy storage, variable-speed controllers for compressors, and weather-sensitive controllers. After investigating these alternatives, system design criteria were established; then the gaseous and PCM energy storage systems were analyzed. The system design criteria include economic assessment of all alternatives. Handbook data were developed for economic assessment. A liquid/PCM energy storage system was judged feasible.
Ford/BASF/UM Activities in Support of the Hydrogen Storage Engineering Center of Excellence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veenstra, Mike; Purewal, Justin; Xu, Chunchuan
Widespread adoption of hydrogen as a vehicular fuel depends critically on the development of low-cost, on-board hydrogen storage technologies capable of achieving high energy densities and fast kinetics for hydrogen uptake and release. As present-day technologies, which rely on physical storage methods such as compressed hydrogen, are incapable of attaining established Department of Energy (DOE) targets, development of materials-based approaches for storing hydrogen has garnered increasing attention. Material-based storage technologies have the potential to store hydrogen at more than twice the density of liquid hydrogen. To hasten development of these 'hydride' materials, the DOE previously established three centers of excellence for materials storage R&D associated with the key classes of materials: metal hydrides, chemical hydrogen, and adsorbents. While these centers made progress in identifying new storage materials, the challenges associated with engineering a system around a candidate storage material need further advancement. In 2009 the DOE established the Hydrogen Storage Engineering Center of Excellence with the objective of developing innovative engineering concepts for materials-based hydrogen storage systems. As a partner in the Hydrogen Storage Engineering Center of Excellence, the Ford-UM-BASF team conducted a multi-faceted research program that addresses key engineering challenges associated with the development of materials-based hydrogen storage systems. First, we developed a novel framework that allowed a materials-based hydrogen storage system to be modeled and operated within a virtual fuel cell vehicle. This effort resulted in the ability to assess dynamic operating parameters and interactions between the storage system and the fuel cell power plant, including the evaluation of performance throughout various drive cycles. Second, we engaged in cost modeling of various incarnations of the storage systems. This analysis revealed cost gaps and opportunities and identified a storage system that was lower cost than a 700 bar compressed system. Finally, we led the HSECoE efforts devoted to characterizing and enhancing metal organic framework (MOF) storage materials. This report serves as the final documentation of the Ford-UM-BASF project contributions to the HSECoE during the 6-year timeframe of the Center. The activities of the HSECoE have impacted the broader goals of the DOE-EERE and USDRIVE, leading to improved understanding of the engineering of materials-based hydrogen storage systems. This knowledge is a prerequisite to the development of a commercially viable hydrogen storage system.
Energy Storage Systems as a Complement to Wind Power
NASA Astrophysics Data System (ADS)
Sieling, Jared D.; Niederriter, C. F.; Berg, D. A.
2006-12-01
As Gustavus Adolphus College prepares to install two wind turbines on campus, we are faced with the question of what to do with the excess electricity that is generated. Since the College pays a substantial demand charge, it would seem fiscally responsible to store the energy and use it for peak shaving, instead of selling it to the power company at their avoided cost. We analyzed six currently available systems: hydrogen energy storage, flywheels, pumped hydroelectric storage, battery storage, compressed air storage, and superconducting magnetic energy storage, for energy and financial suitability. Potential wind turbine production is compared to consumption to determine the energy deficit or excess, which is fed into a model for each of the storage systems. We will discuss the advantages and disadvantages of each of the storage systems and their suitability for energy storage and peak shaving in this situation.
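The demand-charge motivation behind the peak-shaving comparison can be made concrete with a one-line estimate; all numbers below are invented placeholders, not the College's actual tariff.

    # Annual value of peak shaving under a monthly demand charge:
    # savings = peak reduction (kW) x demand charge ($/kW-month) x 12 months.
    def annual_demand_savings(peak_kw_shaved: float,
                              demand_charge_kw_month: float) -> float:
        return peak_kw_shaved * demand_charge_kw_month * 12

    # Shaving 200 kW off the monthly peak at a hypothetical $12/kW-month:
    print(annual_demand_savings(200, 12.0))  # -> 28800.0 dollars per year

An estimate of this form, net of round-trip losses for each storage technology, is what determines whether storing surplus wind energy beats selling it back at avoided cost.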
Dish Stirling High Performance Thermal Storage FY14Q4 Quad Chart
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andraka, Charles E.
2014-10-01
The goals of this project are to demonstrate the feasibility of significant thermal storage for dish Stirling systems, leveraging their existing high performance to greater capacity; to demonstrate key components of a latent storage and transport system enabling on-dish storage with low energy losses; and to provide a technology path to a 25-kWe system with 6 hours of storage.
Study on a hypothetical replacement of nuclear electricity by wind power in Sweden
NASA Astrophysics Data System (ADS)
Wagner, F.; Rachlew, E.
2016-05-01
The Swedish electricity supply system benefits strongly from natural conditions that allow a high share of hydroelectricity. A complete supply from hydro is, however, not possible. Up to now, nuclear power has been the other workhorse serving the country with electricity. Thus, the electricity production of Sweden is basically CO2-free, and Sweden has reached an environmental status which others in Europe plan to reach in 2050. Furthermore, there is an efficient exchange within the Nordic countries, Nord Pool, which can ease possible capacity problems during dry, cold years. In this study we investigate to what extent, and with what consequences, the base-load supply of nuclear power can be replaced by intermittent wind power. Such a scenario unavoidably leads to high wind power installations. It is shown that hydroelectricity cannot completely smooth out the fluctuations of wind power, and an additional back-up system using fossil fuel is necessary. Given the required operational dynamics, this system has to be based on gas. The back-up system cannot be replaced by storage using surplus electricity from wind power; the surplus is too small. To overcome this, a further strong extension of wind power would be necessary, which, however, leads to a reduction in the use of hydroelectricity if the annual consumption is kept constant. In this case one fossil-free energy form is replaced by another, more complex one. A mix of 22.3 GW wind power plus an 8.6 GW gas-based back-up system, together producing 64.8 TWh, would replace the present infrastructure of 9 GW nuclear power producing 63.8 TWh of electricity. The specific CO2 emission doubles in this case. Pumped storage for the exclusive supply of Sweden does not seem to be a meaningful investment.
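A quick consistency check of the quoted figures, via the implied capacity factors (8760 hours per year), assuming the stated installed powers and annual energies.

    # Implied capacity factor = annual energy / (installed power x 8760 h).
    def capacity_factor(twh_per_year: float, gw_installed: float) -> float:
        return (twh_per_year * 1e3) / (gw_installed * 8760)  # TWh -> GWh

    print(f"nuclear 9 GW -> 63.8 TWh: CF = {capacity_factor(63.8, 9.0):.2f}")        # ~0.81
    print(f"wind+gas 30.9 GW -> 64.8 TWh: CF = {capacity_factor(64.8, 22.3 + 8.6):.2f}")  # ~0.24

The roughly 0.81 versus 0.24 capacity factors make the study's point in one line: matching a base-load source with intermittent wind requires over three times the installed power.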
78 FR 32077 - List of Approved Spent Fuel Storage Casks: MAGNASTOR® System
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
... Fuel Storage Casks: MAGNASTOR® System AGENCY: Nuclear Regulatory Commission. ACTION: Direct... final rule that would have revised its spent fuel storage regulations to include Amendment No. 3 to... All-purpose Storage (MAGNASTOR®) System listing within the ``List of Approved Spent Fuel...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, Kriston P.; Sprik, Samuel J.; Tamburello, David A.
The U.S. Department of Energy (DOE) has developed a vehicle framework model to simulate fuel cell-based light-duty vehicle operation for various hydrogen storage systems. This transient model simulates the performance of the storage system, fuel cell, and vehicle for comparison to DOE's Technical Targets using four drive cycles/profiles. Chemical hydrogen storage models have been developed for the framework model for both exothermic and endothermic materials. Despite the utility of such models, they require material researchers to input system design specifications that cannot be easily estimated. To address this challenge, a design tool has been developed that allows researchers to directly enter kinetic and thermodynamic chemical hydrogen storage material properties into a simple sizing module that then estimates the system parameters required to run the storage system model. Additionally, this design tool can be used as a standalone executable file to estimate the storage system mass and volume outside of the framework model and compare them to the DOE Technical Targets. These models will be explained and exercised with existing hydrogen storage materials.
NASA Technical Reports Server (NTRS)
Blackwell, Kim; Blasso, Len (Editor); Lipscomb, Ann (Editor)
1991-01-01
The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
Valuing the Resilience Provided by Solar and Battery Energy Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLaren, Joyce A; Mullendore, Seth; Laws, Nicholas D
This paper explores the impact of valuing resilience on the economics of photovoltaics (PV) and storage systems for commercial buildings. The analysis presented here illustrates that accounting for the cost of grid power outages can change the breakeven point for PV and storage system investment, and increase the size of systems designed to deliver the greatest economic benefit over time. In other words, valuing resilience can make PV and storage systems economical in cases where they would not be otherwise. As storage costs decrease, and outages occur more frequently, PV and storage are likely to play a larger role in building design and management considerations.
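The breakeven effect described here can be illustrated with a toy simple-payback comparison; all dollar figures below are invented, not values from the study.

    # Counting avoided-outage cost alongside bill savings can flip the
    # economics of a PV+storage investment.
    def simple_payback_years(capex: float, annual_bill_savings: float,
                             annual_outage_cost_avoided: float = 0.0) -> float:
        return capex / (annual_bill_savings + annual_outage_cost_avoided)

    capex = 250_000.0
    print(simple_payback_years(capex, 18_000))           # bill savings only: ~13.9 years
    print(simple_payback_years(capex, 18_000, 12_000))   # plus resilience value: ~8.3 years

A fuller analysis would discount cash flows and model outage frequency explicitly, but even this sketch shows how a nonzero outage cost shortens payback and can justify a larger system.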
Onboard power line conditioning system for an electric or hybrid vehicle
Kajouke, Lateef A.; Perisic, Milun
2016-06-14
A power line quality conditioning system for a vehicle includes an onboard rechargeable direct current (DC) energy storage system and an onboard electrical system coupled to the energy storage system. The energy storage system provides DC energy to drive an electric traction motor of the vehicle. The electrical system operates in a charging mode such that alternating current (AC) energy from a power grid external to the vehicle is converted to DC energy to charge the DC energy storage system. The electrical system also operates in a vehicle-to-grid power conditioning mode such that DC energy from the DC energy storage system is converted to AC energy to condition an AC voltage of the power grid.
Thermal Storage Applications Workshop. Volume 1: Plenary Session Analysis
NASA Technical Reports Server (NTRS)
1979-01-01
The importance of the development of inexpensive and efficient thermal and thermochemical energy storage technology to the solar power program is discussed in a summary of workshop discussions held to exchange information and plan for future systems. Topics covered include storage in central power applications, such as the 10-MWe demonstration pilot receiver plant to be constructed in Barstow, California; storage for small dispersed systems; and problems associated with the development of storage systems for solar power plants interfacing with utility systems.