A distributed parallel storage architecture and its potential application within EOSDIS
NASA Technical Reports Server (NTRS)
Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony
1994-01-01
We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operate in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.
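As a rough illustration of the block-level parallelism described above, the following sketch stripes logical blocks round-robin across disk servers and reads a byte range in parallel. The server names, block size, and the simulated fetch_block are invented stand-ins, not the DPSS implementation.

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 64 * 1024                      # assumed logical block size
SERVERS = ["disk0.example.org", "disk1.example.org", "disk2.example.org"]

def server_for_block(block_id: int) -> str:
    # Round-robin declustering: consecutive blocks live on different servers,
    # so a sequential read fans out across all of them.
    return SERVERS[block_id % len(SERVERS)]

def fetch_block(server: str, block_id: int) -> bytes:
    # Stand-in for a network read from one disk server.
    return f"{server}:{block_id}".encode().ljust(BLOCK_SIZE, b"\0")

def read_range(offset: int, length: int) -> bytes:
    first, last = offset // BLOCK_SIZE, (offset + length - 1) // BLOCK_SIZE
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        blocks = list(pool.map(
            lambda b: fetch_block(server_for_block(b), b),
            range(first, last + 1)))
    data = b"".join(blocks)
    start = offset - first * BLOCK_SIZE
    return data[start:start + length]

print(read_range(100_000, 32)[:16])
```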
Semantics-based distributed I/O with the ParaMEDIC framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balaji, P.; Feng, W.; Lin, H.
2008-01-01
Many large-scale applications simultaneously rely on multiple resources for efficient execution. For example, such applications may require both large compute and storage resources; however, very few supercomputing centers can provide large quantities of both. Thus, data generated at the compute site oftentimes has to be moved to a remote storage site for either storage or visualization and analysis. Clearly, this is not an efficient model, especially when the two sites are distributed over a wide-area network. Thus, we present a framework called 'ParaMEDIC: Parallel Metadata Environment for Distributed I/O and Computing' which uses application-specific semantic information to convert the generated data to orders-of-magnitude smaller metadata at the compute site, transfer the metadata to the storage site, and re-process the metadata at the storage site to regenerate the output. Specifically, ParaMEDIC trades a small amount of additional computation (in the form of data post-processing) for a potentially significant reduction in data that needs to be transferred in distributed environments.
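A toy sketch of the trade-off ParaMEDIC exploits, under the simplifying assumption that the output can be regenerated deterministically from a tiny descriptor (the real framework derives application-specific metadata, e.g. from sequence-search output); everything here is illustrative, not the framework's API.

```python
# Ship a small semantic descriptor across the WAN instead of bulk output,
# then re-run a cheap post-processing step at the storage site.
import numpy as np

def compute_site(seed: int, n: int):
    rng = np.random.default_rng(seed)
    output = rng.standard_normal(n)       # large generated data set
    descriptor = {"seed": seed, "n": n}   # orders-of-magnitude smaller metadata
    return output, descriptor

def storage_site(descriptor):
    # Regenerate the output from the descriptor: extra computation, tiny transfer.
    rng = np.random.default_rng(descriptor["seed"])
    return rng.standard_normal(descriptor["n"])

full, meta = compute_site(seed=42, n=1_000_000)
assert np.array_equal(full, storage_site(meta))  # identical output, ~8 MB never sent
```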
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for these subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.
NASA Astrophysics Data System (ADS)
Liu, Lu; Tong, Yibin; Zhao, Zhigang; Zhang, Xuefen
2018-03-01
Large-scale integration of distributed residential photovoltaics (PV) in rural areas has alleviated the voltage problem to a certain extent. However, due to the intermittency of PV and the particular shape of rural residential load, the problem of low voltage at the evening peak remains to be resolved. This paper proposes to solve the problem by adding residential energy storage. Firstly, the influence of the access location and capacity of energy storage on the voltage distribution in a rural distribution network is analyzed. Secondly, the relation between storage capacity and load capacity is derived for four typical load and energy storage cases in which the voltage deviation meets the requirement. Finally, the optimal storage position and capacity are obtained using particle swarm optimization (PSO) and power-flow simulation.
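A minimal sketch of how PSO plus a voltage calculation can search for a storage location and size. It assumes a toy 6-bus radial feeder with a linearized voltage-drop model, not the paper's network data or a full power-flow simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.full(5, 0.05)                           # p.u. resistance per line segment
load = np.array([0.1, 0.15, 0.2, 0.15, 0.1])   # evening-peak load at buses 1..5

def worst_voltage_dev(x):
    bus, p_dis = int(round(x[0])), x[1]        # storage bus (1..5) and discharge power
    inj = load.copy()
    inj[bus - 1] -= p_dis                      # storage discharges at the peak
    flow = np.cumsum(inj[::-1])[::-1]          # power carried by each segment
    v = 1.0 - np.cumsum(R * flow)              # linearized voltage profile
    return np.max(np.abs(1.0 - v))

# Particle swarm over (location, size); rounding handles the discrete bus index.
n, w, c1, c2 = 20, 0.7, 1.5, 1.5
pos = np.column_stack([rng.uniform(1, 5, n), rng.uniform(0, 0.5, n)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pcost = np.array([worst_voltage_dev(p) for p in pos])
for _ in range(100):
    g = pbest[pcost.argmin()]
    vel = w*vel + c1*rng.random((n, 1))*(pbest - pos) + c2*rng.random((n, 1))*(g - pos)
    pos = np.clip(pos + vel, [1, 0], [5, 0.5])
    cost = np.array([worst_voltage_dev(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
print("best (bus, size):", pbest[pcost.argmin()], "dev:", pcost.min())
```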
Globally distributed software defined storage (proposal)
NASA Astrophysics Data System (ADS)
Shevel, A.; Khoruzhnikov, S.; Grudinin, V.; Sadov, O.; Kairkanov, A.
2017-10-01
The volume of data produced in HEP is growing, as is the volume of data that must be held for a long time. Large volumes of data - big data - are distributed around the planet, and methods and approaches to organize and manage globally distributed data storage are required. Several distributed storage systems exist for personal needs, such as own-cloud.org, pydio.com, seafile.com and sparkleshare.org. At the enterprise level there are a number of systems - SWIFT (the distributed storage system that is part of OpenStack), CEPH and the like - which are mostly object storage. When the resources of several data centres are integrated, the organization of data links becomes a very important issue, especially if several parallel data links between data centres are used. Conditions in data centres and in data links may vary from hour to hour. All this means that each part of a distributed data storage system has to be able to rearrange its usage of data links and storage servers in each data centre. In addition, different requirements may arise for each customer of the distributed storage. The above topics are discussed in this data storage proposal.
A model for the distributed storage and processing of large arrays
NASA Technical Reports Server (NTRS)
Mehrota, P.; Pratt, T. W.
1983-01-01
A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2011-08-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
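A sketch of the capacity-distribution step as the abstract describes it, under our assumption that capacity decreases linearly from dry to wet topographic-index classes; the index classes, area fractions, and scale parameter are invented.

```python
import numpy as np

# Per-class mean topographic index and area fractions (invented numbers).
topo_index = np.array([4.2, 5.1, 6.3, 7.0, 8.4, 9.9, 11.2])
area_frac = np.array([0.05, 0.10, 0.20, 0.25, 0.20, 0.15, 0.05])

def storage_capacity(scale: float) -> np.ndarray:
    # Wet (high-index) classes saturate first, so they get the least storage;
    # the largest capacity equals scale * (index range), i.e. proportional to
    # the range of the topographic index and scaled by one parameter.
    return scale * (topo_index.max() - topo_index)

cap = storage_capacity(scale=12.0)        # mm; the single calibrated parameter
cell_mean_capacity = float(np.sum(area_frac * cap))
print(cap.round(1), cell_mean_capacity)   # per-class capacities and cell average
```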
NASA Astrophysics Data System (ADS)
Pizzuto, J. E.; Skalak, K.; Karwan, D. L.
2017-12-01
Transport of suspended sediment and sediment-borne constituents (here termed fluvial particles) through large river systems can be significantly influenced by episodic storage in floodplains and other alluvial deposits. Geomorphologists quantify the importance of storage using sediment budgets, but these data alone are insufficient to determine how storage influences the routing of fluvial particles through river corridors across large spatial scales. For steady state systems, models that combine sediment budget data with "waiting time distributions" (to define how long deposited particles remain stored until being remobilized) and velocities during transport events can provide useful predictions. Limited field data suggest that waiting time distributions are well represented by power laws, extending from <1 to >10⁴ years, while the probability of storage defined by sediment budgets varies from 0.1 km⁻¹ for small drainage basins to 0.001 km⁻¹ for the world's largest watersheds. Timescales of particle delivery from large watersheds are determined by storage rather than by transport processes, with most particles requiring 10²-10⁴ years to reach the basin outlet. These predictions suggest that erosional "signals" induced by climate change, tectonics, or anthropogenic activity will be transformed by storage before delivery to the outlets of large watersheds. In particular, best management practices (BMPs) implemented in upland source areas, designed to reduce the loading of fluvial particles to estuarine receiving waters, will not achieve their intended benefits for centuries (or longer). For transient systems, waiting time distributions cannot be constant, but will vary as portions of transient sediment "pulses" enter and are later released from storage. The delivery of sediment pulses under transient conditions can be predicted by adopting the hypothesis that the probability of erosion of stored particles will decrease with increasing "age" (where age is defined as the elapsed time since deposition). Then, waiting time and age distributions for stored particles become predictions based on the architecture of alluvial storage and the tendency for erosional processes to preferentially remove younger deposits, improving assessment of watershed BMPs and other important applications.
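A Monte Carlo sketch consistent with the numbers quoted above: storage events occur with a per-kilometre probability and draw power-law ("heavy-tailed") waiting times. The tail exponent, storage probability, and travel length are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def delivery_times(n_particles, length_km=1000.0, p_store=0.01,
                   t_min=1.0, alpha=1.5):
    """p_store: storage probability per km; alpha: power-law tail exponent;
    t_min: shortest stored interval (years). Transport itself is treated as fast."""
    n_events = rng.binomial(int(length_km), p_store, size=n_particles)
    times = np.zeros(n_particles)
    for i, k in enumerate(n_events):
        if k:  # Pareto waiting times via inverse CDF; heavy-tailed for alpha <= 2
            times[i] = np.sum(t_min * (1.0 - rng.random(k)) ** (-1.0 / alpha))
    return times

t = delivery_times(50_000)
print(np.median(t[t > 0]), np.percentile(t, 95))  # typically in the 1e2-1e4 yr range
```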
2015-07-01
... (cloud-covered) periods. The demonstration features a large (relative to the overall system power requirements) photovoltaic solar array, whose inverter ... microgrid with less expensive power storage instead of large-scale energy storage ... that the renewable energy with small-scale power storage can ...
The temporal distribution and carbon storage of large oak wood in streams and floodplain deposits
Richard P. Guyette; Daniel C. Dey; Michael C. Stambaugh
2008-01-01
We used tree-ring dating and ¹⁴C dating to document the temporal distribution and carbon storage of oak (Quercus spp.) wood in trees recruited and buried by streams and floodplains in northern Missouri, USA. Frequency distributions indicated that oak wood has been accumulating in Midwest streams continually since at least the...
NASA Astrophysics Data System (ADS)
Schoch, Anna; Blöthe, Jan; Hoffmann, Thomas; Schrott, Lothar
2016-04-01
A large number of sediment budgets have been compiled on different temporal and spatial scales in alpine regions. Detailed sediment budgets based on the quantification of a number of sediment storages (e.g. talus cones, moraine deposits) exist only for a few small-scale drainage basins (up to 10² km²). In contrast, large-scale sediment budgets (> 10³ km²) consider only long-term sediment sinks such as valley fills and lakes. Until now, these studies have often neglected small-scale sediment storages in the headwaters. However, the significance of these sediment storages has been reported. A quantitative verification of whether headwaters function as sediment source regions is lacking. Despite substantial transport energy in mountain environments due to steep gradients and high relief, sediment flux in large river systems is frequently disconnected from alpine headwaters. This leads to significant storage of coarse-grained sediment along the flow path from rockwall source regions to large sedimentary sinks in major alpine valleys. To improve the knowledge on sediment budgets in large-scale alpine catchments and to bridge the gap between small- and large-scale sediment budgets, we apply a multi-method approach comprising investigations on different spatial scales in the Upper Rhone Basin (URB). The URB is the largest inner-alpine basin in the European Alps with a size of > 5400 km². It is a closed system with Lake Geneva acting as an ultimate sediment sink for suspended and clastic sediment. We examine the spatial pattern and volumes of sediment storages as well as the morphometry on the local and catchment-wide scale. We mapped sediment storages and bedrock in five sub-regions of the study area (Goms, Lötschen valley, Val d'Illiez, Vallée de la Liène, Turtmann valley) in the field and from high-resolution remote sensing imagery to investigate the spatial distribution of different sediment storage types (e.g. talus deposits, debris flow cones, alluvial fans). These sub-regions cover all three litho-tectonic units of the URB (Helvetic nappes, Penninic nappes, External massifs) and different catchment sizes to capture the inherent variability. Different parameters characterizing topography, surface characteristics, and vegetation cover are analyzed for each storage type. The data are then used in geostatistical models (PCA, stepwise logistic regression) to predict the spatial distribution of sediment storage for the whole URB. We further conduct morphometric analyses of the URB to gain information on the varying degree of glacial imprint and postglacial landscape evolution and their control on the spatial distribution of sediment storage in a large-scale drainage basin. Geophysical methods (ground-penetrating radar and electrical resistivity tomography) are applied to different sediment storage types on the local scale to estimate mean thicknesses. Additional data from published studies are used to complement our dataset. We integrate the local data into the statistical model of the spatial distribution of sediment storages for the whole URB. Hence, we can extrapolate the stored sediment volumes to the regional scale in order to bridge the gap between small- and large-scale studies.
NASA Astrophysics Data System (ADS)
Metcalfe, Peter; Beven, Keith; Hankin, Barry; Lamb, Rob
2018-04-01
Enhanced hillslope storage is utilised in natural flood management in order to retain overland storm run-off and to reduce connectivity between fast surface flow pathways and the channel. Examples include excavated ponds, deepened or bunded accumulation areas, and gullies and ephemeral channels blocked with wooden barriers or debris dams. The performance of large, distributed networks of such measures is poorly understood. Extensive schemes can potentially retain large quantities of run-off, but there are indications that much of their effectiveness can be attributed to desynchronisation of sub-catchment flood waves. Inappropriately sited measures may therefore increase, rather than mitigate, flood risk. Fully distributed hydrodynamic models have been applied in limited studies but introduce significant computational complexity. The longer run times of such models also restrict their use for uncertainty estimation or for evaluation of the many potential configurations and storm sequences that may influence the timings and magnitudes of flood waves. Here a simplified overland flow-routing module and a semi-distributed representation of enhanced hillslope storage are developed. They are applied to the headwaters of a large rural catchment in Cumbria, UK, where the use of an extensive network of storage features is proposed as a flood mitigation strategy. The models were run within a Monte Carlo framework against data for a 2-month period of extreme flood events that caused significant damage in areas downstream. Acceptable realisations and likelihood weightings were identified using the GLUE uncertainty estimation framework, as sketched below. Behavioural realisations were rerun against the catchment model modified with the addition of the hillslope storage. Three different drainage rate parameters were applied across the network of hillslope storage. The study demonstrates that schemes comprising widely distributed hillslope storage can be modelled effectively within such a reduced-complexity framework. It shows the importance of drainage rates from storage features while operating through a sequence of events. We discuss limitations in the simplified representation of overland flow routing and storage, and how this could be improved using experimental evidence. We suggest ways in which features could be grouped more strategically and thus improve the performance of such schemes.
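A minimal sketch of the GLUE step referenced above: score Monte Carlo realisations with a Nash-Sutcliffe likelihood, keep those above a behavioural threshold, and weight predictions accordingly. The synthetic data and the 0.7 threshold are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
obs = np.sin(np.linspace(0, 6, 200)) + 1.5          # synthetic "observed" flows
# 500 realisations = obs plus noise of varying magnitude (stand-in for model runs)
sims = obs + rng.normal(0.0, rng.uniform(0.05, 0.6, (500, 1)), (500, 200))

def nse(sim, obs):
    # Nash-Sutcliffe efficiency, used here as an informal likelihood measure
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

scores = np.array([nse(s, obs) for s in sims])
behavioural = scores > 0.7                          # acceptability threshold
weights = scores[behavioural] / scores[behavioural].sum()
mean_pred = weights @ sims[behavioural]             # likelihood-weighted prediction
print(f"{behavioural.sum()} behavioural of {len(sims)} realisations")
```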
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks.
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-09-18
Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensors networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.
High-speed data duplication/data distribution: An adjunct to the mass storage equation
NASA Technical Reports Server (NTRS)
Howard, Kevin
1993-01-01
The term 'mass storage' invokes the image of large on-site disk and tape farms which contain huge quantities of low- to medium-access data. Although the cost of such bulk storage is recognized, the cost of the bulk distribution of this data rarely is given much attention. Mass data distribution becomes an even more acute problem if the bulk data is part of a national or international system. If the bulk data distribution is to travel from one large data center to another large data center, then fiber-optic cable or satellite channels are feasible. However, if the distribution must be disseminated from a central site to a number of much smaller, and perhaps varying, sites, then cost prohibits the use of fiber-optic cable or satellite communication. Given these cost constraints, much of the bulk distribution of data will continue to be disseminated via inexpensive magnetic tape using the various next-day postal service options. For non-transmitted bulk data, our working hypotheses are that the desired duplication efficiency for the total bulk data should be established before selecting any particular data duplication system, and that the data duplication algorithm should be determined before any bulk data duplication method is selected.
NASA Astrophysics Data System (ADS)
Pool, D. R.; Scanlon, B. R.
2017-12-01
There is uncertainty about how storage change in confined and unconfined aquifers would register from space-based platforms such as the GRACE (Gravity Recovery and Climate Experiment) satellites. To address this concern, superposition groundwater models (MODFLOW) of equivalent storage change in simplified confined and unconfined aquifers of ~500 km extent (approximately 5×5 degrees at mid-latitudes) and uniform transmissivity were constructed. Gravity change resulting from the spatial distribution of aquifer storage change for each aquifer type was calculated at the initial GRACE satellite altitude (~500 km). To approximate real-world conditions, the confined aquifer includes a small region of unconfined conditions at one margin. A uniform storage coefficient (specific yield) was distributed across the unconfined aquifer. For both cases, storage change was produced by 1 year of groundwater withdrawal from identical aquifer-centered well distributions followed by decades of no withdrawal and redistribution of the initial storage loss toward a new steady-state condition. The transient simulated storage loss includes equivalent volumes for both conceptualizations, but spatial distributions differ because of the contrasting aquifer diffusivity (transmissivity/storativity). Much higher diffusivity in the confined aquifer results in more rapid storage redistribution across a much larger area than for the unconfined aquifer. After the 1 year of withdrawals, the two simulated storage loss distributions are primarily limited to small regions within the model extent. Gravity change after 1 year observed at the satellite altitude is similar for both aquifers, including maximum gravity reductions that are coincident with the aquifer center. With time, the maximum gravity reduction for the confined aquifer case shifts toward the aquifer margin by as much as ~200 km because of increased storage loss in the unconfined region. Results of the exercise indicate that GRACE observations are largely insensitive to confined or unconfined conditions for most aquifers. Lateral shifts in storage change with time in confined aquifers could be resolved by space-based gravity missions with durations of decades and improved spatial resolution of 1 degree or less (~100 km), over the GRACE resolution of 3 degrees (~300 km).
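A point-mass forward-gravity sketch of the kind of calculation described: sum the vertical attraction of gridded water-storage change at satellite altitude. The grid, the 0.5 m storage loss, and the disc geometry are invented; this does not reproduce the paper's MODFLOW output.

```python
import numpy as np

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
ALT = 500e3                   # satellite altitude (m)
cell = 50e3                   # 50 km grid cells

x = y = np.arange(-250e3, 300e3, cell)
X, Y = np.meshgrid(x, y)
# 0.5 m equivalent water loss within 150 km of the aquifer centre (assumed)
dS = np.where(X**2 + Y**2 < 150e3**2, -0.5, 0.0)

def dg_at(px, py):
    mass = dS * 1000.0 * cell**2                 # kg of water change per cell
    r2 = (X - px)**2 + (Y - py)**2 + ALT**2
    return np.sum(G * mass * ALT / r2**1.5)      # vertical component, m/s^2

print(dg_at(0.0, 0.0) * 1e8, "microGal over the aquifer centre")  # ~ -1 microGal
```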
NASA Astrophysics Data System (ADS)
Pfeiffer, Andrew; Wohl, Ellen
2018-01-01
We used 48 reach-scale measurements of large wood and wood-associated sediment and coarse particulate organic matter (CPOM) storage within an 80 km² catchment to examine spatial patterns of storage relative to stream order. Wood, sediment, and CPOM are not distributed uniformly across the drainage basin. Third- and fourth-order streams (23% of total stream length) disproportionately store wood and coarse and fine sediments: 55% of total wood volume, 78% of coarse sediment, and 49% of fine sediment, respectively. Fourth-order streams store 0.8 m³ of coarse sediment and 0.2 m³ of fine sediment per cubic meter of wood. CPOM storage is highest in first-order streams (60% of storage in 47% of total network stream length). First-order streams can store up to 0.3 m³ of CPOM for each cubic meter of wood. Logjams in third- and fourth-order reaches are primary sediment storage agents, whereas roots in small streams may be more important for storage of CPOM. We propose the large wood particulate storage index to quantify average volume of sediment or CPOM stored by a cubic meter of wood.
NASA Technical Reports Server (NTRS)
Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary
1996-01-01
We have developed and deployed a distributed-parallel storage system (DPSS) in several high-speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high-speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large-scale, high-speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high-speed distributed applications. Finally, the DPSS is part of an overall architecture for using high-speed WANs to enable the routine, location-independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities' clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read and write data from the federated storage.
Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.
Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen
2015-01-01
Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
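A compact sketch of the two steps the abstract names, with an invented access log: build the access-correlation matrix from sessions, then greedily place strongly co-accessed tiles on different nodes so they can be fetched in parallel (the paper's heuristic is more elaborate).

```python
import numpy as np

# Each session lists the geospatial tiles requested together (invented log).
sessions = [[0, 1, 2], [1, 2, 3], [0, 2], [2, 3, 4], [0, 1]]
n_tiles, n_nodes = 5, 3

# Step 1: access-correlation matrix from co-access counts.
C = np.zeros((n_tiles, n_tiles), dtype=int)
for s in sessions:
    for i in s:
        for j in s:
            if i != j:
                C[i, j] += 1

# Step 2: greedy placement, most frequently co-accessed tiles first; a node's
# penalty is its correlation with tiles already stored there.
placement = {}
for tile in np.argsort(-C.sum(axis=1)):
    penalty = [sum(C[tile, t] for t, nd in placement.items() if nd == node)
               for node in range(n_nodes)]
    placement[int(tile)] = int(np.argmin(penalty))
print(placement)   # co-accessed tiles land on different nodes -> parallel I/O
```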
Supporting large scale applications on networks of workstations
NASA Technical Reports Server (NTRS)
Cooper, Robert; Birman, Kenneth P.
1989-01-01
Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.
NASA Astrophysics Data System (ADS)
Zhang, Min; Yang, Feng; Zhang, Dongqing; Tang, Pengcheng
2018-02-01
When a large number of electric vehicles are connected to the family microgrid, the operational safety of the power grid and the power quality will be affected. Considering family microgrid electricity prices and the role of the electric vehicle as a distributed energy storage device, a two-stage optimization model is established, and an improved discrete binary particle swarm optimization algorithm is used to optimize the parameters in the model. The proposed control strategy for electric vehicle charging and discharging is of practical significance for the rational control of electric vehicles as distributed energy storage devices and for their participation in peak load regulation.
Cooperative Management of a Lithium-Ion Battery Energy Storage Network: A Distributed MPC Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Huazhen; Wu, Di; Yang, Tao
2016-12-12
This paper presents a study of cooperative power supply and storage for a network of Lithium-ion battery energy storage systems (LiBESSs). We propose to develop a distributed model predictive control (MPC) approach for two reasons. First, because it can account for the practical constraints of a LiBESS, MPC enables constraint-aware operation. Second, distributed management can cope with a complex network that integrates a large number of LiBESSs over a complex communication topology. With this motivation, we build a fully distributed MPC algorithm from an optimization perspective, based on an extension of the alternating direction method of multipliers (ADMM). A simulation example is provided to demonstrate the effectiveness of the proposed algorithm.
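A minimal consensus-sharing ADMM sketch (after Boyd et al. 2011, Sec. 7.3) for a single time step of the allocation problem: split a requested discharge across LiBESSs with quadratic degradation costs and power limits. The costs, limits, and penalty parameter are assumptions; the paper's algorithm covers a full MPC horizon and richer constraints.

```python
import numpy as np

a = np.array([1.0, 2.0, 0.5, 1.5])    # per-unit degradation cost coefficients
cap = np.array([3.0, 2.0, 4.0, 2.5])  # discharge limits (kW)
d, rho, N = 6.0, 1.0, 4               # total demand, ADMM penalty, fleet size

x, u = np.zeros(N), 0.0               # local powers and scaled dual variable
for _ in range(200):
    # Local target from the sharing step (z-bar fixed at d/N by the constraint)
    v = x - x.mean() + d / N - u
    # Closed-form local solve of min a_i*x^2 + (rho/2)(x - v_i)^2 over [0, cap]
    x = np.clip(rho * v / (2 * a + rho), 0.0, cap)
    u += x.mean() - d / N             # dual update enforcing sum(x) == d
print(x, x.sum())                     # cheaper units carry more of the demand
```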
NASA Technical Reports Server (NTRS)
1980-01-01
Twenty-four functional requirements were prepared under six categories and serve to indicate how to integrate dispersed storage and generation (DSG) systems with the distribution and other portions of the electric utility system. Results indicate that there are no fundamental technical obstacles to prevent the connection of dispersed storage and generation to the distribution system. However, a communication system of some sophistication is required to integrate the distribution system and the dispersed generation sources for effective control. The wide size range of generators, from 10 kW to 30 MW, means that a variety of remote monitoring and control may be required. Increased effort is required to develop demonstration equipment to perform the DSG monitoring and control functions and to acquire experience with this equipment in the utility distribution environment.
NASA Astrophysics Data System (ADS)
Wang, B.; Bauer, S.; Pfeiffer, W. T.
2015-12-01
Large-scale energy storage will be required to mitigate offsets between electric energy demand and the fluctuating electric energy production from renewable sources like wind farms, if renewables come to dominate the energy supply. Porous formations in the subsurface could provide the large storage capacities required if chemical energy carriers, such as hydrogen gas produced during phases of energy surplus, are stored. This work assesses the behavior of a porous-media hydrogen storage operation through numerical scenario simulation of a synthetic, heterogeneous sandstone formation formed by an anticlinal structure. The structural model is parameterized using data available for the North German Basin as well as data given for formations with similar characteristics. Based on the geological setting at the storage site, a total of 15 facies distributions is generated and the hydrological parameters are assigned accordingly. Hydraulic parameters are spatially distributed according to the facies present and include permeability, porosity, relative permeability and capillary pressure. The storage is designed to supply energy in times of deficiency on the order of seven days, which represents the typical time span of weather conditions with no wind. It is found that using five injection/extraction wells, 21.3 million sm³ of hydrogen gas can be stored and retrieved to supply 62,688 MWh of energy within 7 days. This requires a ratio of working to cushion gas of 0.59. The retrievable energy within this time represents the demand of about 450,000 people. Furthermore, it is found that for longer storage times larger gas volumes have to be used, and for higher delivery rates the number of wells additionally has to be increased. The formation investigated here thus seems to offer sufficient capacity and deliverability to be used for a large-scale hydrogen gas storage operation.
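A back-of-envelope check that the quoted figures are self-consistent, assuming the lower heating value of hydrogen (~3.0 kWh per standard cubic metre; our assumption, not a number from the paper):

```python
LHV_KWH_PER_SM3 = 3.0                  # assumed lower heating value, ~10.8 MJ/sm3

working_gas_sm3 = 21.3e6
energy_mwh = working_gas_sm3 * LHV_KWH_PER_SM3 / 1000.0
print(energy_mwh)                      # ~63,900 MWh, close to the quoted 62,688 MWh

per_person_daily_kwh = 62_688e3 / 7 / 450_000
print(per_person_daily_kwh)            # ~19.9 kWh per person per day of supply
```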
Benchmarking distributed data warehouse solutions for storing genomic variant information
Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.
2017-01-01
Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of large genomic variant databases to this problem has not been sufficiently explored so far in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management System (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with large generated content of genomic variants and phenotypic data. Next, we have benchmarked performance of a number of combinations of distributed storages and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) has been used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, significantly improve query performance by several orders of magnitude. Most of the distributed back-ends offer a good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto and Parquet with Spark 2 query engines provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees a sub-second performance for simple genome range queries returning a small subset of data, where low-latency response is expected, while still offering decent performance for running analytical queries. In summary, research and clinical applications that require the storage and analysis of variants from thousands of samples can benefit from the scalability and performance of distributed data warehouse solutions. Database URL: https://github.com/ZSI-Bio/variantsdwh PMID:29220442
Efficient numerical simulation of heat storage in subsurface georeservoirs
NASA Astrophysics Data System (ADS)
Boockmeyer, A.; Bauer, S.
2015-12-01
The transition of the German energy market towards renewable energy sources, e.g. wind or solar power, requires energy storage technologies to compensate for their fluctuating production. Large amounts of energy could be stored in georeservoirs such as porous formations in the subsurface. One possibility here is to store heat with high temperatures of up to 90°C through borehole heat exchangers (BHEs), since more than 80% of the total energy consumption in German households is used for heating and hot water supply. Within the ANGUS+ project, potential environmental impacts of such heat storages are assessed and quantified. Numerical simulations are performed to predict storage capacities, storage cycle times, and induced effects. For simulation of these highly dynamic storage sites, detailed high-resolution models are required. We set up a model that accounts for all components of the BHE and verified it using experimental data. The model ensures accurate simulation results but also leads to large numerical meshes and thus high simulation times. In this work, we therefore present a numerical model for each type of BHE (single U, double U and coaxial) that reduces the number of elements and the simulation time significantly for use in larger-scale simulations. The numerical model includes all BHE components and represents the temporal and spatial temperature distribution with an accuracy of less than 2% deviation from the fully discretized model. By changing the BHE geometry and using equivalent parameters, the simulation time is reduced by a factor of ~10 for single U-tube BHEs, ~20 for double U-tube BHEs and ~150 for coaxial BHEs. Results of a sensitivity study that quantifies the effects of different design and storage formation parameters on temperature distribution and storage efficiency for heat storage using multiple BHEs are then shown. It is found that storage efficiency strongly depends on the number of BHEs composing the storage site, their distance and the cycle time. The temperature distribution is most sensitive to the thermal conductivity of both the borehole grouting and the storage formation, while storage efficiency is mainly controlled by the thermal conductivity of the storage formation.
Gerber, Daniel L.; Vossos, Vagelis; Feng, Wei; ...
2017-06-12
Direct current (DC) power distribution has recently gained traction in buildings research due to the proliferation of on-site electricity generation and battery storage, and an increasing prevalence of internal DC loads. The research discussed in this paper uses Modelica-based simulation to compare the efficiency of DC building power distribution with an equivalent alternating current (AC) distribution. The buildings are all modeled with solar generation, battery storage, and loads that are representative of the most efficient building technology. A variety of parametric simulations determine how and when DC distribution proves advantageous. These simulations also validate previous studies that use simpler approaches and arithmetic efficiency models. This work shows that using DC distribution can be considerably more efficient: a medium-sized office building using DC distribution has an expected baseline of 12% savings, but may also save up to 18%. In these results, the baseline simulation parameters are for a zero net energy (ZNE) building that can island as a microgrid. DC is most advantageous in buildings with large solar capacity, large battery capacity, and high-voltage distribution.
Integrated Micro-Power System (IMPS) Development at NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Wilt, David; Hepp, Aloysius; Moran, Matt; Jenkins, Phillip; Scheiman, David; Raffaelle, Ryne
2003-01-01
Glenn Research Center (GRC) has a long history of energy-related technology developments for large space power systems, including photovoltaics, thermo-mechanical energy conversion, electrochemical energy storage, mechanical energy storage, power management and distribution, and power system design. Recently, many of these technologies have begun to be adapted for small, distributed power system applications, or Integrated Micro-Power Systems (IMPS). This paper describes the IMPS component and system demonstration efforts to date.
A Rich Metadata Filesystem for Scientific Data
ERIC Educational Resources Information Center
Bui, Hoang
2012-01-01
As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…
Implementation of a Campuswide Distributed Mass Storage Service: the Dream Versus Reality
NASA Technical Reports Server (NTRS)
Prahst, Stephen; Armstead, Betty Jo
1996-01-01
In 1990, a technical team at NASA Lewis Research Center, Cleveland, Ohio, began defining a Mass Storage Service to provide long-term archival storage, short-term storage for very large files, distributed Network File System access, and backup services for critical data that resides on workstations and personal computers. Because of software availability and budgets, the total service was phased in over several years. During the process of building the service from the commercial technologies available, our Mass Storage Team refined the original vision and learned from the problems and mistakes that occurred. We also enhanced some technologies to better meet the needs of users and system administrators. This report describes our team's journey from dream to reality, outlines some of the problem areas that still exist, and suggests some solutions.
Digital Library Storage using iRODS Data Grids
NASA Astrophysics Data System (ADS)
Hedges, Mark; Blanke, Tobias; Hasan, Adil
Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g. audio-visual material). The Rule Engine built into iRODS is used to integrate complex workflows at the data level that need not be visible to users, e.g. digital preservation functionality.
Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro
The ever-growing data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-I/O-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
Floodplain dynamics control the age distribution of organic carbon in large rivers
NASA Astrophysics Data System (ADS)
Torres, M. A.; Limaye, A. B. S.; Ganti, V.; West, A. J.; Fischer, W. W.; Lamb, M. P.
2016-12-01
As sediments transit through river systems, they are temporarily stored within floodplains. This storage is important for geochemical cycles because it imparts a certain cadence to weathering processes and organic carbon cycling. However, the time and length scales over which these processes operate are poorly known. To address this, we developed a model for the distribution of storage times in floodplains and used it to make predictions of the age distribution of riverine particulate organic carbon (POC) that can be compared with data from a range of rivers. Using statistics generated from a numerical model of river meandering that accounts for the rates of lateral channel migration and the lengths of channel needed to exchange the sediment flux with the floodplain, we estimated the distribution of sediment storage times. Importantly, this approach consistently yields a heavy-tailed distribution of storage times. This finding, based on comprehensive simulations of a wide range of river conditions, arises because of geometrical constraints that lead to the preferential erosion and reworking of young deposits. To benchmark our model, we compared our results with meteoric ¹⁰Be data (a storage time proxy) from Amazonian rivers. Our model correctly predicts observed ¹⁰Be concentrations, and consequently appears to capture the correct characteristic timescales associated with floodplain storage. By coupling a simple model of carbon cycling with our floodplain storage model, we are able to make predictions about the radiocarbon content of riverine POC. We observe that floodplains with greater storage times tend to have biospheric POC with a lower radiocarbon content (after correcting bulk ages for contribution from radiocarbon-dead petrogenic carbon). This result confirms that storage plays a key role in setting the age of POC transported by rivers with important implications for the dynamics of the global carbon cycle.
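A toy reservoir simulation of the mechanism proposed above: if erosion preferentially removes younger deposits, the recorded storage times develop a heavy tail. The 1/(age+1) erosion weighting, pool size, and annual flux are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
ages = np.zeros(2000, dtype=int)      # floodplain parcels, age since deposition
storage_times = []

for year in range(10_000):
    ages += 1
    w = 1.0 / (ages + 1.0)            # younger deposits are eroded preferentially
    eroded = rng.choice(ages.size, 20, replace=False, p=w / w.sum())
    storage_times.extend(ages[eroded])
    ages[eroded] = 0                  # eroded parcels replaced by fresh deposits

st = np.array(storage_times)
# Median far below the 99th percentile indicates the heavy tail
print(st.mean(), np.median(st), np.percentile(st, 99))
```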
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Peter
2016-07-26
This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. The use of reactive power control to regulate the voltage is not always an optimal solution, as R/X is large in distribution systems. In this paper, the minimum size and the best place of battery storage are achieved by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI), based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.
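A sketch of the impedance-matrix idea: estimate the voltage change from a storage injection analytically as dV = Z_bus · dI rather than running a recursive power flow. The 3-bus Z_bus, the flat starting profile, and the current-injection linearization are illustrative assumptions.

```python
import numpy as np

# Invented 3-bus network impedance matrix (p.u.); note the high R/X ratios
# typical of distribution feeders, so active power strongly affects voltage.
Zbus = np.array([[0.10+0.08j, 0.05+0.04j, 0.03+0.02j],
                 [0.05+0.04j, 0.15+0.10j, 0.06+0.05j],
                 [0.03+0.02j, 0.06+0.05j, 0.20+0.12j]])

v0 = np.ones(3, dtype=complex)        # flat pre-injection voltage profile (assumed)

def voltage_change(bus, p, q):
    """Approximate dV for injecting p + jq at one bus, using the
    current-injection linearization dI ~ conj(S/V) with V ~ 1 p.u."""
    dI = np.zeros(3, dtype=complex)
    dI[bus] = np.conj(p + 1j * q)
    return Zbus @ dI

# Battery at bus 2 injecting both active and reactive power:
print(np.abs(v0 + voltage_change(2, p=0.10, q=0.05)))
```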
Discussion on joint operation of wind farm and pumped-storage hydroplant
NASA Astrophysics Data System (ADS)
Li, Caifang; Wu, Yichun; Liang, Hao; Li, Miao
2017-12-01
Due to the random fluctuations in wind power, large amounts of grid integration will have a negative impact on grid operation and on consumers. Joint operation with a pumped-storage hydroplant with good peak-shaving performance can effectively reduce the negative impact on the safety and economic operation of the power grid, and improve the utilization of wind power. In addition, joint operation can achieve the optimization of green power and improve the comprehensive economic benefits. In practice, a rational profit distribution for joint operation is the premise of sustainable and stable cooperation. This paper focuses on the profit distribution of joint operation and applies an improved Shapley value method, which takes the investments and the contributions of each participant in the cooperation into account, to determine the profit distribution. Moreover, the distribution scheme can provide an effective reference for the actual joint operation of a wind farm and a pumped-storage hydroplant.
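For reference, the plain (unimproved) Shapley value for a two-player wind/pumped-storage game, with made-up coalition profits; the paper's improved variant additionally weights investments and contributions.

```python
from itertools import permutations
from math import factorial

players = ["wind", "pumped_storage"]
# Coalition profits (invented): together they earn more than the sum of parts.
value = {(): 0.0, ("wind",): 60.0, ("pumped_storage",): 30.0,
         ("pumped_storage", "wind"): 110.0}

def v(coalition):
    return value[tuple(sorted(coalition))]

# Shapley value: average marginal contribution over all join orders.
n_orders = factorial(len(players))
shapley = dict.fromkeys(players, 0.0)
for order in permutations(players):
    seen = []
    for p in order:
        shapley[p] += (v(seen + [p]) - v(seen)) / n_orders
        seen.append(p)
print(shapley)   # {'wind': 70.0, 'pumped_storage': 40.0}; shares sum to 110
```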
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the best-known global environmental problems. It has altered the spatial and temporal distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can yield better results than lumped models. However, watershed hydrological process simulation involves a large amount of calculation, especially in large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. To solve this problem, current parallel methods mostly parallelize the computation in the space and time dimensions: they calculate the natural features in order, based on a distributed hydrological model, by grid (unit, sub-basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of distributed hydrological models with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, which means it can make full use of the available computing and storage resources under the condition of limited computing resources, and the computing efficiency improves nearly linearly as computing resources increase. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.
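A sketch of the upstream-to-downstream scheduling idea: group sub-basins into topological levels so that each level depends only on earlier ones, then run each level's units concurrently. The 6-unit network is invented, and simulate_unit is a stand-in for the hydrological model.

```python
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

downstream = {1: 4, 2: 4, 3: 5, 4: 6, 5: 6, 6: None}   # unit -> receiving unit

def levels(downstream):
    upstream = defaultdict(list)
    for u, d in downstream.items():
        if d is not None:
            upstream[d].append(u)
    memo = {}
    def depth(u):  # 1 + longest upstream chain
        if u not in memo:
            memo[u] = 1 + max((depth(p) for p in upstream[u]), default=0)
        return memo[u]
    by_level = defaultdict(list)
    for u in downstream:
        by_level[depth(u)].append(u)
    return [by_level[k] for k in sorted(by_level)]

def simulate_unit(u):
    return u, f"runoff for unit {u}"    # stand-in for the hydrological model

if __name__ == "__main__":
    for level in levels(downstream):     # [[1, 2, 3], [4, 5], [6]]
        with ProcessPoolExecutor() as pool:
            print(dict(pool.map(simulate_unit, level)))
```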
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
Hierarchical storage of large volumes of multidetector CT data using distributed servers
NASA Astrophysics Data System (ADS)
Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David
2006-03-01
Multidetector scanners and hybrid multimodality scanners can generate a large number of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images and 3D renderings, as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing, without the need for long-term storage in a PACS archive. With the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers from Apple with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called "Bonjour". This architecture offers seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.
Redox Flow Batteries, Hydrogen and Distributed Storage.
Dennison, C R; Vrubel, Heron; Amstutz, Véronique; Peljo, Pekka; Toghill, Kathryn E; Girault, Hubert H
2015-01-01
Social, economic, and political pressures are causing a shift in the global energy mix, with a preference toward renewable energy sources. In order to realize widespread implementation of these resources, large-scale storage of renewable energy is needed. Among the proposed energy storage technologies, redox flow batteries offer many unique advantages. The primary limitation of these systems, however, is their limited energy density which necessitates very large installations. In order to enhance the energy storage capacity of these systems, we have developed a unique dual-circuit architecture which enables two levels of energy storage; first in the conventional electrolyte, and then through the formation of hydrogen. Moreover, we have begun a pilot-scale demonstration project to investigate the scalability and technical readiness of this approach. This combination of conventional energy storage and hydrogen production is well aligned with the current trajectory of modern energy and mobility infrastructure. The combination of these two means of energy storage enables the possibility of an energy economy dominated by renewable resources.
NASA Technical Reports Server (NTRS)
Kanerva, P.
1986-01-01
To determine the relation of the sparse, distributed memory to other architectures, a broad review of the literature was made. The memory is called a pattern memory because it works with large patterns of features (high-dimensional vectors). A pattern is stored in a pattern memory by distributing it over a large number of storage elements and by superimposing it over other stored patterns. A pattern is retrieved by mathematical or statistical reconstruction from the distributed elements. Three pattern memories are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Xinhua; Zhou, Zhongkang; Chen, Xiaochun; Song, Jishuang; Shi, Maolin
2017-05-01
A hybrid energy storage system is proposed based on NaS batteries and lithium-ion batteries: the former is the main large-scale energy storage technology used and developed worldwide, and the latter is a flexible way to provide both power and energy capacity. The hybrid energy storage system, which takes advantage of the two complementary technologies to provide large power and energy capacities, is evaluated economically and environmentally based on critical excess electricity production (CEEP), CO2 emissions, and annual total costs, calculated for the specific given conditions using the EnergyPLAN software. The results show that the hybrid storage system has strengths in environmental benefits, can absorb more discarded wind power than a single storage system, and is a potential way to push forward the application of wind power and other types of renewable energy resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyer, James M.; Erdman, Bill; Iannucci, Joseph J., Jr.
2005-03-01
This report describes Phase III of a project entitled Innovative Applications of Energy Storage in a Restructured Electricity Marketplace. For this study, the authors assumed that it is feasible to operate an energy storage plant simultaneously for two primary applications: (1) energy arbitrage, i.e., buy-low-sell-high, and (2) reducing peak loads in utility "hot spots" such that the utility can defer its need to upgrade transmission and distribution (T&D) equipment. The benefits from the arbitrage plus T&D deferral applications were estimated for five cases based on the specific requirements of two large utilities operating in the Eastern U.S. A number of parameters were estimated for the storage plant ratings required to serve the combined application: power output (capacity) and energy discharge duration (energy storage). In addition to estimating the various financial expenditures and the value of electricity that could be realized in the marketplace, technical characteristics required for grid-connected distributed energy storage used for capacity deferral were also explored.
Mass storage technology in networks
NASA Astrophysics Data System (ADS)
Ishii, Katsunori; Takeda, Toru; Itao, Kiyoshi; Kaneko, Reizo
1990-08-01
Trends and features of mass storage subsystems in networks are surveyed and their key technologies spotlighted. Storage subsystems are becoming increasingly important in new network systems in which communications and data processing are systematically combined. These systems require a new class of high-performance mass-information storage in order to effectively utilize their processing power. The requirements of high transfer rates, high transaction rates and large storage capacities, coupled with high functionality, fault tolerance and flexibility in configuration, are major challenges in storage subsystems. Recent progress in optical disk technology has improved the performance of on-line external memories based on optical disk drives, which are competing with mid-range magnetic disks. Optical disks are more effective than magnetic disks for low-traffic random-access files storing multimedia data that require large capacity, such as archive use and information distribution by ROM disks. Finally, image-coded document file servers for local area network use that employ 130-mm rewritable magneto-optical disk subsystems are demonstrated.
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
Coordinated Collaboration between Heterogeneous Distributed Energy Resources
Abdollahy, Shahin; Lavrova, Olga; Mammoli, Andrea
2014-01-01
A power distribution feeder where a heterogeneous set of distributed energy resources is deployed is examined by simulation. The energy resources include PV, battery storage, a natural gas GenSet, fuel cells, and active thermal storage for commercial buildings. The resource scenario considered is one that may exist in a not too distant future. Two cases of interaction between different resources are examined. One interaction involves a GenSet used to partially offset the duty cycle of a smoothing battery connected to a large PV system. The other example involves the coordination of twenty thermal storage devices, each associated with a commercial building. Storage devices are intended to provide maximum benefit to the building, but it is shown that this can have a deleterious effect on the overall system unless the action of the individual storage devices is coordinated. A network-based approach is also introduced to assign an effectiveness metric to all available resources that take part in coordinated operation. The main finding is that it is possible to achieve synergy between DERs on a system; however, this requires a unified strategy to coordinate the action of all devices in a decentralized way.
Santucci, Claudio; Tenori, Leonardo; Luchinat, Claudio
2015-09-01
The time-related changes of three agricultural products, coming from two distribution routes, have been followed using NMR fingerprinting to monitor metabolic variations occurring during several days of cold storage. An NMR profiling approach was employed to evaluate the variations in metabolic profile and metabolite content in three different agricultural products highly consumed in Italy (peaches, tomatoes and plums) coming from Tuscan farms, and how they change with time after collection. For each product, we followed the time-related changes during cold storage along three different collection periods. We monitored the variations in metabolic fingerprint and the trend of a set of metabolites, focusing our attention on nutritive and health-promoting metabolites (mainly essential amino acids and antioxidants) as well as metabolites that contribute to taste. Concurrently, for comparison, the time-dependent changes of the same kinds of products coming from large-scale distribution were analyzed under the same conditions. In this second category, only slight variations in the metabolic fingerprint and metabolite levels were seen during cold storage. Unsupervised and supervised multivariate statistics were also employed to highlight the differences between the three collections. In particular, the metabolic fingerprint of large-scale distribution products is quite similar in the early, middle and late collections, while peaches and plums collected locally are markedly different among the three periods. The metabolic profiles of the agricultural products belonging to these two distribution routes are intrinsically different, and they show different changes during the time of cold storage.
Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks
NASA Astrophysics Data System (ADS)
Langner, Tobias; Schindelhauer, Christian; Souza, Alexander
We consider an optimisation problem which is motivated from storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises how it should be fragmented and distributed among the servers to grant "optimal" access to the contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
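The scaling idea can be illustrated with a toy model: a monotone feasibility test for a transfer-time bound, combined with bisection. The per-server capacity rule below, where a fragment is limited by the minimum of its upload-in and download-out rate, is a simplifying assumption and not the paper's exact max-flow formulation.

```python
def feasible(T, servers):
    """Toy feasibility test for a transfer-time bound T: a fragment on
    server i is limited by both its upload-in and its download-out
    rate, so it can hold at most min(u_i, d_i) * T of the unit file."""
    return sum(min(u, d) * T for u, d in servers) >= 1.0

def min_transfer_time(servers, tol=1e-9):
    """Bisection on the monotone feasibility predicate, mirroring the
    scaling argument described in the abstract (simplified model)."""
    lo, hi = 0.0, 1.0
    while not feasible(hi, servers):
        hi *= 2.0                     # grow until a feasible bound exists
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible(mid, servers):
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical (upload, download) bandwidths for three servers, file size 1.
print(min_transfer_time([(0.2, 1.0), (0.5, 0.3), (1.0, 0.8)]))
```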
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new method of processing technologies based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the computing nodes of cluster resources and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it invokes many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability, and superiority of the system design were verified by building an actual Hadoop service system, testing the storage efficiency for different image data and multiple users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.
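A minimal, pure-Python stand-in for the MapReduce tiling step may help fix ideas: mappers key image blocks by pyramid level and tile coordinates, and a reducer groups the blocks belonging to one tile. The field names and tile size are illustrative assumptions, not the system's actual schema.

```python
from collections import defaultdict

def map_block(block):
    """Mapper: emit a (level, tile_x, tile_y) key for an image block."""
    level, size = block["level"], block["tile_size"]
    key = (level, block["x"] // size, block["y"] // size)
    return key, block["data"]

def reduce_tiles(pairs):
    """Reducer: group the blocks that make up each pyramid tile."""
    tiles = defaultdict(list)
    for key, data in pairs:
        tiles[key].append(data)
    return tiles

# Illustrative blocks along one row of a level-0 image.
blocks = [{"level": 0, "tile_size": 256, "x": x, "y": 0, "data": b"..."}
          for x in range(0, 1024, 256)]
print(list(reduce_tiles(map(map_block, blocks)).keys()))
# [(0, 0, 0), (0, 1, 0), (0, 2, 0), (0, 3, 0)]
```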
Dynamic tuning of optical absorbers for accelerated solar-thermal energy storage.
Wang, Zhongyong; Tong, Zhen; Ye, Qinxian; Hu, Hang; Nie, Xiao; Yan, Chen; Shang, Wen; Song, Chengyi; Wu, Jianbo; Wang, Jun; Bao, Hua; Tao, Peng; Deng, Tao
2017-11-14
Currently, solar-thermal energy storage within phase-change materials relies on adding high thermal-conductivity fillers to improve the thermal-diffusion-based charging rate, which often leads to limited enhancement of charging speed and sacrificed energy storage capacity. Here we report the exploration of a magnetically enhanced photon-transport-based charging approach, which enables the dynamic tuning of the distribution of optical absorbers dispersed within phase-change materials, to simultaneously achieve fast charging rates, large phase-change enthalpy, and high solar-thermal energy conversion efficiency. Compared with conventional thermal charging, the optical charging strategy improves the charging rate by more than 270% and triples the amount of overall stored thermal energy. This superior performance results from the distinct step-by-step photon-transport charging mechanism and the increased latent heat storage through magnetic manipulation of the dynamic distribution of optical absorbers.
LIQHYSMES - spectral power distributions of imbalances and implications for the SMES
NASA Astrophysics Data System (ADS)
Sander, M.; Gehring, R.; Neumann, H.
2014-05-01
LIQHYSMES, the recently proposed hybrid energy storage concept for variable renewable energies, combines the storage of LIQuid HYdrogen (LH2) with Superconducting Magnetic Energy Storage (SMES). LH2, as the bulk energy carrier, is used for large-scale stationary longer-term energy storage, and the SMES, cooled by the LH2 bath, provides the highest power over shorter periods and at superior efficiencies. Together they contribute to the balancing of electric load or supply fluctuations from seconds to several hours, days or even weeks. Here, different spectral power distributions of such imbalances between electricity supply and load, reflecting different sources of fluctuations in the range between 1 s and 15 minutes, are considered. Some related implications for an MgB2-based 100 MW SMES operated at maximum fields of 2 T and 4 T are considered for these buffering scenarios. Requirements as regards the storage capacity, and correspondingly the minimum size of the LH2 storage tank, are derived. The related loss contributions, with a particular focus on the ramping losses, are analysed.
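To illustrate the frequency-domain split between the two storage paths, the sketch below low-pass filters a synthetic imbalance signal with a moving average, assigning the slow residual to the LH2 path and the fast component to the SMES. The 60-second split point and the signal itself are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0                                   # s, sample interval
t = np.arange(0, 900, dt)                  # 15 minutes of signal
# Synthetic supply/load imbalance: slow swing plus fast noise.
imbalance = 20 * np.sin(2 * np.pi * t / 600) + rng.normal(0, 5, t.size)

window = int(60 / dt)                      # assumed 60 s split point
kernel = np.ones(window) / window
slow = np.convolve(imbalance, kernel, mode="same")   # -> LH2 path
fast = imbalance - slow                              # -> SMES path

print(f"SMES peak power share: {np.abs(fast).max():.1f} MW (synthetic)")
```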
Understanding I/O workload characteristics of a Peta-scale storage system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul
2015-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications of one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for the peta-scale storage systems. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
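As an illustration of the Pareto modeling step, the sketch below estimates the Pareto shape parameter from inter-arrival samples with the maximum-likelihood (Hill-type) estimator and then draws synthetic arrivals for a workload generator. The input data is randomly generated stand-in, not Spider measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in inter-arrival times: classical Pareto with x_m = 1, shape 1.8.
inter_arrival = rng.pareto(a=1.8, size=10_000) + 1.0

x_m = inter_arrival.min()                                # scale parameter
alpha = len(inter_arrival) / np.log(inter_arrival / x_m).sum()  # MLE shape
# Draw synthetic arrivals from the fitted Pareto(alpha, x_m).
synthetic = x_m * (1.0 + rng.pareto(alpha, size=1000))

print(f"estimated shape alpha = {alpha:.2f}, scale x_m = {x_m:.2f}")
```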
NASA Technical Reports Server (NTRS)
Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
This report contains copies of nearly all of the technical papers and viewgraphs presented at the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include the following: magnetic disk and tape technologies; optical disk and tape; software storage and file management systems; and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.
NASA Technical Reports Server (NTRS)
Kobler, Ben (Editor); Hariharan, P. C. (Editor); Blasso, L. G. (Editor)
1992-01-01
This report contains copies of nearly all of the technical papers and viewgraphs presented at the National Space Science Data Center (NSSDC) Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.
NASA Astrophysics Data System (ADS)
Read, A.; Taga, A.; Ould-Saada, F.; Pajchel, K.; Samset, B. H.; Cameron, D.
2008-07-01
Computing and storage resources connected by the NorduGrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to the ATLAS production with an efficiency above 95% during long periods of stable operation.
46 CFR 120.354 - Battery installations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...
46 CFR 129.356 - Battery installations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...
46 CFR 120.354 - Battery installations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...
46 CFR 129.356 - Battery installations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...
46 CFR 129.356 - Battery installations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...
46 CFR 129.356 - Battery installations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...
46 CFR 129.356 - Battery installations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...
46 CFR 120.354 - Battery installations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...
46 CFR 120.354 - Battery installations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...
46 CFR 120.354 - Battery installations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...
NASA Astrophysics Data System (ADS)
Skaugen, T.; Mengistu, Z.
2015-10-01
In this study we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the logs of successive runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff, and hence are estimated prior to calibration. Key principles guiding the evaluation of the new subsurface storage routine have been (a) to minimize the number of parameters estimated through the often arbitrary fitting used to optimize runoff predictions (calibration), and (b) to maximize the range of testing conditions (i.e. large-sample hydrology). The new storage routine has been implemented in the already parameter-parsimonious Distance Distribution Dynamics (DDD) model and tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures, DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage, were compared. No loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average Nash-Sutcliffe efficiency of 0.68 was found using the new estimated storage routine, compared with 0.66 using the calibrated storage routine. The average Kling-Gupta efficiency was 0.69 and 0.70 for the new and old storage routines, respectively. Runoff recessions are more realistically modelled using the new approach, since the root mean square error between the means of observed and simulated recessions was reduced by almost 50% using the new storage routine.
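The recession-based estimation can be sketched in a few lines: collect the declines in log-runoff on recession days, summarize their distribution, and tie mean storage to mean annual runoff via a linear-reservoir assumption. The runoff series and the steady-state relation used below are illustrative, not the DDD model's exact formulation.

```python
import numpy as np

# Synthetic stand-in runoff series (m3/s), not an actual catchment.
q = np.array([5.0, 4.2, 3.6, 3.1, 4.8, 4.0, 3.4, 2.9, 2.6])

dq = np.diff(np.log(q))
recessions = -dq[dq < 0]            # log-declines on recession days only

lam = recessions.mean()             # mean recession rate (per day)
mean_annual_runoff = q.mean()       # stand-in for the long-term mean
# Linear-reservoir assumption Q = lam * S  =>  steady-state storage:
mean_storage = mean_annual_runoff / lam

print(f"mean recession {lam:.3f}/day, mean storage ~ {mean_storage:.1f}")
```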
Possibility of the market expansion of large capacity optical cold archive
NASA Astrophysics Data System (ADS)
Matsumoto, Ikuo; Sakata, Emiko
2017-08-01
The field of IoT and big data, energized by the ICT revolution, has caused a rapid increase in the data distributed by various business applications. As a result, data with low access frequency has been growing rapidly, to a scale never experienced before. Such data is called "cold data", and the storage for cold data is called "cold storage". In this situation, the required storage specifications, including access frequency, response speed and cost, are determined by the application's requirements.
Acoustic Profiling of Bottom Sediments in Large Oil Storage Tanks
NASA Astrophysics Data System (ADS)
Svet, V. D.; Tsysar', S. A.
2018-01-01
Characteristic features of acoustic profiling of bottom sediments in large oil storage tanks are considered. Basic acoustic parameters of crude oil and bottom sediments are presented. It is shown that, because of the presence of transition layers in the crude oil and strong reverberation effects in oil tanks, the volume of bottom sediments calculated from an acoustic surface image is generally overestimated. To reduce the error, additional post-processing of acoustic profilometry data is proposed, in combination with additional measurements of the vertical distributions of viscosity and density at several points of the tank.
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.
Flexible C : N ratio enhances metabolism of large phytoplankton when resource supply is intermittent
NASA Astrophysics Data System (ADS)
Talmy, D.; Blackford, J.; Hardman-Mountford, N. J.; Polimene, L.; Follows, M. J.; Geider, R. J.
2014-04-01
Phytoplankton cell size influences particle sinking rate, food web interactions and biogeographical distributions. We present a model in which the uptake, storage and assimilation of nitrogen and carbon are explicitly resolved in different sized phytoplankton cells. In the model, metabolism and cellular C : N ratio are influenced by accumulation of carbon polymers such as carbohydrate and lipid, which is greatest when cells are nutrient starved, or exposed to high light. Allometric relations and empirical datasets are used to constrain the range of possible C : N, and indicate larger cells can accumulate significantly more carbon storage compounds than smaller cells. When forced with extended periods of darkness combined with brief exposure to saturating irradiance, the model predicts organisms large enough to accumulate significant carbon reserves may on average synthesize protein and other functional apparatus up to five times faster than smaller organisms. The advantage of storage in terms of average daily protein synthesis rate is greatest when modeled organisms were previously nutrient starved, and carbon storage reservoirs saturated. Small organisms may therefore be at a disadvantage in terms of average daily growth rate in environments that involve prolonged periods of darkness and intermittent nutrient limitation. We suggest this mechanism is a significant constraint on phytoplankton C : N variability and cell size distribution in different oceanic regimes.
CERN data services for LHC computing
NASA Astrophysics Data System (ADS)
Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.
2017-10-01
Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; and AFS for legacy distributed-file-system services. In this paper we summarise the experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment, with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.
Survey of Large Methane Emitters in North America
NASA Astrophysics Data System (ADS)
Deiker, S.
2017-12-01
It has been theorized that methane emissions in the oil and gas industry follow log-normal or "fat-tailed" distributions, with large numbers of small sources for every very large source. Such distributions would have significant policy and operational implications. Unfortunately, by their very nature such distributions require large sample sizes to verify. Until recently, such large-scale studies would have been prohibitively expensive. The largest public study to date sampled 450 wells, an order of magnitude too few to effectively constrain these models. During 2016 and 2017, Kairos Aerospace conducted a series of surveys with the LeakSurveyor imaging spectrometer, mounted on light aircraft. This small, lightweight instrument was designed to rapidly locate large emission sources. The resulting survey covers over three million acres of oil and gas production, including over 100,000 wells, thousands of storage tanks and over 7,500 miles of gathering lines. This data set allows us to probe the distribution of large methane emitters. Results of this survey, and implications for methane emission distributions, methane policy and LDAR, will be discussed.
Thermal performance and heat transport in aquifer thermal energy storage
NASA Astrophysics Data System (ADS)
Sommer, W. T.; Doornenbal, P. J.; Drijver, B. C.; van Gaans, P. F. M.; Leusbrock, I.; Grotenhuis, J. T. C.; Rijnaarts, H. H. M.
2014-01-01
Aquifer thermal energy storage (ATES) is used for seasonal storage of large quantities of thermal energy. Due to the increasing demand for sustainable energy, the number of ATES systems has increased rapidly, which has raised questions on the effect of ATES systems on their surroundings as well as their thermal performance. Furthermore, the increasing density of systems generates concern regarding thermal interference between the wells of one system and between neighboring systems. An assessment is made of (1) the thermal storage performance, and (2) the heat transport around the wells of an existing ATES system in the Netherlands. Reconstruction of flow rates and injection and extraction temperatures from hourly logs of operational data from 2005 to 2012 show that the average thermal recovery is 82 % for cold storage and 68 % for heat storage. Subsurface heat transport is monitored using distributed temperature sensing. Although the measurements reveal unequal distribution of flow rate over different parts of the well screen and preferential flow due to aquifer heterogeneity, sufficient well spacing has avoided thermal interference. However, oversizing of well spacing may limit the number of systems that can be realized in an area and lower the potential of ATES.
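The thermal recovery figures can be illustrated by the ratio of extracted to injected thermal energy computed from flow and temperature logs, as in the sketch below. The volumetric heat capacity, ambient temperature and log values are synthetic assumptions, not the monitored system's data.

```python
import numpy as np

RHO_C = 4.18e6                      # volumetric heat capacity, J/(m3 K)
T_AMBIENT = 12.0                    # assumed undisturbed aquifer temp, degC

def thermal_energy(flow_m3h, temp_c, hours=1.0):
    """Thermal energy moved in one log interval relative to ambient."""
    return RHO_C * flow_m3h * hours * abs(temp_c - T_AMBIENT)

# Two synthetic hourly log entries per phase: (flow m3/h, well temp degC).
injected = thermal_energy(np.array([80.0, 75.0]), np.array([16.0, 15.5])).sum()
extracted = thermal_energy(np.array([78.0, 70.0]), np.array([14.8, 14.2])).sum()
print(f"thermal recovery = {extracted / injected:.0%}")
```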
Workload Characterization of a Leadership Class Storage Cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M
2010-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.
Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets
NASA Astrophysics Data System (ADS)
Juric, Mario
2011-01-01
The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ("column groups"), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 gigarows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
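The per-cell kernel pattern can be sketched in plain Python: rows are binned into spatial cells, and a kernel maps each cell's rows to a result. The cell size, row format and the averaging kernel are illustrative assumptions, not LSD's actual schema or API.

```python
from collections import defaultdict

CELL_DEG = 10.0   # assumed cell size in degrees

def cell_of(row):
    """Bin a row into its (lon, lat) cell."""
    return (int(row["lon"] // CELL_DEG), int(row["lat"] // CELL_DEG))

def mapper(cell, rows):
    """Per-cell 'kernel': here, the mean magnitude within the cell."""
    return cell, sum(r["mag"] for r in rows) / len(rows)

rows = [{"lon": 11.2, "lat": 41.0, "mag": 19.5},
        {"lon": 12.9, "lat": 44.5, "mag": 20.1},
        {"lon": 203.4, "lat": -10.2, "mag": 18.7}]

cells = defaultdict(list)
for r in rows:
    cells[cell_of(r)].append(r)
print(dict(mapper(c, rs) for c, rs in cells.items()))
```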
NASA Technical Reports Server (NTRS)
Blackwell, Kim; Blasso, Len (Editor); Lipscomb, Ann (Editor)
1991-01-01
The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.
Changes in ocean circulation and carbon storage are decoupled from air-sea CO2 fluxes
NASA Astrophysics Data System (ADS)
Marinov, I.; Gnanadesikan, A.
2011-02-01
The spatial distribution of the air-sea flux of carbon dioxide is a poor indicator of the underlying ocean circulation and of ocean carbon storage. The weak dependence on circulation arises because mixing-driven changes in solubility-driven and biologically-driven air-sea fluxes largely cancel out. This cancellation occurs because mixing-driven increases in the poleward residual mean circulation result in more transport of both remineralized nutrients and heat from low to high latitudes. By contrast, increasing vertical mixing decreases the storage associated with both the biological and solubility pumps, as it decreases remineralized carbon storage in the deep ocean and warms the ocean as a whole.
Changes in ocean circulation and carbon storage are decoupled from air-sea CO2 fluxes
NASA Astrophysics Data System (ADS)
Marinov, I.; Gnanadesikan, A.
2010-11-01
The spatial distribution of the air-sea flux of carbon dioxide is a poor indicator of the underlying ocean circulation and of ocean carbon storage. The weak dependence on circulation arises because mixing-driven changes in solubility-driven and biologically-driven air-sea fluxes largely cancel out. This cancellation occurs because mixing-driven increases in the poleward residual mean circulation result in more transport of both remineralized nutrients and heat from low to high latitudes. By contrast, increasing vertical mixing decreases the storage associated with both the biological and solubility pumps, as it decreases remineralized carbon storage in the deep ocean and warms the ocean as a whole.
Storage and distribution of pathology digital images using integrated web-based viewing systems.
Marchevsky, Alberto M; Dulbandzhyan, Ronda; Seely, Kevin; Carey, Steve; Duncan, Raymond G
2002-05-01
Health care providers have expressed increasing interest in incorporating digital images of gross pathology specimens and photomicrographs in routine pathology reports. We describe the multiple technical and logistical challenges involved in integrating the various components needed to develop a system for integrated Web-based viewing, storage, and distribution of digital images in a large health system. An Oracle version 8.1.6 database was developed to store, index, and deploy pathology digital photographs via our Intranet; it allows for retrieval of images by patient demographics or by SNOMED code information. The setting is the Intranet of a large health system, accessible from multiple computers located within the medical center and at distant private physician offices. The images can be viewed using any workstation of the health system that has authorized access to our Intranet, using a standard browser or a browser configured with an external viewer or inexpensive plug-in software, such as Prizm 2.0. The images can be printed on paper or transferred to film using a digital film recorder. Digital images can also be displayed at pathology conferences by using wireless local area network (LAN) and secure remote technologies. The standardization of technologies and the adoption of a Web interface for all our computer systems allow us to distribute digital images from a pathology database to a potentially large group of users distributed in multiple locations throughout a large medical center.
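As a sketch of the indexing idea, the example below stores image records keyed by patient and SNOMED code and retrieves them by code, with SQLite standing in for the Oracle 8.1.6 database described above; the schema and values are illustrative assumptions.

```python
import sqlite3

# In-memory stand-in for the pathology image database.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE images (
    accession TEXT, patient_name TEXT, snomed_code TEXT, url TEXT)""")
con.execute("INSERT INTO images VALUES (?, ?, ?, ?)",
            ("S02-1234", "DOE, JANE", "T-28000", "http://intranet/img/1.jpg"))

# Retrieve image links by SNOMED code, as the abstract describes.
for row in con.execute(
        "SELECT accession, url FROM images WHERE snomed_code = ?",
        ("T-28000",)):
    print(row)
```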
A Technical Survey on Optimization of Processing Geo Distributed Data
NASA Astrophysics Data System (ADS)
Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.
2018-04-01
With growing cloud services and technology, there is growth in the number of geographically distributed data centers that store large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, and similar tasks; processing this geo-distributed data and performing analytics on it is challenging. Distributed data processing is accompanied by issues in storage, computation and communication, the key ones being time efficiency, cost minimization and utility maximization. This paper describes various optimization methods like end-to-end multiphase and G-MR, using techniques like MapReduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. The various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service with reduced computation and communication cost; and SAGE achieves performance improvement in processing geo-distributed data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHenry, Mark P.; Johnson, Jay; Hightower, Mike
2016-01-01
The increasing pressure on network operators to meet distribution network power quality standards with increasing peak loads, renewable energy targets, and advances in automated distributed power electronics and communications is forcing policy-makers to understand new means to distribute costs and benefits within electricity markets. Discussions surrounding how distributed generation (DG) exhibits active voltage regulation, power factor/reactive power control, and other power quality capabilities are complicated by uncertainties about baseline local distribution network power quality and about to whom and how the costs and benefits of improved electricity infrastructure will be allocated. DG providing ancillary services that dynamically respond to network characteristics could lead to major network improvements. With proper market structures, renewable energy systems could greatly improve power quality on distribution systems at nearly no additional cost to grid operators. Renewable DG does have variability challenges, though this issue can be overcome with energy storage, forecasting, and advanced inverter functionality. This paper presents real data from a large-scale grid-connected PV array with large-scale storage and explores effective mitigation measures for PV system variability. We also discuss inverter technical knowledge useful for policy-makers to mitigate ongoing inflation of electricity network tariff components by new DG interconnection requirements, or electricity markets which value power quality and control.
Cyclic high temperature heat storage using borehole heat exchangers
NASA Astrophysics Data System (ADS)
Boockmeyer, Anke; Delfs, Jens-Olaf; Bauer, Sebastian
2016-04-01
The transition of the German energy supply towards mainly renewable energy sources like wind or solar power, termed the "Energiewende", makes energy storage a requirement in order to compensate for their fluctuating production and to ensure a reliable energy and power supply. One option is to store heat in the subsurface using borehole heat exchangers (BHEs). The efficiency of thermal storage increases with temperature, as heat at high temperatures is more easily injected and extracted than at near-ambient temperatures. This work aims at quantifying achievable storage capacities, storage cycle times, injection and extraction rates, as well as thermal and hydraulic effects induced in the subsurface for a BHE storage site in the shallow subsurface. To achieve these aims, simulations of these highly dynamic storage sites are performed. A detailed, high-resolution numerical simulation model was developed that accounts for all BHE components in geometrical detail and incorporates the governing processes. This model was verified using high-quality experimental data and achieves accurate simulation results with an excellent fit to the available experimental data, but it also leads to large computational times due to the large numerical meshes required for discretizing the highly transient effects. An approximate numerical model for each type of BHE (single-U, double-U and coaxial) that reduces the number of elements and the simulation time significantly was therefore developed for use in larger-scale simulations. The approximate numerical model still includes all BHE components and represents the temporal and spatial temperature distribution with a deviation of less than 2% from the fully discretized model. Simulation times are reduced by a factor of ~10 for single U-tube BHEs, ~20 for double U-tube BHEs and ~150 for coaxial BHEs. This model is then used to investigate achievable storage capacities, injection and extraction rates, and induced effects for varying storage cycle times, operating conditions and storage set-ups. A sensitivity analysis shows that storage efficiency strongly depends on the number of BHEs composing the storage site and on the cycle time. Using a half-yearly cycle of heat injection and extraction with the maximum possible rates shows that the fraction of recovered heat increases with the number of storage cycles, as initial losses due to heat conduction become smaller; overall recovery rates of 70 to 80% are possible in the set-ups investigated. The temperature distribution in the geological heat storage site is most sensitive to the thermal conductivity of both the borehole grouting and the storage formation, while storage efficiency is dominated by the thermal conductivity of the storage formation. For the long cycle times used (6 months each), heat capacity is less influential than thermal conductivity. Acknowledgments: This work is part of the ANGUS+ project (www.angusplus.de) and funded by the German Federal Ministry of Education and Research (BMBF) as part of the energy storage initiative "Energiespeicher".
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)
2002-01-01
This document contains copies of those technical papers received in time for publication prior to the Tenth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Nineteenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center April 15-18, 2002. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the ingest, storage, and management of large volumes of data. The Conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, future of current technology, storage networking with emphasis on IP storage, performance, standards, site reports, and vendor solutions. Tutorials will be available on perpendicular magnetic recording, object based storage, storage virtualization and IP storage.
Lessons Learned from the Puerto Rico Battery Energy Storage System
DOE Office of Scientific and Technical Information (OSTI.GOV)
BOYES, JOHN D.; DE ANA, MINDI FARBER; TORRES, WENCESLANO
1999-09-01
The Puerto Rico Electric Power Authority (PREPA) installed a distributed battery energy storage system in 1994 at a substation near San Juan, Puerto Rico. It was patterned after two other large energy storage systems operated by electric utilities in California and Germany. The U.S. Department of Energy (DOE) Energy Storage Systems Program at Sandia National Laboratories has followed the progress of all stages of the project since its inception. It directly supported the critical battery room cooling system design by conducting laboratory thermal testing of a scale model of the battery under simulated operating conditions. The Puerto Rico facility is at present the largest operating battery storage system in the world and is successfully providing frequency control, voltage regulation, and spinning reserve to the Caribbean island. The system further proved its usefulness to the PREPA network in the fall of 1998 in the aftermath of Hurricane Georges. The owner-operator, PREPA, and the architect/engineer, vendors, and contractors learned many valuable lessons during all phases of project development and operation. In documenting these lessons, this report will help PREPA and other utilities in planning to build large energy storage systems.
Study on parallel and distributed management of RS data based on spatial database
NASA Astrophysics Data System (ADS)
Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin
2009-10-01
With the rapid development of earth-observing technology, RS image data storage, management, and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a heavy burden is placed on the background server. Second, there is no unique, standard, and rational organization of multi-sensor RS data for storage and management, and much information is lost or not included at storage time. Facing these two problems, this paper puts forward a framework for parallel and distributed management and storage of RS image data. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Aiming at these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data for different resolutions, areas, bands, and periods is achieved. For data storage, RS data is not divided into binary large objects stored in a conventional relational database system; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. For the system architecture, this paper sets up a framework based on a parallel server of several commodity computers. Under this framework, the background process is divided into two parts: the common Web process and the parallel process.
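A composite key in the spirit of the "Pyramid, Block, Layer, Epoch" solid index might look like the following sketch; the field names, widths and key layout are assumptions for illustration, not the paper's exact structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SolidIndex:
    pyramid: int   # resolution level in the image pyramid
    block: int     # spatial block (tile) number at this level
    layer: int     # spectral band / layer
    epoch: int     # acquisition period

    def key(self) -> str:
        """Sortable composite key used to locate a tile."""
        return f"P{self.pyramid:02d}-B{self.block:06d}" \
               f"-L{self.layer:02d}-E{self.epoch:04d}"

tile = SolidIndex(pyramid=3, block=15872, layer=4, epoch=2009)
print(tile.key())   # P03-B015872-L04-E2009
```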
Grid data access on widely distributed worker nodes using scalla and SRM
NASA Astrophysics Data System (ADS)
Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.
2008-07-01
Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now rely heavily on cheap disks attached to processing nodes, as such a model is far more economical than expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities, such as dynamic space allocation (lifetime of spaces), file management on shared storage (lifetime of files, file pinning), storage policies, or uniform access to heterogeneous storage solutions, is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing 350 TB of Storage Elements, and the experience of making such a model work in the RHIC/STAR standard analysis framework. We explain the key features and the approach that make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and compare this solution with the standard Scalla approach in use in STAR for the past two years. Integration details, future plans, and development status are explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools and implementations.
NASA Astrophysics Data System (ADS)
Wong, Jianhui; Lim, Yun Seng; Morris, Stella; Morris, Ezra; Chua, Kein Huat
2017-04-01
The amount of small-scale renewable energy sources on low-voltage distribution networks is anticipated to increase, improving energy efficiency and reducing greenhouse gas emissions. The growth of PV systems on low-voltage distribution networks can create voltage unbalance, voltage rise, and reverse power flow. Usually these issues fluctuate little, but they can fluctuate severely in Malaysia, a region with a low clear-sky index: large amounts of cloud often pass over the country, making the solar irradiance highly scattered and the PV power output strongly fluctuating. These issues can lead to malfunction of electronic equipment, reduced network efficiency, and improper operation of the power protection system. Under current practice, the amount of PV capacity installed on the distribution network is constrained by the utility company, which limits the reduction of carbon footprint. An energy storage system is therefore proposed as a solution to these power quality issues. To ensure effective operation of a distribution network with PV, a fuzzy control system is developed and implemented to govern the operation of the energy storage system. The fuzzy-driven energy storage system mitigates fluctuating voltage rise and voltage unbalance on the electrical grid by actively manipulating the flow of real power between the grid and the batteries. To verify the effectiveness of the proposed solution, an experimental network integrated with a 7.2 kWp PV system was set up, and several case studies evaluate its ability to mitigate voltage rise and voltage unbalance and to reduce reverse power flow under highly intermittent PV output.
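The abstract does not spell out the controller's rule base, so the following is only a rough illustration of how such a voltage-driven fuzzy dispatch could look; the membership breakpoints, the power limit, and all names are invented assumptions, not the authors' design:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership function (illustrative)."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def storage_power_setpoint(v_pu, p_max=3.6):
    """Map measured grid voltage (per unit) to a battery real-power
    setpoint in kW: charge when voltage is high (PV surplus), discharge
    when it is low. Breakpoints and p_max are assumed toy values."""
    high = tri(v_pu, 1.00, 1.05, 1.10)   # membership in "voltage high"
    low = tri(v_pu, 0.90, 0.95, 1.00)    # membership in "voltage low"
    return p_max * (high - low)          # >0 charges the battery, <0 discharges
```

At 1.05 p.u., for example, this sketch commands full charging (+3.6 kW), damping the voltage rise; a production controller would add further inputs (unbalance, state of charge) and a proper rule table.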
Understanding Customer Dissatisfaction with Underutilized Distributed File Servers
NASA Technical Reports Server (NTRS)
Riedel, Erik; Gibson, Garth
1996-01-01
An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew file system (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.
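As a hedged illustration of the quoted "variation explained" statistic (the traces below are invented placeholders, not the paper's measurements): the figure is simply the squared correlation between adjusted client response time and server CPU utilization.

```python
import numpy as np

# Placeholder per-interval traces, after adjusting for network load and
# traffic to non-project servers (values are invented for illustration).
cpu_util = np.array([0.02, 0.10, 0.45, 0.80, 0.30, 0.65])
resp_time = np.array([0.011, 0.013, 0.030, 0.055, 0.024, 0.041])

r = np.corrcoef(cpu_util, resp_time)[0, 1]
print(f"variance explained: R^2 = {r**2:.2f}")  # ~0.5 in the paper's data
```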
Large-scale atomistic simulations of helium-3 bubble growth in complex palladium alloys
Hale, Lucas M.; Zimmerman, Jonathan A.; Wong, Bryan M.
2016-05-18
Palladium is an attractive material for hydrogen and hydrogen-isotope storage applications due to its properties of large storage density and high diffusion of lattice hydrogen. When considering tritium storage, the material's structural and mechanical integrity is threatened by both the embrittlement effect of hydrogen and the creation and evolution of additional crystal defects (e.g., dislocations, stacking faults) caused by the formation and growth of helium-3 bubbles. Using recently developed inter-atomic potentials for the palladium-silver-hydrogen system, we perform large-scale atomistic simulations to examine the defect-mediated mechanisms that govern helium bubble growth. Our simulations show the evolution of a distribution of material defects, and we compare the material behavior displayed with expectations from experiment and theory. Finally, we present density functional theory calculations to characterize ideal tensile and shear strengths for these materials, which enable the understanding of how and why our developed potentials either meet or confound these expectations.
Flexible C : N ratio enhances metabolism of large phytoplankton when resource supply is intermittent
NASA Astrophysics Data System (ADS)
Talmy, D.; Blackford, J.; Hardman-Mountford, N. J.; Polimene, L.; Follows, M. J.; Geider, R. J.
2014-09-01
Phytoplankton cell size influences particle sinking rate, food web interactions and biogeographical distributions. We present a model in which the uptake, storage and assimilation of nitrogen and carbon are explicitly resolved in different-sized phytoplankton cells. In the model, metabolism and cellular C : N ratio are influenced by the accumulation of carbon polymers such as carbohydrate and lipid, which is greatest when cells are nutrient starved, or exposed to high light. Allometric relations and empirical data sets are used to constrain the range of possible C : N, and indicate that larger cells can accumulate significantly more carbon storage compounds than smaller cells. When forced with extended periods of darkness combined with brief exposure to saturating irradiance, the model predicts organisms large enough to accumulate significant carbon reserves may on average synthesize protein and other functional apparatus up to five times faster than smaller organisms. The advantage of storage in terms of average daily protein synthesis rate is greatest when modeled organisms were previously nutrient starved, and carbon storage reservoirs saturated. Small organisms may therefore be at a disadvantage in terms of average daily growth rate in environments that involve prolonged periods of darkness and intermittent nutrient limitation. We suggest this mechanism is a significant constraint on phytoplankton C : N variability and cell size distribution in different oceanic regimes.
Aggregation of carbon dioxide sequestration storage assessment units
Blondes, Madalyn S.; Schuenemeyer, John H.; Olea, Ricardo A.; Drew, Lawrence J.
2013-01-01
The U.S. Geological Survey is currently conducting a national assessment of carbon dioxide (CO2) storage resources, mandated by the Energy Independence and Security Act of 2007. Pre-emission capture and storage of CO2 in subsurface saline formations is one potential method to reduce greenhouse gas emissions and the negative impact of global climate change. Like many large-scale resource assessments, the area under investigation is split into smaller, more manageable storage assessment units (SAUs), which must be aggregated with correctly propagated uncertainty to the basin, regional, and national scales. The aggregation methodology requires two types of data: marginal probability distributions of storage resource for each SAU, and a correlation matrix obtained by expert elicitation describing interdependencies between pairs of SAUs. Dependencies arise because geologic analogs, assessment methods, and assessors often overlap. The correlation matrix is used to induce rank correlation, using a Cholesky decomposition, among the empirical marginal distributions representing individually assessed SAUs. This manuscript presents a probabilistic aggregation method tailored to the correlations and dependencies inherent to a CO2 storage assessment. Aggregation results must be presented at the basin, regional, and national scales. A single stage approach, in which one large correlation matrix is defined and subsets are used for different scales, is compared to a multiple stage approach, in which new correlation matrices are created to aggregate intermediate results. Although the single-stage approach requires determination of significantly more correlation coefficients, it captures geologic dependencies among similar units in different basins and it is less sensitive to fluctuations in low correlation coefficients than the multiple stage approach. Thus, subsets of one single-stage correlation matrix are used to aggregate to basin, regional, and national scales.
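The Cholesky-based rank-correlation step can be sketched as follows. This is an Iman-Conover-style illustration under assumed inputs (a positive-definite correlation matrix and equally likely draws from each SAU's marginal); all names are invented and this is not the USGS code:

```python
import numpy as np

def aggregate_saus(marginal_samples, corr, n=100_000, seed=0):
    """Sum SAU storage distributions with rank correlation induced via a
    Cholesky decomposition of `corr`. `marginal_samples` is a list of 1-D
    arrays of equally likely draws from each SAU's marginal distribution."""
    rng = np.random.default_rng(seed)
    k = len(marginal_samples)
    L = np.linalg.cholesky(corr)                 # corr must be positive definite
    scores = rng.standard_normal((n, k)) @ L.T   # correlated normal scores
    ranks = scores.argsort(axis=0).argsort(axis=0)
    out = np.empty((n, k))
    for j, m in enumerate(marginal_samples):
        draws = np.sort(rng.choice(m, size=n, replace=True))
        out[:, j] = draws[ranks[:, j]]           # reorder draws to match score ranks
    return out.sum(axis=1)                       # aggregate (e.g., basin-scale) draws
```

Summing the reordered columns row-wise yields an empirical aggregate distribution whose spread reflects the elicited inter-SAU dependencies rather than assuming independence.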
Chelonia: A self-healing, replicated storage system
NASA Astrophysics Data System (ADS)
Kerr Nilsen, Jon; Toor, Salman; Nagy, Zsombor; Read, Alex
2011-12-01
Chelonia is a novel grid storage system designed to fill the requirements gap between large, sophisticated scientific collaborations that have adopted the grid paradigm for their distributed storage needs, and corporate business communities gravitating towards the cloud paradigm. Chelonia is an integrated system of heterogeneous, geographically dispersed storage sites that is easily and dynamically expandable and optimized for high availability and scalability. The architecture and implementation, in terms of web services running inside the Advanced Resource Connector Hosting Environment Daemon (ARC HED), are described, and results of tests in both local-area and wide-area networks demonstrating the fault tolerance, stability, and scalability of Chelonia are presented. In addition, example setups for production deployments for small and medium-sized VOs are described.
Bibliography On Multiprocessors And Distributed Processing
NASA Technical Reports Server (NTRS)
Miya, Eugene N.
1988-01-01
Multiprocessor and Distributed Processing Bibliography package consists of a large machine-readable bibliographic data base which, in addition to supporting the usual keyword searches, is used for producing citations, indexes, and cross-references. The data base contains UNIX(R) "refer"-formatted ASCII data and can be implemented on any computer running the UNIX(R) operating system; it is easily convertible to other operating systems. Requires approximately one megabyte of secondary storage. Bibliography compiled in 1985.
LSD: Large Survey Database framework
NASA Astrophysics Data System (ADS)
Juric, Mario
2012-09-01
The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures.
Research on the Orientation and Application of Distributed Energy Storage in Energy Internet
NASA Astrophysics Data System (ADS)
Zeng, Ming; Zhou, Pengcheng; Li, Ran; Zhou, Jingjing; Chen, Tao; Li, Zhe
2018-01-01
Energy storage is an indispensable resource for achieving a high proportion of new-energy power consumption in the electric power system. As an important support for the energy Internet, energy storage systems can integrate the operation of a variety of energy sources to maximize energy efficiency. In this paper, the SWOT method is first used to analyze the internal and external advantages and disadvantages of distributed energy storage participating in the energy Internet. Second, the functional orientation of distributed energy storage in the energy Internet is studied, on which basis its application modes in virtual power plants, community energy storage, and ancillary services are examined in depth. Finally, the paper puts forward a development strategy for distributed energy storage suited to the development of China's energy Internet, and summarizes and looks ahead to applications of distributed energy storage systems.
Tetzlaff, D; Birkel, C; Dick, J; Geris, J; Soulsby, C
2014-01-01
We examined the storage dynamics and isotopic composition of soil water over 12 months in three hydropedological units in order to understand runoff generation in a montane catchment. The units form classic catena sequences from freely draining podzols on steep upper hillslopes through peaty gleys in shallower lower slopes to deeper peats in the riparian zone. The peaty gleys and peats remained saturated throughout the year, while the podzols showed distinct wetting and drying cycles. In this region, most precipitation events are <10 mm in magnitude, and storm runoff is mainly generated from the peats and peaty gleys, with runoff coefficients (RCs) typically <10%. In larger events the podzolic soils become strongly connected to the saturated areas, and RCs can exceed 40%. Isotopic variations in precipitation are significantly damped in the organic-rich soil surface horizons due to mixing with larger volumes of stored water. This damping is accentuated in the deeper soil profile and groundwater. Consequently, the isotopic composition of stream water is also damped, but the dynamics strongly reflect those of the near-surface waters in the riparian peats. “Pre-event” water typically accounts for >80% of flow, even in large events, reflecting the displacement of water from the riparian soils that has been stored in the catchment for >2 years. These riparian areas are the key zone where different source waters mix. Our study is novel in showing that they act as “isostats,” not only regulating the isotopic composition of stream water, but also integrating the transit time distribution for the catchment. Key points: hillslope connectivity is controlled by small storage changes in soil units; different catchment source waters mix in large riparian wetland storage; isotopes show riparian wetlands set the catchment transit time distribution. PMID: 25506098
Bradley, D. Nathan; Tucker, Gregory E.
2013-01-01
A sediment particle traversing the fluvial system may spend the majority of the total transit time at rest, stored in various sedimentary deposits. Floodplains are among the most important of these deposits, with the potential to store large amounts of sediment for long periods of time. The virtual velocity of a sediment grain depends strongly on the amount of time spent in storage, but little is known about sediment storage times. Measurements of floodplain vegetation age have suggested that storage times are exponentially distributed, a case that arises when all the sediment on a floodplain is equally vulnerable to erosion in a given interval. This assumption has been incorporated into sediment routing models, despite some evidence that younger sediment is more likely to be eroded from floodplains than older sediment. We investigate the relationship between sediment age and erosion, which we term the “erosion hazard,” with a model of a meandering river that constructs its floodplain by lateral accretion. We find that the erosion hazard decreases with sediment age, leading to a storage time distribution that is not exponential. We propose an alternate model that requires that channel motion is approximately diffusive and results in a heavy tailed distribution of storage time. The model applies to timescales over which the direction of channel motion is uncorrelated. We speculate that the lower end of this range of time is set by the meander cutoff timescale and the upper end is set by processes that limit the width of the meander belt.
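The heavy-tail argument rests on diffusive channel motion. As a toy stand-in (not the authors' meander model), a plain one-dimensional random walk already shows why an age-independent erosion hazard fails: the first return time of a diffusive walk has a survival probability that falls off like a power law, not exponentially.

```python
import numpy as np

rng = np.random.default_rng(1)

def storage_time(max_steps=100_000):
    """Steps until a unit random walk (the wandering channel) returns to
    the position where a deposit was laid down at time zero."""
    pos, t = 0, 0
    while t < max_steps:
        pos += rng.choice((-1, 1))
        t += 1
        if pos == 0:            # channel re-occupies the deposit: erosion
            return t
    return max_steps            # right-censored: deposit still in storage

times = np.array([storage_time() for _ in range(2_000)])
# The empirical survival curve P(T > t) decays like t**-0.5 (heavy tail):
# old deposits become ever less likely to be eroded, unlike the constant
# hazard that an exponential storage-time model assumes.
```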
An object-based storage model for distributed remote sensing images
NASA Astrophysics Data System (ADS)
Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng
2006-10-01
It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct-attached storage, network-attached storage, and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path, and the management path, which removes the metadata bottleneck of traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligence of storage devices, and security of data access. We apply object-based storage to the storage management of remote sensing images and construct an object-based storage model for distributed remote sensing images. In the storage model, remote sensing images are organized as remote sensing objects stored in object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give some test results comparing the write performance of the traditional network storage model and the object-based storage model.
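A minimal sketch of the separation the abstract describes (control path vs. data path); class and method names here are illustrative, not the paper's implementation:

```python
class MetadataServer:
    """Control path: resolves an object ID to the device holding it."""
    def __init__(self):
        self.index = {}
    def register(self, oid, device):
        self.index[oid] = device
    def locate(self, oid):
        return self.index[oid]

class ObjectDevice:
    """Data path: an intelligent device storing object bytes plus attributes."""
    def __init__(self):
        self.objects = {}
    def put(self, oid, data, attrs):
        self.objects[oid] = (data, attrs)
    def get(self, oid):
        return self.objects[oid]

# A client asks the metadata server once, then transfers image data
# directly with the device, so metadata handling no longer sits on the
# data path as it does in a conventional file server.
```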
Evolution of the use of relational and NoSQL databases in the ATLAS experiment
NASA Astrophysics Data System (ADS)
Barberis, D.
2016-09-01
The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.
Mean PB To Failure - Initial results from a long-term study of disk storage patterns at the RACF
NASA Astrophysics Data System (ADS)
Caramarcu, C.; Hollowell, C.; Rao, T.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, S. A.
2015-12-01
The RACF (RHIC-ATLAS Computing Facility) has operated a large, multi-purpose dedicated computing facility since the mid-1990s, serving a worldwide, geographically diverse scientific community that is a major contributor to various HEPN projects. A central component of the RACF is the Linux-based worker node cluster that is used for both computing and data storage purposes. It currently has nearly 50,000 computing cores and over 23 PB of storage capacity distributed over 12,000+ (non-SSD) disk drives. The majority of the 12,000+ disk drives provide a cost-effective solution for dCache/XRootD-managed storage, and a key concern is the reliability of this solution over the lifetime of the hardware, particularly as the number of disk drives and the storage capacity of individual drives grow. We report initial results of a long-term study to measure lifetime PB read/written to disk drives in the worker node cluster. We discuss the historical disk drive mortality rate, disk drive manufacturers' published MPTF (Mean PB to Failure) data, and how they correlate with our results. The results help the RACF understand the productivity and reliability of its storage solutions and have implications for other highly available storage systems (NFS, GPFS, CVMFS, etc.) with large I/O requirements.
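A sketch of the headline metric (the record layout and numbers are assumptions, not the RACF's monitoring schema): an empirical mean PB to failure is just the average of lifetime bytes written by drives at the moment they failed.

```python
import numpy as np

# Invented per-drive counters: total PB written when each drive failed.
pb_at_failure = np.array([1.8, 2.4, 0.9, 3.1, 2.2])

print(f"empirical MPTF = {pb_at_failure.mean():.2f} PB")
# Caveat: drives still in service are right-censored observations;
# ignoring them biases a naive mean low, which is one reason a
# long-term study is needed before comparing against vendor MPTF data.
```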
ERIC Educational Resources Information Center
Black, Claudia
Libraries are becoming information access points, not just book repositories. With greater distribution of printed materials, increased use of optical disks and other compact storage techniques, the emergence of publication on demand, and the proliferation of electronic databases, libraries without large collections will be able to provide prompt…
Hydrogen Storage Materials for Mobile and Stationary Applications: Current State of the Art.
Lai, Qiwen; Paskevicius, Mark; Sheppard, Drew A; Buckley, Craig E; Thornton, Aaron W; Hill, Matthew R; Gu, Qinfen; Mao, Jianfeng; Huang, Zhenguo; Liu, Hua Kun; Guo, Zaiping; Banerjee, Amitava; Chakraborty, Sudip; Ahuja, Rajeev; Aguey-Zinsou, Kondo-Francois
2015-09-07
One of the limitations to the widespread use of hydrogen as an energy carrier is its storage in a safe and compact form. Herein, recent developments in effective high-capacity hydrogen storage materials are reviewed, with a special emphasis on light compounds, including those based on organic porous structures, boron, nitrogen, and aluminum. These elements and their related compounds hold the promise of high, reversible, and practical hydrogen storage capacity for mobile applications, including vehicles and portable power equipment, but also for the large scale and distributed storage of energy for stationary applications. Current understanding of the fundamental principles that govern the interaction of hydrogen with these light compounds is summarized, as well as basic strategies to meet practical targets of hydrogen uptake and release. The limitation of these strategies and current understanding is also discussed and new directions proposed.
Redox flow cell energy storage systems
NASA Technical Reports Server (NTRS)
Thaller, L. H.
1979-01-01
The redox flow cell energy storage system being developed by NASA for use in remote power systems and distributed storage installations for electric utilities is presented. The system under consideration is an electrochemical storage device which utilizes the oxidation and reduction of two fully soluble redox couples (acidified chloride solutions of chromium and iron) as active electrode materials separated by a highly selective ion exchange membrane. The reactants are contained in large storage tanks and pumped through a stack of redox flow cells where the electrochemical reactions take place at porous carbon felt electrodes. The redox design has allowed the incorporation of state-of-charge readout, stack voltage control, and system capacity maintenance (rebalance) devices that regulate the cells in a stack jointly. A 200 W, 12 V system with a capacity of about 400 Wh has been constructed, and a 2 kW, 10 kWh system is planned.
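Worked from the figures above, the nominal full-power discharge durations are E/P = 400 Wh / 200 W = 2 h for the demonstrator and 10 kWh / 2 kW = 5 h for the planned system.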
Energy Management and Optimization Methods for Grid Energy Storage Systems
Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.; ...
2017-08-24
Today, the stability of the electric power grid is maintained through real time balancing of generation and demand. Grid scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.
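As one deliberately tiny instance of the optimization methods surveyed, single-battery price arbitrage can be posed as a linear program. The sizes, prices, and efficiency below are invented toy values, and cvxpy is simply one convenient solver interface, not the paper's software:

```python
import numpy as np
import cvxpy as cp

T = 24
price = 2.0 + np.sin(np.linspace(0, 2 * np.pi, T))  # toy hourly prices ($/kWh)
cap, p_max, eta = 10.0, 2.0, 0.9                    # kWh, kW, one-way efficiency

c = cp.Variable(T, nonneg=True)    # charging power (kW)
d = cp.Variable(T, nonneg=True)    # discharging power (kW)
s = cp.Variable(T + 1)             # state of charge (kWh)

constraints = [
    s[0] == cap / 2, s[-1] == cap / 2,      # end the day where it started
    s[1:] == s[:-1] + eta * c - d / eta,    # storage dynamics with losses
    s >= 0, s <= cap, c <= p_max, d <= p_max,
]
prob = cp.Problem(cp.Maximize(price @ (d - c)), constraints)
prob.solve()
print(f"daily arbitrage profit: ${prob.value:.2f}")
```

A real EMS stacks several such services (regulation, peak shaving) into one program and re-solves it as prices and forecasts update.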
Roberts-Ashby, Tina; Brandon N. Ashby,
2016-01-01
This paper demonstrates a geospatial modification of the USGS methodology for assessing geologic CO2 storage resources, applied to the Pre-Punta Gorda Composite and Dollar Bay reservoirs of the South Florida Basin. The study provides detailed evaluation of porous intervals within these reservoirs and utilizes GIS to evaluate the potential spatial distribution of reservoir parameters and the volume of CO2 that can be stored. It also shows that incorporating spatial variation of parameters using detailed and robust datasets may improve estimates of storage resources compared with applying uniform values, derived from small datasets, across the study area, as many assessment methodologies do. The geospatially derived estimates of storage resources presented here (Pre-Punta Gorda Composite = 105,570 MtCO2; Dollar Bay = 24,760 MtCO2) were greater than previous assessments, largely because the detailed evaluation of these reservoirs yielded higher estimates of porosity and net-porous thickness, and because areas of high porosity and thick net-porous intervals were incorporated into the model, increasing the calculated volume of storage space available for CO2 sequestration. The geospatial method also provides the ability to identify areas that potentially contain higher volumes of storage resources, as well as areas that might be less favorable.
Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.
2017-01-07
Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated for the first time an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob
Electricity generated by Hydropower Plants (HPPs) contributes a considerable portion of bulk electricity generation and delivers it with a low carbon footprint. In fact, HPPs provide the largest share of electricity generation from renewable energy resources, which include solar and wind energy. The increasing penetration of wind and solar leads to lowered inertia in the grid and hence poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments in power grids. Multiple run-of-river (ROR) HPPs can be integrated with scalable, multi-time-step energy storage so that their total output can be controlled. Although the size of a single energy storage unit is far smaller than that of a typical reservoir, cohesively managing multiple sets of energy storage distributed across different locations is proposed; the combined ratings of the storage units and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. The challenges associated with the system architecture and operation are described. Energy storage technologies such as supercapacitors, flywheels, and batteries can function as a dispatchable synthetic reservoir of scalable size; supercapacitors, flywheels, and batteries are chosen to provide fast, medium, and slow responses, respectively, to support grid requirements. Various dynamic and transient power grid conditions are simulated, and the performance of ROR HPPs integrated with energy storage is presented. The end goal of this research is to investigate the inertial equivalence of a large, conventional HPP with a unique set of multiple ROR HPPs and optimally rated energy storage systems.
Hydrogen Storage Technologies for Future Energy Systems.
Preuster, Patrick; Alekseev, Alexander; Wasserscheid, Peter
2017-06-07
Future energy systems will be determined by the increasing relevance of solar and wind energy. Crude oil and gas prices are expected to increase in the long run, and penalties for CO2 emissions will become a relevant economic factor. Solar- and wind-powered electricity will become significantly cheaper, such that hydrogen produced from electrolysis will be competitively priced against hydrogen manufactured from natural gas. However, to handle the unsteadiness of system input from fluctuating energy sources, energy storage technologies that cover the full scale of power (in megawatts) and energy storage amounts (in megawatt hours) are required. Hydrogen, in particular, is a promising secondary energy vector for storing, transporting, and distributing large and very large amounts of energy at the gigawatt-hour and terawatt-hour scales. However, we also discuss energy storage at the 120-200-kWh scale, for example, for onboard hydrogen storage in fuel cell vehicles using compressed hydrogen storage. This article focuses on the characteristics and development potential of hydrogen storage technologies in light of such a changing energy system and its related challenges. Technological factors that influence the dynamics, flexibility, and operating costs of unsteady operation are therefore highlighted in particular. Moreover, the potential for using renewable hydrogen in the mobility sector, industrial production, and the heat market is discussed, as this potential may determine to a significant extent the future economic value of hydrogen storage technology as it applies to other industries. This evaluation elucidates known and well-established options for hydrogen storage and may guide the development and direction of newer, less developed technologies.
Inagaki, Taichi; Ishida, Toyokazu
2016-09-14
Thermal storage, a technology that enables us to control thermal energy, makes it possible to reuse a huge amount of waste heat, and materials with the ability to treat larger thermal energy are in high demand for energy-saving societies. Sugar alcohols are now one promising candidate for phase change materials (PCMs) because of their large thermal storage density. In this study, we computationally design experimentally unknown non-natural sugar alcohols and predict their thermal storage density as a basic step toward the development of new high performance PCMs. The non-natural sugar alcohol molecules are constructed in silico in accordance with the previously suggested molecular design guidelines: linear elongation of a carbon backbone, separated distribution of OH groups, and even numbers of carbon atoms. Their crystal structures are then predicted using the random search method and first-principles calculations. Our molecular simulation results clearly demonstrate that the non-natural sugar alcohols have potential ability to have thermal storage density up to ∼450-500 kJ/kg, which is significantly larger than the maximum thermal storage density of the present known organic PCMs (∼350 kJ/kg). This computational study suggests that, even in the case of H-bonded molecular crystals where the electrostatic energy contributes mainly to thermal storage density, the molecular distortion and van der Waals energies are also important factors to increase thermal storage density. In addition, the comparison between the three eight-carbon non-natural sugar alcohol isomers indicates that the selection of preferable isomers is also essential for large thermal storage density.
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)
1998-01-01
This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.
NASA Technical Reports Server (NTRS)
Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)
2000-01-01
This document contains copies of those technical papers received in time for publication prior to the Eighth Goddard Conference on Mass Storage Systems and Technologies which is being held in cooperation with the Seventeenth IEEE Symposium on Mass Storage Systems at the University of Maryland University College Inn and Conference Center March 27-30, 2000. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long term retention of data, and data distribution. This year's discussion topics include architecture, future of current technology, new technology with a special emphasis on holographic storage, performance, standards, site reports, vendor solutions. Tutorials will be available on stability of optical media, disk subsystem performance evaluation, I/O and storage tuning, functionality and performance evaluation of file systems for storage area networks.
Using data tagging to improve the performance of Kanerva's sparse distributed memory
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The standard formulation of Kanerva's sparse distributed memory (SDM) involves the selection of a large number of data storage locations, followed by averaging the data contained in those locations to reconstruct the stored data. A variant of this model is discussed, in which the predominant pattern is the focus of reconstruction. First, one architecture is proposed which returns the predominant pattern rather than the average pattern. However, this model will require too much storage for most uses. Next, a hybrid model is proposed, called tagged SDM, which approximates the results of the predominant pattern machine, but is nearly as efficient as Kanerva's original formulation. Finally, some experimental results are shown which confirm that significant improvements in the recall capability of SDM can be achieved using the tagged architecture.
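For context on the baseline being modified, here is a minimal sketch of Kanerva's SDM write/read cycle (dimensions and the activation radius are illustrative). The thresholded counter sum below is the "average pattern" readout the abstract refers to; the tagged variant (not shown) instead aims to recover the single most common stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, D, R = 256, 2000, 256, 110    # address bits, hard locations, data bits, radius

addresses = rng.integers(0, 2, (M, N))   # fixed random hard-location addresses
counters = np.zeros((M, D), dtype=int)   # per-location, per-bit counters

def active(addr):
    """Hard locations within Hamming distance R of the query address."""
    return (addresses != addr).sum(axis=1) <= R

def write(addr, data):
    sel = active(addr)
    counters[sel] += np.where(data == 1, 1, -1)   # +1 for a 1 bit, -1 for a 0 bit

def read(addr):
    sel = active(addr)
    return (counters[sel].sum(axis=0) > 0).astype(int)   # bitwise-average readout
```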
Impact of Data Placement on Resilience in Large-Scale Object Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carns, Philip; Harms, Kevin; Jenkins, John
Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
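CRUSH itself is considerably more elaborate, but its core idea, replica placement computed deterministically and pseudo-randomly from the object name so that no central lookup table must be rebuilt after failures, can be sketched with rendezvous (highest-random-weight) hashing; this is an illustration, not the CRUSH algorithm:

```python
import hashlib

def place(object_id, servers, n_replicas=3):
    """Rank servers by a per-object pseudo-random score and keep the top
    n_replicas. Any client computes the same placement independently."""
    def score(server):
        return hashlib.sha256(f"{object_id}:{server}".encode()).digest()
    return sorted(servers, key=score, reverse=True)[:n_replicas]

servers = [f"osd{i}" for i in range(16)]
print(place("chunk-000042", servers))   # same answer on every client
```

When a server fails, only objects that ranked it among their top n move, which is exactly the placement property whose rebuild-time consequences the simulation study explores.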
Optimising LAN access to grid enabled storage elements
NASA Astrophysics Data System (ADS)
Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.
2008-07-01
When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.
NASA Astrophysics Data System (ADS)
Kies, Alexander; Nag, Kabitri; von Bremen, Lueder; Lorenz, Elke; Heinemann, Detlev
2015-04-01
The penetration of renewable energies in the European power system has increased in the last decades (a 23.5% share of renewables in the gross electricity consumption of the EU-28 in 2012) and is expected to increase further, up to very high shares close to 100%. Planning and organizing this European energy transition towards sustainable power sources will be one of the major challenges of the 21st century. It is very likely that in a fully renewable European power system, wind and photovoltaics (PV) will contribute the largest shares of the generation mix, followed by hydro power. However, because of the weather-dependent nature of their resources, feed-in from wind and PV is fluctuating and non-controllable. To match generation and consumption, several solutions and their combinations have been proposed, such as very high backup capacities of conventional power generation (e.g., fossil or nuclear), storage, or the extension of the transmission grid. Apart from those options, hydro power can be used to counterbalance fluctuating wind and PV generation to some extent. In this work we investigate the effects of hydro power from Norway and Sweden on residual storage needs in Europe, depending on the overlying grid scenario. Highly temporally and spatially resolved weather data, with a spatial resolution of 7 x 7 km and a temporal resolution of 1 hour, were used to model the feed-in from wind and PV for the 34 investigated European countries for the years 2003-2012. Inflow into hydro storages and generation by run-of-river power plants were computed from ERA-Interim reanalysis runoff data at a spatial resolution of 0.75° x 0.75° and a daily temporal resolution. Power flows in a simplified transmission grid connecting the 34 European countries were modelled by minimizing dissipation using a DC-flow approximation. Previous work has shown that hydro power, namely in Norway and Sweden, can reduce storage needs in a renewable European power system to a large extent: a 15% share of hydro power in Europe can reduce storage needs by up to 50% with respect to stored energy. This, however, requires large transmission capacities between the major hydro power producers in Scandinavia and the largest consumers of electrical energy in Western Europe. We show how Scandinavian hydro power can reduce storage needs in dependence on the transmission grid for two fully renewable scenarios: the first has its wind and PV generation capacities distributed according to an empirically derived approach, while the second has a spatial distribution of wind and PV generation capacities across Europe optimized to minimize storage needs. We show that in both cases hydro power, together with a well-developed transmission grid, has the potential to contribute a large share to the solution of the generation-consumption mismatch problem. The work is part of the RESTORE 2050 project (BMBF) that investigates the requirements for cross-country grid extensions, the usage of storage technologies and capacities, and the development of new balancing technologies.
Intermodal transport and distribution patterns in ports relationship to hinterland
NASA Astrophysics Data System (ADS)
Dinu, O.; Dragu, V.; Ruscă, F.; Ilie, A.; Oprea, C.
2017-08-01
It is of great importance to examine all interactions between ports, terminals, intermodal transport, and the logistics actors of distribution channels, as their optimization can lead to operational improvement. The paper starts with a brief overview of different goods types and the allocation of their logistics costs, with emphasis on the storage component. The present trend is to optimize storage costs by means of the buffer function of the port storage area, making the best use of the free storage time that most ports offer. As a research methodology, the starting point is to consider the cost structure of a generic intermodal transport (storage, handling, and transport costs) and to link this to the intermodal distribution patterns most frequently employed in the port's relationship to its hinterland. The next step is to evaluate the impact of storage costs on distribution pattern selection: for a given value of port free storage time, a corresponding value of total storage time in the distribution channel can be identified that substantiates a distribution pattern shift. Different scenarios for the variation of transport and handling costs recorded when distribution patterns shift are integrated in order to establish the reaction of the actors involved in port-related logistics, and the evolution of intermodal transport costs is analysed in order to optimize distribution pattern selection.
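A toy version of that break-even reasoning (all values and the linear tariff are assumptions for illustration): port storage is only billed beyond the free-time window, so the total cost of a pattern climbs once dwell time exceeds it.

```python
def pattern_cost(transport, handling, dwell_days, free_days, storage_rate):
    """Total cost of one consignment under a given distribution pattern;
    storage is charged only for dwell beyond the port's free time."""
    billed = max(0, dwell_days - free_days)
    return transport + handling + billed * storage_rate

# Buffering in the port beats direct inland storage until billed dwell
# erodes its transport/handling advantage:
via_port = pattern_cost(900, 200, dwell_days=12, free_days=7, storage_rate=40)
inland = pattern_cost(1000, 250, dwell_days=12, free_days=0, storage_rate=15)
print(via_port, inland)   # 1300 vs 1430: the shift point moves with free time
```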
Earthquake mechanism and predictability shown by a laboratory fault
King, C.-Y.
1994-01-01
Slip events generated in a laboratory fault model consisting of a circulinear chain of eight spring-connected blocks of approximately equal weight elastically driven to slide on a frictional surface are studied. It is found that most of the input strain energy is released by a relatively few large events, which are approximately time predictable. A large event tends to roughen stress distribution along the fault, whereas the subsequent smaller events tend to smooth the stress distribution and prepare a condition of simultaneous criticality for the occurrence of the next large event. The frequency-size distribution resembles the Gutenberg-Richter relation for earthquakes, except for a falloff for the largest events due to the finite energy-storage capacity of the fault system. Slip distributions in different events are commonly dissimilar. Stress drop, slip velocity, and rupture velocity all tend to increase with event size. Rupture-initiation locations are usually not close to the maximum-slip locations. © 1994 Birkhäuser Verlag.
NASA Astrophysics Data System (ADS)
Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram
2010-05-01
Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as BLOBs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these BLOBs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin, hash, or range partitioning methods. Each has different characteristics in terms of spatial locality of data and the resultant degree of declustering of computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of different approaches to deal effectively with such hotspots, along with alternative strategies for mitigating them.
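The three partitioning strategies named above differ mainly in locality; minimal stand-ins (illustrative only, not the portals' code) make the trade-off concrete:

```python
import bisect
import hashlib

def round_robin(i, n_nodes):
    """Even spread by arrival order; no spatial or key locality."""
    return i % n_nodes

def hash_partition(key, n_nodes):
    """Even spread even for skewed keys; destroys range locality."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_nodes

def range_partition(key, bounds):
    """Keeps spatial/range locality, so subset queries touch few nodes,
    but popular ranges concentrate load -- the 'hotspot' problem above."""
    return bisect.bisect_right(bounds, key)

print(range_partition(37.5, [10, 20, 30, 40]))   # -> node 3
```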
7 CFR 250.14 - Warehousing, distribution and storage of donated foods.
Code of Federal Regulations, 2013 CFR
2013-01-01
.... (iv) All initial data regarding the cost of the current warehousing and distribution system and the... 7 Agriculture 4 2013-01-01 2013-01-01 false Warehousing, distribution and storage of donated foods... General Operating Provisions § 250.14 Warehousing, distribution and storage of donated foods. (a...
7 CFR 250.14 - Warehousing, distribution and storage of donated foods.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... (iv) All initial data regarding the cost of the current warehousing and distribution system and the... 7 Agriculture 4 2014-01-01 2014-01-01 false Warehousing, distribution and storage of donated foods... General Operating Provisions § 250.14 Warehousing, distribution and storage of donated foods. (a...
7 CFR 250.14 - Warehousing, distribution and storage of donated foods.
Code of Federal Regulations, 2012 CFR
2012-01-01
.... (iv) All initial data regarding the cost of the current warehousing and distribution system and the... 7 Agriculture 4 2012-01-01 2012-01-01 false Warehousing, distribution and storage of donated foods... General Operating Provisions § 250.14 Warehousing, distribution and storage of donated foods. (a...
NASA Astrophysics Data System (ADS)
Yue, L.; Guan, Z.; He, C.; Luo, D.; Saif, U.
2017-06-01
In recent years, competitive pressure has shifted manufacturing companies from mass production to mass customization, requiring a large variety of products. Producing in a customized, mixed-flow mode to meet customized demand on time is a great challenge for companies today. With a large product variety, the storage system that delivers materials to the production lines strongly influences timely production, as shown in the current research by a simulation study of an inefficient storage system at a real company. The research therefore proposes a slotting optimization model, coupled with the mixed-model assembly sequence of the final flow lines, to optimize the whole automated storage and retrieval system (AS/RS) and distribution system in the case company. The model simultaneously minimizes the vertical height of the centre of gravity of the AS/RS and the total time spent retrieving materials from the AS/RS. A genetic algorithm is adopted to solve the proposed problem, and computational results show significant improvement in the stability and efficiency of the AS/RS compared with the existing method used in the case company.
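As a rough illustration of how a genetic algorithm can search a slotting assignment, here is a minimal sketch. The rack geometry, item data, and the scalarization of the two objectives (retrieval time and centre-of-gravity height) are invented for the example; the paper's actual model and operators are not reproduced.

```python
# Minimal GA sketch for AS/RS slotting: assign items to rack slots so as to
# jointly minimize retrieval travel and centre-of-gravity height.
import random

random.seed(1)
ROWS, COLS = 5, 10                                   # rack: 5 levels x 10 bays
N = ROWS * COLS
weight = [random.uniform(1, 50) for _ in range(N)]   # item masses (kg), assumed
demand = [random.random() for _ in range(N)]         # picking frequency, assumed

def cost(perm):
    """perm[slot] = item stored in that slot."""
    cg = sum(weight[perm[s]] * (s // COLS) for s in range(N)) / sum(weight)
    travel = sum(demand[perm[s]] * (s // COLS + s % COLS) for s in range(N))
    return travel + 2.0 * cg                         # assumed weighting of objectives

def mutate(perm):
    a, b = random.sample(range(N), 2)                # swap two slots
    child = perm[:]
    child[a], child[b] = child[b], child[a]
    return child

pop = [random.sample(range(N), N) for _ in range(40)]
for _ in range(500):                                 # truncation selection + mutation
    pop.sort(key=cost)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
print("best cost:", round(cost(min(pop, key=cost)), 2))
```

A permutation encoding with swap mutation keeps every candidate a valid one-item-per-slot assignment, which is the usual design choice for slotting problems.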
Cardiovascular imaging environment: will the future be cloud-based?
Kawel-Boehm, Nadine; Bluemke, David A
2017-07-01
In cardiovascular CT and MR imaging, large datasets have to be stored, post-processed, analyzed and distributed. Besides basic assessment of volume and function in cardiac magnetic resonance imaging, for example, more sophisticated quantitative analysis is requested, requiring specific software. Several institutions cannot afford the various types of software or provide the expertise to perform sophisticated analysis. Areas covered: Various cloud services exist related to data storage and analysis, specifically for cardiovascular CT and MR imaging. Instead of on-site data storage, cloud providers offer flexible storage services on a pay-per-use basis. To avoid the purchase and maintenance of specialized software for cardiovascular image analysis, e.g. to assess myocardial iron overload, MR 4D flow and fractional flow reserve, evaluation can be performed with cloud-based software by the consumer, or the complete analysis is performed by the cloud provider. However, challenges to widespread implementation of cloud services include regulatory issues regarding patient privacy and data security. Expert commentary: If patient privacy and data security are guaranteed, cloud imaging is a valuable option to cope with the storage of large image datasets and to offer sophisticated cardiovascular image analysis for institutions of all sizes.
A Columnar Storage Strategy with Spatiotemporal Index for Big Climate Data
NASA Astrophysics Data System (ADS)
Hu, F.; Bowen, M. K.; Li, Z.; Schnase, J. L.; Duffy, D.; Lee, T. J.; Yang, C. P.
2015-12-01
Large collections of observational, reanalysis, and climate model output data may grow to as large as 100 PB in the coming years, placing climate data firmly in the Big Data domain, and various distributed computing frameworks have been utilized to address the challenges posed by big climate data analysis. However, due to the binary data formats (NetCDF, HDF) with high spatial and temporal dimensionality, the computing frameworks in the Apache Hadoop ecosystem are not originally suited for big climate data. In order to make the computing frameworks in the Hadoop ecosystem directly support big climate data, we propose a columnar storage format with a spatiotemporal index to store climate data, which will support any project in the Apache Hadoop ecosystem (e.g. MapReduce, Spark, Hive, Impala). With this approach, the climate data are transferred into the binary Parquet data format, a columnar storage format, and a spatial and temporal index is built and appended to the end of the Parquet files to enable real-time data query. Such climate data in Parquet format then become available to any computing framework in the Hadoop ecosystem. The proposed approach is evaluated using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. Experimental results show that this approach can efficiently bridge the gap between big climate data and the distributed computing frameworks, and that the spatiotemporal index significantly accelerates data querying and processing.
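A small sketch of the storage side of this idea, using pyarrow. Parquet's built-in per-row-group min/max statistics stand in for the paper's custom index appended to the file, so spatiotemporal filters can skip row groups whose ranges cannot match; the schema, variable names, and data are fabricated.

```python
# Write climate-like records to Parquet, then read back a spatiotemporal subset.
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

n = 10_000
rng = np.random.default_rng(0)
table = pa.table({
    "time": rng.integers(0, 24, n),          # hour of day, assumed granularity
    "lat": rng.uniform(-90, 90, n),
    "lon": rng.uniform(-180, 180, n),
    "t2m": rng.normal(288, 10, n),           # 2-m temperature (K), synthetic
})
# Small row groups give the reader more min/max statistics to prune against.
pq.write_table(table, "climate.parquet", row_group_size=1000)

# Only row groups whose lat/time ranges overlap the filter are decoded.
subset = pq.read_table(
    "climate.parquet",
    filters=[("lat", ">=", 30), ("lat", "<=", 60), ("time", "==", 12)],
)
print(subset.num_rows)
```

Sorting the data by the indexed dimensions before writing would make the row-group statistics far more selective, which is essentially what a dedicated spatiotemporal index buys.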
Proposal for massively parallel data storage system
NASA Technical Reports Server (NTRS)
Mansuripur, M.
1992-01-01
An architecture for integrating large numbers of data storage units (drives) to form a distributed mass storage system is proposed. The network of interconnected units consists of nodes and links. At each node there resides a controller board, a data storage unit and, possibly, a local/remote user-terminal. The links (twisted-pair wires, coax cables, or fiber-optic channels) provide the communications backbone of the network. There is no central controller for the system as a whole; all decisions regarding allocation of resources, routing of messages and data-blocks, creation and distribution of redundant data-blocks throughout the system (for protection against possible failures), frequency of backup operations, etc., are made locally at individual nodes. The system can handle as many user-terminals as there are nodes in the network. Various users compete for resources by sending their requests to the local controller-board and receiving allocations of time and storage space. In principle, each user can have access to the entire system, and all drives can be running in parallel to service the requests for one or more users. The system is expandable up to a maximum number of nodes, determined by the number of routing-buffers built into the controller boards. Additional drives, controller-boards, user-terminals, and links can be simply plugged into an existing system in order to expand its capacity.
Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)
NASA Astrophysics Data System (ADS)
Wilson, A.; Lindholm, D.; Baltzer, T.
2005-12-01
The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP-enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term on the order of hours to perhaps a week or two. In order to save interesting data in longer term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence is generally based on execution of a sequence of low-level, operating-system-specific commands with significant user involvement. The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying. In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back-end storage media and interchangeable software components. The TDR interface will provide high-level abstractions for long term storage; controlled, fast, and reliable access; and data movement capabilities via a variety of technologies such as OPeNDAP and GridFTP. The modular structure will allow substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as the Replica Location Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR. References: IDD: http://www.unidata.ucar.edu/content/software/idd/index.html THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html LEAD: http://lead.ou.edu/ RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf
NASA Astrophysics Data System (ADS)
Pyne, Moinak
This thesis aspires to model and control the flow of power in a DC microgrid. Specifically, the energy sources are a photovoltaic system and the utility grid, with a lead-acid battery based energy storage system and twenty PEV charging stations as the loads. Theoretical principles of large-scale state-space modeling are applied to model the considerable number of power electronic converters needed for controlling voltage and current thresholds. The energy storage system is modeled using neural networks to provide a stable and uncomplicated representation of the lead-acid battery. Power flow control is structured as a hierarchical problem with multiple interactions between individual components of the microgrid. The implementation uses fuzzy logic, scheduling the maximum use of available solar energy, compensating demand or excess power with the energy storage system, and minimizing utility grid use, while providing multiple speeds of charging for the PEVs.
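To make the hierarchy concrete, below is a toy rule-based dispatch in the spirit of the fuzzy controller described: PV is used first, the battery absorbs or supplies the mismatch according to its state of charge, and the grid covers the remainder. The membership functions, thresholds, and sign conventions are all invented for illustration and are not taken from the thesis.

```python
# Toy fuzzy-flavored dispatch. Convention: battery > 0 charges, battery < 0
# discharges; grid > 0 exports, grid < 0 imports.
def mu_low(soc):   # membership of "SOC is low" over 0.2..0.4 (assumed)
    return max(0.0, min(1.0, (0.4 - soc) / 0.2))

def mu_high(soc):  # membership of "SOC is high" over 0.6..0.8 (assumed)
    return max(0.0, min(1.0, (soc - 0.6) / 0.2))

def dispatch(pv_kw, load_kw, soc):
    surplus = pv_kw - load_kw
    if surplus >= 0:                       # excess solar: charge unless nearly full
        batt = surplus * (1.0 - mu_high(soc))
        return {"battery": batt, "grid": surplus - batt}
    deficit = -surplus                     # shortfall: discharge unless nearly empty
    batt = -deficit * (1.0 - mu_low(soc))
    return {"battery": batt, "grid": -(deficit + batt)}

print(dispatch(pv_kw=80.0, load_kw=50.0, soc=0.7))  # part charge, part export
print(dispatch(pv_kw=10.0, load_kw=50.0, soc=0.3))  # part battery, part grid import
```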
Coherent Spin Control at the Quantum Level in an Ensemble-Based Optical Memory.
Jobez, Pierre; Laplane, Cyril; Timoney, Nuala; Gisin, Nicolas; Ferrier, Alban; Goldner, Philippe; Afzelius, Mikael
2015-06-12
Long-lived quantum memories are essential components of a long-standing goal of remote distribution of entanglement in quantum networks. These can be realized by storing the quantum states of light as single-spin excitations in atomic ensembles. However, spin states are often subjected to different dephasing processes that limit the storage time, which in principle could be overcome using spin-echo techniques. Theoretical studies suggest this to be challenging due to unavoidable spontaneous emission noise in ensemble-based quantum memories. Here, we demonstrate spin-echo manipulation of a mean spin excitation of 1 in a large solid-state ensemble, generated through storage of a weak optical pulse. After a storage time of about 1 ms we optically read out the spin excitation with a high signal-to-noise ratio. Our results pave the way for long-duration optical quantum storage using spin-echo techniques for any ensemble-based memory.
Photographic memory: The storage and retrieval of data
NASA Technical Reports Server (NTRS)
Horton, J.
1984-01-01
The concept of density encoding digital data in a mass-storage computer peripheral is proposed. This concept requires that digital data be encoded as distinguishable density levels (DDLs) of the film to be used as the storage medium. These DDLs are then recorded on the film in relatively large pixels. Retrieval of the data would be accomplished by scanning the photographic record using a relatively small aperture. Multiplexing of the pixels is used to store data of a range greater than the number of DDLs supportable by the film in question. Although a cartographic application is used as an example for the photographic storage of data, any digital data can be stored in a like manner. When the data is inherently spatially distributed, the aptness of the proposed scheme is even more evident. In such a case, human-readability is an advantage which can be added to those mentioned earlier: speed of acquisition, ease of implementation, and cost effectiveness.
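The multiplexing idea is just positional notation: if the film supports only B distinguishable density levels, a value with range B**k can still be stored by spreading its base-B digits across k pixels. A tiny sketch, with B and k chosen arbitrarily for illustration:

```python
B, K = 8, 3                      # assumed: 8 density levels, 3 pixels per sample

def encode(value):
    """Split a value in [0, B**K) into K base-B digits, one per pixel."""
    assert 0 <= value < B ** K
    return [(value // B ** i) % B for i in range(K)]

def decode(pixels):
    """Recombine the K density levels read back from the film."""
    return sum(d * B ** i for i, d in enumerate(pixels))

v = 300                          # needs 9 bits, more than one 8-level pixel holds
assert decode(encode(v)) == v
print(encode(v))                 # [4, 5, 4] -> 4 + 5*8 + 4*64 = 300
```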
Study of Solid State Drives performance in PROOF distributed analysis system
NASA Astrophysics Data System (ADS)
Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.
2010-04-01
Solid State Drives (SSD) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDD), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility - PROOF is a distributed analysis system which makes it possible to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.
Shiwa, Yuh; Hachiya, Tsuyoshi; Furukawa, Ryohei; Ohmomo, Hideki; Ono, Kanako; Kudo, Hisaaki; Hata, Jun; Hozawa, Atsushi; Iwasaki, Motoki; Matsuda, Koichi; Minegishi, Naoko; Satoh, Mamoru; Tanno, Kozo; Yamaji, Taiki; Wakai, Kenji; Hitomi, Jiro; Kiyohara, Yutaka; Kubo, Michiaki; Tanaka, Hideo; Tsugane, Shoichiro; Yamamoto, Masayuki; Sobue, Kenji; Shimizu, Atsushi
2016-01-01
Differences in DNA collection protocols may be a potential confounder in epigenome-wide association studies (EWAS) using a large number of blood specimens from multiple biobanks and/or cohorts. Here we show that pre-analytical procedures involved in DNA collection can induce systematic bias in the DNA methylation profiles of blood cells, and that this bias can be adjusted by cell-type composition variables. In Experiment 1, whole blood from 16 volunteers was collected to examine the effect of a 24 h storage period at 4°C on DNA methylation profiles as measured using the Infinium HumanMethylation450 BeadChip array. Our statistical analysis showed that the P-value distribution of more than 450,000 CpG sites was similar to the theoretical distribution (in a quantile-quantile plot, λ = 1.03) when comparing two control replicates, but deviated remarkably from the theoretical distribution (λ = 1.50) when comparing control and storage conditions. We then considered cell-type composition as a possible cause of the observed bias in DNA methylation profiles and found that the bias associated with the cold storage condition was largely decreased (λadjusted = 1.14) by taking into account a cell-type composition variable. We then compared four sample collection protocols used in large-scale Japanese biobanks or cohorts, as well as two control replicates. Systematic biases in DNA methylation profiles were observed between control and three of the four protocols without adjustment for cell-type composition (λ = 1.12–1.45), and no remarkable biases were seen after adjusting for cell-type composition in all four protocols (λadjusted = 1.00–1.17). These results have important implications for comparing DNA methylation profiles between blood specimens from different sources and may lead to the discovery of disease-associated DNA methylation markers and the development of DNA methylation profile-based predictive risk models. PMID:26799745
NASA Astrophysics Data System (ADS)
Atubga, David; Wu, Huijuan; Lu, Lidong; Sun, Xiaoyan
2017-02-01
Typical fully distributed optical fiber sensors (DOFS) spanning dozens of kilometers are equivalent to tens of thousands of point sensors along the whole monitoring line, which means tens of thousands of data points are generated for each pulse-launching period. In all-day nonstop monitoring, large volumes of data are therefore created, triggering demand for large storage space and high-speed data transmission. In addition, when the monitoring length and the number of channels increase, the data volume grows accordingly. Mitigating this data accumulation, the required storage capacity, and the data transmission load is therefore the aim of this paper. To demonstrate our idea, we carried out a comparative study of two lossless methods, Huffman and Lempel-Ziv-Welch (LZW), and a lossy data compression algorithm, the fast wavelet transform (FWT), on three distinctive types of DOFS sensing data: Φ-OTDR, P-OTDR, and B-OTDA. Our results demonstrate that FWT yielded the best compression ratio with good computation time for all three DOFS data types, despite small errors in signal reconstruction. These outcomes indicate the promising potential of FWT, making it suitable, reliable, and convenient for real-time compression of DOFS data. Finally, we observed that differences in DOFS data structure influence both the compression ratio and the computational cost.
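The comparison can be prototyped in a few lines. In the sketch below, zlib (LZ77 plus Huffman coding) stands in for the lossless codecs and a thresholded discrete wavelet transform from PyWavelets stands in for the FWT stage; the synthetic trace, wavelet choice, and threshold are assumptions, not the paper's settings.

```python
# Compare lossless compression of a raw DOFS-like trace against
# wavelet-thresholded (lossy) compression of the same trace.
import zlib
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
trace = np.sin(np.linspace(0, 60, 50_000)) + 0.05 * rng.standard_normal(50_000)
raw = trace.astype(np.float32).tobytes()

lossless = zlib.compress(raw, 9)

coeffs = pywt.wavedec(trace, "db4", level=6)
coeffs = [np.where(np.abs(c) > 0.05, c, 0.0) for c in coeffs]  # drop small details
lossy = zlib.compress(np.concatenate(coeffs).astype(np.float32).tobytes(), 9)

recon = pywt.waverec(coeffs, "db4")[: trace.size]
print("lossless ratio:", len(raw) / len(lossless))
print("wavelet ratio :", len(raw) / len(lossy),
      "max reconstruction error:", float(np.max(np.abs(recon - trace))))
```

Zeroed wavelet coefficients compress extremely well, which is why the lossy path typically wins on ratio at the cost of a bounded reconstruction error.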
[Project to enhance bone bank tissue storage and distribution procedures].
Huang, Jui-Chen; Wu, Chiung-Lan; Chen, Chun-Chuan; Chen, Shu-Hua
2011-10-01
Organ and tissue transplantation are now commonly performed procedures. Improper organ bank handling procedures may increase infection risks. Execution accuracy in terms of tissue storage and distribution at our bone bank was 80%. We thus proposed an improvement project to enhance procedures in order to fulfill the intent of donors and ensure recipient safety. This project was designed to raise nurse professionalism and ensure patient safety through enhanced tissue storage and distribution procedures. Education programs developed for this project focus on teaching standard operating procedures for bone and ligament storage and distribution, bone bank facility maintenance, troubleshooting and solutions, and periodic inspection systems. Cognition of proper storage and distribution procedures rose from 81% to 100%; execution accuracy also rose from 80% to 100%. The project successfully conveyed concepts essential to the correct execution of organ storage and distribution procedures and proper organ bank facility management. Achieving and maintaining procedural and management standards is crucial to continued organ donations and recipient safety.
NASA Astrophysics Data System (ADS)
Lydersen, Ida; Sopher, Daniel; Juhlin, Christopher
2015-04-01
Geological storage of CO2 is one of the available options to reduce CO2 emissions from large point sources. Previous work in the Baltic Sea Basin has inferred a large storage potential in several stratigraphic units. The most promising of these is the Faludden sandstone, which exhibits favorable reservoir properties and forms a regional stratigraphic trap. A potential location for a pilot CO2 injection site to explore the suitability of the Faludden reservoir is onshore Gotland, Sweden. In this study onshore and offshore data have been digitized and interpreted, along with well data, to provide a detailed characterization of the Faludden reservoir below parts of Gotland. Maps and regional seismic profiles describing the extent and top structure of the Faludden sandstone are presented. The study area covers large parts of the island of Gotland and extends about 50-70 km offshore. The seismic data presented are part of a larger dataset acquired by Oljeprospektering AB (OPAB) between 1970 and 1990. The dataset is to date largely unpublished; therefore, re-processing and interpretation of these data provide improved insight into the subsurface of the study area. Two longer seismic profiles crossing Gotland ENE-WSW have been interpreted to give large-scale, regional control of the Faludden sandstone. A relatively tight grid of land seismic lines following the extent of the Faludden sandstone along the eastern coast to the southernmost point has been interpreted to better understand the actual distribution and geometry of the Faludden sandstone beneath Gotland. The maps from this study help to identify the most suitable area for a potential test injection site for CO2 storage, and to further the geological understanding of the area in general.
Seismic Response Analysis of an Unanchored Steel Tank under Horizontal Excitation
NASA Astrophysics Data System (ADS)
Rulin, Zhang; Xudong, Cheng; Youhai, Guan
2017-06-01
The seismic performance of liquid storage tanks affects the safety of people's lives and property. A 3-D finite element method (FEM) model of a storage tank is established, which considers the liquid-solid coupling effect. The displacement and stress distribution along the tank wall are then studied under the El Centro earthquake. Results show that large-amplitude, long-period sloshing appears on the liquid surface. Elephant-foot deformation occurs near the tank bottom, and the maximum hoop stress and axial stress appear at the elephant-foot position. The maximum axial compressive stress is very close to the allowable critical stress calculated by the design code, so local buckling failure may occur. The research can provide a reference for the seismic design of storage tanks.
Microcopying wildland maps for distribution and scanner digitizing
Elliot L Amidon; Joyce E. Dye
1976-01-01
Maps for wildland resource inventory and management purposes typically show vegetation types, soils, and other areal information. For field work, maps must be large-scale. For safekeeping and compact storage, however, they can be reduced onto film, ready to be enlarged on demand by office viewers. By meeting certain simple requirements, film images are potential input...
NASA Astrophysics Data System (ADS)
Schuchardt, Anne; Pöppl, Ronald; Morche, David
2016-04-01
Large wood (LW) provides various ecological and morphological functions. Recent research has focused on habitat diversity and abundance, effects on channel planforms, pool formation, flow regimes, and increased storage of organic matter as well as of fine sediment. While LW and sediment transport rates are the focus of numerous research questions, the influence of large channel-blocking barriers (e.g. LW) on sediment trapping and the decoupling of transport pathways is less studied. This project addresses that gap and deals with the modification of sediment connectivity by LW. To investigate the influence of large wood on sediment transport processes and sediment connectivity, the spatial distribution and characteristics of LW (>1 m in length and >10 cm in diameter) in channels are examined by field mapping and dGPS measurements. Channel hydraulic parameters are determined by field measurements of channel long profiles and cross sections. To quantify the direct effects of LW on discharge and bed load transport, the flow velocity and bed load up- and downstream of LW are measured using an Ott-Nautilus and a portable Helley-Smith bed load sampler during different water stages. Sediment storage behind LWD accumulations will be monitored with dGPS. While accumulation of sediment indicates in-channel sediment storage and thus disconnection from downstream bed load transport, erosion of sediment evidences downstream sediment connectivity. First results will be presented from two study areas in mountain ranges in Germany (Wetterstein Mountain Range) and Austria (Bohemian Massif).
Wei, Yawei; Li, Maihe; Chen, Hua; Lewis, Bernard J.; Yu, Dapao; Zhou, Li; Zhou, Wangming; Fang, Xiangmin; Zhao, Wei; Dai, Limin
2013-01-01
The northeastern forest region of China is an important component of total temperate and boreal forests in the northern hemisphere. But how carbon (C) pool size and distribution varies among tree, understory, forest floor and soil components, and across stand ages remains unclear. To address this knowledge gap, we selected three major temperate and two major boreal forest types in northeastern (NE) China. Within both forest zones, we focused on four stand age classes (young, mid-aged, mature and over-mature). Results showed that total C storage was greater in temperate than in boreal forests, and greater in older than in younger stands. Tree biomass C was the main C component, and its contribution to the total forest C storage increased with increasing stand age. It ranged from 27.7% in young to 62.8% in over-mature stands in boreal forests and from 26.5% in young to 72.8% in over-mature stands in temperate forests. Results from both forest zones thus confirm the large biomass C storage capacity of old-growth forests. Tree biomass C was influenced by forest zone, stand age, and forest type. Soil C contribution to total forest C storage ranged from 62.5% in young to 30.1% in over-mature stands in boreal and from 70.1% in young to 26.0% in over-mature in temperate forests. Thus soil C storage is a major C pool in forests of NE China. On the other hand, understory and forest floor C jointly contained less than 13% and <5%, in boreal and temperate forests respectively, and thus play a minor role in total forest C storage in NE China. PMID:23977252
NASA Astrophysics Data System (ADS)
Shigenobu, Ryuto; Noorzad, Ahmad Samim; Muarapaz, Cirio; Yona, Atsushi; Senjyu, Tomonobu
2016-04-01
Distributed generators (DG) and renewable energy sources have been attracting special attention in distribution systems all over the world. Renewable energies, such as photovoltaic (PV) and wind turbine generators, are considered green energy. However, a large amount of DG penetration causes voltage deviation beyond the statutory range and reverse power flow at interconnection points in the distribution system. If excessive voltage deviation occurs, consumers' electric devices might break, and reverse power flow also has a negative impact on the transmission system. Thus, mass interconnection of DGs has adverse effects on both the utility and the customer. Reactive power control using the inverters attached to DGs has been proposed in previous research to prevent voltage deviations, and battery energy storage systems (BESS) have been proposed to resolve reverse power flow. In addition, managing DGs and BESSs makes it possible to supply high-quality power. Therefore, this paper proposes a method to maintain voltage, active power, and reactive power flow at interconnection points through cooperative control of PVs, house BESSs, EVs, large BESSs, and existing voltage control devices. The proposed method not only protects the distribution system but also reduces distribution losses and manages the control devices effectively. The control objectives are formulated as an optimization problem that is solved using the Particle Swarm Optimization (PSO) algorithm. A modified scheduling method is proposed in order to improve the convergence probability of the scheduling scheme. The effectiveness of the proposed method is verified by case study results and numerical simulations in MATLAB®.
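A bare-bones particle swarm optimization in the spirit of the paper's solver, applied to a reactive-power schedule. The one-bus "voltage sensitivity" objective below is a stand-in for a real distribution power flow, and all constants are invented.

```python
# Minimal PSO over 24 hourly reactive-power set-points.
import numpy as np

rng = np.random.default_rng(0)
T = 24
pv = np.clip(np.sin(np.linspace(0, np.pi, T)), 0, None)  # assumed PV profile

def objective(q):
    """Toy model: PV raises voltage, reactive absorption lowers it."""
    volt = 1.0 + 0.05 * pv - 0.04 * q
    return np.sum((volt - 1.0) ** 2) + 1e-3 * np.sum(q ** 2)

n, w, c1, c2 = 30, 0.7, 1.5, 1.5          # swarm size and standard PSO weights
x = rng.uniform(-1, 1, (n, T))            # positions: per-hour set-points (p.u.)
vel = np.zeros((n, T))
pbest = x.copy()
pcost = np.array([objective(p) for p in x])
g = pbest[np.argmin(pcost)]               # global best

for _ in range(200):
    r1, r2 = rng.random((n, T)), rng.random((n, T))
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + vel, -1, 1)           # inverter reactive-power limits (assumed)
    cost = np.array([objective(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    g = pbest[np.argmin(pcost)]
print("best objective:", objective(g))
```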
MIXING IN DISTRIBUTION SYSTEM STORAGE TANKS: ITS EFFECT ON WATER QUALITY
Nearly all distribution systems in the US include storage tanks and reservoirs. They are the most visible components of a water distribution system but are generally the least understood in terms of their impact on water quality. Long residence times in storage tanks can have nega...
Monetizing Leakage Risk of Geologic CO2 Storage using Wellbore Permeability Frequency Distributions
NASA Astrophysics Data System (ADS)
Bielicki, Jeffrey; Fitts, Jeffrey; Peters, Catherine; Wilson, Elizabeth
2013-04-01
Carbon dioxide (CO2) may be captured from large point sources (e.g., coal-fired power plants, oil refineries, cement manufacturers) and injected into deep sedimentary basins for storage, or sequestration, from the atmosphere. This technology—CO2 Capture and Storage (CCS)—may be a significant component of the portfolio of technologies deployed to mitigate climate change. But injected CO2, or the brine it displaces, may leak from the storage reservoir through a variety of natural and manmade pathways, including existing wells and wellbores. Such leakage will incur costs to a variety of stakeholders, which may affect the desirability of potential CO2 injection locations as well as the feasibility of the CCS approach writ large. Consequently, analyzing and monetizing leakage risk is necessary to develop CCS as a viable technological option to mitigate climate change. Risk is the product of the probability of an outcome and the impact of that outcome. Assessment of leakage risk from geologic CO2 storage reservoirs requires an analysis of the probabilities and magnitudes of leakage, identification of the outcomes that may result from leakage, and an assessment of the expected economic costs of those outcomes. One critical uncertainty regarding the rate and magnitude of leakage is determined by the leakiness of the well leakage pathway. This leakiness is characterized by a leakage permeability for the pathway, and recent work has sought to determine frequency distributions for the leakage permeabilities of wells and wellbores. We conduct a probabilistic analysis of leakage and monetized leakage risk for CO2 injection locations in the Michigan Sedimentary Basin (USA) using empirically derived frequency distributions for wellbore leakage permeabilities. To conduct this probabilistic risk analysis, we apply the RISCS (Risk Interference of Subsurface CO2 Storage) model (Bielicki et al, 2013a, 2012b) to injection into the Mt. Simon Sandstone. RISCS monetizes leakage risk by combining 3D geospatial data with fluid-flow simulations from the ELSA (Estimating Leakage Semi-Analytically) model (e.g., Celia and Nordbotten, 2006) and the Leakage Impact Valuation (LIV) method (Pollak et al, 2013; Bielicki et al, 2013). We extend RISCS to iterate ELSA semi-analytic modeling simulations by drawing values from the frequency distribution of leakage permeabilities. The iterations assign these values to existing wells in the basin, and the probabilistic risk analysis thus incorporates the uncertainty of the extent of leakage. We show that monetized leakage risk can vary significantly over tens of kilometers, and we identify "hot spots" favorable to CO2 injection based on the monetized leakage risk for each potential location in the basin.
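The probability-times-impact calculation lends itself to a Monte Carlo sketch: draw wellbore permeabilities from an assumed frequency distribution, translate them into leaked mass with a toy flow law, and monetize the result. Every number below (distribution parameters, scaling factor, price) is a placeholder, not a value from the RISCS/ELSA/LIV studies.

```python
# Monte Carlo monetization of leakage risk across existing wells.
import numpy as np

rng = np.random.default_rng(42)
n_wells, n_draws = 200, 10_000
# Assumed lognormal frequency distribution of wellbore permeability.
log10_k = rng.normal(-15.0, 2.0, (n_draws, n_wells))     # log10 permeability (m^2)

leak_rate = 1e12 * 10.0 ** log10_k        # t CO2/yr per well (toy linear flow law)
leaked = leak_rate.sum(axis=1) * 30.0     # total mass over a 30-yr injection period
cost_usd = leaked * 50.0                  # assumed $/t cost of a leakage outcome

print("expected monetized risk: $%.3g" % cost_usd.mean())
print("95th percentile:         $%.3g" % np.percentile(cost_usd, 95))
```

Running such a sampler per candidate injection location is one simple way to reproduce the kind of spatial "hot spot" map the abstract describes.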
The Design of Distributed Micro Grid Energy Storage System
NASA Astrophysics Data System (ADS)
Liang, Ya-feng; Wang, Yan-ping
2018-03-01
A distributed micro-grid running in island mode relies on its energy storage system as the core component for maintaining stable operation. The existing fixed-connection energy storage structure is difficult to adjust during operation and can easily cause volatility in the micro-grid. In this paper, an array-type energy storage structure is proposed, and its structure and working principle are analyzed. Finally, a model of the array-type energy storage structure is established in MATLAB. The simulation results show that the array-type energy storage system has great flexibility: it can maximize the utilization of the energy storage system, guarantee the reliable operation of the distributed micro-grid, and achieve peak clipping and valley filling.
Fujiwara, M.; Waseda, A.; Nojima, R.; Moriai, S.; Ogata, W.; Sasaki, M.
2016-01-01
Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers and channels to link them. In such a system, a secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored in each server. To reconstruct them, the data owner should gather plural pieces. Shamir's (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k pieces of them must be gathered for reconstruction, furnishes information theoretic security; that is, even if attackers could collect fewer shares than the threshold k, they cannot get any information about the data, even with unlimited computing power. Behind this scenario, however, it is assumed that data transmission and authentication are perfectly secure, which is not trivial in practice. Here we propose a totally information theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km). PMID:27363566
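A minimal sketch of the (k, n)-threshold construction named above, over a prime field: the secret is the constant term of a random degree-(k-1) polynomial, shares are point evaluations, and any k shares recover the secret by Lagrange interpolation at zero. The modulus and parameters are illustrative.

```python
# Shamir (k, n)-threshold secret sharing over GF(P).
import random

P = 2**127 - 1          # Mersenne prime modulus; secrets must be < P

def split(secret, k, n):
    """Return n shares, any k of which reconstruct the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 suffice
assert reconstruct(shares[1:4]) == 123456789
```

With only k-1 shares, every candidate secret remains equally consistent with the observed points, which is the information-theoretic guarantee the abstract refers to.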
A practical large scale/high speed data distribution system using 8 mm libraries
NASA Technical Reports Server (NTRS)
Howard, Kevin
1993-01-01
Eight mm tape libraries are known primarily for their small size, large storage capacity, and low cost. However, many applications require an additional attribute which, heretofore, has been lacking -- high transfer rate. Transfer rate is particularly important in a large scale data distribution environment -- an environment in which 8 mm tape should play a very important role. Data distribution is a natural application for 8 mm for several reasons: most large laboratories have access to 8 mm tape drives, 8 mm tapes are upwardly compatible, 8 mm media are very inexpensive, 8 mm media are light weight (important for shipping purposes), and 8 mm media densely pack data (5 gigabytes now and 15 gigabytes on the horizon). If the transfer rate issue were resolved, 8 mm could offer a good solution to the data distribution problem. To that end Exabyte has analyzed four ways to increase its transfer rate: native drive transfer rate increases, data compression at the drive level, tape striping, and homogeneous drive utilization. Exabyte is actively pursuing native drive transfer rate increases and drive level data compression. However, for non-transmitted bulk data applications (which include data distribution) the other two methods (tape striping and homogeneous drive utilization) hold promise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitanidis, Peter
As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
2014-08-17
The original intent of this project was to build and operate an Advanced Network and Distributed Storage Laboratory (ANDSL) for Data Intensive Science that will prepare the Open Science Grid (OSG) community for a new generation of wide area communication capabilities operating at a 100Gb rate. Given the significant cut in our proposed budget we changed the scope of the ANDSL to focus on the software aspects of the laboratory – workload generators and monitoring tools – and on the offering of experimental data to the ANI project. The main contributions of our work are twofold: early end-user input and experimental data to the ANI project, and software tools for conducting large scale end-to-end data placement experiments.
Crystal gazing. Part 2: Implications of advances in digital data storage technology
NASA Technical Reports Server (NTRS)
Wells, D. C.
1984-01-01
During the next 5-10 years it is likely that the bit density available in digital mass storage systems (magnetic tapes, optical and magnetic disks) will be increased to such an extent that it will greatly exceed that of the conventional photographic emulsions like IIIaJ which are used in astronomy. These developments imply that it will soon be advantageous for astronomers to use microdensitometers to completely digitize all photographic plates soon after they are developed. Distribution of digital copies of sky surveys and the contents of plate vaults will probably become feasible within ten years. Copies of other astronomical archives (e.g., Space Telescope) could also be distributed with the same techniques. The implications for designers of future microdensitometers are: (1) there will be a continuing need for precision digitization of large-format photographic imagery, and (2) the need for real-time analysis of the output of microdensitometers will decrease.
NoSQL for Storage and Retrieval of Large LiDAR Data Collections
NASA Astrophysics Data System (ADS)
Boehm, J.; Liu, K.
2015-08-01
Developments in LiDAR technology over the past decades have made LiDAR a mature and widely accepted source of geospatial information, which in turn has led to an enormous growth in data volume. The central idea of file-centric storage of LiDAR point clouds is the observation that large collections of LiDAR data are typically delivered as large collections of files, rather than single files of terabyte size. This split of the dataset, commonly referred to as tiling, was usually done to accommodate a specific processing pipeline, so it makes sense to preserve it. A document-oriented NoSQL database can easily emulate this data partitioning by representing each tile (file) in a separate document. The document stores the metadata of the tile; the actual files are stored in a distributed file system emulated by the NoSQL database. We demonstrate the use of MongoDB, a highly scalable document-oriented NoSQL database, for storing large LiDAR files. MongoDB, like any NoSQL database, allows queries on the attributes of the document. As a specialty, MongoDB also allows spatial queries, so we can perform spatial queries on the bounding boxes of the LiDAR tiles. Inserting and retrieving files on a cloud-based database is compared to a native file system and cloud storage in terms of transfer speed.
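A plausible shape of this design with pymongo: each tile goes into GridFS (MongoDB's built-in distributed file store), while a companion metadata document carries the tile's bounding box under a 2dsphere index for spatial queries. The connection string, collection, and field names are illustrative, not taken from the paper, and a running mongod is assumed.

```python
# File-centric LiDAR storage: GridFS for tile payloads, an indexed metadata
# collection for spatial lookup.
import gridfs
from pymongo import MongoClient, GEOSPHERE

db = MongoClient("mongodb://localhost:27017")["lidar"]
fs = gridfs.GridFS(db)
db.tiles.create_index([("bbox", GEOSPHERE)])      # spatial index on bounding boxes

def ingest(tile_path, bbox_polygon):
    """Store one tile file and its GeoJSON bounding box."""
    with open(tile_path, "rb") as f:
        file_id = fs.put(f, filename=tile_path)
    db.tiles.insert_one({"file_id": file_id, "bbox": bbox_polygon})

def tiles_intersecting(polygon):
    """Yield GridFS handles for every tile whose bbox intersects the query area."""
    query = {"bbox": {"$geoIntersects": {"$geometry": polygon}}}
    for doc in db.tiles.find(query):
        yield fs.get(doc["file_id"])

aoi = {"type": "Polygon",
       "coordinates": [[[-0.2, 51.4], [-0.2, 51.6], [0.1, 51.6], [-0.2, 51.4]]]}
# for tile in tiles_intersecting(aoi): process(tile.read())
```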
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Kody M.; Kim, Jong Suk; Cole, Wesley J.
2016-10-01
District energy systems can produce low-cost utilities for large energy networks, but can also be a resource for the electric grid by their ability to ramp production or to store thermal energy by responding to real-time market signals. In this work, dynamic optimization exploits the flexibility of thermal energy storage by determining optimal times to store and extract excess energy. This concept is applied to a polygeneration distributed energy system with combined heat and power, district heating, district cooling, and chilled water thermal energy storage. The system is a university campus responsible for meeting the energy needs of tens of thousands of people. The objective for the dynamic optimization problem is to minimize cost over a 24-h period while meeting multiple loads in real time. The paper presents a novel algorithm to solve this dynamic optimization problem with energy storage by decomposing the problem into multiple static mixed-integer nonlinear programming (MINLP) problems. Another innovative feature of this work is the study of a large, complex energy network which includes the interrelations of a wide variety of energy technologies. Results indicate that a cost savings of 16.5% is realized when the system can participate in the wholesale electricity market.
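The store-when-cheap, discharge-at-peak logic can be illustrated with a linear program far simpler than the paper's MINLP decomposition. The sketch below schedules a generic storage unit against an hourly price using scipy; all prices, loads, and storage parameters are invented.

```python
# 24-h storage dispatch as an LP: minimize grid purchase cost subject to
# load balance, storage power limits, and state-of-charge bounds.
import numpy as np
from scipy.optimize import linprog

T = 24
price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))   # $/MWh, assumed
load = 50 + 10 * np.cos(np.linspace(0, 2 * np.pi, T))    # MWh per hour, assumed
cap, rate, soc0 = 40.0, 10.0, 20.0                       # capacity, power, initial SOC

# Decision vector x = [grid purchases (T) | storage discharge (T)].
c = np.concatenate([price, np.zeros(T)])                 # pay only for grid energy
A_eq = np.hstack([np.eye(T), np.eye(T)])                 # grid + discharge = load
b_eq = load
L = np.tril(np.ones((T, T)))                             # cumulative discharge
A_ub = np.vstack([np.hstack([np.zeros((T, T)), L]),      # SOC >= 0
                  np.hstack([np.zeros((T, T)), -L])])    # SOC <= cap
b_ub = np.concatenate([np.full(T, soc0), np.full(T, cap - soc0)])
bounds = [(0, None)] * T + [(-rate, rate)] * T           # discharge < 0 means charging

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("daily energy cost: $%.0f" % res.fun)
```

The real problem adds integer unit-commitment decisions and nonlinear equipment curves, which is what pushes the paper to MINLP and to its decomposition into static subproblems.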
Cost Benefit and Alternatives Analysis of Distribution Systems with Energy Storage Systems: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Tom; Nagarajan, Adarsh; Baggu, Murali
This paper explores monetized and non-monetized benefits from storage interconnected to the distribution system through use cases illustrating potential applications for energy storage in California's electric utility system. This work supports SDG&E in its efforts to quantify, summarize, and compare the cost and benefit streams related to the implementation and operation of energy storage on its distribution feeders. This effort develops a cost-benefit and alternatives analysis platform, integrated with QSTS feeder simulation capability, and analyzes use cases to explore the costs and benefits of implementing and operating energy storage for feeder support and market participation.
Storage, transmission and distribution of hydrogen
NASA Technical Reports Server (NTRS)
Kelley, J. H.; Hagler, R., Jr.
1979-01-01
Current practices and future requirements for the storage, transmission and distribution of hydrogen are reviewed in order to identify inadequacies to be corrected before hydrogen can achieve its full potential as a substitute for fossil fuels. Consideration is given to the storage of hydrogen in underground solution-mined salt caverns, portable high-pressure containers and dewars, pressure vessels and aquifers and as metal hydrides, hydrogen transmission in evacuated double-walled insulated containers and by pipeline, and distribution by truck and internal distribution networks. Areas for the improvement of these techniques are indicated, and these technological deficiencies, including materials development, low-cost storage and transmission methods, low-cost, long-life metal hydrides and novel methods for hydrogen storage, are presented as challenges for research and development.
Compressing DNA sequence databases with coil
White, W Timothy J; Hendy, Michael D
2008-01-01
Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Broderick, Robert; Mather, Barry
2016-05-01
This report analyzes distribution-integration challenges, solutions, and research needs in the context of distributed generation from PV (DGPV) deployment to date and the much higher levels of deployment expected with achievement of the U.S. Department of Energy's SunShot targets. Recent analyses have improved estimates of the DGPV hosting capacities of distribution systems. This report uses these results to statistically estimate the minimum DGPV hosting capacity for the contiguous United States using traditional inverters of approximately 170 GW without distribution system modifications. This hosting capacity roughly doubles if advanced inverters are used to manage local voltage, and additional minor, low-cost changes could further increase these levels substantially. Key to achieving these deployment levels at minimum cost is siting DGPV based on local hosting capacities, suggesting opportunities for regulatory, incentive, and interconnection innovation. Already, pre-computed hosting capacity is beginning to expedite DGPV interconnection requests and installations in select regions; however, realizing SunShot-scale deployment will require further improvements to DGPV interconnection processes, standards and codes, and compensation mechanisms so they embrace the contributions of DGPV to system-wide operations. SunShot-scale DGPV deployment will also require unprecedented coordination of the distribution and transmission systems. This includes harnessing DGPV's ability to relieve congestion and reduce system losses by generating closer to loads; minimizing system operating costs and reserve deployments through improved DGPV visibility; developing communication and control architectures that incorporate DGPV into system operations; providing frequency response, transient stability, and synthesized inertia with DGPV in the event of large-scale system disturbances; and potentially managing reactive power requirements due to large-scale deployment of advanced inverter functions. Finally, additional local and system-level value could be provided by integrating DGPV with energy storage and 'virtual storage,' which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Together, continued innovation across this rich distribution landscape can enable the very-high deployment levels envisioned by SunShot.
Fabrication of Graphene on Kevlar Supercapacitor Electrodes
2011-05-01
...fabricated with graphene to investigate its applicability for energy storage devices, as this carbon-based material has a large surface area and... (Figure captions recovered from the report's front matter: Figure 1, dip-and-dry technique applied to Kevlar-based electrodes; Figure 2, three-electrode system used for the CV measurements, with the Kevlar-based electrode as the working electrode.)
Common Accounting System for Monitoring the ATLAS Distributed Computing Resources
NASA Astrophysics Data System (ADS)
Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration
2014-06-01
This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing system during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.
Using DoD Maps to Examine the Influence of Large Wood on Channel Morphodynamics
NASA Astrophysics Data System (ADS)
MacKenzie, L. C.; Eaton, B. C.
2012-12-01
Since the advent of logging and slash burning, many streams in British Columbia have experienced changes to the amount of large wood added to or removed from these systems, which has, in turn, influenced the storage and movement of sediment within these channels. This set of flume experiments examines and quantifies the impacts of large wood on the reach-scale morphodynamics. Understanding the relation between the wood load and channel morphodynamics is important when assessing the quality of the aquatic habitat of a stream. The experiments were conducted using a fixed-bank, mobile bed Froude-scaled physical model of Fishtrap Creek, British Columbia, built in a shallow flume that is 1.5 m wide and 11 m long. The stream table was run without wood until it reached equilibrium at which point wood pieces of varying sizes were added to the channel. The bed morphology was surveyed using a laser profiling system at five-hour intervals. The laser profiles were then interpolated to create digital elevation models (DEM) from which DEM of difference (DoD) maps were produced. Analysis of the DoD maps focused on quantifying and locating differences in the distribution of sediment storage, erosion, and deposition between the runs as well as those induced by the addition of large wood into the stream channel. We then assessed the typical influence of individual pieces and of jams on pool frequency, size and distribution along the channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob
Hydropower plant (HPP) generation comprises a considerable portion of bulk electricity generation and is delivered with a low-carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include wind and solar. Increasing penetration levels of wind and solar lead to a lower inertia on the electric grid, which poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments of renewable energy resources on electric grids. If integrated with scalable, multi-time-step energy storage so that the total output can be controlled, multiple run-of-the-river (ROR) HPPs can be deployed. Although the size of a single energy storage system is much smaller than that of a typical reservoir, the ratings of storages and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. This paper proposes cohesively managing multiple sets of energy storage systems distributed in different locations. This paper also describes the challenges associated with ROR HPP system architecture and operation.
de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D
2004-03-01
Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.
Data grid: a distributed solution to PACS
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyan; Zhang, Jianguo
2004-04-01
In a hospital, various kinds of medical images acquired from different modalities are generally used and stored in different departments, and each modality usually has several attached workstations to display or process images. To improve diagnosis, radiologists or physicians often need to retrieve other kinds of images for reference. The traditional image storage solution is to build a large-scale PACS archive server. However, the disadvantages of purely centralized management of a PACS archive server are obvious: besides high costs, any failure of the PACS archive server would cripple the entire PACS operation. Here we present a new approach to developing a storage grid in PACS, which can provide more reliable image storage and more efficient query/retrieval for hospital-wide applications. In this paper, we also give a performance evaluation comparing three popular technologies: mirror, cluster and grid.
Large-Scale Cryogen Systems and Test Facilities
NASA Technical Reports Server (NTRS)
Johnson, R. G.; Sass, J. P.; Hatfield, W. H.
2007-01-01
NASA has completed initial construction and verification testing of the Integrated Systems Test Facility (ISTF) Cryogenic Testbed. The ISTF is located at Complex 20 at Cape Canaveral Air Force Station, Florida. The remote and secure location is ideally suited for the following functions: (1) development testing of advanced cryogenic component technologies, (2) development testing of concepts and processes for entire ground support systems designed for servicing large launch vehicles, and (3) commercial sector testing of cryogenic- and energy-related products and systems. The ISTF Cryogenic Testbed consists of modular fluid distribution piping and storage tanks for liquid oxygen/nitrogen (56,000 gal) and liquid hydrogen (66,000 gal). Storage tanks for liquid methane (41,000 gal) and Rocket Propellant 1 (37,000 gal) are also specified for the facility. A state-of-the-art blast proof test command and control center provides capability for remote operation, video surveillance, and data recording for all test areas.
Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm.
Yang, Mengzhao; Song, Wei; Mei, Haibin
2017-07-23
The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved in not only storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval of massive ocean RS images via a Cloud-based mean-shift algorithm. Distributed construction method via the pyramid model is proposed based on the maximum hierarchical layer algorithm and used to realize efficient storage structure of RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method can achieve better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear changes with an increase of RS images, which proves that image retrieval using our method is efficient.
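The retrieval pipeline above fuses mean-shift with the canopy algorithm via Hadoop MapReduce; the distributed plumbing is beyond an abstract, but the core iteration is compact. Below is a minimal single-node sketch of flat-kernel mean-shift on image feature vectors (NumPy only; the bandwidth, tolerance, and toy data are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, tol=1e-3, max_iter=100):
    """Minimal flat-kernel mean-shift: shift each candidate mode to the
    mean of the points within `bandwidth` until movement falls below `tol`."""
    modes = points.copy()
    for _ in range(max_iter):
        shifted = np.empty_like(modes)
        for i, p in enumerate(modes):
            dist = np.linalg.norm(points - p, axis=1)
            shifted[i] = points[dist <= bandwidth].mean(axis=0)
        moved = np.linalg.norm(shifted - modes)
        modes = shifted
        if moved < tol:
            break
    return modes

# Toy usage: two blobs of 2-D image feature vectors (e.g., color features).
feats = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers = mean_shift(feats, bandwidth=2.0)
```

Points that converge to the same mode form a cluster; at scale, the canopy pre-grouping the paper describes limits which points each MapReduce task must compare.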
DPM — efficient storage in diverse environments
NASA Astrophysics Data System (ADS)
Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio
2014-06-01
Recent developments, including low-power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as they need and optimizing their computing centers' energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server (the head node), largely reducing the head node's hard disk requirements. Since version 1.8.6, DPM is released in EPEL and Fedora, simplifying distribution and maintenance, and it supports the ARM architecture besides i386 and x86_64, allowing it to run on the smallest low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the possibility of scaling horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage, and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup in terms of both namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the lowest scale, using Raspberry Pis with single 700 MHz cores and 100 Mbps network connections, over conventional multi-core servers, to typical virtual machine instances in cloud settings. We evaluate combinations of different name server setups, for example load-balanced clusters, with different storage setups, from a classic local configuration to private and public clouds.
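The cost metrics in the DPM study weigh equipment, power, and data/storage fees. A hedged sketch of such a total-cost-of-ownership comparison might look like the following; all prices, wattages, and lifetimes are illustrative assumptions, not figures from the paper:

```python
def total_cost_of_ownership(capex, power_watts, years,
                            electricity_per_kwh=0.15, annual_fees=0.0):
    """Illustrative TCO: purchase cost plus lifetime energy cost plus
    recurring service/data fees (all parameter values are assumptions)."""
    energy_kwh = power_watts / 1000.0 * 24 * 365 * years
    return capex + energy_kwh * electricity_per_kwh + annual_fees * years

# Compare a Raspberry Pi-class node against a conventional server over 4 years.
pi = total_cost_of_ownership(capex=50, power_watts=5, years=4)
server = total_cost_of_ownership(capex=3000, power_watts=250, years=4)
print(pi, server)
```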
Archive Storage Media Alternatives.
ERIC Educational Resources Information Center
Ranade, Sanjay
1990-01-01
Reviews requirements for a data archive system and describes storage media alternatives that are currently available. Topics discussed include data storage; data distribution; hierarchical storage architecture, including inline storage, online storage, nearline storage, and offline storage; magnetic disks; optical disks; conventional magnetic…
Distributed Storage Healthcare — The Basis of a Planet-Wide Public Health Care Network
Kakouros, Nikolaos
2013-01-01
Background: As health providers move towards higher levels of information technology (IT) integration, they become increasingly dependent on the availability of the electronic health record (EHR). Current solutions of individually managed storage by each healthcare provider focus on efforts to ensure data security, availability and redundancy. Such models, however, scale poorly to a future of a planet-wide public health-care network (PWPHN). Our aim was to review the research literature on distributed storage systems and propose methods that may aid the implementation of a PWPHN. Methods: A systematic review was carried out of the research dealing with distributed storage systems and EHR. A literature search was conducted on five electronic databases: PubMed/MEDLINE, CINAHL, EMBASE, Web of Science (ISI) and Google Scholar, and then expanded to include non-authoritative sources. Results: The English National Health Service Spine represents the most established country-wide PHN but is limited in deployment and remains underused. Other established distributed EHR attempts identified in the literature are more limited in scope. We discuss the currently available distributed file storage solutions and propose a schema of how one of these technologies can be used to deploy distributed storage of EHR with benefits in terms of enhanced fault tolerance and global availability within the PWPHN. We conclude that a PWPHN distributed health care record storage system is technically feasible over current Internet infrastructure. Nonetheless, the socioeconomic viability of PWPHN implementations remains to be determined. PMID:23459171
GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments
NASA Astrophysics Data System (ADS)
Chen, Zhanlong; Wu, Xin-cai; Wu, Liang
2008-12-01
Computation Grids enable the coordinated sharing of large-scale distributed heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. Integrating multi-source, heterogeneous spatial information, managing distributed spatial resources, and sharing spatial data and Grid services cooperatively are the key problems to resolve in the development of Grid GIS. The spatial index is a key technology of Grid GIS and the spatial database, and its performance affects the overall performance of GIS in Grid environments. To improve the efficiency of parallel processing of massive spatial data in a distributed parallel computing grid environment, this paper presents GSHR-Tree, a new grid-slot hash parallel spatial index. Based on a hash table and dynamic spatial slots, it improves the structure of the classical parallel R-tree index and makes full use of the good qualities of the R-tree and of hash data structures, yielding a parallel spatial index that meets the needs of grid computing over massive spatial data in distributed networks. The algorithm splits space into multiple slots by repeated subdivision and maps these slots to sites in the distributed parallel system. Each site organizes the spatial objects in its slot into an R-tree. On top of this structure, the index data is distributed among multiple nodes in the grid network using a large-node R-tree method, and load imbalance during processing can be quickly corrected by a dynamic adjustment algorithm. The structure accounts for the distribution, replication, and transfer of the spatial index in the grid environment; its design ensures load balance in parallel computation, making it well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparisons of spatial objects used in the original R-tree, the algorithm builds the spatial index with binary code operations, which computers execute more efficiently, and an extended dynamic hash code for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a full node must be split. We describe a flexible allocation protocol that copes with a temporary shortage of storage resources: it uses a distributed balanced binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. An application manipulates the GSHR-Tree structure from a node in the grid environment, addressing the tree through a local image that splits can make outdated; the resulting addressing errors are resolved by forwarding among the servers. We also propose a spatial index data distribution algorithm that limits the number of servers, improving storage utilization at the cost of additional messages.
Our proposal constitutes a flexible storage allocation method for a distributed spatial index. The insertion policy can be tuned dynamically to cope with periods of storage shortage; in such cases storage balancing is favored for better space utilization, at the price of extra message exchanges between servers. The structure compromises between updating the duplicated index and transferring spatial index data and is flexible enough to satisfy new needs as they arise. GSHR-Tree provides R-tree capabilities for large spatial datasets stored over interconnected servers. The analysis, including the experiments, confirms the efficiency of our design choices; the scheme should fit the needs of new applications of spatial data using ever-larger datasets. Using the system response time of parallel spatial range queries as the performance evaluation factor, the simulation experiments demonstrate the sound design and high performance of the proposed indexing structure.
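As an illustration of the slot-to-site mapping described above, one plausible bit-level scheme (an assumption on our part; the paper's exact binary coding is not reproduced here) is a Morton/Z-order code that interleaves cell coordinates and hashes the result to a server, after which each server indexes its slots locally with an R-tree:

```python
def interleave_bits(x, y, bits=16):
    """Z-order (Morton) code: interleave the bits of the x and y cell
    indices so spatially nearby cells tend to share code prefixes."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def slot_to_server(x, y, n_servers, bits=16):
    """Map a spatial slot to a server by its Morton code; in GSHR-Tree
    each server would then index its assigned slots with a local R-tree."""
    return interleave_bits(x, y, bits) % n_servers

assert slot_to_server(3, 5, n_servers=8) in range(8)
```

Bit comparison on such codes replaces the recursive rectangle comparisons of a plain R-tree, which is the efficiency argument the abstract makes.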
Code of Federal Regulations, 2010 CFR
2010-04-01
... blood and red blood cells during storage and immediately before distribution. (iii) Storage temperature... GOOD MANUFACTURING PRACTICE FOR BLOOD AND BLOOD COMPONENTS Records and Reports § 606.160 Records. (a)(1..., processing, compatibility testing, storage and distribution of each unit of blood and blood components so...
Code of Federal Regulations, 2013 CFR
2013-04-01
... blood and red blood cells during storage and immediately before distribution. (iii) Storage temperature... GOOD MANUFACTURING PRACTICE FOR BLOOD AND BLOOD COMPONENTS Records and Reports § 606.160 Records. (a)(1..., processing, compatibility testing, storage and distribution of each unit of blood and blood components so...
Code of Federal Regulations, 2012 CFR
2012-04-01
... blood and red blood cells during storage and immediately before distribution. (iii) Storage temperature... GOOD MANUFACTURING PRACTICE FOR BLOOD AND BLOOD COMPONENTS Records and Reports § 606.160 Records. (a)(1..., processing, compatibility testing, storage and distribution of each unit of blood and blood components so...
Code of Federal Regulations, 2014 CFR
2014-04-01
... blood and red blood cells during storage and immediately before distribution. (iii) Storage temperature... GOOD MANUFACTURING PRACTICE FOR BLOOD AND BLOOD COMPONENTS Records and Reports § 606.160 Records. (a)(1..., processing, compatibility testing, storage and distribution of each unit of blood and blood components so...
Code of Federal Regulations, 2011 CFR
2011-04-01
... blood and red blood cells during storage and immediately before distribution. (iii) Storage temperature... GOOD MANUFACTURING PRACTICE FOR BLOOD AND BLOOD COMPONENTS Records and Reports § 606.160 Records. (a)(1..., processing, compatibility testing, storage and distribution of each unit of blood and blood components so...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Broderick, Robert; Mather, Barry
2016-05-01
Wide use of advanced inverters could double the electricity-distribution system’s hosting capacity for distributed PV at low costs—from about 170 GW to 350 GW (see Palmintier et al. 2016). At the distribution system level, increased variable generation due to high penetrations of distributed PV (typically rooftop and smaller ground-mounted systems) could challenge the management of distribution voltage, potentially increase wear and tear on electromechanical utility equipment, and complicate the configuration of circuit-breakers and other protection systems—all of which could increase costs, limit further PV deployment, or both. However, improved analysis of distribution system hosting capacity—the amount of distributed PV that can be interconnected without changing the existing infrastructure or prematurely wearing out equipment—has overturned previous rule-of-thumb assumptions such as the idea that distributed PV penetrations higher than 15% require detailed impact studies. For example, new analysis suggests that the hosting capacity for distributed PV could rise from approximately 170 GW using traditional inverters to about 350 GW with the use of advanced inverters for voltage management, and it could be even higher using accessible and low-cost strategies such as careful siting of PV systems within a distribution feeder and additional minor changes in distribution operations. Also critical to facilitating distributed PV deployment is the improvement of interconnection processes, associated standards and codes, and compensation mechanisms so they embrace PV’s contributions to system-wide operations. Ultimately SunShot-level PV deployment will require unprecedented coordination of the historically separate distribution and transmission systems along with incorporation of energy storage and “virtual storage,” which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Additional analysis and innovation are needed.
Rich D. Koehler; Keith I. Kelson; Graham Matthews; K.H. Kang; Andrew D. Barron
2007-01-01
The South Fork Noyo River (SFNR) watershed in coastal northern California contains large volumes of historic sediment that were delivered to channels in response to past logging operations. This sediment presently is stored beneath historic terraces and in present-day channels. We conducted geomorphic mapping on the SFNR valley floor to assess the volume and location...
Forming an ad-hoc nearby storage, based on IKAROS and social networking services
NASA Astrophysics Data System (ADS)
Filippidis, Christos; Cotronis, Yiannis; Markou, Christos
2014-06-01
We present an ad-hoc "nearby" storage, based on IKAROS and social networking services, such as Facebook. By design, IKAROS can increase or decrease the number of nodes of the I/O system instance on the fly, without bringing everything down or losing data. IKAROS can decide the file partition distribution schema by taking into account requests from the user or an application, as well as a domain or Virtual Organization policy. In this way, it is possible to form multiple instances of smaller-capacity, higher-bandwidth storage utilities capable of responding in an ad-hoc manner. This approach, focusing on flexibility, can scale both up and down and so can provide more cost-effective infrastructures for both large-scale and smaller-size systems. A set of experiments is performed comparing IKAROS with PVFS2, using multiple client requests under the HPC IOR benchmark and MPICH2.
7 CFR 250.15 - Financial management.
Code of Federal Regulations, 2011 CFR
2011-01-01
... any single storage facility during the fiscal year in which the loss occurred, or during the period... the direct costs for intrastate storage and distribution of donated food through distribution charges... program costs referenced in paragraph (f)(2) of this section, i.e. transportation, storage and handling of...
7 CFR 250.15 - Financial management.
Code of Federal Regulations, 2013 CFR
2013-01-01
... any single storage facility during the fiscal year in which the loss occurred, or during the period... the direct costs for intrastate storage and distribution of donated food through distribution charges... program costs referenced in paragraph (f)(2) of this section, i.e. transportation, storage and handling of...
7 CFR 250.15 - Financial management.
Code of Federal Regulations, 2012 CFR
2012-01-01
... any single storage facility during the fiscal year in which the loss occurred, or during the period... the direct costs for intrastate storage and distribution of donated food through distribution charges... program costs referenced in paragraph (f)(2) of this section, i.e. transportation, storage and handling of...
7 CFR 250.15 - Financial management.
Code of Federal Regulations, 2014 CFR
2014-01-01
... any single storage facility during the fiscal year in which the loss occurred, or during the period... the direct costs for intrastate storage and distribution of donated food through distribution charges... program costs referenced in paragraph (f)(2) of this section, i.e. transportation, storage and handling of...
GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data
NASA Astrophysics Data System (ADS)
Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.
2016-12-01
Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build the multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools for other domains with spatial properties. We tested the performance of the platform based on taxi trajectory analysis. Results suggest that GISpark achieves excellent run-time performance in spatiotemporal big data applications.
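As a flavor of the Spark-based framework and the taxi-trajectory test mentioned above, a minimal PySpark job that bins trajectory points into grid cells and counts visits per cell might look like the sketch below; the HDFS paths, CSV layout, and cell size are hypothetical, not GISpark internals:

```python
from operator import add
from pyspark import SparkContext

sc = SparkContext(appName="taxi-grid-counts")

CELL = 0.01  # grid cell size in degrees (an assumed analysis parameter)

def to_cell(line):
    # Assumed CSV layout: taxi_id,timestamp,lon,lat
    _, _, lon, lat = line.split(",")[:4]
    return (int(float(lon) / CELL), int(float(lat) / CELL)), 1

counts = (sc.textFile("hdfs:///data/taxi/*.csv")  # hypothetical input path
            .map(to_cell)                          # point -> (cell, 1)
            .reduceByKey(add))                     # visits per cell
counts.saveAsTextFile("hdfs:///out/cell_counts")   # hypothetical output path
```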
Floodplain soil organic carbon storage in the central Yukon River Basin
NASA Astrophysics Data System (ADS)
Lininger, K.; Wohl, E.
2017-12-01
As rivers transport sediment, organic matter, and large wood, they can deposit those materials in their floodplains, storing carbon. One aspect of the carbon cycle that isn't well understood is how much carbon is stored in rivers and floodplains. There may be more carbon in rivers and floodplains than previously thought. This is important for accounting for all aspects of the carbon cycle, which is the movement of carbon among the land, ocean, and atmosphere. We are quantifying that storage in high latitude floodplains through fieldwork along five rivers in the central Yukon River Basin within the Yukon Flats National Wildlife Refuge in interior Alaska. We find that the geomorphic environment and geomorphic characteristics of rivers influence the spatial distribution of carbon on the landscape, and that floodplains may be disproportionally important for carbon storage compared to other areas. Our study area contains discontinuous permafrost, which is soil that is perennially frozen, and is warming quickly due to climate change, as in other high latitude regions. The large amount of carbon stored in the subsurface and in permafrost in the high latitudes highlights the importance of understanding where carbon is stored within rivers and floodplains in these regions and how long that carbon remains in storage. Our research helps inform how river systems influence the carbon cycle in a region undergoing rapid change.
NASA Astrophysics Data System (ADS)
Ceballos-Núñez, Verónika; Richardson, Andrew; Sierra, Carlos
2017-04-01
The global carbon cycle is strongly controlled by the source/sink strength of vegetation as well as the capacity of terrestrial ecosystems to retain this carbon. However, it is uncertain how some vegetation dynamics, such as the allocation of carbon to different ecosystem compartments, should be represented in models. The assumptions behind model structures may result in highly divergent model predictions. Here, we assess model performance by calculating the age of the carbon in the system and in each compartment, and the overall transit time of C in the system. We used these diagnostics to assess the influence of three different carbon allocation schemes on the rates of C cycling in vegetation. First, we used published measurements of ecosystem C compartments from the Harvard Forest Environmental Measurement Site to find the best set of parameters for the different model structures. Second, we calculated C stocks, respiration fluxes, radiocarbon values, ages, and transit times. We found a good fit of the three model structures to the available data, but the time series of C in foliage and wood need to be complemented with other ecosystem compartments in order to reduce the high parameter collinearity that we observed and to reduce model equifinality. Differences in model structure had a small impact on predictions of ecosystem C compartments, but they resulted in very different predictions of age and transit time distributions. In particular, the inclusion of a storage compartment had an important impact on predicted system ages and transit times. In the models with 1 or 2 storage compartments, the age of carbon in the system and in each of the compartments was distributed more towards younger ages than in the model without storage; the mean system age of the two models with storage was 80 years younger than that of the model without storage. As expected from these age distributions, the mean transit time for the two models with storage compartments was 50 years faster than for the model without storage. These results suggest that ages and transit times, which can be indirectly measured using isotope tracers, serve as important diagnostics of model structure and could largely help to reduce uncertainties in model predictions. Furthermore, by considering the ages and transit times of C in vegetation compartments as distributions, not only as mean values, we obtain additional insights into the temporal dynamics of carbon use, storage, and allocation to plant parts, which depend not only on the rate at which C is transferred in and out of the compartments, but also on the stochastic nature of the process itself.
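For context on the age and transit-time diagnostics used above, the standard steady-state relations (textbook results, not equations quoted from this study) connect the mean transit time to stock and throughflux, and the mean system age to the age density:

```latex
% Steady state: mean transit time from total stock S and throughflux F,
% and mean system age from the age density p_sys(a).
\bar{T} = \frac{S}{F}, \qquad
\bar{A}_{\mathrm{sys}} = \int_{0}^{\infty} a \, p_{\mathrm{sys}}(a)\, da
```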
LVFS: A Scalable Petabye/Exabyte Data Storage System
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.
2013-12-01
Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. First, is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset, and, second, minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system. The LVFS system replaces the NFS disk mounting approach of LAADS and utilizes the already existing highly optimized metadata database server, which is applicable to most scientific big data intensive compute systems. Thus, LVFS ties the existing storage system with the existing metadata infrastructure system which we believe leads to a scalable exabyte virtual file system. The uniqueness of the implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing the Filesystem In Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument or product without the need for duplication, symbolic links, or any other replication methods. We proposed here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata file.
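LVFS's key move is answering namespace lookups from the existing metadata database instead of a separate directory tree. A minimal sketch of that idea follows, with SQLite standing in for the metadata server; the table schema, facets, and paths are hypothetical, not LAADS internals:

```python
import sqlite3

# Hypothetical metadata table standing in for the archive's DB server.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE granules (instrument TEXT, day TEXT, "
           "region TEXT, physical_path TEXT)")
db.executemany("INSERT INTO granules VALUES (?, ?, ?, ?)", [
    ("MODIS", "2013-06-01", "arctic", "/disk07/a001.hdf"),
    ("MODIS", "2013-06-02", "tropics", "/disk12/a002.hdf"),
    ("VIIRS", "2013-06-01", "arctic", "/disk03/b001.hdf"),
])

def listdir(facet, **filters):
    """Materialize one level of a virtual directory tree on the fly:
    group files by any facet (instrument/day/region) without symlinks
    or duplication, directly from the metadata store."""
    where = " AND ".join(f"{k} = ?" for k in filters) or "1=1"
    rows = db.execute(f"SELECT DISTINCT {facet} FROM granules WHERE {where}",
                      tuple(filters.values()))
    return [r[0] for r in rows]

print(listdir("day", instrument="MODIS"))      # a /MODIS/<day>/ view
print(listdir("instrument", region="arctic"))  # a /by-region/arctic/ view
```

A FUSE layer would expose such queries as directory listings, which is how the same files can appear organized by region, date, or instrument at once.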
Scalable cloud without dedicated storage
NASA Astrophysics Data System (ADS)
Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.
2015-05-01
We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without the separate dedicated storage. The dedicated storage is replaced by the distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built on the basis of the open source components like OpenStack, CEPH, etc.
Niksirat, Hamid; Kouba, Antonín
2016-04-01
The freshly ejaculated spermatophore of crayfish undergoes a hardening process during post-mating storage on the body surface of the female. The ultrastructural distribution of calcium deposits was studied and compared in freshly ejaculated and post-mating noble crayfish spermatophores, using the oxalate-pyroantimonate technique, to determine possible roles of calcium in post-mating spermatophore hardening and spermatozoon maturation. Small particles of sparsely distributed calcium deposits were visible in the wall of the freshly ejaculated spermatophore. Also, large amounts of calcium deposits were visible in the membranes of the freshly ejaculated spermatozoon. Five minutes post-ejaculation, granules in the spermatophore wall appeared as porous formations with numerous electron-lucent spaces. Calcium deposits were visible within the spaces and scattered in the spermatophore wall matrix, where smaller calcium deposits combined to form globular calcium deposits. Large numbers of the globular calcium deposits were visible in the wall of the post-mating spermatophore. Smaller calcium deposits were detected in the central area of the post-mating spermatophore, which contains the sperm mass, and in the extracellular matrix and capsule. While the density of calcium deposits decreased in the post-mating spermatozoon membranes, numerous small calcium deposits appeared in the subacrosomal zone and nucleus. The substantial changes in calcium deposit distribution in the crayfish spermatophore during post-mating storage on the body of the female may be involved in the processes of spermatophore hardening and spermatozoon maturation. © 2016 Wiley Periodicals, Inc.
Orthos, an alarm system for the ALICE DAQ operations
NASA Astrophysics Data System (ADS)
Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy
2012-12-01
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detectors electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow-up abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issues tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.
An application of digital network technology to medical image management.
Chu, W K; Smith, C L; Wobig, R K; Hahn, F A
1997-01-01
With the advent of network technology, there is considerable interest within the medical community in managing the storage and distribution of medical images by digital means. Higher workflow efficiency leading to better patient care is one of the commonly cited outcomes [1,2]. However, due to the size of medical image files and the unique requirements in detail and resolution, medical image management poses special challenges. Storage requirements are usually large, and the associated investment costs have put digital networking projects financially out of reach for many clinical institutions. New advances in network technology and telecommunication, in conjunction with the decreasing cost of computer devices, have made digital image management achievable. In our institution, we have recently completed a pilot project to distribute medical images both within the physical confines of the clinical enterprise and outside the medical center campus. The design concept and the configuration of a comprehensive digital image network are described in this report.
Bracale, Antonio; Barros, Julio; Cacciapuoti, Angela Sara; ...
2015-06-10
Electrical power systems are undergoing a radical change in structure, components, and operational paradigms, and are progressively approaching the new concept of smart grids (SGs). Future power distribution systems will be characterized by the simultaneous presence of various distributed resources, such as renewable energy systems (i.e., photovoltaic power plants and wind farms), storage systems, and controllable/non-controllable loads. Control and optimization architectures will enable network-wide coordination of these grid components in order to improve system efficiency and reliability and to limit greenhouse gas emissions. In this context, the energy flows will be bidirectional, from large power plants to end users and vice versa; producers and consumers will continuously interact at different voltage levels to determine in advance the requests of loads and to adapt the production and demand for electricity flexibly and efficiently, also taking into account the presence of storage systems.
Performance Analysis of the Unitree Central File
NASA Technical Reports Server (NTRS)
Pentakalos, Odysseas I.; Flater, David
1994-01-01
This report consists of two parts. The first part briefly comments on the documentation status of two major systems at NASA's Center for Computational Sciences, specifically the Cray C98 and the Convex C3830. The second part describes the work done on improving the performance of file transfers between the Unitree Mass Storage System running on the Convex file server and the users' workstations distributed over a large geographic area.
JPRS Report, Soviet Union, Foreign Military Review, No. 8, August 1987
1988-01-28
Hinkley Point (1.5 million) and Hartlepool (1.3 million). In recent years the country has begun building large hydro-electric pumped storage power... [figure-key residue; recoverable labels: antenna; interface equipment; data transmission line terminal; computer; power supply; plant control station; radio-relay station terminals] ...stations and data transmission line, interface equipment, and power distribution unit (Fig. 3). The parallel computer, which performs operations on...
Dynamic Collaboration Infrastructure for Hydrologic Science
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.
2016-12-01
Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that offers water scientists the ability to access modeling and data infrastructure in support of data-intensive modeling and analysis. It supports the sharing of and collaboration around "resources," which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and the analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare are increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services needed to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may need to process a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges, a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure for the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the results of this proof-of-concept prototype, which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure to address big problems in hydrology.
NASA Astrophysics Data System (ADS)
Queloz, Pierre; Carraro, Luca; Benettin, Paolo; Botter, Gianluca; Rinaldo, Andrea; Bertuzzo, Enrico
2015-04-01
A theoretical analysis of transport in a controlled hydrologic volume, inclusive of two willow trees and forced by erratic water inputs, is carried out and contrasted with the experimental data described in a companion paper. The data refer to the hydrologic transport, in a large lysimeter, of different fluorobenzoic acids used as tracers. Export of solute is modeled through a recently developed framework that accounts for nonstationary travel time distributions, in which we parameterize how output fluxes (namely, discharge and evapotranspiration) sample the available water ages in storage. The relevance of this work lies in the study of the hydrologic drivers of the nonstationary character of residence and travel time distributions, whose definition and computation shape this theoretical transport study. Our results show that a large fraction of the different behaviors exhibited by the tracers may be attributed to the variability of the hydrologic forcings experienced after the injection. Moreover, the results highlight the crucial, and often overlooked, role of evapotranspiration and plant uptake in determining the transport of water and solutes. This application also suggests that the way evapotranspiration selects water of different ages in storage can be inferred through model calibration contrasting only tracer concentrations in the discharge. A view on upscaled transport volumes like hillslopes or catchments is maintained throughout the paper.
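The abstract's framework parameterizes how discharge and evapotranspiration sample stored water ages. One widely used formulation of this kind (stated here as background; the paper's exact equations are not quoted) is the age-ranked storage balance, where S_T(T,t) is the volume in storage younger than age T, J is the input flux, and P_Q and P_ET are the cumulative age distributions sampled by discharge Q and evapotranspiration ET:

```latex
\frac{\partial S_T(T,t)}{\partial t} + \frac{\partial S_T(T,t)}{\partial T}
  \;=\; J(t) \;-\; Q(t)\,P_Q(T,t) \;-\; ET(t)\,P_{ET}(T,t)
```

Calibrating the shape of P_ET against tracer concentrations in discharge is what lets the authors infer how plants select water ages.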
The Czech National Grid Infrastructure
NASA Astrophysics Data System (ADS)
Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.
2017-10-01
The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest share of CPUs is still accessible via distributed Torque servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided into the EGI FedCloud environment with a cloud interface, and a Hadoop cluster is provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.
Physico-chemical characterization of grain dust in storage air of Bangalore.
Mukherjee, A K; Nag, D P; Kakde, Y; Babu, K R; Prdkash, M N; Rao, S R
1998-06-01
An Anderson personal cascade impactor was used to study the particle mass size distribution in the storage air of two major grain storage centers in Bangalore. Dust levels in storage air as well as the personal exposures of workers were determined along with a detailed study on the particle size distribution. Protein and carbohydrate content of the dust were also determined respectively in the phosphate buffer saline (PBS) and water extracts by using the standard analytical techniques. Personal exposures in both of the grain storage centers have been found to be much above the limit prescribed by ACGIH (1995-96). But the results of particle size analysis showed a higher particle mass distribution in the non-respirable size range. The mass median diameters (MMD) of the storage air particulate of both the centers were found to be beyond the respirable range. Presence of protein and carbohydrate in the storage air dust is indicative of the existence of glyco-proteins, mostly of membrane origin.
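The mass median diameter (MMD) reported from cascade-impactor data is the diameter at which the cumulative mass fraction crosses 50%, usually interpolated on a log-diameter scale. A small sketch of that computation follows; the stage cut diameters and masses are illustrative values, not the Bangalore measurements:

```python
import numpy as np

def mass_median_diameter(cut_diams_um, stage_masses_mg):
    """Estimate MMD from cascade-impactor stages: find where the
    cumulative mass fraction (coarsest stage first) crosses 50% and
    interpolate between cut diameters in log-diameter space."""
    d = np.asarray(cut_diams_um, dtype=float)   # descending cut sizes, um
    m = np.asarray(stage_masses_mg, dtype=float)
    cum = np.cumsum(m) / m.sum()                # cumulative from coarsest
    i = np.searchsorted(cum, 0.5)
    if i == 0:
        return float(d[0])
    f = (0.5 - cum[i - 1]) / (cum[i] - cum[i - 1])
    log_mmd = np.log(d[i - 1]) + f * (np.log(d[i]) - np.log(d[i - 1]))
    return float(np.exp(log_mmd))

# Illustrative stage data (cut diameters in microns, collected mass in mg).
print(mass_median_diameter([21.3, 14.8, 9.8, 6.0, 3.5, 1.55, 0.93],
                           [2.0, 3.5, 4.0, 2.5, 1.2, 0.5, 0.3]))
```

An MMD above the respirable range, as found here, means most collected mass sits on the coarse stages even when total dust levels exceed exposure limits.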
Implementation of Dynamic Extensible Adaptive Locally Exchangeable Measures (IDEALEM) v 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sim, Alex; Lee, Dongeun; Wu, K. John
2016-03-04
Handling large streaming data is essential for various applications such as network traffic analysis, social networks, energy cost trends, and environment modeling. However, it is in general intractable to store, compute, search, and retrieve large streaming data. This software addresses a fundamental issue: reducing the size of large streaming data while still obtaining accurate statistical analysis. As an example, when a high-speed network such as a 100 Gbps network is monitored, the collected measurement data grow so rapidly that polynomial-time algorithms (e.g., Gaussian processes) become intractable. One possible solution to reduce the storage of vast amounts of measured data is to store a random sample, such as one out of every 1000 network packets. However, such static sampling methods (linear sampling) have drawbacks: (1) they are not scalable for high-rate streaming data, and (2) there is no guarantee of reflecting the underlying distribution. In this software, we implemented a dynamic sampling algorithm, based on recent technology from relational dynamic Bayesian online locally exchangeable measures, that reduces the storage of data records at large scale and still provides accurate analysis of large streaming data. The software can be used for both online and offline data records.
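IDEALEM's own algorithm is based on locally exchangeable measures and is not reproduced here. But to contrast with the static one-in-1000 sampling criticized above, a classic dynamic alternative that keeps a fixed-size uniform sample over an unbounded stream is reservoir sampling:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of unknown
    length: item i (0-based) replaces a reservoir slot with prob k/(i+1)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)   # uniform over all items seen so far
            if j < k:
                reservoir[j] = item
    return reservoir

# 1000-item uniform sample from a million-record stream, in one pass.
sample = reservoir_sample(range(1_000_000), k=1000)
```

Unlike a fixed 1-in-N rule, the reservoir's size never grows with the stream rate, though it still lacks IDEALEM's distribution-aware guarantees.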
Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.
Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul
2017-02-01
Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, as a typical application supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of power cumulative cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has traditionally been used for traffic engineering in computer networks, to derive bounds on both power supply and user demand that achieve a high service reliability to users. Through an extensive performance evaluation, our data show that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide sustainable service reliability to users in the power grid.
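In standard network-calculus terms (the abstract does not state its bounds explicitly, so the following is illustrative rather than the authors' result), if cumulative user demand is upper-bounded by an arrival curve \alpha(t) and guaranteed supply is lower-bounded by a service curve \beta(t), the worst-case energy deficit B that storage and distributed resources must cover obeys the usual backlog bound:

```latex
B \;\le\; \sup_{t \ge 0} \bigl[\alpha(t) - \beta(t)\bigr]
```

Sizing storage to this maximal vertical deviation between the two curves is one way to guarantee the service reliability the paper analyzes.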
Grid regulation services for energy storage devices based on grid frequency
Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K
2013-07-02
Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).
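Following the charging logic described in the patent abstract (charge when grid frequency exceeds its average, discharge when below), a minimal proportional controller with a deadband might look like the sketch below; the deadband, droop gain, and 6.6 kW charger rating are assumptions for illustration, not values from the patent:

```python
def regulation_power(freq_hz, avg_hz=60.0, deadband_hz=0.02,
                     max_power_kw=6.6):
    """Charger setpoint in kW: positive = charge (absorb excess grid
    power at high frequency), negative = discharge (support the grid
    at low frequency). Proportional response outside a small deadband."""
    error = freq_hz - avg_hz
    if abs(error) <= deadband_hz:
        return 0.0
    gain = max_power_kw / 0.1        # full power at +/-0.1 Hz deviation
    p = gain * error
    return max(-max_power_kw, min(max_power_kw, p))

print(regulation_power(60.05))   # over-frequency  -> charge the battery
print(regulation_power(59.93))   # under-frequency -> discharge to the grid
```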
Grid regulation services for energy storage devices based on grid frequency
Pratt, Richard M.; Hammerstrom, Donald J.; Kintner-Meyer, Michael C. W.; Tuffner, Francis K.
2017-09-05
Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).
Grid regulation services for energy storage devices based on grid frequency
Pratt, Richard M.; Hammerstrom, Donald J.; Kintner-Meyer, Michael C. W.; Tuffner, Francis K.
2014-04-15
Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).
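A minimal sketch of the frequency-responsive control idea these patent abstracts describe: charge faster when grid frequency is above its nominal value (excess generation) and discharge when it is below (shortage). The droop gain, charger rating, base charging rate, and 60 Hz nominal are assumed values for illustration, not figures from the patents.

```python
NOMINAL_HZ = 60.0
GAIN_KW_PER_HZ = 50.0      # droop-style proportional gain (assumed)
CHARGER_LIMIT_KW = 6.6     # bi-directional charger rating (assumed)

def charger_setpoint_kw(measured_hz, base_charge_kw=3.3):
    """Positive = charging the battery, negative = discharging to the grid."""
    correction = GAIN_KW_PER_HZ * (measured_hz - NOMINAL_HZ)
    setpoint = base_charge_kw + correction
    return max(-CHARGER_LIMIT_KW, min(CHARGER_LIMIT_KW, setpoint))

for f in (59.90, 59.97, 60.00, 60.05):
    print(f"{f:.2f} Hz -> {charger_setpoint_kw(f):+.2f} kW")
# 59.90 Hz -> -1.70 kW (discharging to support the grid)
# 60.05 Hz -> +5.80 kW (charging harder to absorb excess power)
```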
Solar electricity supply isolines of generation capacity and storage.
Grossmann, Wolf; Grossmann, Iris; Steininger, Karl W
2015-03-24
The recent sharp drop in the cost of photovoltaic (PV) electricity generation, accompanied by rapidly increasing global investment in PV plants, calls for new planning and management tools for large-scale distributed solar networks. Of major importance are methods to overcome the intermittency of solar electricity, i.e., to provide dispatchable electricity at minimal cost. We find that pairs of electricity generation capacity G and storage S that give dispatchable electricity and are minimal with respect to S for a given G exhibit a smooth relationship of mutual substitutability between G and S. These isolines between G and S support several tasks, including the optimal sizing of generation capacity and storage, optimal siting of solar parks, optimal connections of solar parks across time zones for minimizing intermittency, and management of storage in situations of far below average insolation to provide dispatchable electricity. G-S isolines allow determining the cost-optimal pair (G,S) as a function of the cost ratio of G and S. G-S isolines provide a method for evaluating the effect of geographic spread and time zone coverage on the costs of solar electricity.
Solar electricity supply isolines of generation capacity and storage
Grossmann, Wolf; Grossmann, Iris; Steininger, Karl W.
2015-01-01
The recent sharp drop in the cost of photovoltaic (PV) electricity generation, accompanied by rapidly increasing global investment in PV plants, calls for new planning and management tools for large-scale distributed solar networks. Of major importance are methods to overcome the intermittency of solar electricity, i.e., to provide dispatchable electricity at minimal cost. We find that pairs of electricity generation capacity G and storage S that give dispatchable electricity and are minimal with respect to S for a given G exhibit a smooth relationship of mutual substitutability between G and S. These isolines between G and S support several tasks, including the optimal sizing of generation capacity and storage, optimal siting of solar parks, optimal connections of solar parks across time zones for minimizing intermittency, and management of storage in situations of far below average insolation to provide dispatchable electricity. G−S isolines allow determining the cost-optimal pair (G,S) as a function of the cost ratio of G and S. G−S isolines provide a method for evaluating the effect of geographic spread and time zone coverage on the costs of solar electricity. PMID:25755261
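The isoline construction can be illustrated numerically: for each generation capacity G, find the smallest storage S such that a constant demand is always served from PV output plus storage. The sketch below assumes a synthetic hourly insolation profile, lossless storage, and unit demand; it only settles to a finite S when G times the mean insolation covers the demand.

```python
def min_storage(G, insolation, demand=1.0):
    """Smallest pre-charged storage that keeps the energy balance feasible."""
    soc, deepest = 0.0, 0.0
    for x in insolation:          # x: per-unit PV output in [0, 1]
        soc += G * x - demand     # net energy this hour
        deepest = min(deepest, soc)
    return -deepest               # deepest dip = required storage

# Synthetic clear-sky day; requires G * mean(day) >= demand for feasibility.
day = [0, 0, 0, 0, 0, 0.1, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0, 1.0,
       0.9, 0.8, 0.7, 0.5, 0.3, 0.1, 0, 0, 0, 0, 0]

for G in (3.0, 4.0, 6.0):
    print(f"G = {G:.1f} -> minimal S = {min_storage(G, day * 7):.1f}")
# Larger G substitutes for S along the isoline: S falls from 5.8 to 5.4.
```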
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such very large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and providing a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in developing a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
Autonomic Management in a Distributed Storage System
NASA Astrophysics Data System (ADS)
Tauber, Markus
2010-07-01
This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments carried out in a local area test-bed. The experiments were conducted with components of one specific distributed storage system, but the results are intended to be applicable to a wide range of such systems, in particular those exposed to varying conditions. The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one specific configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration that was more suitable than any that could be set a priori, and furthermore that the system could react to a change in conditions by adopting a more appropriate configuration. Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The experimental evaluations of autonomic management show promising results and suggest several future research topics. The findings of this thesis could be exploited in building other distributed storage systems that focus on harnessing storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houssainy, Sammy; Janbozorgi, Mohammad; Kavehpour, Pirouz
Compressed Air Energy Storage (CAES) can potentially allow renewable energy sources to meet electricity demands as reliably as coal-fired power plants. However, conventional CAES systems rely on the combustion of natural gas, require large storage volumes, and operate at high pressures, which pose inherent problems such as high costs, strict geological siting requirements, and the production of greenhouse gas emissions. A novel and patented hybrid thermal-compressed air energy storage (HT-CAES) design is presented which allows a portion of the available energy, from the grid or renewable sources, to operate a compressor, with the remainder converted and stored in the form of heat, through joule heating, in a sensible thermal storage medium. The HT-CAES design includes a turbocharger unit that provides supplementary mass flow rate alongside the air storage. The hybrid design and the addition of a turbocharger have the beneficial effect of mitigating the shortcomings of conventional CAES systems and its derivatives by eliminating combustion emissions and reducing storage volumes, operating pressures, and costs. Storage efficiency and cost are the two key factors which, upon integration with renewable energies, would allow the sources to operate as independent forms of sustainable energy. The potential of the HT-CAES design is illustrated through a thermodynamic optimization study, which outlines key variables that have a major impact on the performance and economics of the storage system. The optimization analysis quantifies the required distribution of energy between thermal and compressed air energy storage for maximum efficiency and for minimum cost. This study provides a roundtrip energy and exergy efficiency map of the storage system and illustrates a trade-off that exists between its capital cost and performance.
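The cost-versus-performance trade-off the abstract mentions can be sketched with a toy energy-split model (not the authors' thermodynamic analysis): sweep the fraction f of input energy sent to thermal storage, with assumed path efficiencies and unit capacity costs, and tabulate the resulting roundtrip efficiency and capital cost.

```python
ETA_THERMAL, ETA_CAES = 0.40, 0.60        # assumed path efficiencies
COST_THERMAL, COST_CAES = 20.0, 120.0     # assumed capital cost, $/kWh capacity

for f in (0.0, 0.25, 0.5, 0.75, 1.0):     # fraction of input energy to heat
    eff = f * ETA_THERMAL + (1 - f) * ETA_CAES
    cost = f * COST_THERMAL + (1 - f) * COST_CAES
    print(f"thermal fraction {f:.2f}: efficiency {eff:.2f}, capital ${cost:6.1f}/kWh")
# More thermal storage cuts capital cost but lowers roundtrip efficiency,
# which is the qualitative trade-off the optimization study maps out.
```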
NASA Astrophysics Data System (ADS)
Chubar, O.; Couprie, M.-E.
2007-01-01
A CPU-efficient method for calculating the frequency-domain electric field of Coherent Synchrotron Radiation (CSR), taking into account the 6D phase space distribution of electrons in a bunch, is proposed. As an application example, calculation results are presented for the CSR emitted by an electron bunch with small longitudinal and large transverse sizes. Such a situation can be realized in storage rings or ERLs by transverse deflection of the electron bunches in special crab-type RF cavities, i.e., using the technique proposed for the generation of femtosecond X-ray pulses (A. Zholents et al., 1999). The computation, performed for the parameters of the SOLEIL storage ring, shows that if the transverse size of the electron bunch is larger than the diffraction limit for single-electron SR at a given wavelength, the angular distribution of the CSR at that wavelength is affected and the coherent flux is reduced. Nevertheless, for transverse bunch dimensions up to several millimeters and a longitudinal bunch size smaller than one hundred micrometers, the resulting CSR flux in the far-infrared spectral range is still many orders of magnitude higher than the flux of incoherent SR, and can therefore be considered for practical use.
A Distributed Fuzzy Associative Classifier for Big Data.
Segatori, Armando; Bechini, Alessio; Ducange, Pietro; Marcelloni, Francesco
2017-09-19
Fuzzy associative classification has not been widely analyzed in the literature, although associative classifiers (ACs) have proved to be very effective in different real-domain applications. The main reason is that learning fuzzy ACs is a very heavy task, especially when dealing with large datasets. To overcome this drawback, in this paper we propose an efficient distributed fuzzy associative classification approach based on the MapReduce paradigm. The approach exploits a novel distributed discretizer based on fuzzy entropy for efficiently generating fuzzy partitions of the attributes. Then, a set of candidate fuzzy association rules is generated by employing a distributed fuzzy extension of the well-known FP-Growth algorithm. Finally, this set is pruned by using three purposely adapted types of pruning. We implemented our approach on the popular Hadoop framework, which allows distributing the storage and processing of very large data sets on computer clusters built from commodity hardware. We have performed extensive experimentation and a detailed analysis of the results using six very large datasets with up to 11,000,000 instances, and have also experimented with different types of reasoning methods. Focusing on accuracy, model complexity, computation time, and scalability, we compare the results achieved by our approach with those obtained by two distributed nonfuzzy ACs recently proposed in the literature. We highlight that, although the accuracies are comparable, the complexity of the classifiers generated by the distributed fuzzy approach, evaluated in terms of the number of rules, is lower than that of the nonfuzzy classifiers.
Taking digital imaging to the next level: challenges and opportunities.
Hobbs, W Cecyl
2004-01-01
New medical imaging technologies, such as multi-detector computed tomography (CT) scanners and positron emission tomography (PET) scanners, are creating new possibilities for non-invasive diagnosis that are leading providers to invest heavily in them. The volume of data produced by such technology is so large that it cannot be "read" using traditional film-based methods, and once in digital form, it creates a massive data integration and archiving challenge. Despite the benefits of digital imaging and archiving, there are several key challenges that healthcare organizations should consider in planning, selecting, and implementing the information technology (IT) infrastructure to support digital imaging. Decisions about storage and image distribution are essentially questions of "where" and "how fast." When planning the digital archiving infrastructure, organizations should think about where they want to store and distribute their images. This is similar to the decisions organizations have to make regarding physical film storage and distribution, except that the portability of images is even greater in a digital environment. The principle of "network effects" seems like a simple concept, yet the effect is not always considered when implementing a technology plan. To fully realize the benefits of digital imaging, the radiology department must integrate the archiving solutions throughout the department and, ultimately, with applications across other departments and enterprises. Medical institutions can derive a number of benefits from implementing digital imaging and archiving solutions like PACS. Hospitals and imaging centers can use the transition from film-based imaging as a foundational opportunity to reduce costs, increase competitive advantage, attract talent, and improve service to patients. The key factors in achieving these goals include attention to the means of data storage, distribution and protection.
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1993-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X DataSlice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1992-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X Data Slice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud
NASA Astrophysics Data System (ADS)
Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.
2014-12-01
The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and run models to better understand the impacts of a rapidly changing climate in areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign: the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data-intensive science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data-proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory, coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.
Roccato, Anna; Uyttendaele, Mieke; Membré, Jeanne-Marie
2017-06-01
In the framework of food safety, when mimicking the consumer phase, the storage time and temperature used are mainly considered as single-point estimates instead of probability distributions. This single-point approach does not take into account the variability within a population and could lead to an overestimation of the parameters. Therefore, the aim of this study was to analyse data on domestic refrigerator temperatures and storage times of chilled food in European countries in order to draw general rules which could be used either in shelf-life testing or risk assessment. In relation to domestic refrigerator temperatures, 15 studies provided pertinent data. Twelve studies presented normal distributions, according to the authors or from the data fitted into distributions. Analysis of the temperature distributions revealed that the countries fell into two groups: northern European countries and southern European countries. The overall variability of European domestic refrigerators is described by a normal distribution: N(7.0, 2.7) °C for the southern countries and N(6.1, 2.8) °C for the northern countries. Concerning storage times, seven papers were pertinent. Analysis indicated that storage was likely to end in the first days or weeks (depending on the product use-by date) after purchase. Data fitting showed the exponential distribution to be the most appropriate distribution to describe the time that food spends at the consumer's place. The storage time was described by an exponential distribution with a mean corresponding to the use-by-date period divided by 4. In conclusion, knowing that collecting data is time- and money-consuming, in the absence of data, and at least for the European market and for refrigerated products, building a domestic refrigerator temperature distribution using a Normal law and a time-to-consumption distribution using an Exponential law would be appropriate.
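The two fitted distributions translate directly into a Monte Carlo sampler for the consumer phase, as in the hedged sketch below; the 10-day use-by period and the abuse criterion (more than 2 days above 10 °C) are assumed example values, not figures from the study.

```python
import random

USE_BY_DAYS = 10                     # assumed product use-by period
MEAN_STORAGE_DAYS = USE_BY_DAYS / 4  # exponential mean = use-by period / 4

def sample_consumer_phase():
    temp_c = random.gauss(7.0, 2.7)                # southern-country fridges
    days = random.expovariate(1 / MEAN_STORAGE_DAYS)
    return temp_c, days

samples = [sample_consumer_phase() for _ in range(100_000)]
frac_abusive = sum(t > 10 and d > 2 for t, d in samples) / len(samples)
print(f"fraction stored >2 days above 10 degC: {frac_abusive:.3f}")
```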
Kanerva's sparse distributed memory with multiple hamming thresholds
NASA Technical Reports Server (NTRS)
Pohja, Seppo; Kaski, Kimmo
1992-01-01
If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address, or hard location. The selection of the hard locations for reading or writing can be done in parallel, from which SDM implementations can benefit.
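A minimal sketch of the described variation, with a per-location Hamming threshold replacing Kanerva's single global radius; the dimensions, location count, and threshold range are illustrative assumptions.

```python
import random

N, M = 256, 512                                   # address bits, hard locations
random.seed(1)
hard = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]
thresholds = [random.randint(96, 118) for _ in range(M)]   # per-location radii
counters = [[0] * N for _ in range(M)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def selected(addr):
    """Hard locations whose own threshold admits this address."""
    return [m for m in range(M) if hamming(hard[m], addr) <= thresholds[m]]

def write(addr, data):
    for m in selected(addr):
        for i, bit in enumerate(data):
            counters[m][i] += 1 if bit else -1

def read(addr):
    sums = [0] * N
    for m in selected(addr):
        for i in range(N):
            sums[i] += counters[m][i]
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.randint(0, 1) for _ in range(N)]
write(pattern, pattern)                  # autoassociative store
print("recalled exactly:", read(pattern) == pattern)
```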
Large Scale Simulation Platform for NODES Validation Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sotorrio, P.; Qin, Y.; Min, L.
2017-04-27
This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator, and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders, to validate scalability under a scenario of 33% RPS in California with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating more than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/Var control.
Domoic acid excretion in dungeness crabs, razor clams and mussels.
Schultz, Irvin R; Skillman, Ann; Woodruff, Dana
2008-07-01
Domoic acid (DA) is a neurotoxic amino acid produced by several marine algal species of the Pseudo-nitzschia (PN) genus. We studied the elimination of DA from hemolymph after intravascular (IV) injection in razor clams (Siliqua patula), mussels (Mytilus edulis) and Dungeness crabs (Cancer magister). Crabs were also injected with two other organic acids, dichloroacetic acid (DCAA) and kainic acid (KA). For IV dosing, hemolymph was repetitively sampled and DA concentrations were measured by HPLC-UV. Toxicokinetic analysis of DA in crabs suggested that most of the injected dose remained within the hemolymph compartment, with little extravascular distribution. This observation is in sharp contrast to results obtained from clams and mussels, which exhibited similarly large apparent volumes of distribution despite large differences in overall clearance. These findings suggest that fundamentally different storage and elimination processes occur for DA in bivalves and crabs.
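The toxicokinetic quantities compared here can be illustrated with a one-compartment sketch: fit ln C(t) to a line to get the elimination rate k, then Vd = dose / C0 and CL = k * Vd. The dose and concentration values below are invented for illustration; a small Vd (near the hemolymph volume) would indicate little extravascular distribution, as reported for crabs.

```python
import math

dose_ug = 50.0
times_h = [1, 2, 4, 8, 24]
conc_ug_per_ml = [4.55, 4.13, 3.42, 2.34, 0.51]   # invented mono-exponential data

# Linear regression of ln(C) on t: slope = -k, intercept = ln(C0).
n = len(times_h)
lnC = [math.log(c) for c in conc_ug_per_ml]
tbar, ybar = sum(times_h) / n, sum(lnC) / n
num = sum((t - tbar) * (y - ybar) for t, y in zip(times_h, lnC))
den = sum((t - tbar) ** 2 for t in times_h)
k = -num / den
C0 = math.exp(ybar + k * tbar)

Vd = dose_ug / C0            # mL: small Vd -> toxin confined to hemolymph
CL = k * Vd                  # mL/h
print(f"k = {k:.3f} 1/h, Vd = {Vd:.1f} mL, CL = {CL:.2f} mL/h")
```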
Final Report for File System Support for Burst Buffers on HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, W.; Mohror, K.
Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV, a key-value store for metadata management of distributed burst buffers; a user-level file system with multiple backends; and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei
Tensor contractions represent the most compute-intensive core kernels in ab initio computational quantum chemistry and nuclear physics. Symmetries in these tensor contractions make them difficult to load balance and scale to large distributed systems. In this paper, we develop an efficient and scalable algorithm to contract symmetric tensors. We introduce a novel approach that avoids data redistribution in contracting symmetric tensors while also avoiding redundant storage and maintaining load balance. We present experimental results on two parallel supercomputers for several symmetric contractions that appear in the CCSD quantum chemistry method. We also present a novel approach to tensor redistribution that can take advantage of parallel hyperplanes when the initial distribution has replicated dimensions, and use collective broadcast when the final distribution has replicated dimensions, making the algorithm very efficient.
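The storage redundancy that symmetric tensors carry can be seen in a small NumPy example: a symmetric matrix-vector contraction is reproduced from only the upper-triangular entries. This is a toy analogue of the idea, not the paper's distributed algorithm.

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
S = A + A.T                      # symmetric tensor: S[i, j] == S[j, i]
v = rng.standard_normal(n)

# Full contraction y_i = sum_j S_ij v_j using redundant dense storage.
y_full = np.einsum("ij,j->i", S, v)

# Same contraction from the unique upper-triangular entries only:
# the lower triangle is just the strict upper triangle transposed.
U = np.triu(S)
y_packed = U @ v + np.triu(S, 1).T @ v
print(np.allclose(y_full, y_packed))   # True
```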
a Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images
NASA Astrophysics Data System (ADS)
Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.
2015-07-01
Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other uses. However, it is challenging to efficiently store, query and process such big data due to its data- and computing-intensive nature. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, these remote sensing images can be processed in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.
Hu, Ning; Ma, Zhi-min; Lan, Jia-cheng; Wu, Yu-chun; Chen, Gao-qi; Fu, Wa-li; Wen, Zhi-lin; Wang, Wen-jing
2015-09-01
To illuminate the impacts on soil nitrogen accumulation and supply in a karst rocky desertification area, this study analyzed the distribution characteristics of the soil nitrogen pool for each soil aggregate class and the relationship between the aggregate nitrogen pool and soil nitrogen mineralization. The results showed that the contents of total nitrogen, light-fraction nitrogen, available nitrogen, and mineral nitrogen in soil aggregates tended to increase with decreasing aggregate size, with the highest contents occurring in the < 0.25 mm class. Across the sample plots, the nitrogen fraction contents for all aggregate classes followed the order abandoned land < grass land < brush land < brush-arbor land < arbor land. Artificial forest lands improved soil nitrogen more than honeysuckle land. The nitrogen stocks also differed among aggregate-size classes, with the 5-10 mm and 2-5 mm classes holding the most, indicating that soil nutrients were stored mainly in large aggregates; large aggregates are therefore significant for soil nutrient storage. Among the aggregate-size classes, the 0.25-1 mm class contributed the most to net soil nitrogen mineralization, followed by the > 5 mm and 2-5 mm classes, with the remaining classes contributing least. With positive vegetation succession, the weight percentage of the > 5 mm aggregate-size classes increased, as did the nitrogen storage of macro-aggregates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hongyi; Sivapalan, Murugesu; Tian, Fuqiang
This paper presents the development and implementation of a distributed model of coupled water-nutrient processes, based on the representative elementary watershed (REW) approach, applied to the Upper Sangamon River Basin, a large, tile-drained agricultural basin located in central Illinois in the Midwestern USA. Comparison of model predictions with the observed hydrological and biogeochemical data, as well as regional estimates from literature studies, shows that the model is capable of capturing the dynamics of the water, sediment and nutrient cycles reasonably well. The model is then used as a tool to gain insights into the physical and chemical processes underlying the inter- and intra-annual variability of water and nutrient balances. Model predictions show that about 80% of annual runoff is contributed by tile drainage, while the remainder comes from surface runoff (mainly saturation excess flow) and subsurface runoff. It is also found that, at the annual scale, nitrogen storage in the soil is depleted during wet years and replenished during dry years. This carryover of nitrogen storage from dry years to wet years is mainly caused by the lateral loading of nitrate. Phosphorus storage, on the other hand, is not much affected by wet/dry conditions, simply because its leaching is very minor compared to the other mechanisms that remove phosphorus from the basin, such as crop harvest. The analysis then turned to the movement of nitrate with runoff. Model results suggested that nitrate loading from the hillslope into the channel is preferentially carried by tile drainage. Once in the stream it is subject to in-stream denitrification, whose significant spatio-temporal variability can be related to the variation of hydrologic and hydraulic conditions across the river network.
41 CFR 109-28.000-51 - Storage guidelines.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false Storage guidelines. 109...-STORAGE AND DISTRIBUTION § 109-28.000-51 Storage guidelines. (a) Indoor storage areas should be arranged... capacities. (b) Storage yards for items not requiring covered protection shall be protected by locked fenced...
41 CFR 109-28.000-51 - Storage guidelines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Storage guidelines. 109...-STORAGE AND DISTRIBUTION § 109-28.000-51 Storage guidelines. (a) Indoor storage areas should be arranged... capacities. (b) Storage yards for items not requiring covered protection shall be protected by locked fenced...
41 CFR 109-28.000-51 - Storage guidelines.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false Storage guidelines. 109...-STORAGE AND DISTRIBUTION § 109-28.000-51 Storage guidelines. (a) Indoor storage areas should be arranged... capacities. (b) Storage yards for items not requiring covered protection shall be protected by locked fenced...
41 CFR 109-28.000-51 - Storage guidelines.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false Storage guidelines. 109...-STORAGE AND DISTRIBUTION § 109-28.000-51 Storage guidelines. (a) Indoor storage areas should be arranged... capacities. (b) Storage yards for items not requiring covered protection shall be protected by locked fenced...
41 CFR 109-28.000-51 - Storage guidelines.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false Storage guidelines. 109...-STORAGE AND DISTRIBUTION § 109-28.000-51 Storage guidelines. (a) Indoor storage areas should be arranged... capacities. (b) Storage yards for items not requiring covered protection shall be protected by locked fenced...
Integration of Decentralized Thermal Storages Within District Heating (DH) Networks
NASA Astrophysics Data System (ADS)
Schuchardt, Georg K.
2016-12-01
Thermal storages and thermal accumulators are important components within District Heating (DH) systems, adding flexibility and offering additional business opportunities for these systems. Furthermore, these components have a major impact on the energy and exergy efficiency as well as the heat losses of the heat distribution system. In particular, integrating thermal storages within ill-conditioned parts of the overall DH system enhances the efficiency of the heat distribution. Using an illustrative, simplified example of a DH system, the interactions between different heat storage concepts (centralized and decentralized) and the heat losses and energy and exergy efficiencies are examined by considering the thermal state of the heat distribution network.
Li, Xiaoying; Voss, Paul L; Chen, Jun; Sharping, Jay E; Kumar, Prem
2005-05-15
We demonstrate storage of polarization-entangled photons for 125 μs, a record storage time to date, in a 25-km-long fiber spool, using a telecommunications-band fiber-based source of entanglement. With this source we also demonstrate distribution of polarization entanglement over 50 km by separating the two photons of an entangled pair and transmitting them individually over separate 25-km fibers. The measured two-photon fringe visibilities were 82% in the storage experiment and 86% in the distribution experiment. Preservation of polarization entanglement over such long-distance transmission demonstrates the viability of all-fiber sources for use in quantum memories and quantum logic gates.
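The 125 μs figure is consistent with simple propagation arithmetic: light traversing 25 km of silica fiber at c divided by the group index, taking a typical group index of about 1.47 (an assumed textbook value, not one from the paper), is delayed by roughly 122 μs.

```python
C = 299_792_458          # speed of light in vacuum, m/s
N_GROUP = 1.468          # typical group index of silica telecom fiber (assumed)
LENGTH_M = 25_000        # fiber spool length from the abstract

delay_s = LENGTH_M * N_GROUP / C
print(f"storage time ~ {delay_s * 1e6:.0f} microseconds")   # ~122 us
```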
NASA Technical Reports Server (NTRS)
Andrews, J.
1977-01-01
An optimal decision model of crop production, trade, and storage was developed for use in estimating the economic consequences of improved forecasts and estimates of worldwide crop production. The model extends earlier distribution benefits models to include production effects as well. Application to improved information systems meeting the goals set in the large area crop inventory experiment (LACIE) indicates annual benefits to the United States of $200 to $250 million for wheat, $50 to $100 million for corn, and $6 to $11 million for soybeans, using conservative assumptions on expected LANDSAT system performance.
Wynden, Rob; Anderson, Nick; Casale, Marco; Lakshminarayanan, Prakash; Anderson, Kent; Prosser, Justin; Errecart, Larry; Livshits, Alice; Thimman, Tim; Weiner, Mark
2011-01-01
Within the CTSA (Clinical Translational Sciences Awards) program, academic medical centers are tasked with the storage of clinical formulary data within an Integrated Data Repository (IDR) and the subsequent exposure of that data over grid computing environments for hypothesis generation and cohort selection. Formulary data collected over long periods of time across multiple institutions requires normalization of terms before those data sets can be aggregated and compared. This paper sets forth a solution to the challenge of generating derived aggregated normalized views from large, distributed data sets of clinical formulary data intended for re-use within clinical translational research.
Peng, Jing; Dan, Li; Huang, Mei
2014-01-01
Global and regional land carbon storage has been significantly affected by increasing atmospheric CO2 concentration and climate change. Based on fully coupled climate-carbon-cycle simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5), we investigate the sensitivities of land carbon storage to rising atmospheric CO2 concentration and climate change over the world and 21 regions during the 130 years. Overall, the simulations suggest consistently positive spatial effects of increasing CO2 concentrations on land carbon storage, with a multi-model averaged value of 1.04 PgC per ppm. The stronger positive values are mainly located in the broad areas of temperate and tropical forest, especially in the Amazon basin and western Africa. However, the sensitivity of land carbon storage to climate change shows large spatial heterogeneity. Climate change causes a decrease in land carbon storage in most of the tropics and the Southern Hemisphere. In these regions, a decrease in soil moisture (MRSO) and enhanced drought contribute somewhat to this decrease, accompanied by rising temperature. Conversely, an increase in land carbon storage is observed in high-latitude and high-altitude regions (e.g., northern Asia and Tibet). The model simulations also suggest that the global negative impacts of climate change on land carbon storage are predominantly attributable to the decrease in land carbon storage in the tropics. Although current warming can lead to an increase in land storage at high latitudes of the Northern Hemisphere due to elevated vegetation growth, a risk of exacerbated future climate change may be induced by the release of carbon from the tropics. PMID:24748331
Shallow aquifer storage and recovery (SASR): Initial findings from the Willamette Basin, Oregon
NASA Astrophysics Data System (ADS)
Neumann, P.; Haggerty, R.
2012-12-01
A novel mode of shallow aquifer management could increase the volumetric potential and distribution of groundwater storage. We refer to this mode as shallow aquifer storage and recovery (SASR) and gauge its potential as a freshwater storage tool. By this mode, water is stored in hydraulically connected aquifers with minimal impact to surface water resources. Basin-scale numerical modeling provides a linkage between storage efficiency and hydrogeological parameters, which in turn guides rulemaking for how and where water can be stored. Increased understanding of regional groundwater-surface water interactions is vital to effective SASR implementation. In this study we (1) use a calibrated model of the central Willamette Basin (CWB), Oregon to quantify SASR storage efficiency at 30 locations; (2) estimate SASR volumetric storage potential throughout the CWB based on these results and pertinent hydrogeological parameters; and (3) introduce a methodology for management of SASR by such parameters. Of 3 shallow, sedimentary aquifers in the CWB, we find the moderately conductive, semi-confined, middle sedimentary unit (MSU) to be most efficient for SASR. We estimate that users overlying 80% of the area in this aquifer could store injected water with greater than 80% efficiency, and find efficiencies of up to 95%. As a function of local production well yields, we estimate a maximum annual volumetric storage potential of 30 million m3 using SASR in the MSU. This volume constitutes roughly 9% of the current estimated summer pumpage in the Willamette basin at large. The dimensionless quantity lag #—calculated using modeled specific capacity, distance to nearest in-layer stream boundary, and injection duration—exhibits relatively high correlation to SASR storage efficiency at potential locations in the CWB. This correlation suggests that basic field measurements could guide SASR as an efficient shallow aquifer storage tool.
Site specific comparison of H2, CH4 and compressed air energy storage in porous formations
NASA Astrophysics Data System (ADS)
Tilmann Pfeiffer, Wolf; Wang, Bo; Bauer, Sebastian
2016-04-01
The supply of energy from renewable sources like wind or solar power is subject to fluctuations determined by climatic and weather conditions, and shortage periods can be expected on the order of days to weeks. Energy storage is thus required if renewable energy dominates the total energy production and has to compensate for the shortages. Porous formations in the subsurface could provide large storage capacities for various energy carriers, such as hydrogen (H2), synthetic methane (CH4) or compressed air (CAES). All three energy storage options have similar requirements regarding the storage site characteristics and consequently compete for suitable subsurface structures. The aim of this work is to compare the individual storage methods for an individual storage site regarding the storage capacity as well as the achievable delivery rates. This objective is pursued using numerical simulation of the individual storage operations. In a first step, a synthetic anticline with a radius of 4 km, a drop of 900 m and a formation thickness of 20 m is used to compare the individual storage methods. The storage operations are carried out using, depending on the energy carrier, 5 to 13 wells placed in the top of the structure. A homogeneous parameter distribution is assumed, with permeability, porosity and residual water saturation of 500 mD, 0.35 and 0.2, respectively. N2 is used as a cushion gas in the H2 storage simulations. In the case of compressed air energy storage, a high discharge rate of 400 kg/s, equating to 28.8 million m³/d at surface conditions, is required to produce 320 MW of power. Using 13 wells, the storage is capable of supplying the specified gas flow rate for a period of 31 hours. Two cases using 5 and 9 wells were simulated for both the H2 and the CH4 storage operations. The target withdrawal rates of 1 million sm³/d are maintained for the whole extraction period of one week in all simulations. However, the power output differs, with the 5-well scenario producing around 317 MW and 1208 MW and the 9-well scenario producing 539 MW and 2175 MW, for H2 and CH4, respectively. The difference in power output is due to the individual energy density of the carriers as well as working gas mixing with the cushion gas. To investigate the effects of a realistic geometry and parameter distribution on storage performance, a realistic field site from the North German Basin is used. Results show that the performance of all storage options is affected, as the delivery rate is reduced due to reservoir heterogeneity. Acknowledgments: This work is part of the ANGUS+ project (www.angusplus.de) and funded by the German Federal Ministry of Education and Research (BMBF) as part of the energy storage initiative "Energiespeicher".
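A back-of-envelope check (not the authors' reservoir simulation) of why CH4 delivers more power than H2 at the same volumetric withdrawal rate: thermal power scales with the lower heating value per standard cubic metre, P = Q * LHV / t. The LHV figures below are typical assumed values; mixing with cushion gas and conversion losses, which the study models, reduce the delivered figures further.

```python
LHV_MJ_PER_SM3 = {"H2": 10.8, "CH4": 35.8}     # typical lower heating values (assumed)
Q_SM3_PER_DAY = 1e6                            # withdrawal rate from the abstract
SECONDS_PER_DAY = 86_400

for gas, lhv in LHV_MJ_PER_SM3.items():
    power_mw = Q_SM3_PER_DAY * lhv / SECONDS_PER_DAY   # MJ/s == MW
    print(f"{gas}: ~{power_mw:.0f} MW thermal per 1e6 sm3/d")
# H2:  ~125 MW thermal; CH4: ~414 MW thermal -- a ratio of roughly 3.3,
# in line with the CH4/H2 power gap the simulations report.
```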
NASA Astrophysics Data System (ADS)
Ming-Huang Chiang, David; Lin, Chia-Ping; Chen, Mu-Chen
2011-05-01
Among distribution centre operations, order picking has been reported to be the most labour-intensive activity. Sophisticated storage assignment policies adopted to reduce the travel distance of order picking have been explored in the literature. Unfortunately, previous research has been devoted to locating entire products from scratch. Instead, this study proposes an adaptive approach, a Data Mining-based Storage Assignment approach (DMSA), to find the optimal storage assignment for newly delivered products that need to be put away when there is vacant shelf space in a distribution centre. In the DMSA, a new association index (AIX) is developed to evaluate the fitness between the put-away products and the unassigned storage locations by applying association rule mining. With the AIX, the storage location assignment problem (SLAP) can be formulated and solved as a binary integer program. To evaluate the performance of the DMSA, a real-world order database of a distribution centre is obtained and used to compare the results from the DMSA with a random assignment approach. It turns out that the DMSA outperforms random assignment as the number of put-away products and the proportion of put-away products with high turnover rates increase.
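The assignment step can be sketched with a hypothetical association index and a greedy solution of the resulting small assignment problem. The AIX stand-in below (co-order counts with a slot's neighbouring product, discounted by travel distance) and all the data are invented, since the paper derives its index from association rule mining.

```python
from itertools import product as cross

put_away = ["P1", "P2"]                          # products to be put away
vacant = {"S1": {"neighbour": "A", "dist": 4},   # vacant slots and context
          "S2": {"neighbour": "B", "dist": 1}}
co_orders = {("P1", "A"): 12, ("P1", "B"): 2,    # how often each pair is
             ("P2", "A"): 3,  ("P2", "B"): 9}    # ordered together (assumed)

def aix(prod, slot):
    """Hypothetical association index: affinity discounted by distance."""
    info = vacant[slot]
    return co_orders[(prod, info["neighbour"])] / (1 + info["dist"])

# Greedy stand-in for the binary integer program: take best pairs first.
scores = sorted(((aix(p, s), p, s) for p, s in cross(put_away, vacant)),
                reverse=True)
used_p, used_s, plan = set(), set(), {}
for score, p, s in scores:
    if p not in used_p and s not in used_s:
        plan[p] = s
        used_p.add(p)
        used_s.add(s)
print(plan)   # {'P2': 'S2', 'P1': 'S1'}
```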
Notes on a storage manager for the Clouds kernel
NASA Technical Reports Server (NTRS)
Pitts, David V.; Spafford, Eugene H.
1986-01-01
The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment within which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-07-01
The report is divided into the following sections: (1) Introduction; (2) Conclusions and Recommendations; (3) Existing Conditions and Facilities for a Fuel Distribution Center; (4) Pacific Ocean Regional Tuna Fisheries and Resources; (5) Fishing Effort in the FSMEEZ 1992-1994; (6) Current Transshipping Operations in the Western Pacific Ocean; (7) Current and Probable Bunkering Practices of United States, Japanese, Korean, and Taiwanese Offshore-Based Vessels Operating in FSM and Adjacent Waters; (8) Shore-Based Fish-Handling/Processing; (9) Fuels Forecast; (10) Fuel Supply, Storage and Distribution; (11) Cost Estimates; (12) Economic Evaluation of Fuel Supply, Storage and Distribution.
Low-cost high performance distributed data storage for multi-channel observations
NASA Astrophysics Data System (ADS)
Liu, Ying-bo; Wang, Feng; Deng, Hui; Ji, Kai-fan; Dai, Wei; Wei, Shou-lin; Liang, Bo; Zhang, Xiao-li
2015-10-01
The New Vacuum Solar Telescope (NVST) is a 1-m solar telescope that aims to observe the fine structures in both the photosphere and the chromosphere of the Sun. The observational data acquired simultaneously from one channel for the chromosphere and two channels for the photosphere pose great challenges for the data storage of NVST. The multi-channel instruments of NVST, including scientific cameras and multi-band spectrometers, generate at least 3 terabytes of data per day and require high access performance while storing massive short-exposure images. It is worth studying and implementing a storage system for NVST that balances data availability, access performance and the cost of development. In this paper, we build a distributed data storage system (DDSS) for NVST and evaluate in depth the availability of real-time data storage in a distributed computing environment. The experimental results show that two factors, the number of concurrent reads/writes and the file size, are critically important for improving the performance of data access in a distributed environment. Based on these two factors, three strategies for storing FITS files are presented and implemented to ensure the access performance of the DDSS under conditions of simultaneous multi-host reads and writes. Real applications of the DDSS prove that the system is capable of meeting the requirements of NVST real-time, high-performance observational data storage. Our study of the DDSS is the first attempt for modern astronomical telescope systems to store real-time observational data on a low-cost distributed system. The research results and corresponding techniques of the DDSS provide a new option for designing real-time massive astronomical data storage systems and will be a reference for future astronomical data storage.
Johnson, Timothy C.; Versteeg, Roelof J.; Ward, Andy; Day-Lewis, Frederick D.; Revil, André
2010-01-01
Electrical geophysical methods have found wide use in the growing discipline of hydrogeophysics for characterizing the electrical properties of the subsurface and for monitoring subsurface processes in terms of the spatiotemporal changes in subsurface conductivity, chargeability, and source currents they govern. Presently, multichannel and multielectrode data collection systems can collect large data sets in relatively short periods of time. Practitioners, however, often are unable to fully utilize these large data sets and the information they contain because of standard desktop-computer processing limitations. These limitations can be addressed by utilizing the storage and processing capabilities of parallel computing environments. We have developed a parallel distributed-memory forward and inverse modeling algorithm for analyzing resistivity and time-domain induced polarization (IP) data. The primary components of the parallel computations include distributed computation of the pole solutions in forward mode, distributed storage and computation of the Jacobian matrix in inverse mode, and parallel execution of the inverse equation solver. We have tested the corresponding parallel code in three efforts: (1) resistivity characterization of the Hanford 300 Area Integrated Field Research Challenge site in Hanford, Washington, U.S.A., (2) resistivity characterization of a volcanic island in the southern Tyrrhenian Sea in Italy, and (3) resistivity and IP monitoring of biostimulation at a Superfund site in Brandywine, Maryland, U.S.A. Inverse analysis of each of these data sets would be limited or impossible in a standard serial computing environment, which underscores the need for parallel high-performance computing to fully utilize the potential of electrical geophysical methods in hydrogeophysical applications.
Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.
Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen
2018-07-01
Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
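The core storage-side idea can be shown in a toy example: re-quantizing a DCT coefficient with a coarser step collapses several fine bins into one coarse bin, and reverse mapping must choose among those candidate fine bins, which is where the paper's priors come in. The step sizes and coefficient value below are illustrative.

```python
Q_FINE, Q_COARSE = 4, 12          # fine (original) and coarse (storage) steps

def quantize(value, step):
    return round(value / step)     # bin index

def candidates(coarse_bin):
    """All fine bins whose dequantized value falls inside the coarse bin."""
    lo = (coarse_bin - 0.5) * Q_COARSE
    hi = (coarse_bin + 0.5) * Q_COARSE
    out, b = [], quantize(lo, Q_FINE)
    while b * Q_FINE < hi:
        if lo <= b * Q_FINE < hi:
            out.append(b)
        b += 1
    return out

coeff = 29                                           # example DCT coefficient
fine_bin = quantize(coeff, Q_FINE)                   # 7 -> dequantizes to 28
coarse_bin = quantize(fine_bin * Q_FINE, Q_COARSE)   # what gets stored
print("stored coarse bin:", coarse_bin)              # 2
print("fine-bin candidates to disambiguate:", candidates(coarse_bin))  # [5, 6, 7]
```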
Federated data storage and management infrastructure
NASA Astrophysics Data System (ADS)
Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.
2016-10-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs growing by orders of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications, and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for high energy and nuclear physics, as well as for other data-intensive science applications, such as bioinformatics.
Large-scale thermal storage systems. Possibilities of operation and state of the art
NASA Astrophysics Data System (ADS)
Jank, R.
1983-05-01
The state of the art of large-scale thermal energy storage concepts is reviewed. For earth-pit storage, attention must focus on the materials question. The use of container storage in conventional long-distance thermal networks should be stimulated. Aquifer storage should be tested in a pilot plant to gain experience with the use of natural aquifers.
Distributed Coordination of Energy Storage with Distributed Generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Stoorvogel, Antonie A.
2016-07-18
With a growing emphasis on energy efficiency and system flexibility, a great effort has been made recently in developing distributed energy resources (DER), including distributed generators and energy storage systems. This paper first formulates an optimal coordination problem considering constraints at both system and device levels, including power balance constraint, generator output limits, storage energy and power capacity and charging/discharging efficiencies. An algorithm is then proposed to dynamically and automatically coordinate DERs in a distributed manner. With the proposed algorithm, the agent at each DER only maintains a local incremental cost and updates it through information exchange with a few neighbors, without relying on any central decision maker. Simulation results are used to illustrate and validate the proposed algorithm.
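A hedged sketch of this neighbor-exchange scheme is shown below. The network weights, gain, quadratic cost coefficients, and output limits are all invented for illustration, and the update rule is a standard consensus-plus-mismatch-tracking iteration, not necessarily the paper's exact algorithm.

```python
# Illustrative sketch (assumed data, standard consensus scheme): each agent
# keeps a local incremental cost lam_i, averages it with neighbors, and
# corrects it with a local estimate y_i of the global power mismatch.
import numpy as np

W = np.array([[0.50, 0.25, 0.00, 0.25],   # doubly stochastic neighbor weights
              [0.25, 0.50, 0.25, 0.00],   # (ring of 4 agents)
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
a = np.array([0.10, 0.12, 0.08, 0.15])    # cost C_i(p) = 0.5 * a_i * p**2
demand, n, eps = 100.0, 4, 0.01

lam = np.zeros(n)                         # local incremental costs
p = np.zeros(n)
y = demand / n - p                        # local mismatch estimates

for _ in range(5000):                     # converges for small enough eps
    lam = W @ lam + eps * y               # neighbor exchange + correction
    p_new = np.clip(lam / a, 0.0, 50.0)   # local optimum within output limits
    y = W @ y - (p_new - p)               # track the shrinking global mismatch
    p = p_new

print(lam)          # -> a common incremental cost across agents
print(p, p.sum())   # -> dispatch whose total meets the demand
```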
Analysis of Large- Capacity Water Heaters in Electric Thermal Storage Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooke, Alan L.; Anderson, David M.; Winiarski, David W.
2015-03-17
This report documents a national impact analysis of large tank heat pump water heaters (HPWH) in electric thermal storage (ETS) programs and conveys the findings related to concerns raised by utilities regarding the ability of large-tank heat pump water heaters to provide electric thermal storage services.
A Grid Connected Photovoltaic Inverter with Battery-Supercapacitor Hybrid Energy Storage.
Miñambres-Marcos, Víctor Manuel; Guerrero-Martínez, Miguel Ángel; Barrero-González, Fermín; Milanés-Montero, María Isabel
2017-08-11
The power generation from renewable power sources is variable in nature, and may contain unacceptable fluctuations, which can be alleviated by using energy storage systems. However, the cost of batteries and their limited lifetime are serious disadvantages. To solve these problems, an improvement consisting in the collaborative association of batteries and supercapacitors has been studied. Nevertheless, these studies don't address in detail the case of residential and large-scale photovoltaic systems. In this paper, a selected combined topology and a new control scheme are proposed to control the power sharing between batteries and supercapacitors. Also, a method for sizing the energy storage system together with the hybrid distribution based on the photovoltaic power curves is introduced. This innovative contribution not only reduces the stress levels on the battery, and hence increases its life span, but also provides constant power injection to the grid during a defined time interval. The proposed scheme is validated through detailed simulation and experimental tests.
A multiplexed light-matter interface for fibre-based quantum networks
Saglamyurek, Erhan; Grimau Puigibert, Marcelli; Zhou, Qiang; Giner, Lambert; Marsili, Francesco; Verma, Varun B.; Woo Nam, Sae; Oesterling, Lee; Nippa, David; Oblak, Daniel; Tittel, Wolfgang
2016-01-01
Processing and distributing quantum information using photons through fibre-optic or free-space links are essential for building future quantum networks. The scalability needed for such networks can be achieved by employing photonic quantum states that are multiplexed into time and/or frequency, and light-matter interfaces that are able to store and process such states with large time-bandwidth product and multimode capacities. Despite important progress in developing such devices, the demonstration of these capabilities using non-classical light remains challenging. Here, employing the atomic frequency comb quantum memory protocol in a cryogenically cooled erbium-doped optical fibre, we report the quantum storage of heralded single photons at a telecom-wavelength (1.53 μm) with a time-bandwidth product approaching 800. Furthermore, we demonstrate frequency-multimode storage and memory-based spectral-temporal photon manipulation. Notably, our demonstrations rely on fully integrated quantum technologies operating at telecommunication wavelengths. With improved storage efficiency, our light-matter interface may become a useful tool in future quantum networks. PMID:27046076
Integrated heat exchanger design for a cryogenic storage tank
NASA Astrophysics Data System (ADS)
Fesmire, J. E.; Tomsik, T. M.; Bonner, T.; Oliveira, J. M.; Conyers, H. J.; Johnson, W. L.; Notardonato, W. U.
2014-01-01
Field demonstrations of liquid hydrogen technology will be undertaken for the proliferation of advanced methods and applications in the use of cryofuels. Advancements in the use of cryofuels for transportation on Earth, from Earth, or in space are envisioned for automobiles, aircraft, rockets, and spacecraft. These advancements rely on practical ways of storage, transfer, and handling of liquid hydrogen. Focusing on storage, an integrated heat exchanger system has been designed for incorporation with an existing storage tank and a reverse Brayton cycle helium refrigerator of capacity 850 watts at 20 K. The storage tank is a 125,000-liter capacity horizontal cylindrical tank, with vacuum jacket and multilayer insulation, and a small 0.6-meter diameter manway opening. Addressed are the specific design challenges associated with the small opening, complete modularity, pressure systems re-certification for lower temperature and pressure service associated with hydrogen densification, and a large 8:1 length-to-diameter ratio for distribution of the cryogenic refrigeration. The approach, problem solving, and system design and analysis for the integrated heat exchanger are detailed and discussed. Implications for future space launch facilities are also identified. The objective of the field demonstration will be to test various zero-loss and densified cryofuel handling concepts for future transportation applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stetiu, C.; Feustel, H.E.
1998-07-01
As thermal storage media, phase-change materials (PCMs) such as paraffins and eutectic salts offer an order-of-magnitude increase in thermal storage capacity, and their discharge is almost isothermal. By embedding PCMs in gypsum board, plaster, or other wall-covering materials, the building structure acquires latent storage properties. Structural elements containing PCMs can store large amounts of energy while maintaining the indoor temperature within a relatively narrow range. As heat storage takes place inside the building where the loads occur, rather than at a central exterior location, the internal loads are removed without the need for additional transport energy. Distributed latent storage can thus be used to reduce the peak power demand of a building, downsize the cooling system, and/or switch to low-energy cooling sources. The authors used RADCOOL, a thermal building simulation program based on the finite-difference approach, to numerically evaluate the thermal performance of PCM wallboard in office buildings. The simulations indicate that PCM wallboard coupled with mechanical night ventilation offers the opportunity for system downsizing in climates where the outside air temperature drops below 18 C at night. In climates where the outside air temperature remains above 19 C at night, the use of PCM wallboard should be coupled with discharge mechanisms other than mechanical night ventilation with outside air.
Graphene materials having randomly distributed two-dimensional structural defects
Kung, Harold H; Zhao, Xin; Hayner, Cary M; Kung, Mayfair C
2013-10-08
Graphene-based storage materials for high-power battery applications are provided. The storage materials are composed of vertical stacks of graphene sheets and have reduced resistance for Li ion transport. This reduced resistance is achieved by incorporating a random distribution of structural defects into the stacked graphene sheets, whereby the structural defects facilitate the diffusion of Li ions into the interior of the storage materials.
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Mengistu, Zelalem
2016-12-01
In this study, we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of catchment-scale storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the logs of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff; the parameters are hence estimated prior to model calibration against runoff. The new storage routine is implemented in the parameter-parsimonious distance distribution dynamics (DDD) model and has been tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures (DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage) were compared. Little loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average Nash-Sutcliffe efficiency of 0.73 was obtained using the new estimated storage routine, compared with 0.75 using the calibrated storage routine. The average Kling-Gupta efficiency was 0.80 and 0.81 for the new and old storage routines, respectively. Runoff recessions are modelled more realistically using the new approach, since the root-mean-square error between the means of observed and simulated recession characteristics was reduced by almost 50% using the new storage routine. The parameters of the proposed storage routine are found to be significantly correlated with catchment characteristics, which is potentially useful for predictions in ungauged basins.
NASA Technical Reports Server (NTRS)
Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.
1996-01-01
The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility such that the previous disadvantages of shared-disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies, whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across the processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of in the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
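The device-maintained lock idea can be sketched in a few lines. This is a minimal stand-in, not GFS code: a Python dict plus per-block mutexes plays the role of the shared device, and all names are invented for illustration.

```python
# Sketch of device-side block locks enabling atomic read-modify-write for
# multiple "cluster nodes". A dict + threading locks stand in for the device.
import threading

class SharedDevice:
    def __init__(self):
        self._blocks = {}                  # block_id -> bytes
        self._locks = {}                   # device-maintained per-block locks
        self._meta = threading.Lock()

    def _lock_for(self, block_id):
        with self._meta:
            return self._locks.setdefault(block_id, threading.Lock())

    def read_modify_write(self, block_id, update):
        """Atomically apply update(old_bytes) -> new_bytes to one block."""
        with self._lock_for(block_id):
            old = self._blocks.get(block_id, b"")
            self._blocks[block_id] = update(old)
            return self._blocks[block_id]

dev = SharedDevice()
# Two "nodes" appending concurrently still serialize per block:
t1 = threading.Thread(target=dev.read_modify_write, args=(7, lambda b: b + b"A"))
t2 = threading.Thread(target=dev.read_modify_write, args=(7, lambda b: b + b"B"))
t1.start(); t2.start(); t1.join(); t2.join()
print(dev._blocks[7])   # b"AB" or b"BA", never a lost update
```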
Multiobjective assessment of distributed energy storage location in electricity networks
NASA Astrophysics Data System (ADS)
Ribeiro Gonçalves, José António; Neves, Luís Pires; Martins, António Gomes
2017-07-01
This paper presents a methodology, based on a multiobjective optimisation approach using genetic algorithms, to provide a decision maker with information on the economic and technical impacts of possible management schemes for storage units when choosing the best locations for distributed storage devices. The methodology was applied to a case study, a known distribution network model in which the installation of distributed storage units was tested, using lithium-ion batteries. The obtained results show a significant influence of the charging/discharging profile of batteries on the choice of their best location, as well as the relevance that these choices may have for the different network management objectives, for example, for reducing network energy losses or minimising voltage deviations. Results also show that an energy-only service is difficult to make cost-effective with the tested systems, both because of capital cost and because of the efficiency of conversion.
41 CFR 101-28.203-1 - Government storage activity.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 41 Public Contracts and Property Management 2 2014-07-01 2012-07-01 true Government storage... Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 28-STORAGE AND DISTRIBUTION 28.2-Interagency Cross-Servicing in Storage Activities § 101-28.203-1 Government storage activity...
Optimal Operation of Energy Storage in Power Transmission and Distribution
NASA Astrophysics Data System (ADS)
Akhavan Hejazi, Seyed Hossein
In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distributions of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters; hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider uncertainty from various elements, such as solar photovoltaics, electric vehicle chargers, and residential baseloads, in the form of discrete probability functions. In the last part of this thesis, we address some other resources and concepts for enhancing the operation of power distribution and transmission systems. In particular, we propose a new framework to determine the best sites, sizes, and optimal payment incentives under special contracts for committed-type DG projects to offset distribution network investment costs. In this framework, the aim is to allocate DGs such that the profit gained by the distribution company is maximized while each DG unit's individual profit is also taken into account to assure that private DG investment remains economical.
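As one concrete reading of the non-parametric chance-constrained idea, the sketch below checks a voltage chance constraint against an empirical scenario quantile. The linear voltage sensitivity, limits, and synthetic net-load scenarios are assumptions for illustration, not the thesis's model.

```python
# Sketch: non-parametric chance constraint via an empirical quantile over
# scenarios, instead of assuming a distribution for nodal voltage.
import numpy as np

rng = np.random.default_rng(0)
n_scen = 5000
# Scenario net load at the storage bus (PV, EV charging, baseload), in kW:
net_load = 40 + 15 * rng.standard_normal(n_scen)

sens = 0.002              # assumed voltage sensitivity, p.u. per kW at the bus
v_nominal = 1.0
v_min, eps = 0.95, 0.05   # require P(v >= 0.95) >= 1 - eps

def chance_feasible(discharge_kw):
    v = v_nominal - sens * (net_load - discharge_kw)
    # empirical (1-eps) coverage: the eps-quantile of v must clear v_min
    return np.quantile(v, eps) >= v_min

# Smallest storage discharge (0..100 kW) satisfying the chance constraint:
for d in range(0, 101, 5):
    if chance_feasible(d):
        print("min feasible discharge ~", d, "kW")
        break
```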
Multi-Sensory Features for Personnel Detection at Border Crossings
2011-07-08
challenging problem. Video sensors consume high amounts of power and require a large volume for storage. Hence, it is preferable to use non-imaging sensors...temporal distribution of gait beats [5]. At border crossings, animals such as mules, horses, or donkeys are often known to carry loads. Animal hoof...field, passive ultrasonic, sonar, and both infrared and visible video sensors. Each sensor suite is placed along the path with a spacing of 40 to
A new parallel-vector finite element analysis software on distributed-memory computers
NASA Technical Reports Server (NTRS)
Qin, Jiangning; Nguyen, Duc T.
1993-01-01
A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme along with vector-unrolling techniques is used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
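For readers unfamiliar with skyline storage, the sketch below packs the upper-triangular profile of a small symmetric stiffness matrix into a single value array with column pointers. The matrix and access helper are illustrative assumptions, not MPFEA code.

```python
# Sketch of skyline (variable-band) storage: each column keeps only the
# entries from its first nonzero row down to the diagonal, packed into one
# array with column pointers.
import numpy as np

A = np.array([[4., 1., 0., 0.],
              [1., 5., 2., 0.],
              [0., 2., 6., 3.],
              [0., 0., 3., 7.]])

n = A.shape[0]
first = [int(np.nonzero(A[:, j])[0][0]) for j in range(n)]  # skyline profile
vals, colptr = [], [0]
for j in range(n):
    vals.extend(A[first[j]:j + 1, j])       # column j, rows first[j]..j
    colptr.append(len(vals))
vals = np.asarray(vals)

def get(i, j):
    """Entry A[i, j] recovered from the packed skyline arrays."""
    if i > j:
        i, j = j, i                          # symmetry: use upper triangle
    if i < first[j]:
        return 0.0                           # outside the skyline: known zero
    return vals[colptr[j] + (i - first[j])]

assert get(2, 1) == 2.0 and get(0, 3) == 0.0
```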
Influence of a dam on fine-sediment storage in a canyon river
Hazel, J.E.; Topping, D.J.; Schmidt, J.C.; Kaplinski, M.
2006-01-01
Glen Canyon Dam has caused a fundamental change in the distribution of fine sediment storage in the 99-km reach of the Colorado River in Marble Canyon, Grand Canyon National Park, Arizona. The two major storage sites for fine sediment (i.e., sand and finer material) in this canyon river are lateral recirculation eddies and the main-channel bed. We use a combination of methods, including direct measurement of sediment storage change, measurements of sediment flux, and comparison of the grain size of sediment found in different storage sites relative to the supply and that in transport, in order to evaluate the change in both the volume and location of sediment storage. The analysis shows that the bed of the main channel was an important storage environment for fine sediment in the predam era. In years of large seasonal accumulation, approximately 50% of the fine sediment supplied to the reach from upstream sources was stored on the main-channel bed. In contrast, sediment budgets constructed for two short-duration, high experimental releases from Glen Canyon Dam indicate that approximately 90% of the sediment discharge from the reach during each release was derived from eddy storage, rather than from sandy deposits on the main-channel bed. These results indicate that the majority of the fine sediment in Marble Canyon is now stored in eddies, even though they occupy a small percentage (~17%) of the total river area. Because of a 95% reduction in the supply of fine sediment to Marble Canyon, future high releases without significant input of tributary sediment will potentially erode sediment from long-term eddy storage, resulting in continued degradation in Marble Canyon. Copyright 2006 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Liu, L.; Xu, J. P.; Ji, F.; Chen, J. X.; Lai, P. T.
2012-07-01
Charge-trapping memory capacitor with nitrided gadolinium oxide (GdO) as charge storage layer (CSL) is fabricated, and the influence of post-deposition annealing in NH3 on its memory characteristics is investigated. Transmission electron microscopy, x-ray photoelectron spectroscopy, and x-ray diffraction are used to analyze the cross-section and interface quality, composition, and crystallinity of the stack gate dielectric, respectively. It is found that nitrogen incorporation can improve the memory window and achieve a good trade-off among the memory properties due to NH3-annealing-induced reasonable distribution profile of a large quantity of deep-level bulk traps created in the nitrided GdO film and reduction of shallow traps near the CSL/SiO2 interface.
41 CFR 101-28.203-1 - Government storage activity.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 2 2013-07-01 2012-07-01 true Government storage... DISTRIBUTION 28.2-Interagency Cross-Servicing in Storage Activities § 101-28.203-1 Government storage activity. A Government activity or facility utilized for the receipt, storage, and issue of supplies...
41 CFR 101-28.203-1 - Government storage activity.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 41 Public Contracts and Property Management 2 2012-07-01 2012-07-01 false Government storage... DISTRIBUTION 28.2-Interagency Cross-Servicing in Storage Activities § 101-28.203-1 Government storage activity. A Government activity or facility utilized for the receipt, storage, and issue of supplies...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Storage. 820.150 Section 820.150 Food and Drugs... QUALITY SYSTEM REGULATION Handling, Storage, Distribution, and Installation § 820.150 Storage. (a) Each manufacturer shall establish and maintain procedures for the control of storage areas and stock rooms for...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Storage. 820.150 Section 820.150 Food and Drugs... QUALITY SYSTEM REGULATION Handling, Storage, Distribution, and Installation § 820.150 Storage. (a) Each manufacturer shall establish and maintain procedures for the control of storage areas and stock rooms for...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Storage. 820.150 Section 820.150 Food and Drugs... QUALITY SYSTEM REGULATION Handling, Storage, Distribution, and Installation § 820.150 Storage. (a) Each manufacturer shall establish and maintain procedures for the control of storage areas and stock rooms for...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Storage. 820.150 Section 820.150 Food and Drugs... QUALITY SYSTEM REGULATION Handling, Storage, Distribution, and Installation § 820.150 Storage. (a) Each manufacturer shall establish and maintain procedures for the control of storage areas and stock rooms for...
Hu, Kexiang; Awange, Joseph L; Khandu; Forootan, Ehsan; Goncalves, Rodrigo Mikosz; Fleming, Kevin
2017-12-01
For Brazil, a country frequented by droughts and whose rural inhabitants largely depend on groundwater, reliance on isotopes for groundwater monitoring, though accurate, is expensive and limited in spatial coverage. We exploit total water storage (TWS) derived from the Gravity Recovery and Climate Experiment (GRACE) satellites to analyse spatio-temporal groundwater changes in relation to geological characteristics. Large-scale groundwater changes are estimated using GRACE-derived TWS and altimetry observations in addition to GLDAS and WGHM model outputs. Additionally, TRMM precipitation data are used to infer impacts of climate variability on groundwater fluctuations. The results indicate that climate variability mainly controls groundwater change trends while geological properties control change rates, spatial distribution, and storage capacity. Granular rocks in the Amazon and Guarani aquifers are found to support larger storage capacity, higher permeability (>10^-4 m/s) and faster response to rainfall (1 to 3 months' lag) compared to fractured rocks (permeability <10^-7 m/s and lags >3 months) found only in the Bambui aquifer. Groundwater in the Amazon region is found to rely not only on precipitation but also on inflow from other regions. Areas beyond the northern and southern Amazon basin depict a 'dam-like' pattern, with high inflow and slow outflow rates (recharge slope >0.75, discharge slope <0.45). This is due to two impermeable rock-layer-like 'walls' (permeability <10^-8 m/s) along the northern and southern Alter do Chão aquifer that help retain groundwater. The largest groundwater storage capacity in Brazil is in the Amazon aquifer (with annual amplitudes of >30 cm). Amazon groundwater declined between 2002 and 2008 due to below-normal precipitation (wet seasons lasted for about 36 to 47% of the time). The Guarani aquifer and adjacent coastline areas rank second in terms of storage capacity, while the northeast and southeast coastal regions have the smallest storage capacity due to lack of rainfall (annual average rainfall <10 cm). Copyright © 2017 Elsevier B.V. All rights reserved.
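The GRACE-based separation of groundwater from total water storage is commonly computed as a simple residual. The sketch below illustrates that arithmetic on synthetic monthly anomalies; the series and the GLDAS-like components are invented, not the study's data.

```python
# Sketch of the common residual approach: groundwater storage anomaly =
# GRACE TWS anomaly - modelled surface stores (soil moisture, canopy, snow).
# All series are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(24)                                        # 24 monthly samples
tws_grace = 12 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 24)  # cm EWH
soil_moist = 7 * np.sin(2 * np.pi * t / 12 - 0.3)        # GLDAS-like, cm
canopy_sw = 0.5 * np.sin(2 * np.pi * t / 12)             # cm
snow_we = np.zeros_like(t, dtype=float)                  # negligible in tropics

gw_anom = tws_grace - (soil_moist + canopy_sw + snow_we)

# A linear trend then summarizes groundwater decline/recovery (cm/yr):
slope = np.polyfit(t / 12.0, gw_anom, 1)[0]
print(f"groundwater trend ~ {slope:+.2f} cm equivalent water height per year")
```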
The use of nested chevron rails in a distributed energy store railgun
NASA Astrophysics Data System (ADS)
Marshall, R. A.
1984-03-01
It is pointed out that the large amounts of energy required by electromagnetic launchers will necessitate that energy stores be distributed along their length. The nested chevron rail construction will make it possible for a railgun launcher to be produced in which most of the switching requirements for the launcher/energy store system will be met automatically. Each nested chevron-shaped conductor will be electrically insulated from its neighbors, and each opposing chevron pair (one on each rail) will be connected to the terminals of one energy store (Marshall, 1982). It is explained that as the projectile moves down the railgun the chevrons and associated energy stores at first are unaffected by the approach. At this time the inductors can be charged, and any other preliminary operation can be performed. When the armature comes into contact with the Nth chevron, charge begins to flow from the Nth energy storage system; it flows through one chevron, into the armature, out of the armature into the other chevron, and from that chevron back to the energy storage system.
NASA Astrophysics Data System (ADS)
Werder, M. A.; Hewitt, I. J.; Schoof, C.; Flowers, G. E.
2012-04-01
Basal boundary conditions are one of the least constrained components of today's ice sheet models. To get at these one needs to know the distributed basal water pressure. We present a new glacier drainage system model to contribute to this missing piece of the puzzle. This two-dimensional mathematical/numerical model combines distributed and channelised drainage at the ice-bed interface coupled to a water storage component. Notably, the model determines the location of the channels as part of the solution. This is achieved by allowing channels (modelled as R-channels) to form on any of the edges of the unstructured triangular grid used to discretise the model. The distributed system is represented by a water sheet, which is a continuum description of a linked-cavity system and exchanges water with the channels along their length. Water storage is parameterised as a function of the subglacial water pressure, which can be interpreted as storage in an englacial aquifer or due to elastic processes. The parabolic equation that determines the water pressure is solved using finite elements; the time evolution of the water sheet thickness and channel diameter are governed by local differential equations that are integrated using explicit methods. To explore the model's properties, we apply it to synthetic ice sheet catchments with areas up to 3000 km2. We present steady state drainage system configurations and evaluate their channel-network properties (fractal dimensions, channel spacing). We find that an arborescent channel network forms whose density depends on the water sheet conductivity relative to water input. As a further experiment, we force the model with a seasonally and diurnally varying melt water input to investigate how the modelled drainage system evolves on these time scales: a channelised system grows up-glacier as meltwater is delivered to the bed in spring and collapses in autumn. Water pressure is highest just before the formation of channels and then drops. Conversely, the diurnal variations in discharge affect the drainage system morphology only slightly. Instead they lead to large water pressure variations which lag meltwater input and coincide with changes in the volume of stored water. By incorporating an evolving R-channel network within a continuum model of distributed water drainage and storage, this 2-D model succeeds in qualitatively reproducing many of the observed and postulated features of the glacier drainage system.
Diffraction-limited storage-ring vacuum technology
Al-Dmour, Eshraq; Ahlback, Jonny; Einfeld, Dieter; Tavares, Pedro Fernandes; Grabski, Marek
2014-01-01
Some of the characteristics of recent ultralow-emittance storage-ring designs and possibly future diffraction-limited storage rings are a compact lattice combined with small magnet apertures. Such requirements present a challenge for the design and performance of the vacuum system. The vacuum system should provide the required vacuum pressure for machine operation and be able to handle the heat load from synchrotron radiation. Small magnet apertures result in the conductance of the chamber being low, and lumped pumps are ineffective. One way to provide the required vacuum level is by distributed pumping, which can be realised by the use of a non-evaporable getter (NEG) coating of the chamber walls. It may not be possible to use crotch absorbers to absorb the heat from the synchrotron radiation because an antechamber is difficult to realise with such a compact lattice. To solve this, the chamber walls can work as distributed absorbers if they are made of a material with good thermal conductivity, and distributed cooling is used at the location where the synchrotron radiation hits the wall. The vacuum system of the 3 GeV storage ring of MAX IV is used as an example of possible solutions for vacuum technologies for diffraction-limited storage rings. PMID:25177979
Thermal Analysis for Ion-Exchange Column System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Si Y.; King, William D.
2012-12-20
Models have been developed to simulate the thermal characteristics of crystalline silicotitanate ion exchange media fully loaded with radioactive cesium either in a column configuration or distributed within a waste storage tank. This work was conducted to support the design and operation of a waste treatment process focused on treating dissolved, high-sodium salt waste solutions for the removal of specific radionuclides. The ion exchange column will be installed inside a high level waste storage tank at the Savannah River Site. After cesium loading, the ion exchange media may be transferred to the waste tank floor for interim storage. Models were used to predict temperature profiles in these areas of the system where the cesium-loaded media is expected to lead to localized regions of elevated temperature due to radiolytic decay. Normal operating conditions and accident scenarios (including loss of solution flow, inadvertent drainage, and loss of active cooling) were evaluated for the ion exchange column using bounding conditions to establish the design safety basis. The modeling results demonstrate that the baseline design using one central and four outer cooling tubes provides a highly efficient cooling mechanism for reducing the maximum column temperature. In-tank modeling results revealed that an idealized hemispherical mound shape leads to the highest tank floor temperatures. In contrast, even large volumes of CST distributed in a flat layer with a cylindrical shape do not result in significant floor heating.
NASA Astrophysics Data System (ADS)
Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken
2018-04-01
A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting codes (ECC) degrade storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strengths of SCM and NAND flash are evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with a small number of correctable bits is recommended for the SCM in hybrid storage with large SCM capacity, because the SCM is accessed frequently. In contrast, a strong, long-latency LDPC ECC can be applied to the NAND flash in hybrid storage with large SCM capacity, because the large-capacity SCM improves the storage performance.
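The trade-off the study evaluates can be caricatured with a one-line latency model. The device and decoder latencies and hit rates below are assumptions, not the paper's measurements; the point is only why fast, weak ECC on the frequently accessed SCM dominates average read latency when the SCM hit rate is high.

```python
# Back-of-envelope model (assumed numbers): average read latency of a hybrid
# SCM + NAND store under different ECC choices for the SCM tier.
scm_read_us, nand_read_us = 1.0, 50.0      # assumed device read latencies
bch_decode_us, ldpc_decode_us = 0.5, 20.0  # assumed ECC decode latencies

def avg_read_latency(scm_hit_rate, scm_ecc_us, nand_ecc_us):
    scm = scm_read_us + scm_ecc_us
    nand = nand_read_us + nand_ecc_us
    return scm_hit_rate * scm + (1 - scm_hit_rate) * nand

# Large SCM capacity -> high hit rate -> the SCM's ECC latency dominates:
for hit in (0.5, 0.9, 0.99):
    fast = avg_read_latency(hit, bch_decode_us, ldpc_decode_us)
    slow = avg_read_latency(hit, ldpc_decode_us, ldpc_decode_us)
    print(f"hit={hit:.2f}: BCH-on-SCM {fast:6.2f} us vs LDPC-on-SCM {slow:6.2f} us")
```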
Infrastructures for Distributed Computing: the case of BESIII
NASA Astrophysics Data System (ADS)
Pellegrino, J.
2018-05-01
BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud and volunteer computing. About 15 sites of the BESIII Collaboration from all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources of commercial clouds, the computing capacity can scale accordingly to deal with any burst demand. General computing models are discussed herein, with particular focus on the BESIII infrastructure; new computing tools and upcoming infrastructures are also addressed.
Improved solar heating systems
Schreyer, J.M.; Dorsey, G.F.
1980-05-16
An improved solar heating system is described in which the incident radiation of the sun is absorbed on collector panels, transferred to a storage unit, and then distributed as heat for a building and the like. The improvement is obtained by utilizing a storage unit comprising separate compartments containing an array of materials having different melting points ranging from 75 to 180 °F. The materials in the storage system are melted in accordance with the amount of heat absorbed from the sun and then transferred to the storage system. An efficient low-volume storage system is provided by utilizing the latent heat of fusion of the materials as they change states in storing and releasing heat for distribution.
The distributed production system of the SuperB project: description and results
NASA Astrophysics Data System (ADS)
Brown, D.; Corvo, M.; Di Simone, A.; Fella, A.; Luppi, E.; Paoloni, E.; Stroili, R.; Tomassetti, L.
2011-12-01
The SuperB experiment needs large samples of Monte Carlo simulated events in order to finalize the detector design and to estimate the data analysis performance. The requirements are beyond the capabilities of a single computing farm, so a distributed production model capable of exploiting the existing HEP worldwide distributed computing infrastructure is needed. In this paper we describe the set of tools that have been developed to manage the production of the required simulated events. The production of events follows three main phases: distribution of input data files to the remote site Storage Elements (SE); job submission, via the SuperB GANGA interface, to all available remote sites; and output file transfer to the CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling and bookkeeping. A replication mechanism allows storing the job output on the local site SE. Results from the 2010 official productions are reported.
NASA Technical Reports Server (NTRS)
Joseph, T. A.; Birman, Kenneth P.
1989-01-01
A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.
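As a concrete instance of the ordering guarantees discussed, here is a minimal sequencer-based totally ordered broadcast sketch. It is one textbook construction with invented class names, omits failure handling entirely, and is not tied to any protocol in the paper.

```python
# Sketch: a sequencer assigns one global order; members deliver strictly in
# sequence-number order, holding back any out-of-order messages.
import queue
import threading

class Sequencer:
    def __init__(self, members):
        self.members = members
        self.next_seq = 0
        self.lock = threading.Lock()

    def broadcast(self, msg):
        with self.lock:                     # one global order for everyone
            seq, self.next_seq = self.next_seq, self.next_seq + 1
        for m in self.members:
            m.inbox.put((seq, msg))

class Member:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()
        self.expected = 0
        self.pending = {}                   # hold out-of-order messages

    def deliver_all(self):
        while not self.inbox.empty():
            seq, msg = self.inbox.get()
            self.pending[seq] = msg
            while self.expected in self.pending:   # deliver in order, no gaps
                print(self.name, "delivers", self.pending.pop(self.expected))
                self.expected += 1

members = [Member("A"), Member("B")]
seq = Sequencer(members)
for text in ("m1", "m2", "m3"):
    seq.broadcast(text)
for m in members:
    m.deliver_all()   # every member delivers m1, m2, m3 in the same order
```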
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.; Glicksman, Robert A.
1994-05-01
A Picture Archiving and Communications System (PACS) must be able to support the image rate of the medical treatment facility. In addition, the PACS must provide the required working-storage and archive-storage capacity. The calculation of the number of images per minute and of the capacity of working and archive storage is discussed. The calculation takes into account the distribution of images over the different sizes of radiological images, the distribution between inpatients and outpatients, and the distribution over plain-film CR images and other modality images. The indirect clinical image load is difficult to estimate and is considered in some detail. The result of the exercise for a particular hospital is an estimate of the average size of the images and exams on the system, the number of gigabytes of working storage, the number of images moved per minute, the size of the archive in gigabytes, and the number of images that are to be moved by the archive per minute. The types of storage required to support these image rates and capacities are discussed.
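The capacity calculation can be illustrated with simple arithmetic over an assumed exam mix. Every figure below (exams per day, images per exam, image sizes, retention horizons) is a placeholder, not the hospital data from the paper.

```python
# Sketch of PACS sizing arithmetic: exam mix -> daily volume, working and
# archive storage, and image rates. All numbers are illustrative assumptions.
modalities = {                 # exams/day, images/exam, MB/image (assumed)
    "CR plain film": (400, 2, 8.0),
    "CT":            (60, 80, 0.5),
    "MR":            (30, 120, 0.25),
    "US":            (50, 30, 0.3),
}

daily_mb = sum(e * i * mb for e, i, mb in modalities.values())
daily_images = sum(e * i for e, i, _ in modalities.values())
daily_exams = sum(e for e, _, _ in modalities.values())

working_days = 14              # assumed on-line cache horizon
archive_years = 7              # assumed retention policy

print(f"avg exam size : {daily_mb / daily_exams:.1f} MB")
print(f"daily volume  : {daily_mb / 1024:.1f} GB, {daily_images} images")
print(f"working store : {working_days * daily_mb / 1024:.0f} GB")
print(f"archive store : {archive_years * 365 * daily_mb / 1024**2:.1f} TB")
print(f"image rate    : {daily_images / (8 * 60):.1f} images/min over an 8-h day")
```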
Distributed metadata in a high performance computing environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Zhang, Zhenhua
A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
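The "determine which burst buffer stores the metadata" step could, for example, be a consistent-hashing lookup. The sketch below assumes that scheme and invented buffer names; it is not necessarily how the patented system routes requests.

```python
# Sketch: consistent hashing as one way to map a metadata key to the burst
# buffer that owns it in a distributed key-value store.
import bisect
import hashlib

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, buffers, vnodes=64):
        # place several virtual points per buffer for balance
        self.points = sorted((h(f"{b}#{v}"), b) for b in buffers
                             for v in range(vnodes))
        self.keys = [p for p, _ in self.points]

    def owner(self, key):
        i = bisect.bisect(self.keys, h(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["bb0", "bb1", "bb2", "bb3"])        # hypothetical burst buffers
block = "/scratch/run42/ckpt.0007#block_0191"    # hypothetical block key
print("metadata for", block, "lives on", ring.owner(block))
```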
NASA Astrophysics Data System (ADS)
Rehmer, Donald E.
Results from a mathematical programming model were examined to 1) determine the least-cost options for infrastructure development of geologic storage of CO2 in the Illinois Basin, and 2) analyze a number of CO2 emission tax and oil price scenarios in order to develop the least-cost pipeline networks for distribution of CO2. The model, using mixed integer programming, tested the hypothesis of whether viable EOR sequestration sites can serve as nodal points or hubs to expand the CO2 delivery infrastructure to locations more distal from the emissions sources. This is in contrast to previous model results based on a point-to-point model having direct pipeline segments from each CO2 capture site to each storage sink. There is literature on the spoke-and-hub problem that relates to airline scheduling as well as maritime shipping. A large-scale ship assignment problem that utilized integer linear programming was run on Excel Solver and described by Mourao et al. (2001). Other literature indicates that aircraft assignment in spoke-and-hub routes can also be achieved using integer linear programming (Daskin and Panayotopoulos, 1989; Hane et al., 1995). The distribution concept is basically the reverse of the 'tree and branch' type gathering systems (Rothfarb et al., 1970) for oil and natural gas that industry has been developing for decades. Model results indicate that the inclusion of hubs as variables in the model yields lower transportation costs for geologic carbon dioxide storage than previous models of point-to-point infrastructure geometries. Tabular results and GIS maps of the selected scenarios illustrate that EOR sites can serve as nodal points or hubs for distribution of CO2 to distal oil field locations as well as deeper saline reservoirs. Revenue amounts and capture percentages both improve over solutions in which hubs are not allowed to enter. Other results indicate that geologic storage of CO2 in saline aquifers does not enter the solutions selected by the model until the CO2 emissions tax approaches $50/tonne. CO2 capture and storage begins to occur when the oil price is above $24.42 a barrel, based on the constraints of the model. The annual storage capacity of the basin is nearly maximized when the net price of oil is as low as $40 per barrel and the CO2 emission tax is $60/tonne. The results from every subsequent scenario examined by this study demonstrate that EOR utilizing anthropogenically captured CO2 will earn net revenue, and thus represents an economically viable option for CO2 storage in the Illinois Basin.
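The economic intuition behind hubs, that a shared high-capacity trunk exploits economies of scale, can be shown with a toy cost comparison. The concave cost function, coordinates, and flows below are assumptions, not the model's calibrated data.

```python
# Toy comparison: pipe CO2 source -> sinks directly, or source -> EOR hub ->
# sinks over a shared trunk. Cost ~ distance * capacity**0.6 (assumed
# economies of scale), so bundling flow onto the trunk can win.
from math import dist

source = (0.0, 0.0)                       # capture site (assumed coordinates)
sinks = [(120.0, 30.0), (130.0, -20.0), (150.0, 5.0)]
hub = (100.0, 0.0)                        # candidate EOR hub
flow = 1.0                                # Mt/yr per sink (assumed)

def seg_cost(a, b, q):
    return dist(a, b) * q ** 0.6          # concave in throughput

direct = sum(seg_cost(source, s, flow) for s in sinks)
via_hub = seg_cost(source, hub, flow * len(sinks)) + \
          sum(seg_cost(hub, s, flow) for s in sinks)
print(f"direct: {direct:.0f}, via hub: {via_hub:.0f}")
# The shared trunk makes the hub layout cheaper here, which is the effect
# that lets hubs enter the least-cost MILP solutions.
```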
NASA Astrophysics Data System (ADS)
Tully, Katherine C.; Whitacre, Jay F.; Litster, Shawn
2014-02-01
This paper presents in-situ spatiotemporal measurements of the electrolyte phase potential within an electric double layer capacitor (EDLC) negative electrode as envisaged for use in an aqueous hybrid battery for grid-scale energy storage. The ultra-thick electrodes used in these batteries to reduce non-functional material costs require sufficiently fast through-plane mass and charge transport to attain suitable charging and discharging rates. To better evaluate the through-plane transport, we have developed an electrode scaffold (ES) for making in situ electrolyte potential distribution measurements at discrete known distances across the thickness of an uninterrupted EDLC negative electrode. Using finite difference methods, we calculate local current, volumetric charging current and charge storage distributions from the spatiotemporal electrolyte potential measurements. These potential distributions provide insight into complex phenomena that cannot be directly observed using other existing methods. Herein, we use the distributions to identify areas of the electrode that are underutilized, assess the effects of various parameters on the cumulative charge storage distribution, and evaluate an effectiveness factor for charge storage in EDLC electrodes.
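The finite-difference post-processing described can be sketched directly: given electrolyte potentials at known probe depths, Ohm's law in solution gives the local ionic current density, and its divergence gives the volumetric charging current. The conductivity and the synthetic potential profile below are assumptions, not measured data.

```python
# Sketch of the finite-difference step: probe potentials -> local ionic
# current -> volumetric charging current. All values are synthetic.
import numpy as np

x = np.linspace(0.0, 2e-3, 9)             # probe depths through electrode, m
phi = 0.05 * (1 - np.cosh((x - x[-1]) / 8e-4) / np.cosh(x[-1] / 8e-4))  # V

kappa_eff = 1.0                            # assumed effective conductivity, S/m

i_ionic = -kappa_eff * np.gradient(phi, x)   # A/m^2, Ohm's law in solution
i_volumetric = -np.gradient(i_ionic, x)      # A/m^3, local charging current

for xi, ii, iv in zip(x, i_ionic, i_volumetric):
    print(f"x={xi*1e3:4.2f} mm  i={ii:8.3f} A/m^2  dq={iv:10.1f} A/m^3")
# Regions where i_volumetric is near zero store little charge locally,
# i.e. they are the underutilized zones the paper describes.
```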
A Grid Infrastructure for Supporting Space-based Science Operations
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)
2002-01-01
Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids give researchers, scientists, and engineers their first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.
COSTING MODELS FOR WATER SUPPLY DISTRIBUTION: PART III- PUMPS, TANKS, AND RESERVOIRS
Distribution systems are generally designed to ensure hydraulic reliability. Storage tanks, reservoirs and pumps are critical in maintaining this reliability. Although storage tanks, reservoirs and pumps are necessary for maintaining adequate pressure, they may also have a negative...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-16
... Partners, Inc., Corporate Center Division, Group Technology Infrastructure Services, Infrastructure Service... Infrastructure Services, Distributed Systems and Storage Group, Chicago, Illinois. The workers provide... unit formerly known as Group Technology Infrastructure Services, Distributed Systems and Storage is...
The Analysis of RDF Semantic Data Storage Optimization in Large Data Era
NASA Astrophysics Data System (ADS)
He, Dandan; Wang, Lijuan; Wang, Can
2018-03-01
With the continuous development of information technology and network technology in China, the Internet has ushered in the era of large data. To acquire information effectively in this era, the existing RDF semantic data storage must be optimized to support efficient queries over various kinds of data. This paper discusses the storage optimization of RDF semantic data under large data.
NASA Astrophysics Data System (ADS)
Yuliusman; Afdhol, M. K.; Sanal, Alristo; Nasruddin
2018-03-01
Indonesia imports fuel oil in large quantities. Indonesia also has large reserves of methane in the form of natural gas, but faces obstacles in storing it. To bring storage tanks to a safe operating condition, ANG (Adsorbed Natural Gas) technology has been proposed. The manufacture of PET-based activated carbon for natural-gas storage has been widely studied, but the technology still has some shortcomings. Therefore, to predict the performance of ANG technology, an ANG tank is modelled with the Fluent CFD program so that the conditions inside the tank can be known and used to improve the performance of ANG technology. In this work, a natural-gas storage test is performed on the ANG tank model using the Fluent CFD program. The work begins with the preparation of tools and materials, characterizing the natural gas and activated carbon, followed by creating the mesh and model of the ANG tank. The next step is to specify the characteristics of the activated carbon and the fluid used in this study. The last step is to run the simulation at the stated conditions of 27°C and 35 bar for 15 minutes. The adsorption contours show that adsorption is higher at the top of the tank because the adsorbent inlet is at the top of the ANG tank; the adsorbate distribution is therefore uneven, and the adsorbate concentration at the top of the tank is higher than at the bottom.
NASA Astrophysics Data System (ADS)
Melick, J. J.; Gardner, M. H.
2008-12-01
Carbon capture and storage from the more than 2000 power plants is estimated at 3-5 GT/yr, which requires large-scale geologic storage of greenhouse gasses in sedimentary basins. Unfortunately, determination of basin-scale storage capacity is currently based on oversimplified geologic models that are difficult to validate. Simplification involves reducing the number of geologic parameters incorporated into the model, modeling with large grid cells, and treatment of subsurface reservoirs as homogeneous media. The latter problem reflects the focus of current models on fluid and/or fluid-rock interactions rather than fluid movement and migration pathways. For example, homogeneous models overemphasize fluid behavior, like the buoyancy of supercritical CO2, and hence overestimate leakage rates. Fluid mixing and fluid-rock interactions cannot be assessed with models that only investigate these reactions at a human time scale. Preliminary and conservative estimates of the total pore volume for the PRB suggest 200 GT of supercritical CO2 can be stored in this typical onshore sedimentary basin. The connected pore volume (CPV), however, is not included in this estimate. Geological characterization of the CPV relates subsurface storage units to the most prolific reservoir classes (RCs). The CPV, number of well penetrations, supercritical storage area, and potential leakage pathways characterize each RC. Within each RC, a hierarchy of stratigraphic cycles is populated with stationary sedimentation regions that control rock property distributions by correlating environment of deposition (EOD) to CPV. The degree to which CPV varies between RCs depends on the geology and attendant heterogeneity retained in the fluid flow model. Region-based modeling of the PRB incorporates 28000 wells correlated across a 70,000 km2 area, 2 km thick on average. Within this basin, five of the most productive RCs were identified from production history and placed in a fourfold stratigraphic framework (second- through fourth-order cycles). Within the small-scale 4th-order sequences (30-150 m thick, 16 total), sedimentation regions, each corresponding to an EOD, are defined by thickness, lithology and core-calibrated well-log patterns. This talk illustrates the workflow by focusing on one of the 16 layers in the basin-scale model. Isopach maps from this sample layer conform to depositional patterns confirmed through definition of five core-calibrated, well-log-defined sedimentation regions. Lithology distributions also conform to thickness trends in nearshore deltas, but not in offshore regions, where sand-rich and sheet-like, but thin-bedded sandstones are flanked by mud-rich intervals of equivalent thickness. These maps represent sedimentation patterns confined by a basal erosional sequence boundary and a basin-wide bentonite, yet containing up to seven high-frequency sequence boundaries. To illustrate the oversimplification problem in this same layer, consider a 14000 km2 sample area comprising 600 km3 of rock: using standard averaging methods, often taken to be geologic in origin, the CPV is 16 km3. However, averaging increases connectivity, with high CPV more uniformly distributed; significantly, the key mud-belt region separating nearshore from offshore sandstones is not represented. Region-based modeling of this layer yields 13 km3 (110 Bbl). Furthermore, significant vertical leakage may exist from the 20000 well penetrations and from faults and fractures along the western basin margin.
This example illustrates the importance of accurately characterizing heterogeneity and distributing CPV using sedimentation regions.
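To make the averaging problem concrete, the following is a minimal sketch (not the authors' model; the grid, porosities, and cutoff are hypothetical) of how a basin-wide average erases a mud belt and inflates connected pore volume, while a region-based estimate honors the barrier:

```python
# Minimal sketch: why cell averaging overstates connected pore volume (CPV).
# All values are hypothetical illustrations, not PRB model inputs.
import numpy as np
from collections import deque

# Fine-scale porosity map: nearshore sands (left), mud belt (middle), offshore sheet sands (right)
phi = np.zeros((20, 30))
phi[:, :12] = 0.20          # nearshore delta sands
phi[:, 12:15] = 0.02        # mud belt separating the sand bodies
phi[:, 15:] = 0.15          # offshore thin-bedded sheet sands

cell_vol = 1.0              # km3 of rock per cell (hypothetical)
cutoff = 0.10               # porosity cutoff for flow-connected rock

# "Averaged" model: one homogeneous porosity for the whole layer.
# Every cell clears the cutoff, so the mud belt vanishes and all pore space connects.
phi_avg = phi.mean()
cpv_averaged = phi_avg * phi.size * cell_vol if phi_avg >= cutoff else 0.0

# Region-based model: only pore volume connected to the injection side (left edge) counts.
connected = np.zeros(phi.shape, dtype=bool)
q = deque((r, 0) for r in range(phi.shape[0]) if phi[r, 0] >= cutoff)
while q:
    r, c = q.popleft()
    if connected[r, c]:
        continue
    connected[r, c] = True
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < phi.shape[0] and 0 <= cc < phi.shape[1] \
                and phi[rr, cc] >= cutoff and not connected[rr, cc]:
            q.append((rr, cc))
cpv_connected = (phi * connected * cell_vol).sum()

print(f"averaged-model CPV: {cpv_averaged:.1f} km3 (mud belt lost, all cells connect)")
print(f"region-based CPV:   {cpv_connected:.1f} km3 (offshore sands isolated by mud belt)")
```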
Defaunation affects carbon storage in tropical forests
Bello, Carolina; Galetti, Mauro; Pizo, Marco A.; Magnago, Luiz Fernando S.; Rocha, Mariana F.; Lima, Renato A. F.; Peres, Carlos A.; Ovaskainen, Otso; Jordano, Pedro
2015-01-01
Carbon storage is widely acknowledged as one of the most valuable forest ecosystem services. Deforestation, logging, fragmentation, fire, and climate change have significant effects on tropical carbon stocks; however, an elusive and yet undetected decrease in carbon storage may be due to defaunation of large seed dispersers. Many large tropical trees with sizeable contributions to carbon stock rely on large vertebrates for seed dispersal and regeneration; however, many of these frugivores are threatened by hunting, illegal trade, and habitat loss. We used a large data set on tree species composition and abundance, seed, fruit, and carbon-related traits, and plant-animal interactions to estimate the loss of carbon storage capacity of tropical forests in defaunated scenarios. By simulating the local extinction of trees that depend on large frugivores in 31 Atlantic Forest communities, we found that defaunation has the potential to significantly erode carbon storage even when only a small proportion of large-seeded trees are extirpated. Although intergovernmental policies to reduce carbon emissions and reforestation programs have been mostly focused on deforestation, our results demonstrate that defaunation, and the loss of key ecological interactions, also poses a serious risk for the maintenance of tropical forest carbon storage. PMID:26824067
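As a rough illustration of the simulation logic (a toy sketch, not the authors' data: the stem count, the 30% dependence fraction, and the per-stem carbon values are hypothetical), one can extirpate a fraction of large-seeded stems, let the surviving pool supply replacements, and compare community carbon:

```python
# Toy version of the defaunation simulation: remove a fraction of large-seeded,
# animal-dispersed trees, replace them with recruits drawn from the survivors,
# and compare above-ground carbon. All trait values below are hypothetical.
import random

random.seed(1)

def make_tree():
    large_seeded = random.random() < 0.3           # assume 30% of stems depend on large frugivores
    # hypothetical: large-seeded taxa tend to be denser-wooded and larger-statured
    carbon = random.gauss(1.5, 0.3) if large_seeded else random.gauss(1.0, 0.3)
    return large_seeded, max(carbon, 0.1)          # Mg C per stem (toy units)

community = [make_tree() for _ in range(10_000)]
baseline_c = sum(c for _, c in community)

def defaunated_carbon(extirpation_fraction):
    survivors = [t for t in community
                 if not (t[0] and random.random() < extirpation_fraction)]
    n_lost = len(community) - len(survivors)
    # lost stems are replaced by recruits drawn from the surviving species pool
    replacements = random.choices(survivors, k=n_lost)
    return sum(c for _, c in survivors + replacements)

for f in (0.1, 0.3, 0.5, 1.0):
    loss = 100 * (1 - defaunated_carbon(f) / baseline_c)
    print(f"extirpate {f:>4.0%} of large-seeded stems -> carbon loss ~{loss:.1f}%")
```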
NASA Astrophysics Data System (ADS)
Safaei Mohamadabadi, Hossein
Increasing electrification of the economy while decarbonizing the electricity supply is among the most effective strategies for cutting greenhouse gas (GHG) emissions in order to abate climate change. This thesis offers insights into the role of bulk energy storage (BES) systems in cutting GHG emissions from the electricity sector. Wind and solar energies can supply large volumes of low-carbon electricity. Nevertheless, large penetration of these resources poses serious reliability concerns for the grid, mainly because of their intermittency. This thesis evaluates the performance of BES systems - especially compressed air energy storage (CAES) technology - for integration of wind energy from engineering and economic aspects. Analytical thermodynamic analysis of Distributed CAES (D-CAES) and Adiabatic CAES (A-CAES) suggests high roundtrip storage efficiencies (~80% and ~70%) compared to conventional CAES (~50%). Using hydrogen to fuel CAES plants - instead of natural gas - yields a low overall efficiency (~35%), despite its negligible GHG emissions. The techno-economic study of D-CAES shows that exporting compression heat to low-temperature loads (e.g. space heating) can enhance both the economic and emissions performance of compressed air storage plants. A case study for Alberta, Canada, reveals that the abatement cost of replacing a conventional CAES with a D-CAES plant practicing electricity arbitrage can be negative (-$40 per tCO2e when the heat load is 50 km away from the air storage site). A green-field simulation finds that reducing the capital cost of BES - even drastically below current levels - does not substantially impact the cost of low-carbon electricity. At a 70% reduction in the GHG emissions intensity of the grid, gas turbines remain three times more cost-efficient than BES in managing wind variability (in the best case and with a 15-minute resolution). Wind and solar thus do not need to wait for the availability of cheap BES systems to cost-effectively decarbonize the grid. The prospects of A-CAES seem stronger than those of other BES systems due to its low energy-specific capital cost.
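The two headline quantities reduce to simple arithmetic. A back-of-envelope sketch follows (all inputs are hypothetical, with the cost and emission numbers chosen only so the output reproduces the -$40/tCO2e figure quoted above):

```python
# Back-of-envelope versions of two quantities in the thesis: storage roundtrip
# efficiency and the CO2 abatement cost of swapping one technology for another.
# All illustrative prices and emission numbers below are hypothetical.

def roundtrip_efficiency(energy_out_mwh: float, energy_in_mwh: float) -> float:
    """Fraction of charged electricity recovered on discharge."""
    return energy_out_mwh / energy_in_mwh

# e.g. an adiabatic CAES plant recovering 70 MWh for every 100 MWh charged
print(f"A-CAES roundtrip: {roundtrip_efficiency(70, 100):.0%}")

def abatement_cost(cost_new: float, cost_old: float,
                   emissions_new: float, emissions_old: float) -> float:
    """$ per tonne CO2e avoided; negative means the cleaner option is also cheaper."""
    return (cost_new - cost_old) / (emissions_old - emissions_new)

# hypothetical: D-CAES earns heat revenue, so annual net cost falls while emissions drop
print(f"abatement cost: ${abatement_cost(9.0e6, 10.0e6, 2.0e4, 4.5e4):.0f}/tCO2e")
```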
NASA Astrophysics Data System (ADS)
Xu, Chen; Zhou, Bao-Rong; Zhai, Jian-Wei; Zhang, Yong-Jun; Yi, Ying-Qi
2017-05-01
In order to solve the problem of voltage exceeding specified limits and to improve the penetration of photovoltaics in the distribution network, we can make full use of the active power regulation ability of energy storage (ES) and the reactive power regulation ability of the grid-connected photovoltaic inverter to provide active and reactive power support for the distribution network. A strategy of actively controlling the output power of a photovoltaic-storage system, based on an extended PQ-QV-PV node model, is derived by analyzing the voltage regulating mechanism at the point of common coupling (PCC) of photovoltaics with energy storage (PVES), controlled through the photovoltaic inverter and the energy storage. The strategy sets a small fluctuation range of voltage for every photovoltaic system by converting the PCC node type among PQ, PV and QV. The simulation results indicate that the active control method can provide a better solution to the problem of voltage exceeding specified limits when photovoltaics are connected to the electric distribution network.
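One plausible reading of the node-conversion logic is sketched below (the thresholds, limits, and the ordering of reactive-then-active support are our assumptions, not the paper's exact algorithm):

```python
# Sketch of node-type switching at the PCC of a PV-plus-storage (PVES) unit.
# PQ: voltage inside its band, hold scheduled P and Q.
# PV: hold V by freeing Q (inverter var support), P stays scheduled.
# QV: hold V by freeing P (storage charges/discharges), Q held at its limit.
# Band width, headrooms, and ordering below are hypothetical.

def control_pves(v_pcc: float, q_headroom: float, p_storage_headroom: float,
                 v_ref: float = 1.0, band: float = 0.02):
    """Return (node_type, action) for one control step; per-unit quantities."""
    dv = v_pcc - v_ref
    if abs(dv) <= band:
        return "PQ", "voltage inside its band: keep scheduled P and Q"
    if q_headroom > 0.0:
        return "PV", f"pin V at the band edge via inverter Q (|dv|={abs(dv):.3f})"
    if p_storage_headroom > 0.0:
        return "QV", "inverter vars exhausted: pin V via storage active power"
    return "PQ", "all local limits reached: request upstream regulation"

print(control_pves(1.05, q_headroom=0.10, p_storage_headroom=0.5))
print(control_pves(1.05, q_headroom=0.00, p_storage_headroom=0.5))
```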
NASA Astrophysics Data System (ADS)
Morikawa, Y.; Murata, K. T.; Watari, S.; Kato, H.; Yamamoto, K.; Inoue, S.; Tsubouchi, K.; Fukazawa, K.; Kimura, E.; Tatebe, O.; Shimojo, S.
2010-12-01
The main methodologies of Solar-Terrestrial Physics (STP) so far are theoretical, experimental and observational, and computer simulation approaches. Recently, "informatics" is expected to become a new (fourth) approach to STP studies. Informatics is a methodology to analyze large-scale data (observation data and computer simulation data) to obtain new findings using a variety of data processing techniques. At NICT (National Institute of Information and Communications Technology, Japan) we are now developing a new research environment named "OneSpaceNet". The OneSpaceNet is a cloud-computing environment specialized for science work, which connects many researchers through a high-speed network (JGN: Japan Gigabit Network). The JGN is a wide-area backbone network operated by NICT; it provides a 10G network and many access points (AP) over Japan. The OneSpaceNet also provides rich computer resources for research, such as supercomputers, large-scale data storage, licensed applications, visualization devices (like a tiled display wall: TDW), database/DBMS, cluster computers (4-8 nodes) for data processing, and communication devices. What is remarkable about the science cloud is that a user simply prepares a terminal (a low-cost PC). Once the PC is connected to JGN2plus, the user can make full use of the rich resources of the science cloud. Using communication devices, such as a video-conference system, streaming and reflector servers, and media players, users on the OneSpaceNet can carry out research communications as if they belonged to the same (one) laboratory: they are members of a virtual laboratory. The specification of the computer resources on the OneSpaceNet is as follows. The data storage we have developed so far holds almost 1 PB. The number of data files managed on the cloud storage keeps growing and is now more than 40,000,000. Notably, the disks forming the large-scale storage are distributed over 5 data centers across Japan, yet the storage system performs as one disk. There are three supercomputers allocated on the cloud: one in Tokyo, one in Osaka and the other in Nagoya. One's simulation job data on any of the supercomputers are saved on the cloud data storage (same directory); it is a kind of virtual computing environment. The tiled display wall has 36 panels acting as one display; its pixel (resolution) size is as large as 18000x4300. This size is enough to preview or analyze large-scale computer simulation data. It also allows us to view many images (e.g., 100 pictures) on one screen together with many researchers. In our talk we also present a brief report of initial results using the OneSpaceNet for Global MHD simulations as an example of successful use of our science cloud: (i) ultra-high time resolution visualization of Global MHD simulations on the large-scale storage and parallel processing system on the cloud, (ii) a database of real-time Global MHD simulations and statistical analyses of the data, and (iii) a 3D Web service of Global MHD simulations.
Astro-WISE: Chaining to the Universe
NASA Astrophysics Data System (ADS)
Valentijn, E. A.; McFarland, J. P.; Snigula, J.; Begeman, K. G.; Boxhoorn, D. R.; Rengelink, R.; Helmich, E.; Heraudeau, P.; Verdoes Kleijn, G.; Vermeij, R.; Vriend, W.-J.; Tempelaar, M. J.; Deul, E.; Kuijken, K.; Capaccioli, M.; Silvotti, R.; Bender, R.; Neeser, M.; Saglia, R.; Bertin, E.; Mellier, Y.
2007-10-01
The recent explosion of recorded digital data and its processed derivatives threatens to overwhelm researchers when analysing their experimental data or looking up data items in archives and file systems. While current hardware developments allow the acquisition, processing and storage of hundreds of terabytes of data at the cost of a modern sports car, the software systems to handle these data are lagging behind. This problem is very general and is well recognized by various scientific communities; several large projects have been initiated, e.g., DATAGRID/EGEE {http://www.eu-egee.org/} federates compute and storage power across the high-energy physics community, while the international astronomical community is building an Internet-geared Virtual Observatory {http://www.euro-vo.org/pub/} (Padovani 2006) connecting archival data. These large projects either focus on a specific distribution aspect or aim to connect many sub-communities and have a relatively long trajectory for setting standards and a common layer. Here, we report first light of a very different solution (Valentijn & Kuijken 2004) to the problem, initiated by a smaller astronomical IT community. It provides an abstract scientific information layer which integrates distributed scientific analysis with distributed processing and federated archiving and publishing. By designing new abstractions and mixing in old ones, a Science Information System with fully scalable cornerstones has been achieved, transforming data systems into knowledge systems. This breakthrough is facilitated by the full end-to-end linking of all dependent data items, which allows full backward chaining from the observer/researcher to the experiment. Key is the notion that information is intrinsic in nature, and thus so is the data acquired by a scientific experiment. The new abstraction is that software systems guide the user to that intrinsic information by forcing full backward and forward chaining in the data modelling.
Development of an automated electrical power subsystem testbed for large spacecraft
NASA Technical Reports Server (NTRS)
Hall, David K.; Lollar, Louis F.
1990-01-01
The NASA Marshall Space Flight Center (MSFC) has developed two autonomous electrical power system breadboards. The first breadboard, the autonomously managed power system (AMPS), is a two power channel system featuring energy generation and storage and 24 kW of switchable loads, all under computer control. The second breadboard, the space station module/power management and distribution (SSM/PMAD) testbed, is a two-bus 120-Vdc model of the Space Station power subsystem featuring smart switchgear and multiple knowledge-based control systems. NASA/MSFC is combining these two breadboards to form a complete autonomous source-to-load power system called the large autonomous spacecraft electrical power system (LASEPS). LASEPS is a high-power, intelligent, physical electrical power system testbed which can be used to derive and test new power system control techniques, new power switching components, and new energy storage elements in a more accurate and realistic fashion. LASEPS has the potential to be interfaced with other spacecraft subsystem breadboards in order to simulate an entire space vehicle. The two individual systems, the combined systems (hardware and software), and the current and future uses of LASEPS are described.
Integration of end-user Cloud storage for CMS analysis
Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...
2017-05-19
End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.
Longitudinal distribution and parameters of large wood in a Mediterranean ephemeral stream
NASA Astrophysics Data System (ADS)
Galia, T.; Škarpich, V.; Tichavský, R.; Vardakas, L.; Šilhán, K.
2018-06-01
Although large wood (LW) has been intensively studied in forested basins of humid temperate climates, data on LW patterns in different fluvial environments are rather scarce. Therefore, we investigated the dimensions, characteristics, longitudinal distribution, and dynamics of LW along a 4.05-km-long reach of an ephemeral channel typical of European Mediterranean mountainous landscape (Sfakiano Gorge, Crete, Greece). We analysed a total of 795 LW pieces, and the mean observed abundance of LW was generally lower (14.3 m3/ha of active valley floor or 19.6 LW pieces/100 m of stream length) than is usually documented for more humid environments. The number of LW pieces was primarily controlled by trees growing on the valley floor. These living trees acted as important LW supply agents (by tree throws or the supply of individual branches with sufficient LW dimensions) and flow obstructions during large flood events, causing storage of transported LW pieces in jams. However, the downstream transport of LW is probably episodic, and large jams are likely formed only during major floods; after >15 years, we still observed significant imprints of the last major flood event on the present distribution of LW. The geomorphic function of LW in the studied stream can only be perceived to be a spatially limited stabilising element for sediments, which was documented by a few accumulations of coarse clastic material by LW steps and jams.
ENERGY EFFICIENCY AND ENVIRONMENTALLY FRIENDLY DISTRIBUTED ENERGY STORAGE BATTERY
DOE Office of Scientific and Technical Information (OSTI.GOV)
LANDI, J.T.; PLIVELICH, R.F.
2006-04-30
Electro Energy, Inc. conducted a research project to develop an energy-efficient and environmentally friendly bipolar Ni-MH battery for distributed energy storage applications. Rechargeable batteries with long life and low cost can potentially play a significant role by reducing electricity cost and pollution. A rechargeable battery functions as a reservoir for electrical energy, carries energy for portable applications, or can provide peaking energy when demand for electrical power exceeds primary generating capabilities.
Methane adsorption in nanoporous carbon: the numerical estimation of optimal storage conditions
NASA Astrophysics Data System (ADS)
Ortiz, L.; Kuchta, B.; Firlej, L.; Roth, M. W.; Wexler, C.
2016-05-01
The efficient storage and transportation of natural gas is one of the most important enabling technologies for its use in energy applications. Adsorption in porous systems, which will allow the transportation of high-density fuel under low pressure, is one of the possible solutions. We present and discuss extensive grand canonical Monte Carlo (GCMC) simulation results of the adsorption of methane into slit-shaped graphitic pores of various widths (between 7 Å and 50 Å), and at pressures P between 0 bar and 360 bar. Our results shed light on the dependence of film structure on pore width and pressure. For large widths, we observe multi-layer adsorption at supercritical conditions, with excess amounts even at large distances from the pore walls, originating from the attractive interaction exerted by a very high-density film in the first layer. We are also able to successfully model the experimental adsorption isotherms of heterogeneous activated carbon samples by means of an ensemble average of the pore widths, based exclusively on the pore-size distributions (PSD) calculated from subcritical nitrogen adsorption isotherms. Finally, we propose a new formula, based on the PSD ensemble averages, to calculate the isosteric heat of adsorption of heterogeneous systems from single-pore-width calculations. The methods proposed here will contribute to the rational design and optimization of future adsorption-based storage tanks.
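A sketch of the PSD ensemble average and one uptake-weighted reading of the isosteric-heat idea follows (the Langmuir-like stand-in isotherms, weights, and per-pore heats are toy values, not GCMC output, and the paper's exact weighting may differ):

```python
# Sketch of the PSD ensemble average described above: the isotherm of a
# heterogeneous carbon is modelled as a pore-size-distribution-weighted sum of
# single-pore-width isotherms. Toy functions stand in for real GCMC output.
import numpy as np

widths = np.array([7.0, 10.0, 20.0, 50.0])        # pore widths w (angstrom)
psd_weight = np.array([0.4, 0.3, 0.2, 0.1])       # volume fraction per width (from N2 PSD)
pressures = np.linspace(1.0, 360.0, 8)            # bar

def single_pore_isotherm(w, p, k=0.05):
    # stand-in for a GCMC isotherm n_w(P); Langmuir-like, capacity grows with width
    n_max = 0.3 * w
    return n_max * k * p / (1.0 + k * p)          # mmol/g (toy units)

n_w = np.array([single_pore_isotherm(w, pressures) for w in widths])  # shape (W, P)
n_total = psd_weight @ n_w                        # ensemble-average isotherm

# Ensemble isosteric heat, weighting each pore's heat by its share of the uptake:
q_st_w = np.array([20.0, 17.0, 14.0, 12.0])       # kJ/mol per width (toy values)
uptake_share = (psd_weight[:, None] * n_w) / n_total
q_st_total = (uptake_share * q_st_w[:, None]).sum(axis=0)

for p, n, q in zip(pressures, n_total, q_st_total):
    print(f"P={p:6.1f} bar  n={n:5.2f} mmol/g  q_st={q:4.1f} kJ/mol")
```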
Model for CO2 leakage including multiple geological layers and multiple leaky wells.
Nordbotten, Jan M; Kavetski, Dmitri; Celia, Michael A; Bachu, Stefan
2009-02-01
Geological storage of carbon dioxide (CO2) is likely to be an integral component of any realistic plan to reduce anthropogenic greenhouse gas emissions. In conjunction with large-scale deployment of carbon storage as a technology, there is an urgent need for tools which provide reliable and quick assessments of aquifer storage performance. Previously, abandoned wells from over a century of oil and gas exploration and production have been identified as critical potential leakage paths. The practical importance of abandoned wells is emphasized by the correlation of heavy CO2 emitters (typically associated with industrialized areas) to oil and gas producing regions in North America. Herein, we describe a novel framework for predicting the leakage from large numbers of abandoned wells, forming leakage paths connecting multiple subsurface permeable formations. The framework is designed to exploit analytical solutions to various components of the problem and, ultimately, leads to a grid-free approximation to CO2 and brine leakage rates, as well as fluid distributions. We apply our model in a comparison to an established numerical solver for the underlying governing equations. Thereafter, we demonstrate the capabilities of the model on typical field data taken from the vicinity of Edmonton, Alberta. This data set consists of over 500 wells and 7 permeable formations. Results show the flexibility and utility of the solution methods, and highlight the role that analytical and semianalytical solutions can play in this important problem.
Volume serving and media management in a networked, distributed client/server environment
NASA Technical Reports Server (NTRS)
Herring, Ralph H.; Tefend, Linda L.
1993-01-01
The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.
Cryptography in the Bounded-Quantum-Storage Model
NASA Astrophysics Data System (ADS)
Schaffner, Christian
2007-09-01
This thesis initiates the study of cryptographic protocols in the bounded-quantum-storage model. On the practical side, simple protocols for Rabin Oblivious Transfer, 1-2 Oblivious Transfer and Bit Commitment are presented. No quantum memory is required for honest players, whereas the protocols can only be broken by an adversary controlling a large amount of quantum memory. The protocols are efficient, non-interactive and can be implemented with today's technology. On the theoretical side, new entropic uncertainty relations involving min-entropy are established and used to prove the security of protocols according to new strong security definitions. For instance, in the realistic setting of Quantum Key Distribution (QKD) against quantum-memory-bounded eavesdroppers, the uncertainty relation makes it possible to prove the security of QKD protocols while tolerating considerably higher error rates compared to the standard model with unbounded adversaries.
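For reference, the min-entropy underlying these uncertainty relations is H_min(X) = -log2 max_x Pr[X = x], the entropy seen by a guesser who always bets on the most likely outcome. A minimal numeric sketch:

```python
# Min-entropy of a distribution: -log2 of its largest probability.
# (The protocol-specific uncertainty relations themselves are not reproduced here.)
import math

def min_entropy(probs):
    return -math.log2(max(probs))

uniform_8 = [1/8] * 8                 # 3 bits of min-entropy
biased = [0.5, 0.25, 0.125, 0.125]    # dominated by its most likely outcome

print(f"uniform over 8 outcomes: H_min = {min_entropy(uniform_8):.2f} bits")
print(f"biased distribution:     H_min = {min_entropy(biased):.2f} bits")
```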
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Amerio, S.; Behari, S.; Boyd, J.; Brochmann, M.; Culbertson, R.; Diesburg, M.; Freeman, J.; Garren, L.; Greenlee, H.; Herner, K.; Illingworth, R.; Jayatilaka, B.; Jonckheere, A.; Li, Q.; Naymola, S.; Oleynik, G.; Sakumoto, W.; Varnes, E.; Vellidis, C.; Watts, G.; White, S.
2017-04-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
NASA Technical Reports Server (NTRS)
1977-01-01
Components of a videotape storage and retrieval system originally developed for NASA have been adapted as a tool for law enforcement agencies. Ampex Corp., Redwood City, Cal., built a unique system for NASA-Marshall. The first application of professional broadcast technology to computerized record-keeping, it incorporates new equipment for transporting tapes within the system. After completing the NASA system, Ampex continued development, primarily to improve image resolution. The resulting advanced system, known as the Ampex Videofile, offers advantages over microfilm for filing, storing, retrieving, and distributing large volumes of information. The system's computer stores information in digital code rather than in pictorial form. While microfilm allows visual storage of whole documents, it requires a step before usage--developing the film. With Videofile, the actual document is recorded, complete with photos and graphic material, and a picture of the document is available instantly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1988-08-01
The objective of this report is to develop a generalized methodology for examining water distribution systems for adjustable speed drive (ASD) applications and to provide an example (the City of Chicago 68th Street Water Pumping Station) using the methodology. The City of Chicago water system was chosen as the candidate for analysis because it has a large service-area distribution network with no storage provisions after the distribution pumps. Many industrial motors operate at only one speed or a few speeds. By speeding up or slowing down, ASDs achieve gentle startups and gradual shutdowns, thereby giving plant equipment a longer life with fewer breakdowns while minimizing energy requirements. The test program substantiated that ASDs enhance product quality and increase productivity in many industrial operations, and extend equipment life. 35 figs.
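The energy argument rests on the pump affinity laws: flow scales linearly with speed, head quadratically, and shaft power cubically, so modest speed reductions yield large savings relative to throttling. A minimal sketch (idealized, ignoring static head and efficiency shifts; numbers are illustrative):

```python
# Why ASDs on distribution pumps save energy: the pump affinity laws give
# flow ~ N, head ~ N^2, shaft power ~ N^3 for speed fraction N (idealized).

def asd_power_fraction(speed_fraction: float) -> float:
    """Shaft power at reduced speed relative to full-speed power (cube law)."""
    return speed_fraction ** 3

for n in (1.0, 0.9, 0.8, 0.7):
    print(f"speed {n:.0%} -> flow {n:.0%}, head {n**2:.0%}, "
          f"power {asd_power_fraction(n):.0%}")
# e.g. meeting an 80% flow demand by slowing the pump needs ~51% of rated power,
# versus throttling a full-speed pump, which wastes the difference across a valve.
```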
Electric power processing, distribution, management and energy storage
NASA Astrophysics Data System (ADS)
Giudici, R. J.
1980-07-01
Power distribution subsystems are required for three elements of the SPS program: (1) the orbiting satellite, (2) the ground rectenna, and (3) the Electric Orbiting Transfer Vehicle (EOTV). Power distribution subsystems receive electrical power from the energy conversion subsystem and provide the power busses, rotary power transfer devices, switchgear, power processing, energy storage, and power management required for power delivery and control. High-voltage plasma interactions, electric thruster interactions, and spacecraft charging of the SPS and the EOTV are also included as part of the power distribution subsystem design.
Proactive replica checking to assure reliability of data in cloud storage with minimum replication
NASA Astrophysics Data System (ADS)
Murarka, Damini; Maheswari, G. Uma
2017-11-01
The two major issues for cloud storage systems are data reliability and storage costs. For data reliability protection, the multi-replica replication strategy mostly used in current clouds incurs huge storage consumption, leading to a large storage cost for applications in the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back cloud storage consumption. PRCR ensures the reliability of large cloud data sets with minimum replication, which can also serve as a cost-effective benchmark for replication. The evaluation shows that, compared to the conventional three-replica approach, PRCR can reduce storage consumption to one-third of the storage or less, hence significantly lowering the cloud storage cost.
Parallel task processing of very large datasets
NASA Astrophysics Data System (ADS)
Romig, Phillip Richardson, III
This research concerns the use of distributed computer technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses all increase the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within the system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) provides reduction of overall computation time as a result of the task distribution even with the additional cost of data transfer and management, and (c) in the simulation mode accurately predicts the performance of the real execution environment.
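A minimal sketch of the estimate-then-simulate idea follows (the linear cost model and chunk sizes are hypothetical, not Tricky's actual estimator):

```python
# Estimate per-work-unit runtimes, then simulate the execution environment with a
# longest-processing-time list scheduler to predict the makespan before running.
import heapq

def estimate_runtime(chunk_mb: float, mb_per_sec: float = 25.0,
                     transfer_sec_per_mb: float = 0.02) -> float:
    """Estimated seconds to ship and process one data chunk (toy linear model)."""
    return chunk_mb * (transfer_sec_per_mb + 1.0 / mb_per_sec)

def simulate_schedule(chunk_sizes_mb, n_workers):
    """Greedy LPT assignment onto the currently least-loaded worker."""
    workers = [(0.0, i) for i in range(n_workers)]   # (accumulated time, worker id)
    heapq.heapify(workers)
    for size in sorted(chunk_sizes_mb, reverse=True):
        load, wid = heapq.heappop(workers)
        heapq.heappush(workers, (load + estimate_runtime(size), wid))
    return max(load for load, _ in workers)          # predicted makespan (s)

chunks = [512, 256, 256, 128, 1024, 768, 64, 640]    # dataset split into chunks (MB)
for n in (1, 2, 4, 8):
    print(f"{n} workers -> predicted makespan {simulate_schedule(chunks, n):7.1f} s")
```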
NASA Astrophysics Data System (ADS)
Fedorov, D.; Miller, R. J.; Kvilekval, K. G.; Doheny, B.; Sampson, S.; Manjunath, B. S.
2016-02-01
Logistical and financial limitations of underwater operations are inherent in marine science, including biodiversity observation. Imagery is a promising way to address these challenges, but the diversity of organisms thwarts simple automated analysis. Recent developments in computer vision methods, such as convolutional neural networks (CNN), are promising for automated classification and detection tasks but are typically very computationally expensive and require extensive training on large datasets. Therefore, managing and connecting distributed computation, large storage, and human annotations of diverse marine datasets is crucial for effective application of these methods. BisQue is a cloud-based system for management, annotation, visualization, analysis and data mining of underwater and remote sensing imagery and associated data. Designed to hide the complexity of distributed storage, large computational clusters, diverse data formats and inhomogeneous computational environments behind a user-friendly web-based interface, BisQue is built around the idea of flexible and hierarchical annotations defined by the user. Such textual and graphical annotations can describe captured attributes and the relationships between data elements. Annotations are powerful enough to describe cells in fluorescent 4D images, fish species in underwater videos and kelp beds in aerial imagery. Presently we are developing BisQue-based analysis modules for automated identification of benthic marine organisms. Recent experiments with dropout- and CNN-based classification of several thousand annotated underwater images demonstrated an overall accuracy above 70% for the 15 best performing species and above 85% for the top 5 species. Based on these promising results, we have extended BisQue with a CNN-based classification system allowing continuous training on user-provided data.
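A minimal sketch of such a classification module, assuming a PyTorch/torchvision stack (our choice; BisQue's actual implementation may differ), with FakeData standing in for annotated underwater images:

```python
# Fine-tune a small CNN for species classification; synthetic FakeData replaces
# the annotated underwater imagery, and the class count mirrors the 15 species
# mentioned above. Stack and hyperparameters are assumptions, not BisQue's.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

n_species = 15
tfm = transforms.Compose([transforms.ToTensor()])
train = datasets.FakeData(size=64, image_size=(3, 224, 224),
                          num_classes=n_species, transform=tfm)
loader = DataLoader(train, batch_size=16, shuffle=True)

model = models.resnet18(weights=None)                  # load pretrained weights in practice
model.fc = nn.Linear(model.fc.in_features, n_species)  # new head for our taxa

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one pass; continuous training would keep
    opt.zero_grad()                    # ingesting newly user-annotated images
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
print(f"last batch loss: {loss.item():.3f}")
```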
NASA Astrophysics Data System (ADS)
Magilligan, F. J.; Nislow, K. H.; Fisher, G. B.; Wright, J.; Mackey, G.; Laser, M.
2008-05-01
The role, function, and importance of large woody debris (LWD) in rivers depend strongly on environmental context and land use history. The coastal watersheds of central and northern Maine, northeastern U.S., are characterized by low gradients, moderate topography, and minimal influence of mass wasting processes, along with a history of intensive commercial timber harvest. In spite of the ecological importance of these rivers, which contain the last wild populations of Atlantic salmon (Salmo salar) in the U.S., we know little about LWD distribution, dynamics, and function in these systems. We conducted a cross-basin analysis in seven coastal Maine watersheds, documenting the size, frequency, volume, position, and orientation of LWD, as well as the association between LWD, pool formation, and sediment storage. In conjunction with these LWD surveys, we conducted extensive riparian vegetation surveys. We observed very low LWD frequencies and volumes across the 60 km of rivers surveyed. Frequency of LWD ≥ 20 cm diameter ranged from 15-50 pieces/km and wood volumes were commonly <10-20 m3/km. Moreover, most of this wood was located in the immediate low-flow channel zone, was oriented parallel to flow, and failed to span the stream channel. As a result, pool formation associated with LWD is generally lacking and <20% of the wood was associated with sediment storage. Low LWD volumes are consistent with the relatively young riparian stands we observed, with the large majority of trees <20 cm DBH. These results strongly reflect the legacy of intensive timber harvest and land clearing and suggest that the frequency and distribution of LWD may be considerably less than presettlement and/or future desired conditions.
7 CFR 250.59 - Storage and inventory management of donated foods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 4 2010-01-01 2010-01-01 false Storage and inventory management of donated foods. 250... management of donated foods. (a) General requirements. Distributing agencies, subdistributing agencies, and... management system, as defined in this part, unless the distributing agency requires donated foods to be...
Spatio-temporal distribution of stored-product insects around food processing and storage facilities
USDA-ARS?s Scientific Manuscript database
Grain storage and processing facilities consist of a landscape of indoor and outdoor habitats that can potentially support stored-product insect pests, and understanding patterns of species diversity and spatial distribution in the landscape surrounding structures can provide insight into how the ou...
Biochars impact on soil moisture storage in an Ultisol and two Aridisols
USDA-ARS?s Scientific Manuscript database
Droughts associated with low or erratic rainfall distribution can cause detrimental crop moisture stress. This problem is exacerbated in the USA’s arid western and southeastern Coastal Plain due to poor rainfall distribution, poor soil water storage, or poorly-aggregated, subsurface hard layers that...
EFFECTS OF MIXING AND AGING ON WATER QUALITY IN DISTRIBUTION SYSTEM STORAGE FACILITIES
Aging of water in distribution system storage facilities can lead to deterioration of the water quality due to loss of disinfectant residual and bacterial regrowth. Facilities should be operated to insure that the age of the water is not excessive taking into account the quality...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-11
... Known as Brinson Partners, Inc., Corporate Center Division; Group Technology Infrastructure Services... Division, Group Technology Infrastructure Services, Distributed Systems and Storage Group, Chicago... Infrastructure Services, Distributed Systems and Storage Group have their wages reported under a separate...
Utilization of Boxes for Pesticide Storage in Sri Lanka.
Pieris, Ravi; Weerasinghe, Manjula; Abeywickrama, Tharaka; Manuweera, Gamini; Eddleston, Michael; Dawson, Andrew; Konradsen, Flemming
2017-01-01
Pesticide self-poisoning is now considered one of the two most common methods of suicide worldwide. Encouraging safe storage of pesticides is one particular approach aimed at reducing pesticide self-poisoning. CropLife Sri Lanka (the local association of pesticide manufacturers), with the aid of the Department of Agriculture, distributed lockable in-house pesticide storage boxes free of charge to a farming community in a rural district of Sri Lanka. Padlocks were not provided with the boxes. These storage boxes were distributed to the farmers without prior education. The authors carried out a cross-sectional follow-up survey to assess the usage of the boxes 7 months after distribution. In an inspection of a sample of 239 box recipients' households, 142 households stored pesticides in the provided box at the time of the survey. Among them, only 42 (42/142, 29.6%) households had locked the box; the remaining households (100/142, 70.4%) had not locked the box. A simple handover of in-house pesticide storage boxes without awareness/education results in poor use of the boxes. Additionally, providing in-house storage boxes may encourage farmers to store pesticides in and around houses and, if the boxes are not locked, may lead to unplanned adverse effects.
Evaluation of Apache Hadoop for parallel data analysis with ROOT
NASA Astrophysics Data System (ADS)
Lehrack, S.; Duckeck, G.; Ebke, J.
2014-06-01
The Apache Hadoop software is a Java-based framework for distributed processing of large data sets across clusters of computers, using the Hadoop file system (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of processing binary data files which cannot be split automatically. However, Hadoop offers attractive features in terms of fault tolerance, task supervision and control, multi-user functionality, and job management. For this reason, we evaluated Apache Hadoop as an alternative approach to PROOF for ROOT data analysis. Two alternatives for distributing analysis data were discussed: either the data was stored in HDFS and processed with MapReduce, or the data was accessed via a standard Grid storage system (dCache Tier-2) and MapReduce was used only as the execution back-end. The focus of the measurements was, on the one hand, to safely store analysis data on HDFS with reasonable data rates and, on the other hand, to process data fast and reliably with MapReduce. In the evaluation of HDFS, read/write data rates from the local Hadoop cluster were measured and compared to standard data rates from the local NFS installation. In the evaluation of MapReduce, realistic ROOT analyses were used and event rates were compared to PROOF.
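Because the binary files cannot be split, the natural granularity is one file per map task. The following sketch shows that whole-file map/reduce pattern in plain Python (the analysis function is a stub and the paths are hypothetical; a real deployment would run this under Hadoop rather than multiprocessing):

```python
# "One file = one map task": map a per-file analysis over non-splittable ROOT
# files in parallel, then reduce the per-file results into one summary.
from collections import Counter
from multiprocessing import Pool

def analyse_file(path: str) -> Counter:
    """Map: process one whole ROOT file; here a stub returning fake event counts."""
    n_events = 1000 + len(path)           # placeholder for real I/O and selection
    return Counter(selected=n_events // 4, total=n_events)

def merge(results) -> Counter:
    """Reduce: combine per-file counters (or histograms) into one result."""
    total = Counter()
    for r in results:
        total.update(r)
    return total

if __name__ == "__main__":
    files = [f"/hdfs/user/analysis/run_{i:03d}.root" for i in range(8)]  # hypothetical paths
    with Pool(processes=4) as pool:
        summary = merge(pool.map(analyse_file, files))
    print(summary, f"-> selected fraction: {summary['selected'] / summary['total']:.2%}")
```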
Making up for lost snow: lessons from a warming Sierra Nevada
NASA Astrophysics Data System (ADS)
Bales, R. C.
2017-12-01
Snowpack- and glacier-dependent river basins are home to over 1.2 billion people, one-sixth of the world's current population. These areas face severe challenges in a warmer climate, as declines in snow resources put more pressure on dams and groundwater. Closer to home, the seasonal snowpacks in California's Sierra Nevada provide water storage to both sustain productive forests and support the world's 6th largest economy. Rivers draining the Sierra supply the state's large cities, plus agricultural areas that provide nearly half of the nation's fruits and vegetables. Water storage is central to water security, especially given California's hot dry summers and high interannual variability in precipitation. On average seasonal snowpacks store about half as much water as do dams on Sierra rivers; and both the magnitude and duration of snowpack storage are decreasing. Precipitation amount and snow accumulation across the mountains in any given day, month or year remain uncertain. As historical index-statistical methods for hydrologic forecasts give way to tools based on mass and energy balances distributed across the landscape, opportunities are arising to broadly implement spatial measurements of snowpack storage and the equally important regolith-water storage. Advances in applying satellite and aircraft remote sensing, plus spatially distributed wireless-sensor networks, are filling this need. These same unprecedented data are driving process understanding to improve knowledge of snow-energy-forest interactions, snowmelt estimates, and hydrologic forecasts for hydropower, water supply, and flood control. Estimating the value of snowpacks and how they are changing provides a baseline for evaluating investments in restoration of headwater forests that will affect snowmelt runoff, and in providing replacement storage as snow declines. With California facing billions of dollars of green and grey infrastructure improvements, which must be compatible with the state's aggressive carbon-neutrality goals, it is critical to build support for expenditures. Science communication featuring Sierra Nevada snow through written, broadcast and film media can enhance public understanding and provide a basis for infrastructure and operational investments to address water security in a changing climate.
A dynamic programming approach to estimate the capacity value of energy storage
Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul
2013-09-17
Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss-of-load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.
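A stripped-down sketch of the probability bookkeeping follows (discrete state-of-charge blocks; the dispatch schedule and loss-of-load probabilities are hypothetical, and the real method optimizes the dispatch rather than fixing it):

```python
# Propagate a probability distribution over storage state of charge (SOC): each
# period the unit follows its scheduled dispatch, but with probability LOLP a
# shortage forces an extra discharge. P(SOC = 0) then acts like a forced outage
# rate for reliability-based capacity value methods. Inputs are hypothetical.
import numpy as np

capacity = 4                                   # SOC levels 0..4 (unit blocks)
dispatch = [+1, +1, -1, -1, +1, -1, -1, +1]    # scheduled charge(+)/discharge(-)
lolp =     [.01, .01, .05, .10, .02, .08, .12, .01]

p = np.zeros(capacity + 1)
p[capacity] = 1.0                              # start full

for d, q in zip(dispatch, lolp):
    nxt = np.zeros_like(p)
    for soc, prob in enumerate(p):
        if prob == 0.0:
            continue
        base = min(max(soc + d, 0), capacity)  # follow the optimized dispatch
        shed = max(base - 1, 0)                # shortage event: discharge one more block
        nxt[base] += prob * (1 - q)
        nxt[shed] += prob * q
    p = nxt
    print(f"P(SOC=0) this period = {p[0]:.4f}  (usable as a forced outage rate)")
```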
Investigations into the use of energy storage in power system applications
NASA Astrophysics Data System (ADS)
Leung, Ka Kit
This thesis embodies research work on the design and implementation of novel fast-responding battery energy storage systems which, with sufficient capacity and rating, could remove the uncertainty in forecasting the annual peak demand. They would also benefit day-to-day operation by curtailing the fastest demand variations, particularly at the daily peak periods. Energy storage that can curtail peak demands, when the most difficult operational problems occur, offers a promising approach. Although AC energy cannot be stored, power electronic developments offer a fast-responding interface between the AC network and DC energy stored in batteries. Such energy storage is most effective when located near the source of load variations, i.e. near consumers in the distribution networks. The proposed three-phase, multi-purpose Battery Energy Storage System (BESS) will provide active and reactive power independent of the supply voltage, with excellent power quality in terms of its waveform. Besides these important functions applied at the distribution side of the utility, several new topologies have been developed to provide both Dynamic Voltage Regulator (DVR) and Unified Power Flow Controller (UPFC) functions for line compensation. These new topologies can provide fast and accurate control of power flow along a distribution corridor. The topologies also provide fast damping of system oscillations due to transient or dynamic disturbances. Having demonstrated the various functions that the proposed Battery Energy Storage System can provide, the final part of the thesis investigates means of improving the performance of the proposed BESS. First, there is a need to reduce the switching losses by using soft switching instead of hard switching. A soft-switching inverter using a parallel resonant dc-link (PRDCL) is proposed for use with the proposed BESS. The proposed PRDCL suppresses the dc-link voltage to zero for a very short time to allow zero-voltage switching of the inverter main switches without imposing excessive voltage and current stresses. Finally, in practice the battery terminal voltage fluctuates significantly as large current is drawn or absorbed by the battery bank. When a hysteresis controller is used to control the supply line current, the ripple magnitude and frequency of the controlled current are highly dependent on the battery voltage, the line inductance, and the band limits of the controller. Even when these parameters are constant, the switching frequency can vary over quite a large range. A novel method is proposed to overcome this problem by controlling the dc voltage level by means of a dc-dc converter, providing a controllable voltage at the inverter dc terminal irrespective of battery voltage variations. By proper control of the magnitude and frequency of the output of the dc-dc converter, the switching frequency can be made close to constant. A mathematical proof has been formulated, and results from simulation confirm that using the proposed technique the frequency band is significantly reduced; in the theoretical case, a single switching frequency is observed. The main disadvantage is the need for an extra dc-dc converter, but this is relatively cheap and easy to obtain.
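The battery-voltage dependence of the hysteresis switching frequency can be seen in an idealized single-phase model: with dc-link voltage Vdc, back-EMF e, and line inductance L, the current ramps at (Vdc/2 - e)/L upward and -(Vdc/2 + e)/L downward, so the cycle time over a band of width 2h depends directly on Vdc. A sketch at one operating point of the AC cycle (values illustrative, not from the thesis):

```python
# Instantaneous hysteresis switching frequency of an idealized single-phase
# inverter leg. A dc-dc converter that pins Vdc pins this frequency too,
# regardless of battery-voltage sag or swell.

def switching_freq(vdc: float, e: float, L: float, h: float) -> float:
    up = (vdc / 2 - e) / L                 # A/s while the upper switch conducts
    down = (vdc / 2 + e) / L               # A/s magnitude on the downward ramp
    t_cycle = 2 * h / up + 2 * h / down    # one traverse up and down the 2h band
    return 1.0 / t_cycle

L, h, e = 2e-3, 1.0, 100.0                 # H, A (half-band), V back-EMF (fixed snapshot)
for vdc in (360, 400, 440):                # battery bank sag/swell
    print(f"Vdc={vdc} V -> f_sw ~ {switching_freq(vdc, e, L, h) / 1e3:.2f} kHz")
```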
NASA Astrophysics Data System (ADS)
Goede, A. P. H.
2015-08-01
The need for storage of renewable energy (RE) generated by photovoltaic, concentrated solar and wind arises from the fact that supply and demand are ill-matched both geographically and temporally. This already causes problems of overcapacity and grid congestion in countries where the fraction of RE exceeds the 20% level. A system approach is needed, which focuses not only on the energy source but includes conversion, storage, transport, distribution, use and, last but not least, the recycling of waste. Furthermore, there is a need for more flexibility in the energy system; rather than relying on electrification alone, integration with other energy systems, for example the gas network, would yield a system less vulnerable to failure and better adapted to requirements. For example, long-term large-scale storage of electrical energy is limited by capacity, yet needed to cover weekly to seasonal demand. This limitation can be overcome by coupling the electricity net to the gas system, considering the fact that the Dutch gas network alone has a storage capacity of 552 TWh, sufficient to cover the entire EU energy demand for over a month. This lecture explores energy storage in chemical bonds. The focus is on chemicals other than hydrogen, taking advantage of the higher volumetric energy density of hydrocarbons, in this case methane, which is approximately 3.5 times more energy dense by volume. More importantly, it allows the ready use of existing gas infrastructure for energy storage, transport and distribution. Intermittent wind electricity is converted into synthetic methane in the Power to Gas (P2G) scheme by splitting feedstock CO2 and H2O into synthesis gas, a mixture of CO and H2. Syngas plays a central role in the synthesis of a range of hydrocarbon products, including methane, diesel and dimethyl ether. The splitting is accomplished by innovative means: plasmolysis and high-temperature solid oxide electrolysis. A CO2-neutral fuel cycle is established by powering the conversion step with renewable energy and recapturing the CO2 emitted after combustion, ultimately from the surrounding air to cover emissions from distributed sources. Carbon Capture and Utilisation (CCU) coupled to P2G thus creates a CO2-neutral energy system based on synthetic hydrocarbon fuel. It would enable a circular economy where the carbon cycle is closed by recovering the CO2 emitted after reuse of synthetic hydrocarbon fuel. The critical step, technically as well as economically, is the conversion of feedstock CO2/H2O into syngas rather than the capture of CO2 from ambient air.
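The density advantage follows from textbook volumetric heating values; the exact ratio depends on reference conditions and on whether higher or lower heating values are used:

```python
# Volumetric (lower) heating values at standard conditions, approximate
# textbook figures: methane ~35.8 MJ/m3, hydrogen ~10.8 MJ/m3.

lhv_ch4 = 35.8   # MJ per m3, methane
lhv_h2 = 10.8    # MJ per m3, hydrogen

print(f"CH4 / H2 volumetric energy density: {lhv_ch4 / lhv_h2:.1f}x")
# ~3.3x here; with other reference conditions the ratio comes out near the
# "approximately 3.5 times" quoted above, which is why existing pipelines and
# caverns store far more energy as methane than they could as hydrogen.
```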
40 CFR 761.35 - Storage for reuse.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Storage for reuse. 761.35 Section 761... Manufacturing, Processing, Distribution in Commerce, and Use of PCBs and PCB Items § 761.35 Storage for reuse... to the EPA Regional Administrator at least 6 months before the 5-year storage for reuse period...
40 CFR 761.35 - Storage for reuse.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Storage for reuse. 761.35 Section 761... Manufacturing, Processing, Distribution in Commerce, and Use of PCBs and PCB Items § 761.35 Storage for reuse... to the EPA Regional Administrator at least 6 months before the 5-year storage for reuse period...
40 CFR 761.35 - Storage for reuse.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Storage for reuse. 761.35 Section 761... Manufacturing, Processing, Distribution in Commerce, and Use of PCBs and PCB Items § 761.35 Storage for reuse... to the EPA Regional Administrator at least 6 months before the 5-year storage for reuse period...
40 CFR 761.35 - Storage for reuse.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Storage for reuse. 761.35 Section 761... Manufacturing, Processing, Distribution in Commerce, and Use of PCBs and PCB Items § 761.35 Storage for reuse... to the EPA Regional Administrator at least 6 months before the 5-year storage for reuse period...
Energy storage requirements of dc microgrids with high penetration renewables under droop control
Weaver, Wayne W.; Robinett, Rush D.; Parker, Gordon G.; ...
2015-01-09
Energy storage is an important design component in microgrids with high-penetration renewable sources, needed to maintain the system because of the highly variable and sometimes stochastic nature of the sources. Storage devices can be distributed close to the sources and/or at the microgrid bus. In addition, storage requirements can be minimized with a centralized control architecture, but this creates a single point of failure. Distributed droop control enables a completely decentralized architecture, but the energy storage optimization becomes more difficult. Our paper presents an approach to droop control that enables the local and bus storage requirements to be determined. Given a priori knowledge of the design structure of a microgrid and the basic cycles of the renewable sources, we find droop settings of the sources that minimize both the bus voltage variations and the overall energy storage capacity required in the system. This approach can be used in the design phase of a microgrid with a decentralized control structure to determine appropriate droop settings as well as the sizing of energy storage devices.
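A minimal sketch of the decentralized mechanism (illustrative numbers, not the paper's case study): each source follows its own V-I droop, and the bus voltage settles where injections balance the load, with no central controller involved:

```python
# DC microgrid droop sharing: source k injects i_k = (v_set_k - v_bus) / r_k,
# and the bus voltage is the point where total injection equals the load.

def droop_current(v_bus: float, v_set: float, r_droop: float) -> float:
    """Source injection under droop control."""
    return (v_set - v_bus) / r_droop

def solve_bus(sources, i_load: float) -> float:
    """Bus voltage where total droop injection equals the load current:
    sum_k (v_set_k - v)/r_k = i_load."""
    num = sum(v / r for v, r in sources) - i_load
    den = sum(1 / r for _, r in sources)
    return num / den

sources = [(380.0, 0.5), (380.0, 1.0)]      # (set-point V, droop resistance ohm)
for load in (10.0, 40.0, 80.0):             # renewable/load swings (A)
    v = solve_bus(sources, load)
    shares = ", ".join(f"{droop_current(v, vs, r):.1f} A" for vs, r in sources)
    print(f"load {load:5.1f} A -> bus {v:6.2f} V, source currents: {shares}")
# Stiffer droop (smaller r) holds the bus voltage tighter but demands more from
# local storage when the renewables swing; that trade-off is what the droop
# settings described above are optimized over.
```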
NASA Astrophysics Data System (ADS)
Chaianong, A.; Bangviwat, A.; Menke, C.
2017-07-01
Driven by decreasing PV and energy storage prices, increasing electricity costs, and policy support from the Thai government (the self-consumption era), rooftop PV and energy storage systems are going to be deployed in the country rapidly, which may disrupt the existing business model structure of Thai distribution utilities through revenue erosion and lost earnings opportunities. The retail rates that directly affect ratepayers (non-solar customers) are expected to increase. This paper focuses on a framework for evaluating the impacts of PV with and without energy storage systems on Thai distribution utilities and ratepayers by using cost-benefit analysis (CBA). Prior to the calculation of cost/benefit components, changes in energy sales need to be addressed. Government policies supporting PV generation will also help accelerate rooftop PV installation. Benefit components include avoided costs due to transmission losses and deferred distribution capacity at appropriate PV penetration levels, while cost components consist of losses in revenue, program costs, integration costs and unrecovered fixed costs. It is necessary for Thailand to compare the total costs and total benefits of rooftop PV and energy storage systems in order to adopt policy support and mitigation approaches, such as business model innovation and regulatory reform, effectively.
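The CBA skeleton is a simple tally; in the sketch below every figure is a placeholder, and only the structure (benefits minus costs, driven by the change in energy sales) reflects the framework described above:

```python
# Utility-perspective cost-benefit tally for rooftop PV (with or without storage).
# All numbers are hypothetical placeholders.

energy_sales_lost_mwh = 50_000          # self-consumed PV no longer bought from the grid
retail_rate = 110.0                     # $/MWh (hypothetical)

benefits = {
    "avoided_transmission_losses": 0.4e6,
    "deferred_distribution_capacity": 0.9e6,   # only at suitable penetration levels
}
costs = {
    "lost_revenue": energy_sales_lost_mwh * retail_rate,
    "program_costs": 0.3e6,
    "integration_costs": 0.2e6,
    "unrecovered_fixed_costs": 1.1e6,
}

net = sum(benefits.values()) - sum(costs.values())
print(f"benefits ${sum(benefits.values()) / 1e6:.1f}M - "
      f"costs ${sum(costs.values()) / 1e6:.1f}M = net ${net / 1e6:.1f}M/yr")
# A negative net is the revenue-erosion pressure that motivates rate redesign or
# new utility business models.
```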
Distributed and Dynamic Storage of Working Memory Stimulus Information in Extrastriate Cortex
Sreenivasan, Kartik K.; Vytlacil, Jason; D'Esposito, Mark
2015-01-01
The predominant neurobiological model of working memory (WM) posits that stimulus information is stored via stable elevated activity within highly selective neurons. Based on this model, which we refer to as the canonical model, the storage of stimulus information is largely associated with lateral prefrontal cortex (lPFC). A growing number of studies describe results that cannot be fully explained by the canonical model, suggesting that it is in need of revision. In the present study, we directly test key elements of the canonical model. We analyzed functional MRI data collected as participants performed a task requiring WM for faces and scenes. Multivariate decoding procedures identified patterns of activity containing information about the items maintained in WM (faces, scenes, or both). While information about WM items was identified in extrastriate visual cortex (EC) and lPFC, only EC exhibited a pattern of results consistent with a sensory representation. Information in both regions persisted even in the absence of elevated activity, suggesting that elevated population activity may not represent the storage of information in WM. Additionally, we observed that WM information was distributed across EC neural populations that exhibited a broad range of selectivity for the WM items rather than restricted to highly selective EC populations. Finally, we determined that activity patterns coding for WM information were not stable, but instead varied over the course of a trial, indicating that the neural code for WM information is dynamic rather than static. Together, these findings challenge the canonical model of WM. PMID:24392897
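A sketch of the multivariate decoding step on synthetic data (scikit-learn logistic regression as a stand-in for whatever classifier the study used; the signal structure below is illustrative):

```python
# Train a linear classifier to tell face-trials from scene-trials using voxel
# activity patterns; cross-validated accuracy above chance is the evidence that
# a region carries WM information. Synthetic data replaces the fMRI patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
labels = rng.integers(0, 2, n_trials)            # 0 = face trial, 1 = scene trial

# weak category signal distributed over many mildly selective voxels, as in the
# "broad range of selectivity" finding above
signal = rng.normal(0, 0.3, n_voxels)
patterns = rng.normal(0, 1.0, (n_trials, n_voxels)) + np.outer(labels, signal)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (0.50 = chance)")
```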
Technologies for Large Data Management in Scientific Computing
NASA Astrophysics Data System (ADS)
Pace, Alberto
2014-01-01
In recent years, intense usage of computing has been the main strategy of investigations in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago. This paper focuses on the strategies in use: it reviews the various components that are necessary for an effective solution that ensures the storage, the long-term preservation, and the worldwide distribution of large quantities of data that are necessary in a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.
INDIGO-DataCloud solutions for Earth Sciences
NASA Astrophysics Data System (ADS)
Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Fiore, Sandro; Monna, Stephen; Chen, Yin
2017-04-01
INDIGO-DataCloud (https://www.indigo-datacloud.eu/) is a European Commission funded project aiming to develop a data and computing platform targeting scientific communities, deployable on multiple hardware and provisioned over hybrid (private or public) e-infrastructures. The development of INDIGO solutions covers the different layers in cloud computing (IaaS, PaaS, SaaS) and provides tools to exploit resources like HPC or GPGPUs. INDIGO is oriented to support European scientific research communities, which are well represented in the project. Twelve different case studies have been analyzed in detail from different fields: Biological & Medical sciences, Social sciences & Humanities, Environmental and Earth sciences, and Physics & Astrophysics. INDIGO-DataCloud provides solutions to emerging challenges in Earth Science such as: -Enabling an easy deployment of community services at different cloud sites. Many Earth Science research infrastructures involve distributed observation stations across countries, and also have distributed data centers to support the corresponding data acquisition and curation. There is a need to easily deploy new data center services as the research infrastructure continues to expand. As an example, LifeWatch (ESFRI, Ecosystems and Biodiversity) uses INDIGO solutions to manage the deployment of services to perform complex hydrodynamics and water quality modelling over a cloud computing environment, predicting algae blooms, using Docker technology: TOSCA requirement description, Docker repository, Orchestrator for deployment, AAI (AuthN, AuthZ), and OneData (Distributed Storage System). -Supporting Big Data analysis. Nowadays, many Earth Science research communities produce large amounts of data and are challenged by the difficulties of processing and analysing it. A climate model intercomparison data analysis case study for the European Network for Earth System Modelling (ENES) community has been set up, based on the Ophidia big data analysis framework and the Kepler workflow management system. Such services normally involve a large and distributed set of data and computing resources. In this regard, this case study exploits the INDIGO PaaS for a flexible and dynamic allocation of resources at the infrastructural level. -Providing distributed data storage solutions. In order to allow scientific communities to perform heavy computation on huge datasets, INDIGO provides global data access solutions allowing researchers to access data in a distributed environment regardless of its location, and also to publish and share their research results with public or closed communities. INDIGO solutions that support access to distributed data storage (OneData) are being tested on EMSO infrastructure (Ocean Sciences and Geohazards) data. Another aspect of interest for the EMSO community is efficient data processing by exploiting INDIGO services like the PaaS Orchestrator. Further, for HPC exploitation, a new solution named udocker has been implemented, enabling users to execute Docker containers on supercomputers without requiring administration privileges. This presentation will overview INDIGO solutions that are interesting and useful for Earth science communities and will show how they can be applied to other case studies.
Effects of voltage control in utility interactive dispersed storage and generation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkham, H.; Das, R.
1983-03-15
When a small generator is connected to the distribution system, the voltage at the point of interconnection is determined largely by the system and not the generator. This report examines the effect on the generator, on the load voltage and on the distribution system of a number of different voltage control strategies in the generator. Synchronous generators with three kinds of exciter control are considered, as well as induction generators and dc/ac inverters, with and without capacitor compensation. The effect of varying input power during operation (which may be experienced by generators based on renewable resources) is explored, as well as the effect of connecting and disconnecting the generator at ten percent of its rated power.
Policy-based Distributed Data Management
NASA Astrophysics Data System (ADS)
Moore, R. W.
2009-12-01
The analysis and understanding of climate variability and change builds upon access to massive collections of observational and simulation data. The analyses involve distributed computing, both at the storage systems (which support data subsetting) and at compute engines (for assimilation of observational data into simulations). The integrated Rule Oriented Data System (iRODS) organizes the distributed data into collections to facilitate enforcement of management policies, support remote data processing, and enable development of reference collections. Currently at RENCI, the iRODS data grid is being used to manage ortho-photos and lidar data for the State of North Carolina, provide a unifying storage environment for engagement centers across the state, support distributed access to visualizations of weather data, and is being explored to manage and disseminate collections of ensembles of meteorological and hydrological model results. In collaboration with the National Climatic Data Center, an iRODS data grid is being established to support data transmission from NCDC to ORNL, and to integrate NCDC archives with ORNL compute services. To manage the massive data transfers, parallel I/O streams are used between High Performance Storage System tape archives and the supercomputers at ORNL. Further, we are exploring the movement and management of large RADAR and in situ datasets to be used for data mining between RENCI and NCDC, and for the distributed creation of decision support and climate analysis tools. The iRODS data grid supports all phases of the scientific data life cycle, from management of data products for a project, to sharing of data between research institutions, to publication of data in a digital library, to preservation of data for use in future research projects. Each phase is characterized by a broader user community, with higher expectations for more detailed descriptions and analysis mechanisms for manipulating the data. The higher usage requirements are enforced by management policies that define the required metadata, the required data formats, and the required analysis tools. The iRODS policy based data management system automates the creation of the community chosen data products, validates integrity and authenticity assessment criteria, and enforces management policies across all accesses of the system.
Modeling of thermal storage systems in MILP distributed energy resource models
Steen, David; Stadler, Michael; Cardoso, Gonçalo; ...
2014-08-04
Thermal energy storage (TES) and distributed generation technologies, such as combined heat and power (CHP) or photovoltaics (PV), can be used to reduce energy costs and decrease CO2 emissions from buildings by shifting energy consumption to times with less emissions and/or lower energy prices. To determine the feasibility of investing in TES in combination with other distributed energy resources (DER), mixed integer linear programming (MILP) can be used. Such a MILP model is the well-established Distributed Energy Resources Customer Adoption Model (DER-CAM); however, it currently uses only a simplified TES model to guarantee linearity and short run-times. Loss calculations are based only on the energy contained in the storage. This paper presents a new DER-CAM TES model that allows improved tracking of losses based on ambient and storage temperatures, and compares results with the previous version. A multi-layer TES model is introduced that retains linearity and avoids creating an endogenous optimization problem. The improved model increases the accuracy of the estimated storage losses and enables use of heat pumps for low-temperature storage charging. Ultimately, results indicate that the previous model overestimates the attractiveness of TES investments for cases without the possibility to invest in heat pumps and underestimates it for some locations when heat pumps are allowed. Despite a variation in optimal technology selection between the two models, the objective function value stays quite stable, illustrating the complexity of optimal DER sizing problems in buildings and microgrids.
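To make the modelling approach concrete, here is a deliberately small linear-programming sketch of storage dispatch with state-dependent losses, written with the PuLP library; it is far simpler than DER-CAM (no investment decisions, a single storage layer, and invented prices, loads, and loss rates):

    import pulp

    T = range(24)
    price = [0.08] * 8 + [0.20] * 12 + [0.08] * 4   # $/kWh, assumed
    heat_load = [5.0] * 24                           # kWh per hour, assumed
    loss = 0.02                                      # per-hour fractional loss

    m = pulp.LpProblem("tes_dispatch", pulp.LpMinimize)
    buy = pulp.LpVariable.dicts("buy", T, lowBound=0)
    chg = pulp.LpVariable.dicts("chg", T, lowBound=0, upBound=10)
    dis = pulp.LpVariable.dicts("dis", T, lowBound=0, upBound=10)
    soc = pulp.LpVariable.dicts("soc", T, lowBound=0, upBound=40)

    m += pulp.lpSum(price[t] * buy[t] for t in T)    # energy-cost objective
    for t in T:
        prev = soc[t - 1] if t > 0 else 0.0          # storage starts empty
        m += soc[t] == (1 - loss) * prev + chg[t] - dis[t]  # storage balance
        m += buy[t] + dis[t] == heat_load[t] + chg[t]       # load balance

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    print("energy cost:", pulp.value(m.objective))

Because the loss term multiplies the stored energy, charging in cheap hours is only worthwhile when the price spread outweighs the accumulated losses, which is the qualitative effect the improved DER-CAM loss tracking is meant to capture.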
Zhao, Dehua; Wang, Penghe; Zuo, Jie; Zhang, Hui; An, Shuqing; Ramesh, Reddy K
2017-08-01
Numerous drought indices have been developed over the past several decades. However, few studies have focused on the suitability of indices for studies of ephemeral wetlands. The objective is to answer the following question: can the traditional large-scale drought indices characterize drought severity in shallow water wetlands such as the Everglades? The question was approached from two perspectives: the available water quantity and the response of wetland ecosystems to drought. The results showed the unsuitability of traditional large-scale drought indices for characterizing the actual available water quantity based on two findings. (1) Large spatial variations in precipitation (P), potential evapotranspiration (PE), water table depth (WTD) and the monthly water storage change (SC) were observed in the Everglades; notably, the spatial variation in SC, which reflects the monthly water balance, was 1.86 and 1.62 times larger than the temporal variation between seasons and between years, respectively. (2) The large-scale water balance measured based on the water storage variation had an average indicating efficiency (IE) of only 60.01% due to the redistribution of interior water. The spatial distribution of variations in the Normalized Different Vegetation Index (NDVI) in the 2011 dry season showed significantly positive, significantly negative and weak correlations with the minimum WTD in wet prairies, graminoid prairies and sawgrass wetlands, respectively. The significant and opposite correlations imply the unsuitability of the traditional large-scale drought indices in evaluating the effect of drought on shallow water wetlands. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cooperative high-performance storage in the accelerated strategic computing initiative
NASA Technical Reports Server (NTRS)
Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark
1996-01-01
The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed, storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.
NASA Astrophysics Data System (ADS)
Wang, Qian; Lu, Guangqi; Li, Xiaoyu; Zhang, Yichi; Yun, Zejian; Bian, Di
2018-01-01
To take full advantage of the energy storage system (ESS), factors such as the service life of the distributed energy storage system (DESS) and the load should be considered when establishing the optimization model. To reduce the complexity of the DESS load-shifting solution procedure, the loss coefficient and the equal-capacity-ratio distribution principle were adopted in this paper. Firstly, the model was established considering constraint conditions on the cycles, depth, and power of the charge-discharge of the ESS, as well as the typical daily load curves. Then, a dynamic programming method was used to solve the model in real time, in which the power difference Δs, the real-time revised energy storage capacity Sk, and the permitted error on the depth of charge-discharge were introduced to optimize the solution process. The simulation results show that an optimized result was achieved in which the charge-discharge of the energy storage system was not executed when load shifting would not reduce the load variance, thereby increasing the service life of the ESS.
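For illustration only, a toy dynamic program in the spirit the abstract describes: the state of charge is discretized and hourly charge/discharge decisions are chosen to flatten a daily load curve; the life-cycle, depth, and power constraints are reduced to simple bounds, and all data are synthetic:

    import numpy as np

    load = np.array([30, 28, 27, 27, 29, 35, 45, 55, 60, 58, 55, 50,
                     48, 47, 50, 55, 62, 70, 72, 65, 55, 45, 38, 33], float)
    target = load.mean()
    levels = np.arange(0.0, 41.0, 5.0)        # SOC grid [kWh], assumed
    p_max = 10.0                              # charge/discharge limit [kW]

    n, H = len(levels), len(load)
    INF = 1e18
    cost = np.full((H + 1, n), INF)
    cost[0, 0] = 0.0                          # storage starts empty
    prev = np.zeros((H, n), dtype=int)

    for t in range(H):                        # forward recursion over hours
        for j in range(n):
            if cost[t, j] >= INF:
                continue
            for k in range(n):
                p = levels[k] - levels[j]     # >0 charging, <0 discharging
                if abs(p) > p_max:
                    continue
                c = cost[t, j] + (load[t] + p - target) ** 2  # variance term
                if c < cost[t + 1, k]:
                    cost[t + 1, k] = c
                    prev[t, k] = j

    k = int(np.argmin(cost[H]))               # best terminal SOC index
    plan = []
    for t in range(H - 1, -1, -1):            # backtrack the optimal policy
        j = prev[t, k]
        plan.append(levels[k] - levels[j])
        k = j
    print("hourly storage power [kW]:", plan[::-1])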
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-20
... related to operations and maintenance of storage and distribution facilities for petroleum products within the Colton and Colton North Terminals, and with habitat restoration and management on a proposed on... maintenance of storage and distribution facilities for petroleum products on approximately 20 acres (ac) (8...
9 CFR 317.13 - Storage and distribution of labels and containers bearing official marks.
Code of Federal Regulations, 2010 CFR
2010-01-01
... containers bearing official marks. 317.13 Section 317.13 Animals and Animal Products FOOD SAFETY AND... General § 317.13 Storage and distribution of labels and containers bearing official marks. Labels, wrappers, and containers bearing any official marks, with or without the establishment number, may be...
Remote-Sensing Data Distribution and Processing in the Cloud at the ASF DAAC
NASA Astrophysics Data System (ADS)
Stoner, C.; Arko, S. A.; Nicoll, J. B.; Labelle-Hamer, A. L.
2016-12-01
The Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) has been tasked to archive and distribute data from both SENTINEL-1 satellites and from the NASA-ISRO Synthetic Aperture Radar (NISAR) satellite in a cost-effective manner. In order to best support processing and distribution of these large data sets for users, the ASF DAAC enhanced its data system in a number of ways that will be detailed in this presentation. The SENTINEL-1 mission comprises a constellation of two polar-orbiting satellites, operating day and night performing C-band Synthetic Aperture Radar (SAR) imaging, enabling them to acquire imagery regardless of the weather. SENTINEL-1A was launched by the European Space Agency (ESA) in April 2014. SENTINEL-1B is scheduled to launch in April 2016. The NISAR satellite is designed to observe and take measurements of some of the planet's most complex processes, including ecosystem disturbances, ice-sheet collapse, and natural hazards such as earthquakes, tsunamis, volcanoes, and landslides. NISAR will employ radar imaging, polarimetry, and interferometry techniques using the SweepSAR technology for full-resolution wide-swath imaging. NISAR data files are large, making storage and processing a challenge for conventional store-and-download systems. To effectively process, store, and distribute petabytes of data in a high-performance computing environment, ASF took a long view with regard to technology choices and picked a path of maximum flexibility and software re-use. To that end, this software tools and services presentation will cover Web Object Storage (WOS) and the ability to seamlessly move from local sunk-cost hardware to a public cloud such as Amazon Web Services (AWS). A prototype of the SENTINEL-1A system in AWS, as well as a local hardware solution, will be examined to explain the pros and cons of each. In preparation for NISAR files, which will be even larger than SENTINEL-1A's, ASF has embarked on a number of cloud initiatives, including processing in the cloud at scale, processing data on-demand, and processing end-user computations on DAAC data in the cloud.
National Storage Laboratory: a collaborative research project
NASA Astrophysics Data System (ADS)
Coyne, Robert A.; Hulen, Harry; Watson, Richard W.
1993-01-01
The grand challenges of science and industry that are driving computing and communications have created corresponding challenges in information storage and retrieval. An industry-led collaborative project has been organized to investigate technology for storage systems that will be the future repositories of national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory through its National Energy Research Supercomputer Center (NERSC) will participate in the project as the operational site and provider of applications. The expected result is the creation of a National Storage Laboratory to serve as a prototype and demonstration facility. It is expected that this prototype will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte-class files at gigabit-per-second data rates. Specifically, the collaboration expects to make significant advances in hardware, software, and systems technology in four areas of need, (1) network-attached high performance storage; (2) multiple, dynamic, distributed storage hierarchies; (3) layered access to storage system services; and (4) storage system management.
Use of Schema on Read in Earth Science Data Archives
NASA Technical Reports Server (NTRS)
Hegde, Mahabaleshwara; Smit, Christine; Pilone, Paul; Petrenko, Maksym; Pham, Long
2017-01-01
Traditionally, NASA Earth Science data archives have used file-based storage with proprietary data file formats, such as HDF and HDF-EOS, which are optimized to support fast and efficient storage of spaceborne and model data as they are generated. The use of file-based storage essentially imposes an indexing strategy based on data dimensions. In most cases, NASA Earth Science data uses time as the primary index, leading to poor performance in accessing data in spatial dimensions. For example, producing a time series for a single spatial grid cell involves accessing a large number of data files. With exponential growth in data volume due to the ever-increasing spatial and temporal resolution of the data, using file-based archives poses significant performance and cost barriers to data discovery and access. Storing and disseminating data in proprietary data formats imposes an additional access barrier for users outside the mainstream research community. At the NASA Goddard Earth Sciences Data Information Services Center (GES DISC), we have evaluated applying the schema-on-read principle to data access and distribution. We used Apache Parquet to store geospatial data, and have exposed data through Amazon Web Services (AWS) Athena, AWS Simple Storage Service (S3), and Apache Spark. Using the schema-on-read approach allows customization of indexing, spatially or temporally, to suit the data access pattern. The storage of data in open formats such as Apache Parquet has widespread support in popular programming languages. A wide range of solutions for handling big data lowers the access barrier for all users. This presentation will discuss formats used for data storage, frameworks with support for schema-on-read used for data access, and common use cases covering data usage patterns seen in a geospatial data archive.
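A small sketch of the schema-on-read idea with Parquet, using pandas with the pyarrow engine: partitioning the archive on a spatial key makes a per-grid-cell time series read only the matching partitions instead of every time-step file. The path, column names, and binning are hypothetical, not the GES DISC layout:

    import pandas as pd

    df = pd.DataFrame({
        "time": pd.date_range("2020-01-01", periods=1000, freq="h"),
        "lat_bin": [i % 18 for i in range(1000)],     # hypothetical spatial bins
        "lon_bin": [i % 36 for i in range(1000)],
        "precip": [0.1] * 1000,                       # stand-in variable
    })
    # Partitioning on the spatial bins is the "index" choice: readers that
    # filter on lat_bin/lon_bin touch only the matching directory branches.
    df.to_parquet("archive", partition_cols=["lat_bin", "lon_bin"])

    ts = pd.read_parquet("archive", filters=[("lat_bin", "=", 3),
                                             ("lon_bin", "=", 7)])
    print(len(ts), "rows read for one grid cell's full time series")

Choosing time-major partitioning instead would recreate the access asymmetry the abstract describes; schema-on-read simply lets the archive pick the partitioning that matches the dominant query pattern.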
Unconditional security from noisy quantum storage
NASA Astrophysics Data System (ADS)
Wehner, Stephanie
2010-03-01
We consider the implementation of two-party cryptographic primitives based on the sole physical assumption that no large-scale reliable quantum storage is available to the cheating party. An important example of such a task is secure identification. Here, Alice wants to identify herself to Bob (possibly an ATM) without revealing her password. More generally, Alice and Bob wish to solve problems where Alice holds an input x (e.g. her password), and Bob holds an input y (e.g. the password an honest Alice should possess), and they want to obtain the value of some function f(x,y) (e.g. the equality function). Security means that the legitimate users should not learn anything beyond this specification. That is, Alice should not learn anything about y and Bob should not learn anything about x, other than what they may be able to infer from the value of f(x,y). We show that any such problem can be solved securely in the noisy-storage model by constructing protocols for bit commitment and oblivious transfer, where we prove security against the most general attack. Our protocols can be implemented with present-day hardware used for quantum key distribution. In particular, no quantum storage is required for the honest parties. Our work raises a large number of immediate theoretical as well as experimental questions related to many aspects of quantum information science, such as understanding the information-carrying properties of quantum channels and memories, randomness extraction, min-entropy sampling, as well as constructing small handheld devices which are suitable for the task of secure identification. Full version available at arXiv:0906.1030 (theoretical) and arXiv:0911.2302 (practically oriented).
The influence of small mammal burrowing activity on water storage at the Hanford Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landeen, D.S.
The amount and rate at which water may penetrate a protective barrier and come into contact with buried radioactive waste is a major concern. Because burrowing animals eventually will reside on the surface of any protective barrier, the effect these burrow systems may have on the loss or retention of water needs to be determined. The first section of this document summarizes the known literature relative to small mammals and the effects that burrowing activities have on water distribution, infiltration, and the overall impact of burrows on the ecosystem. Topics that are summarized include burrow air pressures, airflow, burrow humidity, microtopography, mounding, infiltration, climate, soil evaporation, and discussions of large pores relative to water distribution. The second section of this document provides the results of the study that was conducted at the Hanford Site to determine what effect small mammal burrows have on water storage. This Biointrusion task is identified in the Permanent Isolation Surface Barrier Development Plan in support of protective barriers. This particular animal intrusion task is one part of the overall animal intrusion task identified in the Animal Intrusion Test Plan.
Comprehensive security framework for the communication and storage of medical images
NASA Astrophysics Data System (ADS)
Slik, David; Montour, Mike; Altman, Tym
2003-05-01
Confidentiality, integrity verification and access control of medical imagery and associated metadata is critical for the successful deployment of integrated healthcare networks that extend beyond the department level. As medical imagery continues to become widely accessed across multiple administrative domains and geographically distributed locations, image data should be able to travel and be stored on untrusted infrastructure, including public networks and server equipment operated by external entities. Given these challenges associated with protecting large-scale distributed networks, measures must be taken to protect patient identifiable information while guarding against tampering, denial of service attacks, and providing robust audit mechanisms. The proposed framework outlines a series of security practices for the protection of medical images, incorporating Transport Layer Security (TLS), public and secret key cryptography, certificate management and a token based trusted computing base. It outlines measures that can be utilized to protect information stored within databases, online and nearline storage, and during transport over trusted and untrusted networks. In addition, it provides a framework for ensuring end-to-end integrity of image data from acquisition to viewing, and presents a potential solution to the challenges associated with access control across multiple administrative domains and institution user bases.
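As one concrete piece of such a framework, a minimal integrity-verification sketch: an HMAC computed at acquisition and re-checked at viewing detects tampering while the image crosses untrusted storage or networks. This is a generic illustration using Python's standard hmac/hashlib modules, not the paper's exact token-based design, and key management is out of scope:

    import hmac, hashlib

    def sign_image(pixel_bytes: bytes, key: bytes) -> str:
        return hmac.new(key, pixel_bytes, hashlib.sha256).hexdigest()

    def verify_image(pixel_bytes: bytes, key: bytes, tag: str) -> bool:
        # constant-time comparison avoids timing side channels
        return hmac.compare_digest(sign_image(pixel_bytes, key), tag)

    key = b"key-issued-by-the-trusted-computing-base"  # placeholder secret
    image = bytes(range(256))                          # stand-in for pixel data
    tag = sign_image(image, key)                       # stored with the metadata
    assert verify_image(image, key, tag)               # intact end-to-end
    assert not verify_image(image + b"\x00", key, tag) # tampering is detected

Because the tag travels with the image metadata, any intermediate storage tier or network hop can be untrusted: only holders of the key can produce a tag that verifies.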
3-D transient hydraulic tomography in unconfined aquifers with fast drainage response
NASA Astrophysics Data System (ADS)
Cardiff, M.; Barrash, W.
2011-12-01
We investigate, through numerical experiments, the viability of three-dimensional transient hydraulic tomography (3DTHT) for identifying the spatial distribution of groundwater flow parameters (primarily, hydraulic conductivity K) in permeable, unconfined aquifers. To invert the large amount of transient data collected from 3DTHT surveys, we utilize an iterative geostatistical inversion strategy in which outer iterations progressively increase the number of data points fitted and inner iterations solve the quasi-linear geostatistical formulas of Kitanidis. In order to base our numerical experiments around realistic scenarios, we utilize pumping rates, geometries, and test lengths similar to those attainable during 3DTHT field campaigns performed at the Boise Hydrogeophysical Research Site (BHRS). We also utilize hydrologic parameters that are similar to those observed at the BHRS and in other unconsolidated, unconfined fluvial aquifers. In addition to estimating K, we test the ability of 3DTHT to estimate both average storage values (specific storage Ss and specific yield Sy) as well as spatial variability in storage coefficients. The effects of model conceptualization errors during unconfined 3DTHT are investigated including: (1) assuming constant storage coefficients during inversion and (2) assuming stationary geostatistical parameter variability. Overall, our findings indicate that estimation of K is slightly degraded if storage parameters must be jointly estimated, but that this effect is quite small compared with the degradation of estimates due to violation of "structural" geostatistical assumptions. Practically, we find for our scenarios that assuming constant storage values during inversion does not appear to have a significant effect on K estimates or uncertainty bounds.
NASA Astrophysics Data System (ADS)
Smakhtin, V.
2017-12-01
Humans have stored water, in various forms, for ages, coping with water resources variability and its extremes, floods and droughts. Storage per capita, and other storage-related indicators, have essentially become one way of reflecting the progress of economic development. Massive investments went into large surface water reservoirs that have become a characteristic feature of the earth's landscapes, bringing both benefits and controversy. As water variability progressively increases with the changing climate globally, on one hand, and the idea of sustainable development receives strong traction, on the other, it may be worthwhile to comprehensively examine current trends and future prospects for water storage development. The task is surely big, to say the least. The presentation will aim to initiate a structured discussion on this multi-facet issue and identify which aspects and trends of water storage development may be most important in the context of the Sustainable Development Goals, the Sendai Framework for Disaster Risk Reduction, and the Paris Agreement on Climate Change, and examine how, where, and to what extent water storage planning can be improved. It will cover questions such as: i) aging of large water storage infrastructure, the current extent of this trend in various geographical regions, and possible impacts on water security and security of nations; ii) improved water storage development planning overall, in the context of various water development alternatives and storage options themselves as well as their combinations; iii) prospects for another "storage revolution", a rapid increase in dam numbers, and where, if at all, this is most likely; iv) recent events in storage development, e.g. whether dam decommissioning is a trend that is picking up pace, or whether some developing economies in Asia can do without going through a period of water storage construction, with alternatives, or suggestions for alleviation of negative impacts; v) the role of subsurface storage as an alternative to large surface dams; and vi) the role of nature-based solutions in large storage development and overall storage functioning and management, to mention some. The presentation will call for a coordinated effort that will help with environmentally and economically sound strategies for future storage development in national water planning.
NASA Astrophysics Data System (ADS)
Sharma, D.; Patnaik, S.; Reager, J. T., II; Biswal, B.
2017-12-01
Despite the fact that streamflow occurs mainly due to depletion of storage, our knowledge of how a drainage basin stores and releases water is very limited because of measurement limitations. As a result, storage has largely remained an elusive entity in hydrological analysis and modelling. A window of opportunity, however, is given to us by the GRACE satellite mission, which provides storage anomaly (TWSA) data for the entire globe. Many studies have used TWSA data for storage-discharge analysis, uncovering a range of potential applications of TWSA data. Here we argue that the capability of the GRACE satellite mission has not been fully explored, as most studies in the past have performed storage-discharge analysis using monthly TWSA data for large river basins. With such coarse data we are quite unlikely to fully understand the variation of storage and discharge in space and time. In this study, we therefore use daily TWSA data for several mid-sized catchments and perform storage-discharge analysis. The daily storage-discharge relationship is highly dynamic, which generates a large amount of scatter in storage-discharge plots. Yet a careful analysis of those scatter plots reveals interesting information on the storage-discharge relationships of basins, particularly by looking at the relationships during individual recession events. It is observed that the storage-discharge relationship is exponential in nature, contrary to the general assumption that the relationship is linear. We find that there is a strong relationship between the power-law recession coefficient and initial storage (TWSA at the beginning of a recession event). Furthermore, appreciable relationships are observed between the recession coefficient and past TWSA values, implying that storage takes time to deplete completely. Overall, the insights drawn from this study expand our knowledge of how discharge is dynamically linked to storage.
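A minimal sketch of the event-wise analysis described here: fit a power-law recession -dQ/dt = aQ^b to one recession limb and, across many events, relate the fitted coefficient to the TWSA at each event's start. The discharge series below is synthetic:

    import numpy as np

    q = 10.0 * np.exp(-0.15 * np.arange(20))   # synthetic recession limb [mm/d]
    dqdt = np.diff(q)                          # daily dQ/dt (negative values)
    qm = 0.5 * (q[1:] + q[:-1])                # midpoint discharge
    # log(-dQ/dt) = log(a) + b*log(Q): ordinary least squares in log space
    b, loga = np.polyfit(np.log(qm), np.log(-dqdt), 1)
    print(f"recession exponent b = {b:.2f}, coefficient a = {np.exp(loga):.3f}")
    # Across many events, one would then regress log(a) against the TWSA value
    # at each event's start to test the storage dependence reported above.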
7 CFR 58.525 - Storage of finished product.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Storage of finished product. 58.525 Section 58.525... Procedures § 58.525 Storage of finished product. Cottage cheese after packaging shall be promptly stored at a... distribution and storage prior to sale the product should be maintained at a temperature of 45 °F. or lower...
7 CFR 58.525 - Storage of finished product.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Storage of finished product. 58.525 Section 58.525... Procedures § 58.525 Storage of finished product. Cottage cheese after packaging shall be promptly stored at a... distribution and storage prior to sale the product should be maintained at a temperature of 45 °F. or lower...
7 CFR 58.525 - Storage of finished product.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Storage of finished product. 58.525 Section 58.525... Procedures § 58.525 Storage of finished product. Cottage cheese after packaging shall be promptly stored at a... distribution and storage prior to sale the product should be maintained at a temperature of 45 °F. or lower...
2010-09-01
Cloud computing describes a new distributed computing paradigm for IT data and services that involves over-the-Internet provision of dynamically scalable and often virtualized resources. While cost reduction and flexibility in storage, services, and maintenance are important considerations when deciding on whether or how to migrate data and applications to the cloud, large organizations like the Department of Defense need to consider the organization and structure of data on the cloud, and the operations on such data, in order to reap the full benefit of cloud computing.
Seasonal air and water mass redistribution effects on LAGEOS and Starlette
NASA Technical Reports Server (NTRS)
Gutierrez, Roberto; Wilson, Clark R.
1987-01-01
Zonal geopotential coefficients have been computed from average seasonal variations in global air and water mass distribution. These coefficients are used to predict the seasonal variations of LAGEOS' and Starlette's orbital node, the node residual, and the seasonal variation in the 3rd degree zonal coefficient for Starlette. A comparison of these predictions with the observed values indicates that air pressure and, to a lesser extent, water storage may be responsible for a large portion of the currently unmodeled variation in the earth's gravity field.
From Physics to industry: EOS outside HEP
NASA Astrophysics Data System (ADS)
Espinal, X.; Lamanna, M.
2017-10-01
In the competitive market for large-scale storage solutions, the current main disk storage system at CERN, EOS, has demonstrated its excellence in the multi-Petabyte high-concurrency regime. It has also shown a disruptive potential in powering sync and share services and in supporting innovative analysis environments alongside the storage of LHC data. EOS has also generated interest as a generic storage solution, ranging from university systems to very large installations for non-HEP applications.
Optimizing tertiary storage organization and access for spatio-temporal datasets
NASA Technical Reports Server (NTRS)
Chen, Ling Tony; Rotem, Doron; Shoshani, Arie; Drach, Bob; Louis, Steve; Keating, Meridith
1994-01-01
We address in this paper data management techniques for efficiently retrieving requested subsets of large datasets stored on mass storage devices. This problem represents a major bottleneck that can negate the benefits of fast networks, because the time to access a subset from a large dataset stored on a mass storage system is much greater than the time to transmit that subset over a network. This paper focuses on very large spatial and temporal datasets generated by simulation programs in the area of climate modeling, but the techniques developed can be applied to other applications that deal with large multidimensional datasets. The main requirement we have addressed is the efficient access of subsets of information contained within much larger datasets, for the purpose of analysis and interactive visualization. We have developed data partitioning techniques that partition datasets into 'clusters' based on analysis of data access patterns and storage device characteristics. The goal is to minimize the number of clusters read from mass storage systems when subsets are requested. We emphasize in this paper proposed enhancements to current storage server protocols to permit control over the physical placement of data on storage devices. We also discuss in some detail the aspects of the interface between the application programs and the mass storage system, as well as a workbench to help scientists design the best reorganization of a dataset for anticipated access patterns.
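A toy version of the partitioning idea, under invented extents and a simplified cost model: enumerate candidate cluster (chunk) shapes and keep the one that a typical subset request intersects least:

    import itertools

    dims = ("time", "lat", "lon")                    # dataset 1200x180x360, assumed
    request = {"time": 1, "lat": 180, "lon": 360}    # typical subset, assumed
    budget = 1200 * 18 * 36                          # max cells per cluster, assumed

    def clusters_read(chunk):
        # number of clusters the request intersects, per dimension
        n = 1
        for d in dims:
            n *= -(-request[d] // chunk[d])          # ceiling division
        return n

    options = itertools.product((1, 10, 100, 1200), (18, 180), (36, 360))
    best = min((c for c in options if c[0] * c[1] * c[2] <= budget),
               key=lambda c: clusters_read(dict(zip(dims, c))))
    print("best cluster shape (time, lat, lon):", best)

For the time-slice access pattern assumed here, the search picks a thin-in-time, wide-in-space cluster shape, so the whole request lands in a single cluster; a time-series access pattern would flip that choice.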
Analysis of Energy Storage System with Distributed Hydrogen Production and Gas Turbine
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Bartela, Łukasz; Dubiel-Jurgaś, Klaudia
2017-12-01
This paper presents the concept of an energy storage system based on power-to-gas-to-power (P2G2P) technology. The system consists of a gas turbine co-firing hydrogen, which is supplied from distributed electrolysis installations powered by wind farms located a short distance from the potential construction site of the gas turbine. In the paper, a location for this type of investment was selected. As part of the analyses, the area of wind farms covered by the storage system and the share of electricity production subjected to storage were varied. The dependence of the hydrogen production potential and the gas turbine operating time on these varied quantities was analyzed. Additionally, preliminary economic analyses of the proposed energy storage system were carried out.
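A back-of-envelope sketch of the P2G2P chain, with textbook-order efficiency assumptions rather than the paper's figures:

    E_wind = 100.0     # MWh of wind electricity diverted to storage, assumed
    eta_el = 0.65      # electrolyser efficiency (LHV basis), assumed
    eta_gt = 0.38      # gas-turbine electric efficiency, assumed
    LHV_H2 = 33.3      # kWh per kg of hydrogen (lower heating value)

    h2_kg = E_wind * 1.0e3 * eta_el / LHV_H2           # hydrogen produced [kg]
    E_out = h2_kg * LHV_H2 * eta_gt / 1.0e3            # electricity recovered [MWh]
    print(f"{h2_kg:,.0f} kg H2 stored, {E_out:.1f} MWh recovered "
          f"({100 * E_out / E_wind:.0f}% round trip)")

Under these assumptions roughly a quarter of the stored electricity comes back, which is why the sizing of the wind-farm area and the stored share of production dominate the economics the abstract examines.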
EMASS (tm): An expandable solution for NASA space data storage needs
NASA Technical Reports Server (NTRS)
Peterson, Anthony L.; Cardwell, P. Larry
1992-01-01
The data acquisition, distribution, processing, and archiving requirements of NASA and other U.S. Government data centers present significant data management challenges that must be met in the 1990's. The Earth Observing System (EOS) project alone is expected to generate daily data volumes greater than 2 Terabytes (2 x 10(exp 12) Bytes). As the scientific community makes use of this data, their work product will result in larger, increasingly complex data sets to be further exploited and managed. The challenge for data storage systems is to satisfy the initial data management requirements with cost-effective solutions that provide for planned growth. This paper describes the expandable architecture of the E-Systems Modular Automated Storage System (EMASS (TM)), a mass storage system which is designed to support NASA's data capture, storage, distribution, and management requirements into the 21st century.
EMASS (trademark): An expandable solution for NASA space data storage needs
NASA Technical Reports Server (NTRS)
Peterson, Anthony L.; Cardwell, P. Larry
1991-01-01
The data acquisition, distribution, processing, and archiving requirements of NASA and other U.S. Government data centers present significant data management challenges that must be met in the 1990's. The Earth Observing System (EOS) project alone is expected to generate daily data volumes greater than 2 Terabytes (2 x 10(exp 12) Bytes). As the scientific community makes use of this data, their work will result in larger, increasingly complex data sets to be further exploited and managed. The challenge for data storage systems is to satisfy the initial data management requirements with cost-effective solutions that provide for planned growth. The expandable architecture of the E-Systems Modular Automated Storage System (EMASS(TM)), a mass storage system which is designed to support NASA's data capture, storage, distribution, and management requirements into the 21st century, is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, J.; Herner, K.; Jayatilaka, B.
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
NASA Astrophysics Data System (ADS)
Popov, V. N.; Botygin, I. A.; Kolochev, A. S.
2017-01-01
The approach allows representing data from international codes for the exchange of meteorological information using a metadescription as the formalism associated with certain categories of resources. Development of the metadata components was based on an analysis of data from surface meteorological observations, vertical sounding of the atmosphere, wind sounding of the atmosphere, weather radar observations, observations from satellites, and others. A common set of metadata components was formed, including classes, divisions, and groups for a generalized description of the meteorological data. The structure and content of the main components of a generalized metadescription are presented in detail using the example of representing meteorological observations from land and sea stations. The functional structure of a distributed computing system is described that allows organizing the storage of large volumes of meteorological data for their further processing in the solution of problems of analysis and forecasting of climatic processes.
Data preservation at the Fermilab Tevatron
Amerio, S.; Behari, S.; Boyd, J.; ...
2017-01-22
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. Lastly, these efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
Data preservation at the Fermilab Tevatron
Boyd, J.; Herner, K.; Jayatilaka, B.; ...
2015-12-23
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. Furthermore, these efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
Sector and Sphere: the design and implementation of a high-performance data cloud
Gu, Yunhong; Grossman, Robert L.
2009-01-01
Cloud computing has demonstrated that processing very large datasets over commodity clusters can be done simply, given the right programming model and infrastructure. In this paper, we describe the design and implementation of the Sector storage cloud and the Sphere compute cloud. By contrast with the existing storage and compute clouds, Sector can manage data not only within a data centre, but also across geographically distributed data centres. Similarly, the Sphere compute cloud supports user-defined functions (UDFs) over data both within and across data centres. As a special case, MapReduce-style programming can be implemented in Sphere by using a Map UDF followed by a Reduce UDF. We describe some experimental studies comparing Sector/Sphere and Hadoop using the Terasort benchmark. In these studies, Sector is approximately twice as fast as Hadoop. Sector/Sphere is open source. PMID:19451100
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Boyd, J.; Herner, K.; Jayatilaka, B.; Roser, R.; Sakumoto, W.
2015-12-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have nearly 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 or beyond. To achieve this, we are implementing a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology as well as leveraging resources available from currently-running experiments at Fermilab. These efforts will provide useful lessons in ensuring long-term data access for numerous experiments throughout high-energy physics, and provide a roadmap for high-quality scientific output for years to come.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, B; Hoober-Burkhardt, L; Wang, F
We introduce a novel Organic Redox Flow Battery (ORBAT) for meeting the demanding requirements of cost, eco-friendliness, and durability for large-scale energy storage. ORBAT employs two different water-soluble organic redox couples on the positive and negative side of a flow battery. Redox couples such as quinones are particularly attractive for this application. No precious metal catalyst is needed because of the fast proton-coupled electron transfer processes. Furthermore, in acid media, the quinones exhibit good chemical stability. These properties render quinone-based redox couples very attractive for high-efficiency metal-free rechargeable batteries. We demonstrate the rechargeability of ORBAT with anthraquinone-2-sulfonic acid or anthraquinone-2,6-disulfonic acid on the negative side, and 1,2-dihydrobenzoquinone-3,5-disulfonic acid on the positive side. The ORBAT cell uses a membrane-electrode assembly configuration similar to that used in polymer electrolyte fuel cells. Such a battery can be charged and discharged multiple times at high faradaic efficiency without any noticeable degradation of performance. We show that solubility and mass transport properties of the reactants and products are paramount to achieving high current densities and high efficiency. The ORBAT configuration presents a unique opportunity for developing an inexpensive and sustainable metal-free rechargeable battery for large-scale electrical energy storage. (C) The Author(s) 2014. Published by ECS. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 License (CC BY, http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse of the work in any medium, provided the original work is properly cited. All rights reserved.
Storage system software solutions for high-end user needs
NASA Technical Reports Server (NTRS)
Hogan, Carole B.
1992-01-01
Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.
Storage and Retrieval of Large RDF Graph Using Hadoop and MapReduce
NASA Astrophysics Data System (ADS)
Farhan Husain, Mohammad; Doshi, Pankil; Khan, Latifur; Thuraisingham, Bhavani
Handling huge amounts of data scalably has been a matter of concern for a long time, and the same is true for semantic web data. Current semantic web frameworks lack this ability. In this paper, we describe a framework that we built using Hadoop to store and retrieve large numbers of RDF triples. We describe our schema to store RDF data in the Hadoop Distributed File System. We also present our algorithms to answer a SPARQL query. We make use of Hadoop's MapReduce framework to actually answer the queries. Our results reveal that we can store huge amounts of semantic web data in Hadoop clusters built mostly from cheap commodity-class hardware and still answer queries fast enough. We conclude that ours is a scalable framework, able to handle large amounts of RDF data efficiently.
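To illustrate the flavor of answering a join query with MapReduce (a single-process imitation, not the paper's Hadoop schema; the triples and the SPARQL-like pattern "?person worksFor ?org . ?org locatedIn ?city" are made up):

    from collections import defaultdict

    triples = [("alice", "worksFor", "acme"), ("bob", "worksFor", "globex"),
               ("acme", "locatedIn", "berlin"), ("globex", "locatedIn", "oslo")]

    # map phase: emit each relevant triple keyed by the shared variable ?org
    bins = defaultdict(lambda: defaultdict(list))
    for s, p, o in triples:
        if p == "worksFor":
            bins[o]["workers"].append(s)      # join key is the object
        elif p == "locatedIn":
            bins[s]["cities"].append(o)       # join key is the subject

    # reduce phase: per join key, output the combined variable bindings
    for org, groups in bins.items():
        for person in groups["workers"]:
            for city in groups["cities"]:
                print(person, org, city)

In an actual Hadoop job, the per-key grouping is done by the shuffle between mappers and reducers, so the join scales out across the cluster.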
NASA Astrophysics Data System (ADS)
Harman, C. J.
2014-12-01
Models that faithfully represent spatially-integrated hydrologic transport through the critical zone at sub-watershed scales are essential building blocks for large-scale models of land use and climate controls on non-point source contaminant delivery. A particular challenge facing these models is the need to represent the delay between inputs of soluble contaminants (such as nitrate) at the field scale, and the solute load that appears in streams. Recent advances in the theory of time-variable transit time distributions (e.g. Botter et al., GRL 38(L11403), 2011) have provided a rigorous framework for representing conservative solute transport and its coupling to hydrologic variability and partitioning. Here I will present a reformulation of this framework that offers several distinct advantages over existing formulations: 1) the derivation of the governing conservation equation is simple and intuitive, 2) the closure relations are expressed in a convenient and physically meaningful way as probability distributions Ω(S_T) over the storage ranked by age S_T, and 3) changes in transport behavior determined by storage-dependent dilution and flow-path dynamics (as distinct from those due only to changes in the rates and partitioning of water flux) are completely encapsulated by these probability distributions. The framework has been implemented to model the rich dataset of long-term stream and precipitation chloride from the Plynlimon watershed in Wales, UK. With suitable choices for the functional form of the closure relationships, only a small number of free parameters are required to reproduce the observed chloride dynamics as well as previous models with many more parameters, including reproducing the observed fractal 1/f filtering of the streamflow chloride variability. The modeled transport dynamics are sensitive to the input precipitation variability and water balance partitioning to evapotranspiration. Apparent storage-dependent age-sampling suggests that the model can account for shifts in flow pathways across high and low flows. This approach suggests a path forward for catchment-scale coupled flow and transport modeling.
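A hedged sketch of the age-ranked balance this abstract alludes to, in one common notation from the transit-time literature (the paper's exact form may differ): with J the inflow, Q the discharge, E the evapotranspiration, and S_T the storage ranked by age T,

    \frac{\partial S_T(T,t)}{\partial t} + \frac{\partial S_T(T,t)}{\partial T}
      = J(t) - Q(t)\,\Omega_Q\!\big(S_T(T,t),t\big) - E(t)\,\Omega_E\!\big(S_T(T,t),t\big),
    \qquad
    p_Q(T,t) = \frac{\partial}{\partial T}\,\Omega_Q\!\big(S_T(T,t),t\big),

so the discharge transit time distribution p_Q follows by differentiating the cumulative selection function Ω_Q along the age axis, which is how the closure relations over age-ranked storage encapsulate the flow-path dynamics described above.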
Distributed Economic Dispatch in Microgrids Based on Cooperative Reinforcement Learning.
Liu, Weirong; Zhuang, Peng; Liang, Hao; Peng, Jun; Huang, Zhiwu
2018-06-01
Microgrids incorporated with distributed generation (DG) units and energy storage (ES) devices are expected to play more and more important roles in future power systems. Yet achieving efficient distributed economic dispatch in microgrids is a challenging issue due to the randomness and nonlinear characteristics of DG units and loads. This paper proposes a cooperative reinforcement learning algorithm for distributed economic dispatch in microgrids. Utilizing the learning algorithm can avoid the difficulty of stochastic modeling and high computational complexity. In the cooperative reinforcement learning algorithm, function approximation is leveraged to deal with the large and continuous state spaces, and a diffusion strategy is incorporated to coordinate the actions of DG units and ES devices. Based on the proposed algorithm, each node in the microgrid only needs to communicate with its local neighbors, without relying on any centralized controllers. Algorithm convergence is analyzed, and simulations based on real-world meteorological and load data are conducted to validate the performance of the proposed algorithm.
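A schematic of the two ingredients named above, written as a toy numpy experiment: linear function approximation over a continuous state, plus a diffusion (neighbor-averaging) step that couples three learners. The mixing matrix, features, and reward signal are invented, not the paper's:

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.50, 0.25],
                  [0.00, 0.50, 0.50]])        # row-stochastic neighbor weights
    w = np.zeros((3, 4))                      # one weight vector per agent

    def phi(s):                               # polynomial features of the state
        return np.array([1.0, s, s ** 2, s ** 3])

    alpha, gamma = 0.05, 0.95
    for step in range(2000):
        s = rng.uniform(-1, 1)                              # current state
        s2 = float(np.clip(s + rng.normal(0, 0.1), -1, 1))  # next state
        for i in range(3):                    # local TD(0) update per agent
            r = -(s ** 2) + rng.normal(0, 0.01)             # noisy local cost
            delta = r + gamma * phi(s2) @ w[i] - phi(s) @ w[i]
            w[i] += alpha * delta * phi(s)
        w = A @ w                             # diffusion: combine neighbors
    print(np.round(w, 3))

The diffusion step pulls the three weight vectors together, so each agent ends up with a value estimate informed by all agents' observations while only ever exchanging parameters with its direct neighbors.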
Grid Simulation and Power Hardware-in-the-Loop | Grid Modernization | NREL
NREL has used PHIL to investigate the effects of advanced solar PV inverters on Hawaii's grid, and to evaluate the performance of methods for coordinated control of distributed residential PV/energy storage, with photovoltaics (PV)-battery energy storage inverter control applied across an electric distribution system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Requests for classified material, For Official Use Only material, accountable forms, storage safeguard forms, Limited (L) distribution items, and items with restrictive distribution caveats. 807.3 Section 807.3 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE...
NASA Astrophysics Data System (ADS)
Hesselink, Lambertus; Orlov, Sergei S.
Optical data storage is a phenomenal success story. Since its introduction in the early 1980s, optical data storage devices have evolved from being focused primarily on music distribution to becoming the prevailing data distribution and recording medium. Each year, billions of optical recordable and prerecorded disks are sold worldwide. Almost every computer today is shipped with a CD or DVD drive installed.
Luo, Da; Feng, Qiu-hong; Shi, Zuo-min; Li, Dong-sheng; Yang, Chang-xu; Liu, Qian-li; He, Jian-she
2015-04-01
The carbon and nitrogen storage and distribution patterns of Cupressus chengiana plantation ecosystems with different stand ages in the arid valley of the Minjiang River were studied. The results showed that carbon contents in different organs of C. chengiana were relatively stable, while nitrogen contents were closely related to the organ, and soil organic carbon and nitrogen contents increased with stand age. Carbon and nitrogen storage in the vegetation layer, soil layer, and whole ecosystem of the plantation increased with stand age. The values of total carbon storage in the 13-, 11-, 8-, 6- and 4-year-old C. chengiana plantation ecosystems were 190.90, 165.91, 144.57, 119.44, and 113.49 t x hm(-2), and the values of total nitrogen storage were 19.09, 17.97, 13.82, 13.42, and 12.26 t x hm(-2), respectively. Most of the carbon and nitrogen was stored in the 0-60 cm soil layer of the plantation ecosystems, accounting for 92.8% and 98.8%, respectively, and the amounts of carbon and nitrogen stored in the top 0-20 cm soil layer accounted for 54.4% and 48.9% of those in the 0-60 cm soil layer, respectively. A difference in the distribution of carbon and nitrogen storage was observed in the vegetation layer: the percentage of carbon storage in the tree layer (3.7%) was higher than that in the understory vegetation (3.5%), while the percentage of nitrogen storage in the tree layer (0.5%) was lower than that in the understory (0.7%). The carbon and nitrogen storage and distribution patterns in the plantations varied obviously with stand age, and the plantation ecosystems at these age stages could accumulate organic carbon and nitrogen continuously.
NASA Astrophysics Data System (ADS)
Cass, Christine J.; Daly, Kendra L.; Wakeham, Stuart G.
2014-11-01
Members of the copepod family Eucalanidae are widely distributed throughout the world's oceans and have been noted for their accumulation of storage lipids in high- and low-latitude environments. However, little is known about the lipid composition of eucalanoid copepods in low-latitude environments. The purpose of this study was to examine fatty acid and alcohol profiles in the storage lipids (wax esters and triacylglycerols) of Eucalanus inermis, Rhincalanus rostrifrons, R. nasutus, Pareucalanus attenuatus, and Subeucalanus subtenuis, collected primarily in the eastern tropical north Pacific near the Tehuantepec Bowl and Costa Rica Dome regions, noted for its oxygen minimum zone, during fall 2007 and winter 2008/2009. Adult copepods and particulate material were collected in the upper 50 m and from 200 to 300 m in the upper oxycline. Lipid profiles of particulate matter were generated to help ascertain information on ecological strategies of these species and on differential accumulation of dietary and modified fatty acids in the wax ester and triacylglycerol storage lipid components of these copepods in relation to their vertical distributions around the oxygen minimum zone. Additional data on phospholipid fatty acid and sterol/fatty alcohol fractions were also generated to obtain a comprehensive lipid data set for each sample. Rhincalanus spp. accumulated relatively large amounts of storage lipids (31-80% of dry mass (DM)), while E. inermis had moderate amounts (2-9% DM), and P. attenuatus and S. subtenuis had low quantities of storage lipid (0-1% DM). E. inermis and S. subtenuis primarily accumulated triacylglycerols (>90% of storage lipids), while P. attenuatus and Rhincalanus spp. primarily accumulated wax esters (>84% of storage lipids). Based on previously generated molecular phylogenies of the Eucalanidae family, these results appear to support genetic predisposition as a major factor explaining why a given species accumulates primarily triacylglycerols or wax esters, and also potentially dictating major fatty acid and alcohol accumulation patterns within the more highly modified wax ester fraction. Comparisons of fatty acid profiles between triacylglycerol and wax ester components in copepods with that in available prey suggested that copepod triacylglycerols were more reflective of dietary fatty acids, while wax esters contained a higher proportion of modified or de novo synthesized forms. Sterols and phospholipid fatty acids were similar between species, confirming high levels of regulation within these components. Similarities between triacylglycerol fatty acid profiles of E. inermis collected in surface waters and at >200 m depth indicate little to no feeding during their ontogenetic migration to deeper, low-oxygen waters.
Cooperative Optimal Coordination for Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Ren, Wei
In this paper, we consider the optimal coordination problem for distributed energy resources (DERs), including distributed generators and energy storage devices. We propose an algorithm based on the push-sum and gradient method to optimally coordinate storage devices and distributed generators in a distributed manner. In the proposed algorithm, each DER only maintains a set of variables and updates them through information exchange with a few neighbors over a time-varying directed communication network. We show that the proposed distributed algorithm solves the optimal DER coordination problem if the time-varying directed communication network is uniformly jointly strongly connected, which is a mild condition on the connectivity of communication topologies. The proposed distributed algorithm is illustrated and validated by numerical simulations.
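A hedged sketch of the push-sum plus gradient idea over a directed network: each node carries a pair (x, y), mixes both with a column-stochastic matrix, and the ratio x/y converges to a common value even though the graph is directed. The quadratic costs, mixing matrix, and step-size schedule below are illustrative placeholders, not the paper's setup.

    import numpy as np

    n = 4
    P = np.array([[0.5, 0.0, 0.0, 0.5],
                  [0.5, 0.5, 0.0, 0.0],
                  [0.0, 0.5, 0.5, 0.0],
                  [0.0, 0.0, 0.5, 0.5]])       # column-stochastic: push to out-neighbors
    a = np.array([1.0, 1.2, 0.8, 1.5])         # local costs f_i(z) = a_i * z^2 / 2 - z
    x = np.zeros(n)                             # gradient-corrected mass
    y = np.ones(n)                              # push-sum weights

    for t in range(1, 500):
        z = x / y                               # each node's current dispatch estimate
        grad = a * z - 1.0                      # gradient of f_i(z) = a_i z^2/2 - z
        x = P @ (x - (1.0 / t) * grad)          # descend locally, then push mass
        y = P @ y                               # push weights alongside
    print(np.round(x / y, 3))                   # nodes agree on the optimal setpoint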
NASA Astrophysics Data System (ADS)
Wang, Bo; Bauer, Sebastian
2017-04-01
With the rapid growth of energy production from intermittent renewable sources like wind and solar power plants, large-scale energy storage options are required to compensate for fluctuating power generation on different time scales. Compressed air energy storage (CAES) in porous formations is seen as a promising option for balancing short-term diurnal fluctuations. CAES is a power-to-power energy storage, which converts electricity to mechanical energy, i.e. highly pressurized air, and stores it in the subsurface. This study aims at designing the storage setup and quantifying the pressure response of a large-scale CAES operation in a porous sandstone formation, thus assessing the feasibility of this storage option. For this, numerical modelling of a synthetic site and a synthetic operational cycle is applied. A hypothetical CAES scenario using a typical anticline structure in northern Germany was investigated. The top of the storage formation is at 700 m depth and the thickness is 20 m. The porosity and permeability were assumed to be homogeneously distributed, with values of 0.35 and 500 mD, respectively. According to the specifications of the Huntorf CAES power plant, a gas turbine producing 321 MW of power with a minimum inlet pressure of 43 bar at an air mass flowrate of 417 kg/s was assumed. Pressure loss in the gas wells was accounted for using an analytical solution, which defines a minimum bottom hole pressure of 47 bar. Two daily extraction cycles of 6 hours each were set to the early morning and the late afternoon in order to bypass the massive solar energy production around noon. A two-year initial filling of the reservoir with air and ten years of daily cyclic operation were numerically simulated using the Eclipse E300 reservoir simulator. The simulation results show that with 12 wells the storage formation with a permeability of 500 mD can support the required 6-hour continuous power output of 321 MW, which corresponds to an energy output of 3852 MWh per day. The average bottom hole pressure is 87 bar at the beginning of cyclic operation and falls to 79 bar after 10 years. This pressure drop over time is caused by the open boundary conditions defined at the model edges and is not influenced by the cyclic operation. In the storage formation, the pressure response induced by the initial filling can be observed in the whole model domain, with maximum pressure build-ups of about 31 bar near the wells and about 3 bar at a distance of 10 km from the wells. During the cyclic operation, however, pressure fluctuations of more than 1 bar can only be observed within the gas phase. Assuming formations with different permeabilities, a sensitivity analysis was carried out to find the number of wells required. The results show that the number of wells required does not decrease linearly with increasing permeability of the storage formation, due to well interference during air extraction.
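As a quick sanity check on the quoted daily energy figure (my arithmetic, not the paper's): two 6-hour extraction cycles at the stated turbine rating give

$$E = P \times t = 321\ \mathrm{MW} \times 2 \times 6\ \mathrm{h} = 3852\ \mathrm{MWh/day}.$$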
NASA Astrophysics Data System (ADS)
Eriyagama, Nishadi; Smakhtin, Vladimir; Udamulla, Lakshika
2018-06-01
Storage of surface water is widely regarded as a form of insurance against rainfall variability. However, creation of surface storage often endangers the functions of natural ecosystems and, in turn, the ecosystem services that benefit humans. The issues of optimal size, placement and number of reservoirs in a river basin - which maximize sustainable benefits from storage - remain subjects for debate. This study examines the above issues through the analysis of a range of reservoir configurations in the Malwatu Oya river basin in the dry zone of Sri Lanka. The study produced multiple surface storage development pathways for the basin under different scenarios of environmental flow (EF) releases and reservoir network configurations. The EF scenarios ranged from zero to very healthy releases. It is shown that if the middle ground between the two extreme EF scenarios is considered, the theoretical maximum safe yield from surface storage is about 65-70 % of the mean annual runoff (MAR) of the basin. It is also identified that although distribution of reservoirs in the river network reduces the cumulative yield from the basin, this cumulative yield is maximized if the ratio among the storage capacities placed in each sub-drainage basin is equivalent to the ratio among their MAR. The study suggests a framework to identify drainage regions having higher surface storage potential, to plan for the right distribution of storage capacity within a river basin, and to plan for EF allocations.
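A toy illustration of that proportional-allocation result, distributing a basin-wide storage budget among sub-basins in proportion to their MAR; the sub-basin names, MAR figures, and storage budget are invented placeholders, not values from the study.

    # Allocate storage capacity so that capacity ratios mirror MAR ratios.
    mar = {"sub_basin_A": 120.0, "sub_basin_B": 80.0, "sub_basin_C": 40.0}  # MCM/yr
    total_storage = 150.0  # MCM of storage capacity to site across the basin

    total_mar = sum(mar.values())
    allocation = {b: total_storage * v / total_mar for b, v in mar.items()}
    print(allocation)  # e.g. A gets half the budget, matching half the basin MAR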
A SCR Model Calibration Approach with Spatially Resolved Measurements and NH3 Storage Distributions
Song, Xiaobo; Parker, Gordon G.; Johnson, John H.; ...
2014-11-27
Selective catalytic reduction (SCR) is a technology used for reducing NOx emissions in heavy-duty diesel (HDD) engine exhaust. In this study, the spatially resolved capillary inlet infrared spectroscopy (Spaci-IR) technique was used to study the gas concentration and NH3 storage distributions in an SCR catalyst, and to provide data for developing an SCR model to analyze the axial gaseous concentrations and axial distributions of NH3 storage. A two-site SCR model is described for simulating the reaction mechanisms. The model equations and a calculation method were developed using the Spaci-IR measurements to determine the NH3 storage capacity and the relationships between certain kinetic parameters of the model. A calibration approach was then applied for tuning the kinetic parameters using the spatial gaseous measurements and the calculated NH3 storage as a function of axial position, instead of inlet and outlet gaseous concentrations of NO, NO2, and NH3. The equations and the approach for determining the NH3 storage capacity of the catalyst, and a method of dividing the NH3 storage capacity between the two storage sites, are presented. It was determined that the kinetic parameters of the adsorption and desorption reactions have to follow certain relationships for the model to simulate the experimental data. Finally, the modeling results served as a basis for developing full model calibrations to SCR lab reactor and engine data and for state estimator development, as described in the references (Song et al. 2013a, b; Surenahalli et al. 2013).
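For intuition, a minimal sketch of the kind of two-site NH3 coverage balance such models integrate; the rate form (Langmuir adsorption, Arrhenius desorption) is a standard textbook choice, and every constant below is invented rather than one of the paper's calibrated values.

    import numpy as np

    def coverage_step(theta, c_nh3, T, dt, k_ads, A_des, E_des, R=8.314):
        # d(theta)/dt = adsorption onto free sites - thermal desorption
        k_des = A_des * np.exp(-E_des / (R * T))
        dtheta = k_ads * c_nh3 * (1.0 - theta) - k_des * theta
        return theta + dt * dtheta

    theta = np.array([0.0, 0.0])                 # NH3 coverage on site 1 and site 2
    for _ in range(10000):                       # integrate 100 s with dt = 0.01 s
        theta = coverage_step(theta, c_nh3=5e-4, T=523.0, dt=0.01,
                              k_ads=np.array([80.0, 40.0]),
                              A_des=np.array([1e5, 1e4]),
                              E_des=np.array([9e4, 1.1e5]))
    print(theta)                                 # steady-state coverages per site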
iRODS: A Distributed Data Management Cyberinfrastructure for Observatories
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; Vernon, F.
2007-12-01
Large-scale and long-term preservation of both observational and synthesized data requires a system that virtualizes data management concepts. A methodology is needed that can work across long distances in space (distribution) and long periods in time (preservation). The system needs to manage data stored on multiple types of storage systems, including new systems that become available in the future. This concept is called infrastructure independence, and is typically implemented through virtualization mechanisms. Data grids are built upon concepts of data and trust virtualization. These concepts enable the management of collections of data that are distributed across multiple institutions, stored on multiple types of storage systems, and accessed by multiple types of clients. Data virtualization ensures that the name spaces used to identify files, users, and storage systems are persistent, even when files are migrated onto future technology. This is required to preserve authenticity, the link between the record and descriptive and provenance metadata. Trust virtualization ensures that access controls remain invariant as files are moved within the data grid. This is required to track the chain of custody of records over time. The Storage Resource Broker (http://www.sdsc.edu/srb) is one such data grid used in a wide variety of applications in earth and space sciences such as ROADNet (roadnet.ucsd.edu), SEEK (seek.ecoinformatics.org), GEON (www.geongrid.org) and NOAO (www.noao.edu). Recent extensions to data grids provide one more level of virtualization - policy or management virtualization. Management virtualization ensures that execution of management policies can be automated, and that rules can be created that verify assertions about the shared collections of data. When dealing with distributed large-scale data over long periods of time, the policies used to manage the data and provide assurances about the authenticity of the data become paramount. The integrated Rule-Oriented Data System (iRODS) (http://irods.sdsc.edu) provides the mechanisms needed not only to describe management policies, but also to track how the policies are applied and their execution results. The iRODS data grid maps management policies to rules that control the execution of remote micro-services. As an example, a rule can be created that automatically creates a replica whenever a file is added to a specific collection, or that extracts its metadata automatically and registers it in a searchable catalog. For the replication operation, the persistent state information consists of the replica location, the creation date, the owner, the replica size, etc. The mechanism used by iRODS for providing policy virtualization is based on well-defined functions, called micro-services, which are chained into alternative workflows using rules. A rule engine, based on the event-condition-action paradigm, executes the rule-based workflows after an event. Rules can be deferred to a pre-determined time or executed on a periodic basis. As the data management policies evolve, the iRODS system can implement new rules, new micro-services, and new state information (metadata content) needed to manage the new policies. Each sub-collection can be managed using a different set of policies. The discussion of the concepts in rule-based policy virtualization and its application to long-term and large-scale data management for observatories such as ORION and NEON will be the basis of the paper.
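As a language-agnostic illustration of that event-condition-action pattern (this is not iRODS's actual rule language; the rule decorator, the micro-service stand-in, and the file fields are all invented for the sketch):

    from datetime import datetime, timezone

    rules = []

    def rule(event, condition):
        # Register an action to fire when `event` occurs and `condition` holds.
        def register(action):
            rules.append((event, condition, action))
            return action
        return register

    @rule("put", condition=lambda f: f["collection"] == "/observatory/raw")
    def replicate(f):
        # Stand-in for a replication micro-service; records persistent state.
        replica = dict(f, location="backup_resource",
                       created=datetime.now(timezone.utc).isoformat())
        print("replicated:", replica)

    def fire(event, file_obj):
        for ev, cond, action in rules:
            if ev == event and cond(file_obj):
                action(file_obj)

    fire("put", {"name": "station42.dat", "collection": "/observatory/raw",
                 "owner": "rajasekar", "size": 1024})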
Phase-Change Heat-Storage Module
NASA Technical Reports Server (NTRS)
Mulligan, James C.
1989-01-01
Heat-storage module accommodates momentary heating or cooling overload in pumped-liquid heat-transfer system. Large heat-storage capacity of module provided by heat of fusion of material that freezes at or near temperature desired to maintain object to be heated or cooled. Module involves relatively small penalties in weight, cost, and size and more than compensates by enabling design of rest of system to handle only average load. Latent heat of fusion of phase-change material provides large heat-storage capacity in small volume.
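For a sense of scale, latent-heat storage obeys $Q = m L_f$; with a typical paraffin heat of fusion of about $200\ \mathrm{kJ/kg}$ (a textbook figure, not one from the report), one kilogram stores

$$Q = m L_f \approx 1\ \mathrm{kg} \times 200\ \mathrm{kJ/kg} = 200\ \mathrm{kJ}$$

at a nearly constant temperature, versus only about $4.2\ \mathrm{kJ}$ per kelvin of temperature rise for sensible heating of the same mass of water.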
NASA Astrophysics Data System (ADS)
Ray, Prakash K.; Mohanty, Soumya R.; Kishor, Nand
2010-07-01
This paper presents a small-signal analysis of isolated as well as interconnected autonomous hybrid distributed generation systems under sudden variations in load demand, wind speed and solar radiation. The hybrid systems comprise different renewable energy resources, such as wind, photovoltaic (PV), fuel cell (FC) and diesel engine generator (DEG), along with energy storage devices such as the flywheel energy storage system (FESS) and battery energy storage system (BESS). Further, ultracapacitors (UC) as an alternative energy storage element and interconnection of the hybrid systems through a tie-line are incorporated for improved performance. A comparative assessment of the frequency deviation profiles of different hybrid systems, in the presence of different storage system combinations, is carried out graphically as well as in terms of a performance index (PI).
Application of a reversible chemical reaction system to solar thermal power plants
NASA Technical Reports Server (NTRS)
Hanseth, E. J.; Won, Y. S.; Seibowitz, L. P.
1980-01-01
Three distributed dish solar thermal power systems using various applications of SO2/SO3 chemical energy storage and transport technology were comparatively assessed. Each system features a different role for the chemical system: (1) energy storage only, (2) energy transport, or (3) energy transport and storage. These three systems were also compared with the dish-Stirling system, using electrical transport and battery storage, and the central receiver Rankine system, with thermal storage, to determine the relative merit of plants employing a thermochemical system. As an assessment criterion, the busbar energy costs were compared. Separate but comparable solar energy cost computer codes were used for the distributed receiver and central receiver systems. Calculations were performed for capacity factors ranging from 0.4 to 0.8. The results indicate that SO2/SO3 technology has the potential to be more cost effective in transporting the collected energy than in storing it, for the storage capacity range studied (2-15 hours).
Cloud Computing and Its Applications in GIS
NASA Astrophysics Data System (ADS)
Kang, Cao
2011-12-01
Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibility of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems, such as a lower barrier to entry, are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud-based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes them incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one-pixel-deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide and layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which makes it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets.
It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories. As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
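A hedged sketch of that subdivide-process-reassemble pattern using SciPy's Euclidean distance transform per tile. Note the simplification: it pads each tile with raw neighbor pixels rather than the one-pixel precomputed distance layer the dissertation describes, so it is exact only when each pixel's true nearest source falls inside the padded window (as it does for the regular source grid below); tile size, pad width, and the test image are arbitrary.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def tiled_distance(image, tile=64, pad=8):
        # Process each tile (plus a halo of `pad` pixels) independently,
        # then reassemble the per-tile results into the full output.
        h, w = image.shape
        out = np.zeros((h, w))
        for i in range(0, h, tile):
            for j in range(0, w, tile):
                i0, j0 = max(i - pad, 0), max(j - pad, 0)
                i1, j1 = min(i + tile + pad, h), min(j + tile + pad, w)
                sub = distance_transform_edt(image[i0:i1, j0:j1])  # dist to zeros
                out[i:i + tile, j:j + tile] = sub[i - i0:i - i0 + tile,
                                                  j - j0:j - j0 + tile]
        return out

    img = np.ones((256, 256), dtype=np.uint8)
    img[::32, ::32] = 0                   # a regular grid of source cells
    print(np.allclose(tiled_distance(img), distance_transform_edt(img)))  # True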
1977-04-01
...the task of data organization, management, and storage has been given to a select group of specialists (the Data Base Administrators, report writers, etc.). ... in a distributed DBMS involves first identifying a set of two or more tasks blocking each other from a collection of shared records. Once the set of...
Comparison of Decadal Water Storage Trends from Global Hydrological Models and GRACE Satellite Data
NASA Astrophysics Data System (ADS)
Scanlon, B. R.; Zhang, Z. Z.; Save, H.; Sun, A. Y.; Mueller Schmied, H.; Van Beek, L. P.; Wiese, D. N.; Wada, Y.; Long, D.; Reedy, R. C.; Doll, P. M.; Longuevergne, L.
2017-12-01
Global hydrology is increasingly being evaluated using models; however, the reliability of these global models is not well known. In this study we compared decadal trends (2002-2014) in land water storage from 7 global models (WGHM, PCR-GLOBWB, and GLDAS: NOAH, MOSAIC, VIC, CLM, and CLSM) to storage trends from new GRACE satellite mascon solutions (CSR-M and JPL-M). The analysis was conducted over 186 river basins, representing about 60% of the global land area. Modeled total water storage trends agree with GRACE-derived trends that are within ±0.5 km³/yr but greatly underestimate large declining and rising trends outside this range. Large declining trends are found mostly in intensively irrigated basins and in some basins in northern latitudes. Rising trends are found in basins with little or no irrigation and are generally related to increasing trends in precipitation. The largest decline is found in the Ganges (−12 km³/yr) and the largest rise in the Amazon (43 km³/yr). Differences between models and GRACE are greatest in large basins (>0.5×10⁶ km²), mostly in humid regions. There is very little agreement in storage trends between models and GRACE and among the models, with values of r² mostly <0.1. Various factors can contribute to discrepancies in water storage trends between models and GRACE, including uncertainties in precipitation, model calibration, storage capacity, and water use in models, and uncertainties in GRACE data related to processing, glacier leakage, and glacial isostatic adjustment. The GRACE data indicate that land has a large capacity to store water over decadal timescales that is underrepresented by the models. The storage capacity in the modeled soil and groundwater compartments may be insufficient to accommodate the range in water storage variations shown by GRACE data. The inability of the models to capture the large storage trends indicates that model projections of climate- and human-induced changes in water storage may be mostly underestimated. Future GRACE and model studies should try to reduce the various sources of uncertainty in water storage trends and should consider expanding the modeled storage capacity of the soil profiles and their interaction with groundwater.
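To make the notion of a decadal storage trend concrete, a minimal sketch (synthetic numbers, not GRACE or model output) of fitting a least-squares slope to a monthly total-water-storage anomaly series:

    import numpy as np

    months = np.arange(144)                    # 2002-2014, monthly samples
    rng = np.random.default_rng(2)
    tws = -0.8 * (months / 12.0) + rng.normal(0, 2, 144)  # km^3, synthetic anomalies
    slope_per_yr = np.polyfit(months / 12.0, tws, 1)[0]   # least-squares trend
    print(f"storage trend: {slope_per_yr:.2f} km^3/yr")   # recovers roughly -0.8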
Effect of rainfall seasonality on carbon storage in tropical dry ecosystems
NASA Astrophysics Data System (ADS)
Rohr, Tyler; Manzoni, Stefano; Feng, Xue; Menezes, Rômulo S. C.; Porporato, Amilcare
2013-07-01
Although seasonally dry conditions are typical of large areas of the tropics, their biogeochemical responses to seasonal rainfall and soil carbon (C) sequestration potential are not well characterized. Seasonal moisture availability positively affects both productivity and soil respiration, resulting in a delicate balance between C deposition as litterfall and C loss through heterotrophic respiration. To understand how rainfall seasonality (i.e., duration of the wet season and rainfall distribution) affects this balance and to provide estimates of long-term C sequestration, we develop a minimal model linking the seasonal behavior of the ensemble soil moisture, plant productivity, related C inputs through litterfall, and soil C dynamics. A drought-deciduous caatinga ecosystem in northeastern Brazil is used as a case study to parameterize the model. When extended to different patterns of rainfall seasonality, the results indicate that for fixed annual rainfall, both plant productivity and soil C sequestration potential are largely, and nonlinearly, dependent on wet season duration. Moreover, total annual rainfall is a critical driver of this relationship, leading at times to distinct optima in both production and C storage. These theoretical predictions are discussed in the context of parameter uncertainties and possible changes in rainfall regimes in tropical dry ecosystems.
High capacitance of coarse-grained carbide derived carbon electrodes
NASA Astrophysics Data System (ADS)
Dyatkin, Boris; Gogotsi, Oleksiy; Malinovskiy, Bohdan; Zozulya, Yuliya; Simon, Patrice; Gogotsi, Yury
2016-02-01
We report exceptional electrochemical properties of supercapacitor electrodes composed of large, granular carbide-derived carbon (CDC) particles. Using a titanium carbide (TiC) precursor, we synthesized 70-250 μm sized particles with high surface area and a narrow pore size distribution. Electrochemical cycling of these coarse-grained powders defied conventional wisdom that a small particle size is strictly required for supercapacitor electrodes and allowed high charge storage densities, rapid transport, and good rate handling ability. The material showcased capacitance above 100 F g-1 at sweep rates as high as 250 mV s-1 in organic electrolyte. 250-1000 micron thick dense CDC films with up to 80 mg cm-2 loading showed superior areal capacitances. The material significantly outperformed its activated carbon counterpart in organic electrolytes and ionic liquids. Furthermore, large internal/external surface ratio of coarse-grained carbons allowed the resulting electrodes to maintain high electrochemical stability up to 3.1 V in ionic liquid electrolyte. In addition to presenting novel insights into the electrosorption process, these coarse-grained carbons offer a pathway to low-cost, high-performance implementation of supercapacitors in automotive and grid-storage applications.
Grid site availability evaluation and monitoring at CMS
NASA Astrophysics Data System (ADS)
Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea
2017-10-01
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.
el May, M; Jeusset, J; el May, A; Mtimet, S; Fragu, P
1996-06-01
We measured the 127I distribution within thyroid tissue to find out where intrathyroid iodine was deposited during iodine treatment in eight Tunisian female patients (aged 33-58 yr) with endemic euthyroid goiter. Before surgery, five patients were treated for 6 months either with Lugol's solution (group 1: three patients) or with Lugol's solution and L-thyroxine (group 2: two patients). All patients remained euthyroid during the course of the treatment, which supplied 3.8 mg/day of iodine. Three other patients did not receive Lugol's solution (control group). Secondary ion mass spectrometry microscopy was used to map 127I quantitatively on thyroid sections. Specimens obtained at thyroid surgery were divided macroscopically into nodular and extranodular tissue and chemically fixed to preserve organified iodine. The iodine profile of patients in group 1 did not differ from that in group 2: large amounts of iodine were localized in the thyroid follicles and stroma of both nodular and extranodular tissues. In the control group, iodine within the stroma was found only in the extranodular tissue. Despite the limited number of patients studied, these data suggest that stromal iodine might represent a storage compartment in times of large iodine supply.
Global EOS: exploring the 300-ms-latency region
NASA Astrophysics Data System (ADS)
Mascetti, L.; Jericho, D.; Hsu, C.-Y.
2017-10-01
EOS, the CERN open-source distributed disk storage system, provides the high-performance storage solution for HEP analysis and the back-end for various workflows. Recently EOS became the back-end of CERNBox, the cloud synchronisation service for CERN users. EOS can take advantage of wide-area distributed installations: for the last few years CERN EOS has used a common deployment across two computer centres (Geneva-Meyrin and Budapest-Wigner) about 1,000 km apart (∼20-ms latency) with about 200 PB of disk (JBOD). In late 2015, the CERN-IT Storage group and AARNET (Australia) set up a challenging R&D project: a single EOS instance between CERN and AARNET with more than 300 ms latency (16,500 km apart). This paper reports on the successful deployment and operation of a distributed storage system between Europe (Geneva, Budapest), Australia (Melbourne) and later Asia (ASGC Taipei), allowing different types of data placement and data access across these four sites.
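For a rough sense of where the 300 ms figure comes from (my back-of-envelope estimate, not from the paper): light in optical fiber propagates at roughly two-thirds of $c$, so over the stated 16,500 km separation

$$t_{\text{one-way}} \approx \frac{16\,500\ \text{km}}{2\times 10^{5}\ \text{km/s}} \approx 83\ \text{ms}, \qquad t_{\text{RTT}} \gtrsim 165\ \text{ms},$$

and real routed paths, which are longer than the great-circle distance and add switching delay, plausibly push the round trip past 300 ms.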
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.
A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low-latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
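A hedged sketch of the abstract storage interface idea: the metadata server talks to any key-value store through one small interface, so the backing store can be swapped without touching server logic. All class and method names here are invented for illustration, not the patent's.

    from abc import ABC, abstractmethod

    class KeyValueMetadataStore(ABC):
        # The narrow interface every backing store must implement.
        @abstractmethod
        def put(self, key: bytes, value: bytes) -> None: ...
        @abstractmethod
        def get(self, key: bytes) -> bytes: ...

    class InMemoryStore(KeyValueMetadataStore):
        # Stand-in for a shared, low-latency persistent key-value store.
        def __init__(self):
            self._data = {}
        def put(self, key, value):
            self._data[key] = value
        def get(self, key):
            return self._data[key]

    class MetadataServer:
        def __init__(self, store: KeyValueMetadataStore):
            self.store = store
        def set_attr(self, path: str, attr: str, value: str) -> None:
            self.store.put(f"{path}#{attr}".encode(), value.encode())
        def get_attr(self, path: str, attr: str) -> str:
            return self.store.get(f"{path}#{attr}".encode()).decode()

    srv = MetadataServer(InMemoryStore())
    srv.set_attr("/data/run42", "owner", "bent")
    print(srv.get_attr("/data/run42", "owner"))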
FDA's Activities Supporting Regulatory Application of "Next Gen" Sequencing Technologies.
Wilson, Carolyn A; Simonyan, Vahan
2014-01-01
Applications of next-generation sequencing (NGS) technologies require availability of and access to an information technology (IT) infrastructure and bioinformatics tools for large amounts of data storage and analysis. The U.S. Food and Drug Administration (FDA) anticipates that the use of NGS data to support regulatory submissions will continue to increase as the scientific and clinical communities become more familiar with the technologies and identify more ways to apply these advanced methods to support development and evaluation of new biomedical products. FDA laboratories are conducting research on different NGS platforms and developing the IT infrastructure and bioinformatics tools needed to enable regulatory evaluation of the technologies and the data sponsors will submit. A High-performance Integrated Virtual Environment, or HIVE, has been launched, and its development and refinement continue as a collaborative effort between the FDA and George Washington University to provide the tools to support these needs. The use of a highly parallelized environment, facilitated by distributed cloud storage and computation, has resulted in a platform that is both rapid and responsive to changing scientific needs. The FDA plans to further develop in-house capacity in this area, while also supporting engagement by the external community, by sponsoring an open, public workshop in September 2014 to discuss NGS technologies and data format standardization, and to promote the adoption of interoperability protocols. Next-generation sequencing (NGS) technologies are enabling breakthroughs in how the biomedical community is developing and evaluating medical products. One example is the potential application of this method to the detection and identification of microbial contaminants in biologic products. In order for the U.S. Food and Drug Administration (FDA) to be able to evaluate the utility of this technology, we need to have the information technology infrastructure and bioinformatics tools to be able to store and analyze large amounts of data. To address this need, we have developed the High-performance Integrated Virtual Environment, or HIVE. HIVE uses a combination of distributed cloud storage and distributed cloud computation to provide a platform that is both rapid and responsive to support the growing and increasingly diverse scientific and regulatory needs of FDA scientists in their evaluation of NGS in research and, ultimately, for evaluation of NGS data in regulatory submissions. © PDA, Inc. 2014.
Energy storage inherent in large tidal turbine farms
Vennell, Ross; Adcock, Thomas A. A.
2014-01-01
While wind farms have no inherent storage to supply power in calm conditions, this paper demonstrates that large tidal turbine farms in channels have short-term energy storage. This storage lies in the inertia of the oscillating flow and can be used to exceed the previously published upper limit for power production by currents in a tidal channel, while simultaneously maintaining stronger currents. Inertial storage exploits the ability of large farms to manipulate the phase of the oscillating currents by varying the farm's drag coefficient. This work shows that by optimizing how a large farm's drag coefficient varies during the tidal cycle it is possible to have some flexibility about when power is produced. This flexibility can be used in many ways, e.g. producing more power, or to better meet short predictable peaks in demand. This flexibility also allows trading total power production off against meeting peak demand, or mitigating the flow speed reduction owing to power extraction. The effectiveness of inertial storage is governed by the frictional time scale relative to either the duration of a half tidal cycle or the duration of a peak in power demand, thus has greater benefits in larger channels. PMID:24910516
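The mechanism lends itself to a toy simulation. Below is a hedged, one-dimensional channel momentum balance with a farm drag coefficient that varies over the tidal cycle; the equation form is a generic channel model, and every parameter value is illustrative rather than taken from the paper.

    import numpy as np

    g, L, h = 9.81, 2.0e4, 40.0                  # gravity, channel length (m), depth (m)
    amp, omega = 1.0, 2 * np.pi / 44700.0        # tidal head amplitude (m), M2 frequency
    c0 = 0.0025                                  # background friction coefficient

    def simulate(ct_of_t, dt=30.0, n=20000):
        # du/dt = g*amp*cos(wt)/L - (c0 + ct(t)) * u|u| / h
        u, power = 0.0, []
        for k in range(n):
            t = k * dt
            ct = ct_of_t(t)
            du = g * amp * np.cos(omega * t) / L - (c0 + ct) * u * abs(u) / h
            u += dt * du
            power.append(ct * u**2 * abs(u) / h)  # farm's extraction rate (per unit mass)
        return np.array(power)

    steady = simulate(lambda t: 0.005)                            # fixed farm drag
    phased = simulate(lambda t: 0.005 * (1 + np.sin(omega * t)))  # time-varying drag
    print(steady.mean(), phased.mean())  # phasing the drag shifts when power is produced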
NASA Langley Research Center's distributed mass storage system
NASA Technical Reports Server (NTRS)
Pao, Juliet Z.; Humes, D. Creig
1993-01-01
There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation-class file servers running UniTree software; (2) third-party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existing mass storage system to the DMSS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guest, Geoffrey, E-mail: geoffrey.guest@ntnu.no; Bright, Ryan M., E-mail: ryan.m.bright@ntnu.no; Cherubini, Francesco, E-mail: francesco.cherubini@ntnu.no
2013-11-15
Temporary and permanent carbon storage from biogenic sources is seen as a way to mitigate climate change. The aim of this work is to illustrate the need to harmonize the quantification of such mitigation across all possible storage pools in the bio- and anthroposphere. We investigate nine alternative storage cases and a wide array of bio-resource pools: from annual crops, short-rotation woody crops, and medium-rotation temperate forests, to long-rotation boreal forests. For each feedstock type and biogenic carbon storage pool, we quantify the carbon cycle climate impact due to the skewed time distribution between emission and sequestration fluxes in the bio- and anthroposphere. Additional consideration of the climate impact from albedo changes in forests is also illustrated for the boreal forest case. When characterizing climate impact with global warming potentials (GWP), we find a large variance in results, which is attributed to different combinations of biomass storage and feedstock systems. The storage of biogenic carbon in any storage pool does not always confer climate benefits: even when biogenic carbon is stored long-term in durable product pools, the climate outcome may still be undesirable when the carbon is sourced from slow-growing biomass feedstock. For example, when biogenic carbon from Norway Spruce from Norway is stored in furniture with a mean lifetime of 43 years, a climate change impact of 0.08 kg CO₂eq per kg CO₂ stored (100-year time horizon (TH)) would result. It was also found that when biogenic carbon is stored in a pool with negligible leakage to the atmosphere, the resulting GWP factor is not necessarily −1 kg CO₂eq per kg CO₂ stored. As an example, when biogenic CO₂ from Norway Spruce biomass is stored in geological reservoirs with no leakage, we estimate a GWP of −0.56 kg CO₂eq per kg CO₂ stored (100-year TH) when albedo effects are also included. The large variance in GWPs across the range of resource and carbon storage options considered indicates that more accurate accounting will require case-specific factors derived following the methodological guidelines provided in this and recent manuscripts. Highlights: climate impacts of stored biogenic carbon (bio-C) are consistently quantified; temporary storage of bio-C does not always equate to a climate cooling impact; 1 unit of bio-C stored over a time horizon does not always equate to −1 unit of CO₂eq; discrepancies in climate change impact quantification in the literature are clarified.
Next Generation Distributed Computing for Cancer Research
Agarwal, Pankaj; Owzar, Kouros
2014-01-01
Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539
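As a concrete, hedged illustration of the MapReduce pattern applied to read-alignment data, here is a single-file sketch that counts aligned reads per chromosome from SAM-format lines. With Hadoop Streaming the map and reduce steps would run as separate scripts over stdin; the sample records below are fabricated, and the field positions follow the SAM specification.

    from collections import Counter

    sam_lines = [
        "r1\t0\tchr1\t100\t60\t50M\t*\t0\t0\tACGT\tFFFF",
        "r2\t4\t*\t0\t0\t*\t*\t0\t0\tACGT\tFFFF",      # unaligned (RNAME is "*")
        "r3\t0\tchr2\t500\t60\t50M\t*\t0\t0\tACGT\tFFFF",
    ]

    def mapper(line):
        # Emit (chromosome, 1) for each aligned record; skip headers and unaligned.
        fields = line.split("\t")
        if not line.startswith("@") and fields[2] != "*":
            yield fields[2], 1

    def reducer(pairs):
        # Sum counts per key, as the shuffle-and-reduce phase would.
        counts = Counter()
        for key, value in pairs:
            counts[key] += value
        return counts

    print(reducer(kv for line in sam_lines for kv in mapper(line)))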
Delivery of video-on-demand services using local storages within passive optical networks.
Abeywickrama, Sandu; Wong, Elaine
2013-01-28
At present, distributed storage systems are being widely studied to alleviate the Internet traffic build-up caused by high-bandwidth, on-demand applications. Distributed storage arrays located locally within the passive optical network were previously proposed to deliver Video-on-Demand services. As an added feature, a popularity-aware caching algorithm was also proposed to dynamically maintain the most popular videos in the storage arrays of such local storages. In this paper, we present a new dynamic bandwidth allocation algorithm to improve Video-on-Demand services over passive optical networks using local storages. The algorithm exploits standard control packets to reduce the time taken for the initial request communication between the customer and the central office, and to maintain the set of popular movies in the local storage. We conduct packet-level simulations to perform a comparative analysis of the Quality-of-Service attributes of two passive optical networks, namely the conventional passive optical network and one that is equipped with a local storage. Results from our analysis highlight that strategic placement of a local storage inside the network enables services to be delivered with improved Quality-of-Service to the customer. We further formulate power consumption models of both architectures to examine the trade-off between enhanced Quality-of-Service performance and the increased power requirement of implementing a local storage within the network.
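A hedged sketch of a popularity-aware cache of the kind described: the local storage keeps the k most-requested videos, swapping out the least popular title when a newly popular one overtakes it. This is a generic eviction policy for illustration, not the paper's exact algorithm, and it omits the control-packet machinery.

    from collections import Counter

    class PopularityCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.requests = Counter()           # long-run per-title request tallies
            self.cached = set()

        def request(self, video):
            self.requests[video] += 1
            if video in self.cached:
                return "local hit"              # served from the local storage
            if len(self.cached) < self.capacity:
                self.cached.add(video)
            else:
                coldest = min(self.cached, key=lambda v: self.requests[v])
                if self.requests[video] > self.requests[coldest]:
                    self.cached.remove(coldest)  # swap in the more popular title
                    self.cached.add(video)
            return "fetched from central office"

    cache = PopularityCache(capacity=2)
    for v in ["a", "b", "a", "c", "a", "c", "c", "b"]:
        print(v, "->", cache.request(v))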
A MapReduce approach to diminish imbalance parameters for big deoxyribonucleic acid dataset.
Kamal, Sarwar; Ripon, Shamim Hasnat; Dey, Nilanjan; Ashour, Amira S; Santhi, V
2016-07-01
In the age of the information superhighway, big data play a significant role in information processing, extraction, retrieval and management. In computational biology, the continuous challenge is to manage the biological data. Data mining techniques are sometimes imperfect for new space and time requirements, so it is critical to process massive amounts of data to retrieve knowledge. The existing software and automated tools to handle big data sets are not sufficient. As a result, an expandable mining technique that enfolds the large storage and processing capability of distributed or parallel processing platforms is essential. In this analysis, a contemporary distributed clustering methodology for imbalance data reduction using a k-nearest neighbor (K-NN) classification approach is introduced. The pivotal objective of this work is to represent real training data sets with a reduced number of elements or instances. These reduced data sets ensure faster data classification and standard storage management with less sensitivity. However, general data reduction methods cannot manage very big data sets. To minimize these difficulties, a MapReduce-oriented framework is designed using various clusters of automated contents, comprising multiple algorithmic approaches. To test the proposed approach, a real DNA (deoxyribonucleic acid) dataset consisting of 90 million pairs was used. The proposed model reduces the imbalanced data sets derived from large-scale data sets without loss of accuracy. The obtained results show that the MapReduce-based K-NN classifier provided accurate results for big DNA data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
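A hedged sketch of one way K-NN-based instance reduction can work: majority-class points whose k nearest neighbours all share their label are dropped as redundant, shrinking an imbalanced training set while keeping boundary points. This is a plausible reading of the approach rather than the paper's exact algorithm, the MapReduce sharding layer is omitted, and the data are synthetic.

    import numpy as np

    def knn_reduce(X, y, majority_label, k=3):
        keep = []
        for i in range(len(X)):
            if y[i] != majority_label:
                keep.append(i)                   # always keep minority points
                continue
            d = np.linalg.norm(X - X[i], axis=1)
            nn = np.argsort(d)[1:k + 1]          # k nearest neighbours, excluding self
            if not np.all(y[nn] == y[i]):
                keep.append(i)                   # keep only boundary majority points
        return X[keep], y[keep]

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (5, 2))])
    y = np.array([0] * 50 + [1] * 5)             # imbalanced: 50 majority, 5 minority
    Xr, yr = knn_reduce(X, y, majority_label=0)
    print(len(X), "->", len(Xr))                 # reduced set, minority fully retained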
Dubrou, S; Konjek, J; Macheras, E; Welté, B; Guidicelli, L; Chignon, E; Joyeux, M; Gaillard, J L; Heym, B; Tully, T; Sapriel, G
2013-09-01
Nonpigmented and late-pigmenting rapidly growing mycobacteria (RGM) have been reported to commonly colonize water production and distribution systems. However, there is little information about the nature and distribution of RGM species within the different parts of such complex networks or about their clustering into specific RGM species communities. We conducted a large-scale survey between 2007 and 2009 in the Parisian urban tap water production and distribution system. We analyzed 1,418 water samples from 36 sites, covering all production units, water storage tanks, and distribution units; RGM isolates were identified by using rpoB gene sequencing. We detected 18 RGM species and putative new species, with most isolates being Mycobacterium chelonae and Mycobacterium llatzerense. Using hierarchical clustering and principal-component analysis, we found that RGM were organized into various communities correlating with water origin (groundwater or surface water) and location within the distribution network. Water treatment plants were more specifically associated with species of the Mycobacterium septicum group. On average, M. chelonae dominated network sites fed by surface water, and M. llatzerense dominated those fed by groundwater. Overall, the M. chelonae prevalence index increased along the distribution network and was associated with a correlative decrease in the prevalence index of M. llatzerense, suggesting competitive or niche exclusion between these two dominant species. Our data describe the great diversity and complexity of RGM species living in the interconnected environments that constitute the water production and distribution system of a large city and highlight the prevalence index of the potentially pathogenic species M. chelonae in the distribution network.
Effects of voltage control in utility interactive dispersed storage and generation systems
NASA Technical Reports Server (NTRS)
Kirkham, H.; Das, R.
1983-01-01
When a small generator is connected to the distribution system, the voltage at the point of interconnection is determined largely by the system and not by the generator. The effects of a number of different voltage control strategies in the generator on the generator itself, on the load voltage and on the distribution system are examined. Synchronous generators with three kinds of exciter control are considered, as well as induction generators and dc/ac inverters, with and without capacitor compensation. The effect of varying input power during operation (which may be experienced by generators based on renewable resources) is explored, as well as the effect of connecting and disconnecting the generator at ten percent of its rated power. Operation with a constant, slightly lagging power factor is shown to have some advantages.
Lossless compression of image data products on the FIFE CD-ROM series
NASA Technical Reports Server (NTRS)
Newcomer, Jeffrey A.; Strebel, Donald E.
1993-01-01
How do you store enough of the key data sets, from a total of 120 gigabytes of data collected for a scientific experiment, on a collection of CD-ROMs small enough to distribute to a broad scientific community? In such an application, where information loss is unacceptable, lossless compression algorithms are the only choice. Although lossy compression algorithms can provide an order-of-magnitude improvement in compression ratios over lossless algorithms, the information that is lost is often part of the key scientific precision of the data. Therefore, lossless compression algorithms are and will continue to be extremely important in minimizing archival storage requirements and distribution of large Earth and space science (ESS) data sets while preserving the essential scientific precision of the data.
Data distribution method of workflow in the cloud environment
NASA Astrophysics Data System (ADS)
Wang, Yong; Wu, Junjuan; Wang, Ying
2017-08-01
Cloud computing provides workflow applications with high-efficiency computation and large storage capacity, but it also poses challenges for the protection of trade secrets and other private data. Because protecting private data increases data transmission time, this paper presents a new data allocation algorithm, based on the degree of collaborative data damage, that improves the existing allocation strategy in which the safety of public-cloud computation depends on the private cloud. The improved method uses static allocation in the initial stage, partitioning only the non-confidential data, and then dynamically adjusts the data distribution scheme as new data are generated during the operational phase. The experimental results show that the improved method is effective in reducing the data transmission time.
Hybrid electric vehicle power management system
Bissontz, Jay E.
2015-08-25
Voltage levels/states of charge are kept level among a plurality of high-voltage DC electrical storage devices/traction battery packs that are arrayed in series to support operation of a hybrid electric vehicle drive train. Each high-voltage DC electrical storage device supports a high-voltage power bus, to which at least one controllable load is connected, and at least a first lower-voltage electrical distribution system. The rate of power transfer from the high-voltage DC electrical storage devices to the at least first lower-voltage electrical distribution system is controlled by DC-DC converters.
Battery management system with distributed wireless sensors
Farmer, Joseph C.; Bandhauer, Todd M.
2016-02-23
A system for monitoring parameters of an energy storage system having a multiplicity of individual energy storage cells. A radio frequency identification and sensor unit is connected to each of the individual energy storage cells. The radio frequency identification and sensor unit operates to sense the parameter of each individual energy storage cell and provides radio frequency transmission of the parameters of each individual energy storage cell. A management system monitors the radio frequency transmissions from the radio frequency identification and sensor units for monitoring the parameters of the energy storage system.
Energy storage management system with distributed wireless sensors
Farmer, Joseph C.; Bandhauer, Todd M.
2015-12-08
An energy storage system having multiple different types of energy storage and conversion devices. Each device is equipped with one or more sensors and RFID tags to communicate sensor information wirelessly to a central electronic management system, which is used to control the operation of each device. Each device can have multiple RFID tags and sensor types. Several energy storage and conversion devices can be combined.
NASA Astrophysics Data System (ADS)
Czuba, Jonathan A.; Foufoula-Georgiou, Efi; Gran, Karen B.; Belmont, Patrick; Wilcock, Peter R.
2017-05-01
Understanding how sediment moves along source to sink pathways through watersheds—from hillslopes to channels and in and out of floodplains—is a fundamental problem in geomorphology. We contribute to advancing this understanding by modeling the transport and in-channel storage dynamics of bed material sediment on a river network over a 600 year time period. Specifically, we present spatiotemporal changes in bed sediment thickness along an entire river network to elucidate how river networks organize and process sediment supply. We apply our model to sand transport in the agricultural Greater Blue Earth River Basin in Minnesota. By casting the arrival of sediment to links of the network as a Poisson process, we derive analytically (under supply-limited conditions) the time-averaged probability distribution function of bed sediment thickness for each link of the river network for any spatial distribution of inputs. Under transport-limited conditions, the analytical assumptions of the Poisson arrival process are violated (due to in-channel storage dynamics) where we find large fluctuations and periodicity in the time series of bed sediment thickness. The time series of bed sediment thickness is the result of dynamics on a network in propagating, altering, and amalgamating sediment inputs in sometimes unexpected ways. One key insight gleaned from the model is that there can be a small fraction of reaches with relatively low-transport capacity within a nonequilibrium river network acting as "bottlenecks" that control sediment to downstream reaches, whereby fluctuations in bed elevation can dissociate from signals in sediment supply.
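As an illustration of the parcel-arrival idea above, the following is a minimal sketch, not the authors' code: if sediment parcels arrive at a link as a Poisson process with rate lam and each occupies the link for a fixed residence time tau (an M/D/∞ queue), the stationary number of parcels in storage is Poisson with mean lam·tau. All parameter values are illustrative, not values from the Greater Blue Earth study.

```python
# Minimal sketch: Poisson arrivals of sediment parcels to one network link.
import numpy as np

rng = np.random.default_rng(42)
lam, tau, t_end = 0.8, 5.0, 60000.0     # parcels/yr, residence time (yr), years

# Poisson arrival times; each parcel occupies the link for tau years.
n_events = rng.poisson(lam * t_end)
arrivals = np.sort(rng.uniform(0.0, t_end, n_events))
departures = arrivals + tau

# Count parcels in storage at regular sample times (an M/D/inf queue);
# theory says the stationary count is Poisson with mean lam * tau.
times = np.linspace(tau, t_end, 20000)
in_storage = np.searchsorted(arrivals, times) - np.searchsorted(departures, times)

print("simulated mean parcels in storage:", round(in_storage.mean(), 3))
print("analytical mean (lam * tau):      ", lam * tau)
```

Multiplying the parcel count by a per-parcel thickness gives the kind of time-averaged bed sediment thickness distribution referred to above.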
NASA Astrophysics Data System (ADS)
Tulebekova, S.; Saliyev, D.; Zhang, D.; Kim, J. R.; Karabay, A.; Turlybek, A.; Kazybayeva, L.
2017-11-01
Compressed air energy storage technology is one of the promising methods that have high reliability, economic feasibility and low environmental impact. Current applications of the technology are mainly limited to energy storage for power plants using large scale underground caverns. This paper explores the possibility of making use of reinforced concrete pile foundations to store renewable energy generated from solar panels or windmills attached to building structures. The energy will be stored inside the pile foundation with hollow sections via compressed air. Given the relatively small volume of storage provided by the foundation, the required storage pressure is expected to be higher than that in the large-scale underground cavern. The high air pressure typically associated with large temperature increase, combined with structural loads, will make the pile foundation in a complicated loading condition, which might cause issues in the structural and geotechnical safety. This paper presents a preliminary analytical study on the performance of the pile foundation subjected to high pressure, large temperature increase and structural loads. Finite element analyses on pile foundation models, which are built from selected prototype structures, have been conducted. The analytical study identifies maximum stresses in the concrete of the pile foundation under combined pressure, temperature change and structural loads. Recommendations have been made for the use of reinforced concrete pile foundations for renewable energy storage.
NASA Astrophysics Data System (ADS)
Lv, Z. H.; Li, Q.; Huang, R. W.; Liu, H. M.; Liu, D.
2016-08-01
Based on the discussion about topology structure of integrated distributed photovoltaic (PV) power generation system and energy storage (ES) in single or mixed type, this paper focuses on analyzing grid-connected performance of integrated distributed photovoltaic and energy storage (PV-ES) systems, and proposes a comprehensive evaluation index system. Then a multi-level fuzzy comprehensive evaluation method based on grey correlation degree is proposed, and the calculations for weight matrix and fuzzy matrix are presented step by step. Finally, a distributed integrated PV-ES power generation system connected to a 380 V low voltage distribution network is taken as the example, and some suggestions are made based on the evaluation results.
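A minimal sketch of a grey-correlation-weighted comprehensive evaluation follows. The three candidate PV-ES configurations, the index values, and the use of grey relational coefficients as weights are all invented for illustration; they do not reproduce the paper's index system or its fuzzy membership functions.

```python
# Minimal sketch: grey relational analysis feeding a weighted evaluation.
import numpy as np

# Rows: candidate PV-ES configurations; columns: normalized indices
# (e.g. power quality, ES utilization, voltage deviation), larger is better.
X = np.array([[0.82, 0.65, 0.90],
              [0.75, 0.88, 0.70],
              [0.60, 0.72, 0.95]])

ref = X.max(axis=0)                       # ideal reference sequence
delta = np.abs(X - ref)
rho = 0.5                                 # distinguishing coefficient
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

grey_degree = xi.mean(axis=1)             # grey relational degree per candidate
w = xi.mean(axis=0)                       # per-index coefficients as weights
w /= w.sum()
scores = X @ w                            # simple weighted evaluation

print("grey relational degree:", np.round(grey_degree, 3))
print("weights:", np.round(w, 3), "scores:", np.round(scores, 3))
```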
Entropy generation minimization for the sloshing phenomenon in half-full elliptical storage tanks
NASA Astrophysics Data System (ADS)
Saghi, Hassan
2018-02-01
In this paper, the entropy generation in the sloshing phenomenon was obtained for elliptical storage tanks and the optimum tank geometry was suggested. To do this, a numerical model was developed to simulate the sloshing phenomenon using a coupled Reynolds-Averaged Navier-Stokes (RANS) solver and the Volume-of-Fluid (VOF) method. The RANS equations were discretized and solved using the staggered-grid finite difference and SMAC methods, and the available data were used for model validation. Some parameters consisting of the maximum free surface displacement (MFSD), maximum horizontal force exerted on the tank perimeter (MHF), tank perimeter (TP), and total entropy generation (Sgen) were introduced as design criteria for elliptical storage tanks. The entropy generation distribution provides designers with useful information about the causes of the energy loss. In this step, horizontal periodic sway motions of the form X = a_m sin(ωt) were applied to elliptical storage tanks with different aspect ratios (AR), namely ratios of the large diameter to the small diameter of the elliptical tank. Then, the effect of a_m and ω on the results was studied. The results show that the relation between MFSD and MHF is almost linear with respect to the sway motion amplitude. Moreover, the results show that an increase in the AR causes a decrease in the MFSD and MHF. The results also show that the relation between MFSD and MHF is nonlinear with respect to the sway motion angular frequency, and that an increase in the AR causes this relation to become linear. In addition, MFSD and MHF were minimized for a sway motion with a 7 rad/s angular frequency. Finally, the results show that an elliptical storage tank with AR = 1.2-1.4 is the optimum section.
Wang, Guanyao; Huang, Yanhui; Wang, Yuxin; Jiang, Pingkai; Huang, Xingyi
2017-08-09
Dielectric polymer nanocomposites have received keen interest due to their potential application in energy storage. Nevertheless, the large contrast in dielectric constant between the polymer and nanofillers usually results in a significant decrease in the breakdown strength of the nanocomposites, which is unfavorable for enhancing energy storage capability. Herein, BaTiO3 nanowires (NWs) encapsulated by TiO2 shells of variable thickness were utilized to fabricate dielectric polymer nanocomposites. Compared with nanocomposites with bare BaTiO3 NWs, significantly enhanced energy storage capability was achieved for nanocomposites with TiO2-encapsulated BaTiO3 NWs. For instance, an ultrahigh energy density of 9.53 J cm⁻³ at 440 MV m⁻¹ could be obtained for nanocomposites comprising core-shell structured nanowires, much higher than that of nanocomposites with 5 wt% raw ones (5.60 J cm⁻³ at 360 MV m⁻¹). The discharged energy density of the proposed nanocomposites with 5 wt% mTiO2@BaTiO3-1 NWs at 440 MV m⁻¹ seems to rival or exceed those of some previously reported nanocomposites (mostly comprising core-shell structured nanofillers). More notably, this study revealed that the energy storage capability of the nanocomposites can be tailored by the TiO2 shell thickness. Finite element simulations were employed to analyze the electric field distribution in the nanocomposites. The enhanced energy storage capability should be mainly attributed to the smoother gradient of dielectric constant between the nanofillers and the polymer matrix, which alleviated the electric field concentration and leakage current in the polymer matrix. The methods and results herein offer a feasible approach to construct high-energy-density polymer nanocomposites with core-shell structured nanowires.
Classification of Prairie basins by their hysteretic connected functions
NASA Astrophysics Data System (ADS)
Shook, K.; Pomeroy, J. W.
2017-12-01
Diagnosing climate change impacts in the post-glacial landscapes of the North American Prairies through hydrological modelling is made difficult by drainage basin physiography. The region is cold, dry and flat with poorly developed stream networks, and so the basin area that is hydrologically connected to the stream outlet varies with basin depressional storage. The connected area controls the contributing area for runoff reaching the stream outlet. As depressional storage fills, ponds spill from one to another; the chain of spilling ponds allows water to flow over the landscape and increases the connected area of the basin. As depressional storage decreases, the connected fraction drops dramatically. Detailed, fine-scale models and remote sensing have shown that the relationship between connected area and depressional storage is hysteretic in Prairie basins and that the nature of the hysteresis varies with basin physiography. This hysteresis needs to be represented in hydrological models to calculate contributing area, and therefore streamflow hydrographs. Parameterisations of the hysteresis are needed for large-scale models used for climate change diagnosis. However, use of parameterisations of hysteresis requires guidance on how to represent them for a particular basin. This study shows that it is possible to relate the shape of hysteretic functions as determined by detailed models to the overall physiography of the basin, such as the fraction of the basin below the outlet, and to remote sensing estimates of depressional storage, using the size distribution and location of maximum ponded water areas. By classifying basin physiography, the hysteresis of connected area-storage relationships can be estimated for basins that do not have high-resolution topographic data, and without computationally expensive high-resolution modelling.
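The fill-and-spill hysteresis described above can be reproduced with a toy model: a sketch with illustrative numbers only, not a Prairie basin parameterisation, in which each depression counts as connected only once it fills to its spill point, so the connected fraction differs between wetting and drying at the same total storage.

```python
# Minimal sketch: hysteresis of connected fraction vs. depressional storage.
import numpy as np

rng = np.random.default_rng(7)
cap = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # depression capacities
level = np.zeros_like(cap)

def connected_fraction(level, cap):
    # A depression is "connected" when full to its spill point.
    return float(np.mean(level >= cap - 1e-9))

wet, dry = [], []
for _ in range(200):                      # wetting: uniform input, excess spills
    level = np.minimum(level + 0.02, cap)
    wet.append((level.sum() / cap.sum(), connected_fraction(level, cap)))
for _ in range(200):                      # drying: uniform evaporative loss
    level = np.maximum(level - 0.02, 0.0)
    dry.append((level.sum() / cap.sum(), connected_fraction(level, cap)))

# At comparable storage fractions the two curves disagree: hysteresis.
print("end of wetting:   ", wet[-1])
print("middle of drying: ", dry[100])
```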
Use of Schema on Read in Earth Science Data Archives
NASA Astrophysics Data System (ADS)
Petrenko, M.; Hegde, M.; Smit, C.; Pilone, P.; Pham, L.
2017-12-01
Traditionally, NASA Earth Science data archives have file-based storage using proprietary data file formats, such as HDF and HDF-EOS, which are optimized to support fast and efficient storage of spaceborne and model data as they are generated. The use of file-based storage essentially imposes an indexing strategy based on data dimensions. In most cases, NASA Earth Science data uses time as the primary index, leading to poor performance in accessing data in spatial dimensions. For example, producing a time series for a single spatial grid cell involves accessing a large number of data files. With exponential growth in data volume due to the ever-increasing spatial and temporal resolution of the data, using file-based archives poses significant performance and cost barriers to data discovery and access. Storing and disseminating data in proprietary data formats imposes an additional access barrier for users outside the mainstream research community. At the NASA Goddard Earth Sciences Data Information Services Center (GES DISC), we have evaluated applying the "schema-on-read" principle to data access and distribution. We used Apache Parquet to store geospatial data, and have exposed data through Amazon Web Services (AWS) Athena, AWS Simple Storage Service (S3), and Apache Spark. Using the "schema-on-read" approach allows customization of indexing—spatial or temporal—to suit the data access pattern. The storage of data in open formats such as Apache Parquet has widespread support in popular programming languages. A wide range of solutions for handling big data lowers the access barrier for all users. This presentation will discuss formats used for data storage, frameworks with support for "schema-on-read" used for data access, and common use cases covering data usage patterns seen in a geospatial data archive.
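A minimal sketch of the schema-on-read pattern described above, using pandas and pyarrow; the file name, columns, and variable are illustrative and do not reflect the actual GES DISC data layout.

```python
# Minimal sketch: store gridded values as flat records in Parquet, then pick
# the index that matters (spatial here) at query time rather than write time.
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.DataFrame({
    "time": pd.date_range("2017-01-01", periods=4, freq="D").repeat(2),
    "lat":  [10.5, 20.5] * 4,
    "lon":  [-30.0, 40.0] * 4,
    "precip": [1.2, 0.0, 3.4, 0.1, 0.0, 2.2, 5.1, 0.3],
})
pq.write_table(pa.Table.from_pandas(df), "granules.parquet")

# Time series for one grid cell via predicate pushdown, instead of opening
# many time-indexed HDF files.
cell = pq.read_table(
    "granules.parquet",
    filters=[("lat", "==", 10.5), ("lon", "==", -30.0)],
).to_pandas()
print(cell[["time", "precip"]])
```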
Software Defined Cyberinfrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, Ian; Blaiszik, Ben; Chard, Kyle
Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
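A minimal sketch of the if-trigger-then-action idea, assuming a simple polling watcher and invented glob-based rules; the real system attaches such rules to storage-system events rather than to a polled directory.

```python
# Minimal sketch: IFTA-style rules fired when matching files appear.
import fnmatch
import pathlib
import time

RULES = [
    # (trigger glob, action) -- both hypothetical
    ("*.tif", lambda p: print(f"index {p.name} for search")),
    ("*.h5",  lambda p: print(f"publish {p.name} to archive")),
]

def watch(directory: str, interval: float = 2.0) -> None:
    """Poll a directory and apply each rule to newly seen files."""
    seen = set()
    root = pathlib.Path(directory)
    while True:
        for path in root.iterdir():
            if path in seen or not path.is_file():
                continue
            seen.add(path)
            for pattern, action in RULES:
                if fnmatch.fnmatch(path.name, pattern):
                    action(path)          # the "then-action" part
        time.sleep(interval)

# watch("/data/instrument")   # uncomment to run against a real directory
```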
Global Cryptosporidium Loads from Livestock Manure.
Vermeulen, Lucie C; Benders, Jorien; Medema, Gertjan; Hofstra, Nynke
2017-08-01
Understanding the environmental pathways of Cryptosporidium is essential for effective management of human and animal cryptosporidiosis. In this paper we aim to quantify livestock Cryptosporidium spp. loads to land on a global scale using spatially explicit process-based modeling, and to explore the effect of manure storage and treatment on oocyst loads using scenario analysis. Our model GloWPa-Crypto L1 calculates a total global Cryptosporidium spp. load from livestock manure of 3.2 × 10²³ oocysts per year. Cattle, especially calves, are the largest contributors, followed by chickens and pigs. Spatial differences are linked to animal spatial distributions. North America, Europe, and Oceania together account for nearly a quarter of the total oocyst load, meaning that the developing world accounts for the largest share. GloWPa-Crypto L1 is most sensitive to oocyst excretion rates, due to the large variation reported in the literature. We compared the current situation to four alternative management scenarios. We find that although manure storage halves oocyst loads, manure treatment, especially of cattle manure and particularly at elevated temperatures, has a larger load reduction potential than manure storage (up to 4.6 log units). Regions with high reduction potential include India, Bangladesh, western Europe, China, several countries in Africa, and New Zealand.
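The scenario arithmetic above can be checked directly; this sketch only restates the abstract's numbers (the modeled total of 3.2 × 10²³ oocysts per year, the halving attributed to storage, and the up-to-4.6-log reduction attributed to treatment).

```python
# Minimal check of the reported load reductions.
import math

total = 3.2e23                        # oocysts per year (GloWPa-Crypto L1)
after_storage = total / 2             # "manure storage halves oocyst loads"
after_treatment = total / 10**4.6     # "up to 4.6 log units"

print(f"after storage:   {after_storage:.2e} "
      f"({math.log10(total / after_storage):.2f} log reduction)")
print(f"after treatment: {after_treatment:.2e} "
      f"({math.log10(total / after_treatment):.2f} log reduction)")
```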
Carbon Storages in Plantation Ecosystems in Sand Source Areas of North Beijing, China
Liu, Xiuping; Zhang, Wanjun; Cao, Jiansheng; Shen, Huitao; Zeng, Xinhua; Yu, Zhiqiang; Zhao, Xin
2013-01-01
Afforestation is a mitigation option to reduce increased atmospheric carbon dioxide levels as well as the predicted high possibility of climate change. In this paper, vegetation survey data, a statistical database, the National Forest Resource Inventory database, and allometric equations were used to estimate carbon density (carbon mass per hectare) and carbon storage, and to identify the size and spatial distribution of forest carbon sinks in plantation ecosystems in the sand source areas of north Beijing, China. From 2001 to the end of 2010, the forest area increased by more than 2.3 million ha, and total carbon storage in forest ecosystems was 173.02 Tg C, of which 82.80 percent was contained in soil in the top 0–100 cm layer. Younger forests have a larger potential for enhancing carbon sequestration in terrestrial ecosystems than older ones. Regarding future afforestation efforts, it will be more effective to increase forest area and vegetation carbon density through selection of appropriate tree species and stand structure according to local climate and soil conditions, and application of proper forest management including land-shaping, artificial tending and fencing of plantations. It would also be important to protect the organic carbon in surface soils during forest management. PMID:24349223
Upper Atmosphere Research Satellite (UARS) trade analysis
NASA Technical Reports Server (NTRS)
Fox, M. M.; Nebb, J.
1983-01-01
The Upper Atmosphere Research Satellite (UARS), which will collect data pertinent to the Earth's upper atmosphere, is described. The collected data will be sent to the central data handling facility (CDHF) via the UARS ground system, where the data will be processed and distributed to the remote analysis computer systems (RACS). An overview of the UARS ground system is presented. Three configurations were developed for the CDHF-RACS system. The CDHF configurations are discussed: the IBM CDHF configuration, the UNIVAC CDHF configuration, and the VAX cluster CDHF configuration are presented. The RACS configurations (IBM RACS, UNIVAC RACS, and VAX RACS) are detailed. Due to the large on-line data volume, estimated at approximately 100 GB, a mass storage system is considered essential to the UARS CDHF. Mass storage systems were analyzed, and the Braegan ATL, the RCA optical disk, the IBM 3850, and the MASSTOR M860 are discussed. It is determined that the type of mass storage system most suitable to UARS is the automated tape/cartridge device. Two devices of this type, the IBM 3850 and the MASSTOR MSS, are analyzed, and the applicable tape/cartridge device is incorporated into the three CDHF-RACS configurations.
The performance of residential micro-cogeneration coupled with thermal and electrical storage
NASA Astrophysics Data System (ADS)
Kopf, John
Over 80% of residential secondary energy consumption in Canada and Ontario is used for space and water heating. The peak electricity demands resulting from residential energy consumption increase the reliance on fossil-fuel generation stations. Distributed energy resources can help to decrease the reliance on central generation stations. Presently, distributed energy resources such as solar photovoltaic, wind and bio-mass generation are subsidized in Ontario. Micro-cogeneration is an emerging technology that can be implemented as a distributed energy resource within residential or commercial buildings. Micro-cogeneration has the potential to reduce a building's energy consumption by simultaneously generating thermal and electrical power on-site. The coupling of a micro-cogeneration device with electrical storage can improve the system's ability to reduce peak electricity demands. The performance potential of micro-cogeneration devices has yet to be fully realized. This research addresses the performance of a residential micro-cogeneration device and its ability to meet peak occupant electrical loads when coupled with electrical storage. An integrated building energy model of a residential micro-cogeneration system was developed: the house, the micro-cogeneration device, all balance-of-plant and space heating components, a thermal storage device, an electrical storage device, as well as the occupant electrical and hot water demands. This model simulated the performance of a micro-cogeneration device coupled to an electrical storage system within a Canadian household. A customized controller was created in ESP-r to examine the impact of various system control strategies. The economic performance of the system was assessed from the perspective of a local energy distribution company and an end-user under hypothetical electricity export purchase price scenarios. It was found that with certain control strategies the micro-cogeneration system was able to improve the economic performance for both the end user and the local distribution company.
Ren, Long; Hui, K. N.; Hui, K. S.; Liu, Yundan; Qi, Xiang; Zhong, Jianxin; Du, Yi; Yang, Jianping
2015-01-01
Novel 3D hierarchical porous graphene aerogels (HPGA) with uniform and tunable meso-pores (e.g., 21 and 53 nm) on graphene nanosheets (GNS) were prepared by a hydrothermal self-assembly process and an in-situ carbothermal reaction. The size and distribution of the meso-pores on the individual GNS were uniform and could be tuned by controlling the sizes of the Co3O4 NPs used in the hydrothermal reaction. This unique architecture of HPGA prevents the stacking of GNS and promises more electrochemically active sites that enhance the electrochemical storage level significantly. HPGA, as a lithium-ion battery anode, exhibited superior electrochemical performance, including a high reversible specific capacity of 1100 mAh/g at a current density of 0.1 A/g, outstanding cycling stability and excellent rate performance. Even at a large current density of 20 A/g, the reversible capacity was retained at 300 mAh/g, which is larger than that of most porous carbon-based anodes reported, suggesting it to be a promising candidate for energy storage. The proposed 3D HPGA is expected to provide an important platform that can promote the development of 3D topological porous systems in a range of energy storage and generation fields. PMID:26382852
I/O load balancing for big data HPC applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Arnab K.; Goyal, Arpit; Wang, Feiyi
High Performance Computing (HPC) big data problems require efficient distributed storage systems. However, at scale, such storage systems often experience load imbalance and resource contention due to two factors: the bursty nature of scientific application I/O, and the complex I/O path that is without centralized arbitration and control. For example, the extant Lustre parallel file system, which supports many HPC centers, comprises numerous components connected via custom network topologies, and serves varying demands of a large number of users and applications. Consequently, some storage servers can be more loaded than others, which creates bottlenecks and reduces overall application I/O performance. Existing solutions typically focus on per-application load balancing, and thus are not as effective given their lack of a global view of the system. In this paper, we propose a data-driven approach to load balance the I/O servers at scale, targeted at Lustre deployments. To this end, we design a global mapper on the Lustre Metadata Server, which gathers runtime statistics from key storage components on the I/O path, and applies Markov chain modeling and a minimum-cost maximum-flow algorithm to decide where data should be placed. Evaluation using a realistic system simulator and a real setup shows that our approach yields better load balancing, which in turn can improve end-to-end performance.
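A minimal sketch of the placement step, assuming networkx and toy loads: each new file pushes one unit of flow through a minimum-cost maximum-flow network whose edge weights encode current server load, so lightly loaded servers are preferred. The Markov-chain load estimation and the Lustre integration of the paper are omitted.

```python
# Minimal sketch: min-cost max-flow file placement across storage servers.
import networkx as nx

files = ["f1", "f2", "f3", "f4"]
load = {"ost0": 5, "ost1": 1, "ost2": 3}     # observed load as a cost proxy

G = nx.DiGraph()
for f in files:
    G.add_edge("src", f, capacity=1, weight=0)
    for server, cost in load.items():
        G.add_edge(f, server, capacity=1, weight=cost)
for server in load:
    G.add_edge(server, "sink", capacity=2, weight=0)   # per-server cap

flow = nx.max_flow_min_cost(G, "src", "sink")
placement = {f: next(s for s, v in flow[f].items() if v) for f in files}
print(placement)    # most files land on the lightly loaded ost1 and ost2
```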
Signatures of Extended Storage of Used Nuclear Fuel in Casks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauch, Eric Benton
2016-09-28
As the amount of used nuclear fuel continues to grow, more and more used nuclear fuel will be transferred to storage casks. A consolidated storage facility, where at least 10,000 MTHM of fuel will be stored, is currently in the planning stages; this site will have potentially thousands of casks once it is operational. A facility this large presents new safeguards and nuclear material accounting concerns. A new signature based on the distribution of neutron sources and multiplication within casks was developed as part of the Department of Energy Office of Nuclear Energy's Material Protection, Accounting and Control Technologies (MPACT) campaign. Under this project we looked at fingerprinting each cask's neutron signature. Each cask has a unique set of fuel, with a unique spread of initial enrichment, burnup, cooling time, and power history. The unique set of fuel creates a unique signature of neutron intensity based on the arrangement of the assemblies. The unique arrangement of neutron sources and multiplication produces a reliable and unique identification of the cask that has been shown to be relatively constant over long time periods. The work presented here could be used to recover from a loss of continuity of knowledge at the storage site. This presentation will show the steps used to simulate and form this signature from the start of the effort through its conclusion in September 2016.
Efficient Management of Certificate Revocation Lists in Smart Grid Advanced Metering Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cebe, Mumin; Akkaya, Kemal
Advanced Metering Infrastructure (AMI) forms a communication network for the collection of power data from smart meters in the Smart Grid. As the communication within an AMI needs to be secure, key management becomes an issue due to overhead and limited resources. While using public keys eliminates some of the overhead of key management, there are still challenges regarding the certificates that store and certify the public keys. In particular, distribution and storage of the certificate revocation list (CRL) is a major challenge due to the cost of distribution and storage in AMI networks, which typically consist of wireless multi-hop networks. Motivated by the need to keep CRL distribution and storage cost-effective and scalable, in this paper we present a distributed CRL management model utilizing the idea of distributed hash tables (DHTs) from peer-to-peer (P2P) networks. The basic idea is to share the burden of storage of CRLs among all the smart meters by exploiting the meshing capability of the smart meters among each other. Thus, using DHTs not only reduces the space requirements for CRLs but also makes CRL updates more convenient. We implemented this structure in ns-3 using the IEEE 802.11s mesh standard as a model for AMI and demonstrated its superior performance with respect to traditional methods of CRL management through extensive simulations.
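A minimal sketch of the sharing idea, under the assumption that revoked-certificate entries are hashed onto the meters of the mesh so that each meter stores only a slice of the CRL; real DHTs add routing and replication, and the meter names and serial numbers here are invented.

```python
# Minimal sketch: hash-partitioning CRL entries across smart meters.
import hashlib

METERS = [f"meter-{i:03d}" for i in range(16)]   # AMI mesh nodes (invented)

def responsible_meter(serial: str) -> str:
    """Map a revoked-certificate serial number to the meter storing it."""
    h = int(hashlib.sha256(serial.encode()).hexdigest(), 16)
    return METERS[h % len(METERS)]

for serial in ["3a9f2201", "77c2ab9b", "d4100e55"]:  # invented serials
    print(serial, "->", responsible_meter(serial))
```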
Popovic, Olga; Jensen, Lars Stoumann
2012-08-01
Chemical-mechanical separation of pig slurry into a solid fraction rich in dry matter, P, Cu and Zn and a liquid fraction rich in inorganic N but poor in dry matter may allow farmers to manage surplus slurry by exporting the solid fraction to regions with no nutrient surplus. Pig slurry can be applied to arable land only in certain periods during the year, so it is commonly stored prior to field application. This study investigated the effect of storage duration and temperature on chemical characteristics and P, Cu and Zn distribution between particle size classes of raw slurry and its liquid separation fraction. Dry matter, VFA, total N and ammonium content of both slurry products decreased during storage and were affected by temperature, showing higher losses at higher storage temperatures. In both products, total P, Cu and Zn concentrations were not significantly affected by storage duration or temperature. Particle size distribution was affected by slurry separation, storage duration and temperature. In raw slurry, particles larger than 1 mm decreased, whereas particles 250 μm-1 mm increased. The liquid fraction produced was free of particles >500 μm, with the highest proportions of P, Cu and Zn in the smallest particle size class (<25 μm). The proportion of particles <25 μm increased when the liquid fraction was stored at 5 °C, but decreased at 25 °C. Regardless of temperature, distribution of P, Cu and Zn over particle size classes followed a similar pattern to dry matter.
Cost-Efficient Storage of Cryogens
NASA Technical Reports Server (NTRS)
Fesmire, J. E.; Sass, J. P.; Nagy, Z.; Sojoumer, S. J.; Morris, D. L.; Augustynowicz, S. D.
2007-01-01
NASA's cryogenic infrastructure that supports launch vehicle operations and propulsion testing is reaching an age where major refurbishment will soon be required. Key elements of this infrastructure are the large double-walled cryogenic storage tanks used for both space vehicle launch operations and rocket propulsion testing at the various NASA field centers. Perlite powder has historically been the insulation material of choice for these large storage tank applications. New bulk-fill insulation materials, including glass bubbles and aerogel beads, have been shown to provide improved thermal and mechanical performance. A research testing program was conducted to investigate the thermal performance benefits as well as to identify operational considerations and associated risks associated with the application of these new materials in large cryogenic storage tanks. The program was divided into three main areas: material testing (thermal conductivity and physical characterization), tank demonstration testing (liquid nitrogen and liquid hydrogen), and system studies (thermal modeling, economic analysis, and insulation changeout). The results of this research work show that more energy-efficient insulation solutions are possible for large-scale cryogenic storage tanks worldwide and summarize the operational requirements that should be considered for these applications.
NASA Astrophysics Data System (ADS)
Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.
2017-12-01
Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional and global scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection-dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial distributions of groundwater recharge, nitrate storage and travel time. Intensive recharge rates are located mainly in the south-central and southwest parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale used in the PCR-GLOBWB model, leading to smoothing of the recharge estimations. Both models indicated large nitrate inventories in the south-central and southwest parts of the aquifer's outcrops, and the shortest travel times in the vadose zone are in the south-central and east parts of the outcrops. Our results suggest that, for the LPC at least, global scale models might be useful for highlighting the locations with higher recharge potential and nitrate contamination risk. Global modelling simulations appear ideal as a primary step in recognizing locations which require investigations at the plot, field and local scales.
NASA Astrophysics Data System (ADS)
Shiino, Masatoshi; Fukai, Tomoki
1993-08-01
Based on the self-consistent signal-to-noise analysis (SCSNA), capable of dealing with analog neural networks with a wide class of transfer functions, enhancement of the storage capacity of associative memory and the related statistical properties of neural networks are studied for random memory patterns. Two types of transfer functions with threshold parameter θ are considered, which are derived from the sigmoidal one to represent the output of three-state neurons. Neural networks having a monotonically increasing transfer function F_M, with F_M(u) = sgn(u) for |u| > θ and F_M(u) = 0 for |u| ≤ θ, are shown to make it impossible for the spin-glass state to coexist with retrieval states in a certain parameter region of θ and α (loading rate of memory patterns), implying a reduction in the number of spurious states. The behavior of the storage capacity with changing θ is qualitatively the same as that of Ising spin neural networks with varying temperature. On the other hand, the nonmonotonic transfer function F_NM, with F_NM(u) = sgn(u) for |u| < θ and F_NM(u) = 0 for |u| ≥ θ, gives rise to remarkable features in several respects. First, it yields a large enhancement of the storage capacity compared with the Amit-Gutfreund-Sompolinsky (AGS) value: with decreasing θ from θ = ∞, the storage capacity α_c of such a network increases from the AGS value (≈0.14) to attain its maximum value of ≈0.42 at θ ≈ 0.7, and afterwards decreases to vanish at θ = 0. Whereas for θ ≳ 1 the storage capacity α_c coincides with the value α̃_c determined by the SCSNA as the upper bound of α ensuring the existence of retrieval solutions, for θ ≲ 1 the α_c differs from α̃_c, with the result that the retrieval solutions claimed by the SCSNA are unstable for α_c < α < α̃_c. Second, in the case of θ < 1 the network can exhibit a new type of phase which appears as a result of a phase transition with respect to the non-Gaussian distribution of the local fields of neurons: the standard type of retrieval state with r ≠ 0 (i.e., finite width of the local field distribution), which is implied by the order-parameter equations of the SCSNA, disappears at a certain critical loading rate α_0, and for α ≤ α_0 a qualitatively different type of retrieval state comes into existence in which the width of the local field distribution vanishes (i.e., r = 0+). As a consequence, memory retrieval without errors becomes possible even in the saturation limit α ≠ 0. Results of computer simulations on the statistical properties of the novel phase with α ≤ α_0 are shown to be in satisfactory agreement with the theoretical results. The effect of introducing self-couplings on the storage capacity is also analyzed for the two types of networks. It is conspicuous for networks with F_NM, where the self-couplings increase the stability of the retrieval solutions of the SCSNA with small values of θ, leading to a remarkable enhancement of the storage capacity.
DAX - The Next Generation: Towards One Million Processes on Commodity Hardware.
Damon, Stephen M; Boyd, Brian D; Plassard, Andrew J; Taylor, Warren; Landman, Bennett A
2017-01-01
Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.
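The speedup reported above comes from replacing per-assessor REST round trips with direct database writes. The following is a hypothetical sketch of that contrast using sqlite3; the table and columns are invented and are not XNAT's schema.

```python
# Minimal sketch: one bulk database write instead of many per-item API calls.
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE assessor (id TEXT PRIMARY KEY, status TEXT)")

rows = [(f"assessor-{i:06d}", "NEED_INPUTS") for i in range(1000)]

t0 = time.perf_counter()
con.executemany("INSERT INTO assessor VALUES (?, ?)", rows)   # one bulk call
con.commit()
print(f"bulk insert of {len(rows)} rows: {time.perf_counter() - t0:.4f}s")
# A per-assessor REST API adds network latency and server overhead to every
# call, which is what dominated the original 65,040-second creation time.
```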
DAX - the next generation: towards one million processes on commodity hardware
NASA Astrophysics Data System (ADS)
Damon, Stephen M.; Boyd, Brian D.; Plassard, Andrew J.; Taylor, Warren; Landman, Bennett A.
2017-03-01
Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.
DAX - The Next Generation: Towards One Million Processes on Commodity Hardware
Boyd, Brian D.; Plassard, Andrew J.; Taylor, Warren; Landman, Bennett A.
2017-01-01
Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner. PMID:28919661
Peer-to-peer architecture for multi-departmental distributed PACS
NASA Astrophysics Data System (ADS)
Rosset, Antoine; Heuberger, Joris; Pysher, Lance; Ratib, Osman
2006-03-01
We have elected to explore peer-to-peer technology as an alternative to a centralized PACS architecture, given the increasing requirements for wide access to images inside and outside a radiology department, the goal being to allow users across the enterprise to access any study anytime without the need for prefetching or routing of images from a central archive. Images can be accessed between different workstations and local storage nodes. We implemented "Bonjour", a remote file access technology developed by Apple that allows applications to share data and files remotely with optimized data access and data transfer. Our open-source image display platform, OsiriX, was adapted to allow sharing of local DICOM images through direct access of each local SQL database, making them accessible from any other OsiriX workstation over the network. A server version of the OsiriX Core Data database also allows access to distributed archive servers in the same way. The infrastructure implemented allows fast and efficient access to any image, anywhere, anytime, independently of the actual physical location of the data. It also benefits from the performance of distributed low-cost, high-capacity storage servers that can provide efficient caching of PACS data, which was found to be 10 to 20 times faster than accessing the same data from the central PACS archive. It is particularly suitable for large hospitals and academic environments where clinical conferences, interdisciplinary discussions and successive sessions of image processing are often part of complex workflows for patient management and decision making.
NASA Astrophysics Data System (ADS)
McKee, Shawn; Kissel, Ezra; Meekhof, Benjeman; Swany, Martin; Miller, Charles; Gregorowicz, Michael
2017-10-01
We report on the first year of the OSiRIS project (NSF Award #1541335; UM, IU, MSU and WSU), which is targeting the creation of a distributed Ceph storage infrastructure coupled with software-defined networking to provide high-performance access for well-connected locations on any participating campus. The project's goal is to provide a single scalable, distributed storage infrastructure that allows researchers at each campus to read, write, manage and share data directly from their own computing locations. The NSF CC*DNI DIBBs program which funded OSiRIS is seeking solutions to the challenges of multi-institutional collaborations involving large amounts of data, and we are exploring the creative use of Ceph and networking to address those challenges. While OSiRIS will eventually be serving a broad range of science domains, its first adopter will be the LHC ATLAS detector project via the ATLAS Great Lakes Tier-2 (AGLT2) jointly located at the University of Michigan and Michigan State University. Part of our presentation will cover how ATLAS is using the OSiRIS infrastructure and our experiences integrating our first user community. The presentation will also review the motivations for and goals of the project, the technical details of the OSiRIS infrastructure, the challenges in providing such an infrastructure, and the technical choices made to address those challenges. We will conclude with our plans for the remaining 4 years of the project and our vision for what we hope to deliver by the project's end.
NASA Astrophysics Data System (ADS)
Fuller, T. K.; Venditti, J. G.; Nelson, P. A.; Popescu, V.; Palen, W.
2014-12-01
Run-of-river (RoR) hydropower has emerged as an important alternative to large reservoir-based dams in the renewable energy portfolios of China, India, Canada, and other areas around the globe. RoR projects generate electricity by diverting a portion of the channel discharge through a large pipe for several kilometers downhill, where it is used to drive turbines before being returned to the channel. Individual RoR projects are thought to be less disruptive to local ecosystems than large hydropower because they involve minimal water storage, more closely match the natural hydrograph downstream of the project, and are capable of bypassing trapped sediment. However, there is concern that temporary sediment supply disruption may degrade the productivity of salmon spawning habitat downstream of the dam by causing changes in the grain size distribution of bed surface sediment. We hypothesize that salmon populations will be most susceptible to disruptions in sediment supply in channels where: (1) sediment supply is high relative to transport capacity prior to RoR development, and (2) project design creates substantial sediment storage volume. Determining the geomorphic effect of RoR development on aquatic habitat requires many years of field data collection, and even then it can be difficult to link geomorphic change to RoR development alone. As an alternative, we used a one-dimensional morphodynamic model to test our hypothesis across a range of pre-development sediment supply conditions and sediment storage volumes. Our results confirm that coarsening of the median surface grain size is greatest in cases where pre-development sediment supply was highest and sediment storage volumes were large enough to disrupt supply over the course of the annual hydrograph or longer. In cases where the pre-development sediment supply is low, coarsening of the median surface grain size is less than 2 mm over a multiple-year disruption period. When sediment supply is restored, our results show that the time required for a channel to re-establish its pre-development median surface grain size is inversely correlated with the pre-development sediment supply conditions. These results demonstrate that morphodynamic models can be a valuable tool in assessing the risk to aquatic habitat from RoR development.
A distributed, dynamic, parallel computational model: the role of noise in velocity storage
Merfeld, Daniel M.
2012-01-01
Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic "real-time" calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, "particle filtering", that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal, by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
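A minimal sketch of the particle-filter mechanics described above, reduced to one-dimensional yaw-velocity estimation from a noisy "afferent" signal with random-walk particle dynamics; the noise levels and velocity profile are illustrative, and the published model additionally embeds the full observer dynamics.

```python
# Minimal sketch: particle filter averaging out sensor noise in parallel.
import numpy as np

rng = np.random.default_rng(0)
dt, T, n_particles = 0.01, 10.0, 2000
steps = int(T / dt)

true_omega = np.where(np.arange(steps) * dt < 5.0, 60.0, 0.0)  # deg/s step
afferent = true_omega + rng.normal(0.0, 10.0, steps)           # noisy sensor

particles = np.zeros(n_particles)
estimate = np.empty(steps)
for k in range(steps):
    particles += rng.normal(0.0, 1.0, n_particles)   # random-walk dynamics
    w = np.exp(-0.5 * ((afferent[k] - particles) / 10.0) ** 2)
    w /= w.sum()
    estimate[k] = particles @ w                      # posterior-mean estimate
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

print("estimate at t=4.9 s (deg/s):", round(estimate[490], 1))
print("estimate at t=9.9 s (deg/s):", round(estimate[-1], 1))
```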
NASA Astrophysics Data System (ADS)
Yang, Shaw-Yang; Yeh, Hund-Der; Li, Kuang-Yi
2010-10-01
Heat storage systems are usually used to store waste heat and solar energy. In this study, a mathematical model is developed to predict both the steady-state and transient temperature distributions of an aquifer thermal energy storage (ATES) system after hot water is injected through a well into a confined aquifer. The ATES has a confined aquifer bounded by aquicludes with different thermomechanical properties and geothermal gradients along the depth. Heat is transferred by conduction and forced convection within the aquifer and by conduction within the aquicludes. The dimensionless semi-analytical solutions for the temperature distributions of the ATES system are developed using Laplace and Fourier transforms, and the corresponding time-domain results are evaluated numerically by the modified Crump method. The steady-state solution is obtained from the transient solution through the final-value theorem. The effect of the heat transfer coefficient on the aquiclude temperature distribution is appreciable only near the outer boundaries of the aquicludes. The present solutions are useful for estimating the temperature distribution of heat injection and the aquifer thermal capacity of ATES systems.
NASA Astrophysics Data System (ADS)
Zhao, Chunyu; You, Shijun; Zhu, Chunying; Yu, Wei
2016-12-01
This paper presents an experimental investigation of the performance of a system combining a low-temperature water wall radiant heating system and phase change energy storage technology with an active solar system. This system uses a thermal storage wall that is designed with multilayer thermal storage plates. The heat storage material is expanded graphite that absorbs a mixture of capric acid and lauric acid. An experiment is performed to study the actual effect. The following are studied under winter conditions: (1) the temperature of the radiation wall surface, (2) the melting status of the thermal storage material in the internal plate, (3) the density of the heat flux, and (4) the temperature distribution of the indoor space. The results reveal that the room temperature is controlled between 16 and 20 °C, and the thermal storage wall meets the heating and temperature requirements. The following are also studied under summer conditions: (1) the internal relationship between the indoor temperature distribution and the heat transfer within the regenerative plates during the day and (2) the relationship between the outlet air temperature and inlet air temperature in the thermal storage wall in cooling mode at night. The results indicate that the indoor temperature is approximately 27 °C, which satisfies the summer air-conditioning requirements.
NASA Astrophysics Data System (ADS)
Tatchyn, Roman
1997-05-01
In recent years, studies have been initiated on a new class of multipole field generators consisting of cuboid planar permanent magnet (PM) pieces arranged in bi-planar arrays of 2-fold rotational symmetry (R. Tatchyn, "Planar Permanent Magnet Multipoles for Particle Accelerator and Storage Ring Applications," IEEE Trans. Mag. 30, 5050 (1994); T. Cremer, R. Tatchyn, "Planar Permanent Magnet Multipoles: Measurements and Configurations," in Proceedings of the 1995 Particle Accelerator Conference, IEEE Catalog No. 95CH35843, paper FAQ-20). These structures, first introduced for Free Electron Laser (FEL) applications (R. Tatchyn, "Selected applications of planar permanent magnet multipoles in FEL insertion device design," NIM A341, 449 (1994)), are based on reducing the rotational symmetry of conventional N-pole field generators from N-fold to 2-fold. One consequence of this reduction is a large higher-multipole content in a planar PM multipole's field at distances relatively close to the structure's axis, making it generally unsuitable for applications requiring a large high-quality field aperture. In this paper we outline an economical field-cancellation algorithm that can substantially decrease the harmonic content of a planar PM's field without breaking its biplanar geometry or 2-fold rotational symmetry. This will enable planar PM multipoles to be employed in a broader range of applications than heretofore possible, in particular as distributed focusing elements installed in insertion device gaps on synchrotron storage rings. This accomplishment is expected to remove the conventional restriction of an insertion device's length to the scale of the local focusing beta, enabling short-period, small-gap undulators to be installed and operated as high-brightness sources on lower-energy storage rings (R. Tatchyn, P. Csonka, A. Toor, "Perspectives on micropole undulators in synchrotron radiation technology," Rev. Sci. Instrum. 60(7), 1796 (1989)). Operation as ordinary focusing elements in storage ring magnetic lattices, as well as the performance of other high-quality multipole applications, should also become possible with the realization of the proposed structures.
BAMSI: a multi-cloud service for scalable distributed filtering of massive genome data.
Ausmees, Kristiina; John, Aji; Toor, Salman Z; Hellander, Andreas; Nettelblad, Carl
2018-06-26
The advent of next-generation sequencing (NGS) has made whole-genome sequencing of cohorts of individuals a reality. Primary datasets of raw or aligned reads of this sort can get very large. For scientific questions where curated called variants are not sufficient, the sheer size of the datasets makes analysis prohibitively expensive. In order to make re-analysis of such data feasible without the need for access to a large-scale computing facility, we have developed a highly scalable, storage-agnostic framework, an associated API, and an easy-to-use web user interface to execute custom filters on large genomic datasets. We present BAMSI, a Software-as-a-Service (SaaS) solution for filtering the 1000 Genomes phase 3 set of aligned reads, with the possibility of extension and customization to other sets of files. Unique to our solution is the capability of simultaneously utilizing many different mirrors of the data to increase the speed of the analysis. In particular, if the data is available in private or public clouds - an increasingly common scenario for both academic and commercial cloud providers - our framework allows for seamless deployment of filtering workers close to the data. We show results indicating that such a setup improves the horizontal scalability of the system, and present a possible use case of the framework by performing an analysis of structural variation in the 1000 Genomes data set. BAMSI constitutes a framework for efficient filtering of large genomic data sets that is flexible in the use of compute as well as storage resources. The data resulting from the filter is assumed to be greatly reduced in size, and can easily be downloaded or routed into e.g. a Hadoop cluster for subsequent interactive analysis using Hive, Spark, or similar tools. In this respect, our framework also suggests a general model for making very large datasets of high scientific value more accessible, by offering the possibility for organizations to share the cost of hosting data on hot storage without compromising the scalability of downstream analysis.
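As a concrete illustration of the kind of custom filter such a service executes, here is a minimal sketch using the pysam library; the file path, region, and insert-size threshold are illustrative assumptions, and BAMSI's actual worker code is not shown in the abstract:

```python
# Minimal sketch of a custom BAM filter of the style BAMSI might run in
# parallel across mirrors. Assumes pysam is installed and an indexed BAM
# slice exists at the hypothetical path below.
import pysam

def discordant_reads(bam_path, chrom, start, end, min_insert=1000):
    """Yield reads whose template length suggests a structural variant."""
    with pysam.AlignmentFile(bam_path, "rb") as bam:  # requires a .bai index
        for read in bam.fetch(chrom, start, end):
            if read.is_unmapped or read.mate_is_unmapped:
                continue
            # Abnormally large template length hints at a deletion.
            if abs(read.template_length) >= min_insert:
                yield read.query_name, read.reference_start, read.template_length

for name, pos, tlen in discordant_reads("HG00096.mapped.bam", "20",
                                        1_000_000, 2_000_000):
    print(name, pos, tlen)
```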
Black start research of the wind and storage system based on the dual master-slave control
NASA Astrophysics Data System (ADS)
Leng, Xue; Shen, Li; Hu, Tian; Liu, Li
2018-02-01
Black start is key to recovering from large-scale power failures, and the introduction of new renewable clean energy as a black-start power supply is an emerging research focus. Based on a dual master-slave control strategy, the wind and storage system is taken as a reliable black-start power source, with energy storage and wind generation combined to ensure the stability of the microgrid system and realize the black start. The study derives the storage capacity ratio required in a small system under the dual master-slave control strategy and the black-start constraint conditions of the combined wind and storage system, identifies the key points of black start for such systems, and provides reference and guidance for subsequent large-scale wind and storage black-start projects.
Vehicle electrical system state controller
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bissontz, Jay E.
A motor vehicle electrical power distribution system includes a plurality of distribution sub-systems, an electrical power storage sub-system and a plurality of switching devices for selective connection of elements of and loads on the power distribution system to the electrical power storage sub-system. A state transition initiator provides inputs to control system operation of switching devices to change the states of the power distribution system. The state transition initiator has a plurality of positions, selection of which can initiate a state transition. The state transition initiator can emulate a four position rotary ignition switch. Fail safe power cutoff switches provide high voltage switching device protection.
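The four-position rotary ignition emulation suggests a simple state machine; the following sketch is only an illustration of that idea — the position names, bus names, and transition table are assumptions, not the patent's design:

```python
# Toy state machine for a four-position ignition emulation. Positions and
# the connection table are illustrative assumptions; the actual controller
# logic is not specified in the abstract.
from enum import Enum

class Position(Enum):
    OFF = 0
    ACCESSORY = 1
    RUN = 2
    START = 3

# Which distribution sub-systems are connected to storage in each state.
CONNECTIONS = {
    Position.OFF:       set(),
    Position.ACCESSORY: {"accessory_bus"},
    Position.RUN:       {"accessory_bus", "ignition_bus"},
    Position.START:     {"ignition_bus", "starter_bus"},
}

def transition(current, requested):
    """Return the switch commands (sets to close/open) for a transition."""
    now, nxt = CONNECTIONS[current], CONNECTIONS[requested]
    return {"close": nxt - now, "open": now - nxt}

print(transition(Position.OFF, Position.RUN))
# e.g. {'close': {'accessory_bus', 'ignition_bus'}, 'open': set()}
```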
Ensuring Data Storage Security in Tree cast Routing Architecture for Sensor Networks
NASA Astrophysics Data System (ADS)
Kumar, K. E. Naresh; Sagar, U. Vidya; Waheed, Mohd. Abdul
2010-10-01
Recent advances in technology have made possible low-cost, low-power wireless sensors with efficient energy consumption. A network of such nodes can coordinate among themselves for distributed sensing and processing of certain data. We propose an architecture, known as Tree Cast, that provides a stateless solution for efficient routing in wireless sensor networks. We propose a unique method of address allocation, building up multiple disjoint trees which are geographically intertwined and rooted at the data sink. Using these trees, routing messages to and from the sink node without maintaining any routing state in the sensor nodes is possible. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, this routing architecture moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this paper, we focus on data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in this architecture, we propose an effective and flexible distributed scheme with two salient features not found in its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
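The general idea of precomputing verification tokens and later challenging a storage server can be sketched with a simplified, non-homomorphic stand-in; this is plain HMAC-based spot checking, not the paper's homomorphic-token construction, and all names are illustrative:

```python
# Simplified challenge-response spot check of remotely stored blocks.
# This is NOT the homomorphic-token scheme of the paper; it only shows
# the pattern: precompute tokens, later challenge the server, compare.
import hashlib, hmac, os

KEY = os.urandom(32)  # verifier's secret key

def make_token(key, block_id, block_bytes):
    return hmac.new(key, block_id.to_bytes(8, "big") + block_bytes,
                    hashlib.sha256).digest()

# Owner precomputes tokens for sampled blocks before handing data over.
blocks = {i: os.urandom(4096) for i in range(16)}
tokens = {i: make_token(KEY, i, b) for i, b in blocks.items()}

# Later: challenge the server for block 7; the server returns its copy.
served = blocks[7]  # an honest server; a corrupt one returns altered bytes
ok = hmac.compare_digest(tokens[7], make_token(KEY, 7, served))
print("block 7 intact:", ok)
```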
Assessment of CO2 Mineralization and Dynamic Rock Properties at the Kemper Pilot CO2 Injection Site
NASA Astrophysics Data System (ADS)
Qin, F.; Kirkland, B. L.; Beckingham, L. E.
2017-12-01
CO2-brine-mineral reactions following CO2 injection may impact rock properties including porosity, permeability, and pore connectivity. The rate and extent of alteration largely depends on the nature and evolution of reactive mineral interfaces. In this work, the potential for geochemical reactions and the nature of the reactive mineral interface and corresponding hydrologic properties are evaluated for samples from the Lower Tuscaloosa, Washita-Fredericksburg, and Paluxy formations. These formations have been identified as future regionally extensive and attractive CO2 storage reservoirs at the CO2 Storage Complex in Kemper County, Mississippi, USA (Project ECO2S). Samples from these formations were obtained from the Geological Survey of Alabama and evaluated using a suite of complementary analyses. The mineral composition of these samples will be determined using petrography and powder X-ray Diffraction (XRD). Using these compositions, continuum-scale reactive transport simulations will be developed and the potential CO2-brine-mineral interactions will be examined. Simulations will focus on identifying potential reactive minerals as well as the corresponding rate and extent of reactions. The spatial distribution and accessibility of minerals to reactive fluids is critical to understanding mineral reaction rates and corresponding changes in the pore structure, including pore connectivity, porosity and permeability. The nature of the pore-mineral interface, and distribution of reactive minerals, will be determined through imaging analysis. Multiple 2D scanning electron microscopy (SEM) backscattered electron (BSE) images and energy dispersive x-ray spectroscopy (EDS) images will be used to create spatial maps of mineral distributions. These maps will be processed to evaluate the accessibility of reactive minerals and the potential for flow-path modifications following CO2 injection. The "Establishing an Early CO2 Storage Complex in Kemper, MS" project is funded by the U.S. Department of Energy's National Energy Technology Laboratory and cost-sharing partners.
An analytic solution of the stochastic storage problem applicable to soil water
Milly, P.C.D.
1993-01-01
The accumulation of soil water during rainfall events and the subsequent depletion of soil water by evaporation between storms can be described, to first order, by simple accounting models. When the alternating supplies (precipitation) and demands (potential evaporation) are viewed as random variables, it follows that soil-water storage, evaporation, and runoff are also random variables. If the forcing (supply and demand) processes are stationary for a sufficiently long period of time, an asymptotic regime should eventually be reached where the probability distribution functions of storage, evaporation, and runoff are stationary and uniquely determined by the distribution functions of the forcing. Under the assumptions that the potential evaporation rate is constant, storm arrivals are Poisson-distributed, rainfall is instantaneous, and storm depth follows an exponential distribution, it is possible to derive the asymptotic distributions of storage, evaporation, and runoff analytically for a simple balance model. A particular result is that the fraction of rainfall converted to runoff is given by $(1 - R^{-1})/(e^{\alpha(1 - R^{-1})} - R^{-1})$, in which $R$ is the ratio of mean potential evaporation to mean rainfall and $\alpha$ is the ratio of soil water-holding capacity to mean storm depth. The problem considered here is analogous to the well-known problem of storage in a reservoir behind a dam, for which the present work offers a new solution for reservoirs of finite capacity. A simple application of the results of this analysis suggests that random, intraseasonal fluctuations of precipitation cannot by themselves explain the observed dependence of the annual water balance on annual totals of precipitation and potential evaporation.
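The closed-form runoff fraction is easy to evaluate directly; a minimal sketch (variable names and the example values are ours, not from the paper):

```python
# Evaluate the asymptotic runoff fraction from the analytic result above.
import math

def runoff_fraction(R, alpha):
    """Fraction of rainfall converted to runoff (after Milly, 1993).

    R     -- mean potential evaporation / mean rainfall
    alpha -- soil water-holding capacity / mean storm depth
    """
    r_inv = 1.0 / R
    return (1.0 - r_inv) / (math.exp(alpha * (1.0 - r_inv)) - r_inv)

# Example: dry climate (R = 2) with moderate storage (alpha = 3)
print(runoff_fraction(2.0, 3.0))  # ~0.13 of rainfall becomes runoff
```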
Sparse distributed memory overview
NASA Technical Reports Server (NTRS)
Raugh, Mike
1990-01-01
The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
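The core associative write/read cycle can be made concrete with a toy numpy sketch; the dimensions, number of hard locations, and activation radius below are illustrative choices, not the project's values:

```python
# Toy sparse distributed memory: locations are activated by Hamming
# distance to the cue; writes increment counters, reads take a majority.
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 2000                          # word length, hard locations
addresses = rng.integers(0, 2, (M, N))    # fixed random location addresses
counters = np.zeros((M, N), dtype=int)    # location contents
RADIUS = 112                              # activation threshold (illustrative)

def active(addr):
    return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

def write(addr, data):
    counters[active(addr)] += 2 * data - 1   # +1 for 1-bits, -1 for 0-bits

def read(addr):
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, N)
write(pattern, pattern)                   # autoassociative storage
noisy = pattern.copy()
flip = rng.choice(N, 20, replace=False)   # corrupt 20 of 256 cue bits
noisy[flip] ^= 1
print("bits recovered:", np.count_nonzero(read(noisy) == pattern), "/", N)
```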
Generation system impacts of storage heating and storage water heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gellings, C.W.; Quade, A.W.; Stovall, J.P.
Thermal energy storage systems offer the electric utility a means to change customer energy use patterns. At present, however, the costs and benefits to both the customers and the utility are uncertain. As part of a nationwide demonstration program, Public Service Electric and Gas Company installed storage space heating and water heating appliances in residential homes. Both the test homes and similar homes using conventional space and water heating appliances were monitored, allowing for detailed comparisons between the two systems. The purpose of this paper is to detail the methodology used and the results of studies completed on the generation system impacts of storage space and water heating systems. Other electric system impacts involving service entrance size, metering, secondary distribution and primary distribution were detailed in two previous IEEE papers. This paper is organized into three main sections. The first gives background data on PSE&G and its experience in a nationwide thermal storage demonstration project. The second section details results of the demonstration project and studies that have been performed on the impacts of thermal storage equipment. The last section reports the conclusions arrived at concerning the impacts of thermal storage on generation. The study was conducted in early 1982 using data available at that time; while PSE&G system plans have changed since then, the conclusions remain pertinent and valuable to those contemplating the impacts of thermal energy storage.
NASA Astrophysics Data System (ADS)
Chen, Xiaotao; Song, Jie; Liang, Lixiao; Si, Yang; Wang, Le; Xue, Xiaodai
2017-10-01
Large-scale energy storage systems (ESS) play an important role in the planning and operation of the smart grid and the energy internet. Compressed air energy storage (CAES) is one of the most promising large-scale energy storage techniques. However, the high cost of compressed air storage and the low capacity remain to be solved. This paper proposes a novel non-supplementary fired compressed air energy storage system (NSF-CAES) based on salt cavern air storage to address the issues of air storage and the efficiency of CAES. Operating mechanisms of the proposed NSF-CAES are analysed based on thermodynamic principles. Key factors which have an impact on the system storage efficiency are thoroughly explored. The energy storage efficiency of the proposed NSF-CAES system can be improved by reducing the maximum working pressure of the salt cavern and improving the inlet air pressure of the turbine. Simulation results show that the electric-to-electric conversion efficiency of the proposed NSF-CAES can reach 63.29% with a maximum salt cavern working pressure of 9.5 MPa and a 9 MPa turbine inlet air pressure, which is higher than that of current commercial CAES plants.
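For reference, the electric-to-electric (round-trip) efficiency quoted above is conventionally defined as the ratio of electricity recovered during discharge to electricity consumed during charging; the following is the standard textbook definition, not a formula taken from the paper:

```latex
% Round-trip (electric-to-electric) efficiency of a storage plant
\eta_{\mathrm{e2e}} = \frac{E_{\mathrm{out,\,turbine}}}{E_{\mathrm{in,\,compressor}}}
\approx 63.29\% \quad \text{(9.5 MPa cavern, 9 MPa turbine inlet)}
```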
Design and Implement of Astronomical Cloud Computing Environment In China-VO
NASA Astrophysics Data System (ADS)
Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu
2017-06-01
The astronomy cloud computing environment is a cyberinfrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can get computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, and avoid large-scale dataset transportation.
Application of superconducting technology to earth-to-orbit electromagnetic launch systems
NASA Technical Reports Server (NTRS)
Hull, J. R.; Carney, L. M.
1988-01-01
Benefits may be obtained by incorporating superconductors, both existing and those currently under development, into one or more parts of a large-scale electromagnetic launch (EML) system capable of delivering payloads from the surface of the Earth to space. The use of superconductors for many of the EML components results in lower system losses; consequently, reductions in the size and number of energy storage devices are possible. Applied high-temperature superconductivity may eventually enable novel design concepts for energy distribution and switching. All of these technical improvements have the potential to reduce system complexity and lower payload launch costs.
Clinical experience with a high-performance ATM-connected DICOM archive for cardiology
NASA Astrophysics Data System (ADS)
Solomon, Harry P.
1997-05-01
A system to archive large image sets, such as cardiac cine runs, with near-realtime response must address several functional and performance issues, including efficient use of a high-performance network connection with standard protocols, an architecture which effectively integrates both short- and long-term mass storage devices, and a flexible data management policy which allows optimization of image distribution and retrieval strategies based on modality and site-specific operational use. Clinical experience with such an archive has allowed evaluation of these systems issues and refinement of a traffic model for cardiac angiography.
RMP Guidance for Propane Storage Facilities - Main Text
This document is intended as comprehensive Risk Management Program guidance for larger propane storage or distribution facilities that already comply with propane industry standards. It includes a sample RMP and release calculations.
Removal of mouse ovary fat pad affects sex hormones, folliculogenesis and fertility.
Wang, Hong-Hui; Cui, Qian; Zhang, Teng; Guo, Lei; Dong, Ming-Zhe; Hou, Yi; Wang, Zhen-Bo; Shen, Wei; Ma, Jun-Yu; Sun, Qing-Yuan
2017-02-01
As a fat storage organ, adipose tissue is distributed widely all over the body and is important for energy supply, body temperature maintenance, organ protection, immune regulation and so on. In humans, both underweight and overweight women find it hard to become pregnant, which suggests that appropriate fat storage is necessary for female reproductive capacity. In fact, a large mass of adipose tissue is distributed around the reproductive system in both males and females. However, the functions of the ovary fat pad (the adipose tissue nearest the ovary) are not known. In our study, we found that ovary fat pad-removed female mice showed decreased fertility and fewer ovulated mature eggs. We further identified that only a small proportion of follicles developed to the antral follicle stage, and many follicles were blocked at the secondary follicle stage. The overall secretion levels of estrogen and FSH were lower across the whole estrus cycle (especially at proestrus); however, the LH level was higher in ovary fat pad-removed mice than in control groups. Moreover, the estrus cycle of ovary fat pad-removed mice showed significant disorder. In addition, the expression of the FSH receptor decreased, but that of the LH receptor increased, in ovary fat pad-removed mice. These results suggest that the ovary fat pad is important for mouse reproduction. © 2017 Society for Endocrinology.
NASA Astrophysics Data System (ADS)
Burchfield, E. K.
2014-12-01
The island nation of Sri Lanka is divided into two agro-climatic zones: the southwestern wet zone and the northeastern dry zone. The dry zone is exposed to drought-like conditions for several months each year. Due to the sporadic nature of rainfall, dry zone livelihoods depend on the successful storage, capture, and distribution of water. Traditionally, water has been captured in rain-fed tanks and distributed through a system of dug canals. Recently, the Sri Lankan government has diverted the waters of the nation's largest river through a system of centrally managed reservoirs and canals and resettled farmers to cultivate this newly irrigated land. This study uses remotely sensed MODIS and LANDSAT imagery to compare vegetation health and cropping patterns in these distinct water management regimes under different conditions of water scarcity. Of particular interest are the socioeconomic, infrastructural, and institutional factors that affect cropping patterns, including field position, water storage capacity, and control of water resources. Results suggest that under known conditions of water scarcity, farmers cultivate other field crops in lieu of paddy. Cultivation changes depend to a large extent on the institutional distance between water users and water managers as well as the fragmentation of water resources within the system.
NASA Astrophysics Data System (ADS)
Chubar, O.
2006-09-01
The paper describes methods for efficient calculation of spontaneous synchrotron radiation (SR) emitted by relativistic electrons in storage rings, and of the propagation of this radiation through optical elements and drift spaces of beamlines, using the principles of wave optics. In addition to the SR from one electron, incoherent and coherent synchrotron radiation (CSR) emitted by electron bunches is treated. A CPU-efficient CSR calculation method taking into account the 6D phase space distribution of electrons in a bunch is proposed. The properties of CSR emitted by electron bunches with small longitudinal and large transverse size are studied numerically (such a situation can be realized in storage rings, e.g., by transverse deflection of the electron bunches in special RF cavities). It is shown that if the transverse size of a bunch is much larger than the diffraction limit for single-electron SR at a given wavelength, it affects the angular distribution of the CSR at this wavelength and reduces the coherent flux. Nevertheless, for transverse bunch dimensions up to several millimeters and a longitudinal bunch size smaller than a hundred micrometers, the resulting CSR flux in the far infrared spectral range is still many orders of magnitude higher than the flux of incoherent SR.
DIRAC File Replica and Metadata Catalog
NASA Astrophysics Data System (ADS)
Tsaregorodtsev, A.; Poss, S.
2012-12-01
File replica and metadata catalogs are essential parts of any distributed data management system, and largely determine its functionality and performance. A new File Catalog (DFC) was developed in the framework of the DIRAC Project that combines both replica and metadata catalog functionality. The DFC design is based on practical experience with the data management system of the LHCb Collaboration. It is optimized for the most common patterns of catalog usage in order to achieve maximum performance from the user perspective. The DFC supports bulk operations for replica queries and allows quick analysis of the storage usage globally and for each Storage Element separately. It supports flexible ACL rules with plug-ins for various policies that can be adopted by a particular community. The DFC allows storing various types of metadata associated with files and directories and performing efficient queries for the data based on complex metadata combinations. Definition of file ancestor-descendant relation chains is also possible. The DFC is implemented in the general DIRAC distributed computing framework following the standard grid security architecture. In this paper we describe the design of the DFC and its implementation details. The performance measurements are compared with other grid file catalog implementations. The experience of DFC usage in the CLIC detector project is discussed.
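To illustrate the style of metadata-driven query such a catalog supports, here is a hypothetical sketch; the class, method, endpoint, and metadata fields below are stand-ins invented for illustration, not the actual DIRAC client API:

```python
# Hypothetical file-catalog client illustrating a complex metadata query
# of the kind the DFC supports. Every name here is an illustrative
# placeholder, not the real DIRAC interface.
class FileCatalogClient:
    def __init__(self, catalog_url):
        self.url = catalog_url

    def find_files_by_metadata(self, query, path="/"):
        """Return logical file names (LFNs) matching a metadata predicate."""
        # Placeholder: a real client would send `query` to the catalog
        # service; here we just echo the request and return a canned LFN.
        print(f"querying {self.url} under {path} with {query}")
        return ["/ilc/prod/clic/350gev/sample_001.slcio"]

fc = FileCatalogClient("dips://catalog.example.org:9197/DataManagement/FileCatalog")
lfns = fc.find_files_by_metadata(
    {"DetectorType": "CLIC_SiD", "Energy": {">=": 350}},  # complex predicate
    path="/ilc/prod",
)
print(lfns)
```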
Developing a Hadoop-based Middleware for Handling Multi-dimensional NetCDF
NASA Astrophysics Data System (ADS)
Li, Z.; Yang, C. P.; Schnase, J. L.; Duffy, D.; Lee, T. J.
2014-12-01
Climate observations and model simulations are collecting and generating vast amounts of climate data, and these data are ever-increasing and being accumulated at a rapid pace. Effectively managing and analyzing these data are essential for climate change studies. Hadoop, a distributed storage and processing framework for large data sets, has attracted increasing attention for dealing with the Big Data challenge. The maturity of Infrastructure as a Service (IaaS) in cloud computing further accelerates the adoption of Hadoop in solving Big Data problems. However, Hadoop is designed to process unstructured data such as texts, documents and web pages, and cannot effectively handle scientific data formats such as array-based NetCDF files and other binary formats. In this paper, we propose to build a Hadoop-based middleware for transparently handling big NetCDF data by 1) designing a distributed climate data storage mechanism based on a POSIX-enabled parallel file system to enable parallel big data processing with MapReduce, as well as to support data access by other systems; 2) modifying the Hadoop framework to transparently process NetCDF data in parallel without sequencing or converting the data into other file formats, or loading them into HDFS; and 3) seamlessly integrating Hadoop, cloud computing and climate data in a highly scalable and fault-tolerant framework.
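The underlying difficulty — array-based NetCDF does not split like text — can be illustrated with a small stand-in that partitions a variable along its time dimension and reduces each chunk in a worker process. This uses the netCDF4 and multiprocessing libraries as a simplified analogue of the map/reduce pattern; it is not the paper's Hadoop modification, and the file and variable names are assumptions:

```python
# Simplified analogue of map-style parallel processing over a NetCDF
# array: partition along time, reduce each chunk in a worker.
from multiprocessing import Pool
import numpy as np
from netCDF4 import Dataset

PATH, VAR = "tas_monthly.nc", "tas"   # hypothetical file and variable

def chunk_mean(bounds):
    t0, t1 = bounds
    with Dataset(PATH) as nc:             # each worker opens its own handle
        block = nc.variables[VAR][t0:t1]  # reads only its time slice
    return float(np.mean(block))

if __name__ == "__main__":
    with Dataset(PATH) as nc:
        nt = nc.variables[VAR].shape[0]
    step = max(1, nt // 4)
    bounds = [(i, min(i + step, nt)) for i in range(0, nt, step)]
    with Pool(4) as pool:
        partials = pool.map(chunk_mean, bounds)        # "map" phase
    # Naive "reduce" (assumes equal-sized chunks for simplicity).
    print("global mean:", sum(partials) / len(partials))
```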
RAID Unbound: Storage Fault Tolerance in a Distributed Environment
NASA Technical Reports Server (NTRS)
Ritchie, Brian
1996-01-01
Mirroring, data replication, backup, and more recently, redundant arrays of independent disks (RAID) are all technologies used to protect and ensure access to critical company data. A new set of problems has arisen as data becomes more and more geographically distributed. Each of the technologies listed above provides important benefits, but each has failed to adapt fully to the realities of distributed computing. The key to high data availability and protection is to take the technologies' strengths and 'virtualize' them across a distributed network. RAID and mirroring offer high data availability, while data replication and backup provide strong data protection. If we take these concepts at a very granular level (defining user, record, block, file, or directory types) and liberate them from the physical subsystems with which they have traditionally been associated, we have the opportunity to create highly scalable, network-wide storage fault tolerance. The network becomes the virtual storage space in which the traditional concepts of data high availability and protection are implemented without their corresponding physical constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. In particular, we consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, cataloging, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, on the grid network, and on the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
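The dispatch core of any such DAG workflow engine can be sketched in a few lines; the task names and toy dependency graph here are illustrative, and SWAMP's own engine is of course far richer:

```python
# Minimal sketch of dispatching a workflow expressed as a DAG: a task
# runs as soon as all of its prerequisites have finished.
from graphlib import TopologicalSorter

workflow = {                      # task -> set of prerequisite tasks
    "acquire":   set(),
    "calibrate": {"acquire"},
    "simulate":  {"calibrate"},
    "visualize": {"simulate"},
    "archive":   {"calibrate", "simulate"},
}

ts = TopologicalSorter(workflow)
ts.prepare()
while ts.is_active():
    for task in ts.get_ready():   # tasks whose inputs are all satisfied
        print("running", task)    # a real engine would dispatch remotely
        ts.done(task)             # and mark done on actual completion
```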
A global distributed storage architecture
NASA Technical Reports Server (NTRS)
Lionikis, Nemo M.; Shields, Michael F.
1996-01-01
NSA architects and planners have come to realize that to gain the maximum benefit from, and keep pace with, emerging technologies, we must move to a radically different computing architecture. The compute complex of the future will be a distributed heterogeneous environment where, to a much greater extent than today, network-based services are invoked to obtain resources. Among the rewards of implementing the services-based view is that it insulates the user from much of the complexity of our multi-platform, networked, computer and storage environment and hides its diverse underlying implementation details. In this paper, we will describe one of the fundamental services being built in our envisioned infrastructure: a global, distributed archive with near-real-time access characteristics. Our approach for adapting mass storage services to this infrastructure will become clear as the service is discussed.
Scaling to diversity: The DERECHOS distributed infrastructure for analyzing and sharing data
NASA Astrophysics Data System (ADS)
Rilee, M. L.; Kuo, K. S.; Clune, T.; Oloso, A.; Brown, P. G.
2016-12-01
Integrating Earth Science data from diverse sources such as satellite imagery and simulation output can be expensive and time-consuming, limiting scientific inquiry and the quality of our analyses. Reducing these costs will improve innovation and quality in science. The current Earth Science data infrastructure focuses on downloading data based on requests formed from the search and analysis of associated metadata. And while the data products provided by archives may use the best available data sharing technologies, scientist end-users generally do not have such resources (including staff) available to them. Furthermore, only once an end-user has received the data from multiple diverse sources and has integrated them can the actual analysis and synthesis begin. The cost of getting from idea to where synthesis can start dramatically slows progress. In this presentation we discuss a distributed computational and data storage framework that eliminates much of the aforementioned cost. The SciDB distributed array database is central as it is optimized for scientific computing involving very large arrays, performing better than less specialized frameworks like Spark. Adding spatiotemporal functions to the SciDB creates a powerful platform for analyzing and integrating massive, distributed datasets. SciDB allows Big Earth Data analysis to be performed "in place" without the need for expensive downloads and end-user resources. Spatiotemporal indexing technologies such as the hierarchical triangular mesh enable the compute and storage affinity needed to efficiently perform co-located and conditional analyses minimizing data transfers. These technologies automate the integration of diverse data sources using the framework, a critical step beyond current metadata search and analysis. Instead of downloading data into their idiosyncratic local environments, end-users can generate and share data products integrated from diverse multiple sources using a common shared environment, turning distributed active archive centers (DAACs) from warehouses into distributed active analysis centers.
Efficient packing of patterns in sparse distributed memory by selective weighting of input bits
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1991-01-01
When a set of patterns is stored in a distributed memory, any given storage location participates in the storage of many patterns. From the perspective of any one stored pattern, the other patterns act as noise, and such noise limits the memory's storage capacity. The more similar the retrieval cues for two patterns are, the more the patterns interfere with each other in memory, and the harder it is to separate them on retrieval. A method is described of weighting the retrieval cues to reduce such interference and thus to improve the separability of patterns that have similar cues.
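The weighting idea amounts to replacing plain Hamming distance with a weighted one when matching cues; a minimal sketch (the weights and pattern sizes are illustrative choices of ours):

```python
# Weighted cue matching: bits known to distinguish a pattern get more
# weight, so patterns with similar cues separate better on retrieval.
import numpy as np

def weighted_hamming(cue, address, weights):
    """Sum of weights over disagreeing bits (smaller = better match)."""
    return float(np.sum(weights * (cue != address)))

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 32)
b = a.copy()
b[:4] ^= 1                        # b differs from a only in the first 4 bits
cue = a.copy()

uniform = np.ones(32)
emphasize = np.ones(32)
emphasize[:4] = 5.0               # up-weight the distinguishing bits

print(weighted_hamming(cue, b, uniform))    # 4.0  -- weak separation
print(weighted_hamming(cue, b, emphasize))  # 20.0 -- interference reduced
```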
NASA Astrophysics Data System (ADS)
Börries, S.; Metz, O.; Pranzas, P. K.; Bellosta von Colbe, J. M.; Bücherl, T.; Dornheim, M.; Klassen, T.; Schreyer, A.
2016-10-01
For the storage of hydrogen, complex metal hydrides are considered highly promising with respect to capacity, reversibility and safety. The optimization of corresponding storage tanks demands a precise and time-resolved investigation of the hydrogen distribution in scaled-up metal hydride beds. In this study it is shown that in situ fission neutron radiography provides unique insights into the spatial distribution of hydrogen even for scaled-up compacts and therewith enables a direct study of hydrogen storage tanks. A technique is introduced for the precise quantification of both time-resolved data and the a priori material distribution, allowing, among other things, for an optimization of the compact manufacturing process. For the first time, several macroscopic fields are combined, which elucidates the great potential of neutron imaging for investigations of metal hydrides by going further than solely 'imaging' the system: a combination of in situ neutron radiography, IR thermography and thermodynamic quantities can reveal the interdependency of different driving forces for a scaled-up sodium alanate pellet by means of a multi-correlation analysis. A decisive, time-resolved and complex influence of material packing density is derived. The results of this study enable a variety of new investigation possibilities that provide essential information for the optimization of future hydrogen storage tanks.
Global Software Development with Cloud Platforms
NASA Astrophysics Data System (ADS)
Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya
Offshore and outsourced distributed software development models and processes are facing challenges, previously unknown, with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design, and our first implementation results for three cloud forms - a compute cloud, a storage cloud, and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, our "storage cloud" offers storage (block or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD, the lessons learned with our prototypes, and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means to supporting an ecosystem of clients, developers and other key stakeholders.
Interoperating Cloud-based Virtual Farms
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.
2015-12-01
The present work aims at optimizing the use of computing resources available at the grid Italian Tier-2 sites of the ALICE experiment at CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic ("on-demand") provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, avoiding the need to mirror data across them: high data access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a way transparent to the end user. Moreover, interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. Tests of the investigated solutions for both cloud computing and distributed storage on wide area networks will be presented.
Zeng, Teng; Mitch, William A
2016-03-15
Distribution system storage facilities are a critical, yet often overlooked, component of the urban water infrastructure. This study showed elevated concentrations of N-nitrosodimethylamine (NDMA), total N-nitrosamines (TONO), regulated trihalomethanes (THMs) and haloacetic acids (HAAs), 1,1-dichloropropanone (1,1-DCP), trichloroacetaldehyde (TCAL), haloacetonitriles (HANs), and haloacetamides (HAMs) in waters with ongoing nitrification as compared to non-nitrifying waters in storage facilities within five different chloraminated drinking water distribution systems. The concentrations of NDMA, TONO, HANs, and HAMs in the nitrifying waters further increased upon application of simulated distribution system chloramination. The addition of a nitrifying biofilm sample collected from a nitrifying facility to its non-nitrifying influent water led to increases in N-nitrosamine and halogenated DBP formation, suggesting the release of precursors from nitrifying biofilms. Periodic treatment of two nitrifying facilities with breakpoint chlorination (BPC) temporarily suppressed nitrification and reduced precursor levels for N-nitrosamines, HANs, and HAMs, as reflected by lower concentrations of these DBPs measured after re-establishment of a chloramine residual within the facilities than prior to the BPC treatment. However, BPC promoted the formation of halogenated DBPs while a free chlorine residual was maintained. Strategies that minimize application of free chlorine while preventing nitrification are needed to control DBP precursor release in storage facilities.
Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun
2016-01-01
In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and thus are suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show the application of our proposed scheme. PMID:27136550
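Backward secrecy itself can be illustrated with a textbook one-way key evolution: a member admitted at session j can derive all later keys but none of the earlier ones. The sketch below is this simplified mechanism only, not the paper's E-SGKD construction:

```python
# One-way key evolution as an illustration of backward secrecy. A member
# who joins at session j receives K_j and can derive K_{j+1}, K_{j+2}, ...
# but cannot invert the hash to recover earlier keys. Simplified textbook
# mechanism, not the E-SGKD scheme of the paper.
import hashlib, os

def evolve(key):
    return hashlib.sha256(key).digest()

k = os.urandom(32)                 # session-1 group key
keys = [k]
for _ in range(4):                 # keys for sessions 2..5
    k = evolve(k)
    keys.append(k)

# A member admitted at session 3 learns keys[2] and can compute 4, 5, ...
late_joiner = keys[2]
assert evolve(evolve(late_joiner)) == keys[4]
# ...but recovering keys[1] from keys[2] would require inverting SHA-256.
print("backward secrecy illustrated")
```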
NASA Astrophysics Data System (ADS)
Ceballos-Núñez, Verónika; Richardson, Andrew D.; Sierra, Carlos A.
2018-03-01
The global carbon cycle is strongly controlled by the source/sink strength of vegetation as well as the capacity of terrestrial ecosystems to retain this carbon. These dynamics, as well as processes such as the mixing of old and newly fixed carbon, have been studied using ecosystem models, but different assumptions regarding the carbon allocation strategies and other model structures may result in highly divergent model predictions. We assessed the influence of three different carbon allocation schemes on the C cycling in vegetation. First, we described each model with a set of ordinary differential equations. Second, we used published measurements of ecosystem C compartments from the Harvard Forest Environmental Measurement Site to find suitable parameters for the different model structures. And third, we calculated C stocks, release fluxes, radiocarbon values (based on the bomb spike), ages, and transit times. We obtained model simulations in accordance with the available data, but the time series of C in foliage and wood need to be complemented with other ecosystem compartments in order to reduce the high parameter collinearity that we observed, and reduce model equifinality. Although the simulated C stocks in ecosystem compartments were similar, the different model structures resulted in very different predictions of age and transit time distributions. In particular, the inclusion of two storage compartments resulted in the prediction of a system mean age that was 12-20 years older than in the models with one or no storage compartments. The age of carbon in the wood compartment of this model was also distributed towards older ages, whereas fast cycling compartments had an age distribution that did not exceed 5 years. As expected, models with C distributed towards older ages also had longer transit times. These results suggest that ages and transit times, which can be indirectly measured using isotope tracers, serve as important diagnostics of model structure and could largely help to reduce uncertainties in model predictions. Furthermore, by considering age and transit times of C in vegetation compartments as distributions, not only their mean values, we obtain additional insights into the temporal dynamics of carbon use, storage, and allocation to plant parts, which not only depends on the rate at which this C is transferred in and out of the compartments but also on the stochastic nature of the process itself.
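For a linear compartmental model of the kind described by those ordinary differential equations, the steady-state stocks and the mean transit time have simple closed forms; a minimal numpy sketch (the 3-pool matrix and input fluxes are illustrative, not the paper's fitted parameters):

```python
# Mean transit time of a linear compartmental carbon model at steady
# state: total stock divided by total input flux.
import numpy as np

A = np.array([        # compartmental matrix (1/yr): foliage, wood, storage
    [-1.0,  0.0,  0.3],
    [ 0.2, -0.1,  0.0],
    [ 0.3,  0.0, -0.5],
])
u = np.array([1.0, 0.0, 0.5])   # external C inputs (kg C m^-2 yr^-1)

x_star = np.linalg.solve(-A, u)        # steady-state stocks solve -A x = u
mean_transit = x_star.sum() / u.sum()  # mean transit time (years)
print("stocks:", x_star)
print("mean transit time (yr):", mean_transit)  # ~4.5 yr for these values
```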
Energy Storage Requirements for Achieving 50% Penetration of Solar Photovoltaic Energy in California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul; Margolis, Robert
2016-09-01
We estimate the storage required to enable PV penetration up to 50% in California (with renewable penetration over 66%), and we quantify the complex relationships among storage, PV penetration, grid flexibility, and PV costs due to increased curtailment. We find that the storage needed depends strongly on the amount of other flexibility resources deployed. With very low-cost PV (three cents per kilowatt-hour) and a highly flexible electric power system, about 19 gigawatts of energy storage could enable 50% PV penetration with a marginal net PV levelized cost of energy (LCOE) comparable to the variable costs of future combined-cycle gas generators under carbon constraints. This system requires extensive use of flexible generation, transmission, demand response, and electrifying one quarter of the vehicle fleet in California with largely optimized charging. A less flexible system, or more expensive PV, would require significantly greater amounts of storage. The amount of storage needed to support very large amounts of PV might fit within a least-cost framework driven by declining storage costs and reduced storage-duration needs due to high PV penetration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaels, A.I.; Sillman, S.; Baylin, F.
1983-05-01
A central solar-heating plant with seasonal heat storage in a deep underground aquifer is designed by means of a solar-seasonal-storage-system simulation code based on the Solar Energy Research Institute (SERI) code for Solar Annual Storage Simulation (SASS). This solar seasonal storage plant is designed to supply close to 100% of the annual heating and domestic-hot-water (DHW) load of a hypothetical new community, the Fox River Valley Project, for a location in Madison, Wisconsin. Some analyses are also carried out for Boston, Massachusetts and Copenhagen, Denmark, as an indication of weather and insolation effects. Analyses are conducted for five different types of solar collectors, and for an alternate system utilizing seasonal storage in a large water tank. Predicted seasonal performance and system and storage costs are calculated. To provide some validation of the SASS results, a simulation of the solar system with seasonal storage in a large water tank is also carried out with a modified version of the Swedish solar seasonal storage code MINSUN.
A quantitative assessment of groundwater resources in the Middle East and North Africa region
NASA Astrophysics Data System (ADS)
Lezzaik, Khalil; Milewski, Adam
2018-02-01
The Middle East and North Africa (MENA) region is the world's most water-stressed region, with its countries constituting 12 of the 15 most water-stressed countries globally. Because of data paucity, comprehensive regional-scale assessments of groundwater resources in the MENA region have been lacking. The presented study addresses this issue by using a distributed ArcGIS model, parametrized with gridded data sets, to estimate groundwater storage reserves in the region based on generated aquifer saturated thickness and effective porosity estimates. Furthermore, monthly gravimetric datasets (GRACE) and land surface parameters (GLDAS) were used to quantify changes in groundwater storage between 2003 and 2014. Total groundwater reserves in the region were estimated at 1.28 × 10⁶ cubic kilometers (km³), with an uncertainty range between 816,000 and 1.93 × 10⁶ km³. Most of the reserves are located within large sedimentary basins in North Africa and the Arabian Peninsula, with Algeria, Libya, Egypt, and Saudi Arabia accounting for approximately 75% of the region's total freshwater reserves. Alternatively, small groundwater reserves were found in fractured Precambrian basement exposures. As for groundwater changes between 2003 and 2014, all MENA countries except Morocco exhibited declines in groundwater storage. However, given the region's large groundwater reserves, the changes between 2003 and 2014 are minimal and represent no immediate short-term threat to the MENA region, with some exceptions. Notwithstanding this, the study recommends the development of sustainable and efficient groundwater management policies to optimally utilize the region's groundwater resources, especially in the face of climate change, demographic expansion, and socio-economic development.
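The storage estimate described is essentially a cell-by-cell product of saturated thickness, effective porosity, and cell area, summed over the grid; a minimal sketch with toy values (not the study's parametrized data sets):

```python
# Cell-by-cell groundwater storage: saturated thickness x effective
# porosity x cell area, summed over the grid. Toy values only.
import numpy as np

thickness = np.array([[200.0, 350.0],     # saturated thickness (m)
                      [500.0,   0.0]])
porosity  = np.array([[0.08, 0.12],       # effective porosity (-)
                      [0.15, 0.10]])
cell_area = 25_000.0 * 25_000.0           # 25 km grid cell (m^2)

storage_m3  = float(np.sum(thickness * porosity * cell_area))
storage_km3 = storage_m3 / 1e9
print(f"{storage_km3:.1f} km^3")          # ~83.1 km^3 for this toy grid
```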
Groundwater response to the 2014 pulse flow in the Colorado River Delta
Kennedy, Jeffrey; Rodriguez-Burgueno, Eliana; Ramirez-Hernandez, Jorge
2017-01-01
During the March-May 2014 Colorado River Delta pulse flow, approximately 102 × 10⁶ m³ (82,000 acre-feet) of water was released into the channel at Morelos Dam, with additional releases further downstream. The majority of pulse flow water infiltrated and recharged the regional aquifer. Using groundwater-level and microgravity data we mapped the spatial and temporal distribution of changes in aquifer storage associated with the pulse flow. Surface-water losses to infiltration were greatest around the Southerly International Boundary, where a lowered groundwater level owing to nearby pumping created increased storage potential as compared to other areas with shallower groundwater. Groundwater levels were elevated for several months after the pulse flow but had largely returned to pre-pulse levels by fall 2014. Elevated groundwater levels in the limitrophe (border) reach extended about 2 km to the east around the midway point between the Northerly and Southerly International Boundaries, and about 4 km to the east at the southern end. In the southern part of the delta, although total streamflow in the channel was less due to upstream infiltration, augmented deliveries through irrigation canals and possible irrigation return flows created sustained increases in groundwater levels during summer 2014. Results show that elevated groundwater levels and increases in groundwater storage were relatively short lived (confined to calendar year 2014), and that depressed water levels associated with groundwater pumping around San Luis, Arizona and San Luis Rio Colorado, Sonora cause large, unavoidable infiltration losses of in-channel water to groundwater in the vicinity.
Fuel supply and distribution. Fixed base operation
NASA Technical Reports Server (NTRS)
Burian, L. C.
1983-01-01
Aviation gasoline versus other products, a changing marketplace, the Airline Deregulation Act of 1978, aviation fuel credit card purchases, strategic locations, storage, co-mingling of fuel, and transportation to/from central storage are discussed.
High resolution modeling of reservoir storage and extent dynamics at the continental scale
NASA Astrophysics Data System (ADS)
Shin, S.; Pokhrel, Y. N.
2017-12-01
Over the past decade, significant progress has been made in developing reservoir schemes in large-scale hydrological models to better simulate hydrological fluxes and storages in highly managed river basins. These schemes have been successfully used to study the impact of reservoir operation on global river basins. However, improvements to the existing schemes are needed, especially at the spatial resolutions to be used in hyper-resolution hydrological modeling. In this study, we developed a reservoir routing scheme with explicit representation of reservoir storage and extent at a grid scale of 5 km or less. Instead of setting the reservoir area to a fixed value or diagnosing it from an area-storage equation, which is the commonly used approach in existing reservoir schemes, we explicitly simulate the inundated storage and area for all grid cells that are within the reservoir extent. This approach enables a better simulation of river-floodplain-reservoir storage by considering both natural flooding and man-made reservoir storage. Results for the seasonal dynamics of reservoir storage, river discharge downstream of dams, and reservoir inundation extent are evaluated against various datasets from ground observations and satellite measurements. The new model captures the dynamics of these variables with good accuracy for most of the large reservoirs in the western United States. It is expected that the incorporation of the newly developed reservoir scheme in large-scale land surface models (LSMs) will lead to improved simulation of river flow and terrestrial water storage in highly managed river basins.
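The core of an explicit storage/extent calculation can be illustrated with a fill-by-elevation sketch: given a storage volume and gridded bed elevations, bisect on the water-surface elevation that accommodates the volume and mark the inundated cells. This is a simplified analogue of the grid-scale scheme described above; the elevations, cell size, and volume are illustrative:

```python
# Fill-by-elevation: find the water surface holding a given storage
# volume over a small elevation grid and flag the wet cells.
import numpy as np

elev = np.array([[102.0, 101.0, 103.0],   # bed elevation per 5 km cell (m)
                 [100.0,  99.0, 101.5],
                 [101.0, 100.5, 104.0]])
cell_area = 5_000.0 ** 2                  # m^2
volume = 3.0e8                            # storage to accommodate (m^3)

def extent_for_volume(elev, volume, area):
    lo, hi = float(elev.min()), float(elev.max()) + 50.0
    for _ in range(60):                   # bisect on surface elevation
        z = 0.5 * (lo + hi)
        v = np.sum(np.clip(z - elev, 0.0, None)) * area
        lo, hi = (z, hi) if v < volume else (lo, z)
    return z, elev < z                    # surface elevation, wet mask

z, wet = extent_for_volume(elev, volume, cell_area)
print(f"water surface ~{z:.2f} m, inundated cells: {int(wet.sum())}")
# ~102.43 m and 7 of 9 cells wet for these toy values
```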
Large temporal scale and capacity subsurface bulk energy storage with CO2
NASA Astrophysics Data System (ADS)
Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.
2017-12-01
Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient and short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages, compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storing excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid, where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.
High-performance mass storage system for workstations
NASA Technical Reports Server (NTRS)
Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.
1993-01-01
Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are popular tools for office automation, command and control, scientific analysis, database management, and many other applications. When running Input/Output (I/O)-intensive applications, however, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data, and even with standard high-performance peripherals and storage devices, I/O can remain a bottleneck. The high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers offloads I/O-related functions from a RISC workstation and provides high-performance I/O functions and external interfaces. The system can ingest high-speed real-time data; perform signal or image processing; and stage, archive, and distribute the data. It uses a hierarchical storage structure, reducing the total data storage cost while maintaining high I/O performance. The system is a network of low-cost parallel processors and storage devices. Nodes in the network provide special I/O functions, including SCSI, Ethernet, gateway, RS232, and IEEE488 control and digital/analog conversion, and are interconnected through high-speed direct memory access links. The network topology is easily reconfigurable to maximize system throughput for various applications, and a 'busless' architecture provides maximum expandability. The storage hierarchy consists of magnetic disks, a WORM optical disk jukebox, and 8mm helical scan tape: commonly used files are kept on magnetic disk for fast retrieval, optical disks serve as archive media, and tape serves as backup. The system is managed by the UniTree software package, which is based on the IEEE mass storage reference model. UniTree keeps track of all files in the system, automatically migrates less-used files to archive media, and stages files back when the system needs them; users access files without knowing their physical location. The system significantly boosts I/O performance and reduces overall data storage cost, providing a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with signal and image processing requirements, long-term data archiving and distribution, and image analysis and enhancement).
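The abstract does not detail UniTree's migration policy beyond migrating lesser-used files, so the sketch below illustrates the general idea of hierarchical storage management with an assumed least-recently-used demotion rule; all names and tiers are invented for the sketch.

```python
import time
from dataclasses import dataclass, field

TIERS = ["magnetic_disk", "optical_jukebox", "tape"]  # fast -> slow

@dataclass
class HsmFile:
    name: str
    size: int
    tier: int = 0                      # index into TIERS
    last_access: float = field(default_factory=time.time)

def migrate(files, disk_capacity):
    """Demote least-recently-used files off magnetic disk when it overflows."""
    on_disk = sorted((f for f in files if f.tier == 0),
                     key=lambda f: f.last_access)
    used = sum(f.size for f in on_disk)
    for f in on_disk:                  # oldest first
        if used <= disk_capacity:
            break
        f.tier += 1                    # demote one tier (disk -> optical)
        used -= f.size

def stage(f):
    """Recall a file to magnetic disk transparently on access."""
    f.tier = 0
    f.last_access = time.time()
```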
Sediment transfer-storage relations for degrading alluvial reservoirs
Thomas E. Lisle; Michael Church
2001-01-01
The routing of sediment through a drainage system is mediated by transfer-storage relations that are particular to each alluvial reservoir, which contains a channel and floodplain. We propose that sediment transfer rate for a given annual distribution of streamflow is a positive function of sediment storage and examine these relations for degrading reservoirs in which...
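A minimal sketch of the kind of storage-dependent transfer relation proposed here, with an assumed power-law form Q_out = k·S^m and placeholder coefficients (the paper derives site-specific relations, not this one):

```python
# Storage-dependent sediment routing: annual transfer out of an alluvial
# reservoir taken as a power function of storage, Q_out = k * S**m.
# k and m are assumed placeholders, not fitted values from the paper.
def route_sediment(S0, Q_in, k=0.001, m=1.5, years=50):
    S, history = S0, []
    for _ in range(years):
        Q_out = k * S ** m
        S = max(S + Q_in - Q_out, 0.0)
        history.append((S, Q_out))
    return history

# A degrading reservoir: supply cut off, storage and transfer decline together.
for year, (S, Q_out) in enumerate(route_sediment(S0=1000.0, Q_in=0.0)[:5]):
    print(f"year {year + 1}: storage={S:8.1f}, transfer={Q_out:6.1f}")
```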
McNamara, Daniel E.; Hayes, Gavin; Benz, Harley M.; Williams, Robert; McMahon, Nicole D.; Aster, R. C.; Holland, Austin F.; Sickbert, T.; Herrmann, Robert B.; Briggs, Richard; Smoczyk, Gregory M.; Bergman, Eric; Earle, Paul S.
2015-01-01
In October 2014, two moderate-sized earthquakes (Mw 4.0 and 4.3) struck south of Cushing, Oklahoma, below the largest crude oil storage facility in the world. Combined analysis of the spatial distribution of earthquakes and regional moment tensor focal mechanisms indicates reactivation of an unnamed and unmapped subsurface left-lateral strike-slip fault. Coulomb failure stress change calculations, using the relocated seismicity and the slip distribution determined from regional moment tensors, allow for the possibility that the Wilzetta-Whitetail fault zone south of Cushing, Oklahoma, could produce a large, damaging earthquake comparable to the 2011 Prague event. The resulting very strong shaking (MMI VII) in the epicentral region could cause moderate to heavy damage to national strategic infrastructure and local communities.
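The standard static Coulomb failure stress formula underlying such calculations is dCFS = d_tau + mu'·d_sigma_n; the sketch below applies it with illustrative stress changes (the study's actual resolved values are not given in the abstract).

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """
    Static Coulomb failure stress change on a receiver fault (MPa).
    d_shear  : shear stress change resolved in the fault's slip direction
    d_normal : normal stress change (positive = unclamping)
    mu_eff   : effective friction coefficient (0.4 is a common assumption)
    """
    return d_shear + mu_eff * d_normal

# Illustrative values only; positive dCFS brings the fault closer to failure.
print(f"dCFS = {coulomb_stress_change(0.05, 0.02):.3f} MPa")
```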
Set processing in a network environment [data bases and magnetic disks and tapes]
NASA Technical Reports Server (NTRS)
Hardgrave, W. T.
1975-01-01
A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.
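A toy illustration of the set-processor access model described above: records are retrieved by content rather than by physical location, and query results compose with set operators. All data and names are invented for the sketch.

```python
# Toy illustration of set-processor semantics: records are retrieved by
# content, never by physical address, and results are sets that can be
# combined with set operators. Data and field names are invented.
records = [
    {"mission": "LANDSAT", "year": 1973, "medium": "tape"},
    {"mission": "SKYLAB",  "year": 1974, "medium": "disk"},
    {"mission": "LANDSAT", "year": 1975, "medium": "disk"},
]

def select(predicate):
    """Content-addressable retrieval: callers never see storage locations."""
    return {i for i, r in enumerate(records) if predicate(r)}

landsat = select(lambda r: r["mission"] == "LANDSAT")
on_disk = select(lambda r: r["medium"] == "disk")
print([records[i] for i in landsat & on_disk])   # set intersection
```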
7 CFR 250.14 - Warehousing, distribution and storage of donated foods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... warehousing and distributing commodities under their current system with the cost of comparable services under... warehousing and distribution services, the distributing agency shall indicate this in its cost comparison... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION DONATION OF...
7 CFR 250.14 - Warehousing, distribution and storage of donated foods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... warehousing and distributing commodities under their current system with the cost of comparable services under... warehousing and distribution services, the distributing agency shall indicate this in its cost comparison... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION DONATION OF...
NASA Astrophysics Data System (ADS)
Rohr, T.; Manzoni, S.; Feng, X.; Menezes, R.; Porporato, A. M.
2013-12-01
Although seasonally dry ecosystems (SDEs), characterized by prolonged drought followed by a short but intense rainy season, cover large regions of the tropics, their biogeochemical response to seasonal rainfall and their soil carbon (C) sequestration potential are not well characterized. Both productivity and soil respiration respond positively to seasonal soil moisture availability, creating a delicate balance between C deposition through litterfall and C losses through heterotrophic respiration. As climate change projections for the tropics predict decreased annual rainfall and increased dry-season length, it is critical to understand how variations in seasonal rainfall distributions control this balance. To address this question, we develop a minimal model linking the seasonal behavior of the ensemble soil moisture, plant productivity, the related soil C inputs through litterfall, and soil C dynamics. The model is parameterized for a case study from a drought-deciduous caatinga ecosystem in northeastern Brazil. Results indicate that when seasonal rainfall patterns are altered for a fixed annual rainfall, both plant productivity and soil C sequestration potential depend largely, and nonlinearly, on wet-season duration. Moreover, total annual rainfall plays a dominant role in this relationship, at times producing distinct optima in both primary production and C sequestration. Examining these results in the context of climate-driven changes to wet-season duration and mean annual precipitation indicates that an ecosystem's initial hydroclimatic regime is an important factor in predicting both the magnitude and direction of the effects of shifting seasonal distributions on productivity and C storage. Although highly productive ecosystems will likely experience declining C storage under predicted climate shifts, ecosystems currently operating well below peak production could see improved C stocks as rainfall declines, owing to reduced soil respiration.
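A minimal sketch in the spirit of the model described above (not the authors' parameterization): soil carbon obeys dC/dt = L(t) - k·f(s)·C, with litterfall L tracking soil moisture s and decomposition responding nonlinearly to moisture (f(s) = s**2 here, an assumed form), so that wet-season duration shifts the litterfall/respiration balance.

```python
# Minimal soil-carbon balance: dC/dt = L(t) - k * s(t)**2 * C, with an
# idealized square-wave wet/dry season. All parameter values are assumed.
def simulate_soil_carbon(T_wet=120, C0=2.0, k=0.004, days=10 * 365):
    C = C0                                  # soil C stock (kg C m^-2)
    for d in range(days):
        doy = d % 365
        s = 0.7 if doy < T_wet else 0.15    # idealized soil moisture
        L = 0.004 * s                       # litterfall tracks productivity
        C += L - k * s ** 2 * C             # daily Euler step
    return C

for T_wet in (60, 120, 180):
    print(f"wet season {T_wet:3d} d -> soil C after 10 y: "
          f"{simulate_soil_carbon(T_wet):.2f} kg C m^-2")
```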
NASA Astrophysics Data System (ADS)
Guo, C.; Wu, Y.; Yang, H.; Ni, J.
2015-12-01
Accurate estimation of carbon storage is crucial to better understand global and regional carbon cycles and to project ecological and economic scenarios more precisely. Southwestern China has a broad and continuous distribution of karst landscapes with harsh and fragile habitats that are prone to rocky desertification, an ecological disaster that has significantly hindered vegetation succession and economic development in the region's karst areas. In this study we evaluated the carbon storage in eight political divisions of southwestern China using four methods: forest inventory, carbon density based on field investigations, the CASA model driven by remote sensing data, and the BIOME4 and LPJ global vegetation models driven by climate data. The results show that: (1) The total vegetation carbon storage (including agricultural ecosystems) is 6763.97 Tg C based on carbon density, and the soil organic carbon (SOC) storage (above 20 cm depth) is 12475.72 Tg C. Sichuan Province (including Chongqing) possesses the highest carbon storage in both vegetation and soil (1736.47 Tg C and 4056.56 Tg C, respectively) among the eight divisions because of its higher carbon density and larger area. The vegetation carbon storage in Hunan Province is the smallest (565.30 Tg C), and the smallest SOC storage (1127.40 Tg C) is in Guangdong Province. (2) Based on forest inventory data, the total aboveground carbon storage in woody vegetation is 2103.29 Tg C. The carbon storage in Yunnan Province (819.01 Tg C) is significantly higher than in other areas, and the tropical rainforests and seasonal forests of Yunnan contribute the most to the woody vegetation carbon storage (accounting for 62.40% of the total). (3) The net primary production (NPP) simulated by the CASA model is 68.57 Tg C/yr, and forest NPP in the non-karst region (accounting for 72.50% of the total) is higher than in the karst region. (4) The BIOME4 and LPJ models predicted higher carbon storage than the CASA model, with differing spatial patterns. Further investigations are needed to clarify carbon-cycle processes in ecosystems on karst terrain and to accelerate the development of a regional dynamic vegetation model appropriate for karst ecosystems.
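The carbon-density method referred to above amounts to summing density times area over land-cover classes; the sketch below shows the bookkeeping with placeholder classes and numbers, not values from the study.

```python
# Carbon-density bookkeeping: regional storage is the sum over land-cover
# classes of (carbon density) x (area). All values are placeholders.
densities = {"forest": 8.5, "shrubland": 2.1, "cropland": 1.2}       # kg C m^-2
areas = {"forest": 3.0e11, "shrubland": 1.5e11, "cropland": 2.0e11}  # m^2

storage_Tg = sum(densities[c] * areas[c] for c in densities) / 1e9   # kg -> Tg
print(f"total vegetation carbon storage: {storage_Tg:.0f} Tg C")
```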
Scanlon, Bridget R; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y; Müller Schmied, Hannes; van Beek, Ludovicus P H; Wiese, David N; Wada, Yoshihide; Long, Di; Reedy, Robert C; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F P
2018-02-06
Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002-2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km3/y) and increasing (≥0.5 km3/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km3/y, whereas most models estimate decreasing trends (−71 to 11 km3/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71-82 km3/y) but negative for models (−450 to −12 km3/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated.
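A sketch of the basic trend comparison described above: fit a least-squares linear trend to each basin's monthly storage series, then compare medians between GRACE and a model. The series below are synthetic placeholders, not GRACE or model data.

```python
import numpy as np

# Synthetic basin storage anomalies over 2002-2014 (156 months).
rng = np.random.default_rng(0)
months = np.arange(156) / 12.0                 # time in years

def storage_trend(series, t=months):
    """Least-squares slope of a storage time series (units per year)."""
    slope, _ = np.polyfit(t, series, 1)
    return slope

grace = [t * months + rng.normal(0, 5, months.size)   # true trends t km^3/y
         for t in (-3.0, 1.5, 43.0)]
model = [t * months + rng.normal(0, 5, months.size)   # models damp/flip trends
         for t in (-1.0, 0.2, -20.0)]

g = [storage_trend(s) for s in grace]
m = [storage_trend(s) for s in model]
print(f"median trend: GRACE {np.median(g):+.1f}, model {np.median(m):+.1f} km^3/y")
```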